Dapr Reference Docs
- 1: Dapr API reference
- 1.1: Service invocation API reference
- 1.2: Pub/sub API reference
- 1.3: Workflow API reference
- 1.4: State management API reference
- 1.5: Bindings API reference
- 1.6: Actors API reference
- 1.7: Secrets API reference
- 1.8: Configuration API reference
- 1.9: Distributed Lock API reference
- 1.10: Health API reference
- 1.11: Metadata API reference
- 1.12: Placement API reference
- 1.13: Cryptography API reference
- 1.14: Jobs API reference
- 1.15: Conversation API reference
- 2: Dapr CLI reference
- 2.1: Dapr command line interface (CLI) reference
- 2.2: annotate CLI command reference
- 2.3: build-info CLI command reference
- 2.4: completion CLI command reference
- 2.5: components CLI command reference
- 2.6: configurations CLI command reference
- 2.7: dashboard CLI command reference
- 2.8: help CLI command reference
- 2.9: init CLI command reference
- 2.10: invoke CLI command reference
- 2.11: list CLI command reference
- 2.12: logs CLI command reference
- 2.13: mtls CLI command reference
- 2.13.1: mtls export CLI command reference
- 2.13.2: mtls expiry CLI command reference
- 2.13.3: mtls renew certificate CLI command reference
- 2.14: publish CLI command reference
- 2.15: run CLI command reference
- 2.16: status CLI command reference
- 2.17: stop CLI command reference
- 2.18: uninstall CLI command reference
- 2.19: upgrade CLI command reference
- 2.20: version CLI command reference
- 3: Dapr arguments and annotations for daprd, CLI, and Kubernetes
- 4: Environment variable reference
- 5: Dapr components reference
- 5.1: Pub/sub brokers component specs
- 5.1.1: Apache Kafka
- 5.1.2: AWS SNS/SQS
- 5.1.3: Azure Event Hubs
- 5.1.4: Azure Service Bus Queues
- 5.1.5: Azure Service Bus Topics
- 5.1.6: GCP
- 5.1.7: In-memory
- 5.1.8: JetStream
- 5.1.9: KubeMQ
- 5.1.10: MQTT
- 5.1.11: MQTT3
- 5.1.12: Pulsar
- 5.1.13: RabbitMQ
- 5.1.14: Redis Streams
- 5.1.15: RocketMQ
- 5.1.16: Solace-AMQP
- 5.2: Bindings component specs
- 5.2.1: Alibaba Cloud DingTalk binding spec
- 5.2.2: Alibaba Cloud Log Storage Service binding spec
- 5.2.3: Alibaba Cloud Object Storage Service binding spec
- 5.2.4: Alibaba Cloud Tablestore binding spec
- 5.2.5: Apple Push Notification Service binding spec
- 5.2.6: AWS DynamoDB binding spec
- 5.2.7: AWS Kinesis binding spec
- 5.2.8: AWS S3 binding spec
- 5.2.9: AWS SES binding spec
- 5.2.10: AWS SNS binding spec
- 5.2.11: AWS SQS binding spec
- 5.2.12: Azure Blob Storage binding spec
- 5.2.13: Azure Cosmos DB (Gremlin API) binding spec
- 5.2.14: Azure Cosmos DB (SQL API) binding spec
- 5.2.15: Azure Event Grid binding spec
- 5.2.16: Azure Event Hubs binding spec
- 5.2.17: Azure OpenAI binding spec
- 5.2.18: Azure Service Bus Queues binding spec
- 5.2.19: Azure SignalR binding spec
- 5.2.20: Azure Storage Queues binding spec
- 5.2.21: Cloudflare Queues bindings spec
- 5.2.22: commercetools GraphQL binding spec
- 5.2.23: Cron binding spec
- 5.2.24: GCP Pub/Sub binding spec
- 5.2.25: GCP Storage Bucket binding spec
- 5.2.26: GraphQL binding spec
- 5.2.27: HTTP binding spec
- 5.2.28: Huawei OBS binding spec
- 5.2.29: InfluxDB binding spec
- 5.2.30: Kafka binding spec
- 5.2.31: Kitex
- 5.2.32: KubeMQ binding spec
- 5.2.33: Kubernetes Events binding spec
- 5.2.34: Local Storage binding spec
- 5.2.35: MQTT3 binding spec
- 5.2.36: MySQL & MariaDB binding spec
- 5.2.37: PostgreSQL binding spec
- 5.2.38: Postmark binding spec
- 5.2.39: RabbitMQ binding spec
- 5.2.40: Redis binding spec
- 5.2.41: RethinkDB binding spec
- 5.2.42: SFTP binding spec
- 5.2.43: SMTP binding spec
- 5.2.44: Twilio SendGrid binding spec
- 5.2.45: Twilio SMS binding spec
- 5.2.46: Wasm
- 5.2.47: Zeebe command binding spec
- 5.2.48: Zeebe JobWorker binding spec
- 5.3: State store component specs
- 5.3.1: Aerospike
- 5.3.2: AWS DynamoDB
- 5.3.3: Azure Blob Storage
- 5.3.4: Azure Cosmos DB (SQL API)
- 5.3.5: Azure Table Storage
- 5.3.6: Cassandra
- 5.3.7: Cloudflare Workers KV
- 5.3.8: CockroachDB
- 5.3.9: Coherence
- 5.3.10: Couchbase
- 5.3.11: Etcd
- 5.3.12: GCP Firestore (Datastore mode)
- 5.3.13: HashiCorp Consul
- 5.3.14: Hazelcast
- 5.3.15: In-memory
- 5.3.16: JetStream KV
- 5.3.17: Memcached
- 5.3.18: Microsoft SQL Server & Azure SQL
- 5.3.19: MongoDB
- 5.3.20: MySQL & MariaDB
- 5.3.21: OCI Object Storage
- 5.3.22: Oracle Database
- 5.3.23: PostgreSQL
- 5.3.24: PostgreSQL v1
- 5.3.25: Redis
- 5.3.26: RethinkDB
- 5.3.27: SQLite
- 5.3.28: Zookeeper
- 5.4: Secret store component specs
- 5.4.1: AlibabaCloud OOS Parameter Store
- 5.4.2: AWS Secrets Manager
- 5.4.3: AWS SSM Parameter Store
- 5.4.4: Azure Key Vault secret store
- 5.4.5: GCP Secret Manager
- 5.4.6: HashiCorp Vault
- 5.4.7: HuaweiCloud Cloud Secret Management Service (CSMS)
- 5.4.8: Kubernetes secrets
- 5.4.9: Local environment variables (for Development)
- 5.4.10: Local file (for Development)
- 5.5: Configuration store component specs
- 5.5.1: Azure App Configuration
- 5.5.2: PostgreSQL
- 5.5.3: Redis
- 5.6: Lock component specs
- 5.6.1: Redis
- 5.7: Cryptography component specs
- 5.7.1: Azure Key Vault
- 5.7.2: JSON Web Key Sets (JWKS)
- 5.7.3: Kubernetes Secrets
- 5.7.4: Local storage
- 5.8: Conversation component specs
- 5.8.1: Anthropic
- 5.8.2: AWS Bedrock
- 5.8.3: DeepSeek
- 5.8.4: Local Testing
- 5.8.5: GoogleAI
- 5.8.6: Huggingface
- 5.8.7: Mistral
- 5.8.8: Ollama
- 5.8.9: OpenAI
- 5.9: Name resolution provider component specs
- 5.9.1: HashiCorp Consul
- 5.9.2: Kubernetes DNS
- 5.9.3: mDNS
- 5.9.4: SQLite
- 5.10: Middleware component specs
- 5.10.1: Bearer
- 5.10.2: OAuth2
- 5.10.3: OAuth2 client credentials
- 5.10.4: Apply Open Policy Agent (OPA) policies
- 5.10.5: Rate limiting
- 5.10.6: Router alias http request routing
- 5.10.7: RouterChecker http request routing
- 5.10.8: Sentinel fault-tolerance middleware component
- 5.10.9: Uppercase request body
- 5.10.10: Wasm
- 6: Dapr resource specs
- 6.1: Component spec
- 6.2: Subscription spec
- 6.3: Resiliency spec
- 6.4: HTTPEndpoint spec
- 6.5: Configuration spec
1 - Dapr API reference
1.1 - Service invocation API reference
Dapr provides users with the ability to call other applications that are using Dapr with a unique named identifier (appId), or HTTP endpoints that are not using Dapr. This allows applications to interact with one another via named identifiers and puts the burden of service discovery on the Dapr runtime.
Invoke a method on a remote Dapr app
This endpoint lets you invoke a method in another Dapr enabled app.
HTTP Request
PATCH/POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<appID>/method/<method-name>
Invoke a method on a non-Dapr endpoint
This endpoint lets you invoke a method on a non-Dapr endpoint using an HTTPEndpoint
resource name, or a Fully Qualified Domain Name (FQDN) URL.
HTTP Request
PATCH/POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<HTTPEndpoint name>/method/<method-name>
PATCH/POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<FQDN URL>/method/<method-name>
HTTP Response codes
When a service invokes another service with Dapr, the status code of the called service is returned to the caller.
If there is a network error or other transient error, Dapr returns a 500 error with a detailed error message.
If a user invokes Dapr over HTTP to talk to a gRPC-enabled service, an error from the called gRPC service is returned as 500, and a successful response is returned as 200 OK.
Code | Description |
---|---|
XXX | Upstream status returned |
400 | Method name not given |
403 | Invocation forbidden by access control |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | the Dapr port |
appID | the App ID associated with the remote app |
HTTPEndpoint name | the HTTPEndpoint resource associated with the external endpoint |
FQDN URL | Fully Qualified Domain Name URL to invoke on the external endpoint |
method-name | the name of the method or url to invoke on the remote app |
Note, all URL parameters are case-sensitive.
Request Contents
In the request you can pass along headers:
{
"Content-Type": "application/json"
}
Within the body of the request place the data you want to send to the service:
{
"arg1": 10,
"arg2": 23,
"operator": "+"
}
Request received by invoked service
Once your service code invokes a method in another Dapr enabled app or non-Dapr endpoint, Dapr sends the request, along with the headers and body, on the <method-name> endpoint.
The Dapr app or non-Dapr endpoint being invoked will need to be listening for and responding to requests on that endpoint.
Cross namespace invocation
On hosting platforms that support namespaces, Dapr app IDs conform to a valid FQDN format that includes the target namespace.
For example, the following string contains the app ID (myApp) in addition to the namespace the app runs in (production).
myApp.production
Namespace supported platforms
- Kubernetes
Examples
You can invoke the add method on the mathService service by sending the following:
curl http://localhost:3500/v1.0/invoke/mathService/method/add \
-H "Content-Type: application/json" \
-d '{ "arg1": 10, "arg2": 23}'
The mathService service will need to be listening on the /add endpoint to receive and process the request.
For a Node app this would look like:
app.post('/add', (req, res) => {
let args = req.body;
const [operandOne, operandTwo] = [Number(args['arg1']), Number(args['arg2'])];
let result = operandOne + operandTwo;
res.send(result.toString());
});
app.listen(port, () => console.log(`Listening on port ${port}!`));
The response from the remote endpoint will be returned in the response body.
If your service listens on a more nested path (for example, /api/v1/add), Dapr implements a full reverse proxy, so you can append all the necessary path fragments to your request URL like this:
http://localhost:3500/v1.0/invoke/mathService/method/api/v1/add
If you are invoking mathService in a different namespace, you can use the following URL:
http://localhost:3500/v1.0/invoke/mathService.testing/method/api/v1/add
In this URL, testing is the namespace that mathService is running in.
Non-Dapr Endpoint Example
If the mathService service were a non-Dapr application, it could be invoked using service invocation via an HTTPEndpoint resource, as well as a Fully Qualified Domain Name (FQDN) URL.
curl http://localhost:3500/v1.0/invoke/mathHTTPEndpoint/method/add \
-H "Content-Type: application/json" \
-d '{ "arg1": 10, "arg2": 23}'
curl http://localhost:3500/v1.0/invoke/http://mathServiceURL.com/method/add \
-H "Content-Type: application/json" \
-d '{ "arg1": 10, "arg2": 23}'
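The mathHTTPEndpoint name in the first example refers to an HTTPEndpoint resource. As a minimal sketch (the name and base URL are illustrative; see the HTTPEndpoint spec reference for the full schema), such a resource could look like:
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
  name: mathHTTPEndpoint
spec:
  baseUrl: http://mathServiceURL.com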
Next Steps
1.2 - Pub/sub API reference
Publish a message to a given topic
This endpoint lets you publish data to multiple consumers who are listening on a topic.
Dapr guarantees At-Least-Once semantics for this endpoint.
HTTP Request
POST http://localhost:<daprPort>/v1.0/publish/<pubsubname>/<topic>[?<metadata>]
HTTP Response codes
Code | Description |
---|---|
204 | Message delivered |
403 | Message forbidden by access controls |
404 | No pubsub name or topic given |
500 | Delivery failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
pubsubname | The name of pubsub component |
topic | The name of the topic |
metadata | Query parameters for metadata as described below |
Note, all URL parameters are case-sensitive.
curl -X POST http://localhost:3500/v1.0/publish/pubsubName/deathStarStatus \
-H "Content-Type: application/json" \
-d '{
"status": "completed"
}'
Headers
The Content-Type header tells Dapr which content type your data adheres to when constructing a CloudEvent envelope. The Content-Type header value populates the datacontenttype field in the CloudEvent.
Unless specified, Dapr assumes text/plain. If your content type is JSON, use a Content-Type header with the value of application/json.
If you want to send your own custom CloudEvent, use the application/cloudevents+json value for the Content-Type header.
Metadata
Metadata can be sent via query parameters in the request’s URL. It must be prefixed with metadata., as shown below.
Parameter | Description |
---|---|
metadata.ttlInSeconds | The number of seconds for the message to expire, as described here |
metadata.rawPayload | Boolean to determine if Dapr should publish the event without wrapping it as CloudEvent, as described here |
Additional metadata parameters are available based on each pubsub component.
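For example, a sketch of publishing the same message with a 120-second expiration (the TTL value is illustrative and only honored by components that support message TTL):
curl -X POST "http://localhost:3500/v1.0/publish/pubsubName/deathStarStatus?metadata.ttlInSeconds=120" \
-H "Content-Type: application/json" \
-d '{ "status": "completed" }'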
Publish multiple messages to a given topic
This endpoint lets you publish multiple messages to consumers who are listening on a topic.
HTTP Request
POST http://localhost:<daprPort>/v1.0-alpha1/publish/bulk/<pubsubname>/<topic>[?<metadata>]
The request body should contain a JSON array of entries with:
- Unique entry IDs
- The event to publish
- The content type of the event
If the content type for an event is not application/cloudevents+json, it is auto-wrapped as a CloudEvent (unless metadata.rawPayload is set to true).
Example:
curl -X POST http://localhost:3500/v1.0-alpha1/publish/bulk/pubsubName/deathStarStatus \
-H 'Content-Type: application/json' \
-d '[
{
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
"event": "first text message",
"contentType": "text/plain"
},
{
"entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
"event": {
"message": "second JSON message"
},
"contentType": "application/json"
}
]'
Headers
The Content-Type header should always be set to application/json since the request body is a JSON array.
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
pubsubname | The name of pub/sub component |
topic | The name of the topic |
metadata | Query parameters for metadata |
Metadata
Metadata can be sent via query parameters in the request’s URL. It must be prefixed with metadata., as shown in the table below.
Parameter | Description |
---|---|
metadata.rawPayload | Boolean to determine if Dapr should publish the messages without wrapping them as CloudEvent. |
metadata.maxBulkPubBytes | Maximum bytes to publish in a bulk publish request. |
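For example, a sketch of capping a bulk publish request at roughly 1 MB (the byte value is illustrative):
curl -X POST "http://localhost:3500/v1.0-alpha1/publish/bulk/pubsubName/deathStarStatus?metadata.maxBulkPubBytes=1048576" \
-H "Content-Type: application/json" \
-d '[
{
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
"event": "first text message",
"contentType": "text/plain"
}
]'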
HTTP Response
HTTP Status | Description |
---|---|
204 | All messages delivered |
400 | Pub/sub does not exist |
403 | Forbidden by access controls |
500 | At least one message failed to be delivered |
In case of a 500 status code, the response body will contain a JSON object containing a list of entries that failed to be delivered. For example, from our request above, if the entry with event "first text message" failed to be delivered, the response would contain its entry ID and an error message from the underlying pub/sub component.
{
"failedEntries": [
{
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
"error": "some error message"
}
],
"errorCode": "ERR_PUBSUB_PUBLISH_MESSAGE"
}
Optional Application (User Code) Routes
Provide a route for Dapr to discover topic subscriptions
Dapr will invoke the following endpoint on user code to discover topic subscriptions:
HTTP Request
GET http://localhost:<appPort>/dapr/subscribe
URL Parameters
Parameter | Description |
---|---|
appPort | The application port |
HTTP Response body
A JSON-encoded array of subscription objects.
Example:
[
{
"pubsubname": "pubsub",
"topic": "newOrder",
"routes": {
"rules": [
{
"match": "event.type == \"order\"",
"path": "/orders"
}
],
"default": "/otherorders"
},
"metadata": {
"rawPayload": "true"
}
}
]
Note, all subscription parameters are case-sensitive.
Metadata
Optionally, metadata can be sent via the request body.
Parameter | Description |
---|---|
rawPayload | boolean to subscribe to events that do not comply with CloudEvent specification, as described here |
Provide route(s) for Dapr to deliver topic events
In order to deliver topic events, a POST call is made to user code with the route specified in the subscription response. Under routes, you can provide rules that route a message to a specific path when it matches a certain condition. You can also provide a default route for messages that do not match any rule.
The following example illustrates this, considering a subscription for topic newOrder with route orders on port 3000: POST http://localhost:3000/orders
HTTP Request
POST http://localhost:<appPort>/<path>
Note, all URL parameters are case-sensitive.
URL Parameters
Parameter | Description |
---|---|
appPort |
The application port |
path |
Route path from the subscription configuration |
Expected HTTP Response
An HTTP 2xx response denotes successful processing of message.
For richer response handling, a JSON-encoded payload body with the processing status can be sent:
{
"status": "<status>"
}
Status | Description |
---|---|
SUCCESS | Message is processed successfully |
RETRY | Message to be retried by Dapr |
DROP | Warning is logged and message is dropped |
Others | Error, message to be retried by Dapr |
Dapr assumes that a JSON-encoded payload response without a status field, or an empty payload response with HTTP 2xx, is a SUCCESS.
The HTTP response might be different from HTTP 2xx. The following table describes Dapr’s behavior for different HTTP statuses:
HTTP Status | Description |
---|---|
2xx | message is processed as per status in payload (SUCCESS if empty; ignored if invalid payload). |
404 | error is logged and message is dropped |
other | warning is logged and message to be retried |
Subscribe multiple messages from a given topic
This allows you to subscribe to multiple messages from a broker when listening to a topic.
In order to receive messages in a bulk manner for a topic subscription, the application:
- Needs to opt in to bulkSubscribe while sending the list of topics to be subscribed to
- Optionally, can configure maxMessagesCount and/or maxAwaitDurationMs
Refer to the Send and receive messages in bulk guide for more details on how to opt in.
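As a sketch, a subscription entry returned from /dapr/subscribe that opts in to bulk delivery might look like the following (the counts are illustrative; see the bulk subscribe guide for the authoritative schema):
[
{
"pubsubname": "pubsub",
"topic": "newOrder",
"routes": {
"default": "/orders"
},
"bulkSubscribe": {
"enabled": true,
"maxMessagesCount": 100,
"maxAwaitDurationMs": 40
}
}
]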
Expected HTTP Response for Bulk Subscribe
An HTTP 2xx response denotes that entries (individual messages) inside this bulk message have been processed by the application and Dapr will now check each EntryId status. A JSON-encoded payload body with the processing status against each entry needs to be sent:
{
"statuses":
[
{
"entryId": "<entryId1>",
"status": "<status>"
},
{
"entryId": "<entryId2>",
"status": "<status>"
}
]
}
Note: If an EntryId status is not found by Dapr in a response received from the application, that entry’s status is considered RETRY.
Status | Description |
---|---|
SUCCESS | Message is processed successfully |
RETRY | Message to be retried by Dapr |
DROP | Warning is logged and message is dropped |
The HTTP response might be different from HTTP 2xx. The following table describes Dapr’s behavior for different HTTP statuses:
HTTP Status | Description |
---|---|
2xx | message is processed as per status in payload. |
404 | error is logged and all messages are dropped |
other | warning is logged and all messages to be retried |
Message envelope
Dapr pub/sub adheres to version 1.0 of CloudEvents.
Related links
1.3 - Workflow API reference
Dapr provides users with the ability to interact with workflows through its built-in workflow engine, which is implemented using Dapr Actors. This workflow engine is accessed using the name dapr in API calls as the workflowComponentName.
Start workflow request
Start a workflow instance with the given name and optionally, an instance ID.
POST http://localhost:<daprPort>/v1.0/workflows/<workflowComponentName>/<workflowName>/start[?instanceID=<instanceID>]
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
URL parameters
Parameter | Description |
---|---|
workflowComponentName | Use dapr for Dapr Workflows |
workflowName | Identify the workflow type |
instanceID | (Optional) Unique value created for each run of a specific workflow |
Request content
Any request content will be passed to the workflow as input. The Dapr API passes the content as-is without attempting to interpret it.
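For example, a sketch of starting a workflow type named OrderProcessingWorkflow (an illustrative name registered by your application) with a JSON payload as input:
curl -X POST "http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678" \
-H "Content-Type: application/json" \
-d '{ "item": "computer", "quantity": 1 }'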
HTTP response codes
Code | Description |
---|---|
202 | Accepted |
400 | Request was malformed |
500 | Request formatted correctly, error in dapr code |
Response content
The API call will provide a response similar to this:
{
"instanceID": "12345678"
}
Terminate workflow request
Terminate a running workflow instance with the given name and instance ID.
POST http://localhost:<daprPort>/v1.0/workflows/<workflowComponentName>/<instanceId>/terminate
Note
Terminating a workflow terminates all of the child workflows created by the workflow instance.
Terminating a workflow has no effect on any in-flight activity executions that were started by the terminated instance.
URL parameters
Parameter | Description |
---|---|
workflowComponentName | Use dapr for Dapr Workflows |
instanceId | Unique value created for each run of a specific workflow |
HTTP response codes
Code | Description |
---|---|
202 | Accepted |
400 | Request was malformed |
500 | Request formatted correctly, error in dapr code |
Response content
This API does not return any content.
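For example, a sketch of terminating the instance started above:
curl -X POST http://localhost:3500/v1.0/workflows/dapr/12345678/terminate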
Raise Event request
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following “raise event” API to deliver a named event to a specific workflow instance.
POST http://localhost:<daprPort>/v1.0/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
Note
The exact mechanism for subscribing to an event depends on the workflow component that you’re using. Dapr Workflow has one way of subscribing to external events but other workflow components might have different ways.
URL parameters
Parameter | Description |
---|---|
workflowComponentName | Use dapr for Dapr Workflows |
instanceId | Unique value created for each run of a specific workflow |
eventName | The name of the event to raise |
HTTP response codes
Code | Description |
---|---|
202 | Accepted |
400 | Request was malformed |
500 | Request formatted correctly, error in dapr code or underlying component |
Response content
None.
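For example, a sketch of raising an event named approvalReceived (an illustrative event name) on the instance above; the request body, if any, is assumed to be passed to the workflow as the event payload:
curl -X POST http://localhost:3500/v1.0/workflows/dapr/12345678/raiseEvent/approvalReceived \
-H "Content-Type: application/json" \
-d '{ "approved": true }'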
Pause workflow request
Pause a running workflow instance.
POST http://localhost:<daprPort>/v1.0/workflows/<workflowComponentName>/<instanceId>/pause
URL parameters
Parameter | Description |
---|---|
workflowComponentName | Use dapr for Dapr Workflows |
instanceId | Unique value created for each run of a specific workflow |
HTTP response codes
Code | Description |
---|---|
202 | Accepted |
400 | Request was malformed |
500 | Error in Dapr code or underlying component |
Response content
None.
Resume workflow request
Resume a paused workflow instance.
POST http://localhost:<daprPort>/v1.0/workflows/<workflowComponentName>/<instanceId>/resume
URL parameters
Parameter | Description |
---|---|
workflowComponentName | Use dapr for Dapr Workflows |
instanceId | Unique value created for each run of a specific workflow |
HTTP response codes
Code | Description |
---|---|
202 | Accepted |
400 | Request was malformed |
500 | Error in Dapr code |
Response content
None.
Purge workflow request
Purge the workflow state from your state store with the workflow’s instance ID.
POST http://localhost:<daprPort>/v1.0/workflows/<workflowComponentName>/<instanceId>/purge
Note
Only COMPLETED, FAILED, or TERMINATED workflows can be purged.
URL parameters
Parameter | Description |
---|---|
workflowComponentName | Use dapr for Dapr Workflows |
instanceId | Unique value created for each run of a specific workflow |
HTTP response codes
Code | Description |
---|---|
202 | Accepted |
400 | Request was malformed |
500 | Error in Dapr code |
Response content
None.
Get workflow request
Get information about a given workflow instance.
GET http://localhost:<daprPort>/v1.0/workflows/<workflowComponentName>/<instanceId>
URL parameters
Parameter | Description |
---|---|
workflowComponentName | Use dapr for Dapr Workflows |
instanceId | Unique value created for each run of a specific workflow |
HTTP response codes
Code | Description |
---|---|
200 | OK |
400 | Request was malformed |
500 | Error in Dapr code |
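For example, a sketch of fetching the instance used above:
curl http://localhost:3500/v1.0/workflows/dapr/12345678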
Response content
The API call will provide a JSON response similar to this:
{
"createdAt": "2023-01-12T21:31:13Z",
"instanceID": "12345678",
"lastUpdatedAt": "2023-01-12T21:31:13Z",
"properties": {
"property1": "value1",
"property2": "value2"
},
"runtimeStatus": "RUNNING"
}
Parameter | Description |
---|---|
runtimeStatus | The status of the workflow instance. Values include: "RUNNING", "COMPLETED", "CONTINUED_AS_NEW", "FAILED", "CANCELED", "TERMINATED", "PENDING", "SUSPENDED" |
Next Steps
1.4 - State management API reference
Component file
A Dapr statestore.yaml component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.<TYPE>
  version: v1
  metadata:
  - name: <KEY>
    value: <VALUE>
  - name: <KEY>
    value: <VALUE>
Setting | Description |
---|---|
metadata.name | The name of the state store. |
spec/metadata | An open key value pair metadata that allows a binding to define connection properties. |
Key scheme
Dapr state stores are key/value stores. To ensure data compatibility, Dapr requires these data stores follow a fixed key scheme. For general states, the key format is:
<App ID>||<state key>
For Actor states, the key format is:
<App ID>||<Actor type>||<Actor id>||<state key>
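For example, a general state key weapon saved by the app myApp is stored as myApp||weapon, while the same key for actor type stormtrooper with actor ID 50 is stored as myApp||stormtrooper||50||weapon.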
Save state
This endpoint lets you save an array of state objects.
HTTP Request
POST http://localhost:<daprPort>/v1.0/state/<storename>
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | The metadata.name field in the user-configured statestore.yaml component file. Refer to the Dapr state store configuration structure mentioned above. |
The optional request metadata is passed via URL query parameters. For example,
POST http://localhost:3500/v1.0/state/myStore?metadata.contentType=application/json
All URL parameters are case-sensitive.
Since || is a reserved string, it cannot be used in the <state key> field.
Request Body
A JSON array of state objects. Each state object consists of the following fields:
Field | Description |
---|---|
key | State key |
value | State value, which can be any byte array |
etag | (optional) State ETag |
metadata | (optional) Additional key-value pairs to be passed to the state store |
options | (optional) State operation options; see state operation options |
ETag format: Dapr runtime treats ETags as opaque strings. The exact ETag format is defined by the corresponding data store.
Metadata
Metadata can be sent via query parameters in the request’s URL. It must be prefixed with metadata., as shown below.
Parameter | Description |
---|---|
metadata.ttlInSeconds | The number of seconds for the message to expire, as described here |
TTL: Only certain state stores support the TTL option, according to the supported state stores.
HTTP Response
Response Codes
Code | Description |
---|---|
204 | State saved |
400 | State store is missing or misconfigured or malformed request |
500 | Failed to save state |
Response Body
None.
Example
curl -X POST http://localhost:3500/v1.0/state/starwars?metadata.contentType=application/json \
-H "Content-Type: application/json" \
-d '[
{
"key": "weapon",
"value": "DeathStar",
"etag": "1234"
},
{
"key": "planet",
"value": {
"name": "Tatooine"
}
}
]'
Get state
This endpoint lets you get the state for a specific key.
HTTP Request
GET http://localhost:<daprPort>/v1.0/state/<storename>/<key>
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | metadata.name field in the user-configured statestore.yaml component file. Refer to the Dapr state store configuration structure mentioned above. |
key | The key of the desired state |
consistency | (optional) Read consistency mode; see state operation options |
metadata | (optional) Metadata as query parameters to the state store |
The optional request metadata is passed via URL query parameters. For example,
GET http://localhost:3500/v1.0/state/myStore/myKey?metadata.contentType=application/json
Note, all URL parameters are case-sensitive.
HTTP Response
Response Codes
Code | Description |
---|---|
200 | Get state successful |
204 | Key is not found |
400 | State store is missing or misconfigured |
500 | Get state failed |
Response Headers
Header | Description |
---|---|
ETag | ETag of returned value |
Response Body
JSON-encoded value
Example
curl http://localhost:3500/v1.0/state/starwars/planet?metadata.contentType=application/json
The above command returns the state:
{
"name": "Tatooine"
}
To pass metadata as query parameter:
GET http://localhost:3500/v1.0/state/starwars/planet?metadata.partitionKey=mypartitionKey&metadata.contentType=application/json
Get bulk state
This endpoint lets you get a list of values for a given list of keys.
HTTP Request
POST/PUT http://localhost:<daprPort>/v1.0/state/<storename>/bulk
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | metadata.name field in the user-configured statestore.yaml component file. Refer to the Dapr state store configuration structure mentioned above. |
metadata | (optional) Metadata as query parameters to the state store |
The optional request metadata is passed via URL query parameters. For example,
POST/PUT http://localhost:3500/v1.0/state/myStore/bulk?metadata.partitionKey=mypartitionKey
Note, all URL parameters are case-sensitive.
HTTP Response
Response Codes
Code | Description |
---|---|
200 | Get state successful |
400 | State store is missing or misconfigured |
500 | Get bulk state failed |
Response Body
An array of JSON-encoded values
Example
curl http://localhost:3500/v1.0/state/myRedisStore/bulk \
-H "Content-Type: application/json" \
-d '{
"keys": [ "key1", "key2" ],
"parallelism": 10
}'
The above command returns an array of key/value objects:
[
{
"key": "key1",
"value": "value1",
"etag": "1"
},
{
"key": "key2",
"value": "value2",
"etag": "1"
}
]
To pass metadata as query parameter:
POST http://localhost:3500/v1.0/state/myRedisStore/bulk?metadata.partitionKey=mypartitionKey
Delete state
This endpoint lets you delete the state for a specific key.
HTTP Request
DELETE http://localhost:<daprPort>/v1.0/state/<storename>/<key>
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | metadata.name field in the user-configured statestore.yaml component file. Refer to the Dapr state store configuration structure mentioned above. |
key | The key of the desired state |
concurrency | (optional) Either first-write or last-write; see state operation options |
consistency | (optional) Either strong or eventual; see state operation options |
The optional request metadata is passed via URL query parameters. For example,
DELETE http://localhost:3500/v1.0/state/myStore/myKey?metadata.contentType=application/json
Note, all URL parameters are case-sensitive.
Request Headers
Header | Description |
---|---|
If-Match | (Optional) ETag associated with the key to be deleted |
HTTP Response
Response Codes
Code | Description |
---|---|
204 | Delete state successful |
400 | State store is missing or misconfigured |
500 | Delete state failed |
Response Body
None.
Example
curl -X DELETE http://localhost:3500/v1.0/state/starwars/planet -H "If-Match: xxxxxxx"
Query state
This endpoint lets you query the key/value state.
alpha
This API is in alpha stage.
HTTP Request
POST/PUT http://localhost:<daprPort>/v1.0-alpha1/state/<storename>/query
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | metadata.name field in the user-configured statestore.yaml component file. Refer to the Dapr state store configuration structure mentioned above. |
metadata | (optional) Metadata as query parameters to the state store |
The optional request metadata is passed via URL query parameters. For example,
POST http://localhost:3500/v1.0-alpha1/state/myStore/query?metadata.contentType=application/json
Note, all URL parameters are case-sensitive.
Response Codes
Code | Description |
---|---|
200 | State query successful |
400 | State store is missing or misconfigured |
500 | State query failed |
Response Body
An array of JSON-encoded values
Example
curl -X POST http://localhost:3500/v1.0-alpha1/state/myStore/query?metadata.contentType=application/json \
-H "Content-Type: application/json" \
-d '{
"filter": {
"OR": [
{
"EQ": { "person.org": "Dev Ops" }
},
{
"AND": [
{
"EQ": { "person.org": "Finance" }
},
{
"IN": { "state": [ "CA", "WA" ] }
}
]
}
]
},
"sort": [
{
"key": "state",
"order": "DESC"
},
{
"key": "person.id"
}
],
"page": {
"limit": 3
}
}'
The above command returns an array of objects along with a token:
{
"results": [
{
"key": "1",
"data": {
"person": {
"org": "Dev Ops",
"id": 1036
},
"city": "Seattle",
"state": "WA"
},
"etag": "6f54ad94-dfb9-46f0-a371-e42d550adb7d"
},
{
"key": "4",
"data": {
"person": {
"org": "Dev Ops",
"id": 1042
},
"city": "Spokane",
"state": "WA"
},
"etag": "7415707b-82ce-44d0-bf15-6dc6305af3b1"
},
{
"key": "10",
"data": {
"person": {
"org": "Dev Ops",
"id": 1054
},
"city": "New York",
"state": "NY"
},
"etag": "26bbba88-9461-48d1-8a35-db07c374e5aa"
}
],
"token": "3"
}
To pass metadata as query parameter:
POST http://localhost:3500/v1.0-alpha1/state/myStore/query?metadata.partitionKey=mypartitionKey
State transactions
Persists the changes to the state store as a transactional operation.
This API depends on a state store component that supports transactions.
Refer to the state store component spec for a full, current list of state stores that support transactions.
HTTP Request
POST/PUT http://localhost:<daprPort>/v1.0/state/<storename>/transaction
HTTP Response Codes
Code | Description |
---|---|
204 | Request successful |
400 | State store is missing or misconfigured or malformed request |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | metadata.name field in the user-configured statestore.yaml component file. Refer to the Dapr state store configuration structure mentioned above. |
The optional request metadata is passed via URL query parameters. For example,
POST http://localhost:3500/v1.0/state/myStore/transaction?metadata.contentType=application/json
Note, all URL parameters are case-sensitive.
Request Body
Field | Description |
---|---|
operations | A JSON array of state operations |
metadata | (optional) The metadata for the transaction that applies to all operations |
All transactional databases implement the following required operations:
Operation | Description |
---|---|
upsert | Adds or updates the value |
delete | Deletes the value |
Each operation has an associated request that consists of the following fields:
Request | Description |
---|---|
key | State key |
value | State value, which can be any byte array |
etag | (optional) State ETag |
metadata | (optional) Additional key-value pairs to be passed to the state store that apply for this operation |
options | (optional) State operation options; see state operation options |
Examples
The example below shows an upsert operation for key1 and a delete operation for key2. This is applied to the partition named ‘planet’ in the state store. Both operations either succeed or fail in the transaction.
curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
-H "Content-Type: application/json" \
-d '{
"operations": [
{
"operation": "upsert",
"request": {
"key": "key1",
"value": "myData"
}
},
{
"operation": "delete",
"request": {
"key": "key2"
}
}
],
"metadata": {
"partitionKey": "planet"
}
}'
Configuring state store for actors
Actors don’t support multiple state stores and require a transactional state store to be used with Dapr. View which services currently implement the transactional state store interface.
Specify which state store to use for actors by setting a true value for the property actorStateStore in the metadata section of the statestore.yaml component file.
For example, the following component yaml will configure Redis to be used as the state store for Actors.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: <redis host>
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "true"
Optional behaviors
Key scheme
A Dapr-compatible state store shall use the following key scheme:
- <App ID>||<state key> key format for general states
- <App ID>||<Actor type>||<Actor id>||<state key> key format for Actor states.
Concurrency
Dapr uses Optimistic Concurrency Control (OCC) with ETags. Dapr makes the following requirements optional on state stores:
- A Dapr-compatible state store may support optimistic concurrency control using ETags. The store allows the update when an ETag:
- Is associated with a save or delete request.
- Matches the latest ETag in the database.
- When ETag is missing in the write requests, the state store shall handle the requests in a last-write-wins fashion. This allows optimizations for high-throughput write scenarios, in which data contention is low or has no negative effects.
- A store shall always return ETags when returning states to callers.
Consistency
Dapr allows clients to attach a consistency hint to get, set, and delete operations. Dapr supports two consistency levels: strong and eventual.
Eventual Consistency
Dapr assumes data stores are eventually consistent by default. A state store should:
- For read requests, return data from any of the replicas.
- For write requests, asynchronously replicate updates to configured quorum after acknowledging the update request.
Strong Consistency
When a strong consistency hint is attached, a state store should:
- For read requests, return the most up-to-date data consistently across replicas.
- For write/delete requests, synchronously replicate updated data to configured quorum before completing the write request.
Example: Complete options request example
The following is an example set request with a complete options definition:
curl -X POST http://localhost:3500/v1.0/state/starwars \
-H "Content-Type: application/json" \
-d '[
{
"key": "weapon",
"value": "DeathStar",
"etag": "xxxxx",
"options": {
"concurrency": "first-write",
"consistency": "strong"
}
}
]'
Example: Working with ETags
The following is an example walk-through of ETag usage when setting/deleting an object in a compatible state store. This sample defines Redis as statestore.
- Store an object in a state store:
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[ { "key": "sampleData", "value": "1" } ]'
- Get the object to find the ETag set automatically by the state store:
curl http://localhost:3500/v1.0/state/statestore/sampleData -v
* Connected to localhost (127.0.0.1) port 3500 (#0)
> GET /v1.0/state/statestore/sampleData HTTP/1.1
> Host: localhost:3500
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: fasthttp
< Date: Sun, 14 Feb 2021 04:51:50 GMT
< Content-Type: application/json
< Content-Length: 3
< Etag: 1
< Traceparent: 00-3452582897d134dc9793a244025256b1-b58d8d773e4d661d-01
<
* Connection #0 to host localhost left intact
"1"* Closing connection 0
The returned ETag above was 1. If you send a new request to update or delete the data with the wrong ETag, it will return an error. Omitting the ETag will allow the request.
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[ { "key": "sampleData", "value": "2", "etag": "2" } ]'
{"errorCode":"ERR_STATE_SAVE","message":"failed saving state in state store statestore: possible etag mismatch. error from state store: ERR Error running script (call to f_83e03ec05d6a3b6fb48483accf5e594597b6058f): @user_script:1: user_script:1: failed to set key nodeapp||sampleData"}
# Delete
curl -X DELETE -H 'If-Match: 5' http://localhost:3500/v1.0/state/statestore/sampleData
{"errorCode":"ERR_STATE_DELETE","message":"failed deleting state with key sampleData: possible etag mismatch. error from state store: ERR Error running script (call to f_9b5da7354cb61e2ca9faff50f6c43b81c73c0b94): @user_script:1: user_script:1: failed to delete node app||sampleData"}
- Update or delete the object by simply matching the ETag in either the request body (update) or the If-Match header (delete). When the state is updated, it receives a new ETag that future updates or deletes will need to use.
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[ { "key": "sampleData", "value": "2", "etag": "1" } ]'
# Delete
curl -X DELETE -H 'If-Match: 1' http://localhost:3500/v1.0/state/statestore/sampleData
Next Steps
1.5 - Bindings API reference
Dapr provides bi-directional binding capabilities for applications and a consistent approach to interacting with different cloud/on-premise services or systems. Developers can invoke output bindings using the Dapr API, and have the Dapr runtime trigger an application with input bindings.
Examples of bindings include Kafka, RabbitMQ, Azure Event Hubs, AWS SQS, and GCP Storage, to name a few.
Bindings Structure
A Dapr Binding yaml file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.<TYPE>
  version: v1
  metadata:
  - name: <NAME>
    value: <VALUE>
The metadata.name is the name of the binding.
If running self-hosted locally, place this file in your components folder next to your state store and message queue yml configurations.
If running on Kubernetes, apply the component to your cluster.
Note: In production, never place passwords or secrets within Dapr component files. For information on securely storing and retrieving secrets using secret stores, refer to Setup Secret Store.
Binding direction (optional)
In some scenarios, it would be useful to provide additional information to Dapr to indicate the direction supported by the binding component.
Providing the binding direction helps the Dapr sidecar avoid the "wait for the app to become ready" state, where it waits indefinitely for the application to become available. This decouples the lifecycle dependency between the Dapr sidecar and the application.
You can specify the direction field as part of the component’s metadata. The valid values for this field are:
- "input"
- "output"
- "input, output"
Note
It is highly recommended that all bindings include the direction property.
Here are a few scenarios where the "direction" metadata field could help:
- When an application (detached from the sidecar) runs as a serverless workload and is scaled to zero, the "wait for the app to become ready" check done by the Dapr sidecar becomes pointless.
- If the detached Dapr sidecar is scaled to zero and the application reaches the sidecar (before even starting an HTTP server), the "wait for the app to become ready" check deadlocks the app and the sidecar into waiting for each other.
Example
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafkaevent
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: brokers
    value: "http://localhost:5050"
  - name: topics
    value: "someTopic"
  - name: publishTopic
    value: "someTopic2"
  - name: consumerGroup
    value: "group1"
  - name: "direction"
    value: "input, output"
Invoking Service Code Through Input Bindings
A developer who wants to trigger their app using an input binding can listen on a POST HTTP endpoint with the route name being the same as metadata.name.
On startup, Dapr sends an OPTIONS request to the metadata.name endpoint and expects a status code other than NOT FOUND (404) if this application wants to subscribe to the binding.
The metadata section is an open key/value metadata pair that allows a binding to define connection properties, as well as custom properties unique to the component implementation.
Examples
For example, here’s how a Python application subscribes to events from Kafka using a Dapr API compliant platform. Note how the metadata.name value kafkaevent in the component matches the POST route name in the Python code.
Kafka Component
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafkaevent
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: brokers
    value: "http://localhost:5050"
  - name: topics
    value: "someTopic"
  - name: publishTopic
    value: "someTopic2"
  - name: consumerGroup
    value: "group1"
Python Code
from flask import Flask
app = Flask(__name__)

@app.route("/kafkaevent", methods=['POST'])
def incoming():
    print("Hello from Kafka!", flush=True)
    return "Kafka Event Processed!"
Binding endpoints
Bindings are discovered from component yaml files. Dapr calls this endpoint on startup to ensure that the app can handle this call. If the app doesn’t have the endpoint, Dapr ignores it.
HTTP Request
OPTIONS http://localhost:<appPort>/<name>
HTTP Response codes
Code | Description |
---|---|
404 | Application does not want to bind to the binding |
2xx or 405 | Application wants to bind to the binding |
URL Parameters
Parameter | Description |
---|---|
appPort | the application port |
name | the name of the binding |
Note, all URL parameters are case-sensitive.
Binding payload
In order to deliver binding inputs, a POST call is made to user code with the name of the binding as the URL path.
HTTP Request
POST http://localhost:<appPort>/<name>
HTTP Response codes
Code | Description |
---|---|
200 | Application processed the input binding successfully |
URL Parameters
Parameter | Description |
---|---|
appPort | the application port |
name | the name of the binding |
Note, all URL parameters are case-sensitive.
HTTP Response body (optional)
Optionally, a response body can be used to directly bind input bindings with state stores or output bindings.
Example:
Dapr stores stateDataToStore into a state store named “stateStore”.
Dapr sends jsonObject to the output bindings named “storage” and “queue” in parallel.
If concurrency is not set, it is sent out sequentially (the example below shows these operations are done in parallel).
{
"storeName": "stateStore",
"state": stateDataToStore,
"to": ['storage', 'queue'],
"concurrency": "parallel",
"data": jsonObject,
}
Invoking Output Bindings
This endpoint lets you invoke a Dapr output binding.
Dapr bindings support various operations, such as create.
See the different specs on each binding to see the list of supported operations.
HTTP Request
POST/PUT http://localhost:<daprPort>/v1.0/bindings/<name>
HTTP Response codes
Code | Description |
---|---|
200 | Request successful |
204 | Empty Response |
400 | Malformed request |
500 | Request failed |
Payload
The bindings endpoint receives the following JSON payload:
{
"data": "",
"metadata": {
"": ""
},
"operation": ""
}
Note, all URL parameters are case-sensitive.
The data field takes any JSON serializable value and acts as the payload to be sent to the output binding.
The metadata field is an array of key/value pairs and allows you to set binding specific metadata for each call.
The operation field tells the Dapr binding which operation it should perform.
URL Parameters
Parameter | Description |
---|---|
daprPort | the Dapr port |
name | the name of the output binding to invoke |
Note, all URL parameters are case-sensitive.
Examples
curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"key": "redis-key-1"
},
"operation": "create"
}'
Common metadata values
There are common metadata properties which are supported across multiple binding components. The list below illustrates them:
Property | Description | Binding definition | Available in |
---|---|---|---|
ttlInSeconds | Defines the time to live in seconds for the message | If set in the binding definition will cause all messages to have a default time to live. The message ttl overrides any value in the binding definition. | RabbitMQ, Azure Service Bus, Azure Storage Queue |
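For example, a sketch of invoking an output binding named myRabbitMQ (an illustrative name) with a per-message TTL of 60 seconds:
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d '{
"data": { "message": "expires in a minute" },
"metadata": { "ttlInSeconds": "60" },
"operation": "create"
}'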
1.6 - Actors API reference
Dapr provides native, cross-platform, and cross-language virtual actor capabilities. Besides the language specific SDKs, a developer can invoke an actor using the API endpoints below.
User service code calling Dapr
Invoke actor method
Invoke an actor method through Dapr.
HTTP Request
POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/method/<method>
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
500 | Request failed |
XXX | Status code from upstream call |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
method | The name of the method to invoke. |
Note, all URL parameters are case-sensitive.
Examples
Example of invoking a method on an actor:
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/method/shoot \
-H "Content-Type: application/json"
You can provide the method parameters and values in the body of the request, for example in curl using -d "{\"param\":\"value\"}". Example of invoking a method on an actor that takes parameters:
curl -X POST http://localhost:3500/v1.0/actors/x-wing/33/method/fly \
-H "Content-Type: application/json" \
-d '{
"destination": "Hoth"
}'
or
curl -X POST http://localhost:3500/v1.0/actors/x-wing/33/method/fly \
-H "Content-Type: application/json" \
-d "{\"destination\":\"Hoth\"}"
The response (the method return) from the remote endpoint is returned in the response body.
Actor state transactions
Persists the change to the state for an actor as a multi-item transaction.
Note that this operation is dependent on using a state store component that supports multi-item transactions.
TTL
With the ActorStateTTL feature enabled, actor clients can set the ttlInSeconds field in the transaction metadata to have the state expire after that many seconds. If the ttlInSeconds field is not set, the state will not expire.
Keep this in mind when building actor applications with this feature enabled: currently, all actor SDKs preserve the actor state in their local cache even after the state has expired. This means that the actor state will not be removed from the local cache if the TTL has expired, until the actor is restarted or deactivated. This behaviour will be changed in a future release.
See the Dapr Community Call 80 recording for more details on actor state TTL.
HTTP Request
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/state
HTTP Response Codes
Code | Description |
---|---|
204 | Request successful |
400 | Actor not found |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
Note, all URL parameters are case-sensitive.
Examples
Note, the following example uses the ttlInSeconds field, which requires the ActorStateTTL feature enabled.
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/state \
-H "Content-Type: application/json" \
-d '[
{
"operation": "upsert",
"request": {
"key": "key1",
"value": "myData",
"metadata": {
"ttlInSeconds": "3600"
}
}
},
{
"operation": "delete",
"request": {
"key": "key2"
}
}
]'
Get actor state
Gets the state for an actor using a specified key.
HTTP Request
GET http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/state/<key>
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
204 | Key not found, and the response will be empty |
400 | Actor not found |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
key | The key for the state value. |
Note, all URL parameters are case-sensitive.
Examples
curl http://localhost:3500/v1.0/actors/stormtrooper/50/state/location \
-H "Content-Type: application/json"
The above command returns the state:
{
"location": "Alderaan"
}
Create actor reminder
Creates a persistent reminder for an actor.
HTTP Request
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
Reminder request body
A JSON object with the following fields:
Field | Description |
---|---|
dueTime | Specifies the time after which the reminder is invoked. Its format should be time.ParseDuration |
period | Specifies the period between different invocations. Its format should be time.ParseDuration or ISO 8601 duration format with optional recurrence. |
ttl | Sets time at or interval after which the timer or reminder will be expired and deleted. Its format should be time.ParseDuration format, RFC3339 date format, or ISO 8601 duration format. |
data | A string value and can be any related content. Content is returned when the reminder expires. For example this may be useful for returning a URL or anything related to the content. |
The period field supports time.Duration format and ISO 8601 format with some limitations. For period, only the ISO 8601 duration format Rn/PnYnMnWnDTnHnMnS is supported. Rn/ specifies that the reminder will be invoked n number of times.
- n should be a positive integer greater than 0.
- If certain values are 0, the period can be shortened; for example, 10 seconds can be specified in ISO 8601 duration as PT10S.
If Rn/ is not specified, the reminder will run an infinite number of times until deleted.
If only ttl and dueTime are set, the reminder will be accepted. However, only the dueTime takes effect. For example, the reminder triggers at dueTime, and ttl is ignored.
If ttl, dueTime, and period are set, the reminder first fires at dueTime, then repeatedly fires and expires according to period and ttl.
The following example specifies a dueTime of 3 seconds and a period of 7 seconds.
{
"dueTime":"0h0m3s0ms",
"period":"0h0m7s0ms"
}
A dueTime of 0 means to fire immediately. The following body means to fire immediately, then every 9 seconds.
{
"dueTime":"0h0m0s0ms",
"period":"0h0m9s0ms"
}
To configure the reminder to fire only once, the period should be set to an empty string. The following specifies a dueTime of 3 seconds with a period of empty string, which means the reminder will fire in 3 seconds and then never fire again.
{
"dueTime":"0h0m3s0ms",
"period":""
}
When you specify the repetition number in both period and ttl, the timer/reminder is stopped when either condition is met. The following example has a timer with a period of 3 seconds (in ISO 8601 duration format) and a ttl of 20 seconds. This timer fires immediately after registration, then every 3 seconds after that for the duration of 20 seconds, after which it never fires again since the ttl was met.
{
"period":"PT3S",
"ttl":"20s"
}
The following example also sets the data field, whose content is returned when the reminder expires:
{
"data": "someData",
"dueTime": "1m",
"period": "20s"
}
HTTP Response Codes
Code | Description |
---|---|
204 | Request successful |
500 | Request failed |
400 | Actor not found or malformed request |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
name | The name of the reminder to create. |
Note, all URL parameters are case-sensitive.
Examples
curl http://localhost:3500/v1.0/actors/stormtrooper/50/reminders/checkRebels \
-H "Content-Type: application/json" \
-d '{
"data": "someData",
"dueTime": "1m",
"period": "20s"
}'
Get actor reminder
Gets a reminder for an actor.
HTTP Request
GET http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
name | The name of the reminder to get. |
Note, all URL parameters are case-sensitive.
Examples
curl http://localhost:3500/v1.0/actors/stormtrooper/50/reminders/checkRebels \
-H "Content-Type: application/json"
The above command returns the reminder:
{
"dueTime": "1s",
"period": "5s",
"data": "0"
}
Delete actor reminder
Deletes a reminder for an actor.
HTTP Request
DELETE http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
HTTP Response Codes
Code | Description |
---|---|
204 | Request successful |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
name | The name of the reminder to delete. |
Note, all URL parameters are case-sensitive.
Examples
curl -X DELETE http://localhost:3500/v1.0/actors/stormtrooper/50/reminders/checkRebels \
-H "Content-Type: application/json"
Create actor timer
Creates a timer for an actor.
HTTP Request
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/timers/<name>
Timer request body:
The format for the timer request body is the same as for actor reminders. For example:
The following specifies a dueTime of 3 seconds and a period of 7 seconds.
{
"dueTime":"0h0m3s0ms",
"period":"0h0m7s0ms"
}
A dueTime of 0 means to fire immediately. The following body means to fire immediately, then every 9 seconds.
{
"dueTime":"0h0m0s0ms",
"period":"0h0m9s0ms"
}
HTTP Response Codes
Code | Description |
---|---|
204 | Request successful |
500 | Request failed |
400 | Actor not found or malformed request |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
name | The name of the timer to create. |
Note, all URL parameters are case-sensitive.
Examples
curl http://localhost:3500/v1.0/actors/stormtrooper/50/timers/checkRebels \
-H "Content-Type: application/json" \
-d '{
"data": "someData",
"dueTime": "1m",
"period": "20s",
"callback": "myEventHandler"
}'
Delete actor timer
Deletes a timer for an actor.
HTTP Request
DELETE http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/timers/<name>
HTTP Response Codes
Code | Description |
---|---|
204 | Request successful |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
actorType | The actor type. |
actorId | The actor ID. |
name | The name of the timer to delete. |
Note, all URL parameters are case-sensitive.
curl -X DELETE http://localhost:3500/v1.0/actors/stormtrooper/50/timers/checkRebels \
-H "Content-Type: application/json"
Dapr calling to user service code
Get registered actors
Get the registered actors types for this app and the Dapr actor configuration settings.
HTTP Request
GET http://localhost:<appPort>/dapr/config
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
appPort | The application port. |
Examples
Example of getting the registered actors:
curl -X GET http://localhost:3000/dapr/config \
-H "Content-Type: application/json"
The above command returns the config (all fields are optional):
Parameter | Description |
---|---|
entities | The actor types this app supports. |
actorIdleTimeout | Specifies how long to wait before deactivating an idle actor. An actor is idle if no actor method calls and no reminders have fired on it. |
actorScanInterval | A duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than the actorIdleTimeout will be deactivated. |
drainOngoingCallTimeout | A duration used when in the process of draining rebalanced actors. This specifies how long to wait for the current active actor method to finish. If there is no current actor method call, this is ignored. |
drainRebalancedActors | A bool. If true, Dapr will wait for drainOngoingCallTimeout to allow a current actor call to complete before trying to deactivate an actor. If false, do not wait. |
reentrancy | A configuration object that holds the options for actor reentrancy. |
enabled | A flag in the reentrancy configuration that is needed to enable reentrancy. |
maxStackDepth | A value in the reentrancy configuration that controls how many reentrant calls can be made to the same actor. |
entitiesConfig | Array of entity configurations that allow per actor type settings. Any configuration defined here must have an entity that maps back into the root level entities. |
Note
Actor settings in configuration for timeouts and intervals use time.ParseDuration format. You can use string formats to represent durations. For example:
- 1h30m or 1.5h: A duration of 1 hour and 30 minutes
- 1d12h: A duration of 1 day and 12 hours
- 500ms: A duration of 500 milliseconds
- -30m: A negative duration of 30 minutes
{
"entities":["actorType1", "actorType2"],
"actorIdleTimeout": "1h",
"actorScanInterval": "30s",
"drainOngoingCallTimeout": "30s",
"drainRebalancedActors": true,
"reentrancy": {
"enabled": true,
"maxStackDepth": 32
},
"entitiesConfig": [
{
"entities": ["actorType1"],
"actorIdleTimeout": "1m",
"drainOngoingCallTimeout": "10s",
"reentrancy": {
"enabled": false
}
}
]
}
Deactivate actor
Deactivates an actor by persisting the instance of the actor to the state store with the specified actorId.
HTTP Request
DELETE http://localhost:<appPort>/actors/<actorType>/<actorId>
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
400 | Actor not found |
500 | Request failed |
URL Parameters
Parameter | Description |
---|---|
appPort | The application port. |
actorType | The actor type. |
actorId | The actor ID. |
Note, all URL parameters are case-sensitive.
Examples
The following example deactivates the actor type stormtrooper
that has actorId
of 50.
curl -X DELETE http://localhost:3000/actors/stormtrooper/50 \
-H "Content-Type: application/json"
Invoke actor method
Invokes a method for an actor with the specified methodName
where:
- Parameters to the method are passed in the body of the request message.
- Return values are provided in the body of the response message.
If the actor is not already running, the app side should activate it.
HTTP Request
PUT http://localhost:<appPort>/actors/<actorType>/<actorId>/method/<methodName>
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
500 | Request failed |
404 | Actor not found |
URL Parameters
Parameter | Description |
---|---|
appPort | The application port. |
actorType | The actor type. |
actorId | The actor ID. |
methodName | The name of the method to invoke. |
Note, all URL parameters are case-sensitive.
Examples
The following example calls the performAction
method on the actor type stormtrooper
that has actorId
of 50.
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/performAction \
-H "Content-Type: application/json"
Invoke reminder
Invokes a reminder for an actor with the specified reminderName. If the actor is not already running, the app side should activate it.
HTTP Request
PUT http://localhost:<appPort>/actors/<actorType>/<actorId>/method/remind/<reminderName>
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
500 | Request failed |
404 | Actor not found |
URL Parameters
Parameter | Description |
---|---|
appPort | The application port. |
actorType | The actor type. |
actorId | The actor ID. |
reminderName | The name of the reminder to invoke. |
Note, all URL parameters are case-sensitive.
Examples
The following example calls the checkRebels
reminder method on the actor type stormtrooper
that has actorId
of 50.
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/remind/checkRebels \
-H "Content-Type: application/json"
Invoke timer
Invokes a timer for an actor with the specified timerName
. If the actor is not already running, the app side should activate it.
HTTP Request
PUT http://localhost:<appPort>/actors/<actorType>/<actorId>/method/timer/<timerName>
HTTP Response Codes
Code | Description |
---|---|
200 | Request successful |
500 | Request failed |
404 | Actor not found |
URL Parameters
Parameter | Description |
---|---|
appPort | The application port. |
actorType | The actor type. |
actorId | The actor ID. |
timerName | The name of the timer to invoke. |
Note, all URL parameters are case-sensitive.
Examples
The following example calls the checkRebels
timer method on the actor type stormtrooper
that has actorId
of 50.
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/timer/checkRebels \
-H "Content-Type: application/json"
Health check
Probes the application for a response to signal to Dapr that the app is healthy and running.
Any response status code other than 200
will be considered an unhealthy response.
A response body is not required.
HTTP Request
GET http://localhost:<appPort>/healthz
HTTP Response Codes
Code | Description |
---|---|
200 | App is healthy |
URL Parameters
Parameter | Description |
---|---|
appPort | The application port. |
Examples
Example of getting a health check response from the app:
curl -X GET http://localhost:3000/healthz
Activating an Actor
Conceptually, activating an actor means creating the actor’s object and adding the actor to a tracking table. Review an example from the .NET SDK.
Querying actor state externally
To enable visibility into the state of an actor and allow for complex scenarios like state aggregation, Dapr saves actor state in external state stores, such as databases. As such, it is possible to query for an actor state externally by composing the correct key or query.
The state namespace created by Dapr for actors is composed of the following items:
- App ID: Represents the unique ID given to the Dapr application.
- Actor Type: Represents the type of the actor.
- Actor ID: Represents the unique ID of the actor instance for an actor type.
- Key: A key for the specific state value. An actor ID can hold multiple state keys.
The following example shows how to construct a key for the state of an actor instance under the myapp
App ID namespace:
myapp||cat||hobbit||food
In the example above, we are getting the value for the state key food
, for the actor ID hobbit
with an actor type of cat
, under the App ID namespace of myapp
.
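As an illustrative sketch only, assuming the actor state lives in the default Redis state store set up by dapr init (where each key is stored as a Redis hash with data and version fields), the composed key can be queried directly:
redis-cli -h localhost -p 6379 HGETALL "myapp||cat||hobbit||food"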
1.7 - Secrets API reference
Get Secret
This endpoint lets you get the value of a secret for a given secret store.
HTTP Request
GET http://localhost:<daprPort>/v1.0/secrets/<secret-store-name>/<name>
URL Parameters
Parameter | Description |
---|---|
daprPort | the Dapr port |
secret-store-name | the name of the secret store to get the secret from |
name | the name of the secret to get |
Note, all URL parameters are case-sensitive.
Query Parameters
Some secret stores support optional, per-request metadata properties. Use query parameters to provide those properties. For example:
GET http://localhost:<daprPort>/v1.0/secrets/<secret-store-name>/<name>?metadata.version_id=15
Observe that not all secret stores support the same set of parameters. For example:
- Hashicorp Vault, GCP Secret Manager and AWS Secret Manager support the version_id parameter
- Only AWS Secret Manager supports the version_stage parameter
- Only Kubernetes Secrets supports the namespace parameter
Check each secret store’s documentation for the list of supported parameters.
HTTP Response
Response Body
If a secret store has support for multiple key-values in a secret, a JSON payload is returned with the key names as fields and their respective values.
In case of a secret store that only has name/value semantics, a JSON payload is returned with the name of the secret as the field and the value of the secret as the value.
See the classification of secret stores that support multiple keys in a secret and name/value semantics.
Response with multiple keys in a secret (eg. Kubernetes):
curl http://localhost:3500/v1.0/secrets/kubernetes/db-secret
{
"key1": "value1",
"key2": "value2"
}
The above example demonstrates a response from a secret store with multiple keys in a secret. Note that the secret name (db-secret
) is not returned as part of the result.
Response from a secret store with name/value semantics:
curl http://localhost:3500/v1.0/secrets/vault/db-secret
{
"db-secret": "value1"
}
The above example demonstrates a response from a secret store with name/value semantics. Compared to the result from a secret store with multiple keys in a secret, this result returns a single key-value pair, with the secret name (db-secret
) returned as the key in the key-value pair.
Response Codes
Code | Description |
---|---|
200 | OK |
204 | Secret not found |
400 | Secret store is missing or misconfigured |
403 | Access denied |
500 | Failed to get secret or no secret stores defined |
Examples
curl http://localhost:3500/v1.0/secrets/mySecretStore/db-secret
curl http://localhost:3500/v1.0/secrets/myAwsSecretStore/db-secret?metadata.version_id=15&metadata.version_stage=production
Get Bulk Secret
This endpoint lets you get all the secrets in a secret store. It’s recommended to use token authentication for Dapr if configuring a secret store.
HTTP Request
GET http://localhost:<daprPort>/v1.0/secrets/<secret-store-name>/bulk
URL Parameters
Parameter | Description |
---|---|
daprPort | the Dapr port |
secret-store-name | the name of the secret store to get the secret from |
Note, all URL parameters are case-sensitive.
HTTP Response
Response Body
The returned response is a JSON containing the secrets. The JSON object will contain the secret names as fields and a map of secret keys and values as the field value.
Response with multiple secrets and multiple key / values in a secret (eg. Kubernetes):
curl http://localhost:3500/v1.0/secrets/kubernetes/bulk
{
"secret1": {
"key1": "value1",
"key2": "value2"
},
"secret2": {
"key3": "value3",
"key4": "value4"
}
}
Response Codes
Code | Description |
---|---|
200 | OK |
400 | Secret store is missing or misconfigured |
403 | Access denied |
500 | Failed to get secret or no secret stores defined |
Examples
curl http://localhost:3500/v1.0/secrets/vault/bulk
{
"key1": {
"key1": "value1"
},
"key2": {
"key2": "value2"
}
}
1.8 - Configuration API reference
Get Configuration
This endpoint lets you get configuration from a store.
HTTP Request
GET http://localhost:<daprPort>/v1.0/configuration/<storename>
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | The metadata.name field in the component file. Refer to the component spec |
Query Parameters
If no query parameters are provided, all configuration items are returned.
To specify the keys of the configuration items to get, use one or more key
query parameters. For example:
GET http://localhost:<daprPort>/v1.0/configuration/mystore?key=config1&key=config2
To retrieve all configuration items:
GET http://localhost:<daprPort>/v1.0/configuration/mystore
Request Body
None
HTTP Response
Response Codes
Code | Description |
---|---|
204 | Get operation successful |
400 | Configuration store is missing or misconfigured or malformed request |
500 | Failed to get configuration |
Response Body
JSON-encoded value of key/value pairs for each configuration item.
Example
curl -X GET 'http://localhost:3500/v1.0/configuration/mystore?key=myConfigKey'
The above command returns the following JSON:
{
"myConfigKey": {
"value":"myConfigValue"
}
}
Subscribe Configuration
This endpoint lets you subscribe to configuration changes. Notifications happen when values are updated or deleted in the configuration store. This enables the application to react to configuration changes.
HTTP Request
GET http://localhost:<daprPort>/v1.0/configuration/<storename>/subscribe
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | The metadata.name field in the component file. Refer to the component spec |
Query Parameters
If no query parameters are provided, all configuration items are subscribed to.
To specify the keys of the configuration items to subscribe to, use one or more key
query parameters. For example:
GET http://localhost:<daprPort>/v1.0/configuration/mystore/subscribe?key=config1&key=config2
To subscribe to all changes:
GET http://localhost:<daprPort>/v1.0/configuration/mystore/subscribe
Request Body
None
HTTP Response
Response Codes
Code | Description |
---|---|
200 | Subscribe operation successful |
400 | Configuration store is missing or misconfigured or malformed request |
500 | Failed to subscribe to configuration changes |
Response Body
JSON-encoded value
Example
curl -X GET 'http://localhost:3500/v1.0/configuration/mystore/subscribe?key=myConfigKey'
The above command returns the following JSON:
{
"id": "<unique-id>"
}
The returned id parameter can be used to unsubscribe from the specific set of keys provided on the subscribe API call. This should be retained by the application.
Unsubscribe Configuration
This endpoint lets you unsubscribe from configuration changes.
HTTP Request
GET http://localhost:<daprPort>/v1.0/configuration/<storename>/<subscription-id>/unsubscribe
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | The metadata.name field in the component file. Refer to the component spec |
subscription-id | The value from the id field returned from the response of the subscribe endpoint |
Query Parameters
None
Request Body
None
HTTP Response
Response Codes
Code | Description |
---|---|
200 | Unsubscribe operation successful |
400 | Configuration store is missing or misconfigured or malformed request |
500 | Failed to unsubscribe from configuration changes |
Response Body
{
"ok" : true
}
Example
curl -X GET 'http://localhost:3500/v1.0-alpha1/configuration/mystore/bf3aa454-312d-403c-af95-6dec65058fa2/unsubscribe'
The above command returns the following JSON:
In case of successful operation:
{
"ok": true
}
In case of unsuccessful operation:
{
"ok": false,
"message": "<dapr returned error message>"
}
Optional application (user code) routes
Provide a route for Dapr to send configuration changes
When subscribing to configuration changes, Dapr invokes the application whenever a configuration item changes. Your application can have a /configuration
endpoint that is called for all key updates that are subscribed to. The endpoint(s) can be made more specific for a given configuration store by adding /<store-name>
and for a specific key by adding /<store-name>/<key>
to the route.
HTTP Request
POST http://localhost:<appPort>/configuration/<store-name>/<key>
URL Parameters
Parameter | Description |
---|---|
appPort | The application port |
storename | The metadata.name field in the component file. Refer to the component spec |
key | The key subscribed to |
Request Body
A list of configuration items for a given subscription id. Configuration items can have a version associated with them, which is returned in the notification.
{
    "id": "<subscription-id>",
    "items": [
        {
            "key": "<key-of-configuration-item>",
            "value": "<new-value>",
            "version": "<version-of-item>"
        }
    ]
}
Example
{
    "id": "bf3aa454-312d-403c-af95-6dec65058fa2",
    "items": [
        {
            "key": "config-1",
            "value": "abcdefgh",
            "version": "1.1"
        }
    ]
}
Next Steps
1.9 - Distributed Lock API reference
Lock
This endpoint lets you acquire a lock by supplying a named lock owner and the resource ID to lock.
HTTP Request
POST http://localhost:<daprPort>/v1.0-alpha1/lock/<storename>
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | The metadata.name field in the component file. Refer to the component schema |
Query Parameters
None
HTTP Response codes
Code | Description |
---|---|
200 | Request successful |
204 | Empty Response |
400 | Malformed request |
500 | Request failed |
HTTP Request Body
The lock endpoint receives the following JSON payload:
{
"resourceId": "",
"lockOwner": "",
"expiryInSeconds": 0
}
Field | Description |
---|---|
resourceId | The ID of the resource to lock. Can be any value |
lockOwner | The name of the lock owner. Should be set to a unique value per-request |
expiryInSeconds | The time in seconds to hold the lock before it expires |
HTTP Response Body
The lock endpoint returns the following payload:
{
"success": true
}
Examples
curl -X POST http://localhost:3500/v1.0-alpha1/lock/redisStore \
-H "Content-Type: application/json" \
-d '{
"resourceId": "lock1",
"lockOwner": "vader",
"expiryInSeconds": 60
}'
{
    "success": true
}
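Because the lockOwner should be unique per request, a common pattern is to generate it on the fly. The following is an illustrative sketch only, using uuidgen for the owner value:
LOCK_OWNER=$(uuidgen)
curl -X POST http://localhost:3500/v1.0-alpha1/lock/redisStore \
  -H "Content-Type: application/json" \
  -d "{\"resourceId\": \"lock1\", \"lockOwner\": \"$LOCK_OWNER\", \"expiryInSeconds\": 60}"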
Unlock
This endpoint lets you unlock an existing lock based on the lock owner and resource ID.
HTTP Request
POST http://localhost:<daprPort>/v1.0-alpha1/unlock/<storename>
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
storename | The metadata.name field in the component file. Refer to the component schema |
Query Parameters
None
HTTP Response codes
Code | Description |
---|---|
200 | Request successful |
204 | Empty Response |
400 | Malformed request |
500 | Request failed |
HTTP Request Body
The unlock endpoint receives the following JSON payload:
{
"resourceId": "",
"lockOwner": ""
}
HTTP Response Body
The unlock endpoint returns the following payload:
{
"status": 0
}
The status
field contains the following response codes:
Code | Description |
---|---|
0 | Success |
1 | Lock doesn’t exist |
2 | Lock belongs to another owner |
3 | Internal error |
Examples
curl -X POST http://localhost:3500/v1.0-alpha1/unlock/redisStore \
-H "Content-Type: application/json" \
-d '{
"resourceId": "lock1",
"lockOwner": "vader"
}'
{
"status": 0
}
1.10 - Health API reference
Dapr provides health checking probes that can be used as readiness or liveness of Dapr and for initialization readiness from SDKs.
Get Dapr health state
Gets the health state for Dapr by either:
- Checking for sidecar health
- Checking for sidecar health, including component readiness, used during initialization
Wait for Dapr HTTP port to become available
Wait for all components to be initialized, the Dapr HTTP port to be available, and the app channel to be initialized. For example, this endpoint is used with Kubernetes liveness probes.
HTTP Request
GET http://localhost:<daprPort>/v1.0/healthz
HTTP Response Codes
Code | Description |
---|---|
204 | Dapr is healthy |
500 | Dapr is not healthy |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
Examples
curl -i http://localhost:3500/v1.0/healthz
Wait for specific health check against the /outbound path
Wait for all components to be initialized and the Dapr HTTP port to be available, while the app channel is not yet established. This endpoint enables your application to call the Dapr sidecar APIs before the app channel is initialized, for example to read secrets with the secrets API. It is used in the Dapr SDKs' waitForSidecar method (for example, in the .NET and Java SDKs) to check that the sidecar is initialized and ready for any calls.
Currently, the v1.0/healthz/outbound endpoint is supported in the:
HTTP Request
GET http://localhost:<daprPort>/v1.0/healthz/outbound
HTTP Response Codes
Code | Description |
---|---|
204 | Dapr is healthy |
500 | Dapr is not healthy |
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
Examples
curl -i http://localhost:3500/v1.0/healthz/outbound
Related articles
1.11 - Metadata API reference
Dapr has a metadata API that returns information about the sidecar allowing runtime discoverability. The metadata endpoint returns the following information.
- Runtime version
- List of the loaded resources (components, subscriptions and HttpEndpoints)
- Registered actor types
- Features enabled
- Application connection details
- Custom, ephemeral attributes with information.
Metadata API
Components
Each loaded component provides its name, type and version and also information about supported features in the form of component capabilities. These features are available for the state store and binding component types. The table below shows the component type and the list of capabilities for a given version. This list might grow in future and only represents the capabilities of the loaded components.
Component type | Capabilities |
---|---|
State Store | ETAG, TRANSACTION, ACTOR, QUERY_API |
Binding | INPUT_BINDING, OUTPUT_BINDING |
HTTPEndpoints
Each loaded HttpEndpoint
provides a name to easily identify the Dapr resource associated with the runtime.
Subscriptions
The metadata API returns a list of pub/sub subscriptions that the app has registered with the Dapr runtime. This includes the pub/sub name, topic, routes, dead letter topic, the subscription type, and the metadata associated with the subscription.
Enabled features
A list of features enabled via Configuration spec (including build-time overrides).
App connection details
The metadata API returns information related to Dapr’s connection to the app. This includes the app port, protocol, host, max concurrency, along with health check details.
Scheduler connection details
Information related to the connection to one or more scheduler hosts.
Attributes
The metadata API allows you to store additional attribute information in the format of key-value pairs. These are ephemeral in-memory and are not persisted if a sidecar is reloaded. This information should be added at the time of a sidecar creation (for example, after the application has started).
Get the Dapr sidecar information
Gets the Dapr sidecar information provided by the Metadata Endpoint.
Use case:
The Get Metadata API can be used for discovering different capabilities supported by loaded components. It can help operators determine which components to provision for the required capabilities.
HTTP Request
GET http://localhost:<daprPort>/v1.0/metadata
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
HTTP Response Codes
Code | Description |
---|---|
200 | Metadata information returned |
500 | Dapr could not return the metadata information |
HTTP Response Body
Metadata API Response Object
Name | Type | Description |
---|---|---|
id | string | Application ID |
runtimeVersion | string | Version of the Dapr runtime |
enabledFeatures | string[] | List of features enabled by Dapr Configuration, see https://docs.dapr.io/operations/configuration/preview-features/ |
actors | Metadata API Response Registered Actor[] | A json encoded array of registered actors metadata. |
extended.attributeName | string | List of custom attributes as key-value pairs, where key is the attribute name. |
components | Metadata API Response Component[] | A json encoded array of loaded components metadata. |
httpEndpoints | Metadata API Response HttpEndpoint[] | A json encoded array of loaded HttpEndpoints metadata. |
subscriptions | Metadata API Response Subscription[] | A json encoded array of pub/sub subscriptions metadata. |
appConnectionProperties | Metadata API Response AppConnectionProperties | A json encoded object of app connection properties. |
scheduler | Metadata API Response Scheduler | A json encoded object of scheduler connection properties. |
Metadata API Response Registered Actor
Name | Type | Description |
---|---|---|
type | string | The registered actor type. |
count | integer | Number of actors running. |
Metadata API Response Component
Name | Type | Description |
---|---|---|
name | string | Name of the component. |
type | string | Component type. |
version | string | Component version. |
capabilities | array | Supported capabilities for this component type and version. |
Metadata API Response HttpEndpoint
Name | Type | Description |
---|---|---|
name | string | Name of the HttpEndpoint. |
Metadata API Response Subscription
Name | Type | Description |
---|---|---|
pubsubname | string | Name of the pub/sub. |
topic | string | Topic name. |
metadata | object | Metadata associated with the subscription. |
rules | Metadata API Response Subscription Rules[] | List of rules associated with the subscription. |
deadLetterTopic | string | Dead letter topic name. |
type | string | Type of the subscription, either DECLARATIVE , STREAMING or PROGRAMMATIC . |
Metadata API Response Subscription Rules
Name | Type | Description |
---|---|---|
match | string | CEL expression to match the message, see https://docs.dapr.io/developing-applications/building-blocks/pubsub/howto-route-messages/#common-expression-language-cel |
path | string | Path to route the message if the match expression is true. |
Metadata API Response AppConnectionProperties
Name | Type | Description |
---|---|---|
port | integer | Port on which the app is listening. |
protocol | string | Protocol used by the app. |
channelAddress | string | Host address on which the app is listening. |
maxConcurrency | integer | Maximum number of concurrent requests the app can handle. |
health | Metadata API Response AppConnectionProperties Health | Health check details of the app. |
Metadata API Response AppConnectionProperties Health
Name | Type | Description |
---|---|---|
healthCheckPath | string | Health check path, applicable for HTTP protocol. |
healthProbeInterval | string | Time between each health probe, in go duration format. |
healthProbeTimeout | string | Timeout for each health probe, in go duration format. |
healthThreshold | integer | Max number of failed health probes before the app is considered unhealthy. |
Metadata API Response Scheduler
Name | Type | Description |
---|---|---|
connected_addresses | string[] | List of strings representing the addresses of the connected scheduler hosts. |
Examples
curl http://localhost:3500/v1.0/metadata
{
"id": "myApp",
"runtimeVersion": "1.12.0",
"enabledFeatures": [
"ServiceInvocationStreaming"
],
"actors": [
{
"type": "DemoActor"
}
],
"components": [
{
"name": "pubsub",
"type": "pubsub.redis",
"version": "v1"
},
{
"name": "statestore",
"type": "state.redis",
"version": "v1",
"capabilities": [
"ETAG",
"TRANSACTIONAL",
"ACTOR"
]
}
],
"httpEndpoints": [
{
"name": "my-backend-api"
}
],
"subscriptions": [
{
"type": "DECLARATIVE",
"pubsubname": "pubsub",
"topic": "orders",
"deadLetterTopic": "",
"metadata": {
"ttlInSeconds": "30"
},
"rules": [
{
"match": "%!s(<nil>)",
"path": "orders"
}
]
}
],
"extended": {
"appCommand": "uvicorn --port 3000 demo_actor_service:app",
"appPID": "98121",
"cliPID": "98114",
"daprRuntimeVersion": "1.12.0"
},
"appConnectionProperties": {
"port": 3000,
"protocol": "http",
"channelAddress": "127.0.0.1",
"health": {
"healthProbeInterval": "5s",
"healthProbeTimeout": "500ms",
"healthThreshold": 3
}
},
"scheduler": {
"connected_addresses": [
"10.244.0.47:50006",
"10.244.0.48:50006",
"10.244.0.49:50006"
]
}
}
Add a custom label to the Dapr sidecar information
Adds a custom label to the Dapr sidecar information stored by the Metadata endpoint.
Use case:
The metadata endpoint is, for example, used by the Dapr CLI when running Dapr in self-hosted mode to store the PID of the process hosting the sidecar and the command used to run the application. Applications can also add attributes as keys after startup.
HTTP Request
PUT http://localhost:<daprPort>/v1.0/metadata/attributeName
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port. |
attributeName | Custom attribute name. This is the key name in the key-value pair. |
HTTP Request Body
In the request you need to pass the custom attribute value as RAW data:
{
"Content-Type": "text/plain"
}
Within the body of the request place the custom attribute value you want to store:
attributeValue
HTTP Response Codes
Code | Description |
---|---|
204 | Custom attribute added to the metadata information |
Examples
Add a custom attribute to the metadata endpoint:
curl -X PUT -H "Content-Type: text/plain" --data "myDemoAttributeValue" http://localhost:3500/v1.0/metadata/myDemoAttribute
Get the metadata information to confirm your custom attribute was added:
{
"id": "myApp",
"runtimeVersion": "1.12.0",
"enabledFeatures": [
"ServiceInvocationStreaming"
],
"actors": [
{
"type": "DemoActor"
}
],
"components": [
{
"name": "pubsub",
"type": "pubsub.redis",
"version": "v1"
},
{
"name": "statestore",
"type": "state.redis",
"version": "v1",
"capabilities": [
"ETAG",
"TRANSACTIONAL",
"ACTOR"
]
}
],
"httpEndpoints": [
{
"name": "my-backend-api"
}
],
"subscriptions": [
{
"type": "PROGRAMMATIC",
"pubsubname": "pubsub",
"topic": "orders",
"deadLetterTopic": "",
"metadata": {
"ttlInSeconds": "30"
},
"rules": [
{
"match": "%!s(<nil>)",
"path": "orders"
}
]
}
],
"extended": {
"myDemoAttribute": "myDemoAttributeValue",
"appCommand": "uvicorn --port 3000 demo_actor_service:app",
"appPID": "98121",
"cliPID": "98114",
"daprRuntimeVersion": "1.12.0"
},
"appConnectionProperties": {
"port": 3000,
"protocol": "http",
"channelAddress": "127.0.0.1",
"health": {
"healthProbeInterval": "5s",
"healthProbeTimeout": "500ms",
"healthThreshold": 3
}
},
"scheduler": {
"connected_addresses": [
"10.244.0.47:50006",
"10.244.0.48:50006",
"10.244.0.49:50006"
]
}
}
1.12 - Placement API reference
Dapr has an HTTP API /placement/state for the Placement service that exposes placement table information. The API is exposed on the Placement service on the same port as the healthz endpoint. This is an unauthenticated endpoint, and it is disabled by default.
To enable the placement metadata in self-hosted mode, set either the DAPR_PLACEMENT_METADATA_ENABLED environment variable or the metadata-enabled command line argument on the Placement service to true. See how to run the Placement service in self-hosted mode.
Important
When running Placement in multi-tenant mode, disable the metadata-enabled command line argument to prevent different namespaces from seeing each other’s data.
If you are using Helm for deployment of the Placement service on Kubernetes then to enable the placement metadata, set dapr_placement.metadataEnabled
to true
.
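As an illustrative sketch for self-hosted mode only (assuming a slim install where the Placement binary from dapr init lives under $HOME/.dapr/bin; your path and flag syntax may differ), either form enables the metadata endpoint:
# enable via environment variable
DAPR_PLACEMENT_METADATA_ENABLED=true $HOME/.dapr/bin/placement
# or enable via command line argument
$HOME/.dapr/bin/placement --metadata-enabled=true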
Usecase
The placement table API can be used for retrieving the current placement table, which contains all the actors registered. This can be helpful for debugging and allows tools to extract and present information about actors.
HTTP Request
GET http://localhost:<healthzPort>/placement/state
HTTP Response Codes
Code | Description |
---|---|
200 | Placement tables information returned |
500 | Placement could not return the placement tables information |
HTTP Response Body
Placement tables API Response Object
Name | Type | Description |
---|---|---|
tableVersion | int | The placement table version |
hostList | Actor Host Info[] | A json array of registered actors host info. |
Name | Type | Description |
---|---|---|
name | string | The host:port address of the actor. |
appId | string | app id. |
actorTypes | json string array | List of actor types it hosts. |
updatedAt | timestamp | Timestamp of the actor registered/updated. |
Examples
curl localhost:8080/placement/state
{
"hostList": [{
"name": "198.18.0.1:49347",
"namespace": "ns1",
"appId": "actor1",
"actorTypes": ["testActorType1", "testActorType3"],
"updatedAt": 1690274322325260000
},
{
"name": "198.18.0.2:49347",
"namespace": "ns2",
"appId": "actor2",
"actorTypes": ["testActorType2"],
"updatedAt": 1690274322325260000
},
{
"name": "198.18.0.3:49347",
"namespace": "ns2",
"appId": "actor2",
"actorTypes": ["testActorType2"],
"updatedAt": 1690274322325260000
}
],
"tableVersion": 1
}
1.13 - Cryptography API reference
Dapr provides cross-platform and cross-language support for encryption and decryption via the cryptography building block. Besides the language-specific SDKs, a developer can invoke these capabilities using the HTTP API endpoints below.
The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly recommended as they implement the gRPC APIs providing higher performance and capability than the HTTP APIs.
Encrypt Payload
This endpoint lets you encrypt a value provided as a byte array using a specified key and crypto component.
HTTP Request
PUT http://localhost:<daprPort>/v1.0-alpha1/crypto/<crypto-store-name>/encrypt
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
crypto-store-name | The name of the crypto store to get the encryption key from |
Note, all URL parameters are case-sensitive.
Headers
Additional encryption parameters are configured by setting headers with the appropriate values. The following table details the required and optional headers to set with every encryption request.
Header Key | Description | Allowed Values | Required |
---|---|---|---|
dapr-key-name | The name of the key to use for the encryption operation | | Yes |
dapr-key-wrap-algorithm | The key wrap algorithm to use | A256KW, A128CBC, A192CBC, RSA-OAEP-256 | Yes |
dapr-omit-decryption-key-name | If true, omits the decryption key name from header dapr-decryption-key-name from the output. If false, includes the specified decryption key name specified in header dapr-decryption-key-name. | The following values will be accepted as true: y, yes, true, t, on, 1 | No |
dapr-decryption-key-name | If dapr-omit-decryption-key-name is true, this contains the name of the intended decryption key to include in the output. | | Required only if dapr-omit-decryption-key-name is true |
dapr-data-encryption-cipher | The cipher to use for the encryption operation | aes-gcm or chacha20-poly1305 | No |
HTTP Response
Response Body
The response to an encryption request will have its content type header set to application/octet-stream
as it
returns an array of bytes with the encrypted payload.
Response Codes
Code | Description |
---|---|
200 | OK |
400 | Crypto provider not found |
500 | Request formatted correctly, error in dapr code or underlying component |
Examples
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/encrypt \
-X PUT \
-H "dapr-key-name: myCryptoKey" \
-H "dapr-key-wrap-algorithm: aes-gcm" \
-H "Content-Type: application/octet-string" \
--data-binary "\x68\x65\x6c\x6c\x6f\x20\x77\x6f\x72\x6c\x64"
The above command sends an array of UTF-8 encoded bytes representing “hello world” and would return a stream of 8-bit values in the response similar to the following containing the encrypted payload:
gAAAAABhZfZ0Ywz4dQX8y9J0Zl5v7w6Z7xq4jV3cW9o2l4pQ0YD1LdR0Zk7zIYi4n2Ll7t6f0Z4X7r8x9o6a8GyL0X1m9Q0Z0A==
Decrypt Payload
This endpoint lets you decrypt a value provided as a byte array using a specified key and crypto component.
HTTP Request
PUT http://localhost:<daprPort>/v1.0-alpha1/crypto/<crypto-store-name>/decrypt
URL Parameters
Parameter | Description |
---|---|
daprPort | The Dapr port |
crypto-store-name | The name of the crypto store to get the decryption key from |
Note all parameters are case-sensitive.
Headers
Additional decryption parameters are configured by setting headers with the appropriate values. The following table details the required and optional headers to set with every decryption request.
Header Key | Description | Required |
---|---|---|
dapr-key-name | The name of the key to use for the decryption operation. | Yes |
HTTP Response
Response Body
The response to a decryption request will have its content type header set to application/octet-stream as it returns an array of bytes representing the decrypted payload.
Response Codes
Code | Description |
---|---|
200 | OK |
400 | Crypto provider not found |
500 | Request formatted correctly, error in dapr code or underlying component |
Examples
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/decrypt \
  -X PUT \
  -H "dapr-key-name: myCryptoKey" \
  -H "Content-Type: application/octet-stream" \
  --data-binary "gAAAAABhZfZ0Ywz4dQX8y9J0Zl5v7w6Z7xq4jV3cW9o2l4pQ0YD1LdR0Zk7zIYi4n2Ll7t6f0Z4X7r8x9o6a8GyL0X1m9Q0Z0A=="
The above command sends a base-64 encoded string of the encrypted message payload and would return a response with the content type header set to application/octet-stream and the response body hello world.
hello world
1.14 - Jobs API reference
Note
The jobs API is currently in alpha. With the jobs API, you can schedule jobs and tasks in the future.
The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly recommended as they implement the gRPC APIs providing higher performance and capability than the HTTP APIs. This is because HTTP does JSON marshalling, which can be expensive, while with gRPC the data is transmitted over the wire and stored as-is, which is more performant.
Schedule a job
Schedule a job with a name. Jobs are scheduled based on the clock of the server where the Scheduler service is running. The timestamp is not converted to UTC. You can provide the timezone with the timestamp in RFC3339 format to specify which timezone you’d like the job to adhere to. If no timezone is provided, the server’s local time is used.
POST http://localhost:<daprPort>/v1.0-alpha1/jobs/<name>
URL parameters
Note
At least one ofschedule
or dueTime
must be provided, but they can also be provided together.
Parameter | Description |
---|---|
name | Name of the job you’re scheduling |
data | A JSON serialized value or object. |
schedule | An optional schedule at which the job is to be run. Details of the format are below. |
dueTime | An optional time at which the job should be active, or the “one shot” time, if other scheduling type fields are not provided. Accepts a “point in time” string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601. |
repeats | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration. |
ttl | An optional time to live or expiration of the job. Accepts a “point in time” string in the format of RFC3339, Go duration string (calculated from job creation time), or non-repeating ISO8601. |
overwrite | A boolean value to specify if the job can overwrite an existing one with the same name. Default value is false |
failure_policy | An optional failure policy for the job. Details of the format are below. If not set, the job is retried up to 3 times with a delay of 1 second between retries. |
schedule
schedule accepts both systemd timer-style cron expressions, as well as human readable ‘@’ prefixed period strings, as defined below.
Systemd timer style cron accepts 6 fields:
seconds | minutes | hours | day of month | month | day of week |
---|---|---|---|---|---|
0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat |
Example 1
“0 30 * * * *” - every hour on the half hour
Example 2
“0 15 3 * * *” - every day at 03:15
Period string expressions:
Entry | Description | Equivalent To |
---|---|---|
@every <duration> | Run every <duration> (e.g. @every 1h30m) | N/A |
@yearly (or @annually) | Run once a year, midnight, Jan. 1st | 0 0 0 1 1 * |
@monthly | Run once a month, midnight, first of month | 0 0 0 1 * * |
@weekly | Run once a week, midnight on Sunday | 0 0 0 * * 0 |
@daily (or @midnight) | Run once a day, midnight | 0 0 0 * * * |
@hourly | Run once an hour, beginning of hour | 0 0 * * * * |
failure_policy
failure_policy specifies how the job should handle failures. It can be set to constant or drop.
- The constant policy retries the job constantly with the following configuration options:
  - max_retries configures how many times the job should be retried. Defaults to retrying indefinitely. nil denotes unlimited retries, while 0 means the request will not be retried.
  - interval configures the delay between retries. Defaults to retrying immediately. Valid values are of the form 200ms, 15s, 2m, etc.
- The drop policy drops the job after the first failure, without retrying.
Example 1
{
//...
"failure_policy": {
"constant": {
"max_retries": 3,
"interval": "10s"
}
}
}
Example 2
{
//...
"failure_policy": {
"drop": {}
}
}
Request body
{
"data": "some data",
"dueTime": "30s"
}
HTTP response codes
Code | Description |
---|---|
204 | Accepted |
400 | Request was malformed |
500 | Request formatted correctly, error in dapr code or Scheduler control plane service |
Response content
The following example curl command creates a job, naming the job jobforjabba
and specifying the schedule
, repeats
and the data
.
$ curl -X POST \
http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \
-H "Content-Type: application/json" \
-d '{
"data": "{\"value\":\"Running spice\"}",
"schedule": "@every 1m",
"repeats": 5
}'
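As a further illustrative sketch (the job name, timestamp, and payload here are hypothetical), a one-shot job can be scheduled by providing a dueTime in RFC3339 format together with a ttl:
$ curl -X POST \
  http://localhost:3500/v1.0-alpha1/jobs/one-shot-job \
  -H "Content-Type: application/json" \
  -d '{
        "data": "{\"value\":\"one-time task\"}",
        "dueTime": "2030-01-01T15:04:05Z",
        "ttl": "1h"
      }'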
Get job data
Get a job from its name.
GET http://localhost:<daprPort>/v1.0-alpha1/jobs/<name>
URL parameters
Parameter | Description |
---|---|
name | Name of the scheduled job you’re retrieving |
HTTP response codes
Code | Description |
---|---|
200 | Accepted |
400 | Request was malformed |
500 | Request formatted correctly, Job doesn’t exist or error in dapr code or Scheduler control plane service |
Response content
After running the following example curl command, the returned response is JSON containing the name
of the job, the dueTime
, and the data
.
$ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Type: application/json"
{
"name": "jobforjabba",
"schedule": "@every 1m",
"repeats": 5,
"data": 123
}
Delete a job
Delete a named job.
DELETE http://localhost:<daprPort>/v1.0-alpha1/jobs/<name>
URL parameters
Parameter | Description |
---|---|
name | Name of the job you’re deleting |
HTTP response codes
Code | Description |
---|---|
204 | Accepted |
400 | Request was malformed |
500 | Request formatted correctly, error in dapr code or Scheduler control plane service |
Response content
The following example curl command deletes the job named jobforjabba.
$ curl -X DELETE http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Type: application/json"
Next steps
1.15 - Conversation API reference
Alpha
The conversation API is currently in alpha. Dapr provides an API to interact with Large Language Models (LLMs) and enables critical performance and security functionality with features like prompt caching and PII data obfuscation.
Converse
This endpoint lets you converse with LLMs.
POST http://localhost:<daprPort>/v1.0-alpha1/conversation/<llm-name>/converse
URL parameters
Parameter | Description |
---|---|
llm-name | The name of the LLM component. See a list of all available conversation components. |
Request body
Field | Description |
---|---|
inputs | Inputs for the conversation. Multiple inputs at one time are supported. Required |
cacheTTL | A time-to-live value for a prompt cache to expire. Uses Golang duration format. Optional |
scrubPII | A boolean value to enable obfuscation of sensitive information returning from the LLM. Set this value if all PII (across contents) in the request needs to be scrubbed. Optional |
temperature | A float value to control the temperature of the model. Used to optimize for consistency and creativity. Optional |
metadata | Metadata passed to conversation components. Optional |
Input body
Field | Description |
---|---|
content | The message content to send to the LLM. Required |
role | The role for the LLM to assume. Possible values: ‘user’, ’tool’, ‘assistant’ |
scrubPII | A boolean value to enable obfuscation of sensitive information present in the content field. Set this value if PII for this specific content needs to be scrubbed exclusively. Optional |
Request content example
REQUEST = {
"inputs": [
{
"content": "What is Dapr?",
"role": "user", // Optional
"scrubPII": "true", // Optional. Will obfuscate any sensitive information found in the content field
},
],
"cacheTTL": "10m", // Optional
"scrubPII": "true", // Optional. Will obfuscate any sensitive information returning from the LLM
"temperature": 0.5 // Optional. Optimizes for consistency (0) or creativity (1)
}
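For illustration only, a request like the one above can be sent with curl; this sketch assumes a conversation component named echo has been configured:
curl -X POST http://localhost:3500/v1.0-alpha1/conversation/echo/converse \
  -H "Content-Type: application/json" \
  -d '{
        "inputs": [{"content": "What is Dapr?", "role": "user"}],
        "temperature": 0.5
      }'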
HTTP response codes
Code | Description |
---|---|
202 | Accepted |
400 | Request was malformed |
500 | Request formatted correctly, error in Dapr code or underlying component |
Response content
RESPONSE = {
    "outputs": [
        {
            "result": "Dapr is a distributed application runtime ...",
            "parameters": {}
        },
        {
            "result": "Dapr can help developers ...",
            "parameters": {}
        }
    ]
}
Next steps
2 - Dapr CLI reference
2.1 - Dapr command line interface (CLI) reference
The Dapr CLI allows you to setup Dapr on your local dev machine or on a Kubernetes cluster, provides debugging support, and launches and manages Dapr instances.
__
____/ /___ _____ _____
/ __ / __ '/ __ \/ ___/
/ /_/ / /_/ / /_/ / /
\__,_/\__,_/ .___/_/
/_/
===============================
Distributed Application Runtime
Usage:
dapr [command]
Available Commands:
annotate Add dapr annotations to a Kubernetes configuration. Supported platforms: Kubernetes
build-info Print build info of Dapr CLI and runtime
completion Generates shell completion scripts
components List all Dapr components. Supported platforms: Kubernetes
configurations List all Dapr configurations. Supported platforms: Kubernetes
dashboard Start Dapr dashboard. Supported platforms: Kubernetes and self-hosted
help Help about any command
init Install Dapr on supported hosting platforms. Supported platforms: Kubernetes and self-hosted
invoke Invoke a method on a given Dapr application. Supported platforms: Self-hosted
list List all Dapr instances. Supported platforms: Kubernetes and self-hosted
logs Get Dapr sidecar logs for an application. Supported platforms: Kubernetes
mtls Check if mTLS is enabled. Supported platforms: Kubernetes
publish Publish a pub-sub event. Supported platforms: Self-hosted
run Run Dapr and (optionally) your application side by side. Supported platforms: Self-hosted
status Show the health status of Dapr services. Supported platforms: Kubernetes
stop Stop Dapr instances and their associated apps. Supported platforms: Self-hosted
uninstall Uninstall Dapr runtime. Supported platforms: Kubernetes and self-hosted
upgrade Upgrades a Dapr control plane installation in a cluster. Supported platforms: Kubernetes
version Print the Dapr runtime and CLI version
Flags:
-h, --help help for dapr
--log-as-json Log output in JSON format
-v, --version version for dapr
Use "dapr [command] --help" for more information about a command.
Command Reference
You can learn more about each Dapr command from the links below.
dapr annotate
dapr build-info
dapr completion
dapr components
dapr configurations
dapr dashboard
dapr help
dapr init
dapr invoke
dapr list
dapr logs
dapr mtls
dapr publish
dapr run
dapr status
dapr stop
dapr uninstall
dapr upgrade
dapr version
Environment Variables
Some Dapr flags can be set via environment variables (e.g. DAPR_NETWORK
for the --network
flag of the dapr init
command). Note that specifying the flag on the command line overrides any set environment variable.
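For example, as a quick illustrative sketch (assuming a Docker network named mynet already exists):
# equivalent to: dapr init --network mynet
DAPR_NETWORK=mynet dapr init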
2.2 - annotate CLI command reference
Description
Add Dapr annotations to a Kubernetes configuration. This enables you to add/change the Dapr annotations on existing deployment files. See Kubernetes annotations for a full description of each annotation available in the following list of flags.
Supported platforms
Usage
dapr annotate [flags] CONFIG-FILE
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--kubernetes, -k | | | Apply annotations to Kubernetes resources. Required |
--api-token-secret | | | The secret to use for the API token |
--app-id, -a | | | The app id to annotate |
--app-max-concurrency | | -1 | The maximum number of concurrent requests to allow |
--app-port, -p | | -1 | The port to expose the app on |
--app-protocol | | | The protocol to use for the app: http (default), grpc, https, grpcs, h2c |
--app-token-secret | | | The secret to use for the app token |
--config, -c | | | The config file to annotate |
--cpu-limit | | | The CPU limit to set for the sidecar. See valid values here. |
--cpu-request | | | The CPU request to set for the sidecar. See valid values here. |
--dapr-image | | | The image to use for the dapr sidecar container |
--enable-debug | | false | Enable debug |
--enable-metrics | | false | Enable metrics |
--enable-profile | | false | Enable profiling |
--env | | | Environment variables to set (key value pairs, comma separated) |
--graceful-shutdown-seconds | | -1 | The number of seconds to wait for the app to shutdown |
--help, -h | | | help for annotate |
--listen-addresses | | | The addresses for sidecar to listen on. To listen to all IPv4 addresses, use 0.0.0.0. To listen to all IPv6 addresses, use [::]. |
--liveness-probe-delay | | -1 | The delay for sidecar to use for the liveness probe. Read more here. |
--liveness-probe-period | | -1 | The period used by the sidecar for the liveness probe. Read more here. |
--liveness-probe-threshold | | -1 | The threshold used by the sidecar for the liveness probe. Read more here. |
--liveness-probe-timeout | | -1 | The timeout used by the sidecar for the liveness probe. Read more here. |
--log-level | | | The log level to use |
--max-request-body-size | | -1 | The maximum request body size to use |
--http-read-buffer-size | | -1 | The maximum size of HTTP header read buffer in kilobytes |
--memory-limit | | | The memory limit to set for the sidecar. See valid values here |
--memory-request | | | The memory request to set for the sidecar |
--metrics-port | | -1 | The port to expose the metrics on |
--namespace, -n | | | The namespace the resource target is in (can only be set if --resource is also set) |
--readiness-probe-delay | | -1 | The delay to use for the readiness probe in the sidecar. Read more here. |
--readiness-probe-period | | -1 | The period to use for the readiness probe in the sidecar. Read more here. |
--readiness-probe-threshold | | -1 | The threshold to use for the readiness probe in the sidecar. Read more here. |
--readiness-probe-timeout | | -1 | The timeout to use for the readiness probe in the sidecar. Read more here. |
--resource, -r | | | The Kubernetes resource target to annotate |
--enable-api-logging | | | Enable API logging for the Dapr sidecar |
--unix-domain-socket-path | | | Linux domain socket path to use for communicating with the Dapr sidecar |
--volume-mounts | | | List of pod volumes to be mounted to the sidecar container in read-only mode |
--volume-mounts-rw | | | List of pod volumes to be mounted to the sidecar container in read-write mode |
--disable-builtin-k8s-secret-store | | | Disable the built-in Kubernetes secret store |
--placement-host-address | | | Comma separated list of addresses for Dapr actor placement servers |
Warning
If an application ID is not provided using--app-id, -a
, an ID is generated using the format <namespace>-<kind>-<name>
.
Examples
# Annotate the first deployment found in the input
kubectl get deploy -l app=node -o yaml | dapr annotate -k - | kubectl apply -f -
# Annotate multiple deployments by name in a chain
kubectl get deploy -o yaml | dapr annotate -k -r nodeapp - | dapr annotate -k -r pythonapp - | kubectl apply -f -
# Annotate deployment in a specific namespace from file or directory by name
dapr annotate -k -r nodeapp -n namespace mydeploy.yaml | kubectl apply -f -
# Annotate deployment from url by name
dapr annotate -k -r nodeapp --log-level debug https://raw.githubusercontent.com/dapr/quickstarts/master/tutorials/hello-kubernetes/deploy/node.yaml | kubectl apply -f -
2.3 - build-info CLI command reference
Description
Get the version and git commit data for dapr
and daprd
executables.
Supported platforms
Usage
dapr build-info
Related facts
You can get daprd
build information directly by invoking daprd --build-info
command.
2.4 - completion CLI command reference
Description
Generates shell completion scripts
Usage
dapr completion [flags]
dapr completion [command]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Prints this help message |
Examples
Installing bash completion on macOS using Homebrew
If running Bash 3.2 included with macOS:
brew install bash-completion
Or, if running Bash 4.1+:
brew install bash-completion@2
Add the completion to your completion directory:
dapr completion bash > $(brew --prefix)/etc/bash_completion.d/dapr
source ~/.bash_profile
Installing bash completion on Linux
If bash-completion is not installed on Linux, please install the ‘bash-completion’ package via your distribution’s package manager.
Load the dapr completion code for bash into the current shell:
source <(dapr completion bash)
Write bash completion code to a file and source it from .bash_profile:
dapr completion bash > ~/.dapr/completion.bash.inc
printf "source '$HOME/.dapr/completion.bash.inc'" >> $HOME/.bash_profile
source $HOME/.bash_profile
Installing zsh completion on macOS using homebrew
If zsh-completion is not installed on macOS, please install the ‘zsh-completion’ package:
brew install zsh-completions
Set the dapr completion code for zsh[1] to autoload on startup:
dapr completion zsh > "${fpath[1]}/_dapr"
source ~/.zshrc
Installing zsh completion on Linux
If zsh-completion is not installed on Linux, please install the ‘zsh-completion’ package via your distribution’s package manager.
Load the dapr completion code for zsh into the current shell:
source <(dapr completion zsh)
Set the dapr completion code for zsh[1] to autoload on startup:
dapr completion zsh > "${fpath[1]}/_dapr"
Installing Powershell completion on Windows
Create $PROFILE if it does not exist:
if (!(Test-Path -Path $PROFILE )){ New-Item -Type File -Path $PROFILE -Force }
Add the completion to your profile:
dapr completion powershell >> $PROFILE
Available Commands
bash Generates bash completion scripts
powershell Generates powershell completion scripts
zsh Generates zsh completion scripts
2.5 - components CLI command reference
Description
List all Dapr components.
Supported platforms
Usage
dapr components [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--kubernetes, -k | | false | List all Dapr components in a Kubernetes cluster (required) |
--all-namespaces, -A | | true | If true, list all Dapr components in all namespaces |
--help, -h | | | Print this help message |
--name, -n | | | The components name to be printed (optional) |
--namespace | | | List all components in the specified namespace |
--output, -o | | list | Output format (options: json or yaml or list) |
Examples
# List Dapr components in all namespaces in Kubernetes mode
dapr components -k
# List Dapr components in specific namespace in Kubernetes mode
dapr components -k --namespace default
# Print specific Dapr component in Kubernetes mode
dapr components -k -n mycomponent
# List Dapr components in all namespaces in Kubernetes mode
dapr components -k --all-namespaces
Warning messages
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.6 - configurations CLI command reference
Description
List all Dapr configurations.
Supported platforms
Usage
dapr configurations [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--kubernetes, -k | | false | List all Dapr configurations in a Kubernetes cluster (required) |
--all-namespaces, -A | | true | If true, list all Dapr configurations in all namespaces (optional) |
--namespace | | | List Dapr configurations in a specific namespace |
--name, -n | | | Print a specific Dapr configuration (optional) |
--output, -o | | list | Output format (options: json or yaml or list) |
--help, -h | | | Print this help message |
Examples
# List Dapr configurations in all namespaces in Kubernetes mode
dapr configurations -k
# List Dapr configurations in specific namespace in Kubernetes mode
dapr configurations -k --namespace default
# Print specific Dapr configuration in Kubernetes mode
dapr configurations -k -n appconfig
# List Dapr configurations in all namespaces in Kubernetes mode
dapr configurations -k --all-namespaces
Warning messages
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.7 - dashboard CLI command reference
Description
Start Dapr dashboard.
Supported platforms
Usage
dapr dashboard [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--address, -a | | localhost | Address to listen on. Only accepts IP address or localhost as a value |
--help, -h | | | Prints this help message |
--kubernetes, -k | | false | Opens Dapr dashboard in local browser via local proxy to Kubernetes cluster |
--namespace, -n | | dapr-system | The namespace where Dapr dashboard is running |
--port, -p | | 8080 | The local port on which to serve Dapr dashboard |
--version, -v | | false | Print the version for Dapr dashboard |
Examples
# Start dashboard locally
dapr dashboard
# Start dashboard service locally on a specified port
dapr dashboard -p 9999
# Port forward to dashboard service running in Kubernetes
dapr dashboard -k
# Port forward to dashboard service running in Kubernetes on all addresses on a specified port
dapr dashboard -k -p 9999 --address 0.0.0.0
# Port forward to dashboard service running in Kubernetes on a specified port
dapr dashboard -k -p 9999
Warning messages - Kubernetes Mode
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.8 - help CLI command reference
Description
Help provides help for any command in the application.
Usage
dapr help [command] [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Prints this help message |
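For example, to show help for a specific command (here, the run command):
dapr help run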
2.9 - init CLI command reference
Description
Install Dapr on supported hosting platforms.
Supported platforms
Usage
dapr init [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--dashboard-version |
latest |
The version of the Dapr dashboard to install, for example: 1.0.0 |
|
--enable-ha |
false |
Enable high availability (HA) mode | |
--enable-mtls |
true |
Enable mTLS in your cluster | |
--from-dir |
Path to a local directory containing a downloaded “Dapr Installer Bundle” release, used to init Dapr in an airgap environment |
||
--help , -h |
Print this help message | ||
--image-registry |
Pulls container images required by Dapr from the given image registry | ||
--kubernetes , -k |
false |
Deploy Dapr to a Kubernetes cluster | |
--namespace , -n |
dapr-system |
The Kubernetes namespace to install Dapr in | |
--network |
The Docker network on which to install and deploy the Dapr runtime | ||
--runtime-version |
latest |
The version of the Dapr runtime to install, for example: 1.0.0 |
|
--image-variant |
The image variant to use for the Dapr runtime, for example: mariner |
||
--set |
Configure options on the command line to be passed to the Dapr Helm chart and the Kubernetes cluster upon install. Can specify multiple values in a comma-separated list, for example: key1=val1,key2=val2 |
||
--slim , -s |
false |
Exclude placement service, scheduler service, and the Redis and Zipkin containers from self-hosted installation | |
--timeout |
300 |
The wait timeout for the Kubernetes installation | |
--wait |
false |
Wait for Kubernetes initialization to complete | |
N/A | DAPR_DEFAULT_IMAGE_REGISTRY | Specifies the default container registry to pull images from. When its value is set to GHCR or ghcr, the required images are pulled from the GitHub container registry. To default to Docker Hub, unset the environment variable or leave it blank |
|
N/A | DAPR_HELM_REPO_URL | Specifies a private Dapr Helm chart url | |
N/A | DAPR_HELM_REPO_USERNAME | A username for a private Helm chart | The username required to access the private Dapr Helm chart. If it can be accessed publicly, this env variable does not need to be set |
N/A | DAPR_HELM_REPO_PASSWORD | A password for a private Helm chart | The password required to access the private Dapr Helm chart. If it can be accessed publicly, this env variable does not need to be set |
--container-runtime |
docker |
Used to pass in a different container runtime other than Docker. Supported container runtimes are: docker , podman |
|
--dev |
Creates Redis and Zipkin deployments when run in Kubernetes. | ||
--scheduler-volume |
Self-hosted only. Optionally, you can specify a volume for the scheduler service data directory. By default, without this flag, scheduler data is not persisted and not resilient to restarts. |
Examples
Install
Install Dapr by pulling container images for Placement, Scheduler, Redis, and Zipkin. By default, these images are pulled from Docker Hub.
By default, a dapr_scheduler local volume is created for the Scheduler service to be used as the database directory. The host file location for this volume is likely located at /var/lib/docker/volumes/dapr_scheduler/_data or ~/.local/share/containers/storage/volumes/dapr_scheduler/_data, depending on your container runtime.
dapr init
Dapr can also run Slim self-hosted mode, without Docker.
dapr init -s
To switch to the Dapr GitHub container registry as the default registry, set the DAPR_DEFAULT_IMAGE_REGISTRY environment variable value to GHCR. To switch back to Docker Hub as the default registry, unset this environment variable.
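For example, in a bash shell you can set the variable before running init, using the GHCR value documented above:
# Pull the required Dapr images from the GitHub container registry
export DAPR_DEFAULT_IMAGE_REGISTRY=GHCR
dapr init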
Specify a runtime version
You can also specify a specific runtime version. By default, the latest version is used.
dapr init --runtime-version 1.13.4
Install with image variant
You can also install Dapr with a particular image variant, for example: mariner.
dapr init --image-variant mariner
Use Dapr Installer Bundle
In an offline or airgap environment, you can download a Dapr Installer Bundle and use this to install Dapr instead of pulling images from the network.
dapr init --from-dir <path-to-installer-bundle-directory>
Dapr can also run in slim self-hosted mode without Docker in an airgap environment.
dapr init -s --from-dir <path-to-installer-bundle-directory>
Specify private registry
You can also specify a private registry to pull container images from. These images need to be published to private registries as shown below to enable Dapr CLI to pull them successfully via the dapr init
command:
- Dapr runtime container image (dapr) (used to run Placement) - dapr/dapr:
- Redis container image (rejson) - dapr/3rdparty/rejson
- Zipkin container image (zipkin) - dapr/3rdparty/zipkin
All the required images used by Dapr need to be under the dapr path. The third-party images have to be published under the dapr/3rdparty path.
The image-registry URI follows the docker.io/<username> format.
dapr init --image-registry docker.io/username
This command resolves the complete image URIs as shown below:
- Placement container image (dapr) - docker.io/username/dapr/dapr:
- Redis container image (rejson) - docker.io/username/dapr/3rdparty/rejson
- Zipkin container image (zipkin) - docker.io/username/dapr/3rdparty/zipkin
You can specify a different container runtime while setting up Dapr. If you omit the --container-runtime
flag, the default container runtime is Docker.
dapr init --container-runtime podman
Use Docker network
You can deploy local containers into Docker networks, which is useful for deploying into separate networks or when using Docker Compose for local development to deploy applications.
Create the Docker network.
docker network create mynet
Initialize Dapr and specify the created Docker network.
dapr init --network mynet
Verify all containers are running in the specified network.
docker ps
Uninstall Dapr from that Docker network.
dapr uninstall --all --network mynet
Install on Kubernetes
dapr init -k
Using the --dev
flag initializes Dapr in dev mode, which includes Zipkin and Redis.
dapr init -k --dev
You can wait for the installation to complete its deployment with the --wait
flag.
The default timeout is 300s (5 min), but can be customized with the --timeout
flag.
dapr init -k --wait --timeout 600
You can also specify a specific runtime version.
dapr init -k --runtime-version 1.4.0
Use the --set
flag to configure a set of Helm Chart values during Dapr installation to help set up a Kubernetes cluster.
dapr init -k --set global.tag=1.0.0 --set dapr_operator.logLevel=error
You can also specify a private registry to pull container images from. As of now, dapr init -k does not use specific images for sentry, operator, placement, scheduler, and sidecar. It relies only on the Dapr runtime container image dapr for all of these.
Scenario 1 : dapr image hosted directly under root folder in private registry -
dapr init -k --image-registry docker.io/username
Scenario 2 : dapr image hosted under a new/different directory in private registry -
dapr init -k --image-registry docker.io/username/<directory-name>
2.10 - invoke CLI command reference
Description
Invoke a method on a given Dapr application.
Supported platforms
Usage
dapr invoke [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--app-id, -a | APP_ID | | The application id to invoke |
--help, -h | | | Print this help message |
--method, -m | | | The method to invoke |
--data, -d | | | The JSON serialized data string (optional) |
--data-file, -f | | | A file containing the JSON serialized data (optional) |
--verb, -v | | POST | The HTTP verb to use |
Examples
# Invoke a sample method on target app with POST Verb
dapr invoke --app-id target --method sample --data '{"key":"value"}'
# Invoke a sample method on target app with GET Verb
dapr invoke --app-id target --method sample --verb GET
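You can also read the request body from a file with the --data-file flag; the file name below is only an example:
# Invoke a sample method on target app, reading the payload from a file
dapr invoke --app-id target --method sample --data-file ./payload.json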
2.11 - list CLI command reference
Description
List all Dapr instances.
Supported platforms
Usage
dapr list [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--all-namespaces, -A | | false | List all Dapr pods in all namespaces (optional) |
--help, -h | | | Print this help message |
--kubernetes, -k | | false | List all Dapr pods in a Kubernetes cluster (optional) |
--namespace, -n | | default | List the Dapr pods in the defined namespace in Kubernetes. Only with -k flag (optional) |
--output, -o | | table | The output format of the list. Valid values are: json, yaml, or table |
Examples
# List Dapr instances in self-hosted mode
dapr list
# List Dapr instances in all namespaces in Kubernetes mode
dapr list -k
# List Dapr instances in JSON format
dapr list -o json
# List Dapr instances in a specific namespace in Kubernetes mode
dapr list -k --namespace default
# List Dapr instances in all namespaces in Kubernetes mode
dapr list -k --all-namespaces
Warning messages - Kubernetes Mode
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.12 - logs CLI command reference
Description
Get Dapr sidecar logs for an application.
Supported platforms
Usage
dapr logs [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--app-id, -a | APP_ID | | The application id for which logs are needed |
--help, -h | | | Print this help message |
--kubernetes, -k | | true | Get logs from a Kubernetes cluster |
--namespace, -n | | default | The Kubernetes namespace in which your application is deployed |
--pod-name, -p | | | The name of the pod in Kubernetes, in case your application has multiple pods (optional) |
Examples
# Get logs of sample app from target pod in custom namespace
dapr logs -k --app-id sample --pod-name target --namespace custom
Warning messages
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.13 - mtls CLI command reference
Description
Check if mTLS is enabled.
Supported platforms
Usage
dapr mtls [flags]
dapr mtls [command]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Print this help message |
--kubernetes, -k | | false | Check if mTLS is enabled in a Kubernetes cluster |
Available Commands
expiry Checks the expiry of the root Certificate Authority (CA) certificate
export Export the root Certificate Authority (CA), issuer cert and issuer key to local files
renew-certificate Rotates the existing root Certificate Authority (CA), issuer cert and issuer key
Command Reference
You can learn more about each sub command from the links below.
Examples
# Check if mTLS is enabled on the Kubernetes cluster
dapr mtls -k
Warning messages
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.13.1 - mtls export CLI command reference
Description
Export the root Certificate Authority (CA), issuer cert and issuer key to local files
Supported platforms
Usage
dapr mtls export [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Help for export |
--out, -o | | current directory | The output directory path to save the certs |
Examples
# Export the root and issuer certificates from Kubernetes to the ./certs directory
dapr mtls export -o ./certs
Warning messages
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.13.2 - mtls expiry CLI command reference
Description
Checks the expiry of the root Certificate Authority (CA) certificate
Supported platforms
Usage
dapr mtls expiry [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Help for expiry |
Examples
# Check expiry of Kubernetes certs
dapr mtls expiry
2.13.3 - mtls renew certificate CLI command reference
Description
This command can be used to renew expiring Dapr certificates. For example, the Dapr Sentry service can generate the default root and issuer certificates used by applications. For more information, see secure Dapr-to-Dapr communication.
Supported platforms
Usage
dapr mtls renew-certificate [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Help for renew-certificate |
--kubernetes, -k | | false | Renew certificates in a Kubernetes cluster |
--valid-until | | 365 days | Validity for newly created certificates |
--restart | | false | Restarts Dapr control plane services (Sentry service, Operator service and Placement server) |
--timeout | | 300 sec | The timeout for the certificate renewal process |
--ca-root-certificate | | | File path to user-provided PEM root certificate |
--issuer-public-certificate | | | File path to user-provided PEM issuer certificate |
--issuer-private-key | | | File path to user-provided PEM issuer private key |
--private-key | | | User-provided root.key file which is used to generate the root certificate |
Examples
Renew certificates by generating brand new certificates
Generates new root and issuer certificates for the Kubernetes cluster with a default validity of 365 days. The certificates are not applied to the Dapr control plane.
dapr mtls renew-certificate -k
Generates new root and issuer certificates for the Kubernetes cluster with a default validity of 365 days and restarts the Dapr control plane services.
dapr mtls renew-certificate -k --restart
Generates new root and issuer certificates for the Kubernetes cluster with a given validity time.
dapr mtls renew-certificate -k --valid-until <no of days>
Generates new root and issuer certificates for the Kubernetes cluster with a given validity time and restarts the Dapr control plane services.
dapr mtls renew-certificate -k --valid-until <no of days> --restart
Renew certificate by using user provided certificates
Rotates certificates for the Kubernetes cluster with the provided ca.pem, issuer.pem and issuer.key file paths and restarts the Dapr control plane services
dapr mtls renew-certificate -k --ca-root-certificate <ca.pem> --issuer-private-key <issuer.key> --issuer-public-certificate <issuer.pem> --restart
Rotates certificates for the Kubernetes cluster with the provided ca.pem, issuer.pem and issuer.key file paths.
dapr mtls renew-certificate -k --ca-root-certificate <ca.pem> --issuer-private-key <issuer.key> --issuer-public-certificate <issuer.pem>
Renew certificates by generating brand new certificates using the provided root private key
Uses existing private root.key to generate new root and issuer certificates for the Kubernetes cluster with a given validity time for created certs.
dapr mtls renew-certificate -k --private-key myprivatekey.key --valid-until <no of days>
Uses the existing private root.key to generate new root and issuer certificates for the Kubernetes cluster.
dapr mtls renew-certificate -k --private-key myprivatekey.key
2.14 - publish CLI command reference
Description
Publish a pub-sub event.
Supported platforms
Usage
dapr publish [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--publish-app-id, -i | | | The ID that represents the app from which you are publishing |
--pubsub, -p | | | The name of the pub/sub component |
--topic, -t | | | The topic to be published to |
--data, -d | | | The JSON serialized string (optional) |
--data-file, -f | | | A file containing the JSON serialized data (optional) |
--help, -h | | | Print this help message |
--metadata, -m | | | A JSON serialized publish metadata (optional) |
--unix-domain-socket, -u | | | The path to the unix domain socket (optional) |
Examples
# Publish to sample topic in target pubsub via a publishing app
dapr publish --publish-app-id appId --topic sample --pubsub target --data '{"key":"value"}'
# Publish to sample topic in target pubsub via a publishing app using Unix domain socket
dapr publish --unix-domain-socket --publish-app-id myapp --pubsub target --topic sample --data '{"key":"value"}'
# Publish to sample topic in target pubsub via a publishing app without cloud event
dapr publish --publish-app-id myapp --pubsub target --topic sample --data '{"key":"value"}' --metadata '{"rawPayload":"true"}'
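The event payload can also be read from a file via the --data-file flag; the file name below is only an example:
# Publish to sample topic in target pubsub, reading the event payload from a file
dapr publish --publish-app-id myapp --pubsub target --topic sample --data-file ./event.json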
2.15 - run CLI command reference
Description
Run Dapr and (optionally) your application side by side. A full list comparing daprd arguments, CLI arguments, and Kubernetes annotations can be found here.
Supported platforms
Usage
dapr run [flags] [command]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--app-id , -a |
APP_ID |
The id for your application, used for service discovery. Cannot contain dots. | |
--app-max-concurrency |
unlimited |
The concurrency level of the application; default is unlimited | |
--app-port , -p |
APP_PORT |
The port your application is listening on | |
--app-protocol , -P |
http |
The protocol Dapr uses to talk to the application. Valid values are: http , grpc , https (HTTP with TLS), grpcs (gRPC with TLS), h2c (HTTP/2 Cleartext) |
|
--resources-path , -d |
Linux/Mac: $HOME/.dapr/components Windows: %USERPROFILE%\.dapr\components |
The path for resources directory. If you’ve organized your resources into multiple folders (for example, components in one folder, resiliency policies in another), you can define multiple resource paths. See example below. | |
--app-channel-address |
127.0.0.1 |
The network address the application listens on | |
--runtime-path |
Dapr runtime install path | ||
--config , -c |
Linux/Mac: $HOME/.dapr/config.yaml Windows: %USERPROFILE%\.dapr\config.yaml |
Dapr configuration file | |
--dapr-grpc-port , -G |
DAPR_GRPC_PORT |
50001 |
The gRPC port for Dapr to listen on |
--dapr-internal-grpc-port , -I |
50002 |
The gRPC port for the Dapr internal API to listen on. Set during development for apps experiencing temporary errors with service invocation failures due to mDNS caching, or configuring Dapr sidecars behind firewall. Can be any value greater than 1024 and must be different for each app. | |
--dapr-http-port , -H |
DAPR_HTTP_PORT |
3500 |
The HTTP port for Dapr to listen on |
--enable-profiling |
false |
Enable “pprof” profiling via an HTTP endpoint | |
--help , -h |
Print the help message | ||
--run-file , -f |
Linux/MacOS: $HOME/.dapr/dapr.yaml |
Run multiple applications at once using a Multi-App Run template file. Currently in alpha and only available in Linux/MacOS | |
--image |
Use a custom Docker image. Format is repository/image for Docker Hub, or example.com/repository/image for a custom registry. |
||
--log-level |
info |
The log verbosity. Valid values are: debug , info , warn , error , fatal , or panic |
|
--enable-api-logging |
false |
Enable the logging of all API calls from application to Dapr | |
--metrics-port |
DAPR_METRICS_PORT |
9090 |
The port that Dapr sends its metrics information to |
--profile-port |
7777 |
The port for the profile server to listen on | |
--placement-host-address |
Linux/Mac: $HOME/.dapr/components Windows: %USERPROFILE%\.dapr\components |
Run in any containers within your Docker network. Uses <hostname> or <hostname>:<port> . If the port is omitted, it will default to:
|
|
--scheduler-host-address |
Linux/Mac: $HOME/.dapr/components Windows: %USERPROFILE%\.dapr\components |
Run in any containers within your Docker network. Uses <hostname> or <hostname>:<port> . If the port is omitted, it will default to:
|
|
--enable-app-health-check |
false |
Enable health checks for the application using the protocol defined with app-protocol | |
--app-health-check-path |
Path used for health checks; HTTP only | ||
--app-health-probe-interval |
Interval to probe for the health of the app in seconds | ||
--app-health-probe-timeout |
Timeout for app health probes in milliseconds | ||
--app-health-threshold |
Number of consecutive failures for the app to be considered unhealthy | ||
--unix-domain-socket , -u |
Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. | ||
--dapr-http-max-request-size |
4 |
Max size of the request body in MB. | |
--dapr-http-read-buffer-size |
4 |
Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. The default is 4 KB |
--kubernetes , -k |
Running Dapr on Kubernetes, and used for Multi-App Run template files on Kubernetes. | ||
--components-path , -d |
Linux/Mac: $HOME/.dapr/components Windows: %USERPROFILE%\.dapr\components |
Deprecated in favor of --resources-path |
Examples
# Run a .NET application
dapr run --app-id myapp --app-port 5000 -- dotnet run
# Run a .Net application with unix domain sockets
dapr run --app-id myapp --app-port 5000 --unix-domain-socket /tmp -- dotnet run
# Run a Java application
dapr run --app-id myapp -- java -jar myapp.jar
# Run a NodeJs application that listens to port 3000
dapr run --app-id myapp --app-port 3000 -- node myapp.js
# Run a Python application
dapr run --app-id myapp -- python myapp.py
# Run sidecar only
dapr run --app-id myapp
# Run a gRPC application written in Go (listening on port 3000)
dapr run --app-id myapp --app-port 5000 --app-protocol grpc -- go run main.go
# Run a NodeJs application that listens to port 3000 with API logging enabled
dapr run --app-id myapp --app-port 3000 --enable-api-logging -- node myapp.js
# Pass multiple resource paths
dapr run --app-id myapp --resources-path path1 --resources-path path2
# Run the multi-app run template file
dapr run -f dapr.yaml
# Run the multi-app run template file on Kubernetes
dapr run -k -f dapr.yaml
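As a further sketch, the app health check flags listed above can be combined on the command line; the app id, port, health path, and command here are example values:
# Run an application with app health checks enabled, probing /healthz over HTTP
dapr run --app-id myapp --app-port 3000 --enable-app-health-check --app-health-check-path /healthz -- node myapp.js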
2.16 - status CLI command reference
Description
Show the health status of Dapr services.
Supported platforms
Usage
dapr status -k
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Print this help message |
--kubernetes, -k | | false | Show the health status of Dapr services on a Kubernetes cluster |
Examples
# Get status of Dapr services from Kubernetes
dapr status -k
Warning messages
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.17 - stop CLI command reference
Description
Stop Dapr instances and their associated apps.
Supported platforms
Usage
dapr stop [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--app-id, -a | APP_ID | | The application id to be stopped |
--help, -h | | | Print this help message |
--run-file, -f | | | Stop running multiple applications at once using a Multi-App Run template file. Currently in alpha and only available in Linux/MacOS |
Examples
# Stop Dapr application
dapr stop --app-id <ID>
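If you started applications with a Multi-App Run template file, you can stop them the same way; dapr.yaml below refers to the same template file used with dapr run:
# Stop multiple applications started from a Multi-App Run template file
dapr stop -f dapr.yaml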
2.18 - uninstall CLI command reference
Description
Uninstall Dapr runtime.
Supported platforms
Usage
dapr uninstall [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--all | | false | Remove Redis, Zipkin containers in addition to the Scheduler service and the actor Placement service containers. Remove default Dapr dir located at $HOME/.dapr or %USERPROFILE%\.dapr\ |
--help, -h | | | Print this help message |
--kubernetes, -k | | false | Uninstall Dapr from a Kubernetes cluster |
--namespace, -n | | dapr-system | The Kubernetes namespace from which Dapr is uninstalled |
--container-runtime | | docker | Used to pass in a different container runtime other than Docker. Supported container runtimes are: docker, podman |
Examples
Uninstall from self-hosted mode
dapr uninstall
You can also use the --all option to remove the .dapr directory and the Redis, Placement, Scheduler, and Zipkin containers:
dapr uninstall --all
You can specify a different container runtime when uninstalling Dapr. If you omit the --container-runtime flag, the default container runtime is Docker.
dapr uninstall --all --container-runtime podman
Uninstall from Kubernetes
dapr uninstall -k
2.19 - upgrade CLI command reference
Description
Upgrade or downgrade Dapr on supported hosting platforms.
Warning
Version steps should be done incrementally, including minor versions, as you upgrade or downgrade.
Prior to downgrading, confirm components are backwards compatible and that application code does not utilize APIs that are not supported in previous versions of Dapr.
Supported platforms
Usage
dapr upgrade [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Print this help message |
--kubernetes, -k | | false | Upgrade/Downgrade Dapr in a Kubernetes cluster |
--runtime-version | | latest | The version of the Dapr runtime to upgrade/downgrade to, for example: 1.0.0 |
--set | | | Set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2) |
--image-registry | | | Pulls container images required by Dapr from the given image registry |
Examples
# Upgrade Dapr in Kubernetes to latest version
dapr upgrade -k
# Upgrade or downgrade to a specified version of Dapr runtime in Kubernetes
dapr upgrade -k --runtime-version 1.2
# Upgrade or downgrade to a specified version of Dapr runtime in Kubernetes with value set
dapr upgrade -k --runtime-version 1.2 --set global.logAsJson=true
# Upgrade or downgrade using a private registry, if you are using a private registry for hosting dapr images and have used it while doing `dapr init -k`
# Scenario 1 : dapr image hosted directly under root folder in private registry -
dapr upgrade -k --image-registry docker.io/username
# Scenario 2 : dapr image hosted under a new/different directory in private registry -
dapr upgrade -k --image-registry docker.io/username/<directory-name>
Warning messages
This command can issue warning messages.
Root certificate renewal warning
If the mtls root certificate deployed to the Kubernetes cluster expires in under 30 days the following warning message is displayed:
Dapr root certificate of your Kubernetes cluster expires in <n> days. Expiry date: <date:time> UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
2.20 - version CLI command reference
Description
Print the version for the dapr CLI and daprd executables, either in normal or JSON format.
Supported platforms
Usage
dapr version [flags]
Flags
Name | Environment Variable | Default | Description |
---|---|---|---|
--help, -h | | | Print this help message |
--output, -o | | | Output format (options: json) |
Examples
# Version for Dapr CLI and runtime
dapr version --output json
Related facts
You can get the daprd version directly by invoking the daprd --version command.
You can also get the normal version output by running the dapr --version command.
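For example, assuming both binaries are on your PATH:
# Print only the runtime version from the daprd binary
daprd --version
# Print the normal version output from the dapr CLI
dapr --version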
3 - Dapr arguments and annotations for daprd, CLI, and Kubernetes
This table is meant to help users understand the equivalent options for running Dapr sidecars in different contexts: via the CLI directly, via daprd, or on Kubernetes via annotations.
daprd | Dapr CLI | CLI shorthand | Kubernetes annotations | Description |
---|---|---|---|---|
--allowed-origins |
not supported | not supported | Allowed HTTP origins (default “*”) | |
--app-id |
--app-id |
-i |
dapr.io/app-id |
The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
--app-port |
--app-port |
-p |
dapr.io/app-port |
This parameter tells Dapr which port your application is listening on |
--components-path |
--components-path |
-d |
not supported | Deprecated in favor of --resources-path |
--resources-path |
--resources-path |
-d |
not supported | Path for components directory. If empty, components will not be loaded |
--config |
--config |
-c |
dapr.io/config |
Tells Dapr which Configuration resource to use |
--control-plane-address |
not supported | not supported | Address for a Dapr control plane | |
--dapr-grpc-port |
--dapr-grpc-port |
dapr.io/grpc-port |
Sets the Dapr API gRPC port (default 50001 ); all cluster services must use the same port for communication |
|
--dapr-http-port |
--dapr-http-port |
not supported | HTTP port for the Dapr API to listen on (default 3500 ) |
|
--dapr-http-max-request-size |
--dapr-http-max-request-size |
dapr.io/http-max-request-size |
Deprecated in favor of --max-body-size . Increasing the request max body size to handle large file uploads when using http and grpc protocols. Default is 4 MB |
|
--max-body-size |
not supported | dapr.io/max-body-size |
Increasing the request max body size to handle large file uploads when using http and grpc protocols. Set the value using size units (e.g., 16Mi for 16MB). The default is 4Mi |
|
--dapr-http-read-buffer-size |
--dapr-http-read-buffer-size |
dapr.io/http-read-buffer-size |
Deprecated in favor of --read-buffer-size . Increasing max size of http header read buffer in KB to support larger header values, for example 16 to support headers up to 16KB . Default is 16 for 16KB |
|
--read-buffer-size |
not supported | dapr.io/read-buffer-size |
Increasing max size of http header read buffer in KB to support larger header values. Set the value using size units, for example 32Ki will support headers up to 32KB . Default is 4Ki for 4KB |
|
not supported | --image |
dapr.io/sidecar-image |
Dapr sidecar image. Default is daprio/daprd:latest. The Dapr sidecar uses this image instead of the latest default image. Use this when building your own custom image of Dapr or when using an alternative stable Dapr image | |
--internal-grpc-port |
not supported | dapr.io/internal-grpc-port |
Sets the internal Dapr gRPC port (default 50002 ); all cluster services must use the same port for communication |
|
--enable-metrics |
not supported | configuration spec | Enable prometheus metric (default true) | |
--enable-mtls |
not supported | configuration spec | Enables automatic mTLS for daprd to daprd communication channels | |
--enable-profiling |
--enable-profiling |
dapr.io/enable-profiling |
Enable profiling | |
--unix-domain-socket |
--unix-domain-socket |
-u |
dapr.io/unix-domain-socket-path |
The parent directory of socket file. On Linux, when communicating with the Dapr sidecar, use unix domain sockets for lower latency and greater throughput compared to TCP ports. Not available on Windows OS. |
--log-as-json |
not supported | dapr.io/log-as-json |
Setting this parameter to true outputs logs in JSON format. Default is false |
|
--log-level |
--log-level |
dapr.io/log-level |
Sets the log level for the Dapr sidecar. Allowed values are debug , info , warn , error . Default is info |
|
--enable-api-logging |
--enable-api-logging |
dapr.io/enable-api-logging |
Enables API logging for the Dapr sidecar | |
--app-max-concurrency |
--app-max-concurrency |
dapr.io/app-max-concurrency |
Limit the concurrency of your application. A valid value is any number larger than 0 . Default value: -1 , meaning no concurrency. |
|
--metrics-port |
--metrics-port |
dapr.io/metrics-port |
Sets the port for the sidecar metrics server. Default is 9090 |
|
--mode |
not supported | not supported | Runtime hosting option mode for Dapr, either "standalone" or "kubernetes" (default "standalone" ). Learn more. |
|
--placement-host-address |
--placement-host-address |
dapr.io/placement-host-address |
Comma separated list of addresses for Dapr Actor Placement servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is a single space ( ' ' ), or “empty”, the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: 127.0.0.1:50057,127.0.0.1:50058 |
|
--scheduler-host-address |
--scheduler-host-address |
dapr.io/scheduler-host-address |
Comma separated list of addresses for Dapr Scheduler servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is a single space ( ' ' ), or “empty”, the sidecar does not connect to Scheduler server. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: 127.0.0.1:50055,127.0.0.1:50056 |
|
--actors-service |
not supported | not supported | Configuration for the service that offers actor placement information. The format is <name>:<address> . For example, setting this value to placement:127.0.0.1:50057,127.0.0.1:50058 is an alternative to using the --placement-host-address flag. |
|
--reminders-service |
not supported | not supported | Configuration for the service that enables actor reminders. The format is <name>[:<address>] . Currently, the only supported value is "default" (which is also the default value), which uses the built-in reminders subsystem in the Dapr sidecar. |
|
--profiling-port |
--profiling-port |
not supported | The port for the profile server (default 7777 ) |
|
--app-protocol |
--app-protocol |
-P |
dapr.io/app-protocol |
Configures the protocol Dapr uses to communicate with your app. Valid options are http , grpc , https (HTTP with TLS), grpcs (gRPC with TLS), h2c (HTTP/2 Cleartext). Note that Dapr does not validate TLS certificates presented by the app. Default is http |
--enable-app-health-check |
--enable-app-health-check |
dapr.io/enable-app-health-check |
Boolean that enables the health checks. Default is false . |
|
--app-health-check-path |
--app-health-check-path |
dapr.io/app-health-check-path |
Path that Dapr invokes for health probes when the app channel is HTTP (this value is ignored if the app channel is using gRPC). Requires app health checks to be enabled. Default is /healthz . |
|
--app-health-probe-interval |
--app-health-probe-interval |
dapr.io/app-health-probe-interval |
Number of seconds between each health probe. Requires app health checks to be enabled. Default is 5 |
|
--app-health-probe-timeout |
--app-health-probe-timeout |
dapr.io/app-health-probe-timeout |
Timeout in milliseconds for health probe requests. Requires app health checks to be enabled. Default is 500 |
|
--app-health-threshold |
--app-health-threshold |
dapr.io/app-health-threshold |
Max number of consecutive failures before the app is considered unhealthy. Requires app health checks to be enabled. Default is 3 |
|
--sentry-address |
--sentry-address |
not supported | Address for the Sentry CA service | |
--version |
--version |
-v |
not supported | Prints the runtime version |
--dapr-graceful-shutdown-seconds |
not supported | dapr.io/graceful-shutdown-seconds |
Graceful shutdown duration in seconds for Dapr, the maximum duration before forced shutdown when waiting for all in-progress requests to complete. Defaults to 5 . If you are running in Kubernetes mode, this value should not be larger than the Kubernetes termination grace period, whose default value is 30 . |
|
--dapr-block-shutdown-duration |
not supported | dapr.io/block-shutdown-duration |
Block shutdown duration, if set, blocks the graceful shutdown procedure (as described above) from starting until the given duration has elapsed or the application becomes unhealthy as configured through application health options. This is useful for applications that need to execute Dapr APIs during their own termination procedure. Any new invocations of any Dapr APIs are not available to the application once the block has expired. Accepts Go duration string. | |
not supported | not supported | dapr.io/enabled |
Setting this parameter to true injects the Dapr sidecar into the pod | |
not supported | not supported | dapr.io/api-token-secret |
Tells Dapr which Kubernetes secret to use for token-based API authentication. By default this is not set | |
not supported | not supported | dapr.io/app-token-secret |
Tells Dapr which Kubernetes secret to use for token-based application authentication. By default, this is not set | |
--dapr-listen-addresses |
not supported | dapr.io/sidecar-listen-addresses |
Comma separated list of IP addresses that sidecar will listen to. Defaults to all in standalone mode. Defaults to [::1],127.0.0.1 in Kubernetes. To listen to all IPv4 addresses, use 0.0.0.0 . To listen to all IPv6 addresses, use [::] . |
|
not supported | not supported | dapr.io/sidecar-cpu-limit |
Maximum amount of CPU that the Dapr sidecar can use. See valid values here. By default this is not set | |
not supported | not supported | dapr.io/sidecar-memory-limit |
Maximum amount of Memory that the Dapr sidecar can use. See valid values here. By default this is not set | |
not supported | not supported | dapr.io/sidecar-cpu-request |
Amount of CPU that the Dapr sidecar requests. See valid values here. By default this is not set | |
not supported | not supported | dapr.io/sidecar-memory-request |
Amount of Memory that the Dapr sidecar requests. See valid values here. By default this is not set | |
not supported | not supported | dapr.io/sidecar-liveness-probe-delay-seconds |
Number of seconds after the sidecar container has started before liveness probe is initiated. Read more here. Default is 3 |
|
not supported | not supported | dapr.io/sidecar-liveness-probe-timeout-seconds |
Number of seconds after which the sidecar liveness probe times out. Read more here. Default is 3 |
|
not supported | not supported | dapr.io/sidecar-liveness-probe-period-seconds |
How often (in seconds) to perform the sidecar liveness probe. Read more here. Default is 6 |
|
not supported | not supported | dapr.io/sidecar-liveness-probe-threshold |
When the sidecar liveness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unhealthy. Read more about failureThreshold here. Default is 3 |
|
not supported | not supported | dapr.io/sidecar-readiness-probe-delay-seconds |
Number of seconds after the sidecar container has started before readiness probe is initiated. Read more here. Default is 3 |
|
not supported | not supported | dapr.io/sidecar-readiness-probe-timeout-seconds |
Number of seconds after which the sidecar readiness probe times out. Read more here. Default is 3 |
|
not supported | not supported | dapr.io/sidecar-readiness-probe-period-seconds |
How often (in seconds) to perform the sidecar readiness probe. Read more here. Default is 6 |
|
not supported | not supported | dapr.io/sidecar-readiness-probe-threshold |
When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about failureThreshold here. Default is 3 |
|
not supported | not supported | dapr.io/env |
List of environment variable to be injected into the sidecar. Strings consisting of key=value pairs separated by a comma. | |
not supported | not supported | dapr.io/env-from-secret |
List of environment variables to be injected into the sidecar from secret. Strings consisting of "key=secret-name:secret-key" pairs are separated by a comma. |
|
not supported | not supported | dapr.io/volume-mounts |
List of pod volumes to be mounted to the sidecar container in read-only mode. Strings consisting of volume:path pairs separated by a comma. Example, "volume-1:/tmp/mount1,volume-2:/home/root/mount2" . |
|
not supported | not supported | dapr.io/volume-mounts-rw |
List of pod volumes to be mounted to the sidecar container in read-write mode. Strings consisting of volume:path pairs separated by a comma. Example, "volume-1:/tmp/mount1,volume-2:/home/root/mount2" . |
|
--disable-builtin-k8s-secret-store |
not supported | dapr.io/disable-builtin-k8s-secret-store |
Disables BuiltIn Kubernetes secret store. Default value is false. See Kubernetes secret store component for details. | |
not supported | not supported | dapr.io/sidecar-seccomp-profile-type |
Set the sidecar container’s securityContext.seccompProfile.type to Unconfined , RuntimeDefault , or Localhost . By default, this annotation is not set on the Dapr sidecar, hence the field is omitted from sidecar container. |
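As a rough illustration of the equivalences in the table above, the same application ID and port can be supplied to daprd directly or through the CLI; the app name, port, and command below are placeholder values:
# Passing options to daprd directly
daprd --app-id myapp --app-port 3000
# The equivalent dapr CLI invocation
dapr run --app-id myapp --app-port 3000 -- node myapp.js
# On Kubernetes, the same options are expressed with the dapr.io/app-id and dapr.io/app-port annotations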
4 - Environment variable reference
The following table lists the environment variables used by the Dapr runtime, CLI, or from within your application:
Environment Variable | Used By | Description |
---|---|---|
APP_ID | Your application | The id for your application, used for service discovery |
APP_PORT | Dapr sidecar | The port your application is listening on |
APP_API_TOKEN | Your application | The token used by the application to authenticate requests from Dapr API. Read authenticate requests from Dapr using token authentication for more information. |
DAPR_HTTP_PORT | Your application | The HTTP port that the Dapr sidecar is listening on. Your application should use this variable to connect to Dapr sidecar instead of hardcoding the port value. Set by the Dapr CLI run command for self-hosted or injected by the dapr-sidecar-injector into all the containers in the pod. |
DAPR_GRPC_PORT | Your application | The gRPC port that the Dapr sidecar is listening on. Your application should use this variable to connect to Dapr sidecar instead of hardcoding the port value. Set by the Dapr CLI run command for self-hosted or injected by the dapr-sidecar-injector into all the containers in the pod. |
DAPR_API_TOKEN | Dapr sidecar | The token used for Dapr API authentication for requests from the application. Enable API token authentication in Dapr. |
NAMESPACE | Dapr sidecar | Used to specify a component’s namespace in self-hosted mode. |
DAPR_DEFAULT_IMAGE_REGISTRY | Dapr CLI | In self-hosted mode, it is used to specify the default container registry to pull images from. When its value is set to GHCR or ghcr , it pulls the required images from the GitHub container registry. To default to Docker Hub, unset this environment variable. |
SSL_CERT_DIR | Dapr sidecar | Specifies the location where the public certificates for all the trusted certificate authorities (CA) are located. Not applicable when the sidecar is running as a process in self-hosted mode. |
DAPR_HELM_REPO_URL | Your private Dapr Helm chart url | Specifies a private Dapr Helm chart url, which defaults to the official Helm chart URL: https://dapr.github.io/helm-charts |
DAPR_HELM_REPO_USERNAME | A username for a private Helm chart | The username required to access the private Dapr Helm chart. If it can be accessed publicly, this env variable does not need to be set |
DAPR_HELM_REPO_PASSWORD | A password for a private Helm chart | The password required to access the private Dapr helm chart. If it can be accessed publicly, this env variable does not need to be set |
OTEL_EXPORTER_OTLP_ENDPOINT | OpenTelemetry Tracing | Sets the Open Telemetry (OTEL) server address, turns on tracing. (Example: http://localhost:4318 ) |
OTEL_EXPORTER_OTLP_INSECURE | OpenTelemetry Tracing | Sets the connection to the endpoint as unencrypted. (true , false ) |
OTEL_EXPORTER_OTLP_PROTOCOL | OpenTelemetry Tracing | The OTLP transport protocol to use. (grpc , http/protobuf , http/json ) |
DAPR_COMPONENTS_SOCKETS_FOLDER | Dapr runtime and the .NET, Go, and Java pluggable component SDKs | The location or path where Dapr looks for Pluggable Components Unix Domain Socket files. If unset this location defaults to /tmp/dapr-components-sockets |
DAPR_COMPONENTS_SOCKETS_EXTENSION | .NET and Java pluggable component SDKs | A per-SDK configuration that indicates the default file extension applied to socket files created by the SDKs. Not a Dapr-enforced behavior. |
DAPR_PLACEMENT_METADATA_ENABLED | Dapr placement | Enable an endpoint for the Placement service that exposes placement table information on actor usage. Set to true to enable in self-hosted mode. Learn more about the Placement API |
DAPR_HOST_IP | Dapr sidecar | The host’s chosen IP address. If not specified, will loop over the network interfaces and select the first non-loopback address it finds. |
DAPR_HEALTH_TIMEOUT | SDKs | Sets the time on the “wait for sidecar” availability. Overrides the default timeout setting of 60 seconds. |
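For instance, an application can read DAPR_HTTP_PORT rather than hardcoding the sidecar port. A quick smoke test from a shell might look like the following, calling the Dapr health API endpoint:
# Call the sidecar's health endpoint on the port injected by Dapr
curl -i "http://localhost:${DAPR_HTTP_PORT}/v1.0/healthz"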
5 - Dapr components reference
5.1 - Pub/sub brokers component specs
The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. Learn how to set up different brokers for Dapr publish and subscribe.
Pub/sub component retries vs inbound resiliency
Each pub/sub component has its own built-in retry behaviors, unique to the message broker solution and unrelated to Dapr. Before explicitly applying a Dapr resiliency policy, make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status | Alpha, Beta, Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Generic
Component | Status | Component version | Since runtime version |
---|---|---|---|
Apache Kafka | Stable | v1 | 1.5 |
In-memory | Stable | v1 | 1.7 |
JetStream | Beta | v1 | 1.10 |
KubeMQ | Beta | v1 | 1.10 |
MQTT3 | Stable | v1 | 1.7 |
Pulsar | Stable | v1 | 1.10 |
RabbitMQ | Stable | v1 | 1.7 |
Redis Streams | Stable | v1 | 1.0 |
RocketMQ | Alpha | v1 | 1.8 |
Solace-AMQP | Beta | v1 | 1.10 |
Amazon Web Services (AWS)
Component | Status | Component version | Since runtime version |
---|---|---|---|
AWS SNS/SQS | Stable | v1 | 1.10 |
Google Cloud Platform (GCP)
Component | Status | Component version | Since runtime version |
---|---|---|---|
GCP Pub/Sub | Stable | v1 | 1.11 |
Microsoft Azure
Component | Status | Component version | Since runtime version |
---|---|---|---|
Azure Event Hubs | Stable | v1 | 1.8 |
Azure Service Bus Queues | Beta | v1 | 1.10 |
Azure Service Bus Topics | Stable | v1 | 1.0 |
5.1.1 - Apache Kafka
Component format
To set up Apache Kafka pub/sub, create a component of type pubsub.kafka
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
All component metadata field values can carry templated metadata values, which are resolved on Dapr sidecar startup.
For example, you can choose to use {namespace}
as the consumerGroup
to enable using the same appId
in different namespaces using the same topics as described in this article.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "{namespace}"
- name: consumerID # Optional. If not supplied, runtime will create one.
value: "channel1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "password"
- name: saslUsername # Required if authType is `password`.
value: "adminuser"
- name: saslPassword # Required if authType is `password`.
secretKeyRef:
name: kafka-secrets
key: saslPasswordSecret
- name: saslMechanism
value: "SHA-512"
- name: maxMessageBytes # Optional.
value: 1024
- name: consumeRetryInterval # Optional.
value: 200ms
- name: heartbeatInterval # Optional.
value: 5s
- name: sessionTimeout # Optional.
value: 15s
- name: version # Optional.
value: 2.0.0
- name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
value: "true"
- name: consumerFetchMin # Optional. Advanced setting. The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available.
value: 1
- name: consumerFetchDefault # Optional. Advanced setting. The default number of message bytes to fetch from the broker in each request.
value: 2097152
- name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
value: 512
- name: consumerGroupRebalanceStrategy # Optional. Advanced setting. The strategy to use for consumer group rebalancing.
value: sticky
- name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
value: http://localhost:8081
- name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
value: XYAXXAZ
- name: schemaRegistryAPISecret # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.
value: "ABCDEFGMEADFF"
- name: schemaCachingEnabled # Optional. When using Schema Registry Avro serialization/deserialization. Enables caching for schemas.
value: true
- name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
value: 5m
- name: useAvroJson # Optional. Enables Avro JSON schema for serialization as opposed to Standard JSON default. Only applicable when the subscription uses valueSchemaType=Avro
value: "true"
- name: escapeHeaders # Optional.
value: false
For details on using
secretKeyRef
, see the guide on how to reference secrets in components.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
brokers | Y | A comma-separated list of Kafka brokers. | "localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093" |
consumerGroup | N | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. If a value for consumerGroup is provided, any value for consumerID is ignored - a combination of the consumer group and a random unique identifier will be set for the consumerID instead. |
"group1" |
consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID ) value. If a value for consumerGroup is provided, any value for consumerID is ignored - a combination of the consumer group and a random unique identifier will be set for the consumerID instead. |
Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}" , etc.). See all of template tags you can use in your component metadata. |
clientID | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to "namespace.appID" for Kubernetes mode or "appID" for Self-Hosted mode. |
"my-namespace.my-dapr-app" , "my-dapr-app" |
authRequired | N | Deprecated Enable SASL authentication with the Kafka brokers. | "true" , "false" |
authType | Y | Configure or disable authentication. Supported values: none , password , mtls , oidc or awsiam |
"password" , "none" |
saslUsername | N | The SASL username used for authentication. Only required if authType is set to "password" . |
"adminuser" |
saslPassword | N | The SASL password used for authentication. Can be secretKeyRef to use a secret reference. Only required if authType is set to "password" . |
"" , "KeFg23!" |
saslMechanism | N | The SASL Authentication Mechanism you wish to use. Only required if authType is set to "password" . Defaults to PLAINTEXT |
"SHA-512", "SHA-256", "PLAINTEXT" |
initialOffset | N | The initial offset to use if no offset was previously committed. Should be “newest” or “oldest”. Defaults to “newest”. | "oldest" |
maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | 2048 |
consumeRetryInterval | N | The interval between retries when attempting to consume topics. Treats numbers without suffix as milliseconds. Defaults to 100ms. | 200ms |
consumeRetryEnabled | N | Disable consume retry by setting "false" |
"true" , "false" |
version | N | Kafka cluster version. Defaults to 2.0.0. Note that this must be set to 1.0.0 if you are using Azure EventHubs with Kafka. |
0.10.2.0 |
caCert | N | Certificate authority certificate, required for using TLS. Can be secretKeyRef to use a secret reference |
"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientCert | N | Client certificate, required for authType mtls . Can be secretKeyRef to use a secret reference |
"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientKey | N | Client key, required for authType mtls Can be secretKeyRef to use a secret reference |
"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----" |
skipVerify | N | Skip TLS verification, this is not recommended for use in production. Defaults to "false" |
"true" , "false" |
disableTls | N | Disable TLS for transport security. To disable, you’re not required to set value to "true" . This is not recommended for use in production. Defaults to "false" . |
"true" , "false" |
oidcTokenEndpoint | N | Full URL to an OAuth2 identity provider access token endpoint. Required when authType is set to oidc |
“https://identity.example.com/v1/token" |
oidcClientID | N | The OAuth2 client ID that has been provisioned in the identity provider. Required when authType is set to oidc |
dapr-kafka |
oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider: Required when authType is set to oidc |
"KeFg23!" |
oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when authType is set to oidc . Defaults to "openid" |
"openid,kafka-prod" |
oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | {"cluster":"kafka","poolid":"kafkapool"} |
awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS region where the Kafka cluster is deployed to. Required when authType is set to awsiam |
us-west-1 |
awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account. | "accessKey" |
awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key. | "secretKey" |
awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | "sessionToken" |
awsIamRoleArn | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘assumeRoleArn’ instead. IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | "arn:aws:iam::123456789:role/mskRole" |
awsStsSessionName | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionName’ instead. Represents the session name for assuming a role. | "DaprDefaultSession" |
schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | http://localhost:8081 |
schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | XYAXXAZ |
schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | ABCDEFGMEADFF |
schemaCachingEnabled | N | When using Schema Registry Avro serialization/deserialization. Enables caching for schemas. Default is true |
true |
schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | 5m |
useAvroJson | N | Enables Avro JSON schema for serialization as opposed to Standard JSON default. Only applicable when the subscription uses valueSchemaType=Avro. Default is "false" |
"true" |
clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection’s topic metadata to be refreshed with the broker as a Go duration. Defaults to 9m . |
"4m" |
clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | "4m" |
consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is 1 , as 0 causes the consumer to spin when no messages are available. Equivalent to the JVM’s fetch.min.bytes . |
"2" |
consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is "1048576" bytes. |
"2097152" |
channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to 256 . |
"512" |
heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the sessionTimeout value. Defaults to “3s”. |
"5s" |
sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to “10s”. | "20s" |
consumerGroupRebalanceStrategy | N | The strategy to use for consumer group rebalancing. Supported values: range , sticky , roundrobin . Default is range |
"sticky" |
escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is false . |
true |
The secretKeyRef above references a Kubernetes secrets store to access the TLS information. Visit here to learn more about how to configure a secret store component.
Note
The metadata version must be set to 1.0.0 when using Azure Event Hubs with Kafka.
Authentication
Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the authRequired field has been deprecated from the v1.6 release; the authType field should be used instead. If authRequired is set to true, Dapr attempts to configure authType correctly based on the value of saslPassword. The valid values for authType are:
- none
- password
- certificate
- mtls
- oidc
- awsiam
Note
authType is authentication only. Authorization is still configured within Kafka, except for awsiam, which can also drive authorization decisions configured in AWS IAM.
None
Setting authType to none will disable any authentication. This is NOT recommended in production.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-noauth
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "none"
- name: maxMessageBytes # Optional.
value: 1024
- name: consumeRetryInterval # Optional.
value: 200ms
- name: heartbeatInterval # Optional.
value: 5s
- name: sessionTimeout # Optional.
value: 15s
- name: version # Optional.
value: 0.10.2.0
- name: disableTls
value: "true"
SASL Password
Setting authType to password enables SASL authentication. This requires setting the saslUsername and saslPassword fields.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-sasl
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "password"
- name: saslUsername # Required if authType is `password`.
value: "adminuser"
- name: saslPassword # Required if authType is `password`.
secretKeyRef:
name: kafka-secrets
key: saslPasswordSecret
- name: saslMechanism
value: "SHA-512"
- name: maxMessageBytes # Optional.
value: 1024
- name: consumeRetryInterval # Optional.
value: 200ms
- name: heartbeatInterval # Optional.
value: 5s
- name: sessionTimeout # Optional.
value: 15s
- name: version # Optional.
value: 0.10.2.0
- name: caCert
secretKeyRef:
name: kafka-tls
key: caCert
Mutual TLS
Setting authType to mtls uses an x509 client certificate (the clientCert field) and key (the clientKey field) to authenticate. Note that mTLS as an authentication mechanism is distinct from using TLS to secure the transport layer via encryption. mTLS requires TLS transport (meaning disableTls must be false), but securing the transport layer does not require using mTLS. See Communication using TLS for configuring underlying TLS transport.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-mtls
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "mtls"
- name: caCert
secretKeyRef:
name: kafka-tls
key: caCert
- name: clientCert
secretKeyRef:
name: kafka-tls
key: clientCert
- name: clientKey
secretKeyRef:
name: kafka-tls
key: clientKey
- name: maxMessageBytes # Optional.
value: 1024
- name: consumeRetryInterval # Optional.
value: 200ms
- name: heartbeatInterval # Optional.
value: 5s
- name: sessionTimeout # Optional.
value: 15s
- name: version # Optional.
value: 0.10.2.0
OAuth2 or OpenID Connect
Setting authType to oidc enables SASL authentication via the OAUTHBEARER mechanism. This supports specifying a bearer token from an external OAuth2 or OIDC identity provider. Currently, only the client_credentials grant is supported.
Configure oidcTokenEndpoint to the full URL for the identity provider access token endpoint.
Set oidcClientID and oidcClientSecret to the client credentials provisioned in the identity provider.
If caCert is specified in the component configuration, the certificate is appended to the system CA trust for verifying the identity provider certificate. Similarly, if skipVerify is specified in the component configuration, verification will also be skipped when accessing the identity provider.
By default, the only scope requested for the token is openid; it is highly recommended that additional scopes be specified via oidcScopes in a comma-separated list and validated by the Kafka broker. If additional scopes are not used to narrow the validity of the access token, a compromised Kafka broker could replay the token to access other services as the Dapr clientID.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "oidc"
- name: oidcTokenEndpoint # Required if authType is `oidc`.
value: "https://identity.example.com/v1/token"
- name: oidcClientID # Required if authType is `oidc`.
value: "dapr-myapp"
- name: oidcClientSecret # Required if authType is `oidc`.
secretKeyRef:
name: kafka-secrets
key: oidcClientSecret
- name: oidcScopes # Recommended if authType is `oidc`.
value: "openid,kafka-dev"
- name: caCert # Also applied to verifying OIDC provider certificate
secretKeyRef:
name: kafka-tls
key: caCert
- name: maxMessageBytes # Optional.
value: 1024
- name: consumeRetryInterval # Optional.
value: 200ms
- name: heartbeatInterval # Optional.
value: 5s
- name: sessionTimeout # Optional.
value: 15s
- name: version # Optional.
value: 0.10.2.0
AWS IAM
Authenticating with AWS IAM is supported with MSK. Setting authType to awsiam uses the AWS SDK to generate auth tokens to authenticate.
Note
The only required metadata field is region. If no accessKey and secretKey are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-awsiam
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "awsiam"
- name: region # Required.
value: "us-west-1"
- name: accessKey # Optional.
value: <AWS_ACCESS_KEY>
- name: secretKey # Optional.
value: <AWS_SECRET_KEY>
- name: sessionToken # Optional.
value: <AWS_SESSION_KEY>
- name: assumeRoleArn # Optional.
value: "arn:aws:iam::123456789:role/mskRole"
- name: sessionName # Optional.
value: "DaprDefaultSession"
Communication using TLS
By default TLS is enabled to secure the transport layer to Kafka. To disable TLS, set disableTls to true. When TLS is enabled, you can control server certificate verification using skipVerify to disable verification (NOT recommended in production environments) and caCert to specify a trusted TLS certificate authority (CA). If no caCert is specified, the system CA trust will be used. To also configure mTLS authentication, see the section under Authentication.
Below is an example of a Kafka pubsub component configured to use transport layer TLS:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "certificate"
- name: consumeRetryInterval # Optional.
value: 200ms
- name: heartbeatInterval # Optional.
value: 5s
- name: sessionTimeout # Optional.
value: 15s
- name: version # Optional.
value: 0.10.2.0
- name: maxMessageBytes # Optional.
value: 1024
- name: caCert # Certificate authority certificate.
secretKeyRef:
name: kafka-tls
key: caCert
auth:
secretStore: <SECRET_STORE_NAME>
Consuming from multiple topics
When consuming from multiple topics using a single pub/sub component, there is no guarantee about how the consumers in your consumer group are balanced across the topic partitions.
For instance, let’s say you are subscribing to two topics with 10 partitions per topic and you have 20 replicas of your service consuming from the two topics. There is no guarantee that 10 will be assigned to the first topic and 10 to the second topic. Instead, the partitions could be divided unequally, with more than 10 assigned to the first topic and the rest assigned to the second topic.
This can result in idle consumers listening to the first topic and over-extended consumers on the second topic, or vice versa. This same behavior can be observed when using auto-scalers such as HPA or KEDA.
If you run into this particular issue, it is recommended that you configure a single pub/sub component per topic with uniquely defined consumer groups per component. This guarantees that all replicas of your service are fully allocated to the unique consumer group, where each consumer group targets one specific topic.
For example, you may define two Dapr components with the following configuration:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-topic-one
spec:
type: pubsub.kafka
version: v1
metadata:
- name: consumerGroup
value: "{appID}-topic-one"
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-topic-two
spec:
type: pubsub.kafka
version: v1
metadata:
- name: consumerGroup
value: "{appID}-topic-two"
Sending and receiving multiple messages
The Apache Kafka component supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.
Configuring bulk subscribe
When subscribing to a topic, you can configure bulkSubscribe options. Refer to Subscribing messages in bulk for more details. Learn more about the bulk subscribe API.
Apache Kafka supports the following bulk metadata options:
Configuration | Default |
---|---|
maxAwaitDurationMs | 10000 (10s) |
maxMessagesCount | 80 |
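For example, a declarative subscription that turns on bulk subscribe for a Kafka-backed pub/sub might look like the sketch below; the topic orders, route /checkout, and component name kafka-pubsub are illustrative assumptions rather than required values.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: kafka-pubsub
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 80      # maximum messages delivered in one bulk request
    maxAwaitDurationMs: 10000 # maximum wait before a partial batch is delivered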
Per-call metadata fields
Partition Key
When invoking the Kafka pub/sub, it's possible to provide an optional partition key by using the metadata query parameter in the request URL.
The parameter name can either be partitionKey or __key.
Example:
curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partitionKey=key1 \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
}
}'
Message headers
All other metadata key/value pairs (that are not partitionKey or __key) are set as headers in the Kafka message. Here is an example setting a correlationId for the message.
curl -X POST "http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1" \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
}
}'
Kafka Pubsub special message headers received on consumer side
When consuming messages, special message metadata is automatically passed as headers. These are:
- __key: the message key, if available
- __topic: the topic for the message
- __partition: the partition number for the message
- __offset: the offset of the message in the partition
- __timestamp: the timestamp for the message
You can access them within the consumer endpoint as follows:
from typing import Annotated
from fastapi import FastAPI, APIRouter, Body, Header, Response, status
app = FastAPI()
router = APIRouter()
@router.get('/dapr/subscribe')
def subscribe():
subscriptions = [{'pubsubname': 'pubsub',
'topic': 'my-topic',
'route': 'my_topic_subscriber',
}]
return subscriptions
@router.post('/my_topic_subscriber')
def my_topic_subscriber(
key: Annotated[str, Header(alias="__key")],
offset: Annotated[int, Header(alias="__offset")],
event_data=Body()):
print(f"key={key} - offset={offset} - data={event_data}", flush=True)
return Response(status_code=status.HTTP_200_OK)
app.include_router(router)
Receiving message headers with special characters
The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors.
HTTP header values must follow specifications, making some characters not allowed. Learn more about the protocols.
In this case, you can enable the escapeHeaders configuration setting, which uses URL escaping to encode header values on the consumer side.
Note
When using this setting, the received message headers are URL escaped, and you need to URL "un-escape" them to get the original values. Set escapeHeaders to true to URL escape.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-escape-headers
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "none"
- name: escapeHeaders
value: "true"
Avro Schema Registry serialization/deserialization
You can configure pub/sub to publish or consume data encoded using Avro binary serialization, leveraging an Apache Schema Registry (for example, Confluent Schema Registry, Apicurio).
Configuration
Important
Currently, only message value serialization/deserialization is supported. Since cloud events are not supported, the rawPayload=true metadata must be passed when publishing Avro messages.
Please note that rawPayload=true should NOT be set for consumers, as the message value will be wrapped into a CloudEvent and base64-encoded. Leaving rawPayload as default (i.e. false) will send the Avro-decoded message to the application as a JSON payload.
When setting the useAvroJson component metadata to true, the inbound/outbound Avro binary is converted into/from Avro JSON encoding. This can be preferable when accurate type mapping is desirable. The default is standard JSON, which is typically easier to bind to a native type in an application.
When configuring the Kafka pub/sub component metadata, you must define:
- The schema registry URL
- The API key/secret, if applicable
Schema subjects are automatically derived from topic names, using the standard naming convention. For example, for a topic named my-topic, the schema subject will be my-topic-value.
When interacting with the message payload within the service, it is in JSON format. The payload is transparently serialized/deserialized within the Dapr component.
Date/Datetime fields must be passed as their Epoch Unix timestamp equivalent (rather than typical ISO 8601). For example:
- 2024-01-10T04:36:05.986Z should be passed as 1704861365986 (the number of milliseconds since Jan 1st, 1970)
- 2024-01-10 should be passed as 19732 (the number of days since Jan 1st, 1970)
Publishing Avro messages
In order to indicate to the Kafka pub/sub component that the message should be using Avro serialization, the valueSchemaType metadata must be set to Avro.
curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/my-topic?metadata.rawPayload=true&metadata.valueSchemaType=Avro -H "Content-Type: application/json" -d '{"order_number": "345", "created_date": 1704861365986}'
import json
from dapr.clients import DaprClient
with DaprClient() as d:
req_data = {
'order_number': '345',
'created_date': 1704861365986
}
# Create a typed message with content type and body
resp = d.publish_event(
pubsub_name='pubsub',
topic_name='my-topic',
data=json.dumps(req_data),
publish_metadata={'rawPayload': 'true', 'valueSchemaType': 'Avro'}
)
# Print the request
print(req_data, flush=True)
Subscribing to Avro topics
In order to indicate to the Kafka pub/sub component that the message should be deserialized using Avro, the valueSchemaType metadata must be set to Avro in the subscription metadata.
from fastapi import FastAPI, APIRouter, Body, Response, status
app = FastAPI()
router = APIRouter()
@router.get('/dapr/subscribe')
def subscribe():
subscriptions = [{'pubsubname': 'pubsub',
'topic': 'my-topic',
'route': 'my_topic_subscriber',
'metadata': {
'valueSchemaType': 'Avro',
} }]
return subscriptions
@router.post('/my_topic_subscriber')
def my_topic_subscriber(event_data=Body()):
print(event_data, flush=True)
return Response(status_code=status.HTTP_200_OK)
app.include_router(router)
Overriding default consumer group rebalancing
In Kafka, rebalancing strategies determine how partitions are assigned to consumers within a consumer group. The default strategy is “range”, but “roundrobin” and “sticky” are also available.
- Range: Partitions are assigned to consumers based on their lexicographical order. If you have three partitions (0, 1, 2) and two consumers (A, B), consumer A might get partitions 0 and 1, while consumer B gets partition 2.
- RoundRobin: Partitions are assigned to consumers in a round-robin fashion. With the same example above, consumer A might get partitions 0 and 2, while consumer B gets partition 1.
- Sticky: This strategy aims to preserve previous assignments as much as possible while still maintaining a balanced distribution. If a consumer leaves or joins the group, only the affected partitions are reassigned, minimizing disruption.
Choosing a Strategy:
- Range: Simple to understand and implement, but can lead to uneven distribution if partition sizes vary significantly.
- RoundRobin: Provides a good balance in many cases, but might not be optimal if message keys are unevenly distributed.
- Sticky: Generally preferred for its ability to minimize disruption during rebalances, especially when dealing with a large number of partitions or frequent consumer group changes.
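To override the default, set the consumerGroupRebalanceStrategy metadata field on the component. A minimal sketch follows; the broker address, component name, and the choice of "sticky" are illustrative.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-sticky
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: authType
    value: "none"
  - name: consumerGroupRebalanceStrategy # Optional. Supported: range, roundrobin, sticky.
    value: "sticky"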
Create a Kafka instance
You can run Kafka locally using this Docker image. To run without Docker, see the getting started guide here.
To run Kafka on Kubernetes, you can use any Kafka operator, such as Strimzi.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.2 - AWS SNS/SQS
Component format
To set up AWS SNS/SQS pub/sub, create a component of type pubsub.aws.snssqs
.
By default, the AWS SNS/SQS component:
- Generates the SNS topics
- Provisions the SQS queues
- Configures a subscription of the queues to the topics
Note
If you only have a publisher and no subscriber, only the SNS topics are created.
However, if you have a subscriber, SNS, SQS, and the dynamic or static subscription thereof are generated.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: snssqs-pubsub
spec:
type: pubsub.aws.snssqs
version: v1
metadata:
- name: accessKey
value: "AKIAIOSFODNN7EXAMPLE"
- name: secretKey
value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
- name: region
value: "us-east-1"
# - name: consumerID # Optional. If not supplied, runtime will create one.
# value: "channel1"
# - name: endpoint # Optional.
# value: "http://localhost:4566"
# - name: sessionToken # Optional (mandatory if using AssignedRole; for example, temporary accessKey and secretKey)
# value: "TOKEN"
# - name: messageVisibilityTimeout # Optional
# value: 10
# - name: messageRetryLimit # Optional
# value: 10
# - name: messageReceiveLimit # Optional
# value: 10
# - name: sqsDeadLettersQueueName # Optional
# value: "myapp-dlq"
# - name: messageWaitTimeSeconds # Optional
# value: 1
# - name: messageMaxNumber # Optional
# value: 10
# - name: fifo # Optional
# value: "true"
# - name: fifoMessageGroupID # Optional
# value: "app1-mgi"
# - name: disableEntityManagement # Optional
# value: "false"
# - name: disableDeleteOnRetryLimit # Optional
# value: "false"
# - name: assetsManagementTimeoutSeconds # Optional
# value: 5
# - name: concurrencyMode # Optional
# value: "single"
# - name: concurrencyLimit # Optional
# value: "0"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
accessKey | Y | ID of the AWS account/role with appropriate permissions to SNS and SQS (see below) | "AKIAIOSFODNN7EXAMPLE" |
secretKey | Y | Secret for the AWS user/role. If using an AssumeRole access, you will also need to provide a sessionToken | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
region | Y | The AWS region where the SNS/SQS assets are located or will be created in. See this page for valid regions. Ensure that SNS and SQS are available in that region | "us-east-1" |
consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. See the pub/sub broker component file to learn how ConsumerID is automatically generated. | Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
endpoint | N | AWS endpoint for the component to use. Only used for local development with, for example, localstack. The endpoint is unnecessary when running against production AWS | "http://localhost:4566" |
sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials | "TOKEN" |
messageReceiveLimit | N | Number of times a message is received, after processing of that message fails, that once reached, results in removing of that message from the queue. If sqsDeadLettersQueueName is specified, messageReceiveLimit is the number of times a message is received, after processing of that message fails, that once reached, results in moving of the message to the SQS dead-letters queue. Default: 10 | 10 |
sqsDeadLettersQueueName | N | Name of the dead letters queue for this application | "myapp-dlq" |
messageVisibilityTimeout | N | Amount of time in seconds that a message is hidden from receive requests after it is sent to a subscriber. Default: 10 | 10 |
messageRetryLimit | N | Number of times to resend a message after processing of that message fails before removing that message from the queue. Default: 10 | 10 |
messageWaitTimeSeconds | N | The duration (in seconds) for which the call waits for a message to arrive in the queue before returning. If a message is available, the call returns sooner than messageWaitTimeSeconds. If no messages are available and the wait time expires, the call returns successfully with an empty list of messages. Default: 1 | 1 |
messageMaxNumber | N | Maximum number of messages to receive from the queue at a time. Default: 10, Maximum: 10 | 10 |
fifo | N | Use SQS FIFO queue to provide message ordering and deduplication. Default: "false". See further details about SQS FIFO | "true", "false" |
fifoMessageGroupID | N | If fifo is enabled, instructs Dapr to use a custom Message Group ID for the pubsub deployment. This is not mandatory as Dapr creates a custom Message Group ID for each producer, thus ensuring ordering of messages per a Dapr producer. Default: "" | "app1-mgi" |
disableEntityManagement | N | When set to true, SNS topics, SQS queues and the SQS subscriptions to SNS do not get created automatically. Default: "false" | "true", "false" |
disableDeleteOnRetryLimit | N | When set to true, after retrying and failing messageRetryLimit times processing a message, reset the message visibility timeout so that other consumers can try processing, instead of deleting the message from SQS (the default behavior). Default: "false" | "true", "false" |
assetsManagementTimeoutSeconds | N | Amount of time in seconds, for an AWS asset management operation, before it times out and is cancelled. Asset management operations are any operations performed on STS, SNS and SQS, except message publish and consume operations that implement the default Dapr component retry behavior. The value can be set to any non-negative float/integer. Default: 5 | 0.5, 10 |
concurrencyMode | N | When messages are received in bulk from SQS, call the subscriber sequentially ("single" message at a time), or concurrently (in "parallel"). Default: "parallel" | "single", "parallel" |
concurrencyLimit | N | Defines the maximum number of concurrent workers handling messages. This value is ignored when concurrencyMode is set to "single". To avoid limiting the number of concurrent workers, set this to 0. Default: 0 | 100 |
Additional info
Conforming with AWS specifications
Dapr-created SNS topic and SQS queue names conform with AWS specifications. By default, Dapr creates an SQS queue name based on the consumer app-id; therefore, Dapr might perform name standardization to meet AWS specifications.
SNS/SQS component behavior
When the pub/sub SNS/SQS component provisions SNS topics, the SQS queues and the subscription behave differently in situations where the component is operating on behalf of a message producer (with no subscriber app deployed), than in situations where a subscriber app is present (with no publisher deployed).
Due to how SNS works without an SQS subscription in a publisher-only setup, the SQS queues and the subscription behave as a "classic" pub/sub system that relies on subscribers listening to topic messages. Without those subscribers, messages:
- Cannot be passed onwards and are effectively dropped
- Are not available for future subscribers (no replay of message when the subscriber finally subscribes)
SQS FIFO
Using SQS FIFO (the fifo metadata field set to "true") per AWS specifications provides message ordering and deduplication, but incurs a lower SQS processing throughput, among other caveats.
Specifying fifoMessageGroupID limits the number of concurrent consumers of the FIFO queue used to only one, but guarantees global ordering of messages published by the app's Dapr sidecars. See this AWS blog post to better understand the topic of Message Group IDs and FIFO queues.
To avoid losing the order of messages delivered to consumers, the FIFO configuration for the SQS component requires the concurrencyMode metadata field set to "single".
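Putting those fields together, a FIFO-enabled SNS/SQS component might be sketched as follows; the component name, region, and message group ID are illustrative, and concurrencyMode is set to "single" as required above.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub-fifo
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
  - name: region
    value: "us-east-1"
  - name: fifo # Enables SQS FIFO queues
    value: "true"
  - name: fifoMessageGroupID # Optional custom Message Group ID
    value: "app1-mgi"
  - name: concurrencyMode # "single" preserves ordering for FIFO
    value: "single"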
Default parallel concurrencyMode
Since v1.8.0, the component supports the "parallel" concurrencyMode as its default mode. In prior versions, the component default behavior was calling the subscriber a single message at a time and waiting for its response.
SQS dead-letter Queues
When configuring the PubSub component with SQS dead-letter queues, the metadata fields messageReceiveLimit and sqsDeadLettersQueueName must both be set to a value. For messageReceiveLimit, the value must be greater than 0 and the sqsDeadLettersQueueName must not be an empty string.
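As an illustration, a component that moves a message to a dead-letters queue after two failed receives might be configured as in the sketch below; the queue name and limit are example values.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub-dlq
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
  - name: region
    value: "us-east-1"
  - name: sqsDeadLettersQueueName # Must not be an empty string
    value: "myapp-dlq"
  - name: messageReceiveLimit # Must be greater than 0
    value: 2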
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes) node/pod already attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec.
SNS/SQS Contention with Dapr
Fundamentally, SNS aggregates messages from multiple publisher topics into a single SQS queue by creating SQS subscriptions to those topics. As a subscriber, the SNS/SQS pub/sub component consumes messages from that sole SQS queue.
However, like any SQS consumer, the component cannot selectively retrieve the messages published to the SNS topics to which it is specifically subscribed. This can result in the component receiving messages originating from topics without associated handlers. Typically, this occurs during:
- Component initialization: If infrastructure subscriptions are ready before component subscription handlers, or
- Shutdown: If component handlers are removed before infrastructure subscriptions.
Since this issue affects any SQS consumer of multiple SNS topics, the component cannot prevent consuming messages from topics lacking handlers. When this happens, the component logs an error indicating such messages were erroneously retrieved.
In these situations, the unhandled messages reappear in SQS, using up one of their allowed receives with each pull. Thus, there is a risk that an unhandled message could exceed its messageReceiveLimit and be lost.
Important
Consider potential contention scenarios when using SNS/SQS with Dapr, and configure messageReceiveLimit appropriately. It is highly recommended to use SQS dead-letter queues by setting sqsDeadLettersQueueName to prevent losing messages.
Create an SNS/SQS instance
For local development, the localstack project is used to integrate AWS SNS/SQS. Follow these instructions to run localstack.
To run localstack locally from the command line using Docker, run the following command:
docker run --rm -it -p 4566:4566 -p 4571:4571 -e SERVICES="sts,sns,sqs" -e AWS_DEFAULT_REGION="us-east-1" localstack/localstack
In order to use localstack with your pub/sub binding, you need to provide the endpoint configuration in the component metadata. The endpoint is unnecessary when running against production AWS.
See Authenticating to AWS for information about authentication-related attributes.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: snssqs-pubsub
spec:
type: pubsub.aws.snssqs
version: v1
metadata:
- name: accessKey
value: "anyString"
- name: secretKey
value: "anyString"
- name: endpoint
value: http://localhost:4566
# Use us-east-1 or any other region if provided to localstack as defined by "AWS_DEFAULT_REGION" envvar
- name: region
value: us-east-1
To run localstack on Kubernetes, you can apply the configuration below. Localstack is then reachable at the DNS name http://localstack.default.svc.cluster.local:4566 (assuming this was applied to the default namespace), which should be used as the endpoint.
apiVersion: apps/v1
kind: Deployment
metadata:
name: localstack
spec:
# using the selector, we will expose the running deployments
# this is how Kubernetes knows, that a given service belongs to a deployment
selector:
matchLabels:
app: localstack
replicas: 1
template:
metadata:
labels:
app: localstack
spec:
containers:
- name: localstack
image: localstack/localstack:latest
ports:
# Expose the edge endpoint
- containerPort: 4566
---
kind: Service
apiVersion: v1
metadata:
name: localstack
labels:
app: localstack
spec:
selector:
app: localstack
ports:
- protocol: TCP
port: 4566
targetPort: 4566
type: LoadBalancer
In order to run in AWS, create or assign an IAM user with permissions to the SNS and SQS services, with a policy like:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "YOUR_POLICY_NAME",
"Effect": "Allow",
"Action": [
"sns:CreateTopic",
"sns:GetTopicAttributes",
"sns:ListSubscriptionsByTopic",
"sns:Publish",
"sns:Subscribe",
"sns:TagResource",
"sqs:ChangeMessageVisibility",
"sqs:CreateQueue",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SetQueueAttributes",
"sqs:TagQueue"
],
"Resource": [
"arn:aws:sns:AWS_REGION:AWS_ACCOUNT_ID:*",
"arn:aws:sqs:AWS_REGION:AWS_ACCOUNT_ID:*"
]
}
]
}
Plug the AWS account ID and AWS account secret into the accessKey and secretKey in the component metadata, using Kubernetes secrets and secretKeyRef.
Alternatively, let's say you want to provision the SNS and SQS assets using your own tool of choice (for example, Terraform) while preventing Dapr from doing so dynamically. You need to enable disableEntityManagement and assign your Dapr-using application an IAM Role with a policy like:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "YOUR_POLICY_NAME",
"Effect": "Allow",
"Action": [
"sqs:DeleteMessage",
"sqs:ReceiveMessage",
"sqs:ChangeMessageVisibility",
"sqs:GetQueueUrl",
"sqs:GetQueueAttributes",
"sns:Publish",
"sns:ListSubscriptionsByTopic",
"sns:GetTopicAttributes"
],
"Resource": [
"arn:aws:sns:AWS_REGION:AWS_ACCOUNT_ID:APP_TOPIC_NAME",
"arn:aws:sqs:AWS_REGION:AWS_ACCOUNT_ID:APP_ID"
]
}
]
}
In the above example, you are running your applications on an EKS cluster with dynamic assets creation (the default Dapr behavior).
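For the setup where you provision the SNS and SQS assets yourself and rely on the IAM role attached to your pods, a minimal sketch of the matching component metadata could be the following; the region is illustrative and access keys are deliberately omitted.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
  - name: region
    value: "us-east-1"
  - name: disableEntityManagement # Dapr will not create topics, queues, or subscriptions
    value: "true"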
Related links
5.1.3 - Azure Event Hubs
Component format
To set up an Azure Event Hubs pub/sub, create a component of type pubsub.azure.eventhubs
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
Apart from the configuration metadata fields shown below, Azure Event Hubs also supports Azure Authentication mechanisms.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: eventhubs-pubsub
spec:
type: pubsub.azure.eventhubs
version: v1
metadata:
# Either connectionString or eventHubNamespace is required
# Use connectionString when *not* using Microsoft Entra ID
- name: connectionString
value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
# Use eventHubNamespace when using Microsoft Entra ID
- name: eventHubNamespace
value: "namespace"
- name: consumerID # Optional. If not supplied, the runtime will create one.
value: "channel1"
- name: enableEntityManagement
value: "false"
- name: enableInOrderMessageDelivery
value: "false"
# The following four properties are needed only if enableEntityManagement is set to true
- name: resourceGroupName
value: "test-rg"
- name: subscriptionID
value: "value of Azure subscription ID"
- name: partitionCount
value: "1"
- name: messageRetentionInDays
value: "3"
# Checkpoint store attributes
- name: storageAccountName
value: "myeventhubstorage"
- name: storageAccountKey
value: "112233445566778899"
- name: storageContainerName
value: "myeventhubstoragecontainer"
# Alternative to passing storageAccountKey
- name: storageConnectionString
value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString | Y* | Connection string for the Event Hub or the Event Hub namespace. * Mutually exclusive with eventHubNamespace field. * Required when not using Microsoft Entra ID Authentication | "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}" or "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}" |
eventHubNamespace | Y* | The Event Hub Namespace name. * Mutually exclusive with connectionString field. * Required when using Microsoft Entra ID Authentication | "namespace" |
consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. | Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
enableEntityManagement | N | Boolean value to allow management of the EventHub namespace and storage account. Default: false | "true", "false" |
enableInOrderMessageDelivery | N | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes partitionKey is set when publishing or posting to ensure ordering across partitions. Default: false | "true", "false" |
storageAccountName | Y | Storage account name to use for the checkpoint store. | "myeventhubstorage" |
storageAccountKey | Y* | Storage account key for the checkpoint store account. * When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | "112233445566778899" |
storageConnectionString | Y* | Connection string for the checkpoint store, alternative to specifying storageAccountKey | "DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>" |
storageContainerName | Y | Storage container name for the storage account name. | "myeventhubstoragecontainer" |
resourceGroupName | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | "test-rg" |
subscriptionID | N | Azure subscription ID value. Required when entity management is enabled | "azure subscription id" |
partitionCount | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: "1" | "2" |
messageRetentionInDays | N | Number of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: "1" | "90" |
Microsoft Entra ID authentication
The Azure Event Hubs pub/sub component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Example Configuration
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: eventhubs-pubsub
spec:
type: pubsub.azure.eventhubs
version: v1
metadata:
# Azure Authentication Used
- name: azureTenantId
value: "***"
- name: azureClientId
value: "***"
- name: azureClientSecret
value: "***"
- name: eventHubNamespace
value: "namespace"
- name: enableEntityManagement
value: "false"
# The following four properties are needed only if enableEntityManagement is set to true
- name: resourceGroupName
value: "test-rg"
- name: subscriptionID
value: "value of Azure subscription ID"
- name: partitionCount
value: "1"
- name: messageRetentionInDays
  value: "3"
# Checkpoint store attributes
# In this case, we're using Microsoft Entra ID to access the storage account too
- name: storageAccountName
value: "myeventhubstorage"
- name: storageContainerName
value: "myeventhubstoragecontainer"
Sending and receiving multiple messages
Azure Event Hubs supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.
Configuring bulk publish
To set the metadata for bulk publish operation, set the query parameters on the HTTP request or the gRPC metadata, as documented in the API reference.
Metadata | Default |
---|---|
metadata.maxBulkPubBytes | 1000000 |
Configuring bulk subscribe
When subscribing to a topic, you can configure bulkSubscribe options. Refer to Subscribing messages in bulk for more details and to learn more about the bulk subscribe API.
Configuration | Default |
---|---|
maxMessagesCount | 100 |
maxAwaitDurationMs | 10000 |
Configuring checkpoint frequency
When subscribing to a topic, you can configure the checkpointing frequency in a partition by setting the metadata in the HTTP or gRPC subscribe request. This metadata enables checkpointing after the configured number of events within a partition event sequence. Disable checkpointing by setting the frequency to 0.
Learn more about checkpointing.
Metadata | Default |
---|---|
metadata.checkPointFrequencyPerPartition | 1 |
The following example shows a sample subscription file for a declarative subscription using the checkPointFrequencyPerPartition metadata. Similarly, you can also pass the metadata in programmatic subscriptions.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: order-pub-sub
spec:
topic: orders
routes:
default: /checkout
pubsubname: order-pub-sub
metadata:
checkPointFrequencyPerPartition: 1
scopes:
- orderprocessing
- checkout
Note
When subscribing to a topic using BulkSubscribe, you configure the checkpointing to occur after the specified number of batches, instead of events, where a batch means the collection of events received in a single request.
Create an Azure Event Hub
Follow the instructions on the documentation to set up Azure Event Hubs.
Because this component uses Azure Storage as checkpoint store, you will also need an Azure Storage Account. Follow the instructions on the documentation to manage the storage account access keys.
See the documentation on how to get the Event Hubs connection string (note this is not for the Event Hubs namespace).
Create consumer groups for each subscriber
For every Dapr app that wants to subscribe to events, create an Event Hubs consumer group with the name of the Dapr app ID. For example, a Dapr app running on Kubernetes with dapr.io/app-id: "myapp" will need an Event Hubs consumer group named myapp.
Note: Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.
Entity Management
When entity management is enabled in the metadata, as long as the application has the right role and permissions to manipulate the Event Hub namespace, Dapr can automatically create the Event Hub and consumer group for you.
The Event Hub name is the topic field in the incoming request to publish or subscribe to, while the consumer group name is the name of the Dapr app which subscribes to a given Event Hub. For example, a Dapr app running on Kubernetes with name dapr.io/app-id: "myapp" requires an Event Hubs consumer group named myapp.
Entity management is only possible when using Microsoft Entra ID Authentication and not using a connection string.
Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.
Receiving custom properties
By default, Dapr does not forward custom properties. However, by setting the subscription metadata requireAllProperties to "true", you can receive custom properties as HTTP headers.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: order-pub-sub
spec:
topic: orders
routes:
default: /checkout
pubsubname: order-pub-sub
metadata:
requireAllProperties: "true"
The same can be achieved using the Dapr SDK:
[Topic("order-pub-sub", "orders")]
[TopicMetadata("requireAllProperties", "true")]
[HttpPost("checkout")]
public ActionResult Checkout(Order order, [FromHeader] int priority)
{
return Ok();
}
Subscribing to Azure IoT Hub Events
Azure IoT Hub provides an endpoint that is compatible with Event Hubs, so the Azure Event Hubs pubsub component can also be used to subscribe to Azure IoT Hub events.
The device-to-cloud events created by Azure IoT Hub devices will contain additional IoT Hub System Properties, and the Azure Event Hubs pubsub component for Dapr will return the following as part of the response metadata:
System Property Name | Description & Routing Query Keyword |
---|---|
iothub-connection-auth-generation-id | The connectionDeviceGenerationId of the device that sent the message. See IoT Hub device identity properties. |
iothub-connection-auth-method | The connectionAuthMethod used to authenticate the device that sent the message. |
iothub-connection-device-id | The deviceId of the device that sent the message. See IoT Hub device identity properties. |
iothub-connection-module-id | The moduleId of the device that sent the message. See IoT Hub device identity properties. |
iothub-enqueuedtime | The enqueuedTime in RFC3339 format that the device-to-cloud message was received by IoT Hub. |
message-id | The user-settable AMQP messageId. |
For example, the headers of a delivered HTTP subscription message would contain:
{
'user-agent': 'fasthttp',
'host': '127.0.0.1:3000',
'content-type': 'application/json',
'content-length': '120',
'iothub-connection-device-id': 'my-test-device',
'iothub-connection-auth-generation-id': '637618061680407492',
'iothub-connection-auth-method': '{"scope":"module","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}',
'iothub-connection-module-id': 'my-test-module-a',
'iothub-enqueuedtime': '2021-07-13T22:08:09Z',
'message-id': 'my-custom-message-id',
'x-opt-sequence-number': '35',
'x-opt-enqueued-time': '2021-07-13T22:08:09Z',
'x-opt-offset': '21560',
'traceparent': '00-4655608164bc48b985b42d39865f3834-ed6cf3697c86e7bd-01'
}
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
- Authentication to Azure
5.1.4 - Azure Service Bus Queues
Component format
To set up Azure Service Bus Queues pub/sub, create a component of type pubsub.azure.servicebus.queues
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
This component uses queues on Azure Service Bus; see the official documentation for the differences between topics and queues. For using topics, see the Azure Service Bus Topics pubsub component.
Connection String Authentication
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: servicebus-pubsub
spec:
type: pubsub.azure.servicebus.queues
version: v1
metadata:
# Required when not using Microsoft Entra ID Authentication
- name: connectionString
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
# - name: consumerID # Optional
# value: channel1
# - name: timeoutInSec # Optional
# value: 60
# - name: handlerTimeoutInSec # Optional
# value: 60
# - name: disableEntityManagement # Optional
# value: "false"
# - name: maxDeliveryCount # Optional
# value: 3
# - name: lockDurationInSec # Optional
# value: 60
# - name: lockRenewalInSec # Optional
# value: 20
# - name: maxActiveMessages # Optional
# value: 10000
# - name: maxConcurrentHandlers # Optional
# value: 10
# - name: defaultMessageTimeToLiveInSec # Optional
# value: 10
# - name: autoDeleteOnIdleInSec # Optional
# value: 3600
# - name: minConnectionRecoveryInSec # Optional
# value: 2
# - name: maxConnectionRecoveryInSec # Optional
# value: 300
# - name: maxRetriableErrorsPerSec # Optional
# value: 10
# - name: publishMaxRetries # Optional
# value: 5
# - name: publishInitialRetryIntervalInMs # Optional
# value: 500
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | Shared access policy connection string for the Service Bus. Required unless using Microsoft Entra ID authentication. | See example above |
consumerID |
N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime set it to the Dapr application ID (appID ) value. |
Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}" , etc.). See all of template tags you can use in your component metadata. |
namespaceName |
N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | "namespace.servicebus.windows.net" |
timeoutInSec |
N | Timeout for sending messages and for management operations. Default: 60 |
30 |
handlerTimeoutInSec |
N | Timeout for invoking the app’s handler. Default: 60 |
30 |
lockRenewalInSec |
N | Defines the frequency at which buffered message locks will be renewed. Default: 20 . |
20 |
maxActiveMessages |
N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: 1000 |
2000 |
maxConcurrentHandlers |
N | Defines the maximum number of concurrent message handlers. Default: 0 (unlimited) |
10 |
disableEntityManagement |
N | When set to true, queues and subscriptions do not get created automatically. Default: "false" |
"true" , "false" |
defaultMessageTimeToLiveInSec |
N | Default message time to live, in seconds. Used during subscription creation only. | 10 |
autoDeleteOnIdleInSec |
N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: 0 (disabled) |
3600 |
maxDeliveryCount |
N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | 10 |
lockDurationInSec |
N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | 30 |
minConnectionRecoveryInSec |
N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: 2 |
5 |
maxConnectionRecoveryInSec |
N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: 300 (5 minutes) |
600 |
maxRetriableErrorsPerSec |
N | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: 10 |
10 |
publishMaxRetries |
N | The max number of retries for when Azure Service Bus responds with “too busy” in order to throttle messages. Defaults: 5 |
5 |
publishInitialRetryIntervalInMs |
N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: 500 |
500 |
Microsoft Entra ID authentication
The Azure Service Bus Queues pubsub component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Example Configuration
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: servicebus-pubsub
spec:
type: pubsub.azure.servicebus.queues
version: v1
metadata:
- name: namespaceName
# Required when using Azure Authentication.
# Must be a fully-qualified domain name
value: "servicebusnamespace.servicebus.windows.net"
- name: azureTenantId
value: "***"
- name: azureClientId
value: "***"
- name: azureClientSecret
value: "***"
Message metadata
Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message.
Sending a message with metadata
To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here.
metadata.MessageId
metadata.CorrelationId
metadata.SessionId
metadata.Label
metadata.ReplyTo
metadata.PartitionKey
metadata.To
metadata.ContentType
metadata.ScheduledEnqueueTimeUtc
metadata.ReplyToSessionId
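As a minimal sketch, assuming a component named servicebus-pubsub (as in the example above), a hypothetical topic named orders, and the default Dapr HTTP port 3500, the metadata can be passed as query parameters on the publish request:
curl -X POST "http://localhost:3500/v1.0/publish/servicebus-pubsub/orders?metadata.MessageId=order-123&metadata.CorrelationId=corr-1" \
  -H "Content-Type: application/json" \
  -d '{"orderId": "123"}'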
Receiving a message with metadata
When Dapr calls your application, it attaches Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata. In addition to the settable metadata listed above, you can also access the following read-only message metadata.
metadata.DeliveryCount
metadata.LockedUntilUtc
metadata.LockToken
metadata.EnqueuedTimeUtc
metadata.SequenceNumber
To find out more details on the purpose of any of these metadata properties refer to the official Azure Service Bus documentation.
In addition, all entries of ApplicationProperties
from the original Azure Service Bus message are appended as metadata.<application property's name>
.
Note
All times are populated by the server and are not adjusted for clock skews.
Sending and receiving multiple messages
Azure Service Bus supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.
Configuring bulk publish
To set the metadata for a bulk publish operation, set the query parameters on the HTTP request or the gRPC metadata as documented here.
Metadata | Default |
---|---|
metadata.maxBulkPubBytes |
131072 (128 KiB) |
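As a sketch, assuming the alpha bulk publish endpoint (v1.0-alpha1 at the time of writing), a component named servicebus-pubsub, and a hypothetical topic orders, maxBulkPubBytes can be passed as a query parameter; entry IDs and payloads below are illustrative:
curl -X POST "http://localhost:3500/v1.0-alpha1/publish/bulk/servicebus-pubsub/orders?metadata.maxBulkPubBytes=65536" \
  -H "Content-Type: application/json" \
  -d '[
        {"entryId": "1", "event": {"orderId": "1"}, "contentType": "application/json"},
        {"entryId": "2", "event": {"orderId": "2"}, "contentType": "application/json"}
      ]'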
Configuring bulk subscribe
When subscribing to a topic, you can configure bulkSubscribe
options. Refer to Subscribing messages in bulk for more details. Learn more about the bulk subscribe API.
Configuration | Default |
---|---|
maxMessagesCount |
100 |
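As a sketch, a declarative subscription could enable bulk subscribe for a hypothetical topic orders; the subscription name, route, and pubsub name below are illustrative:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  topic: orders
  routes:
    default: /orders
  pubsubname: servicebus-pubsub
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 100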
Create an Azure Service Bus broker for queues
Follow the instructions here on setting up Azure Service Bus Queues.
Note
Your queue must have the same name as the topic you are publishing to with Dapr. For example, if you are publishing to the pub/sub "myPubsub" on the topic "orders", your queue must be named "orders".
If you are using a shared access policy to connect to the queue, that policy must be able to “manage” the queue. To work with a dead-letter queue, the policy must live on the Service Bus Namespace that contains both the main queue and the dead-letter queue.
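As a sketch, assuming the Azure CLI and an existing Service Bus namespace, you could create a queue whose name matches the topic; the resource group and namespace names below are placeholders:
az servicebus queue create \
  --resource-group my-resource-group \
  --namespace-name my-servicebus-namespace \
  --name orders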
Retry policy and dead-letter queues
By default, an Azure Service Bus Queue has a dead-letter queue. Messages are retried the number of times specified by maxDeliveryCount. The default maxDeliveryCount is 10, but it can be set as high as 2000. These retries happen very rapidly, and the message is put in the dead-letter queue if no success is returned.
Dapr Pub/sub offers its own dead-letter queue concept that lets you control the retry policy and subscribe to the dead-letter queue through Dapr.
- Set up a separate queue as the dead-letter queue in the Azure Service Bus namespace, and a resiliency policy that defines how to retry.
- Subscribe to the topic to get the failed messages and deal with them.
For example, setting up a dead-letter queue orders-dlq
in the subscription and a resiliency policy lets you subscribe to the topic orders-dlq
to handle failed messages.
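As a sketch, a declarative subscription could route failed messages for a hypothetical orders topic to orders-dlq; all names below are illustrative:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  topic: orders
  routes:
    default: /orders
  pubsubname: servicebus-pubsub
  deadLetterTopic: orders-dlq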
For more details on setting up dead-letter queues, see the dead-letter article.
Related links
- Basic schema for a Dapr component
- Pub/Sub building block
- Read this guide for instructions on configuring pub/sub components
5.1.5 - Azure Service Bus Topics
Component format
To set up Azure Service Bus Topics pub/sub, create a component of type pubsub.azure.servicebus.topics
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
This component uses topics on Azure Service Bus; see the official documentation for the differences between topics and queues.
For using queues, see the Azure Service Bus Queues pubsub component.
Connection String Authentication
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: servicebus-pubsub
spec:
type: pubsub.azure.servicebus.topics
version: v1
metadata:
# Required when not using Microsoft Entra ID Authentication
- name: connectionString
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
# - name: consumerID # Optional: defaults to the app's own ID
# value: channel1
# - name: timeoutInSec # Optional
# value: 60
# - name: handlerTimeoutInSec # Optional
# value: 60
# - name: disableEntityManagement # Optional
# value: "false"
# - name: maxDeliveryCount # Optional
# value: 3
# - name: lockDurationInSec # Optional
# value: 60
# - name: lockRenewalInSec # Optional
# value: 20
# - name: maxActiveMessages # Optional
# value: 10000
# - name: maxConcurrentHandlers # Optional
# value: 10
# - name: defaultMessageTimeToLiveInSec # Optional
# value: 10
# - name: autoDeleteOnIdleInSec # Optional
# value: 3600
# - name: minConnectionRecoveryInSec # Optional
# value: 2
# - name: maxConnectionRecoveryInSec # Optional
# value: 300
# - name: maxRetriableErrorsPerSec # Optional
# value: 10
# - name: publishMaxRetries # Optional
# value: 5
# - name: publishInitialRetryIntervalInMs # Optional
# value: 500
NOTE: The above settings are shared across all topics that use this component.
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | Shared access policy connection string for the Service Bus. Required unless using Microsoft Entra ID authentication. | See example above |
namespaceName |
N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | "namespace.servicebus.windows.net" |
consumerID |
N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. |
Can be set to a string value (such as "channel1" in the example above) or a string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
timeoutInSec |
N | Timeout for sending messages and for management operations. Default: 60 |
30 |
handlerTimeoutInSec |
N | Timeout for invoking the app’s handler. Default: 60 |
30 |
lockRenewalInSec |
N | Defines the frequency at which buffered message locks will be renewed. Default: 20 . |
20 |
maxActiveMessages |
N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: 1000 |
2000 |
maxConcurrentHandlers |
N | Defines the maximum number of concurrent message handlers. Default: 0 (unlimited) |
10 |
disableEntityManagement |
N | When set to true, queues and subscriptions do not get created automatically. Default: "false" |
"true" , "false" |
defaultMessageTimeToLiveInSec |
N | Default message time to live, in seconds. Used during subscription creation only. | 10 |
autoDeleteOnIdleInSec |
N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: 0 (disabled) |
3600 |
maxDeliveryCount |
N | Defines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server. | 10 |
lockDurationInSec |
N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | 30 |
minConnectionRecoveryInSec |
N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: 2 |
5 |
maxConnectionRecoveryInSec |
N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: 300 (5 minutes) |
600 |
maxRetriableErrorsPerSec |
N | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: 10 |
10 |
publishMaxRetries |
N | The maximum number of publish retries when Azure Service Bus responds with "too busy" in order to throttle messages. Default: 5 |
5 |
publishInitialRetryIntervalInMs |
N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: 500 |
500 |
Microsoft Entra ID authentication
The Azure Service Bus Topics pubsub component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Example Configuration
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: servicebus-pubsub
spec:
type: pubsub.azure.servicebus.topics
version: v1
metadata:
- name: namespaceName
# Required when using Azure Authentication.
# Must be a fully-qualified domain name
value: "servicebusnamespace.servicebus.windows.net"
- name: azureTenantId
value: "***"
- name: azureClientId
value: "***"
- name: azureClientSecret
value: "***"
Message metadata
Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message.
Sending a message with metadata
To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here.
metadata.MessageId
metadata.CorrelationId
metadata.SessionId
metadata.Label
metadata.ReplyTo
metadata.PartitionKey
metadata.To
metadata.ContentType
metadata.ScheduledEnqueueTimeUtc
metadata.ReplyToSessionId
Note: The metadata.MessageId property does not set the id property of the cloud event returned by Dapr and should be treated in isolation.
Note: If the metadata.SessionId property is not set but the topic requires sessions, then an empty session ID will be used.
Note: The metadata.ScheduledEnqueueTimeUtc property supports the RFC1123 and RFC3339 timestamp formats.
Receiving a message with metadata
When Dapr calls your application, it will attach Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata. In addition to the settable metadata listed above, you can also access the following read-only message metadata.
metadata.DeliveryCount
metadata.LockedUntilUtc
metadata.LockToken
metadata.EnqueuedTimeUtc
metadata.SequenceNumber
To find out more details on the purpose of any of these metadata properties refer to the official Azure Service Bus documentation.
In addition, all entries of ApplicationProperties
from the original Azure Service Bus message are appended as metadata.<application property's name>
.
Note: All times are populated by the server and are not adjusted for clock skews.
Subscribe to a session enabled topic
To subscribe to a topic that has sessions enabled, you can provide the following properties in the subscription metadata:
requireSessions (default: false)
sessionIdleTimeoutInSec (default: 60)
maxConcurrentSessions (default: 8)
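As a sketch, a programmatic subscription returned from your application's /dapr/subscribe endpoint could pass these properties in its metadata map; the pubsub name, topic, route, and values below are illustrative:
[
  {
    "pubsubname": "servicebus-pubsub",
    "topic": "orders",
    "route": "/orders",
    "metadata": {
      "requireSessions": "true",
      "maxConcurrentSessions": "4"
    }
  }
]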
Create an Azure Service Bus broker for topics
Follow the instructions here on setting up Azure Service Bus Topics.
Related links
- Basic schema for a Dapr component
- Pub/Sub building block
- Read this guide for instructions on configuring pub/sub components
5.1.6 - GCP
Create a Dapr component
To set up GCP pub/sub, create a component of type pubsub.gcp.pubsub
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: gcp-pubsub
spec:
type: pubsub.gcp.pubsub
version: v1
metadata:
- name: type
value: service_account
- name: projectId
value: <PROJECT_ID> # replace
- name: endpoint # Optional.
value: "http://localhost:8085"
- name: consumerID # Optional - defaults to the app's own ID
value: <CONSUMER_ID>
- name: identityProjectId
value: <IDENTITY_PROJECT_ID> # replace
- name: privateKeyId
value: <PRIVATE_KEY_ID> #replace
- name: clientEmail
value: <CLIENT_EMAIL> #replace
- name: clientId
value: <CLIENT_ID> # replace
- name: authUri
value: https://accounts.google.com/o/oauth2/auth
- name: tokenUri
value: https://oauth2.googleapis.com/token
- name: authProviderX509CertUrl
value: https://www.googleapis.com/oauth2/v1/certs
- name: clientX509CertUrl
value: https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com #replace PROJECT_NAME
- name: privateKey
value: <PRIVATE_KEY> # replace x509 cert
- name: disableEntityManagement
value: "false"
- name: enableMessageOrdering
value: "false"
- name: orderingKey # Optional
value: <ORDERING_KEY>
- name: maxReconnectionAttempts # Optional
value: 30
- name: connectionRecoveryInSec # Optional
value: 2
- name: deadLetterTopic # Optional
value: <EXISTING_PUBSUB_TOPIC>
- name: maxDeliveryAttempts # Optional
value: 5
- name: maxOutstandingMessages # Optional
value: 1000
- name: maxOutstandingBytes # Optional
value: 1000000000
- name: maxConcurrentConnections # Optional
value: 10
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
projectId | Y | GCP project ID | myproject-123 |
endpoint | N | GCP endpoint for the component to use. Only used for local development, for example, with the GCP Pub/Sub Emulator. The endpoint is unnecessary when running against the GCP production API. |
"http://localhost:8085" |
consumerID |
N | The Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. The consumerID, along with the topic provided as part of the request, is used to build the Pub/Sub subscription ID |
Can be set to a string value (such as "channel1") or a string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | "myproject-123" |
privateKeyId | N | If using explicit credentials, this field should contain the private_key_id field from the service account json document |
"my-private-key" |
privateKey | N | If using explicit credentials, this field should contain the private_key field from the service account json |
-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B |
clientEmail | N | If using explicit credentials, this field should contain the client_email field from the service account json |
"myservice@myproject-123.iam.gserviceaccount.com" |
clientId | N | If using explicit credentials, this field should contain the client_id field from the service account json |
106234234234 |
authUri | N | If using explicit credentials, this field should contain the auth_uri field from the service account json |
https://accounts.google.com/o/oauth2/auth |
tokenUri | N | If using explicit credentials, this field should contain the token_uri field from the service account json |
https://oauth2.googleapis.com/token |
authProviderX509CertUrl | N | If using explicit credentials, this field should contain the auth_provider_x509_cert_url field from the service account json |
https://www.googleapis.com/oauth2/v1/certs |
clientX509CertUrl | N | If using explicit credentials, this field should contain the client_x509_cert_url field from the service account json |
https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com |
disableEntityManagement | N | When set to "true" , topics and subscriptions do not get created automatically. Default: "false" |
"true" , "false" |
enableMessageOrdering | N | When set to "true" , subscribed messages will be received in order, depending on publishing and permissions configuration. |
"true" , "false" |
orderingKey | N | The key provided in the request. It’s used when enableMessageOrdering is set to true to order messages based on such key. |
“my-orderingkey” |
maxReconnectionAttempts | N | Defines the maximum number of reconnect attempts. Default: 30 |
30 |
connectionRecoveryInSec | N | Time in seconds to wait between connection recovery attempts. Default: 2 |
2 |
deadLetterTopic | N | Name of the GCP Pub/Sub Topic. This topic must exist before using this component. | "myapp-dlq" |
maxDeliveryAttempts | N | Maximum number of attempts to deliver the message. If deadLetterTopic is specified, maxDeliveryAttempts is the maximum number of attempts for failed processing of messages. Once that number is reached, the message will be moved to the dead-letter topic. Default: 5 |
5 |
type | N | DEPRECATED GCP credentials type. Only service_account is supported. Defaults to service_account |
service_account |
maxOutstandingMessages | N | Maximum number of outstanding messages a given streaming-pull connection can have. Default: 1000 |
50 |
maxOutstandingBytes | N | Maximum number of outstanding bytes a given streaming-pull connection can have. Default: 1000000000 |
1000000000 |
maxConcurrentConnections | N | Maximum number of concurrent streaming-pull connections to be maintained. Default: 10 |
2 |
ackDeadline | N | Message acknowledgement duration deadline. Default: 20s |
1m |
Warning
IfenableMessageOrdering
is set to “true”, the roles/viewer or roles/pubsub.viewer role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to Subscription.Config() fails for any other reason, ordering by embedded order tokens will still function correctly.
GCP Credentials
Since the GCP Pub/Sub component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained further in the Authenticate to GCP Cloud services using client libraries guide.
Create a GCP Pub/Sub
For local development, the GCP Pub/Sub Emulator is used to test the GCP Pub/Sub Component. Follow these instructions to run the GCP Pub/Sub Emulator.
To run the GCP Pub/Sub Emulator locally using Docker, use the following docker-compose.yaml
:
version: '3'
services:
pubsub:
image: gcr.io/google.com/cloudsdktool/cloud-sdk:422.0.0-emulators
ports:
- "8085:8085"
container_name: gcp-pubsub
entrypoint: gcloud beta emulators pubsub start --project local-test-prj --host-port 0.0.0.0:8085
To use the GCP Pub/Sub Emulator with your pub/sub component, you need to provide the endpoint
configuration in the component metadata. The endpoint
is unnecessary when running against the GCP Production API.
The projectId attribute must match the --project
used in either the docker-compose.yaml
or Docker command.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: gcp-pubsub
spec:
type: pubsub.gcp.pubsub
version: v1
metadata:
- name: projectId
value: "local-test-prj"
- name: consumerID
value: "testConsumer"
- name: endpoint
value: "localhost:8085"
You can use either "explicit" or "implicit" credentials to configure access to your GCP pub/sub instance. If using explicit credentials, most fields are required. Implicit credentials rely on Dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) that has the necessary permissions to access pub/sub. In implicit mode, only the projectId attribute is needed; all others are optional.
Follow the instructions here on setting up a Google Cloud Pub/Sub system.
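As a sketch, if disableEntityManagement is set to "true" (so Dapr does not create entities for you), the topic and subscription can be created ahead of time with the gcloud CLI; the names below are illustrative, and the subscription ID must match the one Dapr builds from the consumerID and topic (see the consumerID field above):
gcloud pubsub topics create orders
gcloud pubsub subscriptions create orders-myconsumer --topic orders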
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.7 - In-memory
The in-memory pub/sub component operates within a single Dapr sidecar. This is primarily meant for development purposes. State is not replicated across multiple sidecars and is lost when the Dapr sidecar is restarted.
Component format
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
spec:
type: pubsub.in-memory
version: v1
metadata: []
Note: in-memory does not require any specific metadata for the component to work; however, spec.metadata is a required field.
Related links
- Basic schema for a Dapr component in the Related links section
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.8 - JetStream
Component format
To set up JetStream pub/sub, create a component of type pubsub.jetstream
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: jetstream-pubsub
spec:
type: pubsub.jetstream
version: v1
metadata:
- name: natsURL
value: "nats://localhost:4222"
- name: jwt # Optional. Used for decentralized JWT authentication.
value: "eyJhbGciOiJ...6yJV_adQssw5c"
- name: seedKey # Optional. Used for decentralized JWT authentication.
value: "SUACS34K232O...5Z3POU7BNIL4Y"
- name: tls_client_cert # Optional. Used for TLS Client authentication.
value: "/path/to/tls.crt"
- name: tls_client_key # Optional. Used for TLS Client authentication.
value: "/path/to/tls.key"
- name: token # Optional. Used for token based authentication.
value: "my-token"
- name: name
value: "my-conn-name"
- name: streamName
value: "my-stream"
- name: durableName
value: "my-durable-subscription"
- name: queueGroupName
value: "my-queue-group"
- name: startSequence
value: 1
- name: startTime # In Unix format
value: 1630349391
- name: flowControl
value: false
- name: ackWait
value: 10s
- name: maxDeliver
value: 5
- name: backOff
value: "50ms, 1s, 10s"
- name: maxAckPending
value: 5000
- name: replicas
value: 1
- name: memoryStorage
value: false
- name: rateLimit
value: 1024
- name: heartbeat
value: 15s
- name: ackPolicy
value: explicit
- name: deliverPolicy
value: all
- name: domain
value: hub
- name: apiPrefix
value: PREFIX
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
natsURL | Y | NATS server address URL | "nats://localhost:4222" |
jwt | N | NATS decentralized authentication JWT | "eyJhbGciOiJ...6yJV_adQssw5c" |
seedKey | N | NATS decentralized authentication seed key | "SUACS34K232O...5Z3POU7BNIL4Y" |
tls_client_cert | N | NATS TLS Client Authentication Certificate | "/path/to/tls.crt" |
tls_client_key | N | NATS TLS Client Authentication Key | "/path/to/tls.key" |
token | N | NATS token based authentication | "my-token" |
name | N | NATS connection name | "my-conn-name" |
streamName | N | Name of the JetStream Stream to bind to | "my-stream" |
durableName | N | Durable name | "my-durable" |
queueGroupName | N | Queue group name | "my-queue" |
startSequence | N | Start Sequence | 1 |
startTime | N | Start Time in Unix format | 1630349391 |
flowControl | N | Flow Control | true |
ackWait | N | Ack Wait | 10s |
maxDeliver | N | Max Deliver | 15 |
backOff | N | BackOff | "50ms, 1s, 5s, 10s" |
maxAckPending | N | Max Ack Pending | 5000 |
replicas | N | Replicas | 3 |
memoryStorage | N | Memory Storage | false |
rateLimit | N | Rate Limit | 1024 |
heartbeat | N | Heartbeat | 10s |
ackPolicy | N | Ack Policy | explicit |
deliverPolicy | N | One of: all, last, new, sequence, time | all |
domain | N | JetStream Leafnodes domain | HUB |
apiPrefix | N | JetStream Leafnodes API prefix | PREFIX |
Create a NATS server
You can run a NATS Server with JetStream enabled locally using Docker:
docker run -d -p 4222:4222 nats:latest -js
You can then interact with the server using the client port: localhost:4222
.
Install NATS JetStream on Kubernetes by using Helm:
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install --set nats.jetstream.enabled=true my-nats nats/nats
This installs a single NATS server into the default
namespace. To interact with NATS, find the service with:
kubectl get svc my-nats
For more information on helm chart settings, see the Helm chart documentation.
Create JetStream
It is essential to create a NATS JetStream stream for a specific subject. For example, for a NATS server running locally, use:
nats -s localhost:4222 stream add myStream --subjects mySubject
Example: Competing consumers pattern
Let’s say you’d like each message to be processed by only one application or pod with the same app-id. Typically, the consumerID
metadata spec helps you define competing consumers.
Since consumerID
is not supported in NATS JetStream, you need to specify durableName
and queueGroupName
to achieve the competing consumers pattern. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
spec:
type: pubsub.jetstream
version: v1
metadata:
- name: name
value: "my-conn-name"
- name: streamName
value: "my-stream"
- name: durableName
value: "my-durable-subscription"
- name: queueGroupName
value: "my-queue-group"
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
- JetStream Documentation
- NATS CLI
5.1.9 - KubeMQ
Component format
To set up KubeMQ pub/sub, create a component of type pubsub.kubemq
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kubemq-pubsub
spec:
type: pubsub.kubemq
version: v1
metadata:
- name: address
value: localhost:50000
- name: store
value: false
- name: consumerID
value: channel1
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
address | Y | Address of the KubeMQ server | "localhost:50000" |
store | N | Type of pub/sub. true: persisted pub/sub (EventsStore); false: in-memory pub/sub (Events) | true or false (default is false) |
consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. |
Can be set to a string value (such as "channel1" in the example above) or a string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
clientID | N | Client ID name for the connection | sub-client-12345 |
authToken | N | Auth JWT token for the connection. Check out KubeMQ Authentication | ew... |
group | N | Subscriber group for load balancing | g1 |
disableReDelivery | N | Sets whether a message should be re-delivered when the application returns an error | true or false (default is false) |
Create a KubeMQ broker
- Obtain KubeMQ Key.
- Wait for an email confirmation with your Key
You can run a KubeMQ broker with Docker:
docker run -d -p 8080:8080 -p 50000:50000 -p 9090:9090 -e KUBEMQ_TOKEN=<your-key> kubemq/kubemq
You can then interact with the server using the client port: localhost:50000
- Obtain KubeMQ Key.
- Wait for an email confirmation with your Key
Then run the following kubectl commands:
kubectl apply -f https://deploy.kubemq.io/init
kubectl apply -f https://deploy.kubemq.io/key/<your-key>
Install KubeMQ CLI
Go to KubeMQ CLI and download the latest version of the CLI.
Browse KubeMQ Dashboard
Open a browser and navigate to http://localhost:8080
With KubeMQCTL installed, run the following command:
kubemqctl get dashboard
Or, with kubectl installed, run port-forward command:
kubectl port-forward svc/kubemq-cluster-api -n kubemq 8080:8080
KubeMQ Documentation
Visit KubeMQ Documentation for more information.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/sub building block
5.1.10 - MQTT
Component format
To set up MQTT pub/sub, create a component of type pubsub.mqtt
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt
version: v1
metadata:
- name: url
value: "tcp://[username][:password]@host.domain[:port]"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
- name: consumerID
value: "channel1"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
url | Y | Address of the MQTT broker. Can be secretKeyRef to use a secret reference. Use the tcp:// URI scheme for non-TLS communication. Use the ssl:// URI scheme for TLS communication. |
"tcp://[username][:password]@host.domain[:port]" |
consumerID | N | The client ID used to connect to the MQTT broker for the consumer connection. Defaults to the Dapr app ID. Note: if producerID is not set, -consumer is appended to this value for the consumer connection |
Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}" , etc.). See all of template tags you can use in your component metadata. |
producerID | N | The client ID used to connect to the MQTT broker for the producer connection. Defaults to {consumerID}-producer . |
"myMqttProducerApp" |
qos | N | Indicates the Quality of Service Level (QoS) of the message (more info). Defaults to 1 . |
0 , 1 , 2 |
retain | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false" . |
"true" , "false" |
cleanSession | N | Sets the clean_session flag in the connection message to the MQTT broker if "true" (more info). Defaults to "false" . |
"true" , "false" |
caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | "-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with clientKey . |
"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientKey | Required for using TLS | TLS client key in PEM format. Must be used with clientCert . Can be secretKeyRef to use a secret reference. |
"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----" |
Enabling message delivery retries
The MQTT pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. If the service marks the message as not processed, the message is not acknowledged back to the broker; it is retried only if the broker redelivers it.
To make Dapr use more sophisticated retry policies, you can apply a retry resiliency policy to the MQTT pub/sub component (see the sketch after the list below).
There is a crucial difference between the two ways of retrying:
- Re-delivery of unacknowledged messages is completely dependent on the broker. Dapr does not guarantee it. Some brokers, such as emqx and vernemq, support it, but it is not part of the MQTT3 spec.
- Using a retry resiliency policy makes the same Dapr sidecar retry redelivering the messages, so it is the same Dapr sidecar and the same app receiving the same message.
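As a minimal sketch of the second approach, a resiliency policy could target inbound messages of the mqtt-pubsub component defined above; the policy name and retry values below are illustrative:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: mqtt-retries
spec:
  policies:
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 10
  targets:
    components:
      mqtt-pubsub:
        inbound:
          retry: pubsubRetry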
Communication using TLS
To configure communication using TLS, ensure that the MQTT broker (for example, mosquitto) is configured to support certificates and provide the caCert
, clientCert
, clientKey
metadata in the component configuration. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt
version: v1
metadata:
- name: url
value: "ssl://host.domain[:port]"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
- name: caCert
value: ${{ myLoadedCACert }}
- name: clientCert
value: ${{ myLoadedClientCert }}
- name: clientKey
secretKeyRef:
name: myMqttClientKey
key: myMqttClientKey
auth:
secretStore: <SECRET_STORE_NAME>
Note that while the caCert
and clientCert
values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
Consuming a shared topic
When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each dapr run
with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component’s consumerID
metadata with a {uuid}
tag, which will give each instance a randomly generated consumerID
value on start up. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt
version: v1
metadata:
- name: consumerID
value: "{uuid}"
- name: url
value: "tcp://admin:public@localhost:1883"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Note that in this case, the value of the consumer ID is random every time Dapr restarts, so we are setting cleanSession to true as well.
Create a MQTT broker
You can run an MQTT broker locally using Docker:
docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6
You can then interact with the server using the client port: mqtt://localhost:1883
You can run an MQTT broker in Kubernetes using the following YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
replicas: 1
selector:
matchLabels:
app-name: mqtt-broker
template:
metadata:
labels:
app-name: mqtt-broker
spec:
containers:
- name: mqtt
image: eclipse-mosquitto:1.6
imagePullPolicy: IfNotPresent
ports:
- name: default
containerPort: 1883
protocol: TCP
- name: websocket
containerPort: 9001
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
type: ClusterIP
selector:
app-name: mqtt-broker
ports:
- port: 1883
targetPort: default
name: default
protocol: TCP
- port: 9001
targetPort: websocket
name: websocket
protocol: TCP
You can then interact with the server using the client port: tcp://mqtt-broker.default.svc.cluster.local:1883
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.11 - MQTT3
Component format
To set up an MQTT3 pub/sub, create a component of type pubsub.mqtt3
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt3
version: v1
metadata:
- name: url
value: "tcp://[username][:password]@host.domain[:port]"
# Optional
- name: retain
value: "false"
- name: cleanSession
value: "false"
- name: qos
value: "1"
- name: consumerID
value: "channel1"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
url |
Y | Address of the MQTT broker. Can be secretKeyRef to use a secret reference. Use the tcp:// URI scheme for non-TLS communication. Use the ssl:// URI scheme for TLS communication. |
"tcp://[username][:password]@host.domain[:port]" |
consumerID |
N | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}" , etc.). See all of template tags you can use in your component metadata. |
retain |
N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false" . |
"true" , "false" |
cleanSession |
N | Sets the clean_session flag in the connection message to the MQTT broker if "true" (more info). Defaults to "false" . |
"true" , "false" |
caCert |
Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | See example below |
clientCert |
Required for using TLS | TLS client certificate in PEM format. Must be used with clientKey . |
See example below |
clientKey |
Required for using TLS | TLS client key in PEM format. Must be used with clientCert . Can be secretKeyRef to use a secret reference. |
See example below |
qos |
N | Indicates the Quality of Service Level (QoS) of the message (more info). Defaults to 1 . |
0 , 1 , 2 |
Communication using TLS
To configure communication using TLS, ensure that the MQTT broker (for example, emqx) is configured to support certificates and provide the caCert
, clientCert
, clientKey
metadata in the component configuration. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt3
version: v1
metadata:
- name: url
value: "ssl://host.domain[:port]"
# TLS configuration
- name: caCert
value: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
- name: clientCert
value: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
- name: clientKey
secretKeyRef:
name: myMqttClientKey
key: myMqttClientKey
# Optional
- name: retain
value: "false"
- name: cleanSession
value: "false"
- name: qos
value: 1
Note that while the caCert
and clientCert
values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
Consuming a shared topic
When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each dapr run
with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component’s consumerID
metadata with a {uuid}
tag (which will give each instance a randomly generated value on start up) or {podName}
(which will use the Pod’s name on Kubernetes). For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt3
version: v1
metadata:
- name: consumerID
value: "{uuid}"
- name: cleanSession
value: "true"
- name: url
value: "tcp://admin:public@localhost:1883"
- name: qos
value: 1
- name: retain
value: "false"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Note that in this case, the value of the consumer ID is random every time Dapr restarts, so you should set cleanSession to true as well.
It is recommended to use StatefulSets with shared subscriptions.
Create a MQTT3 broker
You can run an MQTT broker like emqx locally using Docker:
docker run -d -p 1883:1883 --name mqtt emqx:latest
You can then interact with the server using the client port: tcp://localhost:1883
You can run an MQTT3 broker in Kubernetes using the following YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
replicas: 1
selector:
matchLabels:
app-name: mqtt-broker
template:
metadata:
labels:
app-name: mqtt-broker
spec:
containers:
- name: mqtt
image: emqx:latest
imagePullPolicy: IfNotPresent
ports:
- name: default
containerPort: 1883
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
type: ClusterIP
selector:
app-name: mqtt-broker
ports:
- port: 1883
targetPort: default
name: default
protocol: TCP
You can then interact with the server using the client port: tcp://mqtt-broker.default.svc.cluster.local:1883
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.12 - Pulsar
Component format
To set up Apache Pulsar pub/sub, create a component of type pubsub.pulsar
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
For more information on Apache Pulsar, read the official docs.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pulsar-pubsub
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: enableTLS
value: "false"
- name: tenant
value: "public"
- name: token
value: "eyJrZXlJZCI6InB1bHNhci1wajU0cXd3ZHB6NGIiLCJhbGciOiJIUzI1NiJ9.eyJzd"
- name: consumerID
value: "channel1"
- name: namespace
value: "default"
- name: persistent
value: "true"
- name: disableBatching
value: "false"
- name: receiverQueueSize
value: "1000"
- name: <topic-name>.jsonschema # sets a json schema validation for the configured topic
value: |
{
"type": "record",
"name": "Example",
"namespace": "test",
"fields": [
{"name": "ID","type": "int"},
{"name": "Name","type": "string"}
]
}
- name: <topic-name>.avroschema # sets an avro schema validation for the configured topic
value: |
{
"type": "record",
"name": "Example",
"namespace": "test",
"fields": [
{"name": "ID","type": "int"},
{"name": "Name","type": "string"}
]
}
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets. This component supports storing the token parameter and any other sensitive parameter and data as Kubernetes Secrets.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
host | Y | Address of the Pulsar broker. Default is "localhost:6650" |
"localhost:6650" OR "http://pulsar-pj54qwwdpz4b-pulsar.ap-sg.public.pulsar.com:8080" |
enableTLS | N | Enable TLS. Default: "false" |
"true" , "false" |
tenant | N | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and spread across clusters. Default: "public" |
"public" |
consumerID | N | Used to set the subscription name or consumer ID. | Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}" , etc.). See all of template tags you can use in your component metadata. |
namespace | N | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: "default" |
"default" |
persistent | N | Pulsar supports two kinds of topics: persistent and non-persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks. | |
disableBatching | N | Disable batching. When batching is enabled, the default batch delay is 10 ms and the default batch size is 1000 messages. Setting disableBatching: true makes the producer send messages individually. Default: "false" |
"true" , "false" |
receiverQueueSize | N | Sets the size of the consumer receiver queue. Controls how many messages can be accumulated by the consumer before it is explicitly called to read messages by Dapr. Default: "1000" |
"1000" |
batchingMaxPublishDelay | N | Sets the time period within which sent messages are batched, if batching is enabled. If set to a non-zero value, messages are queued until this time interval elapses, or batchingMaxMessages (see below) or batchingMaxSize (see below) is reached. There are two valid formats: a fraction with a unit suffix, or a plain number that is interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default: "10ms" |
"10ms" , "10" |
batchingMaxMessages | N | Sets the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached, batchingMaxSize (see below) is reached, or the batch interval has elapsed. Default: "1000" |
"1000" |
batchingMaxSize | N | Sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached, batchingMaxMessages (see above) is reached, or the batch interval has elapsed. Default: "128KB" |
"131072" |
<topic-name>.jsonschema | N | Enforces JSON schema validation for the configured topic. | |
<topic-name>.avroschema | N | Enforces Avro schema validation for the configured topic. | |
publicKey | N | A public key to be used for publisher and consumer encryption. Value can be one of two options: file path for a local PEM cert, or the cert data string value | |
privateKey | N | A private key to be used for consumer encryption. Value can be one of two options: file path for a local PEM cert, or the cert data string value | |
keys | N | A comma delimited string containing names of Pulsar session keys. Used in conjunction with publicKey for publisher encryption |
|
processMode | N | Enable processing multiple messages at once. Default: "async" |
"async" , "sync" |
subscribeType | N | Pulsar supports four kinds of subscription types. Default: "shared" |
"shared" , "exclusive" , "failover" , "key_shared" |
subscribeInitialPosition | N | Subscription position is the initial position which the cursor is set when start consuming. Default: "latest" |
"latest" , "earliest" |
subscribeMode | N | Subscription mode indicates the cursor persistence, durable subscription retains messages and persists the current position. Default: "durable" |
"durable" , "non_durable" |
partitionKey | N | Sets the key of the message for routing policy. Default: "" |
|
maxConcurrentHandlers |
N | Defines the maximum number of concurrent message handlers. Default: 100 |
10 |
Authenticate using Token
To authenticate to pulsar using a static JWT token, you can use the following metadata field:
Field | Required | Details | Example |
---|---|---|---|
token | N | Token used for authentication. | How to create Pulsar token |
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "pulsar.example.com:6650"
- name: token
secretKeyRef:
name: pulsar
key: token
Authenticate using OIDC
Since v3.0, Pulsar supports OIDC authentication.
To enable OIDC authentication, you need to provide the following OAuth2 parameters to the component spec.
OAuth2 authentication cannot be used in combination with token authentication.
It is recommended that you use a secret reference for the client secret.
The Pulsar OAuth2 authenticator is not specifically compliant with OIDC, so it is your responsibility to ensure fields are compliant. For example, the issuer URL must use the https protocol, the requested scopes must include openid, and so on.
If the oauth2TokenCAPEM field is omitted, then the system's certificate pool is used for connecting to the OAuth2 issuer if using https.
Field | Required | Details | Example |
---|---|---|---|
oauth2TokenURL | N | URL to request the OIDC client_credentials token from. Must not be empty. | "https://oauth.example.com/o/oauth2/token" |
oauth2TokenCAPEM | N | CA PEM certificate bundle to connect to the OAuth2 issuer. If not defined, the system’s certificate pool will be used. | "---BEGIN CERTIFICATE---\n...\n---END CERTIFICATE---" |
oauth2ClientID | N | OIDC client ID. Must not be empty. | "my-client-id" |
oauth2ClientSecret | N | OIDC client secret. Must not be empty. | "my-client-secret" |
oauth2Audiences | N | Comma separated list of audiences to request for. Must not be empty. | "my-audience-1,my-audience-2" |
oauth2Scopes | N | Comma separated list of scopes to request. Must not be empty. | "openid,profile,email" |
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "pulsar.example.com:6650"
- name: oauth2TokenURL
value: https://oauth.example.com/o/oauth2/token
- name: oauth2TokenCAPEM
value: "---BEGIN CERTIFICATE---\n...\n---END CERTIFICATE---"
- name: oauth2ClientID
value: my-client-id
- name: oauth2ClientSecret
secretKeyRef:
name: pulsar-oauth2
key: my-client-secret
- name: oauth2Audiences
value: "my.pulsar.example.com,another.pulsar.example.com"
- name: oauth2Scopes
value: "openid,profile,email"
Enabling message delivery retries
The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once, and the message is not retried in case of failure. To make Dapr use more sophisticated retry policies, you can apply a retry resiliency policy to the Pulsar pub/sub component. Note that it is the same Dapr sidecar retrying the redelivery of the message to the same app instance, not other instances.
Delay queue
When invoking the Pulsar pub/sub, it’s possible to provide an optional delay queue by using the metadata
query parameters in the request url.
These optional parameter names are metadata.deliverAt
or metadata.deliverAfter
:
deliverAt
: Delay message to deliver at a specified time (RFC3339 format); for example,"2021-09-01T10:00:00Z"
deliverAfter
: Delay message to deliver after a specified amount of time; for example,"4h5m3s"
Examples:
curl -X POST http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.deliverAt='2021-09-01T10:00:00Z' \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
}
}'
Or
curl -X POST http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.deliverAfter='4h5m3s' \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
}
}'
E2E Encryption
Dapr supports setting public and private key pairs to enable Pulsar’s end-to-end encryption feature.
Enabling publisher encryption from file certs
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: ./public.key
- name: keys
value: myapp.key
Enabling consumer encryption from file certs
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: ./public.key
- name: privateKey
value: ./private.key
Enabling publisher encryption from value
Note: It is recommended to reference the public key from a secret.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1KDAM4L8RtJ+nLaXBrBh\nzVpvTemsKVZoAct8A+ShepOHT9lgHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDd\nruXSflvSdmYeFAw3Ypphc1A5oM53wSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/\na3golb36GYFrY0MLFTv7wZ87pmMIPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eK\njpwcg35gccvR6o/UhbKAuc60V1J9Wof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0Qi\nCdpIrXvYtANq0Id6gP8zJvUEdPIgNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ\n3QIDAQAB\n-----END PUBLIC KEY-----\n"
- name: keys
value: myapp.key
Enabling consumer encryption from value
Note: It is recommended to reference the public and private keys from a secret.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1KDAM4L8RtJ+nLaXBrBh\nzVpvTemsKVZoAct8A+ShepOHT9lgHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDd\nruXSflvSdmYeFAw3Ypphc1A5oM53wSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/\na3golb36GYFrY0MLFTv7wZ87pmMIPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eK\njpwcg35gccvR6o/UhbKAuc60V1J9Wof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0Qi\nCdpIrXvYtANq0Id6gP8zJvUEdPIgNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ\n3QIDAQAB\n-----END PUBLIC KEY-----\n"
- name: privateKey
value: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA1KDAM4L8RtJ+nLaXBrBhzVpvTemsKVZoAct8A+ShepOHT9lg\nHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDdruXSflvSdmYeFAw3Ypphc1A5oM53\nwSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/a3golb36GYFrY0MLFTv7wZ87pmMI\nPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eKjpwcg35gccvR6o/UhbKAuc60V1J9\nWof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0QiCdpIrXvYtANq0Id6gP8zJvUEdPIg\nNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ3QIDAQABAoIBAQCKuHnM4ac/eXM7\nQPDVX1vfgyHc3hgBPCtNCHnXfGFRvFBqavKGxIElBvGOcBS0CWQ+Rg1Ca5kMx3TQ\njSweSYhH5A7pe3Sa5FK5V6MGxJvRhMSkQi/lJZUBjzaIBJA9jln7pXzdHx8ekE16\nBMPONr6g2dr4nuI9o67xKrtfViwRDGaG6eh7jIMlEqMMc6WqyhvI67rlVDSTHFKX\njlMcozJ3IT8BtTzKg2Tpy7ReVuJEpehum8yn1ZVdAnotBDJxI07DC1cbOP4M2fHM\ngfgPYWmchauZuTeTFu4hrlY5jg0/WLs6by8r/81+vX3QTNvejX9UdTHMSIfQdX82\nAfkCKUVhAoGBAOvGv+YXeTlPRcYC642x5iOyLQm+BiSX4jKtnyJiTU2s/qvvKkIu\nxAOk3OtniT9NaUAHEZE9tI71dDN6IgTLQlAcPCzkVh6Sc5eG0MObqOO7WOMCWBkI\nlaAKKBbd6cGDJkwGCJKnx0pxC9f8R4dw3fmXWgWAr8ENiekMuvjSfjZ5AoGBAObd\ns2L5uiUPTtpyh8WZ7rEvrun3djBhzi+d7rgxEGdditeiLQGKyZbDPMSMBuus/5wH\nwfi0xUq50RtYDbzQQdC3T/C20oHmZbjWK5mDaLRVzWS89YG/NT2Q8eZLBstKqxkx\ngoT77zoUDfRy+CWs1xvXzgxagD5Yg8/OrCuXOqWFAoGAPIw3r6ELknoXEvihASxU\nS4pwInZYIYGXpygLG8teyrnIVOMAWSqlT8JAsXtPNaBtjPHDwyazfZrvEmEk51JD\nX0tA8M5ah1NYt+r5JaKNxp3P/8wUT6lyszyoeubWJsnFRfSusuq/NRC+1+KDg/aq\nKnSBu7QGbm9JoT2RrmBv5RECgYBRn8Lj1I1muvHTNDkiuRj2VniOSirkUkA2/6y+\nPMKi+SS0tqcY63v4rNCYYTW1L7Yz8V44U5mJoQb4lvpMbolGhPljjxAAU3hVkItb\nvGVRlSCIZHKczADD4rJUDOS7DYxO3P1bjUN4kkyYx+lKUMDBHFzCa2D6Kgt4dobS\n5qYajQKBgQC7u7MFPkkEMqNqNGu5erytQkBq1v1Ipmf9rCi3iIj4XJLopxMgw0fx\n6jwcwNInl72KzoUBLnGQ9PKGVeBcgEgdI+a+tq+1TJo6Ta+hZSx+4AYiKY18eRKG\neNuER9NOcSVJ7Eqkcw4viCGyYDm2vgNV9HJ0VlAo3RDh8x5spEN+mg==\n-----END RSA PRIVATE KEY-----\n"
Partition Key
When invoking the Pulsar pub/sub, it’s possible to provide an optional partition key by using the metadata
query parameter in the request url.
The parameter name is partitionKey
.
Example:
curl -X POST http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.partitionKey=key1 \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
}
}'
Message headers
All other metadata key/value pairs (that are not partitionKey
) are set as headers in the Pulsar message. For example, set a correlationId
for the message:
curl -X POST "http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1" \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
}
}'
Order guarantee
To ensure that messages arrive in order for each consumer subscribed to a specific key, three conditions must be met (a configuration sketch follows the list):
- subscribeType should be set to key_shared.
- partitionKey must be set.
- processMode should be set to sync.
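For illustration, a minimal component sketch that satisfies the first and third conditions might look like the following (the host value is a placeholder; the partitionKey is still supplied per message at publish time, as shown above):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "localhost:6650"
  - name: subscribeType
    value: "key_shared"
  - name: processMode
    value: "sync"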
Create a Pulsar instance
docker run -it \
-p 6650:6650 \
-p 8080:8080 \
--mount source=pulsardata,target=/pulsar/data \
--mount source=pulsarconf,target=/pulsar/conf \
apachepulsar/pulsar:2.5.1 \
bin/pulsar standalone
For running Pulsar on Kubernetes, refer to the Pulsar Helm chart documentation.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.13 - RabbitMQ
Component format
To set up RabbitMQ pub/sub, create a component of type pubsub.rabbitmq
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: rabbitmq-pubsub
spec:
type: pubsub.rabbitmq
version: v1
metadata:
- name: connectionString
value: "amqp://localhost:5672"
- name: protocol
value: amqp
- name: hostname
value: localhost
- name: username
value: username
- name: password
value: password
- name: consumerID
value: channel1
- name: durable
value: false
- name: deletedWhenUnused
value: false
- name: autoAck
value: false
- name: deliveryMode
value: 0
- name: requeueInFailure
value: false
- name: prefetchCount
value: 0
- name: reconnectWait
value: 0
- name: concurrencyMode
value: parallel
- name: publisherConfirm
value: false
- name: enableDeadLetter # Optional enable dead Letter or not
value: true
- name: maxLen # Optional max message count in a queue
value: 3000
- name: maxLenBytes # Optional maximum length in bytes of a queue.
value: 10485760
- name: exchangeKind
value: fanout
- name: saslExternal
value: false
- name: ttlInSeconds
value: 60
- name: clientName
value: "{podName}"
- name: heartBeat
value: 10s
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString | Y* | The RabbitMQ connection string. *Mutually exclusive with the protocol, hostname, username, and password fields | amqp://user:pass@localhost:5672 |
protocol | N* | The RabbitMQ protocol. *Mutually exclusive with the connectionString field | amqp |
hostname | N* | The RabbitMQ hostname. *Mutually exclusive with the connectionString field | localhost |
username | N* | The RabbitMQ username. *Mutually exclusive with the connectionString field | username |
password | N* | The RabbitMQ password. *Mutually exclusive with the connectionString field | password |
consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. |
Can be set to a string value (such as "channel1" in the example above) or a string format value (such as "{podName}"). See all of the template tags you can use in your component metadata. |
durable | N | Whether or not to use durable queues. Defaults to "false" |
"true" , "false" |
deletedWhenUnused | N | Whether or not the queue should be configured to auto-delete. Defaults to "true" |
"true" , "false" |
autoAck | N | Whether or not the queue consumer should auto-ack messages. Defaults to "false" |
"true" , "false" |
deliveryMode | N | Persistence mode when publishing messages. Defaults to "0" . RabbitMQ treats "2" as persistent, all other numbers as non-persistent |
"0" , "2" |
requeueInFailure | N | Whether or not to requeue when sending a negative acknowledgement in case of a failure. Defaults to "false" |
"true" , "false" |
prefetchCount | N | Number of messages to prefetch. Consider changing this to a non-zero value for production environments. Defaults to "0" , which means that all available messages will be pre-fetched. |
"2" |
publisherConfirm | N | If enabled, client waits for publisher confirms after publishing a message. Defaults to "false" |
"true" , "false" |
reconnectWait | N | How long to wait (in seconds) before reconnecting if a connection failure occurs | "0" |
concurrencyMode | N | parallel is the default, and allows processing multiple messages in parallel (limited by the app-max-concurrency annotation, if configured). Set to single to disable parallel processing. In most situations there’s no reason to change this. |
parallel , single |
enableDeadLetter | N | Enable forwarding messages that cannot be handled to a dead-letter topic. Defaults to "false" |
"true" , "false" |
maxLen | N | The maximum number of messages of a queue and its dead letter queue (if dead letter enabled). If both maxLen and maxLenBytes are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. |
"1000" |
maxLenBytes | N | Maximum length in bytes of a queue and its dead letter queue (if dead letter enabled). If both maxLen and maxLenBytes are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. |
"1048576" |
exchangeKind | N | Exchange kind of the rabbitmq exchange. Defaults to "fanout" . |
"fanout" ,"topic" |
saslExternal | N | With TLS, should the username be taken from an additional field (for example, CN). See RabbitMQ Authentication Mechanisms. Defaults to "false" . |
"true" , "false" |
ttlInSeconds | N | Set message TTL at the component level, which can be overwritten by message level TTL per request. | "60" |
caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | "-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with clientKey . |
"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientKey | Required for using TLS | TLS client key in PEM format. Must be used with clientCert . Can be secretKeyRef to use a secret reference. |
"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----" |
clientName | N | This RabbitMQ client-provided connection name is a custom identifier. If set, the identifier is mentioned in RabbitMQ server log entries and management UI. Can be set to {uuid}, {podName}, or {appID}, which is replaced by Dapr runtime to the real value. | "app1" , {uuid} , {podName} , {appID} |
heartBeat | N | Defines the heartbeat interval with the server, detecting the aliveness of the peer TCP connection with the RabbitMQ server. Defaults to 10s . |
"10s" |
Communication using TLS
To configure communication using TLS, ensure that the RabbitMQ nodes have TLS enabled and provide the caCert, clientCert, and clientKey metadata in the component configuration. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: rabbitmq-pubsub
spec:
type: pubsub.rabbitmq
version: v1
metadata:
- name: host
value: "amqps://localhost:5671"
- name: consumerID
value: myapp
- name: durable
value: false
- name: deletedWhenUnused
value: false
- name: autoAck
value: false
- name: deliveryMode
value: 0
- name: requeueInFailure
value: false
- name: prefetchCount
value: 0
- name: reconnectWait
value: 0
- name: concurrencyMode
value: parallel
- name: publisherConfirm
value: false
- name: enableDeadLetter # Optional enable dead Letter or not
value: true
- name: maxLen # Optional max message count in a queue
value: 3000
- name: maxLenBytes # Optional maximum length in bytes of a queue.
value: 10485760
- name: exchangeKind
value: fanout
- name: saslExternal
value: false
- name: caCert
value: ${{ myLoadedCACert }}
- name: clientCert
value: ${{ myLoadedClientCert }}
- name: clientKey
secretKeyRef:
name: myRabbitMQClientKey
key: myRabbitMQClientKey
Note that while the caCert
and clientCert
values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
Enabling message delivery retries
The RabbitMQ pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. When the service returns a result, the message will be marked as consumed regardless of whether it was processed correctly or not. Note that this is common among all Dapr PubSub components and not just RabbitMQ.
Dapr can try redelivering a message a second time, when autoAck is set to false and requeueInFailure is set to true.
To make Dapr use more sophisticated retry policies, you can apply a retry resiliency policy to the RabbitMQ pub/sub component.
There is a crucial difference between the two ways to retry messages:
- When using autoAck = false and requeueInFailure = true, RabbitMQ is the one responsible for re-delivering messages and any subscriber can get the redelivered message. If you have more than one instance of your consumer, then it’s possible that another consumer will get it. This is usually the better approach because if there’s a transient failure, it’s more likely that a different worker will be in a better position to successfully process the message (see the metadata sketch after this list).
- Using Resiliency makes the same Dapr sidecar retry redelivering the messages. So it will be the same Dapr sidecar and the same app receiving the same message.
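As a sketch, the broker-driven redelivery described in the first bullet corresponds to the following metadata fragment in the RabbitMQ component (only the relevant entries are shown):
  - name: autoAck
    value: "false"
  - name: requeueInFailure
    value: "true"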
Create a RabbitMQ server
You can run a RabbitMQ server locally using Docker:
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
You can then interact with the server using the client port: localhost:5672
.
The easiest way to install RabbitMQ on Kubernetes is by using the Helm chart:
helm install rabbitmq stable/rabbitmq
Look at the chart output and get the username and password.
This will install RabbitMQ into the default
namespace. To interact with RabbitMQ, find the service with: kubectl get svc rabbitmq
.
For example, if installing using the example above, the RabbitMQ server client address would be:
rabbitmq.default.svc.cluster.local:5672
Use topic exchange to route messages
Setting exchangeKind
to "topic"
uses the topic exchanges, which are commonly used for the multicast routing of messages. In order to route messages using topic exchange, you must set the following metadata:
- routingKey: Messages with a routing key are routed to one or many queues based on the routing key defined in the metadata when subscribing.
- queueName: If you don’t set the queueName, only one queue is created, and all routing keys will route to that queue. This means all subscribers will bind to that queue, which won’t give the desired results.
For example, if an app is configured with a routing key keyA
and queueName
of queue-A
:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: orderspubsub
spec:
topic: B
routes:
default: /B
pubsubname: pubsub
metadata:
routingKey: keyA
queueName: queue-A
It will receive messages with routing key keyA
, and messages with other routing keys are not received.
// publish messages with routing key `keyA`, and these will be received by the above example.
client.PublishEvent(context.Background(), "pubsub", "B", []byte("this is a message"), dapr.PublishEventWithMetadata(map[string]string{"routingKey": "keyA"}))
// publish messages with routing key `keyB`, and these will not be received by the above example.
client.PublishEvent(context.Background(), "pubsub", "B", []byte("this is another message"), dapr.PublishEventWithMetadata(map[string]string{"routingKey": "keyB"}))
Bind multiple routingKey
Multiple routing keys can be separated by commas.
The example below binds three routingKey values: keyA, keyB, and "" (the empty key). Note how the empty key is bound.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: orderspubsub
spec:
topic: B
routes:
default: /B
pubsubname: pubsub
metadata:
routingKey: keyA,keyB,
For more information see rabbitmq exchanges.
Use priority queues
Dapr supports RabbitMQ priority queues. To set a priority for a queue, use the maxPriority
topic subscription metadata.
Declarative priority queue example
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: pubsub
spec:
topic: checkout
routes:
default: /orders
pubsubname: order-pub-sub
metadata:
maxPriority: 3
Programmatic priority queue example
from flask import Flask, jsonify

app = Flask(__name__)

# Handles /dapr/subscribe and declares the maxPriority subscription metadata
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [
        {
            'pubsubname': 'pubsub',
            'topic': 'checkout',
            'routes': {
                'default': '/orders'
            },
            'metadata': {'maxPriority': '3'}
        }
    ]
    return jsonify(subscriptions)
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
const port = 3000
app.get('/dapr/subscribe', (req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "checkout",
routes: {
default: '/orders'
},
metadata: {
maxPriority: '3'
}
}
]);
})

app.listen(port)
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

const appPort = 3000

type subscription struct {
	PubsubName string            `json:"pubsubname"`
	Topic      string            `json:"topic"`
	Metadata   map[string]string `json:"metadata,omitempty"`
	Routes     routes            `json:"routes"`
}

type routes struct {
	Rules   []rule `json:"rules,omitempty"`
	Default string `json:"default,omitempty"`
}

type rule struct {
	Match string `json:"match"`
	Path  string `json:"path"`
}

// This handles /dapr/subscribe
func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
	t := []subscription{
		{
			PubsubName: "pubsub",
			Topic:      "checkout",
			Routes: routes{
				Default: "/orders",
			},
			Metadata: map[string]string{
				"maxPriority": "3",
			},
		},
	}
	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(t)
}

func main() {
	http.HandleFunc("/dapr/subscribe", configureSubscribeHandler)
	http.ListenAndServe(fmt.Sprintf(":%d", appPort), nil)
}
Setting a priority when publishing a message
To set a priority on a message, add the publish metadata key priority to the publish endpoint or SDK method.
curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders?metadata.priority=3 -H "Content-Type: application/json" -d '{"orderId": "100"}'
with DaprClient() as client:
result = client.publish_event(
pubsub_name=PUBSUB_NAME,
topic_name=TOPIC_NAME,
data=json.dumps(orderId),
data_content_type='application/json',
metadata= { 'priority': '3' })
await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId, { 'priority': '3' });
client.PublishEvent(ctx, PUBSUB_NAME, TOPIC_NAME, []byte(strconv.Itoa(orderId)), map[string]string{"priority": "3"})
Use quorum queues
By default, Dapr creates classic queues. To create quorum queues, add the following metadata to your pub/sub subscription:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: pubsub
spec:
topic: checkout
routes:
default: /orders
pubsubname: order-pub-sub
metadata:
queueType: quorum
Time-to-live
You can set a time-to-live (TTL) value at either the message or component level. Set default component-level TTL using the component spec ttlInSeconds
field in your component.
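For example, a component-level default of 60 seconds is set with the following metadata fragment (the same ttlInSeconds field shown in the component example at the top of this page):
  - name: ttlInSeconds
    value: "60"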
Note
If you set both component-level and message-level TTL, the default component-level TTL is ignored in favor of the message-level TTL.
Single Active Consumer
The RabbitMQ Single Active Consumer setup ensures that only one consumer at a time processes messages from a queue and switches to another registered consumer if the active one is canceled or fails. This approach might be required when it is crucial for messages to be consumed in the exact order they arrive in the queue and if distributed processing with multiple instances is not supported. When this option is enabled on a queue by Dapr, an instance of the Dapr runtime will be the single active consumer. To allow another application instance to take over in case of failure, Dapr runtime must probe the application’s health and unsubscribe from the pub/sub component.
Note
This pattern will prevent the application from scaling, as only one instance can process the load. While it might be interesting for Dapr integration with legacy or sensitive applications, you should consider a design allowing distributed processing if you need scalability.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: pubsub
spec:
topic: orders
routes:
default: /orders
pubsubname: order-pub-sub
metadata:
singleActiveConsumer: "true"
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.14 - Redis Streams
Component format
To set up Redis Streams pub/sub, create a component of type pubsub.redis
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: redis-pubsub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: "KeFg23!"
- name: consumerID
value: "channel1"
- name: useEntraID
value: "true"
- name: enableTLS
value: "false"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
redisHost | Y | Connection-string for the redis host. If "redisType" is "cluster" it can be multiple hosts separated by commas or just a single host |
localhost:6379 , redis-master.default.svc.cluster.local:6379 |
redisPassword | N | Password for Redis host. No Default. Can be secretKeyRef to use a secret reference |
"" , "KeFg23!" |
redisUsername | N | Username for the Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly. | "" , "default" |
consumerID | N | The consumer group ID. | Can be set to a string value (such as "channel1" in the example above) or a string format value (such as "{podName}"). See all of the template tags you can use in your component metadata. |
useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this:
|
"true" , "false" |
enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to "false" |
"true" , "false" |
clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here |
"----BEGIN CERTIFICATE-----\nMIIC..." |
clientKey | N | The content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here |
"----BEGIN PRIVATE KEY-----\nMIIE..." |
redeliverInterval | N | The interval between checking for pending messages to redeliver. Can be either a Go duration string (for example “ms”, “s”, “m”) or a number of milliseconds. Defaults to "60s". "0" disables redelivery. |
"30s" , "5000" |
processingTimeout | N | The amount of time that a message must be pending before attempting to redeliver it. Can be either a Go duration string (for example “ms”, “s”, “m”) or a number of milliseconds. Defaults to "15s". "0" disables redelivery. |
"60s" , "600000" |
queueDepth | N | The size of the message queue for processing. Defaults to "100" . |
"1000" |
concurrency | N | The number of concurrent workers that are processing messages. Defaults to "10" . |
"15" |
redisType | N | The type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node" . |
"cluster" |
redisDB | N | Database selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0" . |
"0" |
redisMaxRetries | N | Maximum number of times to retry commands before giving up. Default is to not retry failed commands. | "5" |
redisMinRetryInterval | N | Minimum backoff for redis commands between each retry. Default is "8ms" ; "-1" disables backoff. |
"8ms" |
redisMaxRetryInterval | N | Maximum backoff for redis commands between each retry. Default is "512ms" ;"-1" disables backoff. |
"5s" |
dialTimeout | N | Dial timeout for establishing new connections. Defaults to "5s" . |
"5s" |
readTimeout | N | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s" , "-1" for no timeout. |
"3s" |
writeTimeout | N | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Defaults is readTimeout. | "3s" |
poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | "20" |
poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | "5s" |
maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | "30m" |
minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0" . |
"2" |
idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is "1m" . "-1" disables idle connections reaper. |
"-1" |
idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m" . "-1" disables idle timeout check. |
"10m" |
failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to "false" |
"true" , "false" |
sentinelMasterName | N | The sentinel master name. See Redis Sentinel Documentation | "" , "mymaster" |
sentinelUsername | N | Username for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled | "username" |
sentinelPassword | N | Password for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled | "password" |
maxLenApprox | N | Maximum number of items inside a stream. The old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. Defaults to unlimited. | "10000" |
streamTTL | N | TTL duration for stream entries. Entries older than this duration will be evicted. This is an approximate value, as it’s implemented using Redis stream’s MINID trimming with the ‘~’ modifier. The actual retention may include slightly more entries than strictly defined by the TTL, as Redis optimizes the trimming operation for efficiency by potentially keeping some additional entries. |
"30d" |
Create a Redis instance
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.x or 6.x.
The Dapr CLI will automatically create and set up a Redis Streams instance for you.
The Redis instance will be installed via Docker when you run dapr init, and the component file will be created in the default components directory: $HOME/.dapr/components on Mac/Linux or %USERPROFILE%\.dapr\components on Windows.
You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.
- Install Redis into your cluster:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis --set image.tag=6.2
- Run kubectl get pods to see the Redis containers now running in your cluster.
- Add redis-master:6379 as the redisHost in your redis.yaml file. For example:
metadata:
- name: redisHost
  value: redis-master:6379
- Next, we’ll get our Redis password, which is slightly different depending on the OS we’re using:
  - Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which will create a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.
  - Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the outputted password.
  Add this password as the redisPassword value in your redis.yaml file. For example:
  - name: redisPassword
    value: "lhDOkwTlp0"
- Create an Azure Cache for Redis instance using the official Microsoft documentation.
- Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.
  - For the Host name:
    - Navigate to the resource’s Overview page.
    - Copy the Host name value.
  - For your access key:
    - Navigate to Settings > Access Keys.
    - Copy and save your key.
- Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.
  - If you’re running a sample, add the host and key to the provided redis.yaml.
  - If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
- Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.
  Note: In a production-grade application, follow secret management instructions to securely manage your secrets.
- Enable EntraID support:
  - Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
  - Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
- Set enableTLS to "true" to support TLS.
Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
Note
The Dapr CLI automatically deploys a local Redis instance in self-hosted mode as part of the dapr init command.
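Putting the Azure steps above together, a redis.yaml sketch for Azure Cache for Redis might look like the following (the host name is a placeholder; useEntraID replaces redisPassword when Entra ID authentication is enabled):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: redis-pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "[HOST NAME FROM PREVIOUS STEP]:6379"
  - name: useEntraID
    value: "true"
  - name: enableTLS
    value: "true"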
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/Sub building block
5.1.15 - RocketMQ
Component format
To set up RocketMQ pub/sub, create a component of type pubsub.rocketmq
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: rocketmq-pubsub
spec:
type: pubsub.rocketmq
version: v1
metadata:
- name: instanceName
value: dapr-rocketmq-test
- name: consumerGroup
value: dapr-rocketmq-test-g-c
- name: producerGroup
value: dapr-rocketmq-test-g-p
- name: consumerID
value: channel1
- name: nameSpace
value: dapr-test
- name: nameServer
value: "127.0.0.1:9876,127.0.0.2:9876"
- name: retries
value: 3
- name: consumerModel
value: "clustering"
- name: consumeOrderly
value: false
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | default | Example |
---|---|---|---|---|
instanceName | N | Instance name | time.Now().String() |
dapr-rocketmq-test |
consumerGroup | N | Consumer group name. Recommended. If consumerGroup is null, groupName is used. |
dapr-rocketmq-test-g-c |
|
producerGroup (consumerID) | N | Producer group name. Recommended. If producerGroup is null, consumerID is used. If consumerID is also null, groupName is used. |
dapr-rocketmq-test-g-p |
|
consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. |
Can be set to a string value (such as "channel1" in the example above) or a string format value (such as "{podName}"). See all of the template tags you can use in your component metadata. |
|
groupName | N | Consumer/Producer group name. Deprecated. | dapr-rocketmq-test-g |
|
nameSpace | N | RocketMQ namespace | dapr-rocketmq |
|
nameServerDomain | N | RocketMQ name server domain | https://my-app.net:8080/nsaddr |
|
nameServer | N | RocketMQ name server, separated by “,” or “;” | 127.0.0.1:9876;127.0.0.2:9877,127.0.0.3:9877 |
|
accessKey | N | Access Key (Username) | "admin" |
|
secretKey | N | Secret Key (Password) | "password" |
|
securityToken | N | Security Token | ||
retries | N | Number of retries to send a message to broker | 3 |
3 |
producerQueueSelector (queueSelector) | N | Producer Queue selector. There are five implementations of queue selector: hash , random , manual , roundRobin , dapr . |
dapr |
hash |
consumerModel | N | Message model that defines how messages are delivered to each consumer client. RocketMQ supports two message models: clustering and broadcasting . |
clustering |
broadcasting , clustering |
fromWhere (consumeFromWhere) | N | Consuming point on consumer booting. There are three consuming points: CONSUME_FROM_LAST_OFFSET , CONSUME_FROM_FIRST_OFFSET , CONSUME_FROM_TIMESTAMP |
CONSUME_FROM_LAST_OFFSET |
CONSUME_FROM_LAST_OFFSET |
consumeTimestamp | N | Backtracks consumption time with second precision. Time format is yyyymmddhhmmss . For example, 20131223171201 implies the time of 17:12:01 and date of December 23, 2013 |
time.Now().Add(time.Minute * (-30)).Format("20060102150405") |
20131223171201 |
consumeOrderly | N | Determines if it’s an ordered message using FIFO order. | false |
false |
consumeMessageBatchMaxSize | N | Batch consumption size out of range [1, 1024] |
512 |
10 |
consumeConcurrentlyMaxSpan | N | Concurrently max span offset. This has no effect on sequential consumption. Range: [1, 65535] |
1000 |
1000 |
maxReconsumeTimes | N | Max re-consume times. -1 means 16 times. If messages are re-consumed more than maxReconsumeTimes times before success, they’ll be directed to a deletion queue. |
Orderly message is MaxInt32 ; Concurrently message is 16 |
16 |
autoCommit | N | Enable auto commit | true |
false |
consumeTimeout | N | Maximum amount of time a message may block the consuming thread. Time unit: Minute | 15 |
15 |
consumerPullTimeout | N | The socket timeout in milliseconds | ||
pullInterval | N | Message pull interval | 100 |
100 |
pullBatchSize | N | The number of messages pulled from the broker at a time. If pullBatchSize is null , use ConsumerBatchSize . pullBatchSize out of range [1, 1024] |
32 |
10 |
pullThresholdForQueue | N | Flow control threshold on queue level. Each message queue will cache a maximum of 1000 messages by default. Consider the PullBatchSize - the instantaneous value may exceed the limit. Range: [1, 65535] |
1024 |
1000 |
pullThresholdForTopic | N | Flow control threshold on topic level. The value of pullThresholdForQueue will be overwritten and calculated based on pullThresholdForTopic if it isn’t unlimited. For example, if the value of pullThresholdForTopic is 1000 and 10 message queues are assigned to this consumer, then pullThresholdForQueue will be set to 100. Range: [1, 6553500] |
-1(Unlimited) |
10 |
pullThresholdSizeForQueue | N | Limit the cached message size on queue level. Consider the pullBatchSize - the instantaneous value may exceed the limit. The size of a message is only measured by message body, so it’s not accurate. Range: [1, 1024] |
100 |
100 |
pullThresholdSizeForTopic | N | Limit the cached message size on topic level. The value of pullThresholdSizeForQueue will be overwritten and calculated based on pullThresholdSizeForTopic if it isn’t unlimited. For example, if the value of pullThresholdSizeForTopic is 1000 MiB and 10 message queues are assigned to this consumer, then pullThresholdSizeForQueue will be set to 100 MiB. Range: [1, 102400] |
-1 |
100 |
content-type | N | Message content type. | "text/plain" |
"application/cloudevents+json; charset=utf-8" , "application/octet-stream" |
logLevel | N | Log level | warn |
info |
sendTimeOut | N | Send message timeout to connect RocketMQ’s broker, measured in nanoseconds. Deprecated. | 3 seconds | 10000000000 |
sendTimeOutSec | N | Timeout duration for publishing a message in seconds. If sendTimeOutSec is null , sendTimeOut is used. |
3 seconds | 3 |
mspProperties | N | The RocketMQ message properties in this collection are passed to the app in Data. Separate multiple properties with “,”. | key,mkey |
For backwards-compatibility reasons, the following values in the metadata are supported, although their use is discouraged.
Field (supported but deprecated) | Required | Details | Example |
---|---|---|---|
groupName | N | Producer group name for RocketMQ publishers | "my_unique_group_name" |
sendTimeOut | N | Timeout duration for publishing a message in nanoseconds | 0 |
consumerBatchSize | N | The number of messages pulled from the broker at a time | 32 |
Setup RocketMQ
See https://rocketmq.apache.org/docs/quick-start/ to set up a local RocketMQ instance.
Per-call metadata fields
Partition Key
When invoking the RocketMQ pub/sub, it’s possible to provide an optional partition key by using the metadata query parameter in the request URL.
You need to specify rocketmq-tag, rocketmq-key, rocketmq-shardingkey, and rocketmq-queue in metadata.
Example:
curl -X POST "http://localhost:3500/v1.0/publish/myRocketMQ/myTopic?metadata.rocketmq-tag=?&metadata.rocketmq-key=?&metadata.rocketmq-shardingkey=key&metadata.rocketmq-queue=1" \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
}
}'
QueueSelector
The RocketMQ component contains a total of five queue selectors. The RocketMQ client provides the following queue selectors:
HashQueueSelector
RandomQueueSelector
RoundRobinQueueSelector
ManualQueueSelector
To learn more about these RocketMQ client queue selectors, read the RocketMQ documentation.
The Dapr RocketMQ component implements the following queue selector:
DaprQueueSelector
This article focuses on the design of DaprQueueSelector
.
DaprQueueSelector
DaprQueueSelector
integrates three queue selectors:
HashQueueSelector
RoundRobinQueueSelector
ManualQueueSelector
DaprQueueSelector
gets the queue id from the request parameter. You can set the queue id by running the following:
http://localhost:3500/v1.0/publish/myRocketMQ/myTopic?metadata.rocketmq-queue=1
The ManualQueueSelector
is implemented using the method above.
Next, the DaprQueueSelector
tries to:
- Get a ShardingKey.
- Hash the ShardingKey to determine the queue id.
You can set the ShardingKey
by doing the following:
http://localhost:3500/v1.0/publish/myRocketMQ/myTopic?metadata.rocketmq-shardingkey=key
If the ShardingKey
does not exist, the RoundRobin
algorithm is used to determine the queue id.
Related links
- Basic schema for a Dapr component
- Pub/Sub building block
- Read this guide for instructions on configuring pub/sub components
5.1.16 - Solace-AMQP
Component format
To set up Solace-AMQP pub/sub, create a component of type pubsub.solace.amqp
. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: solace
spec:
type: pubsub.solace.amqp
version: v1
metadata:
- name: url
value: 'amqp://localhost:5672'
- name: username
value: 'default'
- name: password
value: 'default'
- name: consumerID
value: 'channel1'
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
url | Y | Address of the AMQP broker. Can be secretKeyRef to use a secret reference. Use the amqp:// URI scheme for non-TLS communication. Use the amqps:// URI scheme for TLS communication. |
"amqp://host.domain[:port]" |
username | Y | The username to connect to the broker. Only required if anonymous is not specified or set to false . |
default |
password | Y | The password to connect to the broker. Only required if anonymous is not specified or set to false . |
default |
consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. |
Can be set to a string value (such as "channel1" in the example above) or a string format value (such as "{podName}"). See all of the template tags you can use in your component metadata. |
anonymous | N | To connect to the broker without credential validation. Only works if enabled on the broker. A username and password would not be required if this is set to true . |
true |
caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | "-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with clientKey . |
"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
clientKey | Required for using TLS | TLS client key in PEM format. Must be used with clientCert . Can be secretKeyRef to use a secret reference. |
"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----" |
Communication using TLS
To configure communication using TLS:
- Ensure that the Solace broker is configured to support certificates.
- Provide the caCert, clientCert, and clientKey metadata in the component configuration.
For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: solace
spec:
type: pubsub.solace.amqp
version: v1
metadata:
- name: url
value: "amqps://host.domain[:port]"
- name: username
value: 'default'
- name: password
value: 'default'
- name: caCert
value: ${{ myLoadedCACert }}
- name: clientCert
value: ${{ myLoadedClientCert }}
- name: clientKey
secretKeyRef:
name: mySolaceClientKey
key: mySolaceClientKey
auth:
secretStore: <SECRET_STORE_NAME>
While the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
Publishing/subscribing to topics and queues
By default, messages are published and subscribed over topics. If you would like your destination to be a queue, prefix the topic with queue:
and the Solace AMQP component will connect to a queue.
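For instance, a declarative subscription that consumes from a queue rather than a topic might look like the following sketch (the names are illustrative; note the queue: prefix on the topic field):
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-queue-sub
spec:
  topic: "queue:orders"
  routes:
    default: /orders
  pubsubname: solace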
Create a Solace broker
You can run a Solace broker locally using Docker:
docker run -d -p 8080:8080 -p 55554:55555 -p 8008:8008 -p 1883:1883 -p 8000:8000 -p 5672:5672 -p 9000:9000 -p 2222:2222 --shm-size=2g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --name=solace solace/solace-pubsub-standard
You can then interact with the server using the client port: amqp://localhost:5672
You can also sign up for a free SaaS broker on Solace Cloud.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring pub/sub components
- Pub/sub building block
5.2 - Bindings component specs
The following table lists input and output bindings supported by the Dapr bindings building block. Learn how to set up different input and output binding components for Dapr bindings.
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status | Alpha, Beta, Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Every binding component has its own set of properties. Click the name link to see the component specification for each binding.
Generic
Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
---|---|---|---|---|---|
Apple Push Notifications (APN) | | ✅ | Alpha | v1 | 1.0 |
commercetools GraphQL | | ✅ | Alpha | v1 | 1.8 |
Cron (Scheduler) | ✅ | | Stable | v1 | 1.10 |
GraphQL | | ✅ | Alpha | v1 | 1.0 |
HTTP | | ✅ | Stable | v1 | 1.0 |
Huawei OBS | | ✅ | Alpha | v1 | 1.8 |
InfluxDB | | ✅ | Beta | v1 | 1.7 |
Kafka | ✅ | ✅ | Stable | v1 | 1.8 |
Kitex | | ✅ | Alpha | v1 | 1.11 |
KubeMQ | ✅ | ✅ | Beta | v1 | 1.10 |
Kubernetes Events | ✅ | | Alpha | v1 | 1.0 |
Local Storage | | ✅ | Stable | v1 | 1.9 |
MQTT3 | ✅ | ✅ | Beta | v1 | 1.7 |
MySQL & MariaDB | | ✅ | Alpha | v1 | 1.0 |
PostgreSQL | | ✅ | Stable | v1 | 1.9 |
Postmark | | ✅ | Alpha | v1 | 1.0 |
RabbitMQ | ✅ | ✅ | Stable | v1 | 1.9 |
Redis | | ✅ | Stable | v1 | 1.9 |
RethinkDB | ✅ | | Beta | v1 | 1.9 |
SendGrid | | ✅ | Alpha | v1 | 1.0 |
SFTP | | ✅ | Alpha | v1 | 1.15 |
SMTP | | ✅ | Alpha | v1 | 1.0 |
Twilio | | ✅ | Alpha | v1 | 1.0 |
Wasm | | ✅ | Alpha | v1 | 1.11 |
Alibaba Cloud
Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
---|---|---|---|---|---|
Alibaba Cloud DingTalk | ✅ | ✅ | Alpha | v1 | 1.2 |
Alibaba Cloud OSS | | ✅ | Alpha | v1 | 1.0 |
Alibaba Cloud SLS | | ✅ | Alpha | v1 | 1.9 |
Alibaba Cloud Tablestore | | ✅ | Alpha | v1 | 1.5 |
Amazon Web Services (AWS)
Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
---|---|---|---|---|---|
AWS DynamoDB | | ✅ | Alpha | v1 | 1.0 |
AWS Kinesis | ✅ | ✅ | Alpha | v1 | 1.0 |
AWS S3 | | ✅ | Stable | v1 | 1.11 |
AWS SES | | ✅ | Alpha | v1 | 1.4 |
AWS SNS | | ✅ | Alpha | v1 | 1.0 |
AWS SQS | ✅ | ✅ | Alpha | v1 | 1.0 |
Cloudflare
Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
---|---|---|---|---|---|
Cloudflare Queues | | ✅ | Alpha | v1 | 1.10 |
Google Cloud Platform (GCP)
Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
---|---|---|---|---|---|
GCP Cloud Pub/Sub | ✅ | ✅ | Alpha | v1 | 1.0 |
GCP Storage Bucket | | ✅ | Alpha | v1 | 1.0 |
Microsoft Azure
Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
---|---|---|---|---|---|
Azure Blob Storage | | ✅ | Stable | v1 | 1.0 |
Azure Cosmos DB (Gremlin API) | | ✅ | Alpha | v1 | 1.5 |
Azure CosmosDB | | ✅ | Stable | v1 | 1.7 |
Azure Event Grid | ✅ | ✅ | Beta | v1 | 1.7 |
Azure Event Hubs | ✅ | ✅ | Stable | v1 | 1.8 |
Azure OpenAI | ✅ | ✅ | Alpha | v1 | 1.11 |
Azure Service Bus Queues | ✅ | ✅ | Stable | v1 | 1.7 |
Azure SignalR | | ✅ | Alpha | v1 | 1.0 |
Azure Storage Queues | ✅ | ✅ | Stable | v1 | 1.0 |
Zeebe (Camunda Cloud)
Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
---|---|---|---|---|---|
Zeebe Command | | ✅ | Stable | v1 | 1.2 |
Zeebe Job Worker | ✅ | | Stable | v1 | 1.2 |
5.2.1 - Alibaba Cloud DingTalk binding spec
Setup Dapr component
To set up an Alibaba Cloud DingTalk binding, create a component of type bindings.dingtalk.webhook
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.dingtalk.webhook
version: v1
metadata:
- name: id
value: "test_webhook_id"
- name: url
value: "https://oapi.dingtalk.com/robot/send?access_token=******"
- name: secret
value: "****************"
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
id |
Y | Input/Output | Unique id | "test_webhook_id" |
url |
Y | Input/Output | DingTalk’s Webhook url | "https://oapi.dingtalk.com/robot/send?access_token=******" |
secret |
N | Input/Output | The secret of DingTalk’s Webhook | "****************" |
direction |
N | Input/Output | The direction of the binding | "input" , "output" , "input, output" |
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
get
Specifying a partition key
Example: Follow the instructions here on setting the data of payload
curl -X POST http://localhost:3500/v1.0/bindings/myDingTalk \
-H "Content-Type: application/json" \
-d '{
"data": {
"msgtype": "text",
"text": {
"content": "Hi"
}
},
"operation": "create"
}'
curl -X POST http://localhost:3500/v1.0/bindings/myDingTalk \
-H "Content-Type: application/json" \
-d '{
"data": {
"msgtype": "text",
"text": {
"content": "Hi"
}
},
"operation": "get"
}'
Related links
5.2.2 - Alibaba Cloud Log Storage Service binding spec
Component format
To set up an Alibaba Cloud SLS binding, create a component of type bindings.alicloud.sls
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: alicloud.sls
spec:
type: bindings.alicloud.sls
version: v1
metadata:
- name: AccessKeyID
value: "[accessKey-id]"
- name: AccessKeySecret
value: "[accessKey-secret]"
- name: Endpoint
value: "[endpoint]"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
AccessKeyID |
Y | Output | Access key ID credential. | |
AccessKeySecret |
Y | Output | Access key credential secret | |
Endpoint |
Y | Output | Alicloud SLS endpoint. |
Binding support
This component supports output binding with the following operations:
create
: Create object
Request format
To perform a log store operation, invoke the binding with a POST
method and the following JSON body:
{
"metadata":{
"project":"your-sls-project-name",
"logstore":"your-sls-logstore-name",
"topic":"your-sls-topic-name",
"source":"your-sls-source"
},
"data":{
"custome-log-filed":"any other log info"
},
"operation":"create"
}
Note
The values of the “project”, “logstore”, “topic”, and “source” properties should be provided in the metadata properties.
Example
curl -X POST -H "Content-Type: application/json" -d "{\"metadata\":{\"project\":\"project-name\",\"logstore\":\"logstore-name\",\"topic\":\"topic-name\",\"source\":\"source-name\"},\"data\":{\"log-filed\":\"log info\"}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -X POST -H "Content-Type: application/json" -d '{"metadata":{"project":"project-name","logstore":"logstore-name","topic":"topic-name","source":"source-name"},"data":{"log-filed":"log info"}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response format
As the Alibaba Cloud SLS producer API is asynchronous, there is no response for this binding (there is no callback interface to accept a success or failure response; failures are only recorded in the console log).
Related links
5.2.3 - Alibaba Cloud Object Storage Service binding spec
Component format
To set up an Alibaba Cloud Object Storage binding, create a component of type bindings.alicloud.oss
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: alicloudobjectstorage
spec:
type: bindings.alicloud.oss
version: v1
metadata:
- name: endpoint
value: "[endpoint]"
- name: accessKeyID
value: "[key-id]"
- name: accessKey
value: "[access-key]"
- name: bucket
value: "[bucket]"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
endpoint |
Y | Output | Alicloud OSS endpoint. | https://oss-cn-hangzhou.aliyuncs.com |
accessKeyID |
Y | Output | Access key ID credential. | |
accessKey |
Y | Output | Access key credential. | |
bucket |
Y | Output | Name of the storage bucket. |
Binding support
This component supports output binding with the following operations:
create
: Create object
Create object
To perform a create object operation, invoke the binding with a POST
method and the following JSON body:
{
"operation": "create",
"data": "YOUR_CONTENT"
}
Note
By default, a random UUID is auto-generated as the object key. See below for Metadata support to set the key for the object.
Example
Saving to a randomly generated UUID file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Saving to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-key\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-key" } }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Note
Windows CMD requires escaping the " character.
Metadata information
Object key
By default, the Alicloud OSS output binding will auto-generate a UUID as the object key. You can set the key with the following metadata:
{
"data": "file content",
"metadata": {
"key": "my-key"
},
"operation": "create"
}
Related links
5.2.4 - Alibaba Cloud Tablestore binding spec
Component format
To set up an Alibaba Cloud Tablestore binding, create a component of type bindings.alicloud.tablestore
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mytablestore
spec:
type: bindings.alicloud.tablestore
version: v1
metadata:
- name: endpoint
value: "[endpoint]"
- name: accessKeyID
value: "[key-id]"
- name: accessKey
value: "[access-key]"
- name: instanceName
value: "[instance]"
- name: tableName
value: "[table]"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
endpoint |
Y | Output | Alicloud Tablestore endpoint. | https://tablestore-cn-hangzhou.aliyuncs.com |
accessKeyID |
Y | Output | Access key ID credential. | |
accessKey |
Y | Output | Access key credential. | |
instanceName |
Y | Output | Name of the instance. | |
tableName |
Y | Output | Name of the table. |
Binding support
This component supports output binding with the following operations:
create: Create object
delete: Delete object
list: List objects
get: Get object
Create object
To perform a create object operation, invoke the binding with a POST
method and the following JSON body:
{
"operation": "create",
"data": "YOUR_CONTENT",
"metadata": {
"primaryKeys": "pk1"
}
}
Note
Note the metadata.primaryKeys field is mandatory.
Delete object
To perform a delete object operation, invoke the binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"primaryKeys": "pk1",
"columnToGet": "name,age,date"
},
"data": {
"pk1": "data1"
}
}
Note
Note the metadata.primaryKeys field is mandatory.
List objects
To perform a list objects operation, invoke the binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"primaryKeys": "pk1",
"columnToGet": "name,age,date"
},
"data": {
"pk1": "data1",
"pk2": "data2"
}
}
Note
Note the metadata.primaryKeys field is mandatory.
Get object
To perform a get object operation, invoke the binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"primaryKeys": "pk1"
},
"data": {
"pk1": "data1"
}
}
Note
Note the metadata.primaryKeys field is mandatory.
Related links
5.2.5 - Apple Push Notification Service binding spec
Component format
To set up the Apple Push Notification Service binding, create a component of type bindings.apns
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.apns
version: v1
metadata:
- name: development
value: "<bool>"
- name: key-id
value: "<APPLE_KEY_ID>"
- name: team-id
value: "<APPLE_TEAM_ID>"
- name: private-key
secretKeyRef:
name: <SECRET>
key: "<SECRET-KEY-NAME>"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
development |
Y | Output | Tells the binding which APNs service to use. Set to "true" to use the development service or "false" to use the production service. Default: "true" |
"true" |
key-id |
Y | Output | The identifier for the private key from the Apple Developer Portal | "private-key-id " |
team-id |
Y | Output | The identifier for the organization or author from the Apple Developer Portal | "team-id" |
private-key |
Y | Output | Is a PKCS #8-formatted private key. It is intended that the private key is stored in the secret store and not exposed directly in the configuration. See here for more details | "pem file" |
Private key
The APNS binding needs a cryptographic private key in order to generate authentication tokens for the APNS service. The private key can be generated from the Apple Developer Portal and is provided as a PKCS #8 file with the private key stored in PEM format. The private key should be stored in the Dapr secret store and not stored directly in the binding’s configuration file.
A sample configuration file for the APNS binding is shown below:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: apns
spec:
type: bindings.apns
metadata:
- name: development
value: false
- name: key-id
value: PUT-KEY-ID-HERE
- name: team-id
value: PUT-APPLE-TEAM-ID-HERE
- name: private-key
secretKeyRef:
name: apns-secrets
key: private-key
If using Kubernetes, a sample secret configuration may look like this:
apiVersion: v1
kind: Secret
metadata:
name: apns-secrets
stringData:
private-key: |
-----BEGIN PRIVATE KEY-----
KEY-DATA-GOES-HERE
-----END PRIVATE KEY-----
Binding support
This component supports output binding with the following operations:
create
Push notification format
The APNS binding is a pass-through wrapper over the Apple Push Notification Service. The APNS binding will send the request directly to the APNS service without any translation. It is therefore important to understand the payload for push notifications expected by the APNS service. The payload format is documented here.
Request format
{
"data": {
"aps": {
"alert": {
"title": "New Updates!",
"body": "There are new updates for your review"
}
}
},
"metadata": {
"device-token": "PUT-DEVICE-TOKEN-HERE",
"apns-push-type": "alert",
"apns-priority": "10",
"apns-topic": "com.example.helloworld"
},
"operation": "create"
}
The data
object contains a complete push notification specification as described in the Apple documentation. The data
object will be sent directly to the APNs service.
Besides the device-token
value, the HTTP headers specified in the Apple documentation can be sent as metadata fields and will be included in the HTTP request to the APNs service.
Response format
{
"messageID": "UNIQUE-ID-FOR-NOTIFICATION"
}
Related links
5.2.6 - AWS DynamoDB binding spec
Component format
To set up an AWS DynamoDB binding, create a component of type bindings.aws.dynamodb
. See this guide on how to create and apply a binding configuration.
See Authenticating to AWS for information about authentication-related attributes
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.aws.dynamodb
version: v1
metadata:
- name: table
value: "items"
- name: region
value: "us-west-2"
- name: accessKey
value: "*****************"
- name: secretKey
value: "*****************"
- name: sessionToken
value: "*****************"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
table |
Y | Output | The DynamoDB table name | "items" |
region |
Y | Output | The specific AWS region the AWS DynamoDB instance is deployed in | "us-east-1" |
accessKey |
Y | Output | The AWS Access Key to access this resource | "key" |
secretKey |
Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken |
N | Output | The AWS session token to use | "sessionToken" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports output binding with the following operations:
create
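As with the other output bindings in this reference, the create operation is invoked with a JSON request body whose data object is written as an item to the configured table. A minimal sketch (the attribute names are illustrative and must match your table’s key schema):
{
  "operation": "create",
  "data": {
    "id": "1",
    "city": "Seattle"
  }
}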
Related links
5.2.7 - AWS Kinesis binding spec
Component format
To set up an AWS Kinesis binding, create a component of type bindings.aws.kinesis
. See this guide on how to create and apply a binding configuration.
See this for instructions on how to set up AWS Kinesis data streams. See Authenticating to AWS for information about authentication-related attributes.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.aws.kinesis
version: v1
metadata:
- name: streamName
value: "KINESIS_STREAM_NAME" # Kinesis stream name
- name: consumerName
value: "KINESIS_CONSUMER_NAME" # Kinesis consumer name
- name: mode
value: "shared" # shared - Shared throughput or extended - Extended/Enhanced fanout
- name: region
value: "AWS_REGION" #replace
- name: accessKey
value: "AWS_ACCESS_KEY" # replace
- name: secretKey
value: "AWS_SECRET_KEY" #replace
- name: sessionToken
value: "*****************"
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
mode | N | Input | The Kinesis stream mode. shared - Shared throughput, extended - Extended/Enhanced fanout methods. More details are here. Defaults to "shared" | "shared", "extended" |
streamName | Y | Input/Output | The AWS Kinesis Stream Name | "stream" |
consumerName | Y | Input | The AWS Kinesis Consumer Name | "myconsumer" |
region | Y | Output | The specific AWS region the AWS Kinesis instance is deployed in | "us-east-1" |
accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken | N | Output | The AWS session token to use | "sessionToken" |
direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
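For example, a create request publishes the contents of the data field as a record on the configured stream. The payload below is purely illustrative; the binding forwards whatever document you provide:
{
  "operation": "create",
  "data": {
    "event": "order-created",
    "orderId": "123"
  }
}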
Related links
5.2.8 - AWS S3 binding spec
Component format
To setup an AWS S3 binding create a component of type bindings.aws.s3
. This binding works with other S3-compatible services, such as Minio. See this guide on how to create and apply a binding configuration.
See Authenticating to AWS for information about authentication-related attributes.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.aws.s3
version: v1
metadata:
- name: bucket
value: "mybucket"
- name: region
value: "us-west-2"
- name: endpoint
value: "s3.us-west-2.amazonaws.com"
- name: accessKey
value: "*****************"
- name: secretKey
value: "*****************"
- name: sessionToken
value: "mysession"
- name: decodeBase64
value: "<bool>"
- name: encodeBase64
value: "<bool>"
- name: forcePathStyle
value: "<bool>"
- name: disableSSL
value: "<bool>"
- name: insecureSSL
value: "<bool>"
- name: storageClass
value: "<string>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
bucket | Y | Output | The name of the S3 bucket to write to | "bucket" |
region | Y | Output | The specific AWS region | "us-east-1" |
endpoint | N | Output | The specific AWS endpoint | "s3.us-east-1.amazonaws.com" |
accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken | N | Output | The AWS session token to use | "sessionToken" |
forcePathStyle | N | Output | Currently the Amazon S3 SDK supports virtual hosted-style and path-style access. "true" is path-style format like "https://<endpoint>/<your bucket>/<key>". "false" is hosted-style format like "https://<your bucket>.<endpoint>/<key>". Defaults to "false" | "true", "false" |
decodeBase64 | N | Output | Configuration to decode base64 file content before saving to bucket storage (in case of saving a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to "false" | "true", "false" |
encodeBase64 | N | Output | Configuration to encode base64 file content before returning the content (in case of opening a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to "false" | "true", "false" |
disableSSL | N | Output | Allows connecting to non-https:// endpoints. Defaults to "false" | "true", "false" |
insecureSSL | N | Output | When connecting to https:// endpoints, accepts invalid or self-signed certificates. Defaults to "false" | "true", "false" |
storageClass | N | Output | The desired storage class for objects during the create operation. Valid AWS storage class types can be found here | STANDARD_IA |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
S3 Bucket Creation
Using with Minio
Minio is a service that exposes local storage as S3-compatible block storage, and it’s a popular alternative to S3 especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:
- Set endpoint to the address of the Minio server, including protocol (http:// or https://) and the optional port at the end. For example, http://minio.local:9000 (the values depend on your environment).
- forcePathStyle must be set to true.
- The value for region is not important; you can set it to us-east-1.
- Depending on your environment, you may need to set disableSSL to true if you’re connecting to Minio using a non-secure connection (using the http:// protocol). If you are using a secure connection (https:// protocol) but with a self-signed certificate, you may need to set insecureSSL to true.
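Putting those tweaks together, a minimal component sketch for a local Minio server might look like the following; the endpoint, credentials, and bucket name are placeholders for your environment:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: minio-s3
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: region
    value: "us-east-1"   # not used by Minio, but must be set
  - name: endpoint
    value: "http://minio.local:9000"
  - name: forcePathStyle
    value: "true"
  - name: disableSSL
    value: "true"        # only because the endpoint above uses plain http
  - name: accessKey
    value: "minio-access-key"
  - name: secretKey
    value: "minio-secret-key"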
For local development, you can use the LocalStack project to emulate AWS S3. Follow these instructions to run LocalStack.
To run LocalStack locally from the command line using Docker, use a docker-compose.yaml
similar to the following:
version: "3.8"
services:
localstack:
container_name: "cont-aws-s3"
image: localstack/localstack:1.4.0
ports:
- "127.0.0.1:4566:4566"
environment:
- DEBUG=1
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "<PATH>/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh" # init hook
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
To use the S3 component, you need to use an existing bucket. The example above uses a LocalStack Initialization Hook to set up the bucket.
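As an illustration, the initialization hook can be a short shell script that creates the bucket once LocalStack is ready; the script name matches the volume mount in the compose file above and the bucket name matches the component below:
#!/bin/bash
# init-aws.sh - runs inside the LocalStack container when it becomes ready
awslocal s3 mb s3://conformance-test-docker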
To use LocalStack with your S3 binding, you need to provide the endpoint configuration in the component metadata. The endpoint is unnecessary when running against production AWS.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: aws-s3
namespace: default
spec:
type: bindings.aws.s3
version: v1
metadata:
- name: bucket
value: conformance-test-docker
- name: endpoint
value: "http://localhost:4566"
- name: accessKey
value: "my-access"
- name: secretKey
value: "my-secret"
- name: region
value: "us-east-1"
To use the S3 component, you need to use an existing bucket. Follow the AWS documentation for creating a bucket.
Binding support
This component supports output binding with the following operations:
- create: Create object
- get: Get object
- delete: Delete object
- list: List objects
Create object
To perform a create operation, invoke the AWS S3 binding with a POST
method and the following JSON body:
Note: by default, a random UUID is generated as the object key. See the metadata support below to set the name.
{
"operation": "create",
"data": "YOUR_CONTENT",
"metadata": {
"storageClass": "STANDARD_IA",
"tags": "project=sashimi,year=2024",
}
}
For example, you can provide a storage class or tags while using the create operation with a Linux curl command:
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "storageClass": "STANDARD_IA", "tags": "project=sashimi,year=2024" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Share object with a presigned URL
To presign an object with a specified time-to-live, use the presignTTL metadata key on a create request.
Valid values for presignTTL are Go duration strings.
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"presignTTL\": \"15m\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "presignTTL": "15m" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the following example JSON:
{
"location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>",
"versionID":"<version ID if Bucket Versioning is enabled>",
"presignURL": "https://<your bucket>.s3.<your region>.amazonaws.com/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWZ7B6WCRGMKFGQ%2F20180210%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180210T171315Z&X-Amz-Expires=1800&X-Amz-Signature=12b74b0788aa036bc7c3d03b3f20c61f1f91cc9ad8873e3314255dc479a25351&X-Amz-SignedHeaders=host"
}
Examples
Save text to a randomly generated UUID file
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a file to an object
To upload a file, encode it as Base64 and let the Binding know to deserialize it:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.aws.s3
version: v1
metadata:
- name: bucket
value: mybucket
- name: region
value: us-west-2
- name: endpoint
value: s3.us-west-2.amazonaws.com
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: sessionToken
value: mysession
- name: decodeBase64
value: <bool>
- name: forcePathStyle
value: <bool>
Then you can upload it as you would normally:
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "key": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Upload from file path
To upload a file from a supplied path (relative or absolute), use the filePath metadata key on a create request with an empty data field.
curl -d "{ \"operation\": \"create\", \"metadata\": { \"filePath\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "metadata": { "filePath": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body will contain the following JSON:
{
"location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>",
"versionID":"<version ID if Bucket Versioning is enabled"
}
Presign an existing object
To presign an existing S3 object with a specified time-to-live, use the presignTTL and key metadata keys on a presign request.
Valid values for presignTTL are Go duration strings.
curl -d "{ \"operation\": \"presign\", \"metadata\": { \"presignTTL\": \"15m\", \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "presign", "metadata": { "presignTTL": "15m", "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the following example JSON:
{
"presignURL": "https://<your bucket>.s3.<your region>.amazonaws.com/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWZ7B6WCRGMKFGQ%2F20180210%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180210T171315Z&X-Amz-Expires=1800&X-Amz-Signature=12b74b0788aa036bc7c3d03b3f20c61f1f91cc9ad8873e3314255dc479a25351&X-Amz-SignedHeaders=host"
}
Get object
To perform a get file operation, invoke the AWS S3 binding with a POST
method and the following JSON body:
{
"operation": "get",
"metadata": {
"key": "my-test-file.txt"
}
}
The metadata parameters are:
- key - the name of the object
Example
curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the value stored in the object.
Delete object
To perform a delete object operation, invoke the AWS S3 binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"key": "my-test-file.txt"
}
}
The metadata parameters are:
- key - the name of the object
Examples
Delete object
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
An HTTP 204 (No Content) and empty body will be returned if successful.
List objects
To perform a list object operation, invoke the S3 binding with a POST
method and the following JSON body:
{
"operation": "list",
"data": {
"maxResults": 10,
"prefix": "file",
"marker": "hvlcCQFSOD5TD",
"delimiter": "i0FvxAn2EOEL6"
}
}
The data parameters are:
- maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
- prefix - (optional) limits the response to keys that begin with the specified prefix.
- marker - (optional) marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. Marker can be any key in the bucket. The marker value may then be used in a subsequent call to request the next set of list items.
- delimiter - (optional) a delimiter is a character you use to group keys.
Response
The response body contains the list of found objects.
The list of objects is returned as a JSON array in the following form:
{
"CommonPrefixes": null,
"Contents": [
{
"ETag": "\"7e94cc9b0f5226557b05a7c2565dd09f\"",
"Key": "hpNdFUxruNuwm",
"LastModified": "2021-08-16T06:44:14Z",
"Owner": {
"DisplayName": "owner name",
"ID": "owner id"
},
"Size": 6916,
"StorageClass": "STANDARD"
}
],
"Delimiter": "",
"EncodingType": null,
"IsTruncated": true,
"Marker": "hvlcCQFSOD5TD",
"MaxKeys": 1,
"Name": "mybucketdapr",
"NextMarker": "hzaUPWjmvyi9W",
"Prefix": ""
}
Related links
5.2.9 - AWS SES binding spec
Component format
To set up the AWS SES binding, create a component of type bindings.aws.ses
. See this guide on how to create and apply a binding configuration.
See Authenticating to AWS for information about authentication-related attributes
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: ses
spec:
type: bindings.aws.ses
version: v1
metadata:
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: region
value: "eu-west-1"
- name: sessionToken
value: mysession
- name: emailFrom
value: "sender@example.com"
- name: emailTo
value: "receiver@example.com"
- name: emailCc
value: "cc@example.com"
- name: emailBcc
value: "bcc@example.com"
- name: subject
value: "subject"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
region | N | Output | The specific AWS region | "eu-west-1" |
accessKey | N | Output | The AWS Access Key to access this resource | "key" |
secretKey | N | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken | N | Output | The AWS session token to use | "sessionToken" |
emailFrom | N | Output | If set, this specifies the email address of the sender. See also | "me@example.com" |
emailTo | N | Output | If set, this specifies the email address of the receiver. See also | "me@example.com" |
emailCc | N | Output | If set, this specifies the email address to CC in. See also | "me@example.com" |
emailBcc | N | Output | If set, this specifies the email address to BCC in. See also | "me@example.com" |
subject | N | Output | If set, this specifies the subject of the email message. See also | "subject of mail" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports output binding with the following operations:
create
Example request
You can specify any of the following optional metadata properties with each request:
emailFrom
emailTo
emailCc
emailBcc
subject
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom, emailTo, emailCc, emailBcc, and subject fields.
The emailTo, emailCc, and emailBcc fields can contain multiple email addresses separated by a semicolon.
Example:
{
"operation": "create",
"metadata": {
"emailTo": "dapr-smtp-binding@example.net",
"emailCc": "cc1@example.net",
"subject": "Email subject"
},
"data": "Testing Dapr SMTP Binding"
}
Related links
5.2.10 - AWS SNS binding spec
Component format
To setup AWS SNS binding create a component of type bindings.aws.sns
. See this guide on how to create and apply a binding configuration.
See Authenticating to AWS for information about authentication-related attributes
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.aws.sns
version: v1
metadata:
- name: topicArn
value: "mytopic"
- name: region
value: "us-west-2"
- name: endpoint
value: "sns.us-west-2.amazonaws.com"
- name: accessKey
value: "*****************"
- name: secretKey
value: "*****************"
- name: sessionToken
value: "*****************"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
topicArn | Y | Output | The SNS topic name | "arn:::topicarn" |
region | Y | Output | The specific AWS region | "us-east-1" |
endpoint | N | Output | The specific AWS endpoint | "sns.us-east-1.amazonaws.com" |
accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken | N | Output | The AWS session token to use | "sessionToken" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports output binding with the following operations:
create
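For example, a create request publishes the data payload as a message on the configured topic. The payload below is illustrative only:
{
  "operation": "create",
  "data": {
    "message": "Hello from Dapr"
  }
}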
Related links
5.2.11 - AWS SQS binding spec
Component format
To setup AWS SQS binding create a component of type bindings.aws.sqs
. See this guide on how to create and apply a binding configuration.
See Authenticating to AWS for information about authentication-related attributes
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.aws.sqs
version: v1
metadata:
- name: queueName
value: "items"
- name: region
value: "us-west-2"
- name: accessKey
value: "*****************"
- name: secretKey
value: "*****************"
- name: sessionToken
value: "*****************"
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
queueName | Y | Input/Output | The SQS queue name | "myqueue" |
region | Y | Input/Output | The specific AWS region | "us-east-1" |
accessKey | Y | Input/Output | The AWS Access Key to access this resource | "key" |
secretKey | Y | Input/Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken | N | Input/Output | The AWS session token to use | "sessionToken" |
direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
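For example, a create request sends the data payload as a message on the configured queue. The payload below is illustrative only:
{
  "operation": "create",
  "data": {
    "orderId": "123",
    "status": "created"
  }
}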
Related links
5.2.12 - Azure Blob Storage binding spec
Component format
To setup Azure Blob Storage binding create a component of type bindings.azure.blobstorage
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.blobstorage
version: v1
metadata:
- name: accountName
value: myStorageAccountName
- name: accountKey
value: ***********
- name: containerName
value: container1
# - name: decodeBase64
# value: <bool>
# - name: getBlobRetryCount
# value: <integer>
# - name: publicAccessLevel
# value: <publicAccessLevel>
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
accountName | Y | Input/Output | The name of the Azure Storage account | "myexampleaccount" |
accountKey | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication. | "access-key" |
containerName | Y | Output | The name of the Blob Storage container to write to | myexamplecontainer |
endpoint | N | Input/Output | Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port. | "http://127.0.0.1:10000" |
decodeBase64 | N | Output | Configuration to decode base64 file content before saving to Blob Storage (in case of saving a file with binary content). Defaults to false | true, false |
getBlobRetryCount | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader. Defaults to 10 | 1, 2 |
publicAccessLevel | N | Output | Specifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to none | blob, container, none |
Microsoft Entra ID authentication
The Azure Blob Storage binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Binding support
This component supports output binding with the following operations:
- create: Create blob
- get: Get blob
- delete: Delete blob
- list: List blobs
The Blob storage component’s input binding triggers and pushes events using Azure Event Grid.
Refer to the Reacting to Blob storage events guide for more set up and more information.
Create blob
To perform a create blob operation, invoke the Azure Blob Storage binding with a POST
method and the following JSON body:
Note: by default, a random UUID is generated. See below for Metadata support to set the name
{
"operation": "create",
"data": "YOUR_CONTENT"
}
Examples
Save text to a randomly generated UUID blob
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific blob
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"blobName\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "blobName": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a file to a blob
To upload a file, encode it as Base64 and let the Binding know to deserialize it:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.blobstorage
version: v1
metadata:
- name: accountName
value: myStorageAccountName
- name: accountKey
value: ***********
- name: containerName
value: container1
- name: decodeBase64
value: true
Then you can upload it as you would normally:
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"blobName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "blobName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body will contain the following JSON:
{
"blobURL": "https://<your account name>. blob.core.windows.net/<your container name>/<filename>"
}
Get blob
To perform a get blob operation, invoke the Azure Blob Storage binding with a POST
method and the following JSON body:
{
"operation": "get",
"metadata": {
"blobName": "myblob",
"includeMetadata": "true"
}
}
The metadata parameters are:
- blobName - the name of the blob
- includeMetadata - (optional) defines if the user defined metadata should be returned or not, defaults to: false
Example
curl -d '{ \"operation\": \"get\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "blobName": "myblob" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the value stored in the blob object. If enabled, the user defined metadata will be returned as HTTP headers in the form:
Metadata.key1: value1
Metadata.key2: value2
Delete blob
To perform a delete blob operation, invoke the Azure Blob Storage binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"blobName": "myblob"
}
}
The metadata parameters are:
- blobName - the name of the blob
- deleteSnapshots - (optional) required if the blob has associated snapshots. Specify one of the following two options:
  - include: Delete the base blob and all of its snapshots
  - only: Delete only the blob’s snapshots and not the blob itself
Examples
Delete blob
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Delete blob snapshots only
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"only\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "only" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Delete blob including snapshots
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"include\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "include" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
An HTTP 204 (No Content) and empty body will be returned if successful.
List blobs
To perform a list blobs operation, invoke the Azure Blob Storage binding with a POST
method and the following JSON body:
{
"operation": "list",
"data": {
"maxResults": 10,
"prefix": "file",
"marker": "2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC01NS03NzgtMjEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--",
"include": {
"snapshots": false,
"metadata": true,
"uncommittedBlobs": false,
"copy": false,
"deleted": false
}
}
}
The data parameters are:
- maxResults - (optional) specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxresults the server will return up to 5,000 items.
- prefix - (optional) filters the results to return only blobs whose names begin with the specified prefix.
- marker - (optional) a string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items.
- include - (optional) specifies one or more datasets to include in the response:
  - snapshots: Specifies that snapshots should be included in the enumeration. Snapshots are listed from oldest to newest in the response. Defaults to: false
  - metadata: Specifies that blob metadata be returned in the response. Defaults to: false
  - uncommittedBlobs: Specifies that blobs for which blocks have been uploaded, but which have not been committed using Put Block List, be included in the response. Defaults to: false
  - copy: Version 2012-02-12 and newer. Specifies that metadata related to any current or previous Copy Blob operation should be included in the response. Defaults to: false
  - deleted: Version 2017-07-29 and newer. Specifies that soft deleted blobs should be included in the response. Defaults to: false
Response
The response body contains the list of found blobs as well as the following HTTP headers:
Metadata.marker: 2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC0zNC04NjctMTEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--
Metadata.number: 10
- marker - the next marker which can be used in a subsequent call to request the next set of list items. See the marker description on the data property of the binding input.
- number - the number of found blobs
The list of blobs is returned as a JSON array in the following form:
[
{
"XMLName": {
"Space": "",
"Local": "Blob"
},
"Name": "file-08-07-2021-09-38-13-776-1.txt",
"Deleted": false,
"Snapshot": "",
"Properties": {
"XMLName": {
"Space": "",
"Local": "Properties"
},
"CreationTime": "2021-07-08T07:38:16Z",
"LastModified": "2021-07-08T07:38:16Z",
"Etag": "0x8D941E3593C6573",
"ContentLength": 1,
"ContentType": "application/octet-stream",
"ContentEncoding": "",
"ContentLanguage": "",
"ContentMD5": "xMpCOKC5I4INzFCab3WEmw==",
"ContentDisposition": "",
"CacheControl": "",
"BlobSequenceNumber": null,
"BlobType": "BlockBlob",
"LeaseStatus": "unlocked",
"LeaseState": "available",
"LeaseDuration": "",
"CopyID": null,
"CopyStatus": "",
"CopySource": null,
"CopyProgress": null,
"CopyCompletionTime": null,
"CopyStatusDescription": null,
"ServerEncrypted": true,
"IncrementalCopy": null,
"DestinationSnapshot": null,
"DeletedTime": null,
"RemainingRetentionDays": null,
"AccessTier": "Hot",
"AccessTierInferred": true,
"ArchiveStatus": "",
"CustomerProvidedKeySha256": null,
"AccessTierChangeTime": null
},
"Metadata": null
}
]
Metadata information
By default, the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to it. This is configurable via the metadata property of the message (all fields are optional).
Applications publishing to an Azure Blob Storage output binding should send a message with the following format:
{
"data": "file content",
"metadata": {
"blobName" : "filename.txt",
"contentType" : "text/plain",
"contentMD5" : "vZGKbMRDAnMs4BIwlXaRvQ==",
"contentEncoding" : "UTF-8",
"contentLanguage" : "en-us",
"contentDisposition" : "attachment",
"cacheControl" : "no-cache",
"custom" : "hello-world"
},
"operation": "create"
}
Related links
5.2.13 - Azure Cosmos DB (Gremlin API) binding spec
Component format
To setup an Azure Cosmos DB (Gremlin API) binding create a component of type bindings.azure.cosmosdb.gremlinapi
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.cosmosdb.gremlinapi
version: v1
metadata:
- name: url
value: "wss://******.gremlin.cosmos.azure.com:443/"
- name: masterKey
value: "*****"
- name: username
value: "*****"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
url | Y | Output | The Cosmos DB url for Gremlin APIs | "wss://******.gremlin.cosmos.azure.com:443/" |
masterKey | Y | Output | The Cosmos DB account master key | "masterKey" |
username | Y | Output | The username of the Cosmos DB database | "/dbs/<database_name>/colls/<graph_name>" |
For more information see Quickstart: Azure Cosmos Graph DB using Gremlin.
Binding support
This component supports output binding with the following operations:
query
Request payload sample
{
"data": {
"gremlin": "g.V().count()"
},
"operation": "query"
}
Related links
5.2.14 - Azure Cosmos DB (SQL API) binding spec
Component format
To setup Azure Cosmos DB binding create a component of type bindings.azure.cosmosdb
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.cosmosdb
version: v1
metadata:
- name: url
value: "https://******.documents.azure.com:443/"
- name: masterKey
value: "*****"
- name: database
value: "OrderDb"
- name: collection
value: "Orders"
- name: partitionKey
value: "<message>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
url | Y | Output | The Cosmos DB url | "https://******.documents.azure.com:443/" |
masterKey | Y | Output | The Cosmos DB account master key | "master-key" |
database | Y | Output | The name of the Cosmos DB database | "OrderDb" |
collection | Y | Output | The name of the container inside the database. | "Orders" |
partitionKey | Y | Output | The name of the key to extract from the payload (document to be created) that is used as the partition key. This name must match the partition key specified upon creation of the Cosmos DB container. | "OrderId", "message" |
For more information see Azure Cosmos DB resource model.
Microsoft Entra ID authentication
The Azure Cosmos DB binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
You can read additional information for setting up Cosmos DB with Azure AD authentication in the section below.
Binding support
This component supports output binding with the following operations:
create
Best Practices for Production Use
Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the Cosmos DB documentation)
Therefore several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:
- Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by scoping your components to specific applications.
- Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
- Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
- Increase the initTimeout value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization, for up to 5 minutes. The default value is 5s and should be increased. When using Kubernetes, increasing this value may also require an update to your Readiness and Liveness probes.
spec:
type: bindings.azure.cosmosdb
version: v1
initTimeout: 5m
metadata:
Data format
The output binding create operation requires the following keys to exist in the payload of every document to be created:
- id: a unique ID for the document to be created
- <partitionKey>: the name of the partition key specified via the spec.partitionKey in the component definition. This must also match the partition key specified upon creation of the Cosmos DB container.
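As an illustration, with the sample component above (where partitionKey is message), a create request payload could look like the following; the field values are placeholders:
{
  "operation": "create",
  "data": {
    "id": "order-1",
    "message": "partition-value",
    "customer": "contoso"
  }
}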
Setting up Cosmos DB for authenticating with Azure AD
When using the Dapr Cosmos DB binding and authenticating with Azure AD, you need to perform a few additional steps to set up your environment.
Prerequisites:
- You need a Service Principal created as per the instructions in the authenticating to Azure page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for azureClientId in the metadata).
- Azure CLI
- jq
- The scripts below are optimized for a bash or zsh shell
When using the Cosmos DB binding, you don’t need to create stored procedures as you do in the case of the Cosmos DB state store.
Granting your Azure AD application access to Cosmos DB
You can find more information on the official documentation, including instructions to assign more granular permissions.
In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you’re going to use a built-in role, “Cosmos DB Built-in Data Contributor”, which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"
az cosmosdb sql role assignment create \
--account-name "$ACCOUNT_NAME" \
--resource-group "$RESOURCE_GROUP" \
--scope "/" \
--principal-id "$PRINCIPAL_ID" \
--role-definition-id "$ROLE_ID"
Related links
5.2.15 - Azure Event Grid binding spec
Component format
To setup an Azure Event Grid binding create a component of type bindings.azure.eventgrid
. See this guide on how to create and apply a binding configuration.
See this for the documentation for Azure Event Grid.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <name>
spec:
type: bindings.azure.eventgrid
version: v1
metadata:
# Required Output Binding Metadata
- name: accessKey
value: "[AccessKey]"
- name: topicEndpoint
value: "[TopicEndpoint]"
# Required Input Binding Metadata
- name: azureTenantId
value: "[AzureTenantId]"
- name: azureSubscriptionId
value: "[AzureSubscriptionId]"
- name: azureClientId
value: "[ClientId]"
- name: azureClientSecret
value: "[ClientSecret]"
- name: subscriberEndpoint
value: "[SubscriberEndpoint]"
- name: handshakePort
# Make sure to pass this as a string, with quotes around the value
value: "[HandshakePort]"
- name: scope
value: "[Scope]"
# Optional Input Binding Metadata
- name: eventSubscriptionName
value: "[EventSubscriptionName]"
# Optional metadata
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
accessKey | Y | Output | The Access Key to be used for publishing an Event Grid Event to a custom topic | "accessKey" |
topicEndpoint | Y | Output | The topic endpoint in which this output binding should publish events | "topic-endpoint" |
azureTenantId | Y | Input | The Azure tenant ID of the Event Grid resource | "tenantId" |
azureSubscriptionId | Y | Input | The Azure subscription ID of the Event Grid resource | "subscriptionId" |
azureClientId | Y | Input | The client ID that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | "clientId" |
azureClientSecret | Y | Input | The client secret that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | "clientSecret" |
subscriberEndpoint | Y | Input | The HTTPS endpoint of the webhook Event Grid sends events (formatted as Cloud Events) to. If you’re not re-writing URLs on ingress, it should be in the form of "https://[YOUR HOSTNAME]/<path>". If testing on your local machine, you can use something like ngrok to create a public endpoint. | "https://[YOUR HOSTNAME]/<path>" |
handshakePort | Y | Input | The container port that the input binding listens on when receiving events on the webhook | "9000" |
scope | Y | Input | The identifier of the resource to which the event subscription needs to be created or updated. See the scope section for more details | "/subscriptions/{subscriptionId}/" |
eventSubscriptionName | N | Input | The name of the event subscription. Event subscription names must be between 3 and 64 characters long and should use alphanumeric letters only | "name" |
direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |
Scope
Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, a resource group, a top-level resource belonging to a resource provider namespace, or an Event Grid topic. For example:
- /subscriptions/{subscriptionId}/ for a subscription
- /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName} for a resource group
- /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} for a resource
- /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName} for an Event Grid topic
Values in braces {} should be replaced with actual values.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
- create: publishes a message on the Event Grid topic
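As an illustration, a create request publishes the data payload to the configured custom topic. The exact shape of data depends on the event schema your topic and subscribers expect; the fields below are placeholders only:
{
  "operation": "create",
  "data": {
    "orderId": "123",
    "status": "created"
  }
}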
Receiving events
You can use the Event Grid binding to receive events from a variety of sources and actions. Learn more about all of the available event sources and handlers that work with Event Grid.
In the following table, you can find the list of Dapr components that can raise events.
Microsoft Entra ID credentials
The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:
- Creating an event subscription when Dapr is started (and updating it if the Dapr configuration changes)
- Authenticating messages delivered by Event Grid to your application.
Requirements:
- The Azure CLI installed.
- PowerShell 7 installed.
- Az module for PowerShell installed:
Install-Module Az -Scope CurrentUser -Repository PSGallery -Force
- Microsoft.Graph module for PowerShell installed:
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
For the first purpose, you will need to create an Azure Service Principal. After creating it, take note of the Microsoft Entra ID application’s clientID (a UUID), and run the following script with the Azure CLI:
# Set the client ID of the app you created
CLIENT_ID="..."
# Scope of the resource, usually in the format:
# `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}`
SCOPE="..."
# First ensure that Azure Resource Manager provider is registered for Event Grid
az provider register --namespace "Microsoft.EventGrid"
az provider show --namespace "Microsoft.EventGrid" --query "registrationState"
# Give the SP needed permissions so that it can create event subscriptions to Event Grid
az role assignment create --assignee "$CLIENT_ID" --role "EventGrid EventSubscription Contributor" --scopes "$SCOPE"
For the second purpose, first download a script:
curl -LO "https://raw.githubusercontent.com/dapr/components-contrib/master/.github/infrastructure/conformance/azure/setup-eventgrid-sp.ps1"
Then, using PowerShell (pwsh), run:
# Set the client ID of the app you created
$clientId = "..."
# Authenticate with the Microsoft Graph
# You may need to add the -TenantId flag to the next command if needed
Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
./setup-eventgrid-sp.ps1 $clientId
Note: if your directory does not have a Service Principal for the application “Microsoft.EventGrid”, you may need to run the command Connect-MgGraph and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant’s admin to sign in and run this PowerShell command: New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7" (the UUID is a constant)
Testing locally
- Install ngrok
- Run locally using a custom port, for example 9000, for handshakes
# Using port 9000 as an example
ngrok http --host-header=localhost 9000
- Configure ngrok’s HTTPS endpoint and the custom port in the input binding metadata
- Run Dapr
# Using default ports for .NET core web api and Dapr as an example
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
Testing on Kubernetes
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren’t accepted. In order to enable traffic from the public internet to your app’s Dapr sidecar you need an ingress controller enabled with Dapr. There’s a good article on this topic: Kubernetes NGINX ingress controller with Dapr.
To get started, first create a dapr-annotations.yaml
file for Dapr annotations:
controller:
podAnnotations:
dapr.io/enabled: "true"
dapr.io/app-id: "nginx-ingress"
dapr.io/app-port: "80"
Then install the NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yaml -n default
# Get the public IP for the ingress controller
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'
If deploying to Azure Kubernetes Service, you can follow the official Microsoft documentation for rest of the steps:
- Add an A record to your DNS zone
- Install cert-manager
- Create a CA cluster issuer
The final step for enabling communication between Event Grid and Dapr is to define http and a custom port for your app’s service and an ingress in Kubernetes. This example uses a .NET Core web API, the Dapr default ports, and custom port 9000 for handshakes.
# dotnetwebapi.yaml
kind: Service
apiVersion: v1
metadata:
name: dotnetwebapi
labels:
app: dotnetwebapi
spec:
selector:
app: dotnetwebapi
ports:
- name: webapi
protocol: TCP
port: 80
targetPort: 80
- name: dapr-eventgrid
protocol: TCP
port: 9000
targetPort: 9000
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: eventgrid-input-rule
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- dapr.<your custom domain>
secretName: dapr-tls
rules:
- host: dapr.<your custom domain>
http:
paths:
- path: /api/events
backend:
serviceName: dotnetwebapi
servicePort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dotnetwebapi
labels:
app: dotnetwebapi
spec:
replicas: 1
selector:
matchLabels:
app: dotnetwebapi
template:
metadata:
labels:
app: dotnetwebapi
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "dotnetwebapi"
dapr.io/app-port: "5000"
spec:
containers:
- name: webapi
image: <your container image>
ports:
- containerPort: 5000
imagePullPolicy: Always
Deploy the binding and app (including ingress) to Kubernetes
# Deploy Dapr components
kubectl apply -f eventgrid.yaml
# Deploy your app and Nginx ingress
kubectl apply -f dotnetwebapi.yaml
Note: This manifest deploys everything to Kubernetes’ default namespace.
Troubleshooting possible issues with Nginx controller
After the initial deployment, the “Daprized” Nginx controller can malfunction. To check the logs and fix the issue (if it exists), follow these steps.
$ kubectl get pods -l app=nginx-ingress
NAME READY STATUS RESTARTS AGE
nginx-nginx-ingress-controller-649df94867-fp6mg 2/2 Running 0 51m
nginx-nginx-ingress-default-backend-6d96c457f6-4nbj5 1/1 Running 0 55m
$ kubectl logs nginx-nginx-ingress-controller-649df94867-fp6mg nginx-ingress-controller
# If you see 503s logged from calls to webhook endpoint '/api/events' restart the pod
# .."OPTIONS /api/events HTTP/1.1" 503..
$ kubectl delete pod nginx-nginx-ingress-controller-649df94867-fp6mg
# Check the logs again - it should start returning 200
# .."OPTIONS /api/events HTTP/1.1" 200..
Related links
5.2.16 - Azure Event Hubs binding spec
Component format
To setup an Azure Event Hubs binding, create a component of type bindings.azure.eventhubs
. See this guide on how to create and apply a binding configuration.
See this for instructions on how to set up an Event Hub.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.eventhubs
version: v1
metadata:
# Hub name ("topic")
- name: eventHub
value: "mytopic"
- name: consumerGroup
value: "myapp"
# Either connectionString or eventHubNamespace is required
# Use connectionString when *not* using Microsoft Entra ID
- name: connectionString
value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
# Use eventHubNamespace when using Microsoft Entra ID
- name: eventHubNamespace
value: "namespace"
- name: enableEntityManagement
value: "false"
- name: enableInOrderMessageDelivery
value: "false"
# The following four properties are needed only if enableEntityManagement is set to true
- name: resourceGroupName
value: "test-rg"
- name: subscriptionID
value: "value of Azure subscription ID"
- name: partitionCount
value: "1"
- name: messageRetentionInDays
value: "3"
# Checkpoint store attributes
- name: storageAccountName
value: "myeventhubstorage"
- name: storageAccountKey
value: "112233445566778899"
- name: storageContainerName
value: "myeventhubstoragecontainer"
# Alternative to passing storageAccountKey
- name: storageConnectionString
value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
# Optional metadata
- name: getAllMessageProperties
value: "true"
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
eventHub | Y* | Input/Output | The name of the Event Hubs hub (“topic”). Required if using Microsoft Entra ID authentication or if the connection string doesn’t contain an EntityPath value | mytopic |
connectionString | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace. Mutually exclusive with the eventHubNamespace field. Required when not using Microsoft Entra ID authentication | "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}" or "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}" |
eventHubNamespace | Y* | Input/Output | The Event Hub Namespace name. Mutually exclusive with the connectionString field. Required when using Microsoft Entra ID authentication | "namespace" |
enableEntityManagement | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: false | "true", "false" |
enableInOrderMessageDelivery | N | Input/Output | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes partitionKey is set when publishing or posting to ensure ordering across partitions. Default: false | "true", "false" |
resourceGroupName | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | "test-rg" |
subscriptionID | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | "azure subscription id" |
partitionCount | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: "1" | "2" |
messageRetentionInDays | N | Input/Output | Number of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: "1" | "90" |
consumerGroup | Y | Input | The name of the Event Hubs Consumer Group to listen on | "group1" |
storageAccountName | Y | Input | Storage account name to use for the checkpoint store. | "myeventhubstorage" |
storageAccountKey | Y* | Input | Storage account key for the checkpoint store account. When using Microsoft Entra ID, it’s possible to omit this if the service principal has access to the storage account too. | "112233445566778899" |
storageConnectionString | Y* | Input | Connection string for the checkpoint store, alternative to specifying storageAccountKey | "DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>" |
storageContainerName | Y | Input | Storage container name for the storage account name. | "myeventhubstoragecontainer" |
getAllMessageProperties | N | Input | When set to true, retrieves all user/app/custom properties from the Event Hub message and forwards them in the returned event metadata. Default setting is "false". | "true", "false" |
direction | N | Input/Output | The direction of the binding. | "input", "output", "input, output" |
Microsoft Entra ID authentication
The Azure Event Hubs binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Binding support
This component supports output binding with the following operations:
- create: publishes a new message to Azure Event Hubs
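For example, a create request publishes the data payload as an event. The optional partitionKey metadata (referenced in the enableInOrderMessageDelivery description above) keeps related events on the same partition; all values below are placeholders:
{
  "operation": "create",
  "data": {
    "deviceId": "sensor-1",
    "temperature": 21.5
  },
  "metadata": {
    "partitionKey": "sensor-1"
  }
}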
Input Binding to Azure IoT Hub Events
Azure IoT Hub provides an endpoint that is compatible with Event Hubs, so Dapr apps can create input bindings to read Azure IoT Hub events using the Event Hubs bindings component.
The device-to-cloud events created by Azure IoT Hub devices will contain additional IoT Hub System Properties, and the Azure Event Hubs binding for Dapr will return the following as part of the response metadata:
System Property Name | Description & Routing Query Keyword |
---|---|
iothub-connection-auth-generation-id |
The connectionDeviceGenerationId of the device that sent the message. See IoT Hub device identity properties. |
iothub-connection-auth-method |
The connectionAuthMethod used to authenticate the device that sent the message. |
iothub-connection-device-id |
The deviceId of the device that sent the message. See IoT Hub device identity properties. |
iothub-connection-module-id |
The moduleId of the device that sent the message. See IoT Hub device identity properties. |
iothub-enqueuedtime |
The enqueuedTime in RFC3339 format that the device-to-cloud message was received by IoT Hub. |
message-id |
The user-settable AMQP messageId. |
For example, the headers of an HTTP Read()
response would contain:
{
'user-agent': 'fasthttp',
'host': '127.0.0.1:3000',
'content-type': 'application/json',
'content-length': '120',
'iothub-connection-device-id': 'my-test-device',
'iothub-connection-auth-generation-id': '637618061680407492',
'iothub-connection-auth-method': '{"scope":"module","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}',
'iothub-connection-module-id': 'my-test-module-a',
'iothub-enqueuedtime': '2021-07-13T22:08:09Z',
'message-id': 'my-custom-message-id',
'x-opt-sequence-number': '35',
'x-opt-enqueued-time': '2021-07-13T22:08:09Z',
'x-opt-offset': '21560',
'traceparent': '00-4655608164bc48b985b42d39865f3834-ed6cf3697c86e7bd-01'
}
Related links
5.2.17 - Azure OpenAI binding spec
Component format
To set up an Azure OpenAI binding, create a component of type bindings.azure.openai
. See this guide on how to create and apply a binding configuration.
See the documentation for the Azure OpenAI Service for more information.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.openai
version: v1
metadata:
- name: apiKey # Required
value: "1234567890abcdef"
- name: endpoint # Required
value: "https://myopenai.openai.azure.com"
Warning
The above example uses apiKey
as a plain string. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
endpoint |
Y | Output | Azure OpenAI service endpoint URL. | "https://myopenai.openai.azure.com" |
apiKey |
Y* | Output | The access key of the Azure OpenAI service. Only required when not using Microsoft Entra ID authentication. | "1234567890abcdef" |
azureTenantId |
Y* | Input | The tenant ID of the Azure OpenAI resource. Only required when apiKey is not provided. |
"tenentID" |
azureClientId |
Y* | Input | The client ID that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided. |
"clientId" |
azureClientSecret |
Y* | Input | The client secret that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided. |
"clientSecret" |
Microsoft Entra ID authentication
The Azure OpenAI binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Example Configuration
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.openai
version: v1
metadata:
- name: endpoint
value: "https://myopenai.openai.azure.com"
- name: azureTenantId
value: "***"
- name: azureClientId
value: "***"
- name: azureClientSecret
value: "***"
Binding support
This component supports output binding with the following operations:
- completion : Completion API
- chat-completion : Chat Completion API
- get-embedding : Embedding API
Completion API
To call the completion API with a prompt, invoke the Azure OpenAI binding with a POST
method and the following JSON body:
{
"operation": "completion",
"data": {
"deploymentId": "my-model",
"prompt": "A dog is",
"maxTokens":5
}
}
The data parameters are:
- deploymentId - string that specifies the model deployment ID to use.
- prompt - string that specifies the prompt to generate completions for.
- maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for the completion API.
- temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for the completion API.
- topP - (optional) defines the nucleus sampling probability, used as an alternative to sampling with temperature. Defaults to 1.0 for the completion API.
- n - (optional) defines the number of completions to generate. Defaults to 1 for the completion API.
- presencePenalty - (optional) number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for the completion API.
- frequencyPenalty - (optional) number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for the completion API.
Read more about the importance and usage of these parameters in the Azure OpenAI API documentation.
Examples
curl -d '{ "data": {"deploymentId: "my-model" , "prompt": "A dog is ", "maxTokens":15}, "operation": "completion" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the following JSON:
[
{
"finish_reason": "length",
"index": 0,
"text": " a pig in a dress.\n\nSun, Oct 20, 2013"
},
{
"finish_reason": "length",
"index": 1,
"text": " the only thing on earth that loves you\n\nmore than he loves himself.\"\n\n"
}
]
Chat Completion API
To perform a chat-completion operation, invoke the Azure OpenAI binding with a POST
method and the following JSON body:
{
"operation": "chat-completion",
"data": {
"deploymentId": "my-model",
"messages": [
{
"role": "system",
"message": "You are a bot that gives really short replies"
},
{
"role": "user",
"message": "Tell me a joke"
}
],
"n": 2,
"maxTokens": 30,
"temperature": 1.2
}
}
The data parameters are:
- deploymentId - string that specifies the model deployment ID to use.
- messages - array of messages that will be used to generate chat completions. Each message is of the form:
  - role - string that specifies the role of the message. Can be either user, system or assistant.
  - message - string that specifies the conversation message for the role.
- maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for the chat completion API.
- temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for the chat completion API.
- topP - (optional) defines the nucleus sampling probability, used as an alternative to sampling with temperature. Defaults to 1.0 for the chat completion API.
- n - (optional) defines the number of completions to generate. Defaults to 1 for the chat completion API.
- presencePenalty - (optional) number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for the chat completion API.
- frequencyPenalty - (optional) number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for the chat completion API.
Example
curl -d '{
"data": {
"deploymentId": "my-model",
"messages": [
{
"role": "system",
"message": "You are a bot that gives really short replies"
},
{
"role": "user",
"message": "Tell me a joke"
}
],
"n": 2,
"maxTokens": 30,
"temperature": 1.2
},
"operation": "chat-completion"
}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the following JSON:
[
{
"finish_reason": "stop",
"index": 0,
"message": {
"content": "Why was the math book sad? Because it had too many problems.",
"role": "assistant"
}
},
{
"finish_reason": "stop",
"index": 1,
"message": {
"content": "Why did the tomato turn red? Because it saw the salad dressing!",
"role": "assistant"
}
}
]
Get Embedding API
The get-embedding
operation returns a vector representation of a given input that can be easily consumed by machine learning models and other algorithms.
To perform a get-embedding
operation, invoke the Azure OpenAI binding with a POST
method and the following JSON body:
{
"operation": "get-embedding",
"data": {
"deploymentId": "my-model",
"message": "The capital of France is Paris."
}
}
The data parameters are:
- deploymentId - string that specifies the model deployment ID to use.
- message - string that specifies the text to embed.
Example
curl -d '{
"data": {
"deploymentId": "embeddings",
"message": "The capital of France is Paris."
},
"operation": "get-embedding"
}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the following JSON:
[0.018574921,-0.00023652936,-0.0057790717,.... (1536 floats total for ada)]
Learn more about the Azure OpenAI output binding
Watch the following Community Call presentation to learn more about the Azure OpenAI output binding.
Related links
5.2.18 - Azure Service Bus Queues binding spec
Component format
To set up the Azure Service Bus Queues binding, create a component of type bindings.azure.servicebusqueues
. See this guide on how to create and apply a binding configuration.
Connection String Authentication
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.servicebusqueues
version: v1
metadata:
- name: connectionString # Required when not using Azure Authentication.
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
- name: queueName
value: "queue1"
# - name: timeoutInSec # Optional
# value: "60"
# - name: handlerTimeoutInSec # Optional
# value: "60"
# - name: disableEntityManagement # Optional
# value: "false"
# - name: maxDeliveryCount # Optional
# value: "3"
# - name: lockDurationInSec # Optional
# value: "60"
# - name: lockRenewalInSec # Optional
# value: "20"
# - name: maxActiveMessages # Optional
# value: "10000"
# - name: maxConcurrentHandlers # Optional
# value: "10"
# - name: defaultMessageTimeToLiveInSec # Optional
# value: "10"
# - name: autoDeleteOnIdleInSec # Optional
# value: "3600"
# - name: minConnectionRecoveryInSec # Optional
# value: "2"
# - name: maxConnectionRecoveryInSec # Optional
# value: "300"
# - name: maxRetriableErrorsPerSec # Optional
# value: "10"
# - name: publishMaxRetries # Optional
# value: "5"
# - name: publishInitialRetryIntervalInMs # Optional
# value: "500"
# - name: direction
# value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
connectionString |
Y | Input/Output | The Service Bus connection string. Required unless using Microsoft Entra ID authentication. | "Endpoint=sb://************" |
queueName |
Y | Input/Output | The Service Bus queue name. Queue names are case-insensitive and will always be forced to lowercase. | "queuename" |
timeoutInSec |
N | Input/Output | Timeout for all invocations to the Azure Service Bus endpoint, in seconds. Note that this option impacts network calls and is unrelated to the TTL applied to messages. Default: "60" |
"60" |
namespaceName |
N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | "namespace.servicebus.windows.net" |
disableEntityManagement |
N | Input/Output | When set to true, queues and subscriptions do not get created automatically. Default: "false" |
"true" , "false" |
lockDurationInSec |
N | Input/Output | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | "30" |
autoDeleteOnIdleInSec |
N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: "0" (disabled) |
"3600" |
defaultMessageTimeToLiveInSec |
N | Input/Output | Default message time to live, in seconds. Used during subscription creation only. | "10" |
maxDeliveryCount |
N | Input/Output | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | "10" |
minConnectionRecoveryInSec |
N | Input/Output | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: "2" |
"5" |
maxConnectionRecoveryInSec |
N | Input/Output | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: "300" (5 minutes) |
"600" |
maxActiveMessages |
N | Input/Output | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: "1" |
"1" |
handlerTimeoutInSec |
N | Input | Timeout for invoking the app’s handler. Default: "0" (no timeout) |
"30" |
minConnectionRecoveryInSec |
N | Input | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: "2" |
"5" |
maxConnectionRecoveryInSec |
N | Input | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the binding waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: "300" (5 minutes) |
"600" |
lockRenewalInSec |
N | Input | Defines the frequency at which buffered message locks will be renewed. Default: "20" . |
"20" |
maxActiveMessages |
N | Input | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: "1" |
"2000" |
maxConcurrentHandlers |
N | Input | Defines the maximum number of concurrent message handlers; set to 0 for unlimited. Default: "1" |
"10" |
maxRetriableErrorsPerSec |
N | Input | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: "10" |
"10" |
publishMaxRetries |
N | Output | The max number of retries when Azure Service Bus responds with “too busy” in order to throttle messages. Default: "5" |
"5" |
publishInitialRetryIntervalInMs |
N | Output | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: "500" |
"500" |
direction |
N | Input/Output | The direction of the binding | "input" , "output" , "input, output" |
Microsoft Entra ID authentication
The Azure Service Bus Queues binding component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Example Configuration
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.servicebusqueues
version: v1
metadata:
- name: azureTenantId
value: "***"
- name: azureClientId
value: "***"
- name: azureClientSecret
value: "***"
- name: namespaceName
# Required when using Azure Authentication.
# Must be a fully-qualified domain name
value: "servicebusnamespace.servicebus.windows.net"
- name: queueName
value: queue1
- name: ttlInSeconds
value: "60"
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
: publishes a message to the specified queue
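As a minimal sketch (the binding name myServiceBusQueue and port 3500 match the examples later in this section), a message can be published with:
curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"operation": "create"
}'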
Message metadata
Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message through an Invoke
binding call with the create
operation.
Sending a message with metadata
To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here. The settable fields are listed below, followed by an example.
metadata.MessageId
metadata.CorrelationId
metadata.SessionId
metadata.Label
metadata.ReplyTo
metadata.PartitionKey
metadata.To
metadata.ContentType
metadata.ScheduledEnqueueTimeUtc
metadata.ReplyToSessionId
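For example, a sketch of a request that sets the message ID and correlation ID through query parameters (the binding name and the metadata values are placeholders) looks like this:
curl -X POST "http://localhost:3500/v1.0/bindings/myServiceBusQueue?metadata.MessageId=msg-001&metadata.CorrelationId=corr-001" \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"operation": "create"
}'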
Note
Receiving a message with metadata
When Dapr calls your application, it attaches Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata. In addition to the settable metadata listed above, you can also access the following read-only message metadata.
metadata.DeliveryCount
metadata.LockedUntilUtc
metadata.LockToken
metadata.EnqueuedTimeUtc
metadata.SequenceNumber
To find out more details on the purpose of any of these metadata properties refer to the official Azure Service Bus documentation.
In addition, all entries of ApplicationProperties
from the original Azure Service Bus message are appended as metadata.<application property's name>
.
Note
All times are populated by the server and are not adjusted for clock skews.
Specifying a TTL per message
Time to live can be defined on a per-queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at the queue level.
To set time to live at message level use the metadata
section in the request body during the binding invocation: the field name is ttlInSeconds
.
curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"ttlInSeconds": "60"
},
"operation": "create"
}'
Schedule a message
A message can be scheduled for delayed processing.
To schedule a message, use the metadata
section in the request body during the binding invocation: the field name is ScheduledEnqueueTimeUtc
.
The supported timestamp formats are RFC1123 and RFC3339.
curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"ScheduledEnqueueTimeUtc": "Tue, 02 Jan 2024 15:04:05 GMT"
},
"operation": "create"
}'
Related links
5.2.19 - Azure SignalR binding spec
Component format
To set up the Azure SignalR binding, create a component of type bindings.azure.signalr
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.signalr
version: v1
metadata:
- name: connectionString
value: "Endpoint=https://<your-azure-signalr>.service.signalr.net;AccessKey=<your-access-key>;Version=1.0;"
- name: hub # Optional
value: "<hub name>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
connectionString |
Y | Output | The Azure SignalR connection string | "Endpoint=https://<your-azure-signalr>.service.signalr.net;AccessKey=<your-access-key>;Version=1.0;" |
hub |
N | Output | Defines the hub in which the message will be sent. The hub can be dynamically defined as a metadata value when publishing to an output binding (key is “hub”) | "myhub" |
endpoint |
N | Output | Endpoint of Azure SignalR; required if not included in the connectionString or if using Microsoft Entra ID |
"https://<your-azure-signalr>.service.signalr.net" |
accessKey |
N | Output | Access key | "your-access-key" |
Microsoft Entra ID authentication
The Azure SignalR binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.
You have two options to authenticate this component with Microsoft Entra ID:
- Pass individual metadata keys:
  - endpoint for the endpoint
  - If needed: azureClientId, azureTenantId and azureClientSecret
- Pass a connection string with AuthType=aad specified:
  - System-assigned managed identity: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;Version=1.0;
  - User-assigned managed identity: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;ClientId=<clientid>;Version=1.0;
  - Microsoft Entra ID application: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;ClientId=<clientid>;ClientSecret=<clientsecret>;TenantId=<tenantid>;Version=1.0;
  Note that you cannot use a connection string if your application’s ClientSecret contains a ; character.
Binding support
This component supports output binding with the following operations:
create
Additional information
By default the Azure SignalR output binding will broadcast messages to all connected users. To narrow the audience there are two options, both configurable in the Metadata property of the message:
- group: Sends the message to a specific Azure SignalR group
- user: Sends the message to a specific Azure SignalR user
Applications publishing to an Azure SignalR output binding should send a message with the following contract:
{
"data": {
"Target": "<enter message name>",
"Arguments": [
{
"sender": "dapr",
"text": "Message from dapr output binding"
}
]
},
"metadata": {
"group": "chat123"
},
"operation": "create"
}
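For example, to target a single user rather than a group, the same contract can be sent with a user metadata value instead. This is a sketch in which the binding name mySignalRBinding and the user ID are placeholders:
curl -X POST http://localhost:3500/v1.0/bindings/mySignalRBinding \
-H "Content-Type: application/json" \
-d '{
"data": {
"Target": "<enter message name>",
"Arguments": [
{
"sender": "dapr",
"text": "Message from dapr output binding"
}
]
},
"metadata": {
"user": "user123"
},
"operation": "create"
}'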
For more information on integrating Azure SignalR into a solution, check the documentation
Related links
5.2.20 - Azure Storage Queues binding spec
Component format
To set up the Azure Storage Queues binding, create a component of type bindings.azure.storagequeues
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.azure.storagequeues
version: v1
metadata:
- name: accountName
value: "account1"
- name: accountKey
value: "***********"
- name: queueName
value: "myqueue"
# - name: pollingInterval
# value: "30s"
# - name: ttlInSeconds
# value: "60"
# - name: decodeBase64
# value: "false"
# - name: encodeBase64
# value: "false"
# - name: endpoint
# value: "http://127.0.0.1:10001"
# - name: visibilityTimeout
# value: "30s"
# - name: initialVisibilityDelay
# value: "30s"
# - name: direction
# value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
accountName |
Y | Input/Output | The name of the Azure Storage account | "account1" |
accountKey |
Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication. | "access-key" |
queueName |
Y | Input/Output | The name of the Azure Storage queue | "myqueue" |
pollingInterval |
N | Output | Set the interval to poll Azure Storage Queues for new messages, as a Go duration value. Default: "10s" |
"30s" |
ttlInSeconds |
N | Output | Parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See also | "60" |
decodeBase64 |
N | Input | Configuration to decode base64 content received from the Storage Queue into a string. Defaults to false |
true , false |
encodeBase64 |
N | Output | If enabled base64 encodes the data payload before uploading to Azure storage queues. Default false . |
true , false |
endpoint |
N | Input/Output | Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https:// ), the IP or FQDN, and optional port. |
"http://127.0.0.1:10001" or "https://accountName.queue.example.com" |
initialVisibilityDelay |
N | Input | Sets a delay before a message becomes visible in the queue after being added. It can also be specified per message by setting the initialVisibilityDelay property in the invocation request’s metadata. Defaults to 0 seconds. |
"30s" |
visibilityTimeout |
N | Input | Allows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds. |
"100s" |
direction |
N | Input/Output | Direction of the binding. | "input" , "output" , "input, output" |
Microsoft Entra ID authentication
The Azure Storage Queue binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
Specifying a TTL per message
Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.
To set time to live at message level use the metadata
section in the request body during the binding invocation.
The field name is ttlInSeconds
.
Example:
curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"ttlInSeconds": "60"
},
"operation": "create"
}'
Specifying an initial visibility delay per message
An initial visibility delay can be defined on queue level or at the message level. The value defined at message level overwrites any value set at a queue level.
To set an initial visibility delay value at the message level, use the metadata
section in the request body during the binding invocation.
The field name is initialVisibilityDelay
.
Example:
curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"initialVisbilityDelay": "30"
},
"operation": "create"
}'
Related links
5.2.21 - Cloudflare Queues bindings spec
Component format
This output binding for Dapr allows interacting with Cloudflare Queues to publish new messages. It is currently not possible to consume messages from a Queue using Dapr.
To set up a Cloudflare Queues binding, create a component of type bindings.cloudflare.queues
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.cloudflare.queues
version: v1
# Increase the initTimeout if Dapr is managing the Worker for you
initTimeout: "120s"
metadata:
# Name of the existing Cloudflare Queue (required)
- name: queueName
value: ""
# Name of the Worker (required)
- name: workerName
value: ""
# PEM-encoded private Ed25519 key (required)
- name: key
value: |
-----BEGIN PRIVATE KEY-----
MC4CAQ...
-----END PRIVATE KEY-----
# Cloudflare account ID (required to have Dapr manage the Worker)
- name: cfAccountID
value: ""
# API token for Cloudflare (required to have Dapr manage the Worker)
- name: cfAPIToken
value: ""
# URL of the Worker (required if the Worker has been pre-created outside of Dapr)
- name: workerUrl
value: ""
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
queueName |
Y | Output | Name of the existing Cloudflare Queue | "mydaprqueue" |
key |
Y | Output | Ed25519 private key, PEM-encoded | See example above |
cfAccountID |
Y/N | Output | Cloudflare account ID. Required to have Dapr manage the worker. | "456789abcdef8b5588f3d134f74acdef" |
cfAPIToken |
Y/N | Output | API token for Cloudflare. Required to have Dapr manage the Worker. | "secret-key" |
workerUrl |
Y/N | Output | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr. | "https://mydaprqueue.mydomain.workers.dev" |
When you configure Dapr to create your Worker for you, you may need to set a longer value for the
initTimeout
property of the component, to allow enough time for the Worker script to be deployed. For example: initTimeout: "120s"
Binding support
This component supports output binding with the following operations:
publish (alias: create): Publish a message to the Queue.
The data passed to the binding is used as-is for the body of the message published to the Queue.
This operation does not accept any metadata property.
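For illustration, here is a sketch of a publish invocation; the binding name mycfqueue and the payload are placeholders, and the data payload is forwarded to the Queue as-is:
curl -X POST http://localhost:3500/v1.0/bindings/mycfqueue \
-H "Content-Type: application/json" \
-d '{
"data": { "hello": "world" },
"operation": "publish"
}'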
Create a Cloudflare Queue
To use this component, you must have a Cloudflare Queue created in your Cloudflare account.
You can create a new Queue in one of two ways:
- Using the Cloudflare dashboard
- Using the Wrangler CLI:
# Authenticate if needed with `npx wrangler login` first
npx wrangler queues create <NAME>
# For example: `npx wrangler queues create myqueue`
Configuring the Worker
Because Cloudflare Queues can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Queue.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on workerd.
Important
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Queues bindings, and do not use the same Worker script for different Cloudflare components in Dapr (for example, the Workers KV state store and the Queues binding).
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
- workerName : Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the “workers.dev” domain configured for your Cloudflare account is mydomain.workers.dev and you set workerName to mydaprqueue, the Worker that Dapr deploys will be available at https://mydaprqueue.mydomain.workers.dev.
- cfAccountID : ID of your Cloudflare account. You can find this in your browser’s URL bar after logging into the Cloudflare dashboard, with the ID being the hex string right after dash.cloudflare.com. For example, if the URL is https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef, the value for cfAccountID is 456789abcdef8b5588f3d134f74acdef.
- cfAPIToken : API token with permission to create and edit Workers. You can create it from the “API Tokens” page in the “My Profile” section in the Cloudflare dashboard:
  - Click on “Create token”.
  - Select the “Edit Cloudflare Workers” template.
  - Follow the on-screen instructions to generate a new API token.
When Dapr is configured to manage the Worker for you, it checks when the Dapr runtime starts that the Worker exists and is up-to-date. If the Worker doesn’t exist, or if it’s using an outdated version, Dapr creates or upgrades it for you automatically.
If you’d rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
- Create a new folder where you’ll place the source code of the Worker, for example: daprworker.
- If you haven’t already, authenticate with Wrangler (the Cloudflare Workers CLI) using: npx wrangler login.
- Inside the newly-created folder, create a new wrangler.toml file with the contents below, filling in the missing information as appropriate:
# Name of your Worker, for example "mydaprqueue"
name = ""
# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"
[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprqueue".
TOKEN_AUDIENCE = ""
# Set the next two values to the name of your Queue, for example "myqueue".
# Note that they will both be set to the same value.
[[queues.producers]]
queue = ""
binding = ""
Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the public part of the key when deploying a Worker!
- Copy the (pre-compiled and minified) code of the Worker into the worker.js file. You can do that with this command:
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-1.15"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
- Deploy the Worker using Wrangler:
npx wrangler publish
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
- workerName : Name of the Worker script. This is the value you set in the name property in the wrangler.toml file.
- workerUrl : URL of the deployed Worker. The npx wrangler command will show the full URL to you, for example https://mydaprqueue.mydomain.workers.dev.
Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Cloudflare Queue). These include industry-standard measures such as:
- All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
- All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
- The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you’re using an older version of OpenSSL.
Note for Mac users: on macOS, the “openssl” binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn’t support Ed25519 keys. If you’re using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using brew install openssl@3 and then replace openssl in the commands below with $(brew --prefix)/opt/openssl@3/bin/openssl.
You can generate a new Ed25519 key pair with OpenSSL using:
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
On macOS, using openssl@3 from Homebrew:
$(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
$(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem
If you don’t have the step CLI already, install it following the official instructions.
Next, you can generate a new Ed25519 key pair with the step CLI using:
step crypto keypair \
public.pem private.pem \
--kty OKP --curve Ed25519 \
--insecure --no-password
Regardless of how you generated your key pair, with the instructions above you’ll have two files:
- private.pem contains the private part of the key; use the contents of this file for the key property of the component’s metadata.
- public.pem contains the public part of the key, which you’ll need only if you’re deploying a Worker manually (as per the instructions in the previous section).
Warning
Protect the private part of your key and treat it as a secret value!
Related links
5.2.22 - commercetools GraphQL binding spec
Component format
To set up the commercetools GraphQL binding, create a component of type bindings.commercetools
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.commercetools
version: v1
metadata:
- name: region # required.
value: "region"
- name: provider # required.
value: "gcp"
- name: projectKey # required.
value: "<project-key>"
- name: clientID # required.
value: "*****************"
- name: clientSecret # required.
value: "*****************"
- name: scopes # required.
value: "<project-scopes>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
region |
Y | Output | The region of the commercetools project | "europe-west1" |
provider |
Y | Output | The cloud provider, either gcp or aws | "gcp" , "aws" |
projectKey |
Y | Output | The commercetools project key | |
clientID |
Y | Output | The commercetools client ID for the project | |
clientSecret |
Y | Output | The commercetools client secret for the project | |
scopes |
Y | Output | The commercetools scopes for the project | "manage_project:project-key" |
For more information see commercetools - Creating an API Client and commercetools - Regions.
Binding support
This component supports output binding with the following operations:
create
Related links
- Basic schema for a Dapr component
- Bindings building block
- How-To: Trigger application with input binding
- How-To: Use bindings to interface with external resources
- Bindings API reference
- Sample app that leverages the commercetools binding with sample GraphQL query
5.2.23 - Cron binding spec
Component format
To set up a cron binding, create a component of type bindings.cron
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.cron
version: v1
metadata:
- name: schedule
value: "@every 15m" # valid cron schedule
- name: direction
value: "input"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
schedule |
Y | Input | The valid cron schedule to use. See this for more details | "@every 15m" |
direction |
N | Input | The direction of the binding | "input" |
Schedule Format
The Dapr cron binding supports the following formats:
Character | Descriptor | Acceptable values |
---|---|---|
1 | Second | 0 to 59, or * |
2 | Minute | 0 to 59, or * |
3 | Hour | 0 to 23, or * (UTC) |
4 | Day of the month | 1 to 31, or * |
5 | Month | 1 to 12, or * |
6 | Day of the week | 0 to 7 (where 0 and 7 represent Sunday), or * |
For example:
- 30 * * * * * - every 30 seconds
- 0 */15 * * * * - every 15 minutes
- 0 30 3-6,20-23 * * * - every hour on the half hour in the range 3-6am, 8-11pm
- CRON_TZ=America/New_York 0 30 04 * * * - every day at 4:30am New York time
You can learn more about cron and the supported formats here
For ease of use, the Dapr cron binding also supports a few shortcuts:
- @every 15s where s is seconds, m minutes, and h hours
- @daily or @hourly which runs at that period from the time the binding is initialized
Listen to the cron binding
After setting up the cron binding, all you need to do is listen on an endpoint that matches the name of your component. Assume the [NAME] is scheduled
. This will be made as an HTTP POST
request. The below example shows how a simple Node.js Express application can receive calls on the /scheduled
endpoint and write a message to the console.
app.post('/scheduled', async function(req, res){
console.log("scheduled endpoint called", req.body)
res.status(200).send()
});
When running this code, note that the /scheduled
endpoint is called every fifteen minutes by the Dapr sidecar.
Binding support
This component supports the input binding interface.
Related links
5.2.24 - GCP Pub/Sub binding spec
Component format
To set up a GCP Pub/Sub binding, create a component of type bindings.gcp.pubsub
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.gcp.pubsub
version: v1
metadata:
- name: topic
value: "topic1"
- name: subscription
value: "subscription1"
- name: type
value: "service_account"
- name: project_id
value: "project_111"
- name: private_key_id
value: "*************"
- name: client_email
value: "name@domain.com"
- name: client_id
value: "1111111111111111"
- name: auth_uri
value: "https://accounts.google.com/o/oauth2/auth"
- name: token_uri
value: "https://oauth2.googleapis.com/token"
- name: auth_provider_x509_cert_url
value: "https://www.googleapis.com/oauth2/v1/certs"
- name: client_x509_cert_url
value: "https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com"
- name: private_key
value: "PRIVATE KEY"
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
topic |
Y | Output | GCP Pub/Sub topic name | "topic1" |
subscription |
N | Input | GCP Pub/Sub subscription name | "name1" |
type |
Y | Output | GCP credentials type | service_account |
project_id |
Y | Output | GCP project id | projectId |
private_key_id |
N | Output | GCP private key id | "privateKeyId" |
private_key |
Y | Output | GCP credentials private key. Replace with x509 cert | 12345-12345 |
client_email |
Y | Output | GCP client email | "client@email.com" |
client_id |
N | Output | GCP client id | 0123456789-0123456789 |
auth_uri |
N | Output | Google account OAuth endpoint | https://accounts.google.com/o/oauth2/auth |
token_uri |
N | Output | Google account token uri | https://oauth2.googleapis.com/token |
auth_provider_x509_cert_url |
N | Output | GCP credentials cert url | https://www.googleapis.com/oauth2/v1/certs |
client_x509_cert_url |
N | Output | GCP credentials project x509 cert url | https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com |
direction |
N | Input/Output | The direction of the binding. | "input" , "output" , "input, output" |
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
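As a minimal sketch (the Dapr port and binding name are placeholders, as in the other binding examples), a message can be published to the configured topic with:
curl -d '{ "data": { "message": "Hi" }, "operation": "create" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>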
Related links
5.2.25 - GCP Storage Bucket binding spec
Component format
To set up a GCP Storage Bucket binding, create a component of type bindings.gcp.bucket
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.gcp.bucket
version: v1
metadata:
- name: bucket
value: "mybucket"
- name: type
value: "service_account"
- name: project_id
value: "project_111"
- name: private_key_id
value: "*************"
- name: client_email
value: "name@domain.com"
- name: client_id
value: "1111111111111111"
- name: auth_uri
value: "https://accounts.google.com/o/oauth2/auth"
- name: token_uri
value: "https://oauth2.googleapis.com/token"
- name: auth_provider_x509_cert_url
value: "https://www.googleapis.com/oauth2/v1/certs"
- name: client_x509_cert_url
value: "https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com"
- name: private_key
value: "PRIVATE KEY"
- name: decodeBase64
value: "<bool>"
- name: encodeBase64
value: "<bool>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
bucket |
Y | Output | The bucket name | "mybucket" |
project_id |
Y | Output | GCP project ID | projectId |
type |
N | Output | The GCP credentials type | "service_account" |
private_key_id |
N | Output | If using explicit credentials, this field should contain the private_key_id field from the service account json document |
"privateKeyId" |
private_key |
N | Output | If using explicit credentials, this field should contain the private_key field from the service account json. Replace with x509 cert |
12345-12345 |
client_email |
N | Output | If using explicit credentials, this field should contain the client_email field from the service account json |
"client@email.com" |
client_id |
N | Output | If using explicit credentials, this field should contain the client_id field from the service account json |
0123456789-0123456789 |
auth_uri |
N | Output | If using explicit credentials, this field should contain the auth_uri field from the service account json |
https://accounts.google.com/o/oauth2/auth |
token_uri |
N | Output | If using explicit credentials, this field should contain the token_uri field from the service account json |
https://oauth2.googleapis.com/token |
auth_provider_x509_cert_url |
N | Output | If using explicit credentials, this field should contain the auth_provider_x509_cert_url field from the service account json |
https://www.googleapis.com/oauth2/v1/certs |
client_x509_cert_url |
N | Output | If using explicit credentials, this field should contain the client_x509_cert_url field from the service account json |
https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com |
decodeBase64 |
N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). true is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to false |
true , false |
encodeBase64 |
N | Output | Configuration to encode base64 file content before returning the content. (In case of opening a file with binary content). true is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to false |
true , false |
GCP Credentials
Since the GCP Storage Bucket component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained further in the Authenticate to GCP Cloud services using client libraries guide. Also, see how to Set up Application Default Credentials.
Binding support
This component supports output binding with the following operations:
- create : Create file
- get : Get file
- bulkGet : Bulk get objects
- delete : Delete file
- list : List file
- copy : Copy file
- move : Move file
- rename : Rename file
Create file
To perform a create operation, invoke the GCP Storage Bucket binding with a POST
method and the following JSON body:
Note: by default, a random UUID is generated. See below for metadata support to set the name.
{
"operation": "create",
"data": "YOUR_CONTENT"
}
The metadata parameters are:
- key - (optional) the name of the object
- decodeBase64 - (optional) configuration to decode base64 file content before saving to storage
Examples
Save text to a random generated UUID file
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Upload a file
To upload a file, pass the file contents as the data payload; you may want to encode this in e.g. Base64 for binary content.
Then you can upload it as you would normally:
curl -d "{ \"operation\": \"create\", \"data\": \"(YOUR_FILE_CONTENTS)\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "$(cat my-test-file.jpg)", "metadata": { "key": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body will contain the following JSON:
{
"objectURL":"https://storage.googleapis.com/<your bucket>/<key>",
}
Get object
To perform a get file operation, invoke the GCP bucket binding with a POST
method and the following JSON body:
{
"operation": "get",
"metadata": {
"key": "my-test-file.txt"
}
}
The metadata parameters are:
- key - the name of the object
- encodeBase64 - (optional) configuration to encode base64 file content before returning the content.
Example
curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the value stored in the object.
Bulk get objects
To perform a bulk get operation that retrieves all bucket files at once, invoke the GCP bucket binding with a POST
method and the following JSON body:
{
"operation": "bulkGet",
}
The metadata parameters are:
- encodeBase64 - (optional) configuration to encode base64 file content before returning the content for all files
Example
curl -d '{ \"operation\": \"bulkget\"}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "bulkget"}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains an array of objects, where each object represents a file in the bucket with the following structure:
[
{
"name": "file1.txt",
"data": "content of file1",
"attrs": {
"bucket": "mybucket",
"name": "file1.txt",
"size": 1234,
...
}
},
{
"name": "file2.txt",
"data": "content of file2",
"attrs": {
"bucket": "mybucket",
"name": "file2.txt",
"size": 5678,
...
}
}
]
Each object in the array contains:
- name : The name of the file
- data : The content of the file
- attrs : Object attributes from GCP Storage including metadata like creation time, size, content type, etc.
Delete object
To perform a delete object operation, invoke the GCP bucket binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"key": "my-test-file.txt"
}
}
The metadata parameters are:
key
- the name of the object
Examples
Delete object
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
An HTTP 204 (No Content) and empty body will be returned if successful.
List objects
To perform a list object operation, invoke the GCP bucket binding with a POST
method and the following JSON body:
{
"operation": "list",
"data": {
"maxResults": 10,
"prefix": "file",
"delimiter": "i0FvxAn2EOEL6"
}
}
The data parameters are:
- maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
- prefix - (optional) it can be used to filter objects starting with prefix.
- delimiter - (optional) it can be used to restrict the results to only the objects in the given “directory”. Without the delimiter, the entire tree under the prefix is returned.
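Put together, a sketch of a list request (port and binding name are placeholders, as in the other examples) looks like:
curl -d '{ "operation": "list", "data": { "maxResults": 10, "prefix": "file" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>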
Response
The response body contains the list of found objects.
The list of objects will be returned as JSON array in the following form:
[
{
"Bucket": "<your bucket>",
"Name": "02WGzEdsUWNlQ",
"ContentType": "image/png",
"ContentLanguage": "",
"CacheControl": "",
"EventBasedHold": false,
"TemporaryHold": false,
"RetentionExpirationTime": "0001-01-01T00:00:00Z",
"ACL": null,
"PredefinedACL": "",
"Owner": "",
"Size": 5187,
"ContentEncoding": "",
"ContentDisposition": "",
"MD5": "aQdLBCYV0BxA51jUaxc3pQ==",
"CRC32C": 1058633505,
"MediaLink": "https://storage.googleapis.com/download/storage/v1/b/<your bucket>/o/02WGzEdsUWNlQ?generation=1631553155678071&alt=media",
"Metadata": null,
"Generation": 1631553155678071,
"Metageneration": 1,
"StorageClass": "STANDARD",
"Created": "2021-09-13T17:12:35.679Z",
"Deleted": "0001-01-01T00:00:00Z",
"Updated": "2021-09-13T17:12:35.679Z",
"CustomerKeySHA256": "",
"KMSKeyName": "",
"Prefix": "",
"Etag": "CPf+mpK5/PICEAE="
}
]
Copy objects
To perform a copy object operation, invoke the GCP bucket binding with a POST
method and the following JSON body:
{
"operation": "copy",
"metadata": {
"destinationBucket": "destination-bucket-name",
}
}
The metadata parameters are:
destinationBucket
- the name of the destination bucket (required)
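Wrapped in a curl call (port and binding name are placeholders), the copy request shown above can be sent as:
curl -d '{ "operation": "copy", "metadata": { "destinationBucket": "destination-bucket-name" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>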
Move objects
To perform a move object operation, invoke the GCP bucket binding with a POST
method and the following JSON body:
{
"operation": "move",
"metadata": {
"destinationBucket": "destination-bucket-name",
}
}
The metadata parameters are:
destinationBucket
- the name of the destination bucket (required)
Rename objects
To perform a rename object operation, invoke the GCP bucket binding with a POST
method and the following JSON body:
{
"operation": "rename",
"metadata": {
"newName": "object-new-name",
}
}
The metadata parameters are:
newName
- the new name of the object (required)
Related links
5.2.26 - GraphQL binding spec
Component format
To set up a GraphQL binding, create a component of type bindings.graphql
. See this guide on how to create and apply a binding configuration. To separate normal config settings (e.g. endpoint) from headers, “header:” is used as a prefix on the header names.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: example.bindings.graphql
spec:
type: bindings.graphql
version: v1
metadata:
- name: endpoint
value: "http://localhost:8080/v1/graphql"
- name: header:x-hasura-access-key
value: "adminkey"
- name: header:Cache-Control
value: "no-cache"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
endpoint |
Y | Output | GraphQL endpoint string See here for more details | "http://localhost:4000/graphql/graphql" |
header:[HEADERKEY] |
N | Output | GraphQL header. Specify the header key in the name , and the header value in the value . |
"no-cache" (see above) |
variable:[VARIABLEKEY] |
N | Output | GraphQL query variable. Specify the variable name in the name , and the variable value in the value . |
"123" (see below) |
Endpoint and Header format
The GraphQL binding uses a GraphQL client internally.
Binding support
This component supports output binding with the following operations:
query
mutation
query
The query
operation is used for query
statements, which returns the metadata along with the data in the form of an array of row values.
Request
in := &dapr.InvokeBindingRequest{
Name: "example.bindings.graphql",
Operation: "query",
Metadata: map[string]string{ "query": `query { users { name } }`},
}
To use a query
that requires query variables, add a key-value pair to the metadata
map, wherein every key corresponding to a query variable is the variable name prefixed with variable:
in := &dapr.InvokeBindingRequest{
Name: "example.bindings.graphql",
Operation: "query",
Metadata: map[string]string{
"query": `query HeroNameAndFriends($episode: string!) { hero(episode: $episode) { name } }`,
"variable:episode": "JEDI",
}
Related links
5.2.27 - HTTP binding spec
Alternative
The service invocation API allows invoking non-Dapr HTTP endpoints and is the recommended approach. Read “How-To: Invoke Non-Dapr Endpoints using HTTP” for more information.
Setup Dapr component
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.http
version: v1
metadata:
- name: url
value: "http://something.com"
#- name: maxResponseBodySize
# value: "100Mi" # OPTIONAL maximum amount of data to read from a response
#- name: MTLSRootCA
# value: "/Users/somepath/root.pem" # OPTIONAL path to root CA or PEM-encoded string
#- name: MTLSClientCert
# value: "/Users/somepath/client.pem" # OPTIONAL path to client cert or PEM-encoded string
#- name: MTLSClientKey
# value: "/Users/somepath/client.key" # OPTIONAL path to client key or PEM-encoded string
#- name: MTLSRenegotiation
# value: "RenegotiateOnceAsClient" # OPTIONAL one of: RenegotiateNever, RenegotiateOnceAsClient, RenegotiateFreelyAsClient
#- name: securityToken # OPTIONAL <token to include as a header on HTTP requests>
# secretKeyRef:
# name: mysecret
# key: "mytoken"
#- name: securityTokenHeader
# value: "Authorization: Bearer" # OPTIONAL <header name for the security token>
#- name: errorIfNot2XX
# value: "false" # OPTIONAL
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
url |
Y | Output | The base URL of the HTTP endpoint to invoke | http://host:port/path , http://myservice:8000/customers |
maxResponseBodySize |
N | Output | Maximum length of the response to read. A whole number is interpreted as bytes; units such as Ki, Mi, Gi (SI) or k, M, G (decimal) can also be used. Defaults to 100Mi | "100Mi" |
MTLSRootCA |
N | Output | Path to root CA certificate or PEM-encoded string | |
MTLSClientCert |
N | Output | Path to client certificate or PEM-encoded string | |
MTLSClientKey |
N | Output | Path to client private key or PEM-encoded string | |
MTLSRenegotiation |
N | Output | Type of mTLS renegotiation to be used | RenegotiateOnceAsClient |
securityToken |
N | Output | The value of a token to be added to an HTTP request as a header. Used together with securityTokenHeader |
|
securityTokenHeader |
N | Output | The name of the header for securityToken on an HTTP request |
|
errorIfNot2XX |
N | Output | If a binding error should be thrown when the response is not in the 2xx range. Defaults to true |
The values for MTLSRootCA, MTLSClientCert and MTLSClientKey can be provided in three ways:
-
Secret store reference:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.http
  version: v1
  metadata:
  - name: url
    value: http://something.com
  - name: MTLSRootCA
    secretKeyRef:
      name: mysecret
      key: myrootca
auth:
  secretStore: <NAME_OF_SECRET_STORE_COMPONENT>
-
Path to the file: the absolute path to the file can be provided as a value for the field.
-
PEM encoded string: the PEM-encoded string can also be provided as a value for the field.
Note
Metadata fields MTLSRootCA, MTLSClientCert and MTLSClientKey are used to configure (m)TLS authentication. To use mTLS authentication, you must provide all three fields. See mTLS for more details. You can also provide only MTLSRootCA, to enable HTTPS connection with a certificate signed by a custom CA. See HTTPS section for more details.
Binding support
This component supports output binding with the following HTTP methods/verbs:
create
: For backward compatibility and treated like a post
get
: Read data/records
head
: Identical to get except that the server does not return a response body
post
: Typically used to create records or send commands
put
: Update data/records
patch
: Sometimes used to update a subset of fields of a record
delete
: Delete a data/record
options
: Requests for information about the communication options available (not commonly used)
trace
: Used to invoke a remote, application-layer loop-back of the request message (not commonly used)
Request
Operation metadata fields
All of the operations above support the following metadata fields
Field | Required | Details | Example |
---|---|---|---|
path |
N | The path to append to the base URL. Used for accessing specific URIs. | "/1234" , "/search?lastName=Jones" |
Field with a capitalized first letter | N | Any fields that have a capital first letter are sent as request headers | "Content-Type" , "Accept" |
Retrieving data
To retrieve data from the HTTP endpoint, invoke the HTTP binding with a GET
method and the following JSON body:
{
"operation": "get"
}
Optionally, a path can be specified to interact with resource URIs:
{
"operation": "get",
"metadata": {
"path": "/things/1234"
}
}
Response
The response body contains the data returned by the HTTP endpoint. The data
field contains the HTTP response body as a byte slice (Base64 encoded via curl). The metadata
field contains:
Field | Required | Details | Example |
---|---|---|---|
statusCode |
Y | The HTTP status code | 200 , 404 , 503 |
status |
Y | The status description | "200 OK" , "201 Created" |
Field with a capitalized first letter | N | Any fields that have a capital first letter are returned as HTTP response headers | "Content-Type" |
Example
Requesting the base URL
curl -d "{ \"operation\": \"get\" }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Requesting a specific path
curl -d "{ \"operation\": \"get\", \"metadata\": { \"path\": \"/things/1234\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "path": "/things/1234" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Sending and updating data
To send data to the HTTP endpoint, invoke the HTTP binding with a POST
, PUT
, or PATCH
method and the following JSON body:
Note
Any metadata field that starts with a capital letter is passed as a request header. For example, the default content type is application/json; charset=utf-8. This can be overridden by setting the Content-Type metadata field.
{
"operation": "post",
"data": "content (default is JSON)",
"metadata": {
"path": "/things",
"Content-Type": "application/json; charset=utf-8"
}
}
Example
Posting a new record
curl -d "{ \"operation\": \"post\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"path\": \"/things\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "post", "data": "YOUR_BASE_64_CONTENT", "metadata": { "path": "/things" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Using HTTPS
The HTTP binding can also be used with HTTPS endpoints by configuring the Dapr sidecar to trust the server’s SSL certificate.
- Update the binding URL to use https instead of http.
- If you need to add a custom TLS certificate, refer to How-To: Install certificates in the Dapr sidecar to install the TLS certificates in the sidecar.
Example
Update the binding component
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: bindings.http
version: v1
metadata:
- name: url
value: https://my-secured-website.com # Use HTTPS
Install the TLS certificate in the sidecar
When the sidecar is not running inside a container, the TLS certificate can be directly installed on the host operating system.
Below is an example when the sidecar is running as a container. The SSL certificate is located on the host computer at /tmp/ssl/cert.pem
.
version: '3'
services:
my-app:
# ...
dapr-sidecar:
image: "daprio/daprd:1.8.0"
command: [
"./daprd",
"-app-id", "myapp",
"-app-port", "3000",
]
volumes:
- "./components/:/components"
- "/tmp/ssl/:/certificates" # Mount the certificates folder to the sidecar container at /certificates
environment:
- "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
depends_on:
- my-app
The sidecar can read the TLS certificate from a variety of sources. See How-to: Mount Pod volumes to the Dapr sidecar for more. In this example, we store the TLS certificate as a Kubernetes secret.
kubectl create secret generic myapp-cert --from-file /tmp/ssl/cert.pem
The YAML below is an example of the Kubernetes deployment that mounts the above secret to the sidecar and sets SSL_CERT_DIR
to install the certificates.
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: default
labels:
app: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "myapp"
dapr.io/app-port: "8000"
dapr.io/volume-mounts: "cert-vol:/certificates" # Mount the certificates folder to the sidecar container at /certificates
dapr.io/env: "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
spec:
volumes:
- name: cert-vol
secret:
secretName: myapp-cert
...
Invoke the binding securely
curl -d "{ \"operation\": \"get\" }" \
https://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get" }' \
https://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Note
HTTPS binding support can also be configured using the MTLSRootCA metadata option. This will add the specified certificate to the list of trusted certificates for the binding. There’s no specific preference for either method. While the MTLSRootCA option is easy to use and doesn’t require any changes to the sidecar, it accepts only one certificate. If you need to trust multiple certificates, you need to install them in the sidecar by following the steps above.
Using mTLS or enabling client TLS authentication along with HTTPS
You can configure the HTTP binding to use mTLS or client TLS authentication along with HTTPS by providing the MTLSRootCA
, MTLSClientCert
, and MTLSClientKey
metadata fields in the binding component.
These fields can be passed as a file path or as a PEM-encoded string:
- If the file path is provided, the file is read and the contents are used.
- If the PEM-encoded string is provided, the string is used as is.
When these fields are configured, the Dapr sidecar uses the provided certificate to authenticate itself with the server during the TLS handshake process.
If the remote server is enforcing TLS renegotiation, you also need to set the metadata field MTLSRenegotiation
. This field accepts one of the following options:
RenegotiateNever
RenegotiateOnceAsClient
RenegotiateFreelyAsClient
For more details see the Go RenegotiationSupport
documentation.
You can use this when the server with which the HTTP binding is configured to communicate requires mTLS or client TLS authentication.
Related links
5.2.28 - Huawei OBS binding spec
Component format
To setup Huawei Object Storage Service (OBS) (output) binding create a component of type bindings.huawei.obs
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.huawei.obs
version: v1
metadata:
- name: bucket
value: "<your-bucket-name>"
- name: endpoint
value: "<obs-bucket-endpoint>"
- name: accessKey
value: "<your-access-key>"
- name: secretKey
value: "<your-secret-key>"
# optional fields
- name: region
value: "<your-bucket-region>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
bucket |
Y | Output | The name of the Huawei OBS bucket to write to | "My-OBS-Bucket" |
endpoint |
Y | Output | The specific Huawei OBS endpoint | "obs.cn-north-4.myhuaweicloud.com" |
accessKey |
Y | Output | The Huawei Access Key (AK) to access this resource | "************" |
secretKey |
Y | Output | The Huawei Secret Key (SK) to access this resource | "************" |
region |
N | Output | The specific Huawei region of the bucket | "cn-north-4" |
Binding support
This component supports output binding with the following operations:
create
: Create file
upload
: Upload file
get
: Get file
delete
: Delete file
list
: List file
Create file
To perform a create operation, invoke the Huawei OBS binding with a POST
method and the following JSON body:
Note: by default, a random UUID is generated. See below for Metadata support to set the destination file name
{
"operation": "create",
"data": "YOUR_CONTENT"
}
Examples
Save text to a random generated UUID file
On Windows, utilize cmd prompt (PowerShell has different escaping mechanism)
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response JSON body contains the statusCode
and the versionId
fields. The versionId
will have a value returned only if the bucket versioning is enabled and an empty string otherwise.
Upload file
To upload a binary file (for example, .jpg, .zip), invoke the Huawei OBS binding with a POST
method and the following JSON body:
Note: by default, a random UUID is generated, if you don’t specify the
key
. See the example below for metadata support to set the destination file name. This API can be used to upload a regular file, such as a plain text file.
{
"operation": "upload",
"metadata": {
"key": "DESTINATION_FILE_NAME"
},
"data": {
"sourceFile": "PATH_TO_YOUR_SOURCE_FILE"
}
}
Example
curl -d "{ \"operation\": \"upload\", \"data\": { \"sourceFile\": \".\my-test-file.jpg\" }, \"metadata\": { \"key\": \"my-test-file.jpg\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "upload", "data": { "sourceFile": "./my-test-file.jpg" }, "metadata": { "key": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response JSON body contains the statusCode
and the versionId
fields. The versionId
will have a value returned only if the bucket versioning is enabled and an empty string otherwise.
Get object
To perform a get file operation, invoke the Huawei OBS binding with a POST
method and the following JSON body:
{
"operation": "get",
"metadata": {
"key": "my-test-file.txt"
}
}
The metadata parameters are:
key
- the name of the object
Example
curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the value stored in the object.
Delete object
To perform a delete object operation, invoke the Huawei OBS binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"key": "my-test-file.txt"
}
}
The metadata parameters are:
key
- the name of the object
Examples
Delete object
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
An HTTP 204 (No Content) and empty body are returned if successful.
List objects
To perform a list object operation, invoke the Huawei OBS binding with a POST
method and the following JSON body:
{
"operation": "list",
"data": {
"maxResults": 5,
"prefix": "dapr-",
"marker": "obstest",
"delimiter": "jpg"
}
}
The data parameters are:
maxResults
- (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
prefix
- (optional) limits the response to keys that begin with the specified prefix.
marker
- (optional) marker is where you want Huawei OBS to start listing from. Huawei OBS starts listing after this specified key. Marker can be any key in the bucket. The marker value may then be used in a subsequent call to request the next set of list items.
delimiter
- (optional) a delimiter is a character you use to group keys. It returns objects/files with object keys other than those specified by the delimiter pattern.
Example
curl -d '{ \"operation\": \"list\", \"data\": { \"maxResults\": 5, \"prefix\": \"dapr-\", \"marker\": \"obstest\", \"delimiter\": \"jpg\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "data": { "maxResults": 5, "prefix": "dapr-", "marker": "obstest", "delimiter": "jpg" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the list of found objects.
Related links
5.2.29 - InfluxDB binding spec
Component format
To setup InfluxDB binding create a component of type bindings.influx
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.influx
version: v1
metadata:
- name: url # Required
value: "<INFLUX-DB-URL>"
- name: token # Required
value: "<TOKEN>"
- name: org # Required
value: "<ORG>"
- name: bucket # Required
value: "<BUCKET>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
url |
Y | Output | The URL for the InfluxDB instance | "http://localhost:8086" |
token |
Y | Output | The authorization token for InfluxDB | "mytoken" |
org |
Y | Output | The InfluxDB organization | "myorg" |
bucket |
Y | Output | Bucket name to write to | "mybucket" |
Binding support
This component supports output binding with the following operations:
create
query
Query
In order to query InfluxDB, use a query
operation along with a raw
key in the call’s metadata, with the query as the value:
curl -X POST http://localhost:3500/v1.0/bindings/myInfluxBinding \
-H "Content-Type: application/json" \
-d "{
\"metadata\": {
\"raw\": "SELECT * FROM 'sith_lords'"
},
\"operation\": \"query\"
}"
Related links
5.2.30 - Kafka binding spec
Component format
To setup Kafka binding create a component of type bindings.kafka
. See this guide on how to create and apply a binding configuration. For details on using secretKeyRef
, see the guide on how to reference secrets in components.
All component metadata field values can carry templated metadata values, which are resolved on Dapr sidecar startup.
For example, you can choose to use {namespace}
as the consumerGroup
, to enable using the same appId
in different namespaces using the same topics as described in this article.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-binding
spec:
type: bindings.kafka
version: v1
metadata:
- name: topics # Optional. Used for input bindings.
value: "topic1,topic2"
- name: brokers # Required.
value: "localhost:9092,localhost:9093"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: publishTopic # Optional. Used for output bindings.
value: "topic3"
- name: authRequired # Required.
value: "true"
- name: saslUsername # Required if authRequired is `true`.
value: "user"
- name: saslPassword # Required if authRequired is `true`.
secretKeyRef:
name: kafka-secrets
key: "saslPasswordSecret"
- name: saslMechanism
value: "SHA-512"
- name: initialOffset # Optional. Used for input bindings.
value: "newest"
- name: maxMessageBytes # Optional.
value: "1024"
- name: heartbeatInterval # Optional.
value: 5s
- name: sessionTimeout # Optional.
value: 15s
- name: version # Optional.
value: "2.0.0"
- name: direction
value: "input, output"
- name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
value: http://localhost:8081
- name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
value: XYAXXAZ
- name: schemaRegistryAPISecret # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.
value: "ABCDEFGMEADFF"
- name: schemaCachingEnabled # Optional. When using Schema Registry Avro serialization/deserialization. Enables caching for schemas.
value: true
- name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
value: 5m
- name: escapeHeaders # Optional.
value: false
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
topics |
N | Input | A comma-separated string of topics. | "mytopic1,topic2" |
brokers |
Y | Input/Output | A comma-separated string of Kafka brokers. | "localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093" |
clientID |
N | Input/Output | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. | "my-dapr-app" |
consumerGroup |
N | Input | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. | "group1" |
consumeRetryEnabled |
N | Input/Output | Enable consume retry by setting to "true". Defaults to false in the Kafka binding component. |
"true" , "false" |
publishTopic |
Y | Output | The topic to publish to. | "mytopic" |
authRequired |
N | Deprecated | Enable SASL authentication with the Kafka brokers. | "true" , "false" |
authType |
Y | Input/Output | Configure or disable authentication. Supported values: none , password , mtls , or oidc |
"password" , "none" |
saslUsername |
N | Input/Output | The SASL username used for authentication. Only required if authRequired is set to "true" . |
"adminuser" |
saslPassword |
N | Input/Output | The SASL password used for authentication. Can be secretKeyRef to use a secret reference. Only required if authRequired is set to "true" . |
"" , "KeFg23!" |
saslMechanism |
N | Input/Output | The SASL authentication mechanism you’d like to use. Only required if authtype is set to "password" . If not provided, defaults to PLAINTEXT , which could cause a break for some services, like Amazon Managed Service for Kafka. |
"SHA-512", "SHA-256", "PLAINTEXT" |
initialOffset |
N | Input | The initial offset to use if no offset was previously committed. Should be “newest” or “oldest”. Defaults to “newest”. | "oldest" |
maxMessageBytes |
N | Input/Output | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | "2048" |
oidcTokenEndpoint |
N | Input/Output | Full URL to an OAuth2 identity provider access token endpoint. Required when authType is set to oidc |
"https://identity.example.com/v1/token" |
oidcClientID |
N | Input/Output | The OAuth2 client ID that has been provisioned in the identity provider. Required when authType is set to oidc |
"dapr-kafka" |
oidcClientSecret |
N | Input/Output | The OAuth2 client secret that has been provisioned in the identity provider: Required when authType is set to oidc |
"KeFg23!" |
oidcScopes |
N | Input/Output | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when authType is set to oidc . Defaults to "openid" |
"openid,kafka-prod" |
version |
N | Input/Output | Kafka cluster version. Defaults to 2.0.0. Please note that this needs to be mandatorily set to 1.0.0 for EventHubs with Kafka. |
"1.0.0" |
direction |
N | Input/Output | The direction of the binding. | "input" , "output" , "input, output" |
oidcExtensions |
N | Input/Output | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | {"cluster":"kafka","poolid":"kafkapool"} |
schemaRegistryURL |
N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | http://localhost:8081 |
|
schemaRegistryAPIKey |
N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | XYAXXAZ |
|
schemaRegistryAPISecret |
N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | ABCDEFGMEADFF |
|
schemaCachingEnabled |
N | When using Schema Registry Avro serialization/deserialization. Enables caching for schemas. Default is true |
true |
|
schemaLatestVersionCacheTTL |
N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | 5m |
|
clientConnectionTopicMetadataRefreshInterval |
N | Input/Output | The interval for the client connection’s topic metadata to be refreshed with the broker as a Go duration. Defaults to 9m . |
"4m" |
clientConnectionKeepAliveInterval |
N | Input/Output | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | "4m" |
consumerFetchDefault |
N | Input/Output | The default number of message bytes to fetch from the broker in each request. Default is "1048576" bytes. |
"2097152" |
heartbeatInterval |
N | Input | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the sessionTimeout value. Defaults to "3s" . |
"5s" |
sessionTimeout |
N | Input | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s" . |
"20s" |
escapeHeaders |
N | Input | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is false . |
true |
Note
The metadata version
must be set to 1.0.0
when using Azure EventHubs with Kafka.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
Authentication
Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. Learn more about Kafka’s authentication method for both the Kafka binding and Kafka pub/sub components.
Specifying a partition key
When invoking the Kafka binding, it’s possible to provide an optional partition key by using the metadata
section in the request body.
The field name is partitionKey
.
Example:
curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"partitionKey": "key1"
},
"operation": "create"
}'
Response
An HTTP 204 (No Content) and empty body will be returned if successful.
Related links
5.2.31 - Kitex
Overview
The binding for Kitex mainly utilizes the generic-call feature in Kitex. Learn more from the official documentation around Kitex generic-call. Currently, Kitex only supports Thrift generic calls. The implementation integrated into components-contrib adopts binary generic calls.
Component format
To set up a Kitex binding, create a component of type bindings.kitex
. See the How-to: Use output bindings to interface with external resources guide on creating and applying a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: bindings.kitex
spec:
type: bindings.kitex
version: v1
metadata:
- name: hostPorts
value: "127.0.0.1:8888"
- name: destService
value: "echo"
- name: methodName
value: "echo"
- name: version
value: "0.5.0"
Spec metadata fields
The InvokeRequest.Metadata
for bindings.kitex
requires the client to fill in four required items when making a call:
hostPorts
destService
methodName
version
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
hostPorts |
Y | Output | IP address and port information of the Kitex server (Thrift) | "127.0.0.1:8888" |
destService |
Y | Output | Service name of the Kitex server (Thrift) | "echo" |
methodName |
Y | Output | Method name under a specific service name of the Kitex server (Thrift) | "echo" |
version |
Y | Output | Kitex version | "0.5.0" |
Binding support
This component supports output binding with the following operations:
get
Example
When using Kitex binding:
- The client needs to pass in the correct Thrift-encoded binary
- The server needs to be a Thrift Server.
The kitex_output_test can be used as a reference.
For example, the variable reqData
needs to be encoded by the Thrift protocol before sending, and the returned data needs to be decoded by the Thrift protocol.
Request
{
"operation": "get",
"metadata": {
"hostPorts": "127.0.0.1:8888",
"destService": "echo",
"methodName": "echo",
"version":"0.5.0"
},
"data": reqdata
}
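The sketch below shows one way this request might be assembled with the Dapr Go SDK. It assumes reqData already holds the Thrift-encoded request bytes (producing and decoding them is out of scope here; see kitex_output_test), and it reuses the component name from the example above. Error handling is illustrative only.
package main

import (
    "context"
    "log"

    dapr "github.com/dapr/go-sdk/client"
)

func main() {
    client, err := dapr.NewClient()
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Placeholder: reqData must contain the Thrift (binary generic call) encoded request.
    var reqData []byte

    in := &dapr.InvokeBindingRequest{
        Name:      "bindings.kitex",
        Operation: "get",
        Data:      reqData,
        Metadata: map[string]string{
            "hostPorts":   "127.0.0.1:8888",
            "destService": "echo",
            "methodName":  "echo",
            "version":     "0.5.0",
        },
    }

    out, err := client.InvokeBinding(context.Background(), in)
    if err != nil {
        log.Fatal(err)
    }
    // out.Data holds the Thrift-encoded response and must be decoded by the caller.
    _ = out
}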
Related links
5.2.32 - KubeMQ binding spec
Component format
To setup KubeMQ binding create a component of type bindings.kubemq
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: binding-topic
spec:
type: bindings.kubemq
version: v1
metadata:
- name: address
value: "localhost:50000"
- name: channel
value: "queue1"
- name: direction
value: "input, output"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
address |
Y | Address of the KubeMQ server | "localhost:50000" |
channel |
Y | The Queue channel name | "queue1" |
authToken |
N | Auth JWT token for connection. Check out KubeMQ Authentication | "ew..." |
autoAcknowledged |
N | Sets if received queue message is automatically acknowledged | "true" or "false" (default is "false" ) |
pollMaxItems |
N | Sets the number of messages to poll on every connection | "1" |
pollTimeoutSeconds |
N | Sets the time in seconds for each poll interval | "3600" |
direction |
N | The direction of the binding | "input" , "output" , "input, output" |
Binding support
This component supports both input and output binding interfaces.
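As an illustration only, a message could be sent to the queue channel through the output binding with the Dapr Go SDK. The create operation name is an assumption (it is not listed in this spec) based on how most Dapr queue-style output bindings are invoked; the binding name matches the component example above.
package main

import (
    "context"
    "log"

    dapr "github.com/dapr/go-sdk/client"
)

func main() {
    client, err := dapr.NewClient()
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // "binding-topic" matches the component name above; "create" is assumed to be
    // the output operation, as with most Dapr queue-style output bindings.
    in := &dapr.InvokeBindingRequest{
        Name:      "binding-topic",
        Operation: "create",
        Data:      []byte("hello from Dapr"),
    }
    if _, err := client.InvokeBinding(context.Background(), in); err != nil {
        log.Fatal(err)
    }
}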
Create a KubeMQ broker
- Obtain KubeMQ Key.
- Wait for an email confirmation with your Key
You can run a KubeMQ broker with Docker:
docker run -d -p 8080:8080 -p 50000:50000 -p 9090:9090 -e KUBEMQ_TOKEN=<your-key> kubemq/kubemq
You can then interact with the server using the client port: localhost:50000
- Obtain KubeMQ Key.
- Wait for an email confirmation with your Key
Then run the following kubectl commands:
kubectl apply -f https://deploy.kubemq.io/init
kubectl apply -f https://deploy.kubemq.io/key/<your-key>
Install KubeMQ CLI
Go to KubeMQ CLI and download the latest version of the CLI.
Browse KubeMQ Dashboard
Open a browser and navigate to http://localhost:8080
With KubeMQCTL installed, run the following command:
kubemqctl get dashboard
Or, with kubectl installed, run port-forward command:
kubectl port-forward svc/kubemq-cluster-api -n kubemq 8080:8080
KubeMQ Documentation
Visit KubeMQ Documentation for more information.
Related links
5.2.33 - Kubernetes Events binding spec
Component format
To setup Kubernetes Events binding create a component of type bindings.kubernetes
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.kubernetes
version: v1
metadata:
- name: namespace
value: "<NAMESPACE>"
- name: resyncPeriodInSec
value: "<seconds>"
- name: direction
value: "input"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
namespace |
Y | Input | The Kubernetes namespace to read events from | "default" |
resyncPeriodInSec |
N | Input | The period of time to refresh event list from Kubernetes API server. Defaults to "10" |
"15" |
direction |
N | Input | The direction of the binding | "input" |
kubeconfigPath |
N | Input | The path to the kubeconfig file. If not specified, the binding uses the default in-cluster config value | "/path/to/kubeconfig" |
Binding support
This component supports the input binding interface.
Output format
Output received from the binding is of format bindings.ReadResponse
with the Data
field populated with the following structure:
{
"event": "",
"oldVal": {
"metadata": {
"name": "hello-node.162c2661c524d095",
"namespace": "kube-events",
"selfLink": "/api/v1/namespaces/kube-events/events/hello-node.162c2661c524d095",
...
},
"involvedObject": {
"kind": "Deployment",
"namespace": "kube-events",
...
},
"reason": "ScalingReplicaSet",
"message": "Scaled up replica set hello-node-7bf657c596 to 1",
...
},
"newVal": {
"metadata": { "creationTimestamp": "null" },
"involvedObject": {},
"source": {},
"firstTimestamp": "null",
"lastTimestamp": "null",
"eventTime": "null",
...
}
}
Three different event types are available:
- Add : Only the newVal field is populated, oldVal field is an empty v1.Event, event is add
- Delete : Only the oldVal field is populated, newVal field is an empty v1.Event, event is delete
- Update : Both the oldVal and newVal fields are populated, event is update
Required permissions
For consuming events
from Kubernetes, permissions need to be assigned to a User/Group/ServiceAccount using the RBAC Auth mechanism of Kubernetes.
Role
One of the rules needs to be of the form below to give permissions to get, watch and list events. API Groups can be as restrictive as needed.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: <ROLENAME>
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "watch", "list"]
RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: <NAME>
subjects:
- kind: ServiceAccount
name: default # or as need be, can be changed
roleRef:
kind: Role
name: <ROLENAME> # same as the one above
apiGroup: "rbac.authorization.k8s.io"
Related links
5.2.34 - Local Storage binding spec
Component format
To set up the Local Storage binding, create a component of type bindings.localstorage
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.localstorage
version: v1
metadata:
- name: rootPath
value: "<string>"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
rootPath |
Y | Output | The root path anchor to which files can be read / saved | "/temp/files" |
Binding support
This component supports output binding with the following operations:
create
: Create file
get
: Get file
list
: List files
delete
: Delete file
Create file
To perform a create file operation, invoke the Local Storage binding with a POST
method and the following JSON body:
Note: by default, a random UUID is generated. See below for Metadata support to set the name
{
"operation": "create",
"data": "YOUR_CONTENT"
}
Examples
Save text to a random generated UUID file
On Windows, utilize cmd prompt (PowerShell has different escaping mechanism)
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"fileName\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "fileName": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a binary file
To upload a file, encode it as Base64. The binding should automatically detect the Base64 encoding.
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body will contain the following JSON:
{
"fileName": "<filename>"
}
Get file
To perform a get file operation, invoke the Local Storage binding with a POST
method and the following JSON body:
{
"operation": "get",
"metadata": {
"fileName": "myfile"
}
}
Example
curl -d '{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "fileName": "myfile" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the value stored in the file.
List files
To perform a list files operation, invoke the Local Storage binding with a POST
method and the following JSON body:
{
"operation": "list"
}
If you only want to list the files beneath a particular directory below the rootPath
, specify the relative directory name as the fileName
in the metadata.
{
"operation": "list",
"metadata": {
"fileName": "my/cool/directory"
}
}
Example
curl -d '{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response is a JSON array of file names.
Delete file
To perform a delete file operation, invoke the Local Storage binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"fileName": "myfile"
}
}
Example
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
An HTTP 204 (No Content) and empty body will be returned if successful.
Metadata information
By default, the Local Storage output binding auto-generates a UUID as the file name. You can configure the file name through the metadata property of the message.
{
"data": "file content",
"metadata": {
"fileName": "filename.txt"
},
"operation": "create"
}
Related links
5.2.35 - MQTT3 binding spec
Component format
To setup a MQTT3 binding create a component of type bindings.mqtt3
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.mqtt3
version: v1
metadata:
- name: url
value: "tcp://[username][:password]@host.domain[:port]"
- name: topic
value: "mytopic"
- name: consumerID
value: "myapp"
# Optional
- name: retain
value: "false"
- name: cleanSession
value: "false"
- name: backOffMaxRetries
value: "0"
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
url |
Y | Input/Output | Address of the MQTT broker. Can be secretKeyRef to use a secret reference. Use the tcp:// URI scheme for non-TLS communication. Use the ssl:// URI scheme for TLS communication. |
"tcp://[username][:password]@host.domain[:port]" |
topic |
Y | Input/Output | The topic to listen on or send events to. | "mytopic" |
consumerID |
Y | Input/Output | The client ID used to connect to the MQTT broker. | "myMqttClientApp" |
retain |
N | Input/Output | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false" . |
"true" , "false" |
cleanSession |
N | Input/Output | Sets the clean_session flag in the connection message to the MQTT broker if "true" . Defaults to "false" . |
"true" , "false" |
caCert |
Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | See example below |
clientCert |
Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with clientKey . |
See example below |
clientKey |
Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with clientCert . Can be secretKeyRef to use a secret reference. |
See example below |
backOffMaxRetries |
N | Input | The maximum number of retries to process the message before returning an error. Defaults to "0" , which means that no retries will be attempted. "-1" can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. |
"3" |
direction |
N | Input/Output | The direction of the binding | "input" , "output" , "input, output" |
Communication using TLS
To configure communication using TLS, ensure that the MQTT broker (e.g. emqx) is configured to support certificates and provide the caCert
, clientCert
, clientKey
metadata in the component configuration. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-binding
spec:
type: bindings.mqtt3
version: v1
metadata:
- name: url
value: "ssl://host.domain[:port]"
- name: topic
value: "topic1"
- name: consumerID
value: "myapp"
# TLS configuration
- name: caCert
value: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
- name: clientCert
value: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
- name: clientKey
secretKeyRef:
name: myMqttClientKey
key: myMqttClientKey
# Optional
- name: retain
value: "false"
- name: cleanSession
value: "false"
- name: backoffMaxRetries
value: "0"
Note that while the
caCert
andclientCert
values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
Consuming a shared topic
When consuming a shared topic, each consumer must have a unique identifier. If you run multiple instances of an application, you can configure the component’s consumerID
metadata with a {uuid}
tag, which will give each instance a randomly generated consumerID
value on start up. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-binding
namespace: default
spec:
type: bindings.mqtt3
version: v1
metadata:
- name: consumerID
value: "{uuid}"
- name: url
value: "tcp://admin:public@localhost:1883"
- name: topic
value: "topic1"
- name: retain
value: "false"
- name: cleanSession
value: "true"
- name: backoffMaxRetries
value: "0"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
In this case, the value of the consumer ID is random every time Dapr restarts, so you should set
cleanSession
totrue
as well.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
: publishes a new message
Set topic per-request
You can override the topic in component metadata on a per-request basis:
{
"operation": "create",
"metadata": {
"topic": "myTopic"
},
"data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
Set retain property per-request
You can override the retain property in component metadata on a per-request basis:
{
"operation": "create",
"metadata": {
"retain": "true"
},
"data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
Related links
5.2.36 - MySQL & MariaDB binding spec
Component format
The MySQL binding allows connecting to both MySQL and MariaDB databases. In this document, we refer to “MySQL” to indicate both databases.
To setup a MySQL binding create a component of type bindings.mysql
. See this guide on how to create and apply a binding configuration.
The MySQL binding uses Go-MySQL-Driver internally.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.mysql
version: v1
metadata:
- name: url # Required, define DB connection in DSN format
value: "<CONNECTION_STRING>"
- name: pemPath # Optional
value: "<PEM PATH>"
- name: maxIdleConns
value: "<MAX_IDLE_CONNECTIONS>"
- name: maxOpenConns
value: "<MAX_OPEN_CONNECTIONS>"
- name: connMaxLifetime
value: "<CONNECTION_MAX_LIFE_TIME>"
- name: connMaxIdleTime
value: "<CONNECTION_MAX_IDLE_TIME>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here. Note that you cannot use a secret just for the username/password; if you use a secret, it must contain the complete connection string.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
url |
Y | Output | Represents the DB connection in Data Source Name (DSN) format. See here for SSL details | "user:password@tcp(localhost:3306)/dbname" |
pemPath |
Y | Output | Path to the PEM file. Used with SSL connection | "path/to/pem/file" |
maxIdleConns |
N | Output | The max idle connections. Integer greater than 0 | "10" |
maxOpenConns |
N | Output | The max open connections. Integer greater than 0 | "10" |
connMaxLifetime |
N | Output | The max connection lifetime. Duration string | "12s" |
connMaxIdleTime |
N | Output | The max connection idle time. Duration string | "12s" |
SSL connection
If your server requires SSL, your connection string must end with &tls=custom, for example:
"<user>:<password>@tcp(<server>:3306)/<database>?allowNativePasswords=true&tls=custom"
You must replace the
<PEM PATH>
with a full path to the PEM file. If you are using Azure Database for MySQL see the Azure documentation on SSL database connections, for information on how to download the required certificate. The connection to MySQL requires a minimum TLS version of 1.2.
Multiple statements
By default, the MySQL Go driver only supports one SQL statement per query/command.
To allow multiple statements in one query you need to add multiStatements=true
to a query string, for example:
"<user>:<password>@tcp(<server>:3306)/<database>?multiStatements=true"
While this allows batch queries, it also greatly increases the risk of SQL injections. Only the result of the first query is returned, all other results are silently discarded.
Binding support
This component supports output binding with the following operations:
exec
query
close
Parametrized queries
This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.
For example:
-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';
-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = ?;
exec
The exec
operation can be used for DDL operations (like table creation), as well as INSERT
, UPDATE
, DELETE
operations which return only metadata (e.g. number of affected rows).
The params
property is a string containing a JSON-encoded array of parameters.
Request
{
"operation": "exec",
"metadata": {
"sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)",
"params": "[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"
}
}
Response
{
"metadata": {
"operation": "exec",
"duration": "294µs",
"start-time": "2020-09-24T11:13:46.405097Z",
"end-time": "2020-09-24T11:13:46.414519Z",
"rows-affected": "1",
"sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)"
}
}
query
The query
operation is used for SELECT
statements, which returns the metadata along with data in a form of an array of row values.
The params
property is a string containing a JSON-encoded array of parameters.
Request
{
"operation": "query",
"metadata": {
"sql": "SELECT * FROM foo WHERE id < $1",
"params": "[3]"
}
}
Response
{
"metadata": {
"operation": "query",
"duration": "432µs",
"start-time": "2020-09-24T11:13:46.405097Z",
"end-time": "2020-09-24T11:13:46.420566Z",
"sql": "SELECT * FROM foo WHERE id < ?"
},
"data": [
{column_name: value, column_name: value, ...},
{column_name: value, column_name: value, ...},
{column_name: value, column_name: value, ...},
]
}
Here, column_name is the name of the column returned by the query, and value is the value of that column. Note that values are returned as strings or numbers (language-specific data types).
close
The close
operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.
Request
{
"operation": "close"
}
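As a minimal sketch (the binding name my-mysql is a placeholder and the error handling is illustrative), the exec operation above could also be invoked from Go through the Dapr Go SDK. The query and close operations follow the same pattern, changing only the Operation and metadata.
package main

import (
    "context"
    "log"

    dapr "github.com/dapr/go-sdk/client"
)

func main() {
    client, err := dapr.NewClient()
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // "sql" carries the parametrized statement and "params" a JSON-encoded array of values.
    in := &dapr.InvokeBindingRequest{
        Name:      "my-mysql",
        Operation: "exec",
        Metadata: map[string]string{
            "sql":    "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)",
            "params": `[1, "demo", "2020-09-24T11:45:05Z07:00"]`,
        },
    }

    out, err := client.InvokeBinding(context.Background(), in)
    if err != nil {
        log.Fatal(err)
    }
    // For exec, the result metadata includes fields such as rows-affected.
    log.Println(out.Metadata)
}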
Related links
5.2.37 - PostgreSQL binding spec
Component format
To setup PostgreSQL binding create a component of type bindings.postgresql
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.postgresql
version: v1
metadata:
# Connection string
- name: connectionString
value: "<CONNECTION STRING>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Authenticate using a connection string
The following metadata options are required to authenticate using a PostgreSQL connection string.
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string. | "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db" |
Authenticate using individual connection parameters
In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.
Field | Required | Details | Example |
---|---|---|---|
host |
Y | The host name or IP address of the PostgreSQL server | "localhost" |
hostaddr |
N | The IP address of the PostgreSQL server (alternative to host) | "127.0.0.1" |
port |
Y | The port number of the PostgreSQL server | "5432" |
database |
Y | The name of the database to connect to | "my_db" |
user |
Y | The PostgreSQL user to connect as | "postgres" |
password |
Y | The password for the PostgreSQL user | "example" |
sslRootCert |
N | Path to the SSL root certificate file | "/path/to/ca.crt" |
Note
When using individual connection parameters, these will override the ones present in theconnectionString
.
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field | Required | Details | Example |
---|---|---|---|
useAzureAD |
Y | Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID. |
"true" |
connectionString |
Y | The connection string for the PostgreSQL database. This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password. |
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require" |
azureTenantId |
N | ID of the Microsoft Entra ID tenant | "cd4b2887-304c-…" |
azureClientId |
N | Client ID (application ID) | "c7dd251f-811f-…" |
azureClientSecret |
N | Client secret (application password) | "Ecy3X…" |
Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam
database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
Field | Required | Details | Example |
---|---|---|---|
useAWSIAM |
Y | Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. |
"true" |
connectionString |
Y | The connection string for the PostgreSQL database. This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. |
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require" |
awsRegion |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to. | "us-east-1" |
awsAccessKey |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account | "AKIAIOSFODNN7EXAMPLE" |
awsSecretKey |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
awsSessionToken |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | "TOKEN" |
Other metadata options
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
timeout |
N | Output | Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s |
"30s" , 30 |
maxConns |
N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | "4" |
connectionMaxIdleTime |
N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose. | "5m" |
queryExecMode |
N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use exec or simple_protocol. |
"simple_protocol" |
URL format
The PostgreSQL binding uses the pgx connection pool internally, so the connectionString
parameter can be any valid connection string, either in a DSN
or URL
format:
Example DSN
user=dapr password=secret host=dapr.example.com port=5432 dbname=my_dapr sslmode=verify-ca
Example URL
postgres://dapr:secret@dapr.example.com:5432/my_dapr?sslmode=verify-ca
Both methods also support connection pool configuration variables:
pool_min_conns
: integer 0 or greater
pool_max_conns
: integer greater than 0
pool_max_conn_lifetime
: duration string
pool_max_conn_idle_time
: duration string
pool_health_check_period
: duration string
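For example, to cap the pool size you can append one of these variables to the URL format shown above (the value 10 here is purely illustrative):
postgres://dapr:secret@dapr.example.com:5432/my_dapr?sslmode=verify-ca&pool_max_conns=10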
Binding support
This component supports output binding with the following operations:
exec
query
close
Parametrized queries
This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.
For example:
-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';
-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = $1;
exec
The exec
operation can be used for DDL operations (like table creation), as well as INSERT
, UPDATE
, DELETE
operations which return only metadata (e.g. number of affected rows).
The params
property is a string containing a JSON-encoded array of parameters.
Request
{
"operation": "exec",
"metadata": {
"sql": "INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)",
"params": "[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"
}
}
Response
{
"metadata": {
"operation": "exec",
"duration": "294µs",
"start-time": "2020-09-24T11:13:46.405097Z",
"end-time": "2020-09-24T11:13:46.414519Z",
"rows-affected": "1",
"sql": "INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)"
}
}
query
The query
operation is used for SELECT
statements, which returns the metadata along with data in a form of an array of row values.
The params
property is a string containing a JSON-encoded array of parameters.
Request
{
"operation": "query",
"metadata": {
"sql": "SELECT * FROM foo WHERE id < $1",
"params": "[3]"
}
}
Response
{
"metadata": {
"operation": "query",
"duration": "432µs",
"start-time": "2020-09-24T11:13:46.405097Z",
"end-time": "2020-09-24T11:13:46.420566Z",
"sql": "SELECT * FROM foo WHERE id < $1"
},
"data": "[
[0,\"test-0\",\"2020-09-24T04:13:46Z\"],
[1,\"test-1\",\"2020-09-24T04:13:46Z\"],
[2,\"test-2\",\"2020-09-24T04:13:46Z\"]
]"
}
close
The close
operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.
Request
{
"operation": "close"
}
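For example, assuming the component is named my-postgresql (a placeholder) and the Dapr HTTP port is 3500, the operation could be invoked with curl as a quick sketch:
curl -X POST http://localhost:3500/v1.0/bindings/my-postgresql \
  -H "Content-Type: application/json" \
  -d '{ "operation": "close" }'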
Related links
5.2.38 - Postmark binding spec
Component format
To setup Postmark binding create a component of type bindings.postmark
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: postmark
spec:
type: bindings.postmark
metadata:
- name: accountToken
value: "YOUR_ACCOUNT_TOKEN" # required, this is your Postmark account token
- name: serverToken
value: "YOUR_SERVER_TOKEN" # required, this is your Postmark server token
- name: emailFrom
value: "testapp@dapr.io" # optional
- name: emailTo
value: "dave@dapr.io" # optional
- name: subject
value: "Hello!" # optional
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
accountToken |
Y | Output | The Postmark account token, this should be considered a secret value | "account token" |
serverToken |
Y | Output | The Postmark server token, this should be considered a secret value | "server token" |
emailFrom |
N | Output | If set, this specifies the ‘from’ email address of the email message | "me@example.com" |
emailTo |
N | Output | If set, this specifies the ’to’ email address of the email message | "me@example.com" |
emailCc |
N | Output | If set, this specifies the ‘cc’ email address of the email message | "me@example.com" |
emailBcc |
N | Output | If set, this specifies the ‘bcc’ email address of the email message | "me@example.com" |
subject |
N | Output | If set, this specifies the subject of the email message | "email subject" |
You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom
, emailTo
, subject
, etc.)
Combined, the optional metadata properties in the component configuration and the request payload should at least contain the emailFrom
, emailTo
and subject
fields, as these are required to send an email with success.
Binding support
This component supports output binding with the following operations:
create
Example request payload
{
"operation": "create",
"metadata": {
"emailTo": "changeme@example.net",
"subject": "An email from Dapr Postmark binding"
},
"data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
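For example, assuming the component is named postmark (as in the component definition above) and the Dapr HTTP port is 3500, this payload could be sent with curl (a sketch):
curl -X POST http://localhost:3500/v1.0/bindings/postmark \
  -H "Content-Type: application/json" \
  -d '{
    "operation": "create",
    "metadata": {
      "emailTo": "changeme@example.net",
      "subject": "An email from Dapr Postmark binding"
    },
    "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
  }'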
Related links
5.2.39 - RabbitMQ binding spec
Component format
To setup RabbitMQ binding create a component of type bindings.rabbitmq
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.rabbitmq
version: v1
metadata:
- name: queueName
value: "queue1"
- name: host
value: "amqp://[username][:password]@host.domain[:port]"
- name: durable
value: "true"
- name: deleteWhenUnused
value: "false"
- name: ttlInSeconds
value: "60"
- name: prefetchCount
value: "0"
- name: exclusive
value: "false"
- name: maxPriority
value: "5"
- name: contentType
value: "text/plain"
- name: reconnectWaitInSeconds
value: "5"
- name: externalSasl
value: "false"
- name: caCert
value: "null"
- name: clientCert
value: "null"
- name: clientKey
value: "null"
- name: direction
value: "input, output"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
When a new RabbitMQ message gets published, all values from the associated metadata are added to the message’s header values.
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
queueName |
Y | Input/Output | The RabbitMQ queue name | "myqueue" |
host |
Y | Input/Output | The RabbitMQ host address | "amqp://[username][:password]@host.domain[:port]" or with TLS: "amqps://[username][:password]@host.domain[:port]" |
durable |
N | Output | Tells RabbitMQ to persist message in storage. Defaults to "false" |
"true" , "false" |
deleteWhenUnused |
N | Input/Output | Enables or disables auto-delete. Defaults to "false" |
"true" , "false" |
ttlInSeconds |
N | Output | Set the default message time to live at RabbitMQ queue level. If this parameter is omitted, messages won’t expire, continuing to exist on the queue until processed. See also | 60 |
prefetchCount |
N | Input | Set the Channel Prefetch Setting (QoS). If this parameter is omitted, QoS is set to 0, meaning no limit | 0 |
exclusive |
N | Input/Output | Determines whether the topic will be an exclusive topic or not. Defaults to "false" |
"true" , "false" |
maxPriority |
N | Input/Output | Parameter to set the priority queue. If this parameter is omitted, queue will be created as a general queue instead of a priority queue. Value between 1 and 255. See also | "1" , "10" |
contentType |
N | Input/Output | The content type of the message. Defaults to “text/plain”. | "text/plain" , "application/cloudevent+json" and so on |
reconnectWaitInSeconds |
N | Input/Output | Represents the duration in seconds that the client should wait before attempting to reconnect to the server after a disconnection occurs. Defaults to "5" . |
"5" , "10" |
externalSasl |
N | Input/Output | With TLS, should the username be taken from an additional field (e.g. CN.) See RabbitMQ Authentication Mechanisms. Defaults to "false" . |
"true" , "false" |
caCert |
N | Input/Output | The CA certificate to use for TLS connection. Defaults to null . |
"-----BEGIN CERTIFICATE-----\nMI..." |
clientCert |
N | Input/Output | The client certificate to use for TLS connection. Defaults to null . |
"-----BEGIN CERTIFICATE-----\nMI..." |
clientKey |
N | Input/Output | The client key to use for TLS connection. Defaults to null . |
"-----BEGIN PRIVATE KEY-----\nMI..." |
direction |
N | Input/Output | The direction of the binding. | "input" , "output" , "input, output" |
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
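A plain create request publishes the data as a message to the configured queue. For example, assuming the component is named myRabbitMQ (matching the examples below) and the Dapr HTTP port is 3500 (a sketch):
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "operation": "create"
      }'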
Specifying a TTL per message
Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.
To set time to live at message level use the metadata
section in the request body during the binding invocation.
The field name is ttlInSeconds
.
Example:
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d "{
\"data\": {
\"message\": \"Hi\"
},
\"metadata\": {
\"ttlInSeconds\": "60"
},
\"operation\": \"create\"
}"
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"ttlInSeconds": "60"
},
"operation": "create"
}'
Specifying a priority per message
Priority can be defined at the message level. If maxPriority
parameter is set, high priority messages will have priority over other low priority messages.
To set priority at message level use the metadata
section in the request body during the binding invocation.
The field name is priority
.
Example:
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d "{
\"data\": {
\"message\": \"Hi\"
},
\"metadata\": {
"priority": \"5\"
},
\"operation\": \"create\"
}"
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"priority": "5"
},
"operation": "create"
}'
Related links
5.2.40 - Redis binding spec
Component format
To setup Redis binding create a component of type bindings.redis
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.redis
version: v1
metadata:
- name: redisHost
value: "<address>:6379"
- name: redisPassword
value: "**************"
- name: useEntraID
value: "true"
- name: enableTLS
value: "<bool>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
redisHost |
Y | Output | The Redis host address | "localhost:6379" |
redisPassword |
N | Output | The Redis password | "password" |
redisUsername |
N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly. | "username" |
useEntraID |
N | Output | Implements EntraID support for Azure Cache for Redis. Before enabling this, follow the EntraID setup steps under Create a Redis instance below. | "true" , "false" |
enableTLS |
N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to "false" |
"true" , "false" |
clientCert |
N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here |
"----BEGIN CERTIFICATE-----\nMIIC..." |
clientKey |
N | Output | The content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here |
"----BEGIN PRIVATE KEY-----\nMIIE..." |
failover |
N | Output | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to "false" |
"true" , "false" |
sentinelMasterName |
N | Output | The sentinel master name. See Redis Sentinel Documentation | "" , "mymaster" |
sentinelUsername |
N | Output | Username for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled | "username" |
sentinelPassword |
N | Output | Password for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled | "password" |
redeliverInterval |
N | Output | The interval between checking for pending messages to redeliver. Defaults to "60s" . "0" disables redelivery. |
"30s" |
processingTimeout |
N | Output | The amount of time a message must be pending before attempting to redeliver it. Defaults to "15s" . "0" disables redelivery. |
"30s" |
redisType |
N | Output | The type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node" . |
"cluster" |
redisDB |
N | Output | Database selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0" . |
"0" |
redisMaxRetries |
N | Output | Maximum number of times to retry commands before giving up. Default is to not retry failed commands. | "5" |
redisMinRetryInterval |
N | Output | Minimum backoff for redis commands between each retry. Default is "8ms" ; "-1" disables backoff. |
"8ms" |
redisMaxRetryInterval |
N | Output | Maximum backoff for redis commands between each retry. Default is "512ms" ;"-1" disables backoff. |
"5s" |
dialTimeout |
N | Output | Dial timeout for establishing new connections. Defaults to "5s" . |
"5s" |
readTimeout |
N | Output | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s" , "-1" for no timeout. |
"3s" |
writeTimeout |
N | Output | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout. | "3s" |
poolSize |
N | Output | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | "20" |
poolTimeout |
N | Output | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | "5s" |
maxConnAge |
N | Output | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | "30m" |
minIdleConns |
N | Output | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0" . |
"2" |
idleCheckFrequency |
N | Output | Frequency of idle checks made by idle connections reaper. Default is "1m" . "-1" disables idle connections reaper. |
"-1" |
idleTimeout |
N | Output | Amount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m" . "-1" disables idle timeout check. |
"10m" |
Binding support
This component supports output binding with the following operations:
create
get
delete
create
You can store a record in Redis using the create
operation. This sets a key to hold a value. If the key already exists, the value is overwritten.
Request
{
"operation": "create",
"metadata": {
"key": "key1"
},
"data": {
"Hello": "World",
"Lorem": "Ipsum"
}
}
Response
An HTTP 204 (No Content) and empty body is returned if successful.
get
You can get a record in Redis using the get
operation. This gets a key that was previously set.
This takes an optional parameter delete
, which is by default false
. When it is set to true
, this operation uses the GETDEL
operation of Redis. That is, it returns the value which was previously set and then deletes it.
Request
{
"operation": "get",
"metadata": {
"key": "key1"
},
"data": {
}
}
Response
{
"data": {
"Hello": "World",
"Lorem": "Ipsum"
}
}
Request with delete flag
{
"operation": "get",
"metadata": {
"key": "key1",
"delete": "true"
},
"data": {
}
}
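For example, assuming the component is named redis-binding (a placeholder) and the Dapr HTTP port is 3500, the same request could be sent with curl (a sketch):
curl -X POST http://localhost:3500/v1.0/bindings/redis-binding \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "get",
        "metadata": {
          "key": "key1",
          "delete": "true"
        },
        "data": {}
      }'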
delete
You can delete a record in Redis using the delete
operation. Returns success whether the key exists or not.
Request
{
"operation": "delete",
"metadata": {
"key": "key1"
}
}
Response
An HTTP 204 (No Content) and empty body is returned if successful.
Create a Redis instance
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later.
Note: Dapr does not support Redis >= 7. It is recommended to use Redis 6.
The Dapr CLI will automatically create and setup a Redis Streams instance for you.
The Redis instance will be installed via Docker when you run dapr init
, and the component file will be created in default directory. ($HOME/.dapr/components
directory (Mac/Linux) or %USERPROFILE%\.dapr\components
on Windows).
You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.
- Install Redis into your cluster:
  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm install redis bitnami/redis --set image.tag=6.2
- Run kubectl get pods to see the Redis containers now running in your cluster.
- Add redis-master:6379 as the redisHost in your redis.yaml file. For example:
  metadata:
  - name: redisHost
    value: redis-master:6379
- Next, we’ll get our Redis password, which is slightly different depending on the OS we’re using:
  - Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which will create a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.
  - Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the outputted password.
  Add this password as the redisPassword value in your redis.yaml file. For example:
  - name: redisPassword
    value: "lhDOkwTlp0"
- Create an Azure Cache for Redis instance using the official Microsoft documentation.
- Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.
  - For the Host name: navigate to the resource’s Overview page and copy the Host name value.
  - For your access key: navigate to Settings > Access Keys, then copy and save your key.
- Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.
  - If you’re running a sample, add the host and key to the provided redis.yaml.
  - If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
- Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.
  Note: In a production-grade application, follow secret management instructions to securely manage your secrets.
- Enable EntraID support:
  - Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
  - Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
- Set enableTLS to "true" to support TLS.
Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
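For example, component metadata for a user-assigned identity might look like the following sketch (the client ID value is a placeholder):
- name: useEntraID
  value: "true"
- name: azureClientID
  value: "<YOUR-MANAGED-IDENTITY-CLIENT-ID>"
- name: enableTLS
  value: "true"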
Note
The Dapr CLI automatically deploys a local redis instance in self-hosted mode as part of the dapr init command.
Related links
5.2.41 - RethinkDB binding spec
Component format
The RethinkDB state store supports transactions, which means it can be used to support Dapr actors. Dapr persists only the actor’s current state, which doesn’t allow users to track how an actor’s state may have changed over time.
To enable users to track changes to the state of actors, this binding leverages RethinkDB’s built-in capability to monitor a RethinkDB table and emit change events containing both the old and new state. This binding creates a subscription on the Dapr state table and streams these changes using the Dapr input binding interface.
To setup RethinkDB statechange binding create a component of type bindings.rethinkdb.statechange
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: changes
spec:
type: bindings.rethinkdb.statechange
version: v1
metadata:
- name: address
value: "<REPLACE-RETHINKDB-ADDRESS>" # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015).
- name: database
value: "<REPLACE-RETHINKDB-DB-NAME>" # Required, e.g. dapr (alpha-numerics only)
- name: direction
value: "<DIRECTION-OF-RETHINKDB-BINDING>"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
address |
Y | Input | Address of RethinkDB server | "127.0.0.1:28015" , "rethinkdb.default.svc.cluster.local:28015" |
database |
Y | Input | RethinkDB database name | "dapr" |
direction |
N | Input | Direction of the binding | "input" |
Binding support
This component only supports the input binding interface.
Related links
5.2.42 - SFTP binding spec
Component format
To set up the SFTP binding, create a component of type bindings.sftp
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.sftp
version: v1
metadata:
- name: rootPath
value: "<string>"
- name: address
value: "<string>"
- name: username
value: "<string>"
- name: password
value: "*****************"
- name: privateKey
value: "*****************"
- name: privateKeyPassphrase
value: "*****************"
- name: hostPublicKey
value: "*****************"
- name: knownHostsFile
value: "<string>"
- name: insecureIgnoreHostKey
value: "<bool>"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
rootPath |
Y | Output | Root path for default working directory | "/path" |
address |
Y | Output | Address of SFTP server | "localhost:22" |
username |
Y | Output | Username for authentication | "username" |
password |
N | Output | Password for username/password authentication | "password" |
privateKey |
N | Output | Private key for public key authentication | "|- |
privateKeyPassphrase |
N | Output | Private key passphrase for public key authentication | "passphrase" |
hostPublicKey |
N | Output | Host public key for host validation | "ecdsa-sha2-nistp256 *** root@openssh-server" |
knownHostsFile |
N | Output | Known hosts file for host validation | "/path/file" |
insecureIgnoreHostKey |
N | Output | Allows skipping host validation. Defaults to "false" |
"true" , "false" |
Binding support
This component supports output binding with the following operations:
create : Create file
get : Get file
list : List files
delete : Delete file
Create file
To perform a create file operation, invoke the SFTP binding with a POST
method and the following JSON body:
{
"operation": "create",
"data": "<YOUR_BASE_64_CONTENT>",
"metadata": {
"fileName": "<filename>",
}
}
Example
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the following JSON:
{
"fileName": "<filename>"
}
Get file
To perform a get file operation, invoke the SFTP binding with a POST
method and the following JSON body:
{
"operation": "get",
"metadata": {
"fileName": "<filename>"
}
}
Example
curl -d '{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the value stored in the file.
List files
To perform a list files operation, invoke the SFTP binding with a POST
method and the following JSON body:
{
"operation": "list"
}
If you only want to list the files beneath a particular directory below the rootPath
, specify the relative directory name as the fileName
in the metadata.
{
"operation": "list",
"metadata": {
"fileName": "my/cool/directory"
}
}
Example
curl -d '{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response is a JSON array of file names.
Delete file
To perform a delete file operation, invoke the SFTP binding with a POST
method and the following JSON body:
{
"operation": "delete",
"metadata": {
"fileName": "myfile"
}
}
Example
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
An HTTP 204 (No Content) and empty body is returned if successful.
Related links
5.2.43 - SMTP binding spec
Component format
To setup SMTP binding create a component of type bindings.smtp
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: smtp
spec:
type: bindings.smtp
version: v1
metadata:
- name: host
value: "smtp host"
- name: port
value: "smtp port"
- name: user
value: "username"
- name: password
value: "password"
- name: skipTLSVerify
value: true|false
- name: emailFrom
value: "sender@example.com"
- name: emailTo
value: "receiver@example.com"
- name: emailCC
value: "cc@example.com"
- name: emailBCC
value: "bcc@example.com"
- name: subject
value: "subject"
- name: priority
value: "[value 1-5]"
Warning
The example configuration shown above contains a username and password as plain-text strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
host |
Y | Output | The host where your SMTP server runs | "smtphost" |
port |
Y | Output | The port your SMTP server listens on | "9999" |
user |
Y | Output | The user to authenticate against the SMTP server | "user" |
password |
Y | Output | The password of the user | "password" |
skipTLSVerify |
N | Output | If set to true, the SMTP server’s TLS certificate will not be verified. Defaults to "false" |
"true" , "false" |
emailFrom |
N | Output | If set, this specifies the email address of the sender. See also | "me@example.com" |
emailTo |
N | Output | If set, this specifies the email address of the receiver. See also | "me@example.com" |
emailCc |
N | Output | If set, this specifies the email address to CC in. See also | "me@example.com" |
emailBcc |
N | Output | If set, this specifies email address to BCC in. See also | "me@example.com" |
subject |
N | Output | If set, this specifies the subject of the email message. See also | "subject of mail" |
priority |
N | Output | If set, this specifies the priority (X-Priority) of the email message, from 1 (lowest) to 5 (highest) (default value: 3). See also | "1" |
Binding support
This component supports output binding with the following operations:
create
Example request
You can specify any of the following optional metadata properties with each request:
emailFrom
emailTo
emailCC
emailBCC
subject
priority
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom
, emailTo
and subject
fields.
The emailTo
, emailCC
and emailBCC
fields can contain multiple email addresses separated by a semicolon.
Example:
{
"operation": "create",
"metadata": {
"emailTo": "dapr-smtp-binding@example.net",
"emailCC": "cc1@example.net; cc2@example.net",
"subject": "Email subject",
"priority: "1"
},
"data": "Testing Dapr SMTP Binding"
}
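For example, assuming the component is named smtp (as in the component definition above) and the Dapr HTTP port is 3500, the request above could be sent with curl (a sketch):
curl -X POST http://localhost:3500/v1.0/bindings/smtp \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "emailTo": "dapr-smtp-binding@example.net",
          "emailCC": "cc1@example.net; cc2@example.net",
          "subject": "Email subject",
          "priority": "1"
        },
        "data": "Testing Dapr SMTP Binding"
      }'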
Related links
5.2.44 - Twilio SendGrid binding spec
Component format
To setup Twilio SendGrid binding create a component of type bindings.twilio.sendgrid
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: sendgrid
spec:
type: bindings.twilio.sendgrid
version: v1
metadata:
- name: emailFrom
value: "testapp@dapr.io" # optional
- name: emailFromName
value: "test app" # optional
- name: emailTo
value: "dave@dapr.io" # optional
- name: emailToName
value: "dave" # optional
- name: subject
value: "Hello!" # optional
- name: emailCc
value: "jill@dapr.io" # optional
- name: emailBcc
value: "bob@dapr.io" # optional
- name: dynamicTemplateId
value: "d-123456789" # optional
- name: dynamicTemplateData
value: '{"customer":{"name":"John Smith"}}' # optional
- name: apiKey
value: "YOUR_API_KEY" # required, this is your SendGrid key
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
apiKey |
Y | Output | SendGrid API key, this should be considered a secret value | "apikey" |
emailFrom |
N | Output | If set this specifies the ‘from’ email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com" |
emailFromName |
N | Output | If set this specifies the ‘from’ name of the email message. Optional field, see below | "me" |
emailTo |
N | Output | If set this specifies the ’to’ email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com" |
emailToName |
N | Output | If set this specifies the ’to’ name of the email message. Optional field, see below | "me" |
emailCc |
N | Output | If set this specifies the ‘cc’ email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com" |
emailBcc |
N | Output | If set this specifies the ‘bcc’ email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com" |
subject |
N | Output | If set this specifies the subject of the email message. Optional field, see below | "subject of the email" |
Binding support
This component supports output binding with the following operations:
create
Example request payload
You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom
, emailTo
, subject
, etc.)
{
"operation": "create",
"metadata": {
"emailTo": "changeme@example.net",
"subject": "An email from Dapr SendGrid binding"
},
"data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
Dynamic templates
If a dynamic template is used, a dynamicTemplateId
needs to be provided and then the dynamicTemplateData
is used:
{
"operation": "create",
"metadata": {
"emailTo": "changeme@example.net",
"subject": "An template email from Dapr SendGrid binding",
"dynamicTemplateId": "d-123456789",
"dynamicTemplateData": "{\"customer\":{\"name\":\"John Smith\"}}"
}
}
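For example, assuming the component is named sendgrid (as in the component definition above) and the Dapr HTTP port is 3500, the dynamic template request could be sent with curl (a sketch):
curl -X POST http://localhost:3500/v1.0/bindings/sendgrid \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "emailTo": "changeme@example.net",
          "subject": "A template email from Dapr SendGrid binding",
          "dynamicTemplateId": "d-123456789",
          "dynamicTemplateData": "{\"customer\":{\"name\":\"John Smith\"}}"
        }
      }'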
Related links
5.2.45 - Twilio SMS binding spec
Component format
To setup Twilio SMS binding create a component of type bindings.twilio.sms
. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.twilio.sms
version: v1
metadata:
- name: toNumber # required.
value: "111-111-1111"
- name: fromNumber # required.
value: "222-222-2222"
- name: accountSid # required.
value: "*****************"
- name: authToken # required.
value: "*****************"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
toNumber |
Y | Output | The target number to send the sms to | "111-111-1111" |
fromNumber |
Y | Output | The sender phone number | "222-222-2222" |
accountSid |
Y | Output | The Twilio account SID | "account sid" |
authToken |
Y | Output | The Twilio auth token | "auth token" |
Binding support
This component supports output binding with the following operations:
create
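An example request payload is sketched below. It assumes the data field carries the SMS text and that toNumber may also be supplied in the request metadata to override the value from the component configuration:
{
  "operation": "create",
  "metadata": {
    "toNumber": "111-111-1111"
  },
  "data": "Hello from Dapr!"
}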
Related links
5.2.46 - Wasm
Overview
With WebAssembly, you can safely run code compiled in other languages. Runtimes
execute WebAssembly Modules (Wasm), which are most often binaries with a .wasm
extension.
The Wasm Binding allows you to invoke a program compiled to Wasm by passing command-line arguments or environment variables to it, similar to how you would with a normal subprocess. For example, you can satisfy an invocation using Python, even though Dapr is written in Go and is running on a platform that doesn’t have Python installed!
The Wasm binary must be a program compiled with the WebAssembly System Interface (WASI). The binary can be a program you’ve written such as in Go, or an interpreter you use to run inlined scripts, such as Python.
Minimally, you must specify a Wasm binary compiled with the canonical WASI
version wasi_snapshot_preview1
(a.k.a. wasip1
), often abbreviated to wasi
.
Note: If compiling in Go 1.21+, this is
GOOS=wasip1 GOARCH=wasm
. In TinyGo, Rust, and Zig, this is the targetwasm32-wasi
.
You can also re-use an existing binary. For example, Wasm Language Runtimes distributes interpreters (including PHP, Python, and Ruby) already compiled to WASI.
Wasm binaries are loaded from a URL. For example, the URL file://rewrite.wasm
loads rewrite.wasm
from the current directory of the process. On Kubernetes,
see How to: Mount Pod volumes to the Dapr sidecar
to configure a filesystem mount that can contain Wasm binaries.
It is also possible to fetch the Wasm binary from a remote URL. In this case,
the URL must point exactly to one Wasm binary. For example:
http://example.com/rewrite.wasm
, orhttps://example.com/rewrite.wasm
.
Dapr uses wazero to run these binaries, because it has no dependencies. This allows use of WebAssembly with no installation process except Dapr itself.
The Wasm output binding supports making HTTP client calls using the wasi-http specification. You can find example code for making HTTP calls in a variety of languages in the wasi-http documentation.
Note
If you just want to make an HTTP call, it is simpler to use the service invocation API. However, if you need to add your own logic - for example, filtering or calling to multiple API endpoints - consider using Wasm.
Component format
To configure a Wasm binding, create a component of type
bindings.wasm
. See this guide
on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: wasm
spec:
type: bindings.wasm
version: v1
metadata:
- name: url
value: "file://uppercase.wasm"
Spec metadata fields
Field | Details | Required | Example |
---|---|---|---|
url |
The URL of the resource including the Wasm binary to instantiate. The supported schemes include file:// , http:// , and https:// . The path of a file:// URL is relative to the Dapr process unless it begins with / . |
true | file://hello.wasm , https://example.com/hello.wasm |
Binding support
This component supports output binding with the following operations:
execute
Example request
The data
field, if present, will be the program’s STDIN. You can optionally
pass metadata properties with each request:
args
any CLI arguments, comma-separated. This excludes the program name.
For example, consider binding the url
to a Ruby interpreter, such as from
webassembly-language-runtimes:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: wasm
spec:
type: bindings.wasm
version: v1
metadata:
- name: url
value: "https://github.com/vmware-labs/webassembly-language-runtimes/releases/download/ruby%2F3.2.0%2B20230215-1349da9/ruby-3.2.0-slim.wasm"
Assuming that you wanted to start your Dapr at port 3500 with the Wasm Binding, you’d run:
$ dapr run --app-id wasm --dapr-http-port 3500 --resources-path components
The following request responds with Hello "salaboy"
:
$ curl -X POST http://localhost:3500/v1.0/bindings/wasm -d'
{
"operation": "execute",
"metadata": {
"args": "-ne,print \"Hello \"; print"
},
"data": "salaboy"
}'
Related links
5.2.47 - Zeebe command binding spec
Component format
To setup Zeebe command binding create a component of type bindings.zeebe.command
. See this guide on how to create and apply a binding configuration.
See this for Zeebe documentation.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.zeebe.command
version: v1
metadata:
- name: gatewayAddr
value: "<host>:<port>"
- name: gatewayKeepAlive
value: "45s"
- name: usePlainTextConnection
value: "true"
- name: caCertificatePath
value: "/path/to/ca-cert"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
gatewayAddr |
Y | Output | Zeebe gateway address | "localhost:26500" |
gatewayKeepAlive |
N | Output | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | "45s" |
usePlainTextConnection |
N | Output | Whether to use a plain text connection or not | "true" , "false" |
caCertificatePath |
N | Output | The path to the CA cert | "/path/to/ca-cert" |
Binding support
This component supports output binding with the following operations:
topology
deploy-process
deploy-resource
create-instance
cancel-instance
set-variables
resolve-incident
publish-message
activate-jobs
complete-job
fail-job
update-job-retries
throw-error
Output binding
Zeebe uses gRPC under the hood for the Zeebe client we use in this binding. Please consult the gRPC API reference for more information.
topology
The topology
operation obtains the current topology of the cluster the gateway is part of.
To perform a topology
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {},
"operation": "topology"
}
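For example, assuming the command binding is named zeebe-command (a placeholder) and the Dapr HTTP port is 3500, this could be invoked with curl (a sketch):
curl -X POST http://localhost:3500/v1.0/bindings/zeebe-command \
  -H "Content-Type: application/json" \
  -d '{ "data": {}, "operation": "topology" }'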
Response
The binding returns a JSON with the following response:
{
"brokers": [
{
"nodeId": null,
"host": "172.18.0.5",
"port": 26501,
"partitions": [
{
"partitionId": 1,
"role": null,
"health": null
}
],
"version": "0.26.0"
}
],
"clusterSize": 1,
"partitionsCount": 1,
"replicationFactor": 1,
"gatewayVersion": "0.26.0"
}
The response values are:
brokers - list of brokers that are part of this cluster
  nodeId - unique (within a cluster) node ID for the broker
  host - hostname of the broker
  port - port for the broker
  partitions - list of partitions managed or replicated on this broker
    partitionId - the unique ID of this partition
    role - the role of the broker for this partition
    health - the health of this partition
  version - broker version
clusterSize - how many nodes are in the cluster
partitionsCount - how many partitions are spread across the cluster
replicationFactor - configured replication factor for this cluster
gatewayVersion - gateway version
deploy-process
Deprecated alias of ‘deploy-resource’.
deploy-resource
The deploy-resource
operation deploys a single resource to Zeebe. A resource can be a process (BPMN) or a decision and a decision requirement (DMN).
To perform a deploy-resource
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": "YOUR_FILE_CONTENT",
"metadata": {
"fileName": "products-process.bpmn"
},
"operation": "deploy-resource"
}
The metadata parameters are:
fileName
- the name of the resource file
Response
The binding returns a JSON with the following response:
{
"key": 2251799813685252,
"deployments": [
{
"Metadata": {
"Process": {
"bpmnProcessId": "products-process",
"version": 2,
"processDefinitionKey": 2251799813685251,
"resourceName": "products-process.bpmn"
}
}
}
]
}
{
"key": 2251799813685253,
"deployments": [
{
"Metadata": {
"Decision": {
"dmnDecisionId": "products-approval",
"dmnDecisionName": "Products approval",
"version": 1,
"decisionKey": 2251799813685252,
"dmnDecisionRequirementsId": "Definitions_0c98xne",
"decisionRequirementsKey": 2251799813685251
}
}
},
{
"Metadata": {
"DecisionRequirements": {
"dmnDecisionRequirementsId": "Definitions_0c98xne",
"dmnDecisionRequirementsName": "DRD",
"version": 1,
"decisionRequirementsKey": 2251799813685251,
"resourceName": "products-approval.dmn"
}
}
}
]
}
The response values are:
key - the unique key identifying the deployment
deployments - a list of deployed resources, e.g. processes
  metadata - deployment metadata, each deployment has only one metadata
    process - metadata of a deployed process
      bpmnProcessId - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific process definition
      version - the assigned process version
      processDefinitionKey - the assigned key, which acts as a unique identifier for this process
      resourceName - the resource name from which this process was parsed
    decision - metadata of a deployed decision
      dmnDecisionId - the dmn decision ID, as parsed during deployment; together with the versions forms a unique identifier for a specific decision
      dmnDecisionName - the dmn name of the decision, as parsed during deployment
      version - the assigned decision version
      decisionKey - the assigned decision key, which acts as a unique identifier for this decision
      dmnDecisionRequirementsId - the dmn ID of the decision requirements graph that this decision is part of, as parsed during deployment
      decisionRequirementsKey - the assigned key of the decision requirements graph that this decision is part of
    decisionRequirements - metadata of a deployed decision requirements
      dmnDecisionRequirementsId - the dmn decision requirements ID, as parsed during deployment; together with the versions forms a unique identifier for a specific decision
      dmnDecisionRequirementsName - the dmn name of the decision requirements, as parsed during deployment
      version - the assigned decision requirements version
      decisionRequirementsKey - the assigned decision requirements key, which acts as a unique identifier for this decision requirements
      resourceName - the resource name from which this decision requirements was parsed
create-instance
The create-instance
operation creates and starts an instance of the specified process. The process definition to use to create the instance can be
specified either using its unique key (as returned by the deploy-process
operation), or using the BPMN process ID and a version.
Note that only processes with none start events can be started through this command.
Typically, process creation and execution are decoupled. This means that the command creates a new process instance and immediately responds with
the process instance id. The execution of the process occurs after the response is sent. However, there are use cases that need to collect the results
of a process when its execution is complete. By defining the withResult
property, the command allows you to “synchronously” execute processes and receive
the results via a set of variables. The response is sent when the process execution is complete.
For more information please visit the official documentation.
To perform a create-instance
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"bpmnProcessId": "products-process",
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
},
"operation": "create-instance"
}
{
"data": {
"processDefinitionKey": 2251799813685895,
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
},
"operation": "create-instance"
}
{
"data": {
"bpmnProcessId": "products-process",
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
},
"withResult": true,
"requestTimeout": "30s",
"fetchVariables": ["productId"]
},
"operation": "create-instance"
}
The data parameters are:
bpmnProcessId - the BPMN process ID of the process definition to instantiate
processDefinitionKey - the unique key identifying the process definition to instantiate
version - (optional, default: latest version) the version of the process to instantiate
variables - (optional) JSON document that will instantiate the variables for the root variable scope of the process instance; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { “a”: 1, “b”: 2 } will create two variables, named “a” and “b” respectively, with their associated values. [{ “a”: 1, “b”: 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object
withResult - (optional, default: false) if set to true, the process will be instantiated and executed synchronously
requestTimeout - (optional, only used if withResult=true) timeout; the request will be closed if the process is not completed before the requestTimeout. If requestTimeout = 0, uses the generic requestTimeout configured in the gateway.
fetchVariables - (optional, only used if withResult=true) list of names of variables to be included in the variables property of the response. If empty, all visible variables in the root scope will be returned.
Response
The binding returns a JSON with the following response:
{
"processDefinitionKey": 2251799813685895,
"bpmnProcessId": "products-process",
"version": 3,
"processInstanceKey": 2251799813687851,
"variables": "{\"productId\":\"some-product-id\"}"
}
The response values are:
processDefinitionKey - the key of the process definition which was used to create the process instance
bpmnProcessId - the BPMN process ID of the process definition which was used to create the process instance
version - the version of the process definition which was used to create the process instance
processInstanceKey - the unique identifier of the created process instance
variables - (optional, only if withResult=true was used in the request) JSON document consisting of visible variables in the root scope; returned as a serialized JSON document
cancel-instance
The cancel-instance
operation cancels a running process instance.
To perform a cancel-instance
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"processInstanceKey": 2251799813687851
},
"operation": "cancel-instance"
}
The data parameters are:
processInstanceKey
- the process instance key
Response
The binding does not return a response body.
set-variables
The set-variables
operation creates or updates variables for an element instance (e.g. process instance, flow element instance).
To perform a set-variables
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"elementInstanceKey": 2251799813687880,
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
},
"operation": "set-variables"
}
The data parameters are:
elementInstanceKey - the unique identifier of a particular element; can be the process instance key (as obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
local - (optional, default: false) if true, the variables will be merged strictly into the local scope (as indicated by elementInstanceKey); this means the variables are not propagated to upper scopes. For example, say we have two scopes, ‘1’ and ‘2’, with each having effective variables as: 1 => { “foo” : 2 }, and 2 => { “bar” : 1 }. If we send an update request with elementInstanceKey = 2, variables { “foo” : 5 }, and local is true, then scope 1 will be unchanged, and scope 2 will now be { “bar” : 1, “foo” : 5 }. If local was false, however, then scope 1 would be { “foo” : 5 }, and scope 2 would be { “bar” : 1 }
variables - a JSON serialized document describing variables as key value pairs; the root of the document must be an object
Response
The binding returns a JSON with the following response:
{
"key": 2251799813687896
}
The response values are:
key
- the unique key of the set variables command
resolve-incident
The resolve-incident
operation resolves an incident.
To perform a resolve-incident
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"incidentKey": 2251799813686123
},
"operation": "resolve-incident"
}
The data parameters are:
incidentKey
- the unique ID of the incident to resolve
Response
The binding does not return a response body.
publish-message
The publish-message
operation publishes a single message. Messages are published to specific partitions computed from their correlation keys.
To perform a publish-message
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"messageName": "product-message",
"correlationKey": "2",
"timeToLive": "1m",
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
},
},
"operation": "publish-message"
}
The data parameters are:
messageName - the name of the message
correlationKey - (optional) the correlation key of the message
timeToLive - (optional) how long the message should be buffered on the broker
messageId - (optional) the unique ID of the message; can be omitted. Only useful to ensure only one message with the given ID will ever be published (during its lifetime)
variables - (optional) the message variables as a JSON document; to be valid, the root of the document must be an object, e.g. { “a”: “foo” }. [ “foo” ] would not be valid
Response
The binding returns a JSON with the following response:
{
"key": 2251799813688225
}
The response values are:
key
- the unique ID of the message that was published
activate-jobs
The activate-jobs
operation iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to
the client as they are activated.
To perform an activate-jobs
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"jobType": "fetch-products",
"maxJobsToActivate": 5,
"timeout": "5m",
"workerName": "products-worker",
"fetchVariables": [
"productId",
"productName",
"productKey"
],
"requestTimeout": "30s"
},
"operation": "activate-jobs"
}
The data parameters are:
jobType - the job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="fetch-products" />)
maxJobsToActivate - the maximum jobs to activate by this request
timeout - (optional, default: 5 minutes) a job returned after this call will not be activated by another call until the timeout has been reached
workerName - (optional, default: default) the name of the worker activating the jobs, mostly used for logging purposes
fetchVariables - (optional) a list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned
requestTimeout - (optional) the request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated.
Response
The binding returns a JSON with the following response:
[
{
"key": 2251799813685267,
"type": "fetch-products",
"processInstanceKey": 2251799813685260,
"bpmnProcessId": "products",
"processDefinitionVersion": 1,
"processDefinitionKey": 2251799813685249,
"elementId": "Activity_test",
"elementInstanceKey": 2251799813685266,
"customHeaders": "{\"process-header-1\":\"1\",\"process-header-2\":\"2\"}",
"worker": "test",
"retries": 1,
"deadline": 1694091934039,
"variables":"{\"productId\":\"some-product-id\"}"
}
]
The response values are:
key - the key, a unique identifier for the job
type - the type of the job (should match what was requested)
processInstanceKey - the job’s process instance key
bpmnProcessId - the bpmn process ID of the job process definition
processDefinitionVersion - the version of the job process definition
processDefinitionKey - the key of the job process definition
elementId - the associated task element ID
elementInstanceKey - the unique key identifying the associated task, unique within the scope of the process instance
customHeaders - a set of custom headers defined during modelling; returned as a serialized JSON document
worker - the name of the worker which activated this job
retries - the amount of retries left to this job (should always be positive)
deadline - when the job can be activated again, sent as a UNIX epoch timestamp
variables - computed at activation time, consisting of all visible variables to the task scope; returned as a serialized JSON document
complete-job
The complete-job
operation completes a job with the given payload, which allows completing the associated service task.
To perform a complete-job
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"jobKey": 2251799813686172,
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
},
"operation": "complete-job"
}
The data parameters are:
jobKey - the unique job identifier, as obtained from the activate jobs response
variables - (optional) a JSON document representing the variables in the current task scope
Response
The binding does not return a response body.
fail-job
The fail-job
operation marks the job as failed; if the retries argument is positive, then the job will be immediately activatable again, and a
worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the
job will not be activatable until the incident is resolved.
To perform a fail-job
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"jobKey": 2251799813685739,
"retries": 5,
"errorMessage": "some error occurred",
"retryBackOff": "30s",
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
},
"operation": "fail-job"
}
The data parameters are:
jobKey - the unique job identifier, as obtained when activating the job
retries - the amount of retries the job should have left
errorMessage - (optional) a message describing why the job failed; this is particularly useful if a job runs out of retries and an incident is raised, as this message can help explain why the incident was raised
retryBackOff - (optional) the back-off timeout for the next retry
variables - (optional) JSON document that will instantiate the variables at the local scope of the job’s associated task; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { “a”: 1, “b”: 2 } will create two variables, named “a” and “b” respectively, with their associated values. [{ “a”: 1, “b”: 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
Response
The binding does not return a response body.
update-job-retries
The update-job-retries
operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the
underlying problem be solved.
To perform an update-job-retries
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"jobKey": 2251799813686172,
"retries": 10
},
"operation": "update-job-retries"
}
The data parameters are:
jobKey - the unique job identifier, as obtained through the activate-jobs operation
retries - the new amount of retries for the job; must be positive
Response
The binding does not return a response body.
throw-error
The throw-error
operation throws an error to indicate that a business error has occurred while processing the job. The error is identified
by an error code and is handled by an error catch event in the process with the same error code.
To perform a throw-error
operation, invoke the Zeebe command binding with a POST
method, and the following JSON body:
{
"data": {
"jobKey": 2251799813686172,
"errorCode": "product-fetch-error",
"errorMessage": "The product could not be fetched",
"variables": {
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
},
"operation": "throw-error"
}
The data parameters are:
jobKey - the unique job identifier, as obtained when activating the job
errorCode - the error code that will be matched with an error catch event
errorMessage - (optional) an error message that provides additional context
variables - (optional) JSON document that will instantiate the variables at the local scope of the job’s associated task; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { “a”: 1, “b”: 2 } will create two variables, named “a” and “b” respectively, with their associated values. [{ “a”: 1, “b”: 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
Response
The binding does not return a response body.
Related links
5.2.48 - Zeebe JobWorker binding spec
Component format
To setup Zeebe JobWorker binding create a component of type bindings.zeebe.jobworker
. See this guide on how to create and apply a binding configuration.
See this for Zeebe JobWorker documentation.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.zeebe.jobworker
version: v1
metadata:
- name: gatewayAddr
value: "<host>:<port>"
- name: gatewayKeepAlive
value: "45s"
- name: usePlainTextConnection
value: "true"
- name: caCertificatePath
value: "/path/to/ca-cert"
- name: workerName
value: "products-worker"
- name: workerTimeout
value: "5m"
- name: requestTimeout
value: "15s"
- name: jobType
value: "fetch-products"
- name: maxJobsActive
value: "32"
- name: concurrency
value: "4"
- name: pollInterval
value: "100ms"
- name: pollThreshold
value: "0.3"
- name: fetchVariables
value: "productId, productName, productKey"
- name: autocomplete
value: "true"
- name: retryBackOff
value: "30s"
- name: direction
value: "input"
Spec metadata fields
Field | Required | Binding support | Details | Example |
---|---|---|---|---|
gatewayAddr | Y | Input | Zeebe gateway address | "localhost:26500" |
gatewayKeepAlive | N | Input | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | "45s" |
usePlainTextConnection | N | Input | Whether to use a plain text connection or not | "true", "false" |
caCertificatePath | N | Input | The path to the CA cert | "/path/to/ca-cert" |
workerName | N | Input | The name of the worker activating the jobs, mostly used for logging purposes | "products-worker" |
workerTimeout | N | Input | A job returned after this call will not be activated by another call until the timeout has been reached; defaults to 5 minutes | "5m" |
requestTimeout | N | Input | The request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated. Defaults to 10 seconds | "30s" |
jobType | Y | Input | the job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="fetch-products" />) | "fetch-products" |
maxJobsActive | N | Input | Set the maximum number of jobs which will be activated for this worker at the same time. Defaults to 32 | "32" |
concurrency | N | Input | The maximum number of concurrent spawned goroutines to complete jobs. Defaults to 4 | "4" |
pollInterval | N | Input | Set the maximal interval between polling for new jobs. Defaults to 100 milliseconds | "100ms" |
pollThreshold | N | Input | Set the threshold of buffered activated jobs before polling for new jobs, i.e. threshold * maxJobsActive. Defaults to 0.3 | "0.3" |
fetchVariables | N | Input | A list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned | "productId", "productName", "productKey" |
autocomplete | N | Input | Indicates if a job should be autocompleted or not. If not set, all jobs will be auto-completed by default. Disable it if the worker should manually complete or fail the job with either a business error or an incident | "true", "false" |
retryBackOff | N | Input | The back-off timeout for the next retry if a job fails | 15s |
direction | N | Input | The direction of the binding | "input" |
Binding support
This component supports input binding interfaces.
Input binding
Variables
The Zeebe process engine handles the process state as well as process variables, which can be passed
on process instantiation or which can be updated or created during process execution. These variables
can be passed to a registered job worker by defining the variable names as a comma-separated list in
the fetchVariables
metadata field. The process engine will then pass these variables with their current
values to the job worker implementation.
If the binding registers the three variables productId
, productName
and productKey
, then the worker will
be called with the following JSON body:
{
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
Note: if the fetchVariables
metadata field is not set, all process variables will be passed to the worker.
Headers
The Zeebe process engine has the ability to pass custom task headers to a job worker. These headers can be defined for every service task. Task headers will be passed by the binding as metadata (HTTP headers) to the job worker.
The binding will also pass the following job-related variables as metadata. The values are passed as strings. The table also lists the original data type so that the value can be converted back to the equivalent data type in the programming language used by the worker.
Metadata | Data type | Description |
---|---|---|
X-Zeebe-Job-Key | int64 | The key, a unique identifier for the job |
X-Zeebe-Job-Type | string | The type of the job (should match what was requested) |
X-Zeebe-Process-Instance-Key | int64 | The job’s process instance key |
X-Zeebe-Bpmn-Process-Id | string | The bpmn process ID of the job process definition |
X-Zeebe-Process-Definition-Version | int32 | The version of the job process definition |
X-Zeebe-Process-Definition-Key | int64 | The key of the job process definition |
X-Zeebe-Element-Id | string | The associated task element ID |
X-Zeebe-Element-Instance-Key | int64 | The unique key identifying the associated task, unique within the scope of the process instance |
X-Zeebe-Worker | string | The name of the worker which activated this job |
X-Zeebe-Retries | int32 | The amount of retries left to this job (should always be positive) |
X-Zeebe-Deadline | int64 | When the job can be activated again, sent as a UNIX epoch timestamp |
X-Zeebe-Autocomplete | bool | The autocomplete status that is defined in the binding metadata |
Related links
5.3 - State store component specs
The following table lists state stores supported, at various levels, by the Dapr state management building block. Learn how to set up different state stores for Dapr state management.
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status |
Alpha Beta Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Note
State stores can be used for actors if they support both transactional operations and ETag.
Generic
Component | CRUD | Transactional | ETag | TTL | Actors | Query | Status | Component version | Since runtime version |
---|---|---|---|---|---|---|---|---|---|
Aerospike | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
Apache Cassandra | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | Stable | v1 | 1.9 |
CockroachDB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Stable | v1 | 1.10 |
Couchbase | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
etcd | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Beta | v2 | 1.12 |
Hashicorp Consul | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
Hazelcast | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
In-memory | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Stable | v1 | 1.9 |
JetStream KV | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.7 |
Memcached | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | Stable | v1 | 1.9 |
MongoDB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Stable | v1 | 1.0 |
MySQL & MariaDB | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Stable | v1 | 1.10 |
Oracle Database | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Beta | v1 | 1.7 |
PostgreSQL v1 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Stable | v1 | 1.0 |
PostgreSQL v2 | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Stable | v2 | 1.13 |
Redis | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Stable | v1 | 1.0 |
RethinkDB | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Beta | v1 | 1.9 |
SQLite | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Stable | v1 | 1.11 |
Zookeeper | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
Amazon Web Services (AWS)
Component | CRUD | Transactional | ETag | TTL | Actors | Query | Status | Component version | Since runtime version |
---|---|---|---|---|---|---|---|---|---|
AWS DynamoDB | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Stable | v1 | 1.10 |
Cloudflare
Component | CRUD | Transactional | ETag | TTL | Actors | Query | Status | Component version | Since runtime version |
---|---|---|---|---|---|---|---|---|---|
Cloudflare Workers KV | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | Beta | v1 | 1.10 |
Google Cloud Platform (GCP)
Component | CRUD | Transactional | ETag | TTL | Actors | Query | Status | Component version | Since runtime version |
---|---|---|---|---|---|---|---|---|---|
GCP Firestore | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Stable | v1 | 1.11 |
Microsoft Azure
Component | CRUD | Transactional | ETag | TTL | Actors | Query | Status | Component version | Since runtime version |
---|---|---|---|---|---|---|---|---|---|
Azure Blob Storage | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Stable | v2 | 1.13 |
Azure Cosmos DB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Stable | v1 | 1.0 |
Azure Table Storage | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Stable | v1 | 1.9 |
Microsoft SQL Server | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Stable | v1 | 1.5 |
Oracle Cloud
Component | CRUD | Transactional | ETag | TTL | Actors | Query | Status | Component version | Since runtime version |
---|---|---|---|---|---|---|---|---|---|
Autonomous Database (ATP and ADW) | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Alpha | v1 | 1.7 |
Coherence | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.16 |
Object Storage | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | Alpha | v1 | 1.6 |
5.3.1 - Aerospike
Component format
To setup Aerospike state store create a component of type state.Aerospike
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.Aerospike
version: v1
metadata:
- name: hosts
value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of hosts. Example: "aerospike:3000,aerospike2:3000"
- name: namespace
value: <REPLACE-WITH-NAMESPACE> # Required. The aerospike namespace.
- name: set
value: <REPLACE-WITH-SET> # Optional
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
hosts | Y | Host name/port of database server | "localhost:3000" , "aerospike:3000,aerospike2:3000" |
namespace | Y | The Aerospike namespace | "namespace" |
set | N | The setName in the database | "myset" |
Setup Aerospike
You can run Aerospike locally using Docker:
docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike
You can then interact with the server using localhost:3000
.
The easiest way to install Aerospike on Kubernetes is by using the Helm chart:
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name my-aerospike --namespace aerospike stable/aerospike
This installs Aerospike into the aerospike
namespace.
To interact with Aerospike, find the service with: kubectl get svc aerospike -n aerospike
.
For example, if installing using the example above, the Aerospike host address would be:
aerospike-my-aerospike.aerospike.svc.cluster.local:3000
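Once the component is loaded, you can smoke-test it through the Dapr state API. The following is a sketch assuming a component named statestore and the default Dapr HTTP port 3500:
# Save a value
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "mykey", "value": "myvalue" }]'
# Read it back
curl http://localhost:3500/v1.0/state/statestore/mykey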
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.2 - AWS DynamoDB
Component format
To setup a DynamoDB state store create a component of type state.aws.dynamodb
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.aws.dynamodb
version: v1
metadata:
- name: table
value: "Contracts"
- name: accessKey
value: "AKIAIOSFODNN7EXAMPLE" # Optional
- name: secretKey
value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" # Optional
- name: endpoint
value: "http://localhost:8080" # Optional
- name: region
value: "eu-west-1" # Optional
- name: sessionToken
value: "myTOKEN" # Optional
- name: ttlAttributeName
value: "expiresAt" # Optional
- name: partitionKey
value: "ContractID" # Optional
# Uncomment this if you wish to use AWS DynamoDB as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Primary Key
In order to use DynamoDB as a Dapr state store, the table must have a primary key named key
. See the section Partition Keys for an option to change this behavior.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
table | Y | name of the DynamoDB table to use | "Contracts" |
accessKey | N | ID of the AWS account with appropriate permissions to DynamoDB. Can be secretKeyRef to use a secret reference | "AKIAIOSFODNN7EXAMPLE" |
secretKey | N | Secret for the AWS user. Can be secretKeyRef to use a secret reference | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
region | N | The AWS region of the instance. See this page for valid regions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html. Ensure that DynamoDB is available in that region. | "us-east-1" |
endpoint | N | AWS endpoint for the component to use. Only used for local development. The endpoint is unnecessary when running against production AWS | "http://localhost:4566" |
sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | "TOKEN" |
ttlAttributeName | N | The table attribute name which should be used for TTL. | "expiresAt" |
partitionKey | N | The table primary key or partition key attribute name. This field is used to replace the default primary key attribute name "key". See the section Partition Keys. | "ContractID" |
actorStateStore | N | Consider this state store for actors. Defaults to “false” | "true" , "false" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Setup AWS DynamoDB
See Authenticating to AWS for information about authentication-related attributes
Time to live (TTL)
In order to use DynamoDB TTL feature, you must enable TTL on your table and define the attribute name.
The attribute name must be defined in the ttlAttributeName
field.
See official AWS docs.
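For example, TTL can be enabled on the table used on this page with the AWS CLI; the following is a sketch using the table and attribute names from the examples above:
aws dynamodb update-time-to-live \
  --table-name Contracts \
  --time-to-live-specification "Enabled=true, AttributeName=expiresAt"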
Partition Keys
By default, the DynamoDB state store component uses the table attribute name key
as primary/partition key in the DynamoDB table.
This can be overridden by specifying a metadata field in the component configuration with a key of partitionKey
and a value of the desired attribute name.
To learn more about DynamoDB primary/partition keys, read the AWS DynamoDB Developer Guide.
The following statestore.yaml
file shows how to configure the DynamoDB state store component to use the partition key attribute name of ContractID
:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.aws.dynamodb
version: v1
metadata:
- name: table
value: "Contracts"
- name: partitionKey
value: "ContractID"
The above component specification assumes the following DynamoDB Table Layout:
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "ContractID",
"AttributeType": "S"
}
],
"TableName": "Contracts",
"KeySchema": [
{
"AttributeName": "ContractID",
"KeyType": "HASH"
}
]
}
}
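If the table does not exist yet, a table with this layout can be created with the AWS CLI. The following is a sketch; the on-demand billing mode is an assumption, so adjust it to your capacity needs:
aws dynamodb create-table \
  --table-name Contracts \
  --attribute-definitions AttributeName=ContractID,AttributeType=S \
  --key-schema AttributeName=ContractID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST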
The following operation passes "A12345"
as the value for key
, and based on the component specification provided above, the Dapr runtime will replace the key
attribute name
with ContractID
as the Partition/Primary Key sent to DynamoDB:
$ dapr run --app-id contractsprocessing --app-port ...
$ curl -X POST http://localhost:3500/v1.0/state/<store_name> \
-H "Content-Type: application/json" \
-d '[
{
"key": "A12345",
"value": "Dapr Contract"
}
]'
The following AWS CLI Command displays the contents of the DynamoDB Contracts
table:
$ aws dynamodb get-item \
--table-name Contracts \
--key '{"ContractID":{"S":"contractsprocessing||A12345"}}'
{
"Item": {
"value": {
"S": "Dapr Contract"
},
"etag": {
"S": "....."
},
"ContractID": {
"S": "contractsprocessing||A12345"
}
}
}
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
- Authenticating to AWS
5.3.3 - Azure Blob Storage
Component format
To setup the Azure Blob Storage state store create a component of type state.azure.blobstorage
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.azure.blobstorage
# Supports v1 and v2. Users should always use v2 by default. There is no
# migration path from v1 to v2, see `versioning` below.
version: v2
metadata:
- name: accountName
value: "[your_account_name]"
- name: accountKey
value: "[your_account_key]"
- name: containerName
value: "[your_container_name]"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Versioning
Dapr has 2 versions of the Azure Blob Storage state store component: v1
and v2
. It is recommended to use v2
for all new applications. v1
is considered legacy and is preserved for compatibility with existing applications only.
In v1
, a longstanding implementation issue was identified, where the key prefix was incorrectly stripped by the component, essentially behaving as if keyPrefix
was always set to none
.
The updated v2
of the component fixes the incorrect behavior and makes the state store correctly respect the keyPrefix
property.
While v1
and v2
have the same metadata fields, they are otherwise incompatible, with no automatic data migration path for v1
to v2
.
If you are using v1
of this component, you should continue to use v1
until you create a new state store.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
accountName |
Y | The storage account name | "mystorageaccount" . |
accountKey |
Y (unless using Microsoft Entra ID) | Primary or secondary storage key | "key" |
containerName |
Y | The name of the container to be used for Dapr state. The container will be created for you if it doesn’t exist | "container" |
azureEnvironment |
N | Optional name for the Azure environment if using a different Azure cloud | "AZUREPUBLICCLOUD" (default value), "AZURECHINACLOUD" , "AZUREUSGOVERNMENTCLOUD" |
endpoint |
N | Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https:// ), the IP or FQDN, and optional port. |
"http://127.0.0.1:10000" |
ContentType |
N | The blob’s content type | "text/plain" |
ContentMD5 |
N | The blob’s MD5 hash | "vZGKbMRDAnMs4BIwlXaRvQ==" |
ContentEncoding |
N | The blob’s content encoding | "UTF-8" |
ContentLanguage |
N | The blob’s content language | "en-us" |
ContentDisposition |
N | The blob’s content disposition. Conveys additional information about how to process the response payload | "attachment" |
CacheControl |
N | The blob’s cache control | "no-cache" |
Setup Azure Blob Storage
Follow the instructions from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a container for Dapr to use, you can do so beforehand. However, the Blob Storage state provider will create one for you automatically if it doesn’t exist.
In order to setup Azure Blob Storage as a state store, you will need the following properties:
- accountName: The storage account name. For example: mystorageaccount.
- accountKey: Primary or secondary storage account key.
- containerName: The name of the container to be used for Dapr state. The container will be created for you if it doesn’t exist.
Authenticating with Microsoft Entra ID
This component supports authentication with Microsoft Entra ID as an alternative to use account keys. Whenever possible, it is recommended that you use Microsoft Entra ID for authentication in production systems, to take advantage of better security, fine-tuned access control, and the ability to use managed identities for apps running on Azure.
The following scripts are optimized for a bash or zsh shell and require the following apps installed:
You must also be authenticated with Azure in your Azure CLI.
- To get started with using Microsoft Entra ID for authenticating the Blob Storage state store component, make sure you’ve created an Microsoft Entra ID application and a Service Principal as explained in the Authenticating to Azure document.
Once done, set a variable with the ID of the Service Principal that you created:
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
- Set the following variables with the name of your Azure Storage Account and the name of the Resource Group where it’s located:
STORAGE_ACCOUNT_NAME="[your_storage_account_name]"
RG_NAME="[your_resource_group_name]"
- Using RBAC, assign a role to our Service Principal so it can access data inside the Storage Account.
In this case, you are assigning the “Storage blob Data Contributor” role, which has broad access; other more restrictive roles can be used as well, depending on your application.
RG_ID=$(az group show --resource-group ${RG_NAME} | jq -r ".id")
az role assignment create \
--assignee "${SERVICE_PRINCIPAL_ID}" \
--role "Storage blob Data Contributor" \
--scope "${RG_ID}/providers/Microsoft.Storage/storageAccounts/${STORAGE_ACCOUNT_NAME}"
When authenticating your component using Microsoft Entra ID, the accountKey
field is not required. Instead, please specify the required credentials in the component’s metadata (if any) according to the Authenticating to Azure document.
For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.azure.blobstorage
version: v1
metadata:
- name: accountName
value: "[your_account_name]"
- name: containerName
value: "[your_container_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
value : "[your_client_secret]"
Apply the configuration
In Kubernetes
To apply Azure Blob Storage state store to Kubernetes, use the kubectl
CLI:
kubectl apply -f azureblob.yaml
Running locally
To run locally, create a components
dir containing the YAML file and provide the path to the dapr run
command with the flag --resources-path
.
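For example, a local run could look like the following sketch (the app ID, app port, and launch command are placeholders):
dapr run --app-id myapp --app-port 3000 --resources-path ./components -- node app.js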
This state store creates a blob file in the container and puts raw state inside it.
For example, the following operation coming from service called myservice
:
curl -X POST http://localhost:3500/v1.0/state \
-H "Content-Type: application/json" \
-d '[
{
"key": "nihilus",
"value": "darth"
}
]'
This creates the blob file in the container with key
as filename and value
as the contents of file.
Concurrency
Azure Blob Storage state concurrency is achieved by using ETag
s according to the Azure Blob Storage documentation.
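For example, to use first-write concurrency through the Dapr state API, read the current ETag and pass it back on the write. The following sketch uses placeholders for the store name, key, and ETag value:
# The current ETag is returned in the "ETag" response header
curl -i http://localhost:3500/v1.0/state/<store_name>/nihilus
# Write back using that ETag with first-write concurrency
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{ "key": "nihilus", "value": "darth2", "etag": "<etag-from-read>", "options": { "concurrency": "first-write" } }]'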
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.4 - Azure Cosmos DB (SQL API)
Component format
To setup Azure Cosmos DB state store create a component of type state.azure.cosmosdb
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.azure.cosmosdb
version: v1
metadata:
- name: url
value: <REPLACE-WITH-URL>
- name: masterKey
value: <REPLACE-WITH-MASTER-KEY>
- name: database
value: <REPLACE-WITH-DATABASE>
- name: collection
value: <REPLACE-WITH-COLLECTION>
# Uncomment this if you wish to use Azure Cosmos DB as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
If you wish to use Cosmos DB as an actor store, append the following to the yaml.
- name: actorStateStore
value: "true"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
url | Y | The Cosmos DB url | "https://******.documents.azure.com:443/" . |
masterKey | Y* | The key to authenticate to the Cosmos DB account. Only required when not using Microsoft Entra ID authentication. | "key" |
database | Y | The name of the database | "db" |
collection | Y | The name of the collection (container) | "collection" |
actorStateStore | N | Consider this state store for actors. Defaults to "false" | "true", "false" |
Microsoft Entra ID authentication
The Azure Cosmos DB state store component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
You can read additional information for setting up Cosmos DB with Azure AD authentication in the section below.
Setup Azure Cosmos DB
Follow the instructions from the Azure documentation on how to create an Azure Cosmos DB account. The database and collection must be created in Cosmos DB before Dapr can use it.
Important: The partition key for the collection must be named /partitionKey
(note: this is case-sensitive).
In order to setup Cosmos DB as a state store, you need the following properties:
- URL: the Cosmos DB url. for example:
https://******.documents.azure.com:443/
- Master Key: The key to authenticate to the Cosmos DB account. Skip this if using Microsoft Entra ID authentication.
- Database: The name of the database
- Collection: The name of the collection (or container)
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to override the default TTL on the Cosmos DB container, indicating when the data should be considered “expired”. Note that this value only takes effect if the container’s DefaultTimeToLive
field has a non-NULL value. See the CosmosDB documentation for more information.
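For example, the following sketch stores a record that expires after two minutes (the store name is a placeholder):
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{ "key": "session-123", "value": "temporary data", "metadata": { "ttlInSeconds": "120" } }]'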
Best Practices for Production Use
Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the Cosmos DB documentation)
Therefore several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:
- Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by scoping your components to specific applications.
- Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
- Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
- Increase the
initTimeout
value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is 5s
and should be increased. When using Kubernetes, increasing this value may also require an update to your Readiness and Liveness probes.
spec:
type: state.azure.cosmosdb
version: v1
initTimeout: 5m
metadata:
Data format
To use the Cosmos DB state store, your data must be sent to Dapr in JSON-serialized format. Having it just JSON serializable will not work.
If you are using the Dapr SDKs (for example the .NET SDK), the SDK automatically serializes your data to JSON.
If you want to invoke Dapr’s HTTP endpoint directly, take a look at the examples (using curl) in the Partition keys section below.
Partition keys
For non-actor state operations, the Azure Cosmos DB state store will use the key
property provided in the requests to the Dapr API to determine the Cosmos DB partition key. This can be overridden by specifying a metadata field in the request with a key of partitionKey
and a value of the desired partition.
The following operation uses nihilus
as the partition key value sent to Cosmos DB:
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
-H "Content-Type: application/json" \
-d '[
{
"key": "nihilus",
"value": "darth"
}
]'
For non-actor state operations, if you want to control the Cosmos DB partition, you can specify it in metadata. Reusing the example above, here’s how to put it under the mypartition
partition
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
-H "Content-Type: application/json" \
-d '[
{
"key": "nihilus",
"value": "darth",
"metadata": {
"partitionKey": "mypartition"
}
}
]'
For actor state operations, the partition key is generated by Dapr using the appId
, the actor type, and the actor id, such that data for the same actor always ends up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in Cosmos DB the items in a transaction must be on the same partition.
Setting up Cosmos DB for authenticating with Microsoft Entra ID
When using the Dapr Cosmos DB state store and authenticating with Microsoft Entra ID, you need to perform a few additional steps to set up your environment.
Prerequisites:
- You need a Service Principal created as per the instructions in the authenticating to Azure page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for
azureClientId
in the metadata). - Azure CLI
- jq
- The scripts below are optimized for a bash or zsh shell
Granting your Microsoft Entra ID application access to Cosmos DB
You can find more information on the official documentation, including instructions to assign more granular permissions.
In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you’re going to use a built-in role, “Cosmos DB Built-in Data Contributor”, which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"
az cosmosdb sql role assignment create \
--account-name "$ACCOUNT_NAME" \
--resource-group "$RESOURCE_GROUP" \
--scope "/" \
--principal-id "$PRINCIPAL_ID" \
--role-definition-id "$ROLE_ID"
Optimizations
Optimizing Cosmos DB for bulk operation write performance
If you are building a system that only ever reads data from Cosmos DB via key (id
), which is the default Dapr behavior when using the state management API or actors, there are ways you can optimize Cosmos DB for improved write speeds. This is done by excluding all paths from indexing. By default, Cosmos DB indexes all fields inside of a document. On systems that are write-heavy and run little-to-no queries on values within a document, this indexing policy slows down the time it takes to write or update a document in Cosmos DB. This is exacerbated in high-volume systems.
For example, the default Terraform definition for a Cosmos SQL container indexing reads as follows:
indexing_policy {
indexing_mode = "consistent"
included_path {
path = "/*"
}
}
It is possible to force Cosmos DB to only index the id
and partitionKey
fields by excluding all other fields from indexing. This can be done by updating the above to read as follows:
indexing_policy {
# This could also be set to "none" if you are using the container purely as a key-value store. This may be applicable if your container is only going to be used as a distributed cache.
indexing_mode = "consistent"
# Note that included_path has been replaced with excluded_path
excluded_path {
path = "/*"
}
}
Note
This optimization comes at the cost of queries against fields inside of documents within the state store. This would likely impact any stored procedures or SQL queries defined and executed. It is recommended that this optimization be applied only if you are using the Dapr State Management API or Dapr Actors to interact with Cosmos DB.
Optimizing Cosmos DB for cost savings
If you intend to use Cosmos DB only as a key-value pair, it may be in your interest to consider converting your state object to JSON and compressing it before persisting it to state, and subsequently decompressing it when reading it out of state. This is because Cosmos DB bills your usage based on the maximum number of RU/s used in a given time period (typically each hour). Furthermore, RU usage is calculated as 1 RU per 1 KB of data you read or write. Compression helps by reducing the size of the data stored in Cosmos DB and subsequently reducing RU usage.
This savings is particularly significant for Dapr actors. While the Dapr State Management API does a base64 encoding of your object before saving, Dapr actor state is saved as raw, formatted JSON. This means multiple lines with indentations for formatting. Compressing can significantly reduce the size of actor state objects. For example, if you have an actor state object that is 75KB in size when the actor is hydrated, you will use 75 RU/s to read that object out of state. If you then modify the state object and it grows to 100KB, you will use 100 RU/s to write that object to Cosmos DB, totalling 175 RU/s for the I/O operation. If your actors are concurrently handling 1,000 requests per second, you will need at least 175,000 RU/s to meet that load. With effective compression, the size reduction can be in the region of 90%, which means you will only need in the region of 17,500 RU/s to meet the load.
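As an illustration only, a shell sketch of this approach using gzip and base64 (jq is assumed for extracting the stored string on read; the store name and file names are placeholders):
# Compress and base64-encode the object, then save it via the Dapr state API
PAYLOAD=$(gzip -c state.json | base64 | tr -d '\n')
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{ "key": "large-object", "value": "'"${PAYLOAD}"'" }]'
# Read it back, then decode and decompress
curl -s http://localhost:3500/v1.0/state/<store_name>/large-object | jq -r . | base64 --decode | gunzip > state.out.json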
Note
This particular optimization only makes sense if you are saving large objects to state. The performance and memory tradeoff for performing the compression and decompression on either end need to make sense for your use case. Furthermore, once the data is saved to state, it is not human readable, nor is it queryable. You should only adopt this optimization if you are saving large state objects as key-value pairs.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.5 - Azure Table Storage
Component format
To setup Azure Tablestorage state store create a component of type state.azure.tablestorage
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.azure.tablestorage
version: v1
metadata:
- name: accountName
value: <REPLACE-WITH-ACCOUNT-NAME>
- name: accountKey
value: <REPLACE-WITH-ACCOUNT-KEY>
- name: tableName
value: <REPLACE-WITH-TABLE-NAME>
# - name: cosmosDbMode
# value: false
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
accountName | Y | The storage account name | "mystorageaccount" |
accountKey | Y | Primary or secondary storage key | "key" |
tableName | Y | The name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist | "table" |
cosmosDbMode | N | If enabled, connects to Cosmos DB Table API instead of Azure Tables (Storage Accounts). Defaults to false. | "false" |
serviceURL | N | The full storage service endpoint URL. Useful for Azure environments other than public cloud. | "https://mystorageaccount.table.core.windows.net/" |
skipCreateTable | N | Skips the check for and, if necessary, creation of the specified storage table. This is useful when using active directory authentication with minimal privileges. Defaults to false. | "true" |
Microsoft Entra ID authentication
The Azure Cosmos DB state store component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
You can read additional information for setting up Cosmos DB with Microsoft Entra ID authentication in the section below.
Option 1: Setup Azure Table Storage
Follow the instructions from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a table for Dapr to use, you can do so beforehand. However, Table Storage state provider will create one for you automatically if it doesn’t exist, unless the skipCreateTable
option is enabled.
In order to setup Azure Table Storage as a state store, you will need the following properties:
- AccountName: The storage account name. For example: mystorageaccount.
- AccountKey: Primary or secondary storage key. Skip this if using Microsoft Entra ID authentication.
- TableName: The name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist, unless the
skipCreateTable
option is enabled. - cosmosDbMode: Set this to
false
to connect to Azure Tables.
Option 2: Setup Azure Cosmos DB Table API
Follow the instructions from the Azure documentation on creating a Cosmos DB account with Table API.
If you wish to create a table for Dapr to use, you can do so beforehand. However, Table Storage state provider will create one for you automatically if it doesn’t exist, unless the skipCreateTable
option is enabled.
In order to setup Azure Cosmos DB Table API as a state store, you will need the following properties:
- AccountName: The Cosmos DB account name. For example: mycosmosaccount.
- AccountKey: The Cosmos DB master key. Skip this if using Microsoft Entra ID authentication.
- TableName: The name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist, unless the
skipCreateTable
option is enabled. - cosmosDbMode: Set this to
true
to connect to Azure Cosmos DB Table API.
Partitioning
The Azure Table Storage state store uses the key
property provided in the requests to the Dapr API to determine the row key
. Service Name is used for partition key
. This provides best performance, as each service type stores state in its own table partition.
This state store creates a column called Value
in the table storage and puts raw state inside it.
For example, the following operation coming from service called myservice
curl -X POST http://localhost:3500/v1.0/state \
-H "Content-Type: application/json" \
-d '[
{
"key": "nihilus",
"value": "darth"
}
]'
will create the following record in a table:
PartitionKey | RowKey | Value |
---|---|---|
myservice | nihilus | darth |
Concurrency
Azure Table Storage state concurrency is achieved by using ETag
s according to the official documentation.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.6 - Cassandra
Component format
To setup Cassandra state store create a component of type state.cassandra
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.cassandra
version: v1
metadata:
- name: hosts
value: <REPLACE-WITH-COMMA-DELIMITED-HOSTS> # Required. Example: cassandra.cassandra.svc.cluster.local
- name: username
value: <REPLACE-WITH-USERNAME> # Optional. default: ""
- name: password
value: <REPLACE-WITH-PASSWORD> # Optional. default: ""
- name: consistency
value: <REPLACE-WITH-CONSISTENCY> # Optional. default: "All"
- name: table
value: <REPLACE-WITH-TABLE> # Optional. default: "items"
- name: keyspace
value: <REPLACE-WITH-KEYSPACE> # Optional. default: "dapr"
- name: protoVersion
value: <REPLACE-WITH-PROTO-VERSION> # Optional. default: "4"
- name: replicationFactor
value: <REPLACE-WITH-REPLICATION-FACTOR> # Optional. default: "1"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
hosts | Y | Comma separated value of the hosts | "cassandra.cassandra.svc.cluster.local" . |
port | N | Port for communication. Default "9042" | "9042" |
username | Y | The username of database user. No default | "user" |
password | Y | The password for the user | "password" |
consistency | N | The consistency values | "All", "Quorum" |
table | N | Table name. Defaults to "items" | "items", "tab" |
keyspace | N | The cassandra keyspace to use. Defaults to "dapr" | "dapr" |
protoVersion | N | The proto version for the client. Defaults to "4" | "3", "4" |
replicationFactor | N | The replication factor for the calls. Defaults to "1" | "3" |
Setup Cassandra
You can run Cassandra locally with the Datastax Docker image:
docker run -e DS_LICENSE=accept --memory 4g --name my-dse -d datastax/dse-server -g -s -k
You can then interact with the server using localhost:9042
.
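The component uses the keyspace and table named in the metadata (dapr and items by default). If you prefer to pre-create the default keyspace yourself, the following is a cqlsh sketch against the container started above; the replication settings are an assumption suitable only for local use:
docker exec -it my-dse cqlsh -e "CREATE KEYSPACE IF NOT EXISTS dapr WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"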
The easiest way to install Cassandra on Kubernetes is by using the Helm chart:
kubectl create namespace cassandra
helm install cassandra incubator/cassandra --namespace cassandra
This installs Cassandra into the cassandra
namespace by default.
To interact with Cassandra, find the service with: kubectl get svc -n cassandra
.
For example, if installing using the example above, the Cassandra DNS would be:
cassandra.cassandra.svc.cluster.local
Apache Ignite
Apache Ignite’s integration with Cassandra as a caching layer is not supported by this component.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.7 - Cloudflare Workers KV
Create a Dapr component
To setup a Cloudflare Workers KV state store, create a component of type state.cloudflare.workerskv
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.cloudflare.workerskv
version: v1
# Increase the initTimeout if Dapr is managing the Worker for you
initTimeout: "120s"
metadata:
# ID of the Workers KV namespace (required)
- name: kvNamespaceID
value: ""
# Name of the Worker (required)
- name: workerName
value: ""
# PEM-encoded private Ed25519 key (required)
- name: key
value: |
-----BEGIN PRIVATE KEY-----
MC4CAQ...
-----END PRIVATE KEY-----
# Cloudflare account ID (required to have Dapr manage the Worker)
- name: cfAccountID
value: ""
# API token for Cloudflare (required to have Dapr manage the Worker)
- name: cfAPIToken
value: ""
# URL of the Worker (required if the Worker has been pre-created outside of Dapr)
- name: workerUrl
value: ""
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
kvNamespaceID | Y | ID of the pre-created Workers KV namespace | "123456789abcdef8b5588f3d134f74ac" |
workerName | Y | Name of the Worker to connect to | "mydaprkv" |
key | Y | Ed25519 private key, PEM-encoded | See example above |
cfAccountID | Y/N | Cloudflare account ID. Required to have Dapr manage the Worker. | "456789abcdef8b5588f3d134f74acdef" |
cfAPIToken | Y/N | API token for Cloudflare. Required to have Dapr manage the Worker. | "secret-key" |
workerUrl | Y/N | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr. | "https://mydaprkv.mydomain.workers.dev" |
When you configure Dapr to create your Worker for you, you may need to set a longer value for the
initTimeout
property of the component, to allow enough time for the Worker script to be deployed. For example:initTimeout: "120s"
Create a Workers KV namespace
To use this component, you must have a Workers KV namespace created in your Cloudflare account.
You can create a new Workers KV namespace in one of two ways:
-
Using the Cloudflare dashboard
Make note of the “ID” of the Workers KV namespace that you can see in the dashboard. This is a hex string (for example123456789abcdef8b5588f3d134f74ac
)–not the name you used when you created it! -
Using the Wrangler CLI:
# Authenticate if needed with `npx wrangler login` first
wrangler kv:namespace create <NAME>
The output contains the ID of the namespace, for example:
{ binding = "<NAME>", id = "123456789abcdef8b5588f3d134f74ac" }
Configuring the Worker
Because Cloudflare Workers KV namespaces can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Workers KV storage.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on workerd.
Important
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Workers KV state store components, and do not use the same Worker script for different Cloudflare components in Dapr (e.g. the Workers KV state store and the Queues binding).
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
workerName
: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the “workers.dev” domain configured for your Cloudflare account ismydomain.workers.dev
and you setworkerName
tomydaprkv
, the Worker that Dapr deploys will be available athttps://mydaprkv.mydomain.workers.dev
.cfAccountID
: ID of your Cloudflare account. You can find this in your browser’s URL bar after logging into the Cloudflare dashboard, with the ID being the hex string right afterdash.cloudflare.com
. For example, if the URL ishttps://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef
, the value forcfAccountID
is456789abcdef8b5588f3d134f74acdef
.cfAPIToken
: API token with permission to create and edit Workers and Workers KV namespaces. You can create it from the “API Tokens” page in the “My Profile” section in the Cloudflare dashboard:- Click on “Create token”.
- Select the “Edit Cloudflare Workers” template.
- Follow the on-screen instructions to generate a new API token.
When Dapr is configured to manage the Worker for you, it checks at startup that the Worker exists and is up-to-date. If the Worker doesn’t exist, or if it’s using an outdated version, Dapr will create or upgrade it for you automatically.
If you’d rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
- Create a new folder where you’ll place the source code of the Worker, for example:
daprworker
. - If you haven’t already, authenticate with Wrangler (the Cloudflare Workers CLI) using:
npx wrangler login
. - Inside the newly-created folder, create a new
wrangler.toml
file with the contents below, filling in the missing information as appropriate:
# Name of your Worker, for example "mydaprkv"
name = ""
# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"
[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprkv".
TOKEN_AUDIENCE = ""
[[kv_namespaces]]
# Set the next two values to the ID (not name) of your KV namespace, for example "123456789abcdef8b5588f3d134f74ac".
# Note that they will both be set to the same value.
binding = ""
id = ""
Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the public part of the key when deploying a Worker!
- Copy the (pre-compiled and minified) code of the Worker in the
worker.js
file. You can do that with this command:
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-1.15"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
- Deploy the Worker using Wrangler:
npx wrangler publish
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
workerName
: Name of the Worker script. This is the value you set in thename
property in thewrangler.toml
file.workerUrl
: URL of the deployed Worker. Thenpx wrangler command
will show the full URL to you, for examplehttps://mydaprkv.mydomain.workers.dev
.
Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Worker KV namespace). These include industry-standard measures such as:
- All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
- All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
- The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you’re using an older version of OpenSSL.
Note for Mac users: on macOS, the “openssl” binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn’t support Ed25519 keys. If you’re using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using
brew install openssl@3
then replacingopenssl
in the commands below with$(brew --prefix)/opt/openssl@3/bin/openssl
.
You can generate a new Ed25519 key pair with OpenSSL using:
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
On macOS, using openssl@3 from Homebrew:
$(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
$(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem
If you don’t have the step CLI already, install it following the official instructions.
Next, you can generate a new Ed25519 key pair with the step CLI using:
step crypto keypair \
public.pem private.pem \
--kty OKP --curve Ed25519 \
--insecure --no-password
Regardless of how you generated your key pair, with the instructions above you’ll have two files:
private.pem
contains the private part of the key; use the contents of this file for thekey
property of the component’s metadata.public.pem
contains the public part of the key, which you’ll need only if you’re deploying a Worker manually (as per the instructions in the previous section).
Warning
Protect the private part of your key and treat it as a secret value!
Additional notes
- Note that Cloudflare Workers KV doesn’t guarantee strong data consistency. Although changes are visible immediately (usually) for requests made to the same Cloudflare datacenter, it can take a certain amount of time (usually up to one minute) for changes to be replicated across all Cloudflare regions.
- This state store supports TTLs with Dapr, but the minimum value for the TTL is 1 minute.
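For example, the following sketch saves a key with the minimum allowed TTL of one minute (the store name is a placeholder):
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{ "key": "mykey", "value": "myvalue", "metadata": { "ttlInSeconds": "60" } }]'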
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
- Documentation for Cloudflare Workers KV
5.3.8 - CockroachDB
Create a Dapr component
Create a file called cockroachdb.yaml
, paste the following and replace the <CONNECTION STRING>
value with your connection string. The connection string for CockroachDB follows the same standard as PostgreSQL connection strings. For example, "host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"
. See the CockroachDB documentation on database connections for information on how to define a connection string.
If you want to also configure CockroachDB to store actors, add the actorStateStore
option as in the example below.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.cockroachdb
version: v1
metadata:
# Connection string
- name: connectionString
value: "<CONNECTION STRING>"
# Timeout for database operations, in seconds (optional)
#- name: timeoutInSeconds
# value: 20
# Name of the table where to store the state (optional)
#- name: tableName
# value: "state"
# Name of the table where to store metadata used by Dapr (optional)
#- name: metadataTableName
# value: "dapr_metadata"
# Cleanup interval in seconds, to remove expired rows (optional)
#- name: cleanupIntervalInSeconds
# value: 3600
# Max idle time for connections before they're closed (optional)
#- name: connectionMaxIdleTime
# value: 0
# Uncomment this if you wish to use CockroachDB as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString | Y | The connection string for CockroachDB | "host=localhost user=root port=26257 connect_timeout=10 database=dapr_test" |
timeoutInSeconds | N | Timeout, in seconds, for all database operations. Defaults to 20 | 30 |
tableName | N | Name of the table where the data is stored. Defaults to state. Can optionally have the schema name as prefix, such as public.state | "state", "public.state" |
metadataTableName | N | Name of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata. Can optionally have the schema name as prefix, such as public.dapr_metadata | "dapr_metadata", "public.dapr_metadata" |
cleanupIntervalInSeconds | N | Interval, in seconds, to clean up rows with an expired TTL. Default: 3600 (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | 1800, -1 |
connectionMaxIdleTime | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose. | "5m" |
actorStateStore | N | Consider this state store for actors. Defaults to "false" | "true", "false" |
Setup CockroachDB
-
Run an instance of CockroachDB. You can run a local instance of CockroachDB in Docker CE with the following command:
This example does not describe a production configuration because it sets up a single-node cluster; it’s only recommended for local environments.
docker run --name roach1 -p 26257:26257 cockroachdb/cockroach:v21.2.3 start-single-node --insecure
-
Create a database for state data.
To create a new database in CockroachDB, run the following SQL command inside container:
docker exec -it roach1 ./cockroach sql --insecure -e 'create database dapr_test'
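You can verify the database was created using the same container (a quick sketch):
docker exec -it roach1 ./cockroach sql --insecure -e 'SHOW DATABASES;'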
The easiest way to install CockroachDB on Kubernetes is by using the CockroachDB Operator.
Advanced
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to indicate after how many seconds the data should be considered “expired”.
Because CockroachDB doesn’t have built-in support for TTLs, you implement this in Dapr by adding a column in the state table indicating when the data should be considered “expired”. “Expired” records are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
You can set the interval for the deletion of expired records with the cleanupIntervalInSeconds
metadata property, which defaults to 3600 seconds (that is, 1 hour).
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting
cleanupIntervalInSeconds
to a smaller value - for example,300
(300 seconds, or 5 minutes). - If you do not plan to use TTLs with Dapr and the CockroachDB state store, you should consider setting
cleanupIntervalInSeconds
to a value <= 0 (e.g.0
or-1
) to disable the periodic cleanup and reduce the load on the database.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.9 - Coherence
Component format
To setup Coherence state store, create a component of type state.coherence
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.coherence
version: v1
metadata:
- name: serverAddress
value: <REPLACE-WITH-GRPC-PROXY-HOST-AND-PORT> # Required. Example: "my-cluster-grpc:1408"
- name: tlsEnabled
value: <REPLACE-WITH-BOOLEAN> # Optional
- name: tlsClientCertPath
value: <REPLACE-WITH-PATH> # Optional
- name: tlsClientKey
value: <REPLACE-WITH-PATH> # Optional
- name: tlsCertsPath
value: <REPLACE-WITH-PATH> # Optional
- name: ignoreInvalidCerts
value: <REPLACE-WITH-BOOLEAN> # Optional
- name: scopeName
value: <REPLACE-WITH-SCOPE> # Optional
- name: requestTimeout
value: <REPLACE-WITH-REQUEST-TIMEOUT> # Optional
- name: nearCacheTTL
value: <REPLACE-WITH-NEAR-CACHE-TTL> # Optional
- name: nearCacheUnits
value: <REPLACE-WITH-NEAR-CACHE-UNITS> # Optional
- name: nearCacheMemory
value: <REPLACE-WITH-NEAR-CACHE-MEMORY> # Optional
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
serverAddress | Y | Comma delimited endpoints | "my-cluster-grpc:1408" |
tlsEnabled | N | Indicates if TLS should be enabled. Defaults to false | "true" |
tlsClientCertPath | N | Client certificate path for Coherence. Defaults to “”. Can be secretKeyRef to use a secret reference. |
"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..." |
tlsClientKey | N | Client key for Coherence. Defaults to “”. Can be secretKeyRef to use a secret reference. |
"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..." |
tlsCertsPath | N | Additional certificates for Coherence. Defaults to “”. Can be secretKeyRef to use a secret reference. |
"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..." |
ignoreInvalidCerts | N | Indicates whether to ignore self-signed certificates; for testing only, not to be used in production. Defaults to false | "false" |
scopeName | N | A scope name to use for the internal cache. Defaults to "" | "my-scope" |
requestTimeout | N | Timeout for calls to the cluster. Defaults to "30s" | "15s" |
nearCacheTTL | N | If non-zero a near cache is used and the TTL of the near cache is this value. Defaults to 0s | "60s" |
nearCacheUnits | N | If non-zero a near cache is used and the maximum size of the near cache is this value in units. Defaults to 0 | "1000" |
nearCacheMemory | N | If non-zero a near cache is used and the maximum size of the near cache is this value in bytes. Defaults to 0 | "4096" |
About Using Near Cache TTL
The Coherence state store allows you to specify a near cache to cache frequently accessed data when using the Dapr client.
When you access data using Get(ctx context.Context, req *GetRequest)
, returned entries are stored in the near cache and
subsequent data access for keys in the near cache is almost instant, where without a near cache each Get()
operation results in a network call.
When using the near cache option, Coherence automatically adds a MapListener to the internal cache which listens on all cache events and updates or invalidates entries in the near cache that have been changed or removed on the server.
To manage the amount of memory used by the near cache, the following options are supported when creating one:
- nearCacheTTL – objects expired after time in near cache, for example 5 minutes
- nearCacheUnits – maximum number of cache entries in the near cache
- nearCacheMemory – maximum amount of memory used by cache entries
You can specify either High-Units or Memory and in either case, optionally, a TTL.
The minimum expiry time for a near cache entry is 1/4 second. This is to ensure that expiry of elements is as efficient as possible. You will receive an error if you try to set the TTL to a lower value.
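For example, the following metadata fragment (a sketch built from the fields in the table above) enables a near cache limited to 1000 entries whose entries expire after 60 seconds:
- name: serverAddress
  value: "my-cluster-grpc:1408"
- name: nearCacheTTL
  value: "60s"
- name: nearCacheUnits
  value: "1000"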
Setup Coherence
Run Coherence locally using Docker:
docker run -d -p 1408:1408 -p 30000:30000 ghcr.io/oracle/coherence-ce:25.03.1
You can then interact with the server using localhost:1408
.
The easiest way to install Coherence on Kubernetes is by using the Coherence Operator:
Install the Operator:
kubectl apply -f https://github.com/oracle/coherence-operator/releases/download/v3.5.2/coherence-operator.yaml
Note: Change v3.5.2 to the latest release.
This installs the Coherence operator into the coherence
namespace.
Create a Coherence Cluster yaml my-cluster.yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
name: my-cluster
spec:
coherence:
management:
enabled: true
ports:
- name: management
- name: grpc
port: 1408
Apply the yaml
kubectl apply -f my-cluster.yaml
To interact with Coherence, find the service with: kubectl get svc
and look for the service named ‘*grpc’.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9m
my-cluster-grpc ClusterIP 10.96.225.43 <none> 1408/TCP 7m3s
my-cluster-management ClusterIP 10.96.41.6 <none> 30000/TCP 7m3s
my-cluster-sts ClusterIP None <none> 7/TCP,7575/TCP,7574/TCP,6676/TCP,30000/TCP,1408/TCP 7m3s
my-cluster-wka ClusterIP None <none> 7/TCP,7575/TCP,7574/TCP,6676/TCP 7m3s
For example, if installing using the example above, the Coherence host address would be:
my-cluster-grpc
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
- Coherence CE on GitHub
- Coherence Community - All things Coherence
5.3.10 - Couchbase
Component format
To set up Couchbase state store, create a component of type state.couchbase
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.couchbase
version: v1
metadata:
- name: couchbaseURL
value: <REPLACE-WITH-URL> # Required. Example: "http://localhost:8091"
- name: username
value: <REPLACE-WITH-USERNAME> # Required.
- name: password
value: <REPLACE-WITH-PASSWORD> # Required.
- name: bucketName
value: <REPLACE-WITH-BUCKET> # Required.
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
couchbaseURL | Y | The URL of the Couchbase server | "http://localhost:8091" |
username | Y | The username for the database | "user" |
password | Y | The password for access | "password" |
bucketName | Y | The bucket name to write to | "bucket" |
Setup Couchbase
You can run Couchbase locally using Docker:
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 couchbase
You can then interact with the server using localhost:8091
and start the server setup.
The easiest way to install Couchbase on Kubernetes is by using the Helm chart:
helm repo add couchbase https://couchbase-partners.github.io/helm-charts/
helm install couchbase/couchbase-operator
helm install couchbase/couchbase-cluster
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.11 - Etcd
Component format
To set up an Etcd state store, create a component of type state.etcd
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.etcd
# Supports v1 and v2. Users should always use v2 by default. There is no
# migration path from v1 to v2, see `versioning` below.
version: v2
metadata:
- name: endpoints
value: <CONNECTION STRING> # Required. Example: 192.168.0.1:2379,192.168.0.2:2379,192.168.0.3:2379
- name: keyPrefixPath
value: <KEY PREFIX STRING> # Optional. default: "". Example: "dapr"
- name: tlsEnable
value: <ENABLE TLS> # Optional. Example: "false"
- name: ca
value: <CA> # Optional. Required if tlsEnable is `true`.
- name: cert
value: <CERT> # Optional. Required if tlsEnable is `true`.
- name: key
value: <KEY> # Optional. Required if tlsEnable is `true`.
# Uncomment this if you wish to use Etcd as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Versioning
Dapr has 2 versions of the Etcd state store component: v1
and v2
. It is recommended to use v2
, as v1
is deprecated.
While v1
and v2
have the same metadata fields, v1
causes data inconsistencies in apps when using Actor TTLs from Dapr v1.12.
v1
and v2
are incompatible with no data migration path for v1
to v2
on an existing active Etcd cluster and keyPrefixPath
.
If you are using v1
, you should continue to use v1
until you create a new Etcd cluster or use a different keyPrefixPath
.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
endpoints |
Y | Connection string to the Etcd cluster | "192.168.0.1:2379,192.168.0.2:2379,192.168.0.3:2379" |
keyPrefixPath |
N | Key prefix path in Etcd. Default is no prefix. | "dapr" |
tlsEnable |
N | Whether to enable TLS for connecting to Etcd. | "false" |
ca |
N | CA certificate for connecting to Etcd, PEM-encoded. Can be secretKeyRef to use a secret reference. |
"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..." |
cert |
N | TLS certificate for connecting to Etcd, PEM-encoded. Can be secretKeyRef to use a secret reference. |
"-----BEGIN CERTIFICATE-----\nMIIDUTCC..." |
key |
N | TLS key for connecting to Etcd, PEM-encoded. Can be secretKeyRef to use a secret reference. |
"-----BEGIN PRIVATE KEY-----\nMIIEpAIB..." |
actorStateStore |
N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
Setup Etcd
You can run Etcd database locally using Docker Compose. Create a new file called docker-compose.yml
and add the following contents as an example:
version: '2'
services:
etcd:
image: gcr.io/etcd-development/etcd:v3.4.20
ports:
- "2379:2379"
command: etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379
Save the docker-compose.yml
file and run the following command to start the Etcd server:
docker-compose up -d
This starts the Etcd server in the background and exposes the default Etcd port of 2379
. You can then interact with the server using the etcdctl
command-line client on localhost:2379
. For example:
etcdctl --endpoints=localhost:2379 put mykey myvalue
Use Helm to quickly create an Etcd instance in your Kubernetes cluster. This approach requires Installing Helm.
Follow the Bitnami instructions to get started with setting up Etcd in Kubernetes.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.12 - GCP Firestore (Datastore mode)
Component format
To set up GCP Firestore state store, create a component of type state.gcp.firestore
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.gcp.firestore
version: v1
metadata:
- name: project_id
value: <REPLACE-WITH-PROJECT-ID> # Required.
- name: type
value: <REPLACE-WITH-CREDENTIALS-TYPE> # Required.
- name: endpoint # Optional.
value: "http://localhost:8432"
- name: private_key_id
value: <REPLACE-WITH-PRIVATE-KEY-ID> # Optional.
- name: private_key
value: <REPLACE-WITH-PRIVATE-KEY> # Optional, but Required if `private_key_id` is specified.
- name: client_email
value: <REPLACE-WITH-CLIENT-EMAIL> # Optional, but Required if `private_key_id` is specified.
- name: client_id
value: <REPLACE-WITH-CLIENT-ID> # Optional, but Required if `private_key_id` is specified.
- name: auth_uri
value: <REPLACE-WITH-AUTH-URI> # Optional.
- name: token_uri
value: <REPLACE-WITH-TOKEN-URI> # Optional.
- name: auth_provider_x509_cert_url
value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Optional.
- name: client_x509_cert_url
value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Optional.
- name: entity_kind
value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
- name: noindex
value: <REPLACE-WITH-BOOLEAN> # Optional. default: "false"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
project_id | Y | The ID of the GCP project to use | "project-id" |
type | Y | The credentials type | "service_account" |
endpoint | N | GCP endpoint for the component to use. Only used for local development with (for example) GCP Datastore Emulator. The endpoint is unnecessary when running against the GCP production API. |
"localhost:8432" |
private_key_id | N | The ID of the private key to use | "private-key-id" |
private_key | N | If using explicit credentials, this field should contain the private_key field from the service account JSON |
-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B |
client_email | N | The email address for the client | "example@example.com" |
client_id | N | The client id value to use for authentication | "client-id" |
auth_uri | N | The authentication URI to use | "https://accounts.google.com/o/oauth2/auth" |
token_uri | N | The token URI to query for Auth token | "https://oauth2.googleapis.com/token" |
auth_provider_x509_cert_url | N | The auth provider certificate URL | "https://www.googleapis.com/oauth2/v1/certs" |
client_x509_cert_url | N | The client certificate URL | "https://www.googleapis.com/robot/v1/metadata/x509/x" |
entity_kind | N | The entity kind name in Firestore. Defaults to "DaprState" |
"DaprState" |
noindex | N | Whether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to "false" |
"true" |
GCP Credentials
Since the GCP Firestore component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained in the Authenticate to GCP Cloud services using client libraries guide.
Setup GCP Firestore
You can use the GCP Datastore emulator to run locally using the instructions here.
You can then interact with the server using http://localhost:8432
.
Follow the instructions here to get started with setting up Firestore in Google Cloud.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.13 - HashiCorp Consul
Component format
To set up HashiCorp Consul state store, create a component of type state.consul
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.consul
version: v1
metadata:
- name: datacenter
value: <REPLACE-WITH-DATA-CENTER> # Required. Example: dc1
- name: httpAddr
value: <REPLACE-WITH-CONSUL-HTTP-ADDRESS> # Required. Example: "consul.default.svc.cluster.local:8500"
- name: aclToken
value: <REPLACE-WITH-ACL-TOKEN> # Optional. default: ""
- name: scheme
value: <REPLACE-WITH-SCHEME> # Optional. default: "http"
- name: keyPrefixPath
value: <REPLACE-WITH-TABLE> # Optional. default: ""
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
datacenter | Y | Datacenter to use | "dc1" |
httpAddr | Y | Address of the Consul server | "consul.default.svc.cluster.local:8500" |
aclToken | N | Per Request ACL Token. Default is "" |
"token" |
scheme | N | Scheme is the URI scheme for the Consul server. Default is "http" |
"http" |
keyPrefixPath | N | Key prefix path in Consul. Default is "" |
"dapr" |
Setup HashiCorp Consul
You can run Consul locally using Docker:
docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
You can then interact with the server using localhost:8500
.
The easiest way to install Consul on Kubernetes is by using the Helm chart:
helm install consul stable/consul
This installs Consul into the default
namespace.
To interact with Consul, find the service with: kubectl get svc consul
.
For example, if installing using the example above, the Consul host address would be:
consul.default.svc.cluster.local:8500
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.14 - Hazelcast
Create a Dapr component
To set up Hazelcast state store, create a component of type state.hazelcast
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.hazelcast
version: v1
metadata:
- name: hazelcastServers
value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of servers. Example: "hazelcast:3000,hazelcast2:3000"
- name: hazelcastMap
value: <REPLACE-WITH-MAP> # Required. Hazelcast map configuration.
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
hazelcastServers | Y | A comma delimited string of servers | "hazelcast:3000,hazelcast2:3000" |
hazelcastMap | Y | Hazelcast Map configuration | "foo-map" |
Setup Hazelcast
You can run Hazelcast locally using Docker:
docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" -p 5701:5701 hazelcast/hazelcast
You can then interact with the server using the 127.0.0.1:5701
.
The easiest way to install Hazelcast on Kubernetes is by using the Helm chart.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.15 - In-memory
The in-memory state store component maintains state in the Dapr sidecar’s memory. This is primarily meant for development purposes. State is not replicated across multiple sidecars and is lost when the Dapr sidecar is restarted.
Component format
To set up in-memory state store, create a component of type state.in-memory
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.in-memory
version: v1
metadata:
# Uncomment this if you wish to use In-memory as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Note: While in-memory does not require any specific metadata for the component to work,
spec.metadata
is a required field.
Related links
5.3.16 - JetStream KV
Component format
To set up a JetStream KV state store, create a component of type state.jetstream
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.jetstream
version: v1
metadata:
- name: natsURL
value: "nats://localhost:4222"
- name: jwt
value: "eyJhbGciOiJ...6yJV_adQssw5c" # Optional. Used for decentralized JWT authentication
- name: seedKey
value: "SUACS34K232O...5Z3POU7BNIL4Y" # Optional. Used for decentralized JWT authentication
- name: bucket
value: "<bucketName>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
natsURL | Y | NATS server address URL | "nats://localhost:4222" |
jwt | N | NATS decentralized authentication JWT | "eyJhbGciOiJ...6yJV_adQssw5c" |
seedKey | N | NATS decentralized authentication seed key | "SUACS34K232O...5Z3POU7BNIL4Y" |
bucket | Y | JetStream KV bucket name | "<bucketName>" |
Create a NATS server
You can run a NATS Server with JetStream enabled locally using Docker:
docker run -d -p 4222:4222 nats:latest -js
You can then interact with the server using the client port: localhost:4222
.
Install NATS JetStream on Kubernetes by using Helm:
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
This installs a single NATS server into the default
namespace. To interact
with NATS, find the service with: kubectl get svc my-nats
.
Creating a JetStream KV bucket
It is necessary to create a key value bucket; this can easily be done via the NATS CLI.
nats kv add <bucketName>
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
- JetStream Documentation
- Key Value Store Documentation
- NATS CLI
5.3.17 - Memcached
Component format
To set up Memcached state store, create a component of type state.memcached
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.memcached
version: v1
metadata:
- name: hosts
value: <REPLACE-WITH-COMMA-DELIMITED-ENDPOINTS> # Required. Example: "memcached.default.svc.cluster.local:11211"
- name: maxIdleConnections
value: <REPLACE-WITH-MAX-IDLE-CONNECTIONS> # Optional. default: "2"
- name: timeout
value: <REPLACE-WITH-TIMEOUT> # Optional. default: "1000"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
hosts | Y | Comma delimited endpoints | "memcached.default.svc.cluster.local:11211" |
maxIdleConnections | N | The max number of idle connections. Defaults to "2" |
"3" |
timeout | N | The timeout for the calls in milliseconds. Defaults to "1000" |
"1000" |
Setup Memcached
You can run Memcached locally using Docker:
docker run --name my-memcache -d memcached
You can then interact with the server using localhost:11211
.
The easiest way to install Memcached on Kubernetes is by using the Helm chart:
helm install memcached stable/memcached
This installs Memcached into the default
namespace.
To interact with Memcached, find the service with: kubectl get svc memcached
.
For example, if installing using the example above, the Memcached host address would be:
memcached.default.svc.cluster.local:11211
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.18 - Microsoft SQL Server & Azure SQL
Component format
This state store component can be used with both Microsoft SQL Server and Azure SQL.
To set up this state store, create a component of type state.sqlserver
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.sqlserver
version: v1
metadata:
# Authenticate using SQL Server credentials
- name: connectionString
value: |
Server=myServerName\myInstanceName;Database=myDataBase;User Id=myUsername;Password=myPassword;
# Authenticate with Microsoft Entra ID (Azure SQL only)
# "useAzureAD" be set to "true"
- name: useAzureAD
value: true
# Connection string or URL of the Azure SQL database, optionally containing the database
- name: connectionString
value: |
sqlserver://myServerName.database.windows.net:1433?database=myDataBase
# Other optional fields (listing default values)
- name: tableName
value: "state"
- name: metadataTableName
value: "dapr_metadata"
- name: schema
value: "dbo"
- name: keyType
value: "string"
- name: keyLength
value: "200"
- name: indexedProperties
value: ""
- name: cleanupIntervalInSeconds
value: "3600"
# Uncomment this if you wish to use Microsoft SQL Server as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
If you wish to use SQL Server as an actor state store, append the following to the metadata:
- name: actorStateStore
value: "true"
Spec metadata fields
Authenticate using SQL Server credentials
The following metadata options are required to authenticate using SQL Server credentials. This is supported on both SQL Server and Azure SQL.
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | The connection string used to connect. If the connection string contains the database, it must already exist. Otherwise, if the database is omitted, a default database named “Dapr” is created. |
"Server=myServerName\myInstanceName;Database=myDataBase;User Id=myUsername;Password=myPassword;" |
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure SQL only. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field | Required | Details | Example |
---|---|---|---|
useAzureAD |
Y | Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID. |
"true" |
connectionString |
Y | The connection string or URL of the Azure SQL database, without credentials. If the connection string contains the database, it must already exist. Otherwise, if the database is omitted, a default database named “Dapr” is created. |
"sqlserver://myServerName.database.windows.net:1433?database=myDataBase" |
azureTenantId |
N | ID of the Microsoft Entra ID tenant | "cd4b2887-304c-47e1-b4d5-65447fdd542b" |
azureClientId |
N | Client ID (application ID) | "c7dd251f-811f-4ba2-a905-acd4d3f8f08b" |
azureClientSecret |
N | Client secret (application password) | "Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E" |
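For illustration, a sketch of the metadata section when authenticating with client credentials (a "service principal"); the IDs and secret below are placeholders, and with Managed Identity the client secret is not needed:
- name: useAzureAD
  value: "true"
- name: connectionString
  value: "sqlserver://myServerName.database.windows.net:1433?database=myDataBase"
- name: azureTenantId
  value: "<TENANT-ID>"
- name: azureClientId
  value: "<CLIENT-ID>"
- name: azureClientSecret
  value: "<CLIENT-SECRET>"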
Other metadata options
Field | Required | Details | Example |
---|---|---|---|
tableName |
N | The name of the table to use. Alpha-numeric with underscores. Defaults to "state" |
"table_name" |
metadataTableName |
N | Name of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata . |
"dapr_metadata" |
keyType |
N | The type of key used. Supported values: "string" (default), "uuid" , "integer" . |
"string" |
keyLength |
N | The max length of key. Ignored if “keyType” is not string . Defaults to "200" |
"200" |
schema |
N | The schema to use. Defaults to "dbo" |
"dapr" ,"dbo" |
indexedProperties |
N | List of indexed properties, as a string containing a JSON document. | '[{"column": "transactionid", "property": "id", "type": "int"}, {"column": "customerid", "property": "customer", "type": "nvarchar(100)"}]' |
actorStateStore |
N | Indicates that Dapr should configure this component for the actor state store (more information). | "true" |
cleanupIntervalInSeconds |
N | Interval, in seconds, to clean up rows with an expired TTL. Default: "3600" (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. |
"1800" , "-1" |
Create a Microsoft SQL Server/Azure SQL instance
Follow the instructions from the Azure documentation on how to create a SQL database. The database must be created before Dapr consumes it.
In order to set up SQL Server as a state store, you need the following properties:
- Connection String: The SQL Server connection string. For example: server=localhost;user id=sa;password=your-password;port=1433;database=mydatabase;
- Schema: The database schema to use (default=dbo). Created if it does not exist
- Table Name: The database table name. Created if it does not exist
- Indexed Properties: Optional properties from JSON data which will be indexed and persisted as individual columns
Create a dedicated user
When connecting with a dedicated user (not sa
), these authorizations are required for the user - even when the user is the owner of the desired database schema:
CREATE TABLE
CREATE TYPE
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to indicate after how many seconds the data should be considered “expired”.
Because SQL Server doesn’t have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered “expired”. “Expired” records are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
You can set the interval for the deletion of expired records with the cleanupIntervalInSeconds
metadata property, which defaults to 3600 seconds (that is, 1 hour).
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting
cleanupIntervalInSeconds
to a smaller value - for example,300
(300 seconds, or 5 minutes). - If you do not plan to use TTLs with Dapr and the SQL Server state store, you should consider setting
cleanupIntervalInSeconds
to a value <= 0 (e.g.0
or-1
) to disable the periodic cleanup and reduce the load on the database.
The state store does not have an index on the ExpireDate
column, which means that each clean up operation must perform a full table scan. If you intend to write to the table with a large number of records that use TTLs, you should consider creating an index on the ExpireDate
column. An index makes queries faster, but uses more storage space and slightly slows down writes.
CREATE CLUSTERED INDEX expiredate_idx ON state(ExpireDate ASC)
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.19 - MongoDB
Component format
To set up MongoDB state store, create a component of type state.mongodb
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.mongodb
version: v1
metadata:
- name: server
value: <REPLACE-WITH-SERVER> # Required unless "host" field is set . Example: "server.example.com"
- name: host
value: <REPLACE-WITH-HOST> # Required unless "server" field is set . Example: "mongo-mongodb.default.svc.cluster.local:27017"
- name: username
value: <REPLACE-WITH-USERNAME> # Optional. Example: "admin"
- name: password
value: <REPLACE-WITH-PASSWORD> # Optional.
- name: databaseName
value: <REPLACE-WITH-DATABASE-NAME> # Optional. default: "daprStore"
- name: collectionName
value: <REPLACE-WITH-COLLECTION-NAME> # Optional. default: "daprCollection"
- name: writeConcern
value: <REPLACE-WITH-WRITE-CONCERN> # Optional.
- name: readConcern
value: <REPLACE-WITH-READ-CONCERN> # Optional.
- name: operationTimeout
value: <REPLACE-WITH-OPERATION-TIMEOUT> # Optional. default: "5s"
- name: params
value: <REPLACE-WITH-ADDITIONAL-PARAMETERS> # Optional. Example: "?authSource=daprStore&ssl=true"
# Uncomment this if you wish to use MongoDB as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Actor state store and transactions support
When using MongoDB as an actor state store or to leverage transactions, it must be running in a Replica Set.
If you wish to use MongoDB as an actor store, add this metadata option to your Component YAML:
- name: actorStateStore
value: "true"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
server | Y1 | The server to connect to, when using DNS SRV record | "server.example.com" |
host | Y1 | The host to connect to | "mongo-mongodb.default.svc.cluster.local:27017" |
username | N | The username of the user to connect with (applicable in conjunction with host ) |
"admin" |
password | N | The password of the user (applicable in conjunction with host ) |
"password" |
databaseName | N | The name of the database to use. Defaults to "daprStore" |
"daprStore" |
collectionName | N | The name of the collection to use. Defaults to "daprCollection" |
"daprCollection" |
writeConcern | N | The write concern to use | "majority" |
readConcern | N | The read concern to use | "majority" , "local" ,"available" , "linearizable" , "snapshot" |
operationTimeout | N | The timeout for the operation. Defaults to "5s" |
"5s" |
params | N2 | Additional parameters to use | "?authSource=daprStore&ssl=true" |
actorStateStore | N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
[1] The
server
andhost
fields are mutually exclusive. If neither or both are set, Dapr returns an error.
[2] The
params
field accepts a query string that specifies connection specific options as<name>=<value>
pairs, separated by&
and prefixed with?
. For example, to use the “daprStore” database as the authentication database and enable SSL/TLS for the connection, specify params as ?authSource=daprStore&ssl=true
. See the mongodb manual for the list of available options and their use cases.
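For example, a host-based configuration that uses the "daprStore" database for authentication and enables TLS could use a metadata section like this sketch (credentials are placeholders):
- name: host
  value: "mongo-mongodb.default.svc.cluster.local:27017"
- name: username
  value: "admin"
- name: password
  value: "<PASSWORD>"
- name: params
  value: "?authSource=daprStore&ssl=true"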
Setup MongoDB
You can run a single MongoDB instance locally using Docker:
docker run --name some-mongo -d -p 27017:27017 mongo
You can then interact with the server at localhost:27017
. If you do not specify a databaseName
value in your component definition, make sure to create a database named daprStore
.
In order to use the MongoDB state store for transactions and as an actor state store, you need to run MongoDB as a Replica Set. Refer to the official documentation for how to create a 3-node Replica Set using Docker.
You can conveniently install MongoDB on Kubernetes using the Helm chart packaged by Bitnami. Refer to the documentation for the Helm chart for deploying MongoDB, both as a standalone server, and with a Replica Set (required for using transactions and actors).
This installs MongoDB into the default
namespace.
To interact with MongoDB, find the service with: kubectl get svc mongo-mongodb
.
For example, if installing using the Helm defaults above, the MongoDB host address would be:
mongo-mongodb.default.svc.cluster.local:27017
Follow the on-screen instructions to get the root password for MongoDB.
The username is typically admin
by default.
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to indicate when the data should be considered “expired”.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.20 - MySQL & MariaDB
Component format
The MySQL state store component allows connecting to both MySQL and MariaDB databases. In this document, “MySQL” refers to both databases.
To set up MySQL state store, create a component of type state.mysql
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.mysql
version: v1
metadata:
- name: connectionString
value: "<CONNECTION STRING>"
- name: schemaName
value: "<SCHEMA NAME>"
- name: tableName
value: "<TABLE NAME>"
- name: timeoutInSeconds
value: "30"
- name: pemPath # Required if pemContents not provided. Path to pem file.
value: "<PEM PATH>"
- name: pemContents # Required if pemPath not provided. Pem value.
value: "<PEM CONTENTS>"
# Uncomment this if you wish to use MySQL & MariaDB as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
If you wish to use MySQL as an actor store, append the following to the yaml.
- name: actorStateStore
value: "true"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | The connection string to connect to MySQL. Do not add the schema to the connection string | Non SSL connection: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true" , Enforced SSL Connection: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom" |
schemaName |
N | The schema name to use. Will be created if schema does not exist. Defaults to "dapr_state_store" |
"custom_schema" , "dapr_schema" |
tableName |
N | The table name to use. Will be created if table does not exist. Defaults to "state" |
"table_name" , "dapr_state" |
timeoutInSeconds |
N | Timeout for all database operations. Defaults to 20 |
30 |
pemPath |
N | Full path to the PEM file to use for an enforced SSL connection; required if pemContents is not provided. Cannot be used in a K8s environment | "/path/to/file.pem" , "C:\path\to\file.pem" |
pemContents |
N | Contents of the PEM file to use for an enforced SSL connection; required if pemPath is not provided. Can be used in a K8s environment | "pem value" |
cleanupIntervalInSeconds |
N | Interval, in seconds, to clean up rows with an expired TTL. Default: 3600 (that is 1 hour). Setting this to values <=0 disables the periodic cleanup. |
1800 , -1 |
actorStateStore |
N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
Setup MySQL
Dapr can use any MySQL instance - containerized, running on your local dev machine, or a managed cloud service.
Run an instance of MySQL. You can run a local instance of MySQL in Docker CE with the following command:
This example does not describe a production configuration because it sets the password in plain text and the user name is left as the MySQL default of “root”.
docker run --name dapr-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest
We can use Helm to quickly create a MySQL instance in our Kubernetes cluster. This approach requires Installing Helm.
-
Install MySQL into your cluster.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install dapr-mysql bitnami/mysql
-
Run
kubectl get pods
to see the MySQL containers now running in your cluster.
-
Next, we’ll get our password, which is slightly different depending on the OS we’re using:
- Windows: Run
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($(kubectl get secret --namespace default dapr-mysql -o jsonpath="{.data.mysql-root-password}")))
and copy the outputted password.
- Linux/MacOS: Run
kubectl get secret --namespace default dapr-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode
and copy the outputted password.
-
With the password you can construct your connection string.
If you are using MySQL on Azure, see the Azure documentation on SSL database connections for information on how to download the required certificate.
Non SSL connection
Replace the <CONNECTION STRING>
value with your connection string. The connection string is a standard MySQL connection string. For example, "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true"
.
Enforced SSL connection
If your server requires SSL your connection string must end with &tls=custom
for example, "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom"
. You must replace the <PEM PATH>
with a full path to the PEM file. The connection to MySQL will require a minimum TLS version of 1.2.
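Putting this together, a sketch of the metadata section for an enforced SSL connection (the connection string and PEM path below are placeholders, as in the examples above):
- name: connectionString
  value: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom"
- name: pemPath
  value: "/path/to/file.pem"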
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to indicate when the data should be considered “expired”.
Because MySQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
The interval at which the deletion of expired records happens is set with the cleanupIntervalInSeconds
metadata property, which defaults to 3600 seconds (that is, 1 hour).
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting
cleanupIntervalInSeconds
to a smaller value, for example300
(300 seconds, or 5 minutes). - If you do not plan to use TTLs with Dapr and the MySQL state store, you should consider setting
cleanupIntervalInSeconds
to a value <= 0 (e.g.0
or-1
) to disable the periodic cleanup and reduce the load on the database.
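For example, to disable the periodic cleanup described in the last point above, you could append the following to the component's metadata section (a sketch):
- name: cleanupIntervalInSeconds
  value: "-1"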
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.21 - OCI Object Storage
Component format
To set up OCI Object Storage state store, create a component of type state.oci.objectstorage
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.oci.objectstorage
version: v1
metadata:
- name: instancePrincipalAuthentication
value: <"true" or "false"> # Optional. default: "false"
- name: configFileAuthentication
value: <"true" or "false"> # Optional. default: "false" . Not used when instancePrincipalAuthentication == "true"
- name: configFilePath
value: <REPLACE-WITH-FULL-QUALIFIED-PATH-OF-CONFIG-FILE> # Optional. No default. Only used when configFileAuthentication == "true"
- name: configFileProfile
value: <REPLACE-WITH-NAME-OF-PROFILE-IN-CONFIG-FILE> # Optional. default: "DEFAULT" . Only used when configFileAuthentication == "true"
- name: tenancyOCID
value: <REPLACE-WITH-TENANCY-OCID> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
- name: userOCID
value: <REPLACE-WITH-USER-OCID> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
- name: fingerPrint
value: <REPLACE-WITH-FINGERPRINT> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
- name: privateKey # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
value: |
-----BEGIN RSA PRIVATE KEY-----
REPLACE-WITH-PRIVATE-KEY-AS-IN-PEM-FILE
-----END RSA PRIVATE KEY-----
- name: region
value: <REPLACE-WITH-OCI-REGION> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
- name: bucketName
value: <REPLACE-WITH-BUCKET-NAME>
- name: compartmentOCID
value: <REPLACE-WITH-COMPARTMENT-OCID>
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
instancePrincipalAuthentication | N | Boolean to indicate whether instance principal based authentication is used. Default: "false" |
"true" or "false" . |
configFileAuthentication | N | Boolean to indicate whether identity credential details are provided through a configuration file. Default: "false" Not required nor used when instancePrincipalAuthentication is true. |
"true" or "false" . |
configFilePath | N | Full path name to the OCI configuration file. No default value exists. Not used when instancePrincipalAuthentication is true. Note: the ~/ prefix is not supported. | "/home/apps/configuration-files/myOCIConfig.txt" . |
configFileProfile | N | Name of profile in configuration file to use. Default: "DEFAULT" Not used when instancePrincipalAuthentication is true. |
"DEFAULT" or "PRODUCTION" . |
tenancyOCID | Y | The OCI tenancy identifier. Not required nor used when instancePrincipalAuthentication is true. | "ocid1.tenancy.oc1..aaaaaaaag7c7sljhsdjhsdyuwe723" . |
userOCID | Y | The OCID for an OCI account (this account requires permissions to access OCI Object Storage). Not required nor used when instancePrincipalAuthentication is true. | "ocid1.user.oc1..aaaaaaaaby4oyyyuqwy7623yuwe76" |
fingerPrint | Y | Fingerprint of the public key. Not required nor used when instancePrincipalAuthentication is true. | "02:91:6c:49:e2:94:21:15:a7:6b:0e:a7:34:e1:3d:1b" |
privateKey | Y | Private key of the RSA key pair. Not required nor used when instancePrincipalAuthentication is true. | "MIIEoyuweHAFGFG2727as+7BTwQRAIW4V" |
region | Y | OCI Region. Not required nor used when instancePrincipalAuthentication is true. | "us-ashburn-1" |
bucketName | Y | Name of the bucket written to and read from (and if necessary created) | "application-state-store-bucket" |
compartmentOCID | Y | The OCID for the compartment that contains the bucket | "ocid1.compartment.oc1..aaaaaaaacsssekayyuq7asjh78" |
Setup OCI Object Storage
The OCI Object Storage state store needs to interact with Oracle Cloud Infrastructure. The state store supports two different approaches to authentication. One is based on an identity (a user or service account) and the other is instance principal authentication leveraging the permissions granted to the compute instance running the application workload. Note: Resource Principal Authentication - used for resources that are not instances such as serverless functions - is not currently supported.
Dapr applications running on Oracle Cloud Infrastructure - in a compute instance or as a container on Kubernetes - can leverage instance principal authentication. See the OCI documentation on calling OCI Services from instances for more background. In short: the instance needs to be a member of a Dynamic Group, and this Dynamic Group needs permissions for interacting with the Object Storage service through IAM policies. In the case of such instance principal authentication, specify the property instancePrincipalAuthentication as "true"
. You do not need to configure the properties tenancyOCID, userOCID, region, fingerPrint and privateKey - these will be ignored if you define values for them.
Identity based authentication interacts with OCI through an OCI account that has permissions to create, read and delete objects through OCI Object Storage in the indicated bucket and that is allowed to create a bucket in the specified compartment if the bucket is not created beforehand. The OCI documentation describes how to create an OCI Account. The interaction by the state store is performed using the public key’s fingerprint and a private key from an RSA Key Pair generated for the OCI account. The instructions for generating the key pair and getting hold of the required information are available in the OCI documentation.
Details for the identity and identity’s credentials to be used for interaction with OCI can be provided directly in the Dapr component properties file - using the properties tenancyOCID, userOCID, fingerPrint, privateKey and region - or can be provided from a configuration file as is common for many OCI related tools (such as CLI and Terraform) and SDKs. In the latter case the exact file name and full path has to be provided through property configFilePath. Note: the ~/ prefix is not supported in the path. A configuration file can contain multiple profiles; the desired profile can be specified through property configFileProfile. If no value is provided, DEFAULT is used as the name for the profile to be used. Note: if the indicated profile is not found, then the DEFAULT profile (if it exists) is used instead. The OCI SDK documentation gives details about the definition of the configuration file.
If you wish to create the bucket for Dapr to use, you can do so beforehand. However, the Object Storage state provider will create one - in the specified compartment - for you automatically if it doesn’t exist.
In order to setup OCI Object Storage as a state store, you need the following properties:
- instancePrincipalAuthentication: The flag that indicates if instance principal based authentication should be used.
- configFileAuthentication: The flag that indicates if the OCI identity credential details are provided through a configuration file. Not used when instancePrincipalAuthentication is true.
- configFilePath: Full path name to the OCI configuration file. Not used when instancePrincipalAuthentication is true or configFileAuthentication is not true.
- configFileProfile: Name of profile in configuration file to use. Default:
"DEFAULT"
Not required nor used when instancePrincipalAuthentication is true or configFileAuthentication is not true. When the specified profile is not found in the configuration file, the DEFAULT profile is used when it exists.
- tenancyOCID: The identifier for the OCI cloud tenancy in which the state is to be stored. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
- userOCID: The identifier for the account used by the state store component to connect to OCI; this must be an account with appropriate permissions on the OCI Object Storage service in the specified compartment and bucket. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
- fingerPrint: The fingerprint for the public key in the RSA key pair generated for the account indicated by userOCID. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
- privateKey: The private key in the RSA key pair generated for the account indicated by userOCID. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
- region: The OCI region - for example us-ashburn-1, eu-amsterdam-1, ap-mumbai-1. Not used when instancePrincipalAuthentication is true
- bucketName: The name of the bucket on OCI Object Storage in which state will be created. This bucket can already exist when the state store is initialized, or it will be created during initialization of the state store. Note that bucket names must be unique within a namespace
- compartmentOCID: The identifier of the compartment within the tenancy in which the bucket exists or will be created.
What Happens at Runtime?
Every state entry is represented by an object in OCI Object Storage. The OCI Object Storage state store uses the key
property provided in the requests to the Dapr API to determine the name of the object. The value
is stored as the (literal) content of the object. Each object is assigned a unique ETag value - whenever it is created or updated (aka overwritten); this is native behavior of OCI Object Storage. The state store assigns a meta data tag to every object it writes; the tag is category and its value is dapr-state-store. This allows the objects created as state for Daprized applications to be identified.
For example, the following operation
curl -X POST http://localhost:3500/v1.0/state \
-H "Content-Type: application/json"
-d '[
{
"key": "nihilus",
"value": "darth"
}
]'
creates the following object:
Bucket | Directory | Object Name | Object Content | Meta Tags |
---|---|---|---|---|
as specified with bucketName in components.yaml | - (root) | nihilus | darth | category: dapr-state-store |
Dapr uses a fixed key scheme with composite keys to partition state across applications. For general states, the key format is:
App-ID||state key
The OCI Object Storage state store maps the first key segment (for App-ID) to a directory within a bucket, using the Prefixes and Hierarchy used for simulating a directory structure as described in the OCI Object Storage documentation.
The following operation therefore (notice the composite key)
curl -X POST http://localhost:3500/v1.0/state \
-H "Content-Type: application/json"
-d '[
{
"key": "myApplication||nihilus",
"value": "darth"
}
]'
will create the following object:
Bucket | Directory | Object Name | Object Content | Meta Tags |
---|---|---|---|---|
as specified with bucketName in components.yaml | myApplication | nihilus | darth | category: dapr-state-store |
You will be able to inspect all state stored through the OCI Object Storage state store by inspecting the contents of the bucket through the console, the APIs, CLI or SDKs. By going directly to the bucket, you can prepare state that will be available as state to your application at runtime.
Time To Live and State Expiration
The OCI Object Storage state store supports Dapr’s Time To Live logic that ensures that state cannot be retrieved after it has expired. See this How To on Setting State Time To Live for details.
OCI Object Storage does not have native support for a Time To Live setting. The implementation in this component uses a meta data tag put on each object for which a TTL has been specified. The tag is called expiry-time-from-ttl and it contains a string in ISO date time format with the UTC based expiry time. When state is retrieved through a call to Get, this component checks if it has the expiry-time-from-ttl set and if so it checks whether it is in the past. In that case, no state is returned.
For example, the following operation
curl -X POST http://localhost:3500/v1.0/state \
-H "Content-Type: application/json"
-d '[
{
"key": "temporary",
"value": "ephemeral",
"metadata": {"ttlInSeconds": "120"}}
}
]'
creates the following object:
Bucket | Directory | Object Name | Object Content | Meta Tags |
---|---|---|---|---|
as specified with bucketName in components.yaml | - | temporary | ephemeral | category: dapr-state-store , expiry-time-from-ttl: 2022-01-06T08:34:32 |
The exact value of the expiry-time-from-ttl depends of course on the time at which the state was created and will be 120 seconds later than that moment.
Note that expired state is not removed from the state store by this component. An application operator may decide to run a periodic job that does a form of garbage collection in order to explicitly remove all state that has an expiry-time-from-ttl label with a timestamp in the past.
Concurrency
OCI Object Storage state concurrency is achieved by using ETag
s. Each object in OCI Object Storage is assigned a unique ETag when it is created or updated (aka replaced). When the Set
and Delete
requests for this state store specify the FirstWrite concurrency policy, then the request needs to provide the actual ETag value for the state to be written or removed for the request to be successful.
Consistency
OCI Object Storage state does not support Transactions.
Query
OCI Object Storage state does not support the Query API.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.22 - Oracle Database
Component format
Create a component properties yaml file, for example called oracle.yaml
(but it could be named anything), paste the following and replace the <CONNECTION STRING>
value with your connection string. The connection string is a standard Oracle Database connection string, composed as: "oracle://user/password@host:port/servicename"
for example "oracle://demo:demo@localhost:1521/xe"
.
In case you connect to the database using an Oracle Wallet, you should specify a value for the oracleWalletLocation
property, for example: "/home/app/state/Wallet_daprDB/"
; this should refer to the local file system directory that contains the file cwallet.sso
that is extracted from the Oracle Wallet archive file.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.oracledatabase
version: v1
metadata:
- name: connectionString
value: "<CONNECTION STRING>"
- name: oracleWalletLocation
value: "<FULL PATH TO DIRECTORY WITH ORACLE WALLET CONTENTS >" # Optional, no default
- name: tableName
value: "<NAME OF DATABASE TABLE TO STORE STATE IN >" # Optional, defaults to STATE
# Uncomment this if you wish to use Oracle Database as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString | Y | The connection string for Oracle Database | "oracle://user/password@host:port/servicename" for example "oracle://demo:demo@localhost:1521/xe" or for Autonomous Database "oracle://states_schema:State12345pw@adb.us-ashburn-1.oraclecloud.com:1522/k8j2agsqjsw_daprdb_low.adb.oraclecloud.com" |
oracleWalletLocation | N | Location of the contents of an Oracle Wallet file (required to connect to Autonomous Database on OCI) | "/home/app/state/Wallet_daprDB/" |
tableName | N | Name of the database table in which this instance of the state store records the data default "STATE" |
"MY_APP_STATE_STORE" |
actorStateStore | N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
What Happens at Runtime?
When the state store component initializes, it connects to the Oracle Database and checks if a table with the name specified with tableName
exists. If it does not, it creates this table (with columns Key, Value, Binary_YN, ETag, Creation_Time, Update_Time, Expiration_time).
Every state entry is represented by a record in the database table. The key
property provided in the request is used to determine the name of the object stored literally in the KEY column. The value
is stored as the content of the object. Binary content is stored as Base64 encoded text. Each object is assigned a unique ETag value whenever it is created or updated.
For example, the following operation
curl -X POST http://localhost:3500/v1.0/state \
-H "Content-Type: application/json"
-d '[
{
"key": "nihilus",
"value": "darth"
}
]'
creates the following records in table STATE:
KEY | VALUE | CREATION_TIME | BINARY_YN | ETAG |
---|---|---|---|---|
nihilus | darth | 2022-02-14T22:11:00 | N | 79dfb504-5b27-43f6-950f-d55d5ae0894f |
Dapr uses a fixed key scheme with composite keys to partition state across applications. For general states, the key format is:
App-ID||state key
. The Oracle Database state store maps this key in its entirety to the KEY column.
You can easily inspect all state stored with SQL queries against the tableName
table, for example the STATE table.
Time To Live and State Expiration
The Oracle Database state store component supports Dapr’s Time To Live logic that ensures that state cannot be retrieved after it has expired. See this How To on Setting State Time To Live for details.
The Oracle Database does not have native support for a Time-To-Live setting. The implementation in this component uses a column called EXPIRATION_TIME
to hold the time after which the record is considered expired. The value in this column is set only when a TTL was specified in a Set
request. It is calculated as the current UTC timestamp with the TTL period added to it. When state is retrieved through a call to Get
, this component checks if it has the EXPIRATION_TIME
set and if so, it checks whether it is in the past. In that case, no state is returned.
The following operation:
curl -X POST http://localhost:3500/v1.0/state \
-H "Content-Type: application/json"
-d '[
{
"key": "temporary",
"value": "ephemeral",
"metadata": {"ttlInSeconds": "120"}}
}
]'
creates the following object:
KEY | VALUE | CREATION_TIME | EXPIRATION_TIME | BINARY_YN | ETAG |
---|---|---|---|---|---|
temporary | ephemeral | 2022-03-31T22:11:00 | 2022-03-31T22:13:00 | N | 79dfb504-5b27-43f6-950f-d55d5ae0894f |
with the EXPIRATION_TIME set to a timestamp 2 minutes (120 seconds) later than the CREATION_TIME.
Note that expired state is not removed from the state store by this component. An application operator may decide to run a periodic job that does a form of garbage collection in order to explicitly remove all state records with an EXPIRATION_TIME in the past. The SQL statement for collecting the expired garbage records:
delete dapr_state
where expiration_time < SYS_EXTRACT_UTC(SYSTIMESTAMP);
Concurrency
Concurrency in the Oracle Database state store is achieved by using ETags. Each piece of state recorded in the Oracle Database state store is assigned a unique ETag - a generated, unique string stored in the column ETag - when it is created or updated. Note: the column UPDATE_TIME is also updated whenever a Set
operation is performed on an existing record.
Only when a Set or Delete request for this state store specifies the FirstWrite concurrency policy does the request need to provide the actual ETag value for the state to be written or removed in order to succeed. If a different concurrency policy is specified, or none at all, no check is performed on the ETag value.
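For example, a Set request using the FirstWrite policy might look like the following. This is a sketch: oraclestatestore is a hypothetical component name (substitute the name of your state store component), and the ETag value is the one from the earlier example.
curl -X POST http://localhost:3500/v1.0/state/oraclestatestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "nihilus",
"value": "darth revised",
"etag": "79dfb504-5b27-43f6-950f-d55d5ae0894f",
"options": {"concurrency": "first-write"}
}
]'
If the provided ETag no longer matches the value in the ETAG column, the write is rejected and the existing state is left unchanged.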
Consistency
The Oracle Database state store supports Transactions. Multiple Set
and Delete
commands can be combined in a request that is processed as a single, atomic transaction.
Note: simple Set and Delete operations are a transaction on their own; when a Set or Delete request returns an HTTP 20X result, the database transaction has been committed successfully.
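For example, a transactional request that combines an upsert and a delete might look like the following sketch, again assuming a hypothetical component name oraclestatestore:
curl -X POST http://localhost:3500/v1.0/state/oraclestatestore/transaction \
-H "Content-Type: application/json" \
-d '{
"operations": [
{"operation": "upsert", "request": {"key": "nihilus", "value": "darth"}},
{"operation": "delete", "request": {"key": "temporary"}}
]
}'
Either both operations are committed to the Oracle Database, or neither is.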
Query
Oracle Database state store does not currently support the Query API.
Create an Oracle Database and User Schema
-
Run an instance of Oracle Database. You can run a local instance of Oracle Database in Docker CE with the following command - or of course use an existing Oracle Database:
docker run -d -p 1521:1521 -e ORACLE_PASSWORD=TheSuperSecret1509! gvenzl/oracle-xe
This example does not describe a production configuration because it sets the passwords for users
SYS
and SYSTEM
in plain text. When the output from the command indicates that the container is running, find the container id using the
docker ps
command. Then start a shell session using:
docker exec -it <container id> /bin/bash
and subsequently run the SQL*Plus client, connecting to the database as the SYS user:
sqlplus sys/TheSuperSecret1509! as sysdba
-
Create a database schema for state data. Create a new user schema - for example called dapr - for storing state data. Grant this user (schema) privileges for creating a table and storing data in the associated tablespace.
To create a new user schema in Oracle Database, run the following SQL command:
create user dapr identified by DaprPassword4239 default tablespace users quota unlimited on users;
grant create session, create table to dapr;
-
(optional) Create table for storing state records. The Oracle Database state store component checks if the table for storing state already exists in the database user schema it connects to and if it does not, it creates that table. However, instead of having the Oracle Database state store component create the table for storing state records at run time, you can also create the table in advance. That gives you - or the DBA for the database - more control over the physical configuration of the table. This also means you do not have to grant the create table privilege to the user schema.
Run the following DDL statement to create the table for storing the state in the dapr database user schema:
CREATE TABLE dapr_state (
  key varchar2(2000) NOT NULL PRIMARY KEY,
  value clob NOT NULL,
  binary_yn varchar2(1) NOT NULL,
  etag varchar2(50) NOT NULL,
  creation_time TIMESTAMP WITH TIME ZONE DEFAULT SYSTIMESTAMP NOT NULL,
  expiration_time TIMESTAMP WITH TIME ZONE NULL,
  update_time TIMESTAMP WITH TIME ZONE NULL
)
-
Create a free (or paid for) Autonomous Transaction Processing (ATP) or ADW (Autonomous Data Warehouse) instance on Oracle Cloud Infrastructure, as described in the OCI documentation for the always free autonomous database.
You need to provide the password for user ADMIN. You use this account (initially at least) for database administration activities. You can work in the web-based SQL Developer tool, in its desktop counterpart, or in any of a number of database development tools.
-
Create a schema for state data. Create a new user schema in the Oracle Database for storing state data - for example using the ADMIN account. Grant this new user (schema) privileges for creating a table and storing data in the associated tablespace.
To create a new user schema in Oracle Database, run the following SQL command:
create user dapr identified by DaprPassword4239 default tablespace users quota unlimited on users;
grant create session, create table to dapr;
-
(optional) Create table for storing state records. The Oracle Database state store component checks if the table for storing state already exists in the database user schema it connects to and if it does not, it creates that table. However, instead of having the Oracle Database state store component create the table for storing state records at run time, you can also create the table in advance. That gives you - or the DBA for the database - more control over the physical configuration of the table. This also means you do not have to grant the create table privilege to the user schema.
Run the following DDL statement to create the table for storing the state in the dapr database user schema:
CREATE TABLE dapr_state (
  key varchar2(2000) NOT NULL PRIMARY KEY,
  value clob NOT NULL,
  binary_yn varchar2(1) NOT NULL,
  etag varchar2(50) NOT NULL,
  creation_time TIMESTAMP WITH TIME ZONE DEFAULT SYSTIMESTAMP NOT NULL,
  expiration_time TIMESTAMP WITH TIME ZONE NULL,
  update_time TIMESTAMP WITH TIME ZONE NULL
)
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.23 - PostgreSQL
Note
This is v2 of the PostgreSQL state store component, which includes improvements to performance and reliability. New applications are encouraged to use v2.
The PostgreSQL v2 state store component is not compatible with the v1 component, and data cannot be migrated between the two components. The v2 component does not offer support for state store query APIs.
There are no plans to deprecate the v1 component.
This component allows using PostgreSQL (Postgres) as a state store for Dapr, using the “v2” component. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.postgresql
# Note: setting "version" to "v2" is required to use the v2 of the component
version: v2
metadata:
# Connection string
- name: connectionString
value: "<CONNECTION STRING>"
# Individual connection parameters - can be used instead to override connectionString parameters
#- name: host
# value: "localhost"
#- name: hostaddr
# value: "127.0.0.1"
#- name: port
# value: "5432"
#- name: database
# value: "my_db"
#- name: user
# value: "postgres"
#- name: password
# value: "example"
#- name: sslRootCert
# value: "/path/to/ca.crt"
# Timeout for database operations, as a Go duration or number of seconds (optional)
#- name: timeout
# value: 20
# Prefix for the table where the data is stored (optional)
#- name: tablePrefix
# value: ""
# Name of the table where to store metadata used by Dapr (optional)
#- name: metadataTableName
# value: "dapr_metadata"
# Cleanup interval in seconds, to remove expired rows (optional)
#- name: cleanupInterval
# value: "1h"
# Maximum number of connections pooled by this component (optional)
#- name: maxConns
# value: 0
# Max idle time for connections before they're closed (optional)
#- name: connectionMaxIdleTime
# value: 0
# Controls the default mode for executing queries. (optional)
#- name: queryExecMode
# value: ""
# Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Authenticate using a connection string
The following metadata options are required to authenticate using a PostgreSQL connection string.
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string. | "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db" |
Authenticate using individual connection parameters
In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.
Field | Required | Details | Example |
---|---|---|---|
host |
Y | The host name or IP address of the PostgreSQL server | "localhost" |
hostaddr |
N | The IP address of the PostgreSQL server (alternative to host) | "127.0.0.1" |
port |
Y | The port number of the PostgreSQL server | "5432" |
database |
Y | The name of the database to connect to | "my_db" |
user |
Y | The PostgreSQL user to connect as | "postgres" |
password |
Y | The password for the PostgreSQL user | "example" |
sslRootCert |
N | Path to the SSL root certificate file | "/path/to/ca.crt" |
Note
When using individual connection parameters, these will override the ones present in the connectionString.
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field | Required | Details | Example |
---|---|---|---|
useAzureAD |
Y | Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID. |
"true" |
connectionString |
Y | The connection string for the PostgreSQL database. This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity. This is often the name of the corresponding principal (for example, the name of the Microsoft Entra ID application). This connection string should not contain any password. |
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require" |
azureTenantId |
N | ID of the Microsoft Entra ID tenant | "cd4b2887-304c-…" |
azureClientId |
N | Client ID (application ID) | "c7dd251f-811f-…" |
azureClientSecret |
N | Client secret (application password) | "Ecy3X…" |
Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam
database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
Field | Required | Details | Example |
---|---|---|---|
useAWSIAM |
Y | Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. |
"true" |
connectionString |
Y | The connection string for the PostgreSQL database. This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. |
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require" |
awsRegion |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to. | "us-east-1" |
awsAccessKey |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account | "AKIAIOSFODNN7EXAMPLE" |
awsSecretKey |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
awsSessionToken |
N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | "TOKEN" |
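As a sketch of satisfying the requirement above that the database user already exists and holds the rds_iam role, you could create it with psql; the host matches the example connection string, while the admin user and the myapplication user name are assumptions:
psql "host=mydb.postgres.database.aws.com port=5432 dbname=my_db user=postgres" \
-c "CREATE USER myapplication WITH LOGIN; GRANT rds_iam TO myapplication;"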
Other metadata options
Field | Required | Details | Example |
---|---|---|---|
tablePrefix |
N | Prefix for the table where the data is stored. Can optionally have the schema name as prefix, such as public.prefix_ |
"prefix_" , "public.prefix_" |
metadataTableName |
N | Name of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata . Can optionally have the schema name as prefix, such as public.dapr_metadata |
"dapr_metadata" , "public.dapr_metadata" |
timeout |
N | Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s |
"30s" , 30 |
cleanupInterval |
N | Interval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: 1h (1 hour). Setting this to values <=0 disables the periodic cleanup. |
"30m" , 1800 , -1 |
maxConns |
N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | "4" |
connectionMaxIdleTime |
N | Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose. | "5m" |
queryExecMode |
N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol . |
"simple_protocol" |
actorStateStore |
N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
Setup PostgreSQL
-
Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker with the following command:
docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of “postgres”.
-
Create a database for state data.
Either the default “postgres” database can be used, or create a new database for storing state data.
To create a new database in PostgreSQL, run the following SQL command:
CREATE DATABASE my_dapr;
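If you are using the Docker container from the previous step, one way to run that command is with psql inside the container. This is a sketch; it assumes you started the container with a name such as dapr-postgres (the command above does not assign one):
docker exec -it dapr-postgres psql -U postgres -c "CREATE DATABASE my_dapr;"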
Advanced
Differences between v1 and v2
The PostgreSQL state store v2 was introduced in Dapr 1.13. The pre-existing v1 remains available and is not deprecated.
In the v2 component, the table schema has been changed significantly, with the goal of increasing performance and reliability. Most notably, the value stored by Dapr is now of type BYTEA, which allows faster queries and, in some cases, is more space-efficient than the previously-used JSONB column.
However, due to this change, the v2 component does not support the Dapr state store query APIs.
Also, in the v2 component, ETags are now random UUIDs, which ensures better compatibility with other PostgreSQL-compatible databases, such as CockroachDB.
Because of these changes, v1 and v2 components are not able to read or write data from the same table. At this stage, it’s also impossible to migrate data between the two versions of the component.
Displaying the data in human-readable format
The PostgreSQL v2 component stores the state’s value in the value
column, which is of type BYTEA. Most PostgreSQL tools, including pgAdmin, consider the value as binary and do not display it in human-readable form by default.
If you want to inspect the value in the state store, and you know it’s not binary (for example, JSON data), you can have the value displayed in human-readable form using a query like the following:
-- Replace "state" with the name of the state table in your environment
SELECT *, convert_from(value, 'utf-8') FROM state;
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to indicate after how many seconds the data should be considered “expired”.
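For example, the following request stores a value that expires after 5 minutes. This is a sketch; poststatestore is a hypothetical component name:
curl -X POST http://localhost:3500/v1.0/state/poststatestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sessiondata",
"value": "expires soon",
"metadata": {"ttlInSeconds": "300"}
}
]'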
Because PostgreSQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
You can set the deletion interval of expired records with the cleanupInterval
metadata property, which defaults to 3600 seconds (that is, 1 hour).
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting
cleanupInterval
to a smaller value; for example,5m
(5 minutes). - If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting
cleanupInterval
to a value <= 0 (for example,0
or-1
) to disable the periodic cleanup and reduce the load on the database.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.24 - PostgreSQL v1
Note
Starting with Dapr 1.13, you can leverage the PostgreSQL v2 state store component, which contains some improvements to performance and reliability.
The v2 component is not compatible with v1, and data cannot be migrated between the two components. The v2 component does not offer support for state store query APIs.
There are no plans to deprecate the v1 component.
This component allows using PostgreSQL (Postgres) as a state store for Dapr, using the “v1” component. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.postgresql
version: v1
metadata:
# Connection string
- name: connectionString
value: "<CONNECTION STRING>"
# Individual connection parameters - can be used instead to override connectionString parameters
#- name: host
# value: "localhost"
#- name: hostaddr
# value: "127.0.0.1"
#- name: port
# value: "5432"
#- name: database
# value: "my_db"
#- name: user
# value: "postgres"
#- name: password
# value: "example"
#- name: sslRootCert
# value: "/path/to/ca.crt"
# Timeout for database operations, as a Go duration or number of seconds (optional)
#- name: timeout
# value: 20
# Name of the table where to store the state (optional)
#- name: tableName
# value: "state"
# Name of the table where to store metadata used by Dapr (optional)
#- name: metadataTableName
# value: "dapr_metadata"
# Cleanup interval in seconds, to remove expired rows (optional)
#- name: cleanupInterval
# value: "1h"
# Maximum number of connections pooled by this component (optional)
#- name: maxConns
# value: 0
# Max idle time for connections before they're closed (optional)
#- name: connectionMaxIdleTime
# value: 0
# Controls the default mode for executing queries. (optional)
#- name: queryExecMode
# value: ""
# Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Authenticate using a connection string
The following metadata options are required to authenticate using a PostgreSQL connection string.
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string. | "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db" |
Authenticate using individual connection parameters
In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.
Field | Required | Details | Example |
---|---|---|---|
host |
Y | The host name or IP address of the PostgreSQL server | "localhost" |
hostaddr |
N | The IP address of the PostgreSQL server (alternative to host) | "127.0.0.1" |
port |
Y | The port number of the PostgreSQL server | "5432" |
database |
Y | The name of the database to connect to | "my_db" |
user |
Y | The PostgreSQL user to connect as | "postgres" |
password |
Y | The password for the PostgreSQL user | "example" |
sslRootCert |
N | Path to the SSL root certificate file | "/path/to/ca.crt" |
Note
When using individual connection parameters, these will override the ones present in the connectionString.
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field | Required | Details | Example |
---|---|---|---|
useAzureAD |
Y | Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID. |
"true" |
connectionString |
Y | The connection string for the PostgreSQL database. This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password. |
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require" |
azureTenantId |
N | ID of the Microsoft Entra ID tenant | "cd4b2887-304c-…" |
azureClientId |
N | Client ID (application ID) | "c7dd251f-811f-…" |
azureClientSecret |
N | Client secret (application password) | "Ecy3X…" |
Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam
database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
Field | Required | Details | Example |
---|---|---|---|
useAWSIAM |
Y | Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. |
"true" |
connectionString |
Y | The connection string for the PostgreSQL database. This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. |
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require" |
awsRegion |
N | The AWS Region where the AWS Relational Database Service is deployed to. | "us-east-1" |
awsAccessKey |
N | AWS access key associated with an IAM account | "AKIAIOSFODNN7EXAMPLE" |
awsSecretKey |
N | The secret key associated with the access key | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
awsSessionToken |
N | AWS session token to use. A session token is only required if you are using temporary security credentials. | "TOKEN" |
Other metadata options
Field | Required | Details | Example |
---|---|---|---|
tableName |
N | Name of the table where the data is stored. Defaults to state . Can optionally have the schema name as prefix, such as public.state |
"state" , "public.state" |
metadataTableName |
N | Name of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata . Can optionally have the schema name as prefix, such as public.dapr_metadata |
"dapr_metadata" , "public.dapr_metadata" |
timeout |
N | Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s |
"30s" , 30 |
cleanupInterval |
N | Interval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: 1h (1 hour). Setting this to values <=0 disables the periodic cleanup. |
"30m" , 1800 , -1 |
maxConns |
N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | "4" |
connectionMaxIdleTime |
N | Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose. | "5m" |
queryExecMode |
N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol . |
"simple_protocol" |
actorStateStore |
N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
Setup PostgreSQL
-
Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker CE with the following command:
docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of “postgres”.
-
Create a database for state data.
Either the default “postgres” database can be used, or create a new database for storing state data.
To create a new database in PostgreSQL, run the following SQL command:
CREATE DATABASE my_dapr;
Advanced
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to indicate after how many seconds the data should be considered “expired”.
Because PostgreSQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
You can set the deletion interval of expired records with the cleanupInterval
metadata property, which defaults to 3600 seconds (that is, 1 hour).
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting
cleanupInterval
to a smaller value; for example,5m
(5 minutes). - If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting
cleanupInterval
to a value <= 0 (for example,0
or-1
) to disable the periodic cleanup and reduce the load on the database.
The column in the state table where the expiration date for records is stored, expiredate
, does not have an index by default, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is state
(the default), you can use this query:
CREATE INDEX expiredate_idx
ON state
USING btree (expiredate ASC NULLS LAST);
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.25 - Redis
Component format
To setup Redis state store create a component of type state.redis
. See this guide on how to create and apply a state store configuration.
Limitations
Before using Redis and the Transactions API, make sure you’re familiar with Redis limitations regarding transactions.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword # Optional.
value: <PASSWORD>
- name: useEntraID
value: <bool> # Optional. Allowed: true, false.
- name: enableTLS
value: <bool> # Optional. Allowed: true, false.
- name: clientCert
value: # Optional
- name: clientKey
value: # Optional
- name: maxRetries
value: # Optional
- name: maxRetryBackoff
value: # Optional
- name: failover
value: <bool> # Optional. Allowed: true, false.
- name: sentinelMasterName
value: <string> # Optional
- name: sentinelUsername
value: # Optional
- name: sentinelPassword
value: # Optional
- name: redeliverInterval
value: # Optional
- name: processingTimeout
value: # Optional
- name: redisType
value: # Optional
- name: redisDB
value: # Optional
- name: redisMaxRetries
value: # Optional
- name: redisMinRetryInterval
value: # Optional
- name: redisMaxRetryInterval
value: # Optional
- name: dialTimeout
value: # Optional
- name: readTimeout
value: # Optional
- name: writeTimeout
value: # Optional
- name: poolSize
value: # Optional
- name: poolTimeout
value: # Optional
- name: maxConnAge
value: # Optional
- name: minIdleConns
value: # Optional
- name: idleCheckFrequency
value: # Optional
- name: idleTimeout
value: # Optional
- name: ttlInSeconds
value: <int> # Optional
- name: queryIndexes
value: <string> # Optional
# Uncomment this if you wish to use Redis as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
If you wish to use Redis as an actor store, append the following to the yaml.
- name: actorStateStore
value: "true"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
redisHost | Y | Connection-string for the redis host | localhost:6379 , redis-master.default.svc.cluster.local:6379 |
redisPassword | N | Password for Redis host. No Default. Can be secretKeyRef to use a secret reference |
"" , "KeFg23!" |
redisUsername | N | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly. | "" , "default" |
useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this, follow the Azure Cache for Redis setup steps below. |
"true" , "false" |
enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to "false" |
"true" , "false" |
clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here |
"----BEGIN CERTIFICATE-----\nMIIC..." |
clientKey | N | The content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here |
"----BEGIN PRIVATE KEY-----\nMIIE..." |
maxRetries | N | Maximum number of retries before giving up. Defaults to 3 |
5 , 10 |
maxRetryBackoff | N | Maximum backoff between each retry. Defaults to 2 seconds; "-1" disables backoff. |
3000000000 |
failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See Redis Sentinel Documentation. Defaults to "false" |
"true" , "false" |
sentinelMasterName | N | The sentinel master name. See Redis Sentinel Documentation | "" , "mymaster" |
sentinelUsername | N | Username for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled | "username" |
sentinelPassword | N | Password for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled | "password" |
redeliverInterval | N | The interval between checking for pending messages to redeliver. Defaults to "60s" . "0" disables redelivery. |
"30s" |
processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Defaults to "15s" . "0" disables redelivery. |
"30s" |
redisType | N | The type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node" . |
"cluster" |
redisDB | N | Database selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0" . |
"0" |
redisMaxRetries | N | Alias for maxRetries . If both values are set maxRetries is ignored. |
"5" |
redisMinRetryInterval | N | Minimum backoff for redis commands between each retry. Default is "8ms" ; "-1" disables backoff. |
"8ms" |
redisMaxRetryInterval | N | Alias for maxRetryBackoff . If both values are set maxRetryBackoff is ignored. |
"5s" |
dialTimeout | N | Dial timeout for establishing new connections. Defaults to "5s" . |
"5s" |
readTimeout | N | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s" , "-1" for no timeout. |
"3s" |
writeTimeout | N | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout. | "3s" |
poolSize | N | Maximum number of socket connections. Default is 10 connections per CPU as reported by runtime.NumCPU. | "20" |
poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | "5s" |
maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | "30m" |
minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0" . |
"2" |
idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is "1m" . "-1" disables idle connections reaper. |
"-1" |
idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m" . "-1" disables idle timeout check. |
"10m" |
ttlInSeconds | N | Allows specifying a default Time-to-live (TTL) in seconds that will be applied to every state store request unless TTL is explicitly defined via the request metadata. | 600 |
queryIndexes | N | Indexing schemas for querying JSON objects | see Querying JSON objects |
actorStateStore | N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
Setup Redis
Dapr can use any Redis instance: containerized, running on your local dev machine, or a managed cloud service.
A Redis instance is automatically created as a Docker container when you run dapr init
You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.
-
Install Redis into your cluster. Note that we’re explicitly setting an image tag to get a version greater than 5, which is what Dapr’s pub/sub functionality requires. If you’re intending on using Redis as just a state store (and not for pub/sub), you do not have to set the image version.
helm repo add bitnami https://charts.bitnami.com/bitnami helm install redis bitnami/redis
-
Run
kubectl get pods
to see the Redis containers now running in your cluster. -
Add
redis-master:6379
as theredisHost
in your redis.yaml file. For example:metadata: - name: redisHost value: redis-master:6379
-
Next, get the Redis password, which is slightly different depending on the OS you’re using:
-
Windows: Run
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64
, which creates a file with your encoded password. Next, runcertutil -decode encoded.b64 password.txt
, which will put your redis password in a text file calledpassword.txt
. Copy the password and delete the two files. -
Linux/MacOS: Run
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
and copy the outputted password.
Add this password as the
redisPassword
value in your redis.yaml file. For example:metadata: - name: redisPassword value: lhDOkwTlp0
-
-
Create an Azure Cache for Redis instance using the official Microsoft documentation.
-
Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.
- For the Host name:
- Navigate to the resource’s Overview page.
- Copy the Host name value.
- For your access key:
- Navigate to Settings > Access Keys.
- Copy and save your key.
-
Add your key and your host name to a
redis.yaml
file that Dapr can apply to your cluster.- If you’re running a sample, add the host and key to the provided
redis.yaml
. - If you’re creating a project from the ground up, create a
redis.yaml
file as specified in the Component format section.
- If you’re running a sample, add the host and key to the provided
-
Set the
redisHost
key to[HOST NAME FROM PREVIOUS STEP]:6379
and theredisPassword
key to the key you saved earlier.Note: In a production-grade application, follow secret management instructions to securely manage your secrets.
-
Enable EntraID support:
- Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
- Set
useEntraID
to"true"
to implement EntraID support for Azure Cache for Redis.
-
Set
enableTLS
to"true"
to support TLS.
Note:
useEntraID
assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID
property.
Querying JSON objects (optional)
In addition to supporting storing and querying state data as key/value pairs, the Redis state store optionally supports querying of JSON objects to meet more complex querying or filtering requirements. To enable this feature, the following steps are required:
- The Redis store must support Redis modules, specifically both RediSearch and RedisJSON. If you are deploying and running Redis yourself, load the RediSearch and RedisJSON modules when deploying the Redis service.
- Specify
queryIndexes
entry in the metadata of the component config. The value of thequeryIndexes
is a JSON array of the following format:
[
{
"name": "<indexing name>",
"indexes": [
{
"key": "<JSONPath-like syntax for selected element inside documents>",
"type": "<value type (supported types: TEXT, NUMERIC)>",
},
...
]
},
...
]
- When calling state management API, add the following metadata to the API calls:
- Save State, Get State, Delete State:
- add
metadata.contentType=application/json
URL query parameter to HTTP API request - add
"contentType": "application/json"
pair to the metadata of gRPC API request
- add
- Query State:
- add
metadata.contentType=application/json&metadata.queryIndexName=<indexing name>
URL query parameters to HTTP API request - add
"contentType" : "application/json"
and"queryIndexName" : "<indexing name>"
pairs to the metadata of gRPC API request
- add
Consider an example where you store documents like this:
{
"key": "1",
"value": {
"person": {
"org": "Dev Ops",
"id": 1036
},
"city": "Seattle",
"state": "WA"
}
}
The component config file containing the corresponding indexing schema looks like this:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.redis
version: v1
initTimeout: 1m
metadata:
- name: redisHost
value: "localhost:6379"
- name: redisPassword
value: ""
- name: queryIndexes
value: |
[
{
"name": "orgIndx",
"indexes": [
{
"key": "person.org",
"type": "TEXT"
},
{
"key": "person.id",
"type": "NUMERIC"
},
{
"key": "state",
"type": "TEXT"
},
{
"key": "city",
"type": "TEXT"
}
]
}
]
You can now store, retrieve, and query these documents.
Consider the example from the “How-To: Query state” guide. Let’s run it with Redis.
If you are using a self-hosted deployment of Dapr, a Redis instance without the JSON module is automatically created as a Docker container when you run dapr init
.
Alternatively, you can create an instance of Redis by running the following command:
docker run -p 6379:6379 --name redis --rm redis
The Redis container that gets created by dapr init or via the above command cannot be used with the state store query API on its own. You can run the redislabs/rejson Docker image on a different port (than the one the already installed Redis is using) to work with the query API.
Note:
redislabs/rejson
has support only for amd64 architecture.
Use the following command to create an instance of Redis compatible with the query API.
docker run -p 9445:9445 --name rejson --rm redislabs/rejson:2.0.6
Follow the instructions for Redis deployment in Kubernetes, with one extra detail.
When installing the Redis Helm package, provide a configuration file that specifies the container image and enables the required modules:
helm install redis bitnami/redis --set image.tag=6.2 -f values.yaml
where values.yaml
looks like:
image:
repository: redislabs/rejson
tag: 2.0.6
master:
extraFlags:
- --loadmodule
- /usr/lib/redis/modules/rejson.so
- --loadmodule
- /usr/lib/redis/modules/redisearch.so
Note
Azure Redis managed service does not support the RedisJson module and cannot be used with query.
Follow the instructions for Redis deployment in AWS.
Note
For query support you need to enable RediSearch and RedisJson.
Note
Memory Store does not support modules and cannot be used with query.
Next, start a Dapr application. Refer to this component configuration file, which contains query indexing schemas. Make sure to modify the redisHost
to reflect the local forwarding port which redislabs/rejson
uses.
dapr run --app-id demo --dapr-http-port 3500 --resources-path query-api-examples/components/redis
Now populate the state store with the employee dataset, so you can then query it later.
curl -X POST -H "Content-Type: application/json" -d @query-api-examples/dataset.json \
http://localhost:3500/v1.0/state/querystatestore?metadata.contentType=application/json
To make sure the data has been properly stored, you can retrieve a specific object
curl http://localhost:3500/v1.0/state/querystatestore/1?metadata.contentType=application/json
The result will be:
{
"city": "Seattle",
"state": "WA",
"person": {
"org": "Dev Ops",
"id": 1036
}
}
Now, let’s find all employees in the state of California and sort them by their employee ID in descending order.
This is the query:
{
"filter": {
"EQ": { "state": "CA" }
},
"sort": [
{
"key": "person.id",
"order": "DESC"
}
]
}
Execute the query with the following command:
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query1.json \
'http://localhost:3500/v1.0-alpha1/state/querystatestore/query?metadata.contentType=application/json&metadata.queryIndexName=orgIndx'
The result will be:
{
"results": [
{
"key": "3",
"data": {
"person": {
"org": "Finance",
"id": 1071
},
"city": "Sacramento",
"state": "CA"
},
"etag": "1"
},
{
"key": "7",
"data": {
"person": {
"org": "Dev Ops",
"id": 1015
},
"city": "San Francisco",
"state": "CA"
},
"etag": "1"
},
{
"key": "5",
"data": {
"person": {
"org": "Hardware",
"id": 1007
},
"city": "Los Angeles",
"state": "CA"
},
"etag": "1"
},
{
"key": "9",
"data": {
"person": {
"org": "Finance",
"id": 1002
},
"city": "San Diego",
"state": "CA"
},
"etag": "1"
}
]
}
The query syntax and documentation are available here
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.26 - RethinkDB
Component format
To setup RethinkDB state store, create a component of type state.rethinkdb
. See the how-to guide to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.rethinkdb
version: v1
metadata:
- name: address
  value: <REPLACE-RETHINKDB-ADDRESS> # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015
- name: database
value: <REPLACE-RETHINKDB-DB-NAME> # Required, e.g. dapr (alpha-numerics only)
- name: table
value: # Optional
- name: username
value: <USERNAME> # Optional
- name: password
value: <PASSWORD> # Optional
- name: archive
value: bool # Optional (whether or not store should keep archive table of all the state changes)
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
If the optional archive
metadata is set to true
, on each state change the RethinkDB state store will also log state changes with a timestamp in the daprstate_archive
table. This allows for time series analyses of the state managed by Dapr.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
address | Y | The address for RethinkDB server | "127.0.0.1:28015" , "rethinkdb.default.svc.cluster.local:28015" |
database | Y | The database to use. Alpha-numerics only | "dapr" |
table | N | The table name to use | "table" |
username | N | The username to connect with | "user" |
password | N | The password to connect with | "password" |
archive | N | Whether or not to archive the table | "true" , "false" |
Setup RethinkDB
You can run RethinkDB locally using Docker:
docker run --name rethinkdb -v "$PWD:/rethinkdb-data" -d rethinkdb:latest
To connect to the admin UI:
open "http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' rethinkdb):8080"
Related links
- Basic schema for a Dapr component
- Read the how-to guide for instructions on configuring state store components.
- State management building block.
5.3.27 - SQLite
This component allows using SQLite 3 as state store for Dapr.
The component is currently compiled with SQLite version 3.41.2.
Create a Dapr component
Create a file called sqlite.yaml
, paste the following, and replace the <CONNECTION STRING>
value with your connection string, which is the path to a file on disk.
If you want to also configure SQLite to store actors, add the actorStateStore
option as in the example below.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.sqlite
version: v1
metadata:
# Connection string
- name: connectionString
value: "data.db"
# Timeout for database operations, in seconds (optional)
#- name: timeoutInSeconds
# value: 20
# Name of the table where to store the state (optional)
#- name: tableName
# value: "state"
# Cleanup interval in seconds, to remove expired rows (optional)
#- name: cleanupInterval
# value: "1h"
# Set busy timeout for database operations
#- name: busyTimeout
# value: "2s"
# Uncomment this if you wish to use SQLite as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString |
Y | The connection string for the SQLite database. See below for more details. | "path/to/data.db" , "file::memory:?cache=shared" |
timeout |
N | Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s |
"30s" , 30 |
tableName |
N | Name of the table where the data is stored. Defaults to state . |
"state" |
metadataTableName |
N | Name of the table used by Dapr to store metadata for the component. Defaults to metadata . |
"metadata" |
cleanupInterval |
N | Interval, as a Go duration, to clean up rows with an expired TTL. Setting this to values <=0 disables the periodic cleanup. Default: 0 (i.e. disabled) |
"2h" , "30m" , -1 |
busyTimeout |
N | Interval, as a Go duration, to wait in case the SQLite database is currently busy serving another request, before returning a “database busy” error. Default: 2s |
"100ms" , "5s" |
disableWAL |
N | If set to true, disables Write-Ahead Logging for journaling of the SQLite database. You should set this to false if the database is stored on a network file system (for example, a folder mounted as a SMB or NFS share). This option is ignored for read-only or in-memory databases. |
"true" , "false" |
actorStateStore |
N | Consider this state store for actors. Defaults to "false" |
"true" , "false" |
The connectionString
parameter configures how to open the SQLite database.
- Normally, this is the path to a file on disk, relative to the current working directory, or absolute. For example:
"data.db"
(relative to the working directory) or"/mnt/data/mydata.db"
. - The path is interpreted by the SQLite library, so it’s possible to pass additional options to the SQLite driver using “URI options” if the path begins with
file:
. For example: "file:path/to/data.db?mode=ro"
opens the database at path path/to/data.db
in read-only mode. Refer to the SQLite documentation for all supported URI options. - The special case
":memory:"
launches the component backed by an in-memory SQLite database. This database is not persisted on disk, not shared across multiple Dapr instances, and all data is lost when the Dapr sidecar is stopped. When using an in-memory database, Dapr automatically sets the cache=shared
URI option.
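If you want a quick look at what Dapr has stored, you can open the same file with the sqlite3 command-line shell. This is a sketch; it assumes the connection string is data.db and the default table name state:
sqlite3 data.db "SELECT * FROM state LIMIT 10;"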
Advanced
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds
metadata property to indicate when the data should be considered “expired”.
Because SQLite doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
The cleanupInterval
metadata property sets the expired records deletion interval, which is disabled by default.
- Longer intervals require less frequent scans for expired rows, but can cause the database to store expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting
cleanupInterval
to a smaller value, for example5m
. - If you do not plan to use TTLs with Dapr and the SQLite state store, you should consider setting
cleanupInterval
to a value <= 0 (e.g.0
or-1
) to disable the periodic cleanup and reduce the load on the database. This is the default behavior.
The expiration_time
column in the state table, where the expiration date for records is stored, does not have an index by default, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is state
(the default), you can use this query:
CREATE INDEX idx_expiration_time
ON state (expiration_time);
Dapr does not automatically vacuum SQLite databases.
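If you want to reclaim disk space after many rows have been deleted, you can run VACUUM manually, for example with the sqlite3 command-line shell (assuming the database file is data.db):
sqlite3 data.db "VACUUM;"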
Sharing a SQLite database and using networked filesystems
Although you can have multiple Dapr instances accessing the same SQLite database (for example, because your application is scaled horizontally or because you have multiple apps accessing the same state store), there are some caveats you should keep in mind.
SQLite works best when all clients access a database file on the same, locally-mounted disk. Using virtual disks that are mounted from a SAN (Storage Area Network), as is common practice in virtualized or cloud environments, is fine.
However, storing your SQLite database in a networked filesystem (for example via NFS or SMB, but these examples are not an exhaustive list) should be done with care. The official SQLite documentation has a page dedicated to recommendations and caveats for running SQLite over a network.
Given the risk of data corruption that comes with running SQLite over a networked filesystem (such as via NFS or SMB), we do not recommend doing that with Dapr in a production environment. However, if you do want to do that, you should configure your SQLite Dapr component with disableWAL
set to true
.
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.3.28 - Zookeeper
Component format
To setup Zookeeper state store create a component of type state.zookeeper
. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.zookeeper
version: v1
metadata:
- name: servers
value: <REPLACE-WITH-COMMA-DELIMITED-SERVERS> # Required. Example: "zookeeper.default.svc.cluster.local:2181"
- name: sessionTimeout
value: <REPLACE-WITH-SESSION-TIMEOUT> # Required. Example: "5s"
- name: maxBufferSize
value: <REPLACE-WITH-MAX-BUFFER-SIZE> # Optional. default: "1048576"
- name: maxConnBufferSize
value: <REPLACE-WITH-MAX-CONN-BUFFER-SIZE> # Optional. default: "1048576"
- name: keyPrefixPath
value: <REPLACE-WITH-KEY-PREFIX-PATH> # Optional.
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
servers | Y | Comma delimited list of servers | "zookeeper.default.svc.cluster.local:2181" |
sessionTimeout | Y | The session timeout value | "5s" |
maxBufferSize | N | The maximum size of buffer. Defaults to "1048576" |
"1048576" |
maxConnBufferSize | N | The maximum size of connection buffer. Defaults to "1048576" |
"1048576" |
keyPrefixPath | N | The key prefix path in Zookeeper. No default | "dapr" |
Setup Zookeeper
You can run Zookeeper locally using Docker:
docker run --name some-zookeeper --restart always -d zookeeper
You can then interact with the server using localhost:2181
.
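For example, one way to open a Zookeeper CLI session against that container is to run zkCli.sh from a second container of the same image; this sketch uses the legacy --link flag and the container name from the command above:
docker run -it --rm --link some-zookeeper:zookeeper zookeeper zkCli.sh -server zookeeper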
The easiest way to install Zookeeper on Kubernetes is by using the Helm chart:
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install zookeeper incubator/zookeeper
This installs Zookeeper into the default
namespace.
To interact with Zookeeper, find the service with: kubectl get svc zookeeper
.
For example, if installing using the example above, the Zookeeper host address would be:
zookeeper.default.svc.cluster.local:2181
Related links
- Basic schema for a Dapr component
- Read this guide for instructions on configuring state store components
- State management building block
5.4 - Secret store component specs
The following table lists secret stores supported by the Dapr secrets building block. Learn how to set up different secret stores for Dapr secrets management.
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status | Alpha, Beta, Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Generic
Component | Multiple Key-Values Per Secret | Status | Component version | Since runtime version |
---|---|---|---|---|
HashiCorp Vault | ✅ | Stable | v1 | 1.10 |
Kubernetes secrets | ✅ | Stable | v1 | 1.0 |
Local environment variables | ❌ | Stable | v1 | 1.9 |
Local file | ✅ | Stable | v1 | 1.9 |
Alibaba Cloud
Component | Multiple Key-Values Per Secret | Status | Component version | Since runtime version |
---|---|---|---|---|
AlibabaCloud OOS Parameter Store | ❌ | Alpha | v1 | 1.6 |
Amazon Web Services (AWS)
Component | Multiple Key-Values Per Secret | Status | Component version | Since runtime version |
---|---|---|---|---|
AWS Secrets Manager | ❌ | Beta | v1 | 1.15 |
AWS SSM Parameter Store | ❌ | Alpha | v1 | 1.1 |
Google Cloud Platform (GCP)
Component | Multiple Key-Values Per Secret | Status | Component version | Since runtime version |
---|---|---|---|---|
GCP Secret Manager | ❌ | Alpha | v1 | 1.0 |
Microsoft Azure
Component | Multiple Key-Values Per Secret | Status | Component version | Since runtime version |
---|---|---|---|---|
Azure Key Vault | ❌ | Stable | v1 | 1.0 |
5.4.1 - AlibabaCloud OOS Parameter Store
Component format
To setup AlibabaCloud OOS Parameter Store secret store create a component of type secretstores.alicloud.parameterstore
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: alibabacloudparameterstore
spec:
type: secretstores.alicloud.parameterstore
version: v1
metadata:
- name: regionId
value: "[alicloud_region_id]"
- name: accessKeyId
value: "[alicloud_access_key_id]"
- name: accessKeySecret
value: "[alicloud_access_key_secret]"
- name: securityToken
value: "[alicloud_security_token]"
Warning
The above example uses secrets as plain strings. It is recommended to use a local secret store such as Kubernetes secret store or a local file to bootstrap secure key storage.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
regionId | Y | The specific region the AlibabaCloud OOS Parameter Store instance is deployed in | "cn-hangzhou" |
accessKeyId | Y | The AlibabaCloud Access Key ID to access this resource | "accessKeyId" |
accessKeySecret | Y | The AlibabaCloud Access Key Secret to access this resource | "accessKeySecret" |
securityToken | N | The AlibabaCloud Security Token to use | "securityToken" |
Optional per-request metadata properties
The following optional query parameters can be provided when retrieving secrets from this secret store:
Query Parameter | Description |
---|---|
metadata.version_id | Version for the given secret key |
metadata.path | (For bulk requests only) The path from the metadata. If not set, defaults to root path (all secrets). |
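For example, assuming a component named alibabacloudparameterstore (as in the example above), a parameter named mysecret, and the default Dapr HTTP port, these properties could be passed on the Dapr secrets API as follows; the names and values are illustrative:
# Retrieve a specific version of a single parameter
curl "http://localhost:3500/v1.0/secrets/alibabacloudparameterstore/mysecret?metadata.version_id=1"
# Bulk-retrieve all parameters under a given path
curl "http://localhost:3500/v1.0/secrets/alibabacloudparameterstore/bulk?metadata.path=/myapp/"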
Create an AlibabaCloud OOS Parameter Store instance
Setup AlibabaCloud OOS Parameter Store using the AlibabaCloud documentation: https://www.alibabacloud.com/help/en/doc-detail/186828.html.
Related links
5.4.2 - AWS Secrets Manager
Component format
To setup AWS Secrets Manager secret store create a component of type secretstores.aws.secretmanager
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
See Authenticating to AWS for information about authentication-related attributes.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: awssecretmanager
spec:
type: secretstores.aws.secretmanager
version: v1
metadata:
- name: region
value: "[aws_region]"
- name: accessKey
value: "[aws_access_key]"
- name: secretKey
value: "[aws_secret_key]"
- name: sessionToken
value: "[aws_session_token]"
Warning
The above example uses secrets as plain strings. It is recommended to use a local secret store such as Kubernetes secret store or a local file to bootstrap secure key storage.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
region | Y | The specific AWS region the AWS Secrets Manager instance is deployed in | "us-east-1" |
accessKey | Y | The AWS Access Key to access this resource | "key" |
secretKey | Y | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken | N | The AWS session token to use | "sessionToken" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Optional per-request metadata properties
The following optional query parameters can be provided when retrieving secrets from this secret store:
Query Parameter | Description |
---|---|
metadata.version_id | Version for the given secret key. |
metadata.version_stage | Version stage for the given secret key. |
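For example, assuming a component named awssecretmanager (as in the example above) and a secret named mysecret, a specific version stage could be requested like this; the secret name and stage label are illustrative:
curl "http://localhost:3500/v1.0/secrets/awssecretmanager/mysecret?metadata.version_stage=AWSCURRENT"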
Create an AWS Secrets Manager instance
Setup AWS Secrets Manager using the AWS documentation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html.
Related links
5.4.3 - AWS SSM Parameter Store
Component format
To setup AWS SSM Parameter Store secret store create a component of type secretstores.aws.parameterstore
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
See Authenticating to AWS for information about authentication-related attributes.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: awsparameterstore
spec:
type: secretstores.aws.parameterstore
version: v1
metadata:
- name: region
value: "[aws_region]"
- name: accessKey
value: "[aws_access_key]"
- name: secretKey
value: "[aws_secret_key]"
- name: sessionToken
value: "[aws_session_token]"
- name: prefix
value: "[secret_name]"
Warning
The above example uses secrets as plain strings. It is recommended to use a local secret store such as Kubernetes secret store or a local file to bootstrap secure key storage.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
region | Y | The specific AWS region the AWS SSM Parameter Store instance is deployed in | "us-east-1" |
accessKey | Y | The AWS Access Key to access this resource | "key" |
secretKey | Y | The AWS Secret Access Key to access this resource | "secretAccessKey" |
sessionToken | N | The AWS session token to use | "sessionToken" |
prefix | N | Allows you to specify more than one SSM parameter store secret store component. | "prefix" |
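For example, the prefix field lets you declare two components backed by the same Parameter Store but scoped to different parameter prefixes; the names and prefixes below are illustrative, and credentials are omitted on the assumption that IAM role-based authentication is used (see the note below):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ssm-team-a
spec:
  type: secretstores.aws.parameterstore
  version: v1
  metadata:
  - name: region
    value: "us-east-1"
  - name: prefix
    value: "/team-a"
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ssm-team-b
spec:
  type: secretstores.aws.parameterstore
  version: v1
  metadata:
  - name: region
    value: "us-east-1"
  - name: prefix
    value: "/team-b"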
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Create an AWS SSM Parameter Store instance
Setup AWS SSM Parameter Store using the AWS documentation: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html.
Related links
5.4.4 - Azure Key Vault secret store
Component format
To setup Azure Key Vault secret store, create a component of type secretstores.azure.keyvault
.
- See the secret store components guide on how to create and apply a secret store configuration.
- See the guide on referencing secrets to retrieve and use the secret with Dapr components.
- See the Configure the component section below.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName # Required
value: [your_keyvault_name]
- name: azureEnvironment # Optional, defaults to AZUREPUBLICCLOUD
value: "AZUREPUBLICCLOUD"
# See authentication section below for all options
- name: azureTenantId
value: "[your_service_principal_tenant_id]"
- name: azureClientId
value: "[your_service_principal_app_id]"
- name: azureCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
Authenticating with Microsoft Entra ID
The Azure Key Vault secret store component supports authentication with Microsoft Entra ID only. Before you enable this component:
- Read the Authenticating to Azure document.
- Create a Microsoft Entra ID application (also called a Service Principal).
- Alternatively, create a managed identity for your application platform.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
vaultName | Y | The name of the Azure Key Vault | "mykeyvault" |
azureEnvironment | N | Optional name for the Azure environment if using a different Azure cloud | "AZUREPUBLICCLOUD" (default value), "AZURECHINACLOUD", "AZUREUSGOVERNMENTCLOUD", "AZUREGERMANCLOUD" |
Auth metadata | | See Authenticating to Azure for more information | |
Additionally, you must provide the authentication fields as explained in the Authenticating to Azure document.
Optional per-request metadata properties
The following optional query parameters can be provided when retrieving secrets from this secret store:
Query Parameter | Description |
---|---|
metadata.version_id | Version for the given secret key. |
metadata.maxresults | (For bulk requests only) Number of secrets to return, after which the request will be truncated. |
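For instance, with a component named azurekeyvault (as configured in the examples below), a bulk request could be truncated after a set number of secrets; the port and limit are illustrative:
curl "http://localhost:3500/v1.0/secrets/azurekeyvault/bulk?metadata.maxresults=10"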
Example
Prerequisites
- Azure Subscription
- Azure CLI
- jq
- You are using bash or zsh shell
- You’ve created a Microsoft Entra ID application (Service Principal) per the instructions in Authenticating to Azure. You will need the following values:
Value | Description |
---|---|
SERVICE_PRINCIPAL_ID | The ID of the Service Principal that you created for a given application |
Create an Azure Key Vault and authorize a Service Principal
- Set a variable with the Service Principal that you created:
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
- Set a variable with the location in which to create all resources:
LOCATION="[your_location]"
(You can get the full list of options with: az account list-locations --output tsv)
- Create a Resource Group, giving it any name you’d like:
RG_NAME="[resource_group_name]"
RG_ID=$(az group create \
--name "${RG_NAME}" \
--location "${LOCATION}" \
| jq -r .id)
- Create an Azure Key Vault that uses Azure RBAC for authorization:
KEYVAULT_NAME="[key_vault_name]"
az keyvault create \
--name "${KEYVAULT_NAME}" \
--enable-rbac-authorization true \
--resource-group "${RG_NAME}" \
--location "${LOCATION}"
- Using RBAC, assign a role to the Microsoft Entra ID application so it can access the Key Vault.
In this case, assign the “Key Vault Secrets User” role, which has the “Get secrets” permission over Azure Key Vault.
az role assignment create \
--assignee "${SERVICE_PRINCIPAL_ID}" \
--role "Key Vault Secrets User" \
--scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"
Other less restrictive roles, like “Key Vault Secrets Officer” and “Key Vault Administrator”, can be used, depending on your application. See Microsoft Docs for more information about Azure built-in roles for Key Vault.
Configure the component
Using a client secret
To use a client secret, create a file called azurekeyvault.yaml
in the components directory. Use the following template, filling in the Microsoft Entra ID application you created:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
value : "[your_client_secret]"
Using a certificate
If you want to use a certificate saved on the local disk instead, use the following template. Fill in the details of the Microsoft Entra ID application you created:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. Before you start, you need the details of the Microsoft Entra ID application you created.
Using a client secret
-
Create a Kubernetes secret using the following command:
kubectl create secret generic [your_k8s_secret_name] --from-literal=[your_k8s_secret_key]=[your_client_secret]
- [your_client_secret] is the application’s client secret as generated above
- [your_k8s_secret_name] is the secret name in the Kubernetes secret store
- [your_k8s_secret_key] is the secret key in the Kubernetes secret store
-
Create an azurekeyvault.yaml component file. The component YAML refers to the Kubernetes secret store using the auth property, and secretKeyRef refers to the client secret stored in the Kubernetes secret store.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureClientSecret
    secretKeyRef:
      name: "[your_k8s_secret_name]"
      key: "[your_k8s_secret_key]"
auth:
  secretStore: kubernetes
-
Apply the azurekeyvault.yaml component:
kubectl apply -f azurekeyvault.yaml
Using a certificate
-
Create a Kubernetes secret using the following command:
kubectl create secret generic [your_k8s_secret_name] --from-file=[your_k8s_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
- [pfx_certificate_file_fully_qualified_local_path] is the path of the PFX file you obtained earlier
- [your_k8s_secret_name] is the secret name in the Kubernetes secret store
- [your_k8s_secret_key] is the secret key in the Kubernetes secret store
-
Create an azurekeyvault.yaml component file. The component YAML refers to the Kubernetes secret store using the auth property, and secretKeyRef refers to the certificate stored in the Kubernetes secret store.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureCertificate
    secretKeyRef:
      name: "[your_k8s_secret_name]"
      key: "[your_k8s_secret_key]"
auth:
  secretStore: kubernetes
-
Apply the azurekeyvault.yaml component:
kubectl apply -f azurekeyvault.yaml
Using Azure managed identity
-
Ensure your AKS cluster has managed identity enabled and follow the guide for using managed identities.
-
Create an azurekeyvault.yaml component file. The component YAML refers to a particular KeyVault name. The managed identity you will use in a later step must be given read access to this particular KeyVault instance.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
-
Apply the azurekeyvault.yaml component:
kubectl apply -f azurekeyvault.yaml
-
Create and assign a managed identity at the pod-level via either:
- Microsoft Entra ID workload identity (preferred method)
- Microsoft Entra ID pod identity
Important: While both Microsoft Entra ID pod identity and workload identity are in preview, currently Microsoft Entra ID Workload Identity is planned for general availability (stable state).
-
After creating a workload identity, give it read permissions:
- On your desired KeyVault instance
- In your application deployment. Inject the pod identity both:
  - Via a label annotation
  - By specifying the Kubernetes service account associated with the desired workload identity
apiVersion: v1
kind: Pod
metadata:
  name: mydaprdemoapp
  labels:
    aadpodidbinding: $POD_IDENTITY_NAME
Using Azure managed identity directly vs. via Microsoft Entra ID workload identity
When using managed identity directly, you can have multiple identities associated with an app, requiring azureClientId
to specify which identity should be used.
However, when using managed identity via Microsoft Entra ID workload identity, azureClientId
is not necessary and has no effect. The Azure identity to be used is inferred from the service account tied to an Azure identity via the Azure federated identity.
References
5.4.5 - GCP Secret Manager
Component format
To setup GCP Secret Manager secret store create a component of type secretstores.gcp.secretmanager
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: gcpsecretmanager
spec:
type: secretstores.gcp.secretmanager
version: v1
metadata:
- name: type
value: <replace-with-account-type>
- name: project_id
value: <replace-with-project-id>
- name: private_key_id
value: <replace-with-private-key-id>
- name: client_email
value: <replace-with-email>
- name: client_id
value: <replace-with-client-id>
- name: auth_uri
value: <replace-with-auth-uri>
- name: token_uri
value: <replace-with-token-uri>
- name: auth_provider_x509_cert_url
value: <replace-with-auth-provider-cert-url>
- name: client_x509_cert_url
value: <replace-with-client-cert-url>
- name: private_key
value: <replace-with-private-key>
Warning
The above example uses secrets as plain strings. It is recommended to use a local secret store such as Kubernetes secret store or a local file to bootstrap secure key storage.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
project_id | Y | The project ID associated with this component. | "project_id" |
type | N | The type of the account. | "service_account" |
private_key_id | N | If using explicit credentials, this field should contain the private_key_id field from the service account json document | "privateKeyId" |
private_key | N | If using explicit credentials, this field should contain the private_key field from the service account json. Replace with x509 cert | 12345-12345 |
client_email | N | If using explicit credentials, this field should contain the client_email field from the service account json | "client@email.com" |
client_id | N | If using explicit credentials, this field should contain the client_id field from the service account json | 0123456789-0123456789 |
auth_uri | N | If using explicit credentials, this field should contain the auth_uri field from the service account json | https://accounts.google.com/o/oauth2/auth |
token_uri | N | If using explicit credentials, this field should contain the token_uri field from the service account json | https://oauth2.googleapis.com/token |
auth_provider_x509_cert_url | N | If using explicit credentials, this field should contain the auth_provider_x509_cert_url field from the service account json | https://www.googleapis.com/oauth2/v1/certs |
client_x509_cert_url | N | If using explicit credentials, this field should contain the client_x509_cert_url field from the service account json | https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com |
GCP Credentials
Since the GCP Secret Manager component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained further in the Authenticate to GCP Cloud services using client libraries guide. Also, see how to Set up Application Default Credentials.
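Following that default behavior, a minimal sketch of a component that relies purely on Application Default Credentials could omit the explicit credential fields and set only the required project_id; the value is illustrative:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcpsecretmanager
spec:
  type: secretstores.gcp.secretmanager
  version: v1
  metadata:
  - name: project_id
    value: "my-gcp-project"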
Optional per-request metadata properties
The following optional query parameters can be provided to the GCP Secret Manager component:
Query Parameter | Description |
---|---|
metadata.version_id | Version for the given secret key. |
Setup GCP Secret Manager instance
Setup GCP Secret Manager using the GCP documentation: https://cloud.google.com/secret-manager/docs/quickstart.
Related links
5.4.6 - HashiCorp Vault
Create the Vault component
To setup HashiCorp Vault secret store create a component of type secretstores.hashicorp.vault
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: vault
spec:
type: secretstores.hashicorp.vault
version: v1
metadata:
- name: vaultAddr
value: [vault_address] # Optional. Default: "https://127.0.0.1:8200"
- name: caCert # Optional. This or caPath or caPem
value: "[ca_cert]"
- name: caPath # Optional. This or CaCert or caPem
value: "[path_to_ca_cert_file]"
- name: caPem # Optional. This or CaCert or CaPath
value : "[encoded_ca_cert_pem]"
- name: skipVerify # Optional. Default: false
value : "[skip_tls_verification]"
- name: tlsServerName # Optional.
value : "[tls_config_server_name]"
- name: vaultTokenMountPath # Required if vaultToken not provided. Path to token file.
value : "[path_to_file_containing_token]"
- name: vaultToken # Required if vaultTokenMountPath not provided. Token value.
value : "[vault_token]"
- name: vaultKVPrefix # Optional. Default: "dapr"
value : "[vault_prefix]"
- name: vaultKVUsePrefix # Optional. default: "true"
value: "[true/false]"
- name: enginePath # Optional. default: "secret"
value: "secret"
- name: vaultValueType # Optional. default: "map"
value: "map"
Warning
The above example uses secrets as plain strings. It is recommended to use a local secret store such as Kubernetes secret store or a local file to bootstrap secure key storage.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
vaultAddr | N | The address of the Vault server. Defaults to "https://127.0.0.1:8200" | "https://127.0.0.1:8200" |
caPem | N | The inlined contents of the CA certificate to use, in PEM format. If defined, takes precedence over caPath and caCert. | See below |
caPath | N | The path to a folder holding the CA certificate file to use, in PEM format. If the folder contains multiple files, only the first file found will be used. If defined, takes precedence over caCert. | "path/to/cacert/holding/folder" |
caCert | N | The path to the CA certificate to use, in PEM format. | "path/to/cacert.pem" |
skipVerify | N | Skip TLS verification. Defaults to "false" | "true", "false" |
tlsServerName | N | The name of the server requested during the TLS handshake in order to support virtual hosting. This value is also used to verify the TLS certificate presented by the Vault server. | "tls-server" |
vaultTokenMountPath | Y | Path to the file containing the token. Required if vaultToken is not provided. | "path/to/file" |
vaultToken | Y | Token for authentication within Vault. Required if vaultTokenMountPath is not provided. | "tokenValue" |
vaultKVPrefix | N | The prefix in Vault. Defaults to "dapr" | "dapr", "myprefix" |
vaultKVUsePrefix | N | If false, vaultKVPrefix is forced to be empty. If the value is not given or set to true, vaultKVPrefix is used when accessing the vault. Setting it to false is needed to be able to use the BulkGetSecret method of the store. | "true", "false" |
enginePath | N | The engine path in Vault. Defaults to "secret" | "kv", "any" |
vaultValueType | N | Vault value type. map means to parse the value into map[string]string, text means to use the value as a string. map sets the multipleKeyValuesPerSecret behavior. text makes Vault behave as a secret store with name/value semantics. Defaults to "map" | "map", "text" |
Optional per-request metadata properties
The following optional query parameters can be provided to Hashicorp Vault secret store component:
Query Parameter | Description |
---|---|
metadata.version_id | Version for the given secret key. |
Setup Hashicorp Vault instance
Setup Hashicorp Vault using the Vault documentation: https://www.vaultproject.io/docs/install/index.html.
For Kubernetes, you can use the Helm Chart: https://github.com/hashicorp/vault-helm.
Multiple key-values per secret
HashiCorp Vault supports multiple key-values in a secret. While this behavior is ultimately dependent on the underlying secret engine configured by enginePath
, it may change the way you store and retrieve keys from Vault. For instance, multiple key-values in a secret is the behavior exposed in the secret
engine, the default engine configured by the enginePath
field.
When retrieving secrets, a JSON payload is returned with the key names as fields and their respective values.
Suppose you add a secret to your Vault setup as follows:
vault kv put secret/dapr/mysecret firstKey=aValue secondKey=anotherValue thirdKey=yetAnotherDistinctValue
In the example above, the secret is named mysecret
and it has 3 key-values under it.
Observe that the secret is created under a dapr
prefix, as this is the default value for the vaultKVPrefix
flag.
Retrieving it from Dapr would result in the following output:
$ curl http://localhost:3501/v1.0/secrets/my-hashicorp-vault/mysecret
{
"firstKey": "aValue",
"secondKey": "anotherValue",
"thirdKey": "yetAnotherDistinctValue"
}
Notice that the name of the secret (mysecret
) is not repeated in the result.
TLS Server verification
The fields skipVerify
, tlsServerName
, caCert
, caPath
, and caPem
control if and how Dapr verifies the vault server’s certificate while connecting using TLS/HTTPS.
Inline CA PEM caPem
The caPem
field value should be the contents of the PEM CA certificate you want to use. Given that PEM certificates are made of multiple lines, defining that value might seem challenging at first. YAML allows for a few ways of defining multiline values.
Below is one way to define a caPem
field.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: vault
spec:
type: secretstores.hashicorp.vault
version: v1
metadata:
- name: vaultAddr
value: https://127.0.0.1:8200
- name: caPem
value: |-
-----BEGIN CERTIFICATE-----
<< the rest of your PEM file's contents here, indented appropriately >>
-----END CERTIFICATE-----
Related links
5.4.7 - HuaweiCloud Cloud Secret Management Service (CSMS)
Component format
To setup HuaweiCloud Cloud Secret Management Service (CSMS) secret store create a component of type secretstores.huaweicloud.csms
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: huaweicloudcsms
spec:
type: secretstores.huaweicloud.csms
version: v1
metadata:
- name: region
value: "[huaweicloud_region]"
- name: accessKey
value: "[huaweicloud_access_key]"
- name: secretAccessKey
value: "[huaweicloud_secret_access_key]"
Warning
The above example uses secrets as plain strings. It is recommended to use a local secret store such as Kubernetes secret store or a local file to bootstrap secure key storage.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
region | Y | The specific region the HuaweiCloud CSMS instance is deployed in | "cn-north-4" |
accessKey | Y | The HuaweiCloud Access Key to access this resource | "accessKey" |
secretAccessKey | Y | The HuaweiCloud Secret Access Key to access this resource | "secretAccessKey" |
Optional per-request metadata properties
The following optional query parameters can be provided when retrieving secrets from this secret store:
Query Parameter | Description |
---|---|
metadata.version_id | Version for the given secret key. |
Setup HuaweiCloud Cloud Secret Management Service (CSMS) instance
Setup HuaweiCloud Cloud Secret Management Service (CSMS) using the HuaweiCloud documentation: https://support.huaweicloud.com/intl/en-us/usermanual-dew/dew_01_9993.html.
Related links
5.4.8 - Kubernetes secrets
Default Kubernetes secret store component
When Dapr is deployed to a Kubernetes cluster, a secret store with the name kubernetes
is automatically provisioned. This pre-provisioned secret store allows you to use the native Kubernetes secret store with no need to author, deploy or maintain a component configuration file for the secret store and is useful for developers looking to simply access secrets stored natively in a Kubernetes cluster.
A custom component definition file for a Kubernetes secret store can still be configured (See below for details). Using a custom definition decouples referencing the secret store in your code from the hosting platform as the store name is not fixed and can be customized, keeping your code more generic and portable. Additionally, by explicitly defining a Kubernetes secret store component you can connect to a Kubernetes secret store from a local Dapr self-hosted installation. This requires a valid kubeconfig
file.
Scoping secret store access
When limiting access to secrets in your application using secret scopes, it’s important to include the default secret store in the scope definition in order to restrict it.
Create a custom Kubernetes secret store component
To setup a Kubernetes secret store create a component of type secretstores.kubernetes
. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mycustomsecretstore
spec:
type: secretstores.kubernetes
version: v1
metadata: []
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
defaultNamespace | N | Default namespace to retrieve secrets from. If unset, the namespace must be specified in each request metadata or via the environment variable NAMESPACE | "default-ns" |
kubeconfigPath | N | The path to the kubeconfig file. If not specified, the store uses the default in-cluster config value | "/path/to/kubeconfig" |
Optional per-request metadata properties
The following optional query parameters can be provided to Kubernetes secret store component:
Query Parameter | Description |
---|---|
metadata.namespace | The namespace of the secret. If not specified, the namespace of the pod is used. |
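For example, to read a secret from a specific namespace through the built-in kubernetes store; the secret name, namespace, and port are illustrative:
curl "http://localhost:3500/v1.0/secrets/kubernetes/my-secret?metadata.namespace=production"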
Related links
5.4.9 - Local environment variables (for Development)
This Dapr secret store component uses locally defined environment variables and does not use authentication.
Warning
This approach to secret management is not recommended for production environments.
Component format
To setup local environment variables secret store create a component of type secretstores.local.env
. Create a file with the following content in your ./components
directory:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: envvar-secret-store
spec:
type: secretstores.local.env
version: v1
metadata:
# - name: prefix
# value: "MYAPP_"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
prefix | N | If set, limits operations to environment variables with the given prefix. The prefix is removed from the returned secrets’ names. The matching is case-insensitive on Windows and case-sensitive on all other operating systems. | "MYAPP_" |
Notes
For security reasons, this component cannot be used to access these environment variables:
- APP_API_TOKEN
- Any variable whose name begins with the DAPR_ prefix
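As a quick sketch of how this store is used (variable name, value, and port are illustrative): set an environment variable before starting the application and its Dapr sidecar, then read it back through the Dapr secrets API using the component name defined above.
export MY_DB_PASSWORD="s3cr3t"
curl http://localhost:3500/v1.0/secrets/envvar-secret-store/MY_DB_PASSWORD
# expected to return the secret as a JSON map, e.g. {"MY_DB_PASSWORD":"s3cr3t"}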
Related Links
5.4.10 - Local file (for Development)
This Dapr secret store component reads plain text JSON from a given file and does not use authentication.
Warning
This approach to secret management is not recommended for production environments.
Component format
To setup local file based secret store create a component of type secretstores.local.file
. Create a file with the following content in your ./components
directory:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: local-secret-store
spec:
type: secretstores.local.file
version: v1
metadata:
- name: secretsFile
value: [path to the JSON file]
- name: nestedSeparator
value: ":"
- name: multiValued
value: "false"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
secretsFile | Y | The path to the file where secrets are stored | "path/to/file.json" |
nestedSeparator | N | Used by the store when flattening the JSON hierarchy to a map. Defaults to ":" | ":" |
multiValued | N | "true" sets the multipleKeyValuesPerSecret behavior. Allows one level of multi-valued key/value pairs before flattening the JSON hierarchy. Defaults to "false" | "true" |
Setup JSON file to hold the secrets
Given the following JSON loaded from secretsFile
:
{
"redisPassword": "your redis password",
"connectionStrings": {
"sql": "your sql connection string",
"mysql": "your mysql connection string"
}
}
The flag multiValued
determines whether the secret store presents a name/value behavior or a multiple key-value per secret behavior.
Name/Value semantics
If multiValued
is false
, the store loads the JSON file and creates a map with the following key-value pairs:
flattened key | value |
---|---|
“redisPassword” | "your redis password" |
“connectionStrings:sql” | "your sql connection string" |
“connectionStrings:mysql” | "your mysql connection string" |
If the multiValued setting is set to false, invoking a GET request on the key connectionStrings results in a 500 HTTP response and an error message. For example:
$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings
{
"errorCode": "ERR_SECRET_GET",
"message": "failed getting secret with key connectionStrings from secret store local-secret-store: secret connectionStrings not found"
}
This error is expected, since the connectionStrings
key is not present, per the table above.
However, requesting for flattened key connectionStrings:sql
would result in a successful response, with the following:
$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings:sql
{
"connectionStrings:sql": "your sql connection string"
}
Multiple key-values behavior
If multiValued
is true
, the secret store enables multiple key-value per secret behavior:
- Nested structures after the top level will be flattened.
- It parses the same JSON file into this table:
key | value |
---|---|
“redisPassword” | "your redis password" |
“connectionStrings” | {"mysql":"your mysql connection string","sql":"your sql connection string"} |
Notice that in the above table:
connectionStrings
is now a JSON object with two keys:mysql
andsql
.- The
connectionStrings:sql
andconnectionStrings:mysql
flattened keys from the table mapped for name/value semantics are missing.
Invoking a GET
request on the key connectionStrings
now results in a successful HTTP response similar to the following:
$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings
{
"sql": "your sql connection string",
"mysql": "your mysql connection string"
}
Meanwhile, requesting for the flattened key connectionStrings:sql
would now return a 500 HTTP error response with the following:
{
"errorCode": "ERR_SECRET_GET",
"message": "failed getting secret with key connectionStrings:sql from secret store local-secret-store: secret connectionStrings:sql not found"
}
Handling deeper nesting levels
Notice that, as stated in the spec metadata fields table, multiValued
only handles a single nesting level.
Let’s say you have a local file secret store with multiValued
enabled, pointing to a secretsFile
with the following JSON content:
{
"redisPassword": "your redis password",
"connectionStrings": {
"mysql": {
"username": "your mysql username",
"password": "your mysql password"
}
}
}
The contents of key mysql
under connectionStrings
have a nesting level greater than 1 and would be flattened.
Here is how it would look in memory:
key | value |
---|---|
“redisPassword” | "your redis password" |
“connectionStrings” | { "mysql:username": "your mysql username", "mysql:password": "your mysql password" } |
Once again, requesting for key connectionStrings
results in a successful HTTP response but its contents, as shown in the table above, would be flattened:
$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings
{
"mysql:username": "your mysql username",
"mysql:password": "your mysql password"
}
This is useful in order to mimic secret stores like Vault or Kubernetes that return multiple key/value pairs per secret key.
Related links
5.5 - Configuration store component specs
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status | Alpha, Beta, Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Generic
Component | Status | Component version | Since runtime version |
---|---|---|---|
PostgreSQL | Stable | v1 | 1.11 |
Redis | Stable | v1 | 1.11 |
Microsoft Azure
Component | Status | Component version | Since runtime version |
---|---|---|---|
Azure App Configuration | Alpha | v1 | 1.9 |
5.5.1 - Azure App Configuration
Component format
To set up an Azure App Configuration configuration store, create a component of type configuration.azure.appconfig
.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: configuration.azure.appconfig
version: v1
metadata:
- name: host # host should be used when Azure Authentication mechanism is used.
value: <HOST>
- name: connectionString # connectionString should not be used when Azure Authentication mechanism is used.
value: <CONNECTIONSTRING>
- name: maxRetries
value: # Optional
- name: retryDelay
value: # Optional
- name: maxRetryDelay
value: # Optional
- name: azureEnvironment # Optional, defaults to AZUREPUBLICCLOUD
value: "AZUREPUBLICCLOUD"
# See authentication section below for all options
- name: azureTenantId # Optional
value: "[your_service_principal_tenant_id]"
- name: azureClientId # Optional
value: "[your_service_principal_app_id]"
- name: azureCertificateFile # Optional
value : "[pfx_certificate_file_fully_qualified_local_path]"
- name: subscribePollInterval # Optional
value: #Optional [Expected format example - 24h]
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
connectionString | Y* | Connection String for the Azure App Configuration instance. No default. Can be secretKeyRef to use a secret reference. *Mutually exclusive with host field. *Not to be used when Azure Authentication is used | Endpoint=https://foo.azconfig.io;Id=osOX-l9-s0:sig;Secret=00000000000000000000000000000000000000000000 |
host | N* | Endpoint for the Azure App Configuration instance. No default. *Mutually exclusive with connectionString field. *To be used when Azure Authentication is used | https://dapr.azconfig.io |
maxRetries | N | Maximum number of retries before giving up. Defaults to 3 | 5, 10 |
retryDelay | N | RetryDelay specifies the initial amount of delay to use before retrying an operation. The delay increases exponentially with each retry up to the maximum specified by MaxRetryDelay. Defaults to 4 seconds; "-1" disables delay between retries. | 4s |
maxRetryDelay | N | MaxRetryDelay specifies the maximum delay allowed before retrying an operation. Typically the value is greater than or equal to the value specified in RetryDelay. Defaults to 120 seconds; "-1" disables the limit | 120s |
subscribePollInterval | N | subscribePollInterval specifies the poll interval in nanoseconds for polling the subscribed keys for any changes. This will be updated in the future to the Go Time format. The default polling interval is set to 24 hours. | 24h |
Note: either host or connectionString must be specified.
Authenticating with Connection String
Access an App Configuration instance using its connection string, which is available in the Azure portal. Since connection strings contain credential information, you should treat them as secrets and use a secret store.
Authenticating with Microsoft Entra ID
The Azure App Configuration configuration store component also supports authentication with Microsoft Entra ID. Before you enable this component:
- Read the Authenticating to Azure document.
- Create a Microsoft Entra ID application (also called a Service Principal).
- Alternatively, create a managed identity for your application platform.
Set up Azure App Configuration
You need an Azure subscription to set up Azure App Configuration.
-
Start the Azure App Configuration creation flow. Log in if necessary.
-
Click Create to kickoff deployment of your Azure App Configuration instance.
-
Once your instance is created, grab the Host (Endpoint) or your Connection string:
- For the Host: navigate to the resource’s Overview and copy Endpoint.
- For your connection string: navigate to Settings > Access Keys and copy your Connection string.
-
Add your host or your connection string to an
azappconfig.yaml
file that Dapr can apply.Set the
host
key to[Endpoint]
or theconnectionString
key to the values you saved earlier.Note
In a production-grade application, follow the secret management instructions to securely manage your secrets.
Azure App Configuration request metadata
In Azure App Configuration, you can use labels to define different values for the same key. For example, you can define a single key with different values for development and production. You can specify which label to load when connecting to App Configuration.
The Azure App Configuration store component supports the following optional label
metadata property:
label
: The label of the configuration to retrieve. If not present, the configuration store returns the configuration for the specified key and a null label.
The label can be populated using query parameters in the request URL:
curl http://localhost:<daprPort>/v1.0/configuration/<store-name>?key=<key name>&metadata.label=<label value>
Related links
5.5.2 - PostgreSQL
Component format
To set up a PostgreSQL configuration store, create a component of type configuration.postgresql:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: configuration.postgresql
version: v1
metadata:
# Connection string
- name: connectionString
value: "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=config"
# Name of the table which holds configuration information
- name: table
value: "[your_configuration_table_name]"
# Individual connection parameters - can be used instead to override connectionString parameters
#- name: host
# value: "localhost"
#- name: hostaddr
# value: "127.0.0.1"
#- name: port
# value: "5432"
#- name: database
# value: "my_db"
#- name: user
# value: "postgres"
#- name: password
# value: "example"
#- name: sslRootCert
# value: "/path/to/ca.crt"
# Timeout for database operations, in seconds (optional)
#- name: timeoutInSeconds
# value: 20
# Name of the table where to store the state (optional)
#- name: tableName
# value: "state"
# Name of the table where to store metadata used by Dapr (optional)
#- name: metadataTableName
# value: "dapr_metadata"
# Cleanup interval in seconds, to remove expired rows (optional)
#- name: cleanupIntervalInSeconds
# value: 3600
# Maximum number of connections pooled by this component (optional)
#- name: maxConns
# value: 0
# Max idle time for connections before they're closed (optional)
#- name: connectionMaxIdleTime
# value: 0
# Controls the default mode for executing queries. (optional)
#- name: queryExecMode
# value: ""
# Uncomment this if you wish to use PostgreSQL as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Authenticate using a connection string
The following metadata options are required to authenticate using a PostgreSQL connection string.
Field | Required | Details | Example |
---|---|---|---|
connectionString | Y | The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string. | "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db" |
Authenticate using individual connection parameters
In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.
Field | Required | Details | Example |
---|---|---|---|
host | Y | The host name or IP address of the PostgreSQL server | "localhost" |
hostaddr | N | The IP address of the PostgreSQL server (alternative to host) | "127.0.0.1" |
port | Y | The port number of the PostgreSQL server | "5432" |
database | Y | The name of the database to connect to | "my_db" |
user | Y | The PostgreSQL user to connect as | "postgres" |
password | Y | The password for the PostgreSQL user | "example" |
sslRootCert | N | Path to the SSL root certificate file | "/path/to/ca.crt" |
Note
When using individual connection parameters, these will override the ones present in the connectionString.
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field | Required | Details | Example |
---|---|---|---|
useAzureAD | Y | Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID. | "true" |
connectionString | Y | The connection string for the PostgreSQL database. This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password. | "host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require" |
azureTenantId | N | ID of the Microsoft Entra ID tenant | "cd4b2887-304c-…" |
azureClientId | N | Client ID (application ID) | "c7dd251f-811f-…" |
azureClientSecret | N | Client secret (application password) | "Ecy3X…" |
Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam
database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
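As a sketch of the database-side setup this implies, the user referenced in the connection string is created in PostgreSQL and granted the rds_iam role that AWS RDS uses for IAM authentication; the user name is illustrative:
-- Create the database user referenced in the connection string
CREATE USER myapplication WITH LOGIN;
-- Allow it to authenticate through AWS IAM
GRANT rds_iam TO myapplication;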
Field | Required | Details | Example |
---|---|---|---|
useAWSIAM | Y | Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | "true" |
connectionString | Y | The connection string for the PostgreSQL database. This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require" |
awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to. | "us-east-1" |
awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account | "AKIAIOSFODNN7EXAMPLE" |
awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | "TOKEN" |
Other metadata options
Field | Required | Details | Example |
---|---|---|---|
table | Y | Table name for configuration information, must be lowercased. | configtable |
timeout | N | Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s | "30s", 30 |
maxConns | N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | "4" |
connectionMaxIdleTime | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose. | "5m" |
queryExecMode | N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use exec or simple_protocol. | "simple_protocol" |
Set up PostgreSQL as Configuration Store
-
Start the PostgreSQL Database
-
Connect to the PostgreSQL database and set up a configuration table with the following schema:
Field | Datatype | Nullable | Details |
---|---|---|---|
KEY | VARCHAR | N | Holds "Key" of the configuration attribute |
VALUE | VARCHAR | N | Holds Value of the configuration attribute |
VERSION | VARCHAR | N | Holds version of the configuration attribute |
METADATA | JSON | Y | Holds Metadata as JSON |
CREATE TABLE IF NOT EXISTS table_name (
    KEY VARCHAR NOT NULL,
    VALUE VARCHAR NOT NULL,
    VERSION VARCHAR NOT NULL,
    METADATA JSON
);
-
Create a TRIGGER on the configuration table. An example function to create a TRIGGER is as follows:
CREATE OR REPLACE FUNCTION notify_event() RETURNS TRIGGER AS $$
    DECLARE
        data json;
        notification json;
    BEGIN
        IF (TG_OP = 'DELETE') THEN
            data = row_to_json(OLD);
        ELSE
            data = row_to_json(NEW);
        END IF;
        notification = json_build_object(
            'table', TG_TABLE_NAME,
            'action', TG_OP,
            'data', data);
        PERFORM pg_notify('config', notification::text);
        RETURN NULL;
    END;
$$ LANGUAGE plpgsql;
-
Create the trigger with data encapsulated in the field labeled as data:
notification = json_build_object(
    'table', TG_TABLE_NAME,
    'action', TG_OP,
    'data', data);
-
The channel mentioned as attribute to
pg_notify
should be used when subscribing for configuration notifications -
Since this is a generic trigger function, map it to the configuration table:
CREATE TRIGGER config
    AFTER INSERT OR UPDATE OR DELETE ON configtable
    FOR EACH ROW EXECUTE PROCEDURE notify_event();
-
In the subscribe request, add an additional metadata field with the key pgNotifyChannel and the value set to the same channel name passed to pg_notify. From the example above, it should be set to config.
Note
When calling the subscribe API, metadata.pgNotifyChannel should be used to specify the name of the channel to listen on for notifications from the PostgreSQL configuration store.
Any number of keys can be added to a subscription request. Each subscription uses an exclusive database connection. It is strongly recommended to subscribe to multiple keys within a single subscription. This helps optimize the number of connections to the database.
Example of subscribe HTTP API:
curl -l 'http://<host>:<dapr-http-port>/configuration/mypostgresql/subscribe?key=<keyname1>&key=<keyname2>&metadata.pgNotifyChannel=<channel name>'
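After the table and trigger are in place, individual keys can also be read through the Dapr configuration API; the store name, key, and port below are illustrative:
curl "http://localhost:3500/v1.0/configuration/mypostgresql?key=myconfigkey"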
Related links
5.5.3 - Redis
Component format
To setup Redis configuration store create a component of type configuration.redis
. See this guide on how to create and apply a configuration store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: configuration.redis
version: v1
metadata:
- name: redisHost
value: <address>:6379
- name: redisPassword
value: **************
- name: useEntraID
value: "true"
- name: enableTLS
value: <bool>
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
redisHost | Y | The Redis host address | |
redisPassword | N | The Redis password | |
redisUsername | N | Username for the Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly. | |
enableTLS | N | If the Redis instance supports TLS with public certificates, it can be configured to enable or disable TLS. Defaults to "false" | |
clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here | |
clientKey | N | The content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here | |
failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to "false" | |
sentinelMasterName | N | The Sentinel master name. See Redis Sentinel Documentation | |
sentinelUsername | N | Username for Redis Sentinel. Applicable only when "failover" is true, and Redis Sentinel has authentication enabled | |
sentinelPassword | N | Password for Redis Sentinel. Applicable only when "failover" is true, and Redis Sentinel has authentication enabled | |
redisType | N | The type of Redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for Redis cluster mode. Defaults to "node". | |
redisDB | N | Database selected after connecting to Redis. If "redisType" is "cluster", this option is ignored. Defaults to "0". | |
redisMaxRetries | N | Maximum number of times to retry commands before giving up. Default is to not retry failed commands. | |
redisMinRetryInterval | N | Minimum backoff for Redis commands between each retry. Default is "8ms"; "-1" disables backoff. | |
redisMaxRetryInterval | N | Maximum backoff for Redis commands between each retry. Default is "512ms"; "-1" disables backoff. | |
dialTimeout | N | Dial timeout for establishing new connections. Defaults to "5s". | |
readTimeout | N | Timeout for socket reads. If reached, Redis commands fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout. | |
writeTimeout | N | Timeout for socket writes. If reached, Redis commands fail with a timeout instead of blocking. Defaults to readTimeout. | |
poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | |
poolTimeout | N | Amount of time the client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | |
maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | |
minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0". | |
idleCheckFrequency | N | Frequency of idle checks made by the idle connections reaper. Default is "1m". "-1" disables the idle connections reaper. | |
idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than the server’s timeout. Default is "5m". "-1" disables the idle timeout check. | |
Setup Redis
Dapr can use any Redis instance: containerized, running on your local dev machine, or a managed cloud service.
A Redis instance is automatically created as a Docker container when you run dapr init.
You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.
-
Install Redis into your cluster. Note that we’re explicitly setting an image tag to get a version greater than 5, which is what Dapr’s pub/sub functionality requires. If you’re intending on using Redis as just a state store (and not for pub/sub), you do not have to set the image version.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis --set image.tag=6.2
-
Run
kubectl get pods
to see the Redis containers now running in your cluster. -
Add
redis-master:6379
as theredisHost
in your redis.yaml file. For example:metadata: - name: redisHost value: redis-master:6379
-
Next, get the Redis password, which is slightly different depending on the OS you’re using:
-
Windows: Run
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64
, which creates a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt
, which will put your Redis password in a text file called password.txt
. Copy the password and delete the two files. -
Linux/MacOS: Run
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
and copy the outputted password.
Add this password as the
redisPassword
value in your redis.yaml file. For example:
metadata:
- name: redisPassword
  value: lhDOkwTlp0
-
-
Create an Azure Cache for Redis instance using the official Microsoft documentation.
-
Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.
- For the Host name:
- Navigate to the resource’s Overview page.
- Copy the Host name value.
- For your access key:
- Navigate to Settings > Access Keys.
- Copy and save your key.
-
Add your key and your host name to a
redis.yaml
file that Dapr can apply to your cluster.
- If you’re running a sample, add the host and key to the provided
redis.yaml
. - If you’re creating a project from the ground up, create a
redis.yaml
file as specified in the Component format section.
-
Set the
redisHost
key to [HOST NAME FROM PREVIOUS STEP]:6379
and the redisPassword
key to the key you saved earlier.
Note: In a production-grade application, follow secret management instructions to securely manage your secrets.
-
Enable EntraID support:
- Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
- Set
useEntraID
to"true"
to implement EntraID support for Azure Cache for Redis.
-
Set
enableTLS
to"true"
to support TLS.
Note:
useEntraID
assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID
property.
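Putting the Azure steps above together, a hedged sketch of the resulting configuration store component might look like the following; the cache host name is a placeholder taken from the Azure portal, and azureClientID is only needed with a user-assigned managed identity:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: configstore
spec:
  type: configuration.redis
  version: v1
  metadata:
  - name: redisHost
    value: <CACHE NAME>.redis.cache.windows.net:6379  # host name (FQDN) from the Azure portal
  - name: enableTLS
    value: "true"
  - name: useEntraID
    value: "true"
  # - name: azureClientID   # only for a user-assigned managed identity
  #   value: <CLIENT ID>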
Related links
- Basic schema for a Dapr component
- Read How-To: Manage configuration from a store for instructions on how to use Redis as a configuration store.
- Configuration building block
5.6 - Lock component specs
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status |
Alpha Beta Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Generic
Component | Status | Component version | Since runtime version |
---|---|---|---|
Redis | Alpha | v1 | 1.8 |
5.6.1 - Redis
Component format
To set up the Redis lock, create a component of type lock.redis
. See this guide on how to create a lock.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: lock.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword #Optional.
value: <PASSWORD>
- name: useEntraID
value: <bool> # Optional. Allowed: true, false.
- name: enableTLS
value: <bool> # Optional. Allowed: true, false.
- name: failover
value: <bool> # Optional. Allowed: true, false.
- name: sentinelMasterName
value: <string> # Optional
- name: maxRetries
value: # Optional
- name: maxRetryBackoff
value: # Optional
- name: redeliverInterval
value: # Optional
- name: processingTimeout
value: # Optional
- name: redisType
value: # Optional
- name: redisDB
value: # Optional
- name: redisMaxRetries
value: # Optional
- name: redisMinRetryInterval
value: # Optional
- name: redisMaxRetryInterval
value: # Optional
- name: dialTimeout
value: # Optional
- name: readTimeout
value: # Optional
- name: writeTimeout
value: # Optional
- name: poolSize
value: # Optional
- name: poolTimeout
value: # Optional
- name: maxConnAge
value: # Optional
- name: minIdleConns
value: # Optional
- name: idleCheckFrequency
value: # Optional
- name: idleTimeout
value: # Optional
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
redisHost | Y | Connection-string for the redis host | localhost:6379 , redis-master.default.svc.cluster.local:6379 |
redisPassword | N | Password for Redis host. No Default. Can be secretKeyRef to use a secret reference |
"" , "KeFg23!" |
redisUsername | N | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly. | "" , "default" |
useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this:
|
"true" , "false" |
enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to "false" |
"true" , "false" |
maxRetries | N | Maximum number of retries before giving up. Defaults to 3 |
5 , 10 |
maxRetryBackoff | N | Maximum backoff between each retry. Defaults to 2 seconds; "-1" disables backoff. |
3000000000 |
failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See Redis Sentinel Documentation. Defaults to "false" |
"true" , "false" |
sentinelMasterName | N | The sentinel master name. See Redis Sentinel Documentation | "mymaster" |
redeliverInterval | N | The interval between checking for pending messages to redeliver. Defaults to "60s" . "0" disables redelivery. |
"30s" |
processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Defaults to "15s" . "0" disables redelivery. |
"30s" |
redisType | N | The type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node" . |
"cluster" |
redisDB | N | Database selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0" . |
"0" |
redisMaxRetries | N | Alias for maxRetries . If both values are set maxRetries is ignored. |
"5" |
redisMinRetryInterval | N | Minimum backoff for redis commands between each retry. Default is "8ms" ; "-1" disables backoff. |
"8ms" |
redisMaxRetryInterval | N | Alias for maxRetryBackoff . If both values are set maxRetryBackoff is ignored. |
"5s" |
dialTimeout | N | Dial timeout for establishing new connections. Defaults to "5s" . |
"5s" |
readTimeout | N | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s" , "-1" for no timeout. |
"3s" |
writeTimeout | N | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout. | "3s" |
poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | "20" |
poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | "5s" |
maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | "30m" |
minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0" . |
"2" |
idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is "1m" . "-1" disables idle connections reaper. |
"-1" |
idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m" . "-1" disables idle timeout check. |
"10m" |
Setup Redis
Dapr can use any Redis instance: containerized, running on your local dev machine, or a managed cloud service.
A Redis instance is automatically created as a Docker container when you run dapr init
You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires Installing Helm.
-
Install Redis into your cluster. Note that we’re explicitly setting an image tag to get a version greater than 5, which is what Dapr’s pub/sub functionality requires. If you intend to use Redis only as a state store (and not for pub/sub), you do not have to set the image version.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis --set image.tag=6.2
-
Run
kubectl get pods
to see the Redis containers now running in your cluster. -
Add
redis-master:6379
as the redisHost
in your redis.yaml file. For example:
metadata:
- name: redisHost
  value: redis-master:6379
-
Next, get the Redis password, which is slightly different depending on the OS you’re using:
-
Windows: Run
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64
, which creates a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt
, which will put your Redis password in a text file called password.txt
. Copy the password and delete the two files. -
Linux/MacOS: Run
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
and copy the outputted password.
Add this password as the
redisPassword
value in your redis.yaml file. For example:
metadata:
- name: redisPassword
  value: lhDOkwTlp0
-
-
Create an Azure Cache for Redis instance using the official Microsoft documentation.
-
Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.
- For the Host name:
- Navigate to the resource’s Overview page.
- Copy the Host name value.
- For your access key:
- Navigate to Settings > Access Keys.
- Copy and save your key.
-
Add your key and your host name to a
redis.yaml
file that Dapr can apply to your cluster.
- If you’re running a sample, add the host and key to the provided
redis.yaml
. - If you’re creating a project from the ground up, create a
redis.yaml
file as specified in the Component format section.
-
Set the
redisHost
key to [HOST NAME FROM PREVIOUS STEP]:6379
and the redisPassword
key to the key you saved earlier.
Note: In a production-grade application, follow secret management instructions to securely manage your secrets.
-
Enable EntraID support:
- Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
- Set
useEntraID
to"true"
to implement EntraID support for Azure Cache for Redis.
-
Set
enableTLS
to"true"
to support TLS.
Note:
useEntraID
assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID
property.
Related links
5.7 - Cryptography component specs
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status |
Alpha Beta Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Using the Dapr cryptography engine
Component | Status | Component version | Since runtime version |
---|---|---|---|
JSON Web Key Sets (JWKS) | Alpha | v1 | 1.11 |
Kubernetes secrets | Alpha | v1 | 1.11 |
Local storage | Alpha | v1 | 1.11 |
Microsoft Azure
Component | Status | Component version | Since runtime version |
---|---|---|---|
Azure Key Vault | Alpha | v1 | 1.11 |
5.7.1 - Azure Key Vault
Component format
A Dapr crypto.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
spec:
type: crypto.azure.keyvault
metadata:
- name: vaultName
value: mykeyvault
# See authentication section below for all options
- name: azureTenantId
value: ${{AzureKeyVaultTenantId}}
- name: azureClientId
value: ${{AzureKeyVaultServicePrincipalClientId}}
- name: azureClientSecret
value: ${{AzureKeyVaultServicePrincipalClientSecret}}
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Authenticating with Microsoft Entra ID
The Azure Key Vault cryptography component supports authentication with Microsoft Entra ID only. Before you enable this component:
- Read the Authenticating to Azure document.
- Create a Microsoft Entra ID application (also called a Service Principal).
- Alternatively, create a managed identity for your application platform.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
vaultName |
Y | Azure Key Vault name | "mykeyvault" |
Auth metadata | Y | See Authenticating to Azure for more information |
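When the application runs on an Azure platform with a managed identity, the client-secret entries from the example above can be omitted. A hedged sketch (the vault name is a placeholder, and azureClientId is only required for a user-assigned identity):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: crypto.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: mykeyvault
  # With a system-assigned managed identity, no further credentials are needed.
  # For a user-assigned identity, add its client ID:
  # - name: azureClientId
  #   value: <CLIENT ID>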
Related links
5.7.2 - JSON Web Key Sets (JWKS)
Component format
The purpose of this component is to load keys from a JSON Web Key Set (RFC 7517). These are JSON documents that contain 1 or more keys as JWK (JSON Web Key); they can be public, private, or shared keys.
This component supports loading a JWKS:
- From a local file; in this case, Dapr watches for changes to the file on disk and reloads it automatically.
- From a HTTP(S) URL, which is periodically refreshed.
- By passing the actual JWKS in the
jwks
metadata property, as a string (optionally, base64-encoded).
Note
This component uses the cryptographic engine in Dapr to perform operations. Although keys are never exposed to your application, Dapr has access to the raw key material.
A Dapr crypto.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: jwks
spec:
type: crypto.dapr.jwks
version: v1
metadata:
# Example 1: load JWKS from file
- name: "jwks"
value: "fixtures/crypto/jwks/jwks.json"
# Example 2: load JWKS from a HTTP(S) URL
# Only "jwks" is required
- name: "jwks"
value: "https://example.com/.well-known/jwks.json"
- name: "requestTimeout"
value: "30s"
- name: "minRefreshInterval"
value: "10m"
# Option 3: include the actual JWKS
- name: "jwks"
value: |
{
"keys": [
{
"kty": "RSA",
"use": "sig",
"kid": "…",
"n": "…",
"e": "…",
"issuer": "https://example.com"
}
]
}
# Option 3b: include the JWKS base64-encoded
- name: "jwks"
value: |
eyJrZXlzIjpbeyJ…
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
jwks |
Y | Path to the JWKS document | Local file: "fixtures/crypto/jwks/jwks.json" HTTP(S) URL: "https://example.com/.well-known/jwks.json" Embedded JWKS: {"keys": […]} (can be base64-encoded) |
requestTimeout |
N | Timeout for network requests when fetching the JWKS document from a HTTP(S) URL, as a Go duration. Default: “30s” | "5s" |
minRefreshInterval |
N | Minimum interval to wait before subsequent refreshes of the JWKS document from a HTTP(S) source, as a Go duration. Default: “10m” | "1h" |
Related links
5.7.3 - Kubernetes Secrets
Component format
The purpose of this component is to load the Kubernetes secret named after the key name.
Note
This component uses the cryptographic engine in Dapr to perform operations. Although keys are never exposed to your application, Dapr has access to the raw key material.
A Dapr crypto.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: crypto.dapr.kubernetes.secrets
version: v1
metadata: []
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example | |
---|---|---|---|---|
defaultNamespace |
N | Default namespace to retrieve secrets from. If unset, the namespace must be specified for each key, as namespace/secretName/key |
"default-ns" |
|
kubeconfigPath |
N | The path to the kubeconfig file. If not specified, the component uses the default in-cluster config value | "/path/to/kubeconfig" |
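For example, setting defaultNamespace lets applications reference keys as secretName/key rather than namespace/secretName/key. A minimal sketch, with the namespace name as a placeholder:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: k8s-crypto
spec:
  type: crypto.dapr.kubernetes.secrets
  version: v1
  metadata:
  - name: defaultNamespace
    value: production   # placeholder namespace; key names resolve against secrets here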
Related links
5.7.4 - Local storage
Component format
The purpose of this component is to load keys from a local directory.
The component accepts as input the name of a folder, and loads keys from there. Each key is in its own file, and when users request a key with a given name, Dapr loads the file with that name.
Supported file formats:
- PEM with public and private keys (supports: PKCS#1, PKCS#8, PKIX)
- JSON Web Key (JWK) containing a public, private, or symmetric key
- Raw key data for symmetric keys
Note
This component uses the cryptographic engine in Dapr to perform operations. Although keys are never exposed to your application, Dapr has access to the raw key material.
A Dapr crypto.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mycrypto
spec:
type: crypto.dapr.localstorage
version: v1
metadata:
- name: path
value: /path/to/folder/
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
path |
Y | Folder containing the keys to be loaded. When loading a key, the name of the key will be used as name of the file in this folder. | /path/to/folder |
Example
Let’s say you’ve set path=/mnt/keys
, which contains the following files:
/mnt/keys/mykey1.pem
/mnt/keys/mykey2
When using the component, you can reference the keys as mykey1.pem
and mykey2
.
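A component matching that layout simply points path at the folder; a minimal sketch using the example folder above:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mycrypto
spec:
  type: crypto.dapr.localstorage
  version: v1
  metadata:
  - name: path
    value: /mnt/keys/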
Related links
5.8 - Conversation component specs
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status |
Alpha Beta Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Amazon Web Services (AWS)
Component | Status | Component version | Since runtime version |
---|---|---|---|
AWS Bedrock | Alpha | v1 | 1.15 |
Generic
Component | Status | Component version | Since runtime version |
---|---|---|---|
Anthropic | Alpha | v1 | 1.15 |
DeepSeek | Alpha | v1 | 1.15 |
GoogleAI | Alpha | v1 | 1.16 |
Huggingface | Alpha | v1 | 1.15 |
Mistral | Alpha | v1 | 1.15 |
Ollama | Alpha | v1 | 1.16 |
OpenAI | Alpha | v1 | 1.15 |
5.8.1 - Anthropic
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: anthropic
spec:
type: conversation.anthropic
metadata:
- name: key
value: "mykey"
- name: model
value: claude-3-5-sonnet-20240620
- name: cacheTTL
value: 10m
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
key |
Y | API key for Anthropic. | "mykey" |
model |
N | The Anthropic LLM to use. Defaults to claude-3-5-sonnet-20240620 |
claude-3-5-sonnet-20240620 |
cacheTTL |
N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | 10m |
Related links
5.8.2 - AWS Bedrock
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: awsbedrock
spec:
type: conversation.aws.bedrock
metadata:
- name: endpoint
value: "http://localhost:4566"
- name: model
value: amazon.titan-text-express-v1
- name: cacheTTL
value: 10m
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
endpoint |
N | AWS endpoint for the component to use and connect to emulators. Not recommended for production AWS use. | http://localhost:4566 |
model |
N | The LLM to use. Defaults to Bedrock’s default provider model from Amazon. | amazon.titan-text-express-v1 |
cacheTTL |
N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | 10m |
Authenticating AWS
Instead of using a key
parameter, AWS Bedrock authenticates using Dapr’s standard method of IAM or static credentials. Learn more about authenticating with AWS.
Related links
5.8.3 - DeepSeek
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: deepseek
spec:
type: conversation.deepseek
metadata:
- name: key
value: mykey
- name: maxTokens
value: 2048
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
key |
Y | API key for DeepSeek. | mykey |
maxTokens |
N | The max amount of tokens for each request. | 2048 |
Related links
5.8.4 - Local Testing
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: echo
spec:
type: conversation.echo
version: v1
Information
This component is only meant for local validation and testing of a Conversation component implementation. It does not actually send the data to any LLM but rather echoes the input back directly.
Related links
5.8.5 - GoogleAI
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: googleai
spec:
type: conversation.googleai
metadata:
- name: key
value: mykey
- name: model
value: gemini-1.5-flash
- name: cacheTTL
value: 10m
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
key |
Y | API key for GoogleAI. | mykey |
model |
N | The GoogleAI LLM to use. Defaults to gemini-1.5-flash . |
gemini-2.0-flash |
cacheTTL |
N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | 10m |
Related links
5.8.6 - Huggingface
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: huggingface
spec:
type: conversation.huggingface
metadata:
- name: key
value: mykey
- name: model
value: meta-llama/Meta-Llama-3-8B
- name: cacheTTL
value: 10m
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
key |
Y | API key for Huggingface. | mykey |
model |
N | The Huggingface LLM to use. Defaults to meta-llama/Meta-Llama-3-8B . |
meta-llama/Meta-Llama-3-8B |
cacheTTL |
N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | 10m |
Related links
5.8.7 - Mistral
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mistral
spec:
type: conversation.mistral
metadata:
- name: key
value: mykey
- name: model
value: open-mistral-7b
- name: cacheTTL
value: 10m
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
key |
Y | API key for Mistral. | mykey |
model |
N | The Mistral LLM to use. Defaults to open-mistral-7b . |
open-mistral-7b |
cacheTTL |
N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | 10m |
Related links
5.8.8 - Ollama
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: ollama
spec:
type: conversation.ollama
metadata:
- name: model
value: llama3.2:latest
- name: cacheTTL
value: 10m
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
model |
N | The Ollama LLM to use. Defaults to llama3.2:latest . |
phi4:latest |
cacheTTL |
N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | 10m |
Related links
5.8.9 - OpenAI
Component format
A Dapr conversation.yaml
component file has the following structure:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: openai
spec:
type: conversation.openai
metadata:
- name: key
value: mykey
- name: model
value: gpt-4-turbo
- name: endpoint
value: 'https://api.openai.com/v1'
- name: cacheTTL
value: 10m
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
key |
Y | API key for OpenAI. | mykey |
model |
N | The OpenAI LLM to use. Defaults to gpt-4-turbo . |
gpt-4-turbo |
endpoint |
N | Custom API endpoint URL for OpenAI API-compatible services. If not specified, the default OpenAI API endpoint is used. | https://api.openai.com/v1 |
cacheTTL |
N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | 10m |
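Because endpoint accepts any OpenAI API-compatible service, the same component type can point at a self-hosted gateway. A hedged sketch, where the URL and model name are placeholders for whatever your compatible service exposes:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: openai-compatible
spec:
  type: conversation.openai
  metadata:
  - name: key
    value: mykey
  - name: model
    value: my-local-model              # placeholder; must be a model your endpoint serves
  - name: endpoint
    value: 'http://localhost:8080/v1'  # placeholder OpenAI-compatible endpoint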
Related links
5.9 - Name resolution provider component specs
The following components provide name resolution for the service invocation building block.
Name resolution components are configured via the configuration.
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status |
Alpha Beta Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
Generic
Component | Status | Component version | Since runtime version |
---|---|---|---|
HashiCorp Consul | Alpha | v1 | 1.2 |
SQLite | Alpha | v1 | 1.13 |
Kubernetes
Component | Status | Component version | Since runtime version |
---|---|---|---|
Kubernetes | Stable | v1 | 1.0 |
Self-Hosted
Component | Status | Component version | Since runtime version |
---|---|---|---|
mDNS | Stable | v1 | 1.0 |
5.9.1 - HashiCorp Consul
Configuration format
HashiCorp Consul is set up within the Dapr Configuration.
Within the config, add a nameResolution
spec and set the component
field to "consul"
.
If you are using the Dapr sidecar to register your service to Consul then you will need the following configuration:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "consul"
configuration:
selfRegister: true
If Consul service registration is managed externally from Dapr, you need to ensure that the Dapr-to-Dapr internal gRPC port is added to the service metadata under DAPR_PORT
(this key is configurable) and that the Consul service Id matches the Dapr app Id. You can then omit selfRegister
from the config above.
Behaviour
On init
the Consul component either validates the connection to the configured (or default) agent or registers the service if configured to do so. The name resolution interface does not cater for an “on shutdown” pattern so consider this when using Dapr to register services to Consul as it does not deregister services.
The component resolves target apps by filtering healthy services and looks for a DAPR_PORT
in the metadata (key is configurable) in order to retrieve the Dapr sidecar port. Consul service.meta
is used over service.port
so as to not interfere with existing Consul estates.
Spec configuration fields
The configuration spec is fixed to v1.3.0 of the Consul API
Field | Required | Type | Details | Examples |
---|---|---|---|---|
Client | N | *api.Config | Configures client connection to the Consul agent. If blank it will use the sdk defaults, which in this case is just an address of 127.0.0.1:8500 |
10.0.4.4:8500 |
QueryOptions | N | *api.QueryOptions | Configures query used for resolving healthy services, if blank it will default to UseCache:true |
UseCache: false , Datacenter: "myDC" |
Checks | N | []*api.AgentServiceCheck | Configures health checks if/when registering. If blank it will default to a single health check on the Dapr sidecar health endpoint | See sample configs |
Tags | N | []string |
Configures any tags to include if/when registering services | - "dapr" |
Meta | N | map[string]string |
Configures any additional metadata to include if/when registering services | DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}" |
DaprPortMetaKey | N | string |
The key used for getting the Dapr sidecar port from Consul service metadata during service resolution, it will also be used to set the Dapr sidecar port in metadata during registration. If blank it will default to DAPR_PORT |
"DAPR_TO_DAPR_PORT" |
SelfRegister | N | bool |
Controls if Dapr will register the service to Consul. The name resolution interface does not cater for an “on shutdown” pattern so please consider this if using Dapr to register services to Consul as it will not deregister services. If blank it will default to false |
true |
AdvancedRegistration | N | *api.AgentServiceRegistration | Gives full control of service registration through configuration. If configured the component will ignore any configuration of Checks, Tags, Meta and SelfRegister. | See sample configs |
Sample configurations
Basic
The minimum configuration needed is the following:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "consul"
Registration with additional customizations
With SelfRegister
enabled, it is then possible to customize the checks, tags, and meta:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "consul"
configuration:
client:
address: "127.0.0.1:8500"
selfRegister: true
checks:
- name: "Dapr Health Status"
checkID: "daprHealth:${APP_ID}"
interval: "15s"
http: "http://${HOST_ADDRESS}:${DAPR_HTTP_PORT}/v1.0/healthz"
- name: "Service Health Status"
checkID: "serviceHealth:${APP_ID}"
interval: "15s"
http: "http://${HOST_ADDRESS}:${APP_PORT}/health"
tags:
- "dapr"
- "v1"
- "${OTHER_ENV_VARIABLE}"
meta:
DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}"
DAPR_PROFILE_PORT: "${DAPR_PROFILE_PORT}"
daprPortMetaKey: "DAPR_PORT"
queryOptions:
useCache: true
filter: "Checks.ServiceTags contains dapr"
Advanced registration
Configuring the advanced registration gives you full control over setting all the Consul properties possible when registering.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "consul"
configuration:
client:
address: "127.0.0.1:8500"
selfRegister: false
queryOptions:
useCache: true
daprPortMetaKey: "DAPR_PORT"
advancedRegistration:
name: "${APP_ID}"
port: ${APP_PORT}
address: "${HOST_ADDRESS}"
check:
name: "Dapr Health Status"
checkID: "daprHealth:${APP_ID}"
interval: "15s"
http: "http://${HOST_ADDRESS}:${DAPR_HTTP_PORT}/v1.0/healthz"
meta:
DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}"
DAPR_PROFILE_PORT: "${DAPR_PROFILE_PORT}"
tags:
- "dapr"
Setup HashiCorp Consul
HashiCorp offers in-depth guides on how to set up Consul for different hosting models. Check out the self-hosted guide here.
HashiCorp offers in-depth guides on how to set up Consul for different hosting models. Check out the Kubernetes guide here.
Related links
5.9.2 - Kubernetes DNS
Configuration format
Generally, Kubernetes DNS name resolution is configured automatically in Kubernetes mode by Dapr. There is no configuration needed to use Kubernetes DNS as your name resolution provider unless some overrides are necessary for the Kubernetes name resolution component.
In the scenario that an override is required, within a Dapr Configuration CRD, add a nameResolution
spec and set the component
field to "kubernetes"
. The other configuration fields can be set as needed in a configuration
map, as seen below.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "kubernetes"
configuration:
clusterDomain: "cluster.local" # Mutually exclusive with the template field
template: "{{.ID}}-{{.Data.region}}.internal:{{.Port}}" # Mutually exclusive with the clusterDomain field
Behaviour
The component resolves target apps by using the Kubernetes cluster’s DNS provider. You can learn more in the Kubernetes docs.
Spec configuration fields
Field | Required | Type | Details | Examples |
---|---|---|---|---|
clusterDomain | N | string |
The cluster domain to be used for resolved addresses. This field is mutually exclusive with the template file. |
cluster.local |
template | N | string |
A template string to be parsed when addresses are resolved using text/template . The template will be populated by the fields in the ResolveRequest struct. This field is mutually exclusive with clusterDomain field. |
{{.ID}}-{{.Data.region}}.{{.Namespace}}.internal:{{.Port}} |
Related links
5.9.3 - mDNS
Configuration format
Multicast DNS (mDNS) is configured automatically in self-hosted mode by Dapr. There is no configuration needed to use mDNS as your name resolution provider.
Behaviour
The component resolves target apps by using the host system’s mDNS service. You can learn more about mDNS here.
Troubleshooting
In some cloud provider virtual networks, such as Microsoft Azure, mDNS is not available. Use an alternate provider such as HashiCorp Consul instead.
On some enterprise-managed systems, mDNS may be disabled on macOS if a network filter/proxy is configured. Check with your IT department if mDNS is disabled and you are unable to use service invocation locally.
Spec configuration fields
Not applicable, as mDNS is configured by Dapr when running in self-hosted mode.
Related links
5.9.4 - SQLite
As an alternative to mDNS, the SQLite name resolution component can be used for running Dapr on single-node environments and for local development scenarios. Dapr sidecars that are part of the cluster store their information in a SQLite database on the local machine.
Note
This component is optimized to be used in scenarios where all Dapr instances are running on the same physical machine, where the database is accessed through the same, locally-mounted disk.
Using the SQLite name resolver with a database file accessed over the network (including via SMB/NFS) can lead to issues such as data corruption, and is not supported.
Configuration format
Name resolution is configured via the Dapr Configuration.
Within the Configuration YAML, set the spec.nameResolution.component
property to "sqlite"
, then pass configuration options in the spec.nameResolution.configuration
dictionary.
This is the basic example of a Configuration resource:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "sqlite"
version: "v1"
configuration:
connectionString: "/home/user/.dapr/nr.db"
Spec configuration fields
When using the SQLite name resolver component, the spec.nameResolution.configuration
dictionary contains these options:
Field | Required | Type | Details | Examples |
---|---|---|---|---|
connectionString |
Y | string |
The connection string for the SQLite database. Normally, this is the path to a file on disk, relative to the current working directory, or absolute. | "nr.db" (relative to the working directory), "/home/user/.dapr/nr.db" |
updateInterval |
N | Go duration (as a string ) |
Interval for active Dapr sidecars to update their status in the database, which is used as healthcheck. Smaller intervals reduce the likelihood of stale data being returned if an application goes offline, but increase the load on the database. Must be at least 1s greater than timeout . Values with fractions of seconds are truncated (for example, 1500ms becomes 1s ). Default: 5s |
"2s" |
timeout |
N | Go duration (as a string ).Must be at least 1s. |
Timeout for operations on the database. Integers are interpreted as number of seconds. Defaults to 1s |
"2s" , 2 |
tableName |
N | string |
Name of the table where the data is stored. If the table does not exist, the table is created by Dapr. Defaults to hosts . |
"hosts" |
metadataTableName |
N | string |
Name of the table used by Dapr to store metadata for the component. If the table does not exist, the table is created by Dapr. Defaults to metadata . |
"metadata" |
cleanupInterval |
N | Go duration (as a string ) |
Interval to remove stale records from the database. Default: 1h (1 hour) |
"10m" |
busyTimeout |
N | Go duration (as a string ) |
Interval to wait in case the SQLite database is currently busy serving another request, before returning a “database busy” error. This is an advanced setting.busyTimeout controls how locking works in SQLite. With SQLite, writes are exclusive, so every time any app is writing the database is locked. If another app tries to write, it waits up to busyTimeout before returning the “database busy” error. However the timeout setting controls the timeout for the entire operation. For example if the query “hangs”, after the database has acquired the lock (so after busy timeout is cleared), then timeout comes into effect. Default: 800ms (800 milliseconds) |
"100ms" |
disableWAL |
N | bool |
If set to true, disables Write-Ahead Logging for journaling of the SQLite database. This is for advanced scenarios only | true , false |
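Tying the fields above together: updateInterval must stay at least 1s greater than timeout, and cleanupInterval controls how quickly stale records disappear. A hedged sketch with tighter-than-default values:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "sqlite"
    version: "v1"
    configuration:
      connectionString: "/home/user/.dapr/nr.db"
      timeout: "1s"
      updateInterval: "2s"   # at least 1s greater than timeout
      cleanupInterval: "10m" # remove stale records sooner than the 1h default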
Related links
5.10 - Middleware component specs
The following table lists middleware components supported by Dapr. Learn how to customize processing pipelines and set up middleware components.
Table headers to note:
Header | Description | Example |
---|---|---|
Status | Component certification status |
Alpha Beta Stable |
Component version | The version of the component | v1 |
Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |
HTTP
Component | Description | Status | Component version |
---|---|---|---|
OAuth2 Authorization Grant flow | Enables the OAuth2 Authorization Grant flow on a Web API | Alpha | v1 |
OAuth2 Client Credentials Grant flow | Enables the OAuth2 Client Credentials Grant flow on a Web API | Alpha | v1 |
OpenID Connect | Verifies a Bearer Token using OpenID Connect on a Web API | Stable | v1 |
Rate limit | Restricts the maximum number of allowed HTTP requests per second | Stable | v1 |
Rego/OPA Policies | Applies Rego/OPA Policies to incoming Dapr HTTP requests | Alpha | v1 |
Router Alias | Use Router Alias to map arbitrary HTTP routes to valid Dapr API endpoints | Alpha | v1 |
RouterChecker | Use RouterChecker middleware to block invalid http request routing | Alpha | v1 |
Sentinel | Use Sentinel middleware to guarantee the reliability and resiliency of your application | Alpha | v1 |
Uppercase | Converts the body of the request to uppercase letters (demo) | Stable | v1 |
Wasm | Use Wasm middleware in your HTTP pipeline | Alpha | v1 |
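Handlers in an httpPipeline are applied in the order they are listed, so a pipeline that both authenticates and rate-limits would list the authentication middleware first. A hedged sketch, assuming Component resources named bearer-token and ratelimit have been defined as shown in the sections below:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: bearer-token
      type: middleware.http.bearer
    - name: ratelimit
      type: middleware.http.ratelimit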
5.10.1 - Bearer
The bearer HTTP middleware verifies a Bearer Token using OpenID Connect on a Web API, without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.
Component format
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: bearer-token
spec:
type: middleware.http.bearer
version: v1
metadata:
- name: audience
value: "<your token audience; i.e. the application's client ID>"
- name: issuer
value: "<your token issuer, e.g. 'https://accounts.google.com'>"
# Optional values
- name: jwksURL
value: "<JWKS URL, e.g. 'https://accounts.google.com/.well-known/openid-configuration'>"
Spec metadata fields
Field | Required | Details | Example |
---|---|---|---|
audience |
Y | The audience expected in the tokens. Usually, this corresponds to the client ID of your application that is created as part of a credential hosted by an OpenID Connect platform. | |
issuer |
Y | The issuer authority, which is the value expected in the issuer claim in the tokens. | "https://accounts.google.com" |
jwksURL |
N | Address of the JWKS (JWK Set containing the public keys for verifying tokens). If empty, will try to fetch the URL set in the OpenID Configuration document <issuer>/.well-known/openid-configuration . |
"https://accounts.google.com/.well-known/openid-configuration" |
Common values for issuer
include:
- Auth0:
https://{domain}
, where{domain}
is the domain of your Auth0 application - Microsoft Entra ID:
https://login.microsoftonline.com/{tenant}/v2.0
, where{tenant}
should be replaced with the tenant ID of your application, as a UUID - Google:
https://accounts.google.com
- Salesforce (Force.com):
https://login.salesforce.com
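For example, with Microsoft Entra ID the issuer takes the tenant-scoped form from the list above. A hedged sketch, where the tenant ID and client ID are placeholders:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: bearer-token
spec:
  type: middleware.http.bearer
  version: v1
  metadata:
  - name: audience
    value: "<your application's client ID>"
  - name: issuer
    value: "https://login.microsoftonline.com/<TENANT ID>/v2.0"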
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: bearer-token
type: middleware.http.bearer
Related links
5.10.2 - OAuth2
The OAuth2 HTTP middleware enables the OAuth2 Authorization Code flow on a Web API without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.
Component format
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2
spec:
type: middleware.http.oauth2
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "https://www.googleapis.com/auth/userinfo.email"
- name: authURL
value: "https://accounts.google.com/o/oauth2/v2/auth"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: redirectURL
value: "http://dummy.com"
- name: authHeaderName
value: "authorization"
- name: forceHTTPS
value: "false"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Details | Example |
---|---|---|
clientId | The client ID of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
clientSecret | The client secret of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
scopes | A list of space-delimited, case-sensitive strings of scopes which are typically used for authorization in the application | "https://www.googleapis.com/auth/userinfo.email" |
authURL | The endpoint of the OAuth2 authorization server | "https://accounts.google.com/o/oauth2/v2/auth" |
tokenURL | The endpoint is used by the client to obtain an access token by presenting its authorization grant or refresh token | "https://accounts.google.com/o/oauth2/token" |
redirectURL | The URL of your web application that the authorization server should redirect to once the user has authenticated | "https://myapp.com" |
authHeaderName | The authorization header name to forward to your application | "authorization" |
forceHTTPS | If true, enforces the use of TLS/SSL | "true" ,"false" |
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: oauth2
type: middleware.http.oauth2
Related links
5.10.3 - OAuth2 client credentials
The OAuth2 client credentials HTTP middleware enables the OAuth2 Client Credentials flow on a Web API without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.
Component format
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2clientcredentials
spec:
type: middleware.http.oauth2clientcredentials
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "https://www.googleapis.com/auth/userinfo.email"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: headerName
value: "authorization"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Details | Example |
---|---|---|
clientId | The client ID of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
clientSecret | The client secret of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
scopes | A list of space-delimited, case-sensitive strings of scopes which are typically used for authorization in the application | "https://www.googleapis.com/auth/userinfo.email" |
tokenURL | The endpoint is used by the client to obtain an access token by presenting its authorization grant or refresh token | "https://accounts.google.com/o/oauth2/token" |
headerName | The authorization header name to forward to your application | "authorization" |
endpointParamsQuery | Specifies additional parameters for requests to the token endpoint | true |
authStyle | Optionally specifies how the endpoint wants the client ID & client secret sent. See the table of possible values below | 0 |
Possible values for authStyle
Value | Meaning |
---|---|
1 |
Sends the “client_id” and “client_secret” in the POST body as application/x-www-form-urlencoded parameters. |
2 |
Sends the “client_id” and “client_secret” using HTTP Basic Authorization. This is an optional style described in the OAuth2 RFC 6749 section 2.3.1. |
0 |
Means to auto-detect which authentication style the provider wants by trying both ways and caching the successful way for the future. |
Dapr configuration
To be applied, the middleware must be referenced in a configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: oauth2clientcredentials
type: middleware.http.oauth2clientcredentials
Related links
5.10.4 - Apply Open Policy Agent (OPA) policies
The Open Policy Agent (OPA) HTTP middleware applies OPA Policies to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.
Component format
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: my-policy
spec:
type: middleware.http.opa
version: v1
metadata:
# `includedHeaders` is a comma-separated set of case-insensitive headers to include in the request input.
# Request headers are not passed to the policy by default. Include to receive incoming request headers in
# the input
- name: includedHeaders
value: "x-my-custom-header, x-jwt-header"
# `defaultStatus` is the status code to return for denied responses
- name: defaultStatus
value: 403
# `readBody` controls whether the middleware reads the entire request body in-memory and makes it
# available for policy decisions.
- name: readBody
value: "false"
# `rego` is the open policy agent policy to evaluate. required
# The policy package must be http and the policy must set data.http.allow
- name: rego
value: |
package http
default allow = true
# Allow may also be an object and include other properties
# For example, if you wanted to redirect on a policy failure, you could set the status code to 301 and set the location header on the response:
allow = {
"status_code": 301,
"additional_headers": {
"location": "https://my.site/authorize"
}
} {
not jwt.payload["my-claim"]
}
# You can also allow the request and add additional headers to it:
allow = {
"allow": true,
"additional_headers": {
"x-my-claim": my_claim
}
} {
my_claim := jwt.payload["my-claim"]
}
jwt = { "payload": payload } {
auth_header := input.request.headers["Authorization"]
[_, jwt] := split(auth_header, " ")
[_, payload, _] := io.jwt.decode(jwt)
}
You can prototype and experiment with policies using the official OPA playground. For example, you can find the example policy above here.
Spec metadata fields
Field | Details | Example |
---|---|---|
rego |
The Rego policy language | See above |
defaultStatus |
The status code to return for denied responses | "403" |
readBody |
If set to true (the default value), the body of each request is read fully in-memory and can be used to make policy decisions. If your policy doesn’t depend on inspecting the request body, consider disabling this (setting to false ) for significant performance improvements. |
"false" |
includedHeaders |
A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | "x-my-custom-header, x-jwt-header" |
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: my-policy
type: middleware.http.opa
Input
This middleware supplies a HTTPRequest
as input.
HTTPRequest
The HTTPRequest
input contains all the relevant information about an incoming HTTP Request.
type Input struct {
request HTTPRequest
}
type HTTPRequest struct {
// The request method (e.g. GET,POST,etc...)
method string
// The raw request path (e.g. "/v2/my-path/")
path string
// The path broken down into parts for easy consumption (e.g. ["v2", "my-path"])
path_parts string[]
// The raw query string (e.g. "?a=1&b=2")
raw_query string
// The query broken down into keys and their values
query map[string][]string
// The request headers
// NOTE: By default, no headers are included. You must specify what headers
// you want to receive via `spec.metadata.includedHeaders` (see above)
headers map[string]string
// The request scheme (e.g. http, https)
scheme string
// The request body
body string
}
Result
The policy must set data.http.allow
with either a boolean
value, or an object
value with an allow
boolean property. A true
allow
will allow the request, while a false
value will reject the request with the status specified by defaultStatus
. The following policy, with defaults, demonstrates a 403 - Forbidden
for all requests:
package http
default allow = false
which is the same as:
package http
default allow = {
"allow": false
}
Changing the rejected response status code
When rejecting a request, you can override the status code that gets returned. For example, if you wanted to return a 401
instead of a 403
, you could do the following:
package http
default allow = {
"allow": false,
"status_code": 401
}
Adding response headers
To redirect, add headers and set the status_code
to the returned result:
package http
default allow = {
"allow": false,
"status_code": 301,
"additional_headers": {
"Location": "https://my.redirect.site"
}
}
Adding request headers
You can also set additional headers on the allowed request:
package http
default allow = false
allow = { "allow": true, "additional_headers": { "X-JWT-Payload": payload } } {
not input.path[0] == "forbidden"
// Where `jwt` is the result of another rule
payload := base64.encode(json.marshal(jwt.payload))
}
Result structure
type Result bool
// or
type Result struct {
// Whether to allow or deny the incoming request
allow bool
// Overrides denied response status code; Optional
status_code int
// Sets headers on allowed request or denied response; Optional
additional_headers map[string]string
}
Related links
5.10.5 - Rate limiting
The rate limit HTTP middleware allows restricting the maximum number of allowed HTTP requests per second. Rate limiting can protect your application from Denial of Service (DoS) attacks. DoS attacks can be initiated by malicious 3rd parties but also by bugs in your software (a.k.a. a “friendly fire” DoS attack).
Component format
In the following definition, the maximum requests per second are set to 10:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: ratelimit
spec:
type: middleware.http.ratelimit
version: v1
metadata:
- name: maxRequestsPerSecond
value: 10
Spec metadata fields
Field | Details | Example |
---|---|---|
maxRequestsPerSecond |
The maximum requests per second by remote IP. The component looks at the X-Forwarded-For and X-Real-IP headers to determine the caller’s IP. |
10 |
Once the limit is reached, the requests will fail with HTTP Status code 429: Too Many Requests.
Important
The rate limit is enforced independently in each Dapr sidecar, and not cluster-wide.
Alternatively, the max concurrency setting can be used to rate-limit applications and applies to all traffic, regardless of remote IP, protocol, or path.
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: ratelimit
type: middleware.http.ratelimit
Related links
5.10.6 - Router alias http request routing
The router alias HTTP middleware component allows you to convert arbitrary HTTP routes arriving into Dapr to valid Dapr API endpoints.
Component format
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: routeralias
spec:
type: middleware.http.routeralias
version: v1
metadata:
# String containing a JSON-encoded or YAML-encoded dictionary
# Each key in the dictionary is the incoming path, and the value is the path it's converted to
- name: "routes"
value: |
{
"/mall/activity/info": "/v1.0/invoke/srv.default/method/mall/activity/info",
"/hello/activity/{id}/info": "/v1.0/invoke/srv.default/method/hello/activity/info",
"/hello/activity/{id}/user": "/v1.0/invoke/srv.default/method/hello/activity/user"
}
In the example above, an incoming HTTP request for /mall/activity/info?id=123
is transformed into /v1.0/invoke/srv.default/method/mall/activity/info?id=123
.
Spec metadata fields
Field | Details | Example |
---|---|---|
routes |
String containing a JSON-encoded or YAML-encoded dictionary. Each key in the dictionary is the incoming path, and the value is the path it’s converted to. | See example above |
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: routeralias
type: middleware.http.routeralias
Related links
5.10.7 - RouterChecker http request routing
The RouterChecker HTTP middleware component leverages regexp to check the validity of HTTP request routing to prevent invalid routers from entering the Dapr cluster. In turn, the RouterChecker component filters out bad requests and reduces noise in the telemetry and log data.
Component format
The RouterChecker applies a set of rules to the incoming HTTP request. You define these rules in the component metadata using regular expressions. In the following example, the RouterChecker is set to validate all request messages against the ^[A-Za-z0-9/._-]+$ regex.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: routerchecker
spec:
type: middleware.http.routerchecker
version: v1
metadata:
- name: rule
value: "^[A-Za-z0-9/._-]+$"
In this example, the above definition would result in the following PASS/FAIL cases:
PASS /v1.0/invoke/demo/method/method
PASS /v1.0/invoke/demo.default/method/method
PASS /v1.0/invoke/demo.default/method/01
PASS /v1.0/invoke/demo.default/method/METHOD
PASS /v1.0/invoke/demo.default/method/user/info
PASS /v1.0/invoke/demo.default/method/user_info
PASS /v1.0/invoke/demo.default/method/user-info
FAIL /v1.0/invoke/demo.default/method/cat password
FAIL /v1.0/invoke/demo.default/method/" AND 4210=4210 limit 1
FAIL /v1.0/invoke/demo.default/method/"$(curl
Spec metadata fields
Field | Details | Example |
---|---|---|
rule | the regexp expression to be used by the HTTP request RouterChecker | ^[A-Za-z0-9/._-]+$ |
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: routerchecker
type: middleware.http.routerchecker
Related links
5.10.8 - Sentinel fault-tolerance middleware component
Sentinel is a powerful fault-tolerance component that takes “flow” as the breakthrough point and covers multiple fields including flow control, traffic shaping, concurrency limiting, circuit breaking, and adaptive system protection to guarantee the reliability and resiliency of microservices.
The Sentinel HTTP middleware enables Dapr to facilitate Sentinel’s powerful abilities to protect your application. You can refer to Sentinel Wiki for more details on Sentinel.
Component format
In the following definition, the maximum requests per second are set to 10:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: sentinel
spec:
type: middleware.http.sentinel
version: v1
metadata:
- name: appName
value: "nodeapp"
- name: logDir
value: "/var/tmp"
- name: flowRules
value: >-
[
{
"resource": "POST:/v1.0/invoke/nodeapp/method/neworder",
"threshold": 10,
"tokenCalculateStrategy": 0,
"controlBehavior": 0
}
]
Spec metadata fields
Field | Details | Example |
---|---|---|
appName | The name of the currently running service | nodeapp |
logDir | The log directory path | /var/tmp/sentinel |
flowRules | JSON array of Sentinel flow control rules | flow control rule |
circuitBreakerRules | JSON array of Sentinel circuit breaker rules | circuit breaker rule |
hotSpotParamRules | JSON array of Sentinel hotspot parameter flow control rules | hotspot rule |
isolationRules | JSON array of Sentinel isolation rules | isolation rule |
systemRules | JSON array of Sentinel system rules | system rule |
Once the limit is reached, the request returns HTTP status code 429: Too Many Requests.
Note the resource field in each rule's definition. In Dapr, it follows this format:
POST/GET/PUT/DELETE:Dapr HTTP API Request Path
All concrete HTTP API information can be found in the Dapr API Reference. In the sample config above, the resource field is set to POST:/v1.0/invoke/nodeapp/method/neworder.
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: daprConfig
spec:
httpPipeline:
handlers:
- name: sentinel
type: middleware.http.sentinel
Related links
5.10.9 - Uppercase request body
The uppercase HTTP middleware converts the body of the request to uppercase letters and is used for testing that the pipeline is functioning. It should only be used for local development.
Component format
The following definition converts the content of the request body to uppercase:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: uppercase
spec:
type: middleware.http.uppercase
version: v1
This component has no metadata to configure.
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: uppercase
type: middleware.http.uppercase
Related links
5.10.10 - Wasm
WebAssembly is a way to safely run code compiled in other languages. Runtimes
execute WebAssembly Modules (Wasm), which are most often binaries with a .wasm
extension.
The Wasm HTTP middleware allows you to manipulate
an incoming request or serve a response with custom logic compiled to a Wasm
binary. In other words, you can extend Dapr using external files that are not
pre-compiled into the daprd
binary. Dapr embeds wazero
to accomplish this without CGO.
Wasm binaries are loaded from a URL. For example, the URL file://rewrite.wasm loads rewrite.wasm from the current directory of the process. On Kubernetes, see How to: Mount Pod volumes to the Dapr sidecar to configure a filesystem mount that can contain Wasm modules.
It is also possible to fetch the Wasm binary from a remote URL. In this case, the URL must point exactly to one Wasm binary. For example: http://example.com/rewrite.wasm or https://example.com/rewrite.wasm.
Component format
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: wasm
spec:
type: middleware.http.wasm
version: v1
metadata:
- name: url
value: "file://router.wasm"
- name: guestConfig
value: {"environment":"production"}
Spec metadata fields
Minimally, a user must specify a Wasm binary that implements the http-handler. How to compile this is described later.
Field | Details | Required | Example |
---|---|---|---|
url | The URL of the resource including the Wasm binary to instantiate. The supported schemes include file://, http://, and https://. The path of a file:// URL is relative to the Dapr process unless it begins with /. | true | file://hello.wasm, https://example.com/hello.wasm |
guestConfig | An optional configuration passed to Wasm guests. Users can pass an arbitrary string to be parsed by the guest code. | false | environment=production, {"environment":"production"} |
Dapr configuration
To be applied, the middleware must be referenced in configuration. See middleware pipelines.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: wasm
type: middleware.http.wasm
Note: WebAssembly middleware uses more resources than native middleware, so the same logic hits resource constraints sooner than it would in native code. For production usage, see Control max concurrency.
Generating Wasm
This component lets you manipulate an incoming request or serve a response with
custom logic compiled using the http-handler
Application Binary Interface (ABI). The handle_request
function receives an
incoming request and can manipulate it or serve a response as necessary.
To compile your Wasm, you must compile your source using an http-handler compliant guest SDK, such as TinyGo.
Here’s an example in TinyGo:
package main
import (
"strings"
"github.com/http-wasm/http-wasm-guest-tinygo/handler"
"github.com/http-wasm/http-wasm-guest-tinygo/handler/api"
)
func main() {
handler.HandleRequestFn = handleRequest
}
// handleRequest implements a simple HTTP router.
func handleRequest(req api.Request, resp api.Response) (next bool, reqCtx uint32) {
// If the URI starts with /host, trim it and dispatch to the next handler.
if uri := req.GetURI(); strings.HasPrefix(uri, "/host") {
req.SetURI(uri[5:])
next = true // proceed to the next handler on the host.
return
}
// Serve a static response
resp.Headers().Set("Content-Type", "text/plain")
resp.Body().WriteString("hello")
return // skip the next handler, as we wrote a response.
}
If using TinyGo, compile as shown below and set the spec metadata field named “url” to the location of the output (for example, file://router.wasm):
tinygo build -o router.wasm -scheduler=none --no-debug -target=wasi router.go
Wasm guestConfig example
Here is an example of how to use guestConfig to pass configurations to Wasm. In Wasm code, you can use the function handler.Host.GetConfig defined in the guest SDK to get the configuration. In the following example, the Wasm middleware parses the execution environment from the JSON config defined in the component.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: wasm
spec:
type: middleware.http.wasm
version: v1
metadata:
- name: url
value: "file://router.wasm"
- name: guestConfig
value: {"environment":"production"}
Here’s an example in TinyGo:
package main
import (
"encoding/json"
"github.com/http-wasm/http-wasm-guest-tinygo/handler"
"github.com/http-wasm/http-wasm-guest-tinygo/handler/api"
)
type Config struct {
Environment string `json:"environment"`
}
func main() {
// get config bytes, which is the value of guestConfig defined in the component.
configBytes := handler.Host.GetConfig()
config := Config{}
json.Unmarshal(configBytes, &config)
handler.Host.Log(api.LogLevelInfo, "Config environment: "+config.Environment)
}
Related links
6 - Dapr resource specs
6.1 - Component spec
Dapr defines and registers components using a resource specification. All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes.
Typically, components are restricted to a particular namespace, and access is restricted through scopes to a particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace from context when applying to Kubernetes.
Note
The exception to this rule is in self-hosted mode, where daprd ingests component resources when the namespace field is omitted. However, the security profile is moot, as daprd has access to the manifest anyway, unlike in Kubernetes.
Format
apiVersion: dapr.io/v1alpha1
kind: Component
auth:
secretstore: <REPLACE-WITH-SECRET-STORE-NAME>
metadata:
name: <REPLACE-WITH-COMPONENT-NAME>
namespace: <REPLACE-WITH-COMPONENT-NAMESPACE>
spec:
type: <REPLACE-WITH-COMPONENT-TYPE>
version: v1
initTimeout: <REPLACE-WITH-TIMEOUT-DURATION>
ignoreErrors: <REPLACE-WITH-BOOLEAN>
metadata:
- name: <REPLACE-WITH-METADATA-NAME>
value: <REPLACE-WITH-METADATA-VALUE>
scopes:
- <REPLACE-WITH-APPID>
- <REPLACE-WITH-APPID>
Spec fields
Field | Required | Details | Example |
---|---|---|---|
apiVersion | Y | The version of the Dapr (and Kubernetes if applicable) API you are calling | dapr.io/v1alpha1 |
kind | Y | The type of resource. For components it must always be Component | Component |
auth | N | The name of a secret store where secretKeyRef in the metadata looks up the names of secrets used in the component | See How-to: Reference secrets in components |
scopes | N | The applications the component is limited to, specified by their app IDs | order-processor , checkout |
metadata | - | Information about the component registration | |
metadata.name | Y | The name of the component | prod-statestore |
metadata.namespace | N | The namespace for the component for hosting environments with namespaces | myapp-namespace |
spec | - | Detailed information on the component resource | |
spec.type | Y | The type of the component | state.redis |
spec.version | Y | The version of the component | v1 |
spec.initTimeout | N | The timeout duration for the initialization of the component. Default is 5s | 5m , 1h , 20s |
spec.ignoreErrors | N | Tells the Dapr sidecar to continue initialization if the component fails to load. Default is false | false |
spec.metadata | - | A key/value pair of component-specific configuration. See your component definition for fields | |
spec.metadata.name | Y | The name of the component-specific property and its value | - name: secretsFile value: secrets.json |
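To show how these fields fit together, here is a sketch of a complete component manifest using the example values from the table; the redisHost metadata entry is assumed purely as a plausible component-specific property for a Redis state store and is not a full Redis configuration:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: prod-statestore
  namespace: myapp-namespace
spec:
  type: state.redis
  version: v1
  initTimeout: 20s
  ignoreErrors: false
  metadata:
  - name: redisHost
    value: localhost:6379
scopes:
- order-processor
- checkout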
Templated metadata values
Metadata values can contain template tags that are resolved on Dapr sidecar startup. The table below shows the current templating tags that can be used in components.
Tag | Details | Example use case |
---|---|---|
{uuid} | Randomly generated UUIDv4 | When you need a unique identifier in self-hosted mode; for example, multiple application instances consuming a shared MQTT subscription |
{podName} | Name of the pod containing the Dapr sidecar | Use to have a persisted behavior, where the ConsumerID does not change on restart when using StatefulSets in Kubernetes |
{namespace} | Namespace where the Dapr sidecar resides combined with its appId | Using a shared clientId when multiple application instances consume a Kafka topic in Kubernetes |
{appID} | The configured appID of the resource containing the Dapr sidecar | Having a shared clientId when multiple application instances consume a Kafka topic in self-hosted mode |
Below is an example of using the {uuid}
tag in an MQTT pubsub component. Note that multiple template tags can be used in a single metadata value.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.mqtt3
version: v1
metadata:
- name: consumerID
value: "{uuid}"
- name: url
value: "tcp://admin:public@localhost:1883"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
Related links
6.2 - Subscription spec
The Subscription
Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file.
Note
Any subscription can be restricted to a particular namespace, and access can be restricted through scopes to a particular set of applications.
This guide demonstrates two subscription API versions:
- v2alpha1 (default spec)
- v1alpha1 (deprecated)
v2alpha1 format
The following is the basic v2alpha1 spec for a Subscription resource. v2alpha1 is the default spec for the subscription API.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: <REPLACE-WITH-NAME>
spec:
topic: <REPLACE-WITH-TOPIC-NAME> # Required
routes: # Required
rules:
- match: <REPLACE-WITH-CEL-FILTER>
path: <REPLACE-WITH-PATH>
pubsubname: <REPLACE-WITH-PUBSUB-NAME> # Required
deadLetterTopic: <REPLACE-WITH-DEADLETTERTOPIC-NAME> # Optional
bulkSubscribe: # Optional
enabled: <REPLACE-WITH-BOOLEAN-VALUE>
maxMessagesCount: <REPLACE-WITH-VALUE>
maxAwaitDurationMs: <REPLACE-WITH-VALUE>
scopes:
- <REPLACE-WITH-SCOPED-APPIDS>
Spec fields
Field | Required | Details | Example |
---|---|---|---|
topic | Y | The name of the topic to which your component subscribes. | orders |
routes | Y | The routes configuration for this topic, including the condition for sending a message to a specific path. Includes the match and path fields. | match: event.type == "widget" path: /widgets |
pubsubname | N | The name of your pub/sub component. | pubsub |
deadLetterTopic | N | The name of the dead letter topic that forwards undeliverable messages. | poisonMessages |
bulkSubscribe | N | Enable bulk subscribe properties. | true , false |
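As a concrete illustration of the fields above, the following sketch subscribes a hypothetical checkout app to the orders topic on a pub/sub component named pubsub and routes widget events to a dedicated path; all names are placeholders:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  topic: orders
  routes:
    rules:
    - match: event.type == "widget"
      path: /widgets
  pubsubname: pubsub
  deadLetterTopic: poisonMessages
scopes:
- checkout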
v1alpha1 format
The following is the basic v1alpha1 spec for a Subscription resource. v1alpha1 is now deprecated.
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
name: <REPLACE-WITH-RESOURCE-NAME>
spec:
topic: <REPLACE-WITH-TOPIC-NAME> # Required
route: <REPLACE-WITH-ROUTE-NAME> # Required
pubsubname: <REPLACE-WITH-PUBSUB-NAME> # Required
deadLetterTopic: <REPLACE-WITH-DEAD-LETTER-TOPIC-NAME> # Optional
bulkSubscribe: # Optional
enabled: <REPLACE-WITH-BOOLEAN-VALUE>
maxMessagesCount: <REPLACE-WITH-VALUE>
maxAwaitDurationMs: <REPLACE-WITH-VALUE>
scopes:
- <REPLACE-WITH-SCOPED-APPIDS>
Spec fields
Field | Required | Details | Example |
---|---|---|---|
topic | Y | The name of the topic to which your component subscribes. | orders |
route | Y | The endpoint to which all topic messages are sent. | /checkout |
pubsubname | N | The name of your pub/sub component. | pubsub |
deadLetterTopic | N | The name of the dead letter topic that forwards undeliverable messages. | poisonMessages |
bulkSubscribe | N | Enable bulk subscribe properties. | true , false |
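For comparison, an equivalent v1alpha1 subscription routes every message on the topic to a single endpoint; this sketch reuses the assumed names from the v2alpha1 example above:
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  topic: orders
  route: /checkout
  pubsubname: pubsub
  deadLetterTopic: poisonMessages
scopes:
- checkout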
Related links
6.3 - Resiliency spec
The Resiliency
Dapr resource allows you to define and apply fault tolerance resiliency policies. Resiliency specs are applied when the Dapr sidecar starts.
Note
Any resiliency resource can be restricted to a particular namespace, and access can be restricted through scopes to a particular set of applications.
Format
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
name: <REPLACE-WITH-RESOURCE-NAME>
version: v1alpha1
scopes:
- <REPLACE-WITH-SCOPED-APPIDS>
spec:
policies: # Required
timeouts:
timeoutName: <REPLACE-WITH-TIME-VALUE> # Replace with any unique name
retries:
retryName: # Replace with any unique name
policy: <REPLACE-WITH-VALUE>
duration: <REPLACE-WITH-VALUE>
maxInterval: <REPLACE-WITH-VALUE>
maxRetries: <REPLACE-WITH-VALUE>
matching:
httpStatusCodes: <REPLACE-WITH-VALUE>
gRPCStatusCodes: <REPLACE-WITH-VALUE>
circuitBreakers:
circuitBreakerName: # Replace with any unique name
maxRequests: <REPLACE-WITH-VALUE>
timeout: <REPLACE-WITH-VALUE>
trip: <REPLACE-WITH-CONSECUTIVE-FAILURE-VALUE>
targets: # Required
apps:
appID: # Replace with scoped app ID
timeout: <REPLACE-WITH-TIMEOUT-NAME>
retry: <REPLACE-WITH-RETRY-NAME>
circuitBreaker: <REPLACE-WITH-CIRCUIT-BREAKER-NAME>
actors:
myActorType:
timeout: <REPLACE-WITH-TIMEOUT-NAME>
retry: <REPLACE-WITH-RETRY-NAME>
circuitBreaker: <REPLACE-WITH-CIRCUIT-BREAKER-NAME>
circuitBreakerCacheSize: <REPLACE-WITH-VALUE>
components:
componentName: # Replace with your component name
outbound:
timeout: <REPLACE-WITH-TIMEOUT-NAME>
retry: <REPLACE-WITH-RETRY-NAME>
circuitBreaker: <REPLACE-WITH-CIRCUIT-BREAKER-NAME>
Spec fields
Field | Required | Details | Example |
---|---|---|---|
policies | Y | The configuration of resiliency policies, including timeouts, retries, and circuitBreakers. See more examples with all of the built-in policies | timeout: general retry: retryForever circuit breaker: simpleCB |
targets | Y | The configuration for the applications, actors, or components that use the resiliency policies. See more examples in the resiliency targets guide | apps components actors |
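Putting the pieces together, the following sketch defines the general, retryForever, and simpleCB policies named in the table and applies them to a hypothetical order-processor app; all names and values are illustrative:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
- order-processor
spec:
  policies:
    timeouts:
      general: 5s
    retries:
      retryForever:
        policy: constant
        duration: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 60s
        trip: consecutiveFailures > 5
  targets:
    apps:
      order-processor:
        timeout: general
        retry: retryForever
        circuitBreaker: simpleCB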
Related links
6.4 - HTTPEndpoint spec
The HTTPEndpoint
is a Dapr resource that is used to enable the invocation of non-Dapr endpoints from a Dapr application.
Note
Any HTTPEndpoint resource can be restricted to a particular namespace, and access can be restricted through scopes to a particular set of applications.
Format
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
name: <NAME>
spec:
baseUrl: <REPLACE-WITH-BASEURL> # Required. Use "http://" or "https://" prefix.
headers: # Optional
- name: <REPLACE-WITH-A-HEADER-NAME>
value: <REPLACE-WITH-A-HEADER-VALUE>
- name: <REPLACE-WITH-A-HEADER-NAME>
secretKeyRef:
name: <REPLACE-WITH-SECRET-NAME>
key: <REPLACE-WITH-SECRET-KEY>
clientTLS:
rootCA:
secretKeyRef:
name: <REPLACE-WITH-SECRET-NAME>
key: <REPLACE-WITH-SECRET-KEY>
certificate:
secretKeyRef:
name: <REPLACE-WITH-SECRET-NAME>
key: <REPLACE-WITH-SECRET-KEY>
privateKey:
secretKeyRef:
name: <REPLACE-WITH-SECRET-NAME>
key: <REPLACE-WITH-SECRET-KEY>
scopes: # Optional
- <REPLACE-WITH-SCOPED-APPIDS>
auth: # Optional
secretStore: <REPLACE-WITH-SECRETSTORE>
Spec fields
Field | Required | Details | Example |
---|---|---|---|
baseUrl | Y | Base URL of the non-Dapr endpoint | "https://api.github.com" , "http://api.github.com" |
headers | N | HTTP request headers for service invocation | name: "Accept-Language" value: "en-US" name: "Authorization" secretKeyRef.name: "my-secret" secretKeyRef.key: "myGithubToken" |
clientTLS | N | Enables TLS authentication to an endpoint with any standard combination of root certificate, client certificate and private key |
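For example, an HTTPEndpoint for the GitHub API using the header values from the table might look like the following sketch; the resource name, secret store name, and scoped app ID are assumptions:
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
  name: github
spec:
  baseUrl: "https://api.github.com"
  headers:
  - name: "Accept-Language"
    value: "en-US"
  - name: "Authorization"
    secretKeyRef:
      name: my-secret
      key: myGithubToken
scopes:
- checkout
auth:
  secretStore: kubernetes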
Related links
6.5 - Configuration spec
The Configuration
is a Dapr resource that is used to configure the Dapr sidecar, control plane, and others.
Sidecar format
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: <REPLACE-WITH-NAME>
namespace: <REPLACE-WITH-NAMESPACE>
spec:
api:
allowed:
- name: <REPLACE-WITH-API>
version: <VERSION>
protocol: <HTTP-OR-GRPC>
tracing:
samplingRate: <REPLACE-WITH-INTEGER>
stdout: true
otel:
endpointAddress: <REPLACE-WITH-ENDPOINT-ADDRESS>
isSecure: <TRUE-OR-FALSE>
protocol: <HTTP-OR-GRPC>
metrics:
enabled: <TRUE-OR-FALSE>
rules:
- name: <METRIC-NAME>
labels:
- name: <LABEL-NAME>
regex: {}
recordErrorCodes: <TRUE-OR-FALSE>
latencyDistributionBuckets:
- <BUCKET-VALUE-MS-0>
- <BUCKET-VALUE-MS-1>
http:
increasedCardinality: <TRUE-OR-FALSE>
pathMatching:
- <PATH-A>
- <PATH-B>
excludeVerbs: <TRUE-OR-FALSE>
httpPipeline: # for incoming http calls
handlers:
- name: <HANDLER-NAME>
type: <HANDLER-TYPE>
appHttpPipeline: # for outgoing http calls
handlers:
- name: <HANDLER-NAME>
type: <HANDLER-TYPE>
nameResolution:
component: <NAME-OF-NAME-RESOLUTION-COMPONENT>
version: <NAME-RESOLUTION-COMPONENT-VERSION>
configuration:
<NAME-RESOLUTION-COMPONENT-METADATA-CONFIGURATION>
secrets:
scopes:
- storeName: <NAME-OF-SCOPED-STORE>
defaultAccess: <ALLOW-OR-DENY>
deniedSecrets: <REPLACE-WITH-DENIED-SECRET>
components:
deny:
- <COMPONENT-TO-DENY>
accessControl:
defaultAction: <ALLOW-OR-DENY>
trustDomain: <REPLACE-WITH-TRUST-DOMAIN>
policies:
- appId: <APP-NAME>
defaultAction: <ALLOW-OR-DENY>
trustDomain: <REPLACE-WITH-TRUST-DOMAIN>
namespace: "default"
operations:
- name: <OPERATION-NAME>
httpVerb: ['POST', 'GET']
action: <ALLOW-OR-DENY>
Spec fields
Field | Required | Details | Example |
---|---|---|---|
accessControl | N | Applied to the Dapr sidecar for the called application. Enables the configuration of policies that restrict what operations calling applications can perform (via service invocation) on the called application. | Learn more about the accessControl configuration. |
api | N | Used to enable only the Dapr sidecar APIs used by the application. | Learn more about the api configuration. |
httpPipeline | N | Configure API middleware pipelines | Middleware pipeline configuration overview Learn more about the httpPipeline configuration. |
appHttpPipeline | N | Configure application middleware pipelines | Middleware pipeline configuration overview Learn more about the appHttpPipeline configuration. |
components | N | Used to specify a denylist of component types that can’t be initialized. | Learn more about the components configuration. |
features | N | Defines the preview features that are enabled/disabled. | Learn more about the features configuration. |
logging | N | Configure how logging works in the Dapr runtime. | Learn more about the logging configuration. |
metrics | N | Enable or disable metrics for an application. | Learn more about the metrics configuration. |
nameResolution | N | Name resolution configuration spec for the service invocation building block. | Learn more about the nameResolution configuration per components. |
secrets | N | Limit the secrets to which your Dapr application has access. | Learn more about the secrets configuration. |
tracing | N | Turns on tracing for an application. | Learn more about the tracing configuration. |
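As a minimal sidecar example, the following sketch enables tracing and metrics and wires in the routerchecker middleware defined earlier in this reference; the OTLP endpoint address and sampling rate are assumptions:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  tracing:
    samplingRate: "1"
    stdout: true
    otel:
      endpointAddress: "localhost:4317"
      isSecure: false
      protocol: grpc
  metrics:
    enabled: true
  httpPipeline:
    handlers:
    - name: routerchecker
      type: middleware.http.routerchecker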
Control plane format
The daprsystem
configuration file installed with Dapr applies global settings and is only set up when Dapr is deployed to Kubernetes.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: daprsystem
namespace: default
spec:
mtls:
enabled: true
allowedClockSkew: 15m
workloadCertTTL: 24h
Spec fields
Field | Required | Details | Example |
---|---|---|---|
mtls | N | Defines the mTLS configuration | allowedClockSkew: 15m workloadCertTTL: 24h Learn more about the mtls configuration. |