Detailed documentation on the Alibaba Cloud DingTalk binding component
Set up the Dapr component
To set up an Alibaba Cloud DingTalk binding, create a component of type bindings.dingtalk.webhook. See this guide on how to create and apply a binding configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
2 - Alibaba Cloud Log Storage Service binding spec
Detailed documentation on the Alibaba Cloud Log Storage binding component
Component format
To set up an Alibaba Cloud SLS binding, create a component of type bindings.alicloud.sls. See this guide on how to create and apply a binding configuration.
To perform a log store operation, invoke the binding with a POST method and the following JSON body:
{"metadata":{"project":"your-sls-project-name","logstore":"your-sls-logstore-name","topic":"your-sls-topic-name","source":"your-sls-source"},"data":{"custome-log-filed":"any other log info"},"operation":"create"}
Note
Note: the values of the project, logstore, topic, and source properties must be provided in the metadata section of the request.
Example
On Windows (cmd prompt):

```bash
curl -X POST -H "Content-Type: application/json" -d "{\"metadata\":{\"project\":\"project-name\",\"logstore\":\"logstore-name\",\"topic\":\"topic-name\",\"source\":\"source-name\"},\"data\":{\"log-field\":\"log info\"},\"operation\":\"create\"}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```

On Linux/macOS:

```bash
curl -X POST -H "Content-Type: application/json" -d '{"metadata":{"project":"project-name","logstore":"logstore-name","topic":"topic-name","source":"source-name"},"data":{"log-field":"log info"},"operation":"create"}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
Response format
Because the Alibaba Cloud SLS producer API is asynchronous, this binding returns no response: there is no callback interface to report success or failure, and failures are only recorded in the console log.
3 - Alibaba Cloud Object Storage Service binding spec
Detailed documentation on the Alibaba Cloud Object Storage binding component
Component format
To set up an Alibaba Cloud Object Storage binding, create a component of type bindings.alicloud.oss. See this guide on how to create and apply a binding configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
Detailed documentation on the Alibaba Tablestore binding component
Component format
To set up an Alibaba Cloud Tablestore binding, create a component of type bindings.alicloud.tablestore. See this guide on how to create and apply a binding configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.
Detailed documentation on the Apple Push Notification Service (APNs) binding component

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| development | N | Output | Tells the binding which APNs service to use. Set to "true" to use the development service or "false" to use the production service. Default: "true" | "true" |
| key-id | Y | Output | The identifier for the private key from the Apple Developer Portal | "private-key-id" |
| team-id | Y | Output | The identifier for the organization or author from the Apple Developer Portal | "team-id" |
| private-key | Y | Output | A PKCS #8-formatted private key. It is intended that the private key is stored in the secret store and not exposed directly in the configuration. See here for more details | "pem file" |
Private key
The APNS binding needs a cryptographic private key in order to generate authentication tokens for the APNS service.
The private key can be generated from the Apple Developer Portal and is provided as a PKCS #8 file with the private key stored in PEM format.
The private key should be stored in the Dapr secret store and not stored directly in the binding’s configuration file.
A sample configuration file for the APNS binding is shown below:
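A minimal sketch, using the fields documented above; the component name and the secret store reference (apns-secrets and its key) are placeholders:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: apns
spec:
  type: bindings.apns
  version: v1
  metadata:
  - name: development
    value: "false"
  - name: key-id
    value: "PRIVATE-KEY-ID"
  - name: team-id
    value: "TEAM-ID"
  - name: private-key
    # Reference the PKCS #8 private key from a secret store, per the guidance above
    secretKeyRef:
      name: apns-secrets
      key: private-key
auth:
  secretStore: <SECRET_STORE_NAME>
```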
This component supports output binding with the following operations:
create
Push notification format
The APNs binding is a pass-through wrapper over the Apple Push Notification service: it sends requests directly to the APNs service without any translation.
It is therefore important to understand the payload for push notifications expected by the APNs service.
The payload format is documented here.
Request format
{"data":{"aps":{"alert":{"title":"New Updates!","body":"There are new updates for your review"}}},"metadata":{"device-token":"PUT-DEVICE-TOKEN-HERE","apns-push-type":"alert","apns-priority":"10","apns-topic":"com.example.helloworld"},"operation":"create"}
The data object contains a complete push notification specification as described in the Apple documentation. The data object will be sent directly to the APNs service.
Besides the device-token value, the HTTP headers specified in the Apple documentation can be sent as metadata fields and will be included in the HTTP request to the APNs service.
When configuring the binding, store secrets in a secret store rather than as plain strings, as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| table | Y | Output | The DynamoDB table name | "items" |
| region | Y | Output | The specific AWS region the AWS DynamoDB instance is deployed in | "us-east-1" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports output binding with the following operations:
create
Detailed documentation on the AWS Kinesis binding component
Component format
To set up an AWS Kinesis binding, create a component of type bindings.aws.kinesis. See this guide on how to create and apply a binding configuration.
See this for instructions on how to set up AWS Kinesis data streams.
See Authenticating to AWS for information about authentication-related attributes
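A minimal component sketch with placeholder values (the field names follow the table below):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.kinesis
  version: v1
  metadata:
  - name: streamName
    value: "KINESIS_STREAM_NAME"
  - name: consumerName
    value: "KINESIS_CONSUMER_NAME"
  - name: mode
    value: "shared"
  - name: region
    value: "us-east-1"
  - name: accessKey
    value: "AWS_ACCESS_KEY_ID"
  - name: secretKey
    value: "AWS_SECRET_ACCESS_KEY"
  - name: direction
    value: "input, output"
```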
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| mode | N | Input | The Kinesis stream mode. shared - shared throughput; extended - extended/enhanced fanout. More details are here. Defaults to "shared" | "shared", "extended" |
| streamName | Y | Input/Output | The AWS Kinesis Stream Name | "stream" |
| consumerName | Y | Input | The AWS Kinesis Consumer Name | "myconsumer" |
| region | Y | Output | The specific AWS region the AWS Kinesis instance is deployed in | "us-east-1" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
Detailed documentation on the AWS S3 binding component
Component format
To set up an AWS S3 binding, create a component of type bindings.aws.s3. This binding works with other S3-compatible services, such as Minio. See this guide on how to create and apply a binding configuration.
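A minimal component sketch with placeholder values (the field names follow the table below):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: region
    value: "us-east-1"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: decodeBase64
    value: "false"
  - name: forcePathStyle
    value: "false"
```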
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| bucket | Y | Output | The name of the S3 bucket to write to | "bucket" |
| region | Y | Output | The specific AWS region | "us-east-1" |
| endpoint | N | Output | The specific AWS endpoint | "s3.us-east-1.amazonaws.com" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
| forcePathStyle | N | Output | Currently Amazon S3 SDK supports virtual hosted-style and path-style access. "true" is path-style format like "https://<endpoint>/<your bucket>/<key>". "false" is hosted-style format like "https://<your bucket>.<endpoint>/<key>". Defaults to "false" | "true", "false" |
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to "false" | "true", "false" |
| encodeBase64 | N | Output | Configuration to encode base64 file content before returning the content. (In case of opening a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to "false" | "true", "false" |
| disableSSL | N | Output | Allows connecting to non-https:// endpoints. Defaults to "false" | "true", "false" |
| insecureSSL | N | Output | When connecting to https:// endpoints, accepts invalid or self-signed certificates. Defaults to "false" | "true", "false" |

Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
S3 Bucket Creation
Using with Minio
Minio is a service that exposes local storage as S3-compatible block storage, and it’s a popular alternative to S3 especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:
- Set endpoint to the address of the Minio server, including protocol (http:// or https://) and the optional port at the end. For example, http://minio.local:9000 (the values depend on your environment).
- forcePathStyle must be set to true.
- The value for region is not important; you can set it to us-east-1.
- Depending on your environment, you may need to set disableSSL to true if you’re connecting to Minio using a non-secure connection (using the http:// protocol). If you are using a secure connection (https:// protocol) but with a self-signed certificate, you may need to set insecureSSL to true.
To use the S3 component, you need to use an existing bucket. The example above uses a LocalStack Initialization Hook to setup the bucket.
To use LocalStack with your S3 binding, you need to provide the endpoint configuration in the component metadata. The endpoint is unnecessary when running against production AWS.
To presign an object with a specified time-to-live, use the presignTTL metadata key on a create request.
Valid values for presignTTL are Go duration strings.
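For example (the binding name, port, content, and 15-minute TTL are placeholders):

```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "presignTTL": "15m" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```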
The response body contains the following example JSON:
{"location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>","versionID":"<version ID if Bucket Versioning is enabled>","presignURL":"https://<your bucket>.s3.<your region>.amazonaws.com/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWZ7B6WCRGMKFGQ%2F20180210%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180210T171315Z&X-Amz-Expires=1800&X-Amz-Signature=12b74b0788aa036bc7c3d03b3f20c61f1f91cc9ad8873e3314255dc479a25351&X-Amz-SignedHeaders=host"}
Examples
Save text to a random generated UUID file
On Windows, utilize cmd prompt (PowerShell has different escaping mechanism)
The response body will contain the following JSON:
{"location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>","versionID":"<version ID if Bucket Versioning is enabled"}
Presign an existing object
To presign an existing S3 object with a specified time-to-live, use the presignTTL and key metadata keys on a presign request.
Valid values for presignTTL are Go duration strings.
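For example (the binding name, port, key, and TTL are placeholders):

```bash
curl -d '{ "operation": "presign", "metadata": { "presignTTL": "15m", "key": "my-test-file.txt" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```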
maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
prefix - (optional) limits the response to keys that begin with the specified prefix.
marker - (optional) where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. The marker can be any key in the bucket.
The marker value may then be used in a subsequent call to request the next set of list items.
delimiter - (optional) A delimiter is a character you use to group keys.
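For example, a request returning at most 10 keys beginning with a given prefix, grouped by the / delimiter (values are placeholders):

```json
{
  "operation": "list",
  "data": {
    "maxResults": 10,
    "prefix": "file",
    "delimiter": "/"
  }
}
```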
Response
The response body contains the list of found objects.
The list of objects will be returned as JSON array in the following form:
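For illustration only, a single-entry result might look like the following; the field names follow the AWS S3 list-objects output, and the values are placeholders:

```json
[
  {
    "ETag": "\"a1b2c3d4e5f6\"",
    "Key": "file1.txt",
    "LastModified": "2023-01-01T12:00:00Z",
    "Size": 1024,
    "StorageClass": "STANDARD"
  }
]
```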
Detailed documentation on the AWS SES binding component
When configuring this binding, it is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| region | N | Output | The specific AWS region | "eu-west-1" |
| accessKey | N | Output | The AWS Access Key to access this resource | "key" |
| secretKey | N | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
| emailFrom | N | Output | If set, this specifies the email address of the sender. See also | "me@example.com" |
| emailTo | N | Output | If set, this specifies the email address of the receiver. See also | "me@example.com" |
| emailCc | N | Output | If set, this specifies the email address to CC in. See also | "me@example.com" |
| emailBcc | N | Output | If set, this specifies the email address to BCC in. See also | "me@example.com" |
| subject | N | Output | If set, this specifies the subject of the email message. See also | "subject of mail" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports output binding with the following operations:
create
Example request
You can specify any of the following optional metadata properties with each request:
- emailFrom
- emailTo
- emailCc
- emailBcc
- subject
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom, emailTo, emailCc, emailBcc and subject fields.
The emailTo, emailCc and emailBcc fields can contain multiple email addresses separated by a semicolon.
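For example (addresses, subject, and body are placeholders):

```json
{
  "operation": "create",
  "metadata": {
    "emailTo": "recipient@example.com",
    "emailCc": "cc@example.com",
    "subject": "Email subject"
  },
  "data": "Testing Dapr bindings"
}
```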
Detailed documentation on the AWS SNS binding component
When configuring this binding, it is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| topicArn | Y | Output | The SNS topic name | "arn:::topicarn" |
| region | Y | Output | The specific AWS region | "us-east-1" |
| endpoint | N | Output | The specific AWS endpoint | "sns.us-east-1.amazonaws.com" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports output binding with the following operations:
create
Detailed documentation on the AWS SQS binding component
When configuring this binding, it is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| queueName | Y | Input/Output | The SQS queue name | "myqueue" |
| region | Y | Input/Output | The specific AWS region | "us-east-1" |
| accessKey | Y | Input/Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Input/Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Input/Output | The AWS session token to use | "sessionToken" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you’re using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide AWS access-key, secret-key, and tokens in the definition of the component spec you’re using.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
Detailed documentation on the Azure Blob Storage binding component
Component format
To set up the Azure Blob Storage binding, create a component of type bindings.azure.blobstorage. See this guide on how to create and apply a binding configuration.
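A minimal component sketch with placeholder values (the field names follow the table below):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: "myexampleaccount"
  - name: accountKey
    value: "*****************"
  - name: containerName
    value: "myexamplecontainer"
```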
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| accountName | Y | Input/Output | The name of the Azure Storage account | "myexampleaccount" |
| accountKey | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication. | "access-key" |
| containerName | Y | Output | The name of the Blob Storage container to write to | myexamplecontainer |
| endpoint | N | Input/Output | Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port. | "http://127.0.0.1:10000" |
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). Defaults to false | true, false |
| getBlobRetryCount | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader. Defaults to 10 | 1, 2 |
| publicAccessLevel | N | Output | Specifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to none | blob, container, none |
Microsoft Entra ID authentication
The Azure Blob Storage binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Binding support
This component supports output binding with the following operations:
create
get
delete
list
maxResults - (optional) specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxresults the server will return up to 5,000 items.
prefix - (optional) filters the results to return only blobs whose names begin with the specified prefix.
marker - (optional) a string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items.
include - (optional) Specifies one or more datasets to include in the response:
snapshots: Specifies that snapshots should be included in the enumeration. Snapshots are listed from oldest to newest in the response. Defaults to: false
metadata: Specifies that blob metadata be returned in the response. Defaults to: false
uncommittedBlobs: Specifies that blobs for which blocks have been uploaded, but which have not been committed using Put Block List, be included in the response. Defaults to: false
copy: Version 2012-02-12 and newer. Specifies that metadata related to any current or previous Copy Blob operation should be included in the response. Defaults to: false
deleted: Version 2017-07-29 and newer. Specifies that soft deleted blobs should be included in the response. Defaults to: false
Response
The response body contains the list of found blobs as well as the following HTTP headers:
marker - the next marker which can be used in a subsequent call to request the next set of list items. See the marker description on the data property of the binding input.
number - the number of found blobs
The list of blobs will be returned as JSON array in the following form:
By default, the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to it. This is configurable in the metadata property of the message (all optional).
Applications publishing to an Azure Blob Storage output binding should send a message with the following format:
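For example, a sketch assuming the blobName and contentType metadata keys (values are placeholders):

```json
{
  "operation": "create",
  "data": "file content",
  "metadata": {
    "blobName": "filename.txt",
    "contentType": "text/plain"
  }
}
```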
Detailed documentation on the Azure Cosmos DB (Gremlin API) binding component
Component format
To set up an Azure Cosmos DB (Gremlin API) binding, create a component of type bindings.azure.cosmosdb.gremlinapi. See this guide on how to create and apply a binding configuration.
Detailed documentation on the Azure Cosmos DB (SQL API) binding component
Component format
To set up an Azure Cosmos DB binding, create a component of type bindings.azure.cosmosdb. See this guide on how to create and apply a binding configuration.
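A minimal component sketch with placeholder values (the field names follow the table below; OrderId is an assumed partition key):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: "https://******.documents.azure.com:443/"
  - name: masterKey
    value: "*****************"
  - name: database
    value: "OrderDb"
  - name: collection
    value: "Orders"
  - name: partitionKey
    value: "OrderId"
```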
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| url | Y | Output | The Cosmos DB url | "https://******.documents.azure.com:443/" |
| masterKey | Y | Output | The Cosmos DB account master key | "master-key" |
| database | Y | Output | The name of the Cosmos DB database | "OrderDb" |
| collection | Y | Output | The name of the container inside the database. | "Orders" |
| partitionKey | Y | Output | The name of the key to extract from the payload (document to be created) that is used as the partition key. This name must match the partition key specified upon creation of the Cosmos DB container. | "OrderId" |
The Azure Cosmos DB binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
You can read additional information for setting up Cosmos DB with Azure AD authentication in the section below.
Binding support
This component supports output binding with the following operations:
create
Best Practices for Production Use
Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the Cosmos DB documentation)
Therefore several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:
Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by scoping your components to specific applications.
Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
Increase the initTimeout value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is 5s and should be increased. When using Kubernetes, increasing this value may also require an update to your Readiness and Liveness probes.
The output binding create operation requires the following keys to exist in the payload of every document to be created:
- id: a unique ID for the document to be created
- <partitionKey>: the name of the partition key specified via the spec.partitionKey in the component definition. This must also match the partition key specified upon creation of the Cosmos DB container.
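For example, using OrderId as an assumed partition key (as in the component sketch above), every payload must include both fields:

```json
{
  "operation": "create",
  "data": {
    "id": "1007",
    "OrderId": "1007",
    "value": "some value"
  }
}
```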
Setting up Cosmos DB for authenticating with Azure AD
When using the Dapr Cosmos DB binding and authenticating with Azure AD, you need to perform a few additional steps to set up your environment.
Prerequisites:
You need a Service Principal created as per the instructions in the authenticating to Azure page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for azureClientId in the metadata).
The scripts below are optimized for a bash or zsh shell
When using the Cosmos DB binding, you don’t need to create stored procedures as you do in the case of the Cosmos DB state store.
Granting your Azure AD application access to Cosmos DB
You can find more information on the official documentation, including instructions to assign more granular permissions.
In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you’re going to use a built-in role, “Cosmos DB Built-in Data Contributor”, which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.
```bash
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"

az cosmosdb sql role assignment create \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --scope "/" \
  --principal-id "$PRINCIPAL_ID" \
  --role-definition-id "$ROLE_ID"
```
Detailed documentation on the Azure Event Grid binding component
Component format
To set up an Azure Event Grid binding, create a component of type bindings.azure.eventgrid. See this guide on how to create and apply a binding configuration.
See this for the documentation for Azure Event Grid.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.eventgrid
  version: v1
  metadata:
  # Required Output Binding Metadata
  - name: accessKey
    value: "[AccessKey]"
  - name: topicEndpoint
    value: "[TopicEndpoint]"
  # Required Input Binding Metadata
  - name: azureTenantId
    value: "[AzureTenantId]"
  - name: azureSubscriptionId
    value: "[AzureSubscriptionId]"
  - name: azureClientId
    value: "[ClientId]"
  - name: azureClientSecret
    value: "[ClientSecret]"
  - name: subscriberEndpoint
    value: "[SubscriberEndpoint]"
  - name: handshakePort
    # Make sure to pass this as a string, with quotes around the value
    value: "[HandshakePort]"
  - name: scope
    value: "[Scope]"
  # Optional Input Binding Metadata
  - name: eventSubscriptionName
    value: "[EventSubscriptionName]"
  # Optional metadata
  - name: direction
    value: "input, output"
```
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| accessKey | Y | Output | The Access Key to be used for publishing an Event Grid Event to a custom topic | "accessKey" |
| topicEndpoint | Y | Output | The topic endpoint in which this output binding should publish events | "topic-endpoint" |
| azureTenantId | Y | Input | The Azure tenant ID of the Event Grid resource | "tenantId" |
| azureSubscriptionId | Y | Input | The Azure subscription ID of the Event Grid resource | "subscriptionId" |
| azureClientId | Y | Input | The client ID that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | "clientId" |
| azureClientSecret | Y | Input | The client secret that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | "clientSecret" |
| subscriberEndpoint | Y | Input | The HTTPS endpoint of the webhook Event Grid sends events (formatted as Cloud Events) to. If you're not re-writing URLs on ingress, it should be in the form of "https://[YOUR HOSTNAME]/<path>". If testing on your local machine, you can use something like ngrok to create a public endpoint. | "https://[YOUR HOSTNAME]/<path>" |
| handshakePort | Y | Input | The container port that the input binding listens on when receiving events on the webhook | "9000" |
| scope | Y | Input | The identifier of the resource to which the event subscription needs to be created or updated. See the scope section for more details | "/subscriptions/{subscriptionId}/" |
| eventSubscriptionName | N | Input | The name of the event subscription. Event subscription names must be between 3 and 64 characters long and should use alphanumeric letters only | "name" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |
Scope
Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, a resource group, a top-level resource belonging to a resource provider namespace, or an Event Grid topic. For example:
/subscriptions/{subscriptionId}/ for a subscription
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName} for a resource group
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} for a resource
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName} for an Event Grid topic
Values in braces {} should be replaced with actual values.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create: publishes a message on the Event Grid topic
For the first purpose, you will need to create an Azure Service Principal. After creating it, take note of the Microsoft Entra ID application’s clientID (a UUID), and run the following script with the Azure CLI:
```bash
# Set the client ID of the app you created
CLIENT_ID="..."
# Scope of the resource, usually in the format:
# `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}`
SCOPE="..."

# First ensure that Azure Resource Manager provider is registered for Event Grid
az provider register --namespace "Microsoft.EventGrid"
az provider show --namespace "Microsoft.EventGrid" --query "registrationState"
# Give the SP needed permissions so that it can create event subscriptions to Event Grid
az role assignment create --assignee "$CLIENT_ID" --role "EventGrid EventSubscription Contributor" --scopes "$SCOPE"
```
```powershell
# Set the client ID of the app you created
$clientId = "..."
# Authenticate with the Microsoft Graph
# You may need to add the -TenantId flag to the next command if needed
Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
./setup-eventgrid-sp.ps1 $clientId
```
Note: if your directory does not have a Service Principal for the application “Microsoft.EventGrid”, you may need to run the command Connect-MgGraph and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant’s admin to sign in and run this PowerShell command: New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7" (the UUID is a constant)
Run locally using a custom port, for example 9000, for handshakes
```bash
# Using port 9000 as an example
ngrok http --host-header=localhost 9000
```
Configure ngrok's HTTPS endpoint and the custom port in the input binding metadata
Run Dapr
```bash
# Using default ports for .NET core web api and Dapr as an example
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
```
Testing on Kubernetes
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren’t accepted. In order to enable traffic from the public internet to your app’s Dapr sidecar you need an ingress controller enabled with Dapr. There’s a good article on this topic: Kubernetes NGINX ingress controller with Dapr.
To get started, first create a dapr-annotations.yaml file for Dapr annotations:
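A minimal sketch of that file, assuming the ingress controller should run with a Dapr sidecar on port 80 (the app-id value is a placeholder):

```yaml
controller:
  podAnnotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nginx-ingress"
    dapr.io/app-port: "80"
```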
Then install the NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations:
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yaml -n default

# Get the public IP for the ingress controller
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'
```
The final step for enabling communication between Event Grid and Dapr is to define http and the custom port for your app's service and an ingress in Kubernetes. This example uses a .NET Core web api, Dapr default ports, and custom port 9000 for handshakes.
Deploy the binding and app (including ingress) to Kubernetes
```bash
# Deploy Dapr components
kubectl apply -f eventgrid.yaml
# Deploy your app and Nginx ingress
kubectl apply -f dotnetwebapi.yaml
```
Note: This manifest deploys everything to Kubernetes’ default namespace.
Troubleshooting possible issues with Nginx controller
After the initial deployment, the “Daprized” Nginx controller can malfunction. To check the logs and fix the issue (if it exists), follow these steps.
```bash
$ kubectl get pods -l app=nginx-ingress

NAME                                                   READY   STATUS    RESTARTS   AGE
nginx-nginx-ingress-controller-649df94867-fp6mg        2/2     Running   0          51m
nginx-nginx-ingress-default-backend-6d96c457f6-4nbj5   1/1     Running   0          55m

$ kubectl logs nginx-nginx-ingress-controller-649df94867-fp6mg nginx-ingress-controller

# If you see 503s logged from calls to webhook endpoint '/api/events', restart the pod
# .."OPTIONS /api/events HTTP/1.1" 503..

$ kubectl delete pod nginx-nginx-ingress-controller-649df94867-fp6mg

# Check the logs again - it should start returning 200
# .."OPTIONS /api/events HTTP/1.1" 200..
```
Detailed documentation on the Azure Event Hubs binding component
Component format
To set up an Azure Event Hubs binding, create a component of type bindings.azure.eventhubs. See this guide on how to create and apply a binding configuration.
See this for instructions on how to set up an Event Hub.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.eventhubs
  version: v1
  metadata:
  # Hub name ("topic")
  - name: eventHub
    value: "mytopic"
  - name: consumerGroup
    value: "myapp"
  # Either connectionString or eventHubNamespace is required
  # Use connectionString when *not* using Microsoft Entra ID
  - name: connectionString
    value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
  # Use eventHubNamespace when using Microsoft Entra ID
  - name: eventHubNamespace
    value: "namespace"
  - name: enableEntityManagement
    value: "false"
  - name: enableInOrderMessageDelivery
    value: "false"
  # The following four properties are needed only if enableEntityManagement is set to true
  - name: resourceGroupName
    value: "test-rg"
  - name: subscriptionID
    value: "value of Azure subscription ID"
  - name: partitionCount
    value: "1"
  - name: messageRetentionInDays
    value: "3"
  # Checkpoint store attributes
  - name: storageAccountName
    value: "myeventhubstorage"
  - name: storageAccountKey
    value: "112233445566778899"
  - name: storageContainerName
    value: "myeventhubstoragecontainer"
  # Alternative to passing storageAccountKey
  - name: storageConnectionString
    value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
  # Optional metadata
  - name: getAllMessageProperties
    value: "true"
  - name: direction
    value: "input, output"
```
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| eventHub | Y* | Input/Output | The name of the Event Hubs hub ("topic"). Required if using Microsoft Entra ID authentication or if the connection string doesn't contain an EntityPath value | mytopic |
| connectionString | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace. * Mutually exclusive with eventHubNamespace field. * Required when not using Microsoft Entra ID Authentication | "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}" or "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}" |
| eventHubNamespace | Y* | Input/Output | The Event Hub Namespace name. * Mutually exclusive with connectionString field. * Required when using Microsoft Entra ID Authentication | "namespace" |
| enableEntityManagement | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: false | "true", "false" |
| enableInOrderMessageDelivery | N | Input/Output | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes partitionKey is set when publishing or posting to ensure ordering across partitions. Default: false | "true", "false" |
| resourceGroupName | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | "test-rg" |
| subscriptionID | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | "azure subscription id" |
| partitionCount | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: "1" | "2" |
| messageRetentionInDays | N | Input/Output | Number of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: "1" | "90" |
| storageAccountName | Y | Input | Storage account name to use for the checkpoint store. | "myeventhubstorage" |
| storageAccountKey | Y* | Input | Storage account key for the checkpoint store account. * When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | "112233445566778899" |
| storageConnectionString | Y* | Input | Connection string for the checkpoint store, alternative to specifying storageAccountKey | "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>" |
| storageContainerName | Y | Input | Storage container name for the storage account name. | "myeventhubstoragecontainer" |
| getAllMessageProperties | N | Input | When set to true, retrieves all user/app/custom properties from the Event Hub message and forwards them in the returned event metadata. Default setting is "false". | "true", "false" |
| direction | N | Input/Output | The direction of the binding. | "input", "output", "input, output" |
Microsoft Entra ID authentication
The Azure Event Hubs binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Binding support
This component supports output binding with the following operations:
create: publishes a new message to Azure Event Hubs
Input Binding to Azure IoT Hub Events
Azure IoT Hub provides an endpoint that is compatible with Event Hubs, so Dapr apps can create input bindings to read Azure IoT Hub events using the Event Hubs bindings component.
The device-to-cloud events created by Azure IoT Hub devices will contain additional IoT Hub System Properties, and the Azure Event Hubs binding for Dapr will return the following as part of the response metadata:
Detailed documentation on the Azure OpenAI binding component
Component format
To set up an Azure OpenAI binding, create a component of type bindings.azure.openai. See this guide on how to create and apply a binding configuration.
See this for the documentation for Azure OpenAI Service.
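A minimal component sketch with placeholder values (the field names follow the table below):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.openai
  version: v1
  metadata:
  - name: endpoint
    value: "https://myopenai.openai.azure.com"
  - name: apiKey
    value: "1234567890abcdef"
```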
The above example uses apiKey as a plain string. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| endpoint | Y | Output | Azure OpenAI service endpoint URL. | "https://myopenai.openai.azure.com" |
| apiKey | Y* | Output | The access key of the Azure OpenAI service. Only required when not using Microsoft Entra ID authentication. | "1234567890abcdef" |
| azureTenantId | Y* | Input | The tenant ID of the Azure OpenAI resource. Only required when apiKey is not provided. | "tenantId" |
| azureClientId | Y* | Input | The client ID that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided. | "clientId" |
| azureClientSecret | Y* | Input | The client secret that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided. | "clientSecret" |
Microsoft Entra ID authentication
The Azure OpenAI binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
To call the completion API with a prompt, invoke the Azure OpenAI binding with a POST method and the following JSON body:
{"operation":"completion","data":{"deploymentId":"my-model","prompt":"A dog is","maxTokens":5}}
The data parameters are:
deploymentId - string that specifies the model deployment ID to use.
prompt - string that specifies the prompt to generate completions for.
maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for completion API.
temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for completion API.
topP - (optional) defines nucleus sampling, an alternative to sampling with temperature. Defaults to 1.0 for completion API.
n - (optional) defines the number of completions to generate. Defaults to 1 for completion API.
presencePenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for completion API.
frequencyPenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for completion API.
curl -d '{ "data": {"deploymentId: "my-model" , "prompt": "A dog is ", "maxTokens":15}, "operation": "completion" }'\
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response
The response body contains the following JSON:
[{"finish_reason":"length","index":0,"text":" a pig in a dress.\n\nSun, Oct 20, 2013"},{"finish_reason":"length","index":1,"text":" the only thing on earth that loves you\n\nmore than he loves himself.\"\n\n"}]
Chat Completion API
To perform a chat-completion operation, invoke the Azure OpenAI binding with a POST method and the following JSON body:
{"operation":"chat-completion","data":{"deploymentId":"my-model","messages":[{"role":"system","message":"You are a bot that gives really short replies"},{"role":"user","message":"Tell me a joke"}],"n":2,"maxTokens":30,"temperature":1.2}}
The data parameters are:
deploymentId - string that specifies the model deployment ID to use.
messages - array of messages that will be used to generate chat completions.
Each message is of the form:
role - string that specifies the role of the message. Can be either user, system or assistant.
message - string that specifies the conversation message for the role.
maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for the chat completion API.
temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for the chat completion API.
topP - (optional) defines nucleus sampling, an alternative to sampling with temperature. Defaults to 1.0 for the chat completion API.
n - (optional) defines the number of completions to generate. Defaults to 1 for the chat completion API.
presencePenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for the chat completion API.
frequencyPenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for the chat completion API.
Example
```bash
curl -d '{
  "data": {
    "deploymentId": "my-model",
    "messages": [
      {
        "role": "system",
        "message": "You are a bot that gives really short replies"
      },
      {
        "role": "user",
        "message": "Tell me a joke"
      }
    ],
    "n": 2,
    "maxTokens": 30,
    "temperature": 1.2
  },
  "operation": "chat-completion"
}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
Response
The response body contains the following JSON:
[{"finish_reason":"stop","index":0,"message":{"content":"Why was the math book sad? Because it had too many problems.","role":"assistant"}},{"finish_reason":"stop","index":1,"message":{"content":"Why did the tomato turn red? Because it saw the salad dressing!","role":"assistant"}}]
Get Embedding API
The get-embedding operation returns a vector representation of a given input that can be easily consumed by machine learning models and other algorithms.
To perform a get-embedding operation, invoke the Azure OpenAI binding with a POST method and the following JSON body:
{"operation":"get-embedding","data":{"deploymentId":"my-model","message":"The capital of France is Paris."}}
The data parameters are:
deploymentId - string that specifies the model deployment ID to use.
message - string that specifies the text to embed.
Example
```bash
curl -d '{
  "data": {
    "deploymentId": "embeddings",
    "message": "The capital of France is Paris."
  },
  "operation": "get-embedding"
}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
Detailed documentation on the Azure Service Bus Queues binding component
Component format
To set up the Azure Service Bus Queues binding, create a component of type bindings.azure.servicebusqueues. See this guide on how to create and apply a binding configuration.
It is recommended to use a secret store for secrets such as the connection string, as described here.
Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| connectionString | Y | Input/Output | The Service Bus connection string. Required unless using Microsoft Entra ID authentication. | "Endpoint=sb://************" |
| queueName | Y | Input/Output | The Service Bus queue name. Queue names are case-insensitive and will always be forced to lowercase. | "queuename" |
| timeoutInSec | N | Input/Output | Timeout for all invocations to the Azure Service Bus endpoint, in seconds. Note that this option impacts network calls and is unrelated to the TTL applied to messages. Default: "60" | "60" |
| namespaceName | N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | "namespace.servicebus.windows.net" |
| disableEntityManagement | N | Input/Output | When set to true, queues and subscriptions do not get created automatically. Default: "false" | "true", "false" |
| lockDurationInSec | N | Input/Output | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | "30" |
| autoDeleteOnIdleInSec | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: "0" (disabled) | "3600" |
| defaultMessageTimeToLiveInSec | N | Input/Output | Default message time to live, in seconds. Used during subscription creation only. | "10" |
| maxDeliveryCount | N | Input/Output | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | "10" |
| minConnectionRecoveryInSec | N | Input/Output | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: "2" | "5" |
| maxConnectionRecoveryInSec | N | Input/Output | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: "300" (5 minutes) | "600" |
| handlerTimeoutInSec | N | Input | Timeout for invoking the app's handler. Default: "0" (no timeout) | "30" |
| lockRenewalInSec | N | Input | Defines the frequency at which buffered message locks will be renewed. Default: "20". | "20" |
| maxActiveMessages | N | Input | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: "1" | "2000" |
| maxConcurrentHandlers | N | Input | Defines the maximum number of concurrent message handlers; set to 0 for unlimited. Default: "1" | "10" |
| maxRetriableErrorsPerSec | N | Input | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: "10" | "10" |
| publishMaxRetries | N | Output | The max number of retries for when Azure Service Bus responds with "too busy" in order to throttle messages. Default: "5" | "5" |
| publishInitialRetryIntervalInMs | N | Output | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: "500" | "500" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |
Microsoft Entra ID authentication
The Azure Service Bus Queues binding component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
Example Configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: azureTenantId
    value: "***"
  - name: azureClientId
    value: "***"
  - name: azureClientSecret
    value: "***"
  - name: namespaceName
    # Required when using Azure Authentication.
    # Must be a fully-qualified domain name
    value: "servicebusnamespace.servicebus.windows.net"
  - name: queueName
    value: queue1
  - name: ttlInSeconds
    value: 60
```
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create: publishes a message to the specified queue
Message metadata
Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message through an invoke binding call with the create operation.
Sending a message with metadata
To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here.
- metadata.MessageId
- metadata.CorrelationId
- metadata.SessionId
- metadata.Label
- metadata.ReplyTo
- metadata.PartitionKey
- metadata.To
- metadata.ContentType
- metadata.ScheduledEnqueueTimeUtc
- metadata.ReplyToSessionId
Note
The metadata.MessageId property does not set the id property of the cloud event returned by Dapr and should be treated in isolation.
The metadata.ScheduledEnqueueTimeUtc property supports the RFC1123 and RFC3339 timestamp formats.
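For example, a sketch that sets MessageId and CorrelationId as query parameters (the binding name, port, and values are placeholders):

```bash
curl -X POST "http://localhost:<dapr-port>/v1.0/bindings/<binding-name>?metadata.MessageId=order-1&metadata.CorrelationId=batch-42" \
  -H "Content-Type: application/json" \
  -d '{"data": {"orderId": 1}, "operation": "create"}'
```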
Receiving a message with metadata
When Dapr calls your application, it attaches Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata.
In addition to the settable metadata listed above, you can also access the following read-only message metadata.
In addition, all entries of ApplicationProperties from the original Azure Service Bus message are appended as metadata.<application property's name>.
Note
All times are populated by the server and are not adjusted for clock skews.
Specifying a TTL per message
Time to live can be defined on a per-queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at the queue level.
To set time to live at message level use the metadata section in the request body during the binding invocation: the field name is ttlInSeconds.
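For example, assuming a binding named myServiceBusQueue and the default Dapr HTTP port:

```bash
curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        },
        "operation": "create"
      }'
```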
Detailed documentation on the Azure SignalR binding component

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| hub | N | Output | Defines the hub in which the message will be sent. The hub can be dynamically defined as a metadata value when publishing to an output binding (key is "hub") | "myhub" |
| endpoint | N | Output | Endpoint of Azure SignalR; required if not included in the connectionString or if using Microsoft Entra ID | "https://<your-azure-signalr>.service.signalr.net" |
The Azure SignalR binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.
You have two options to authenticate this component with Microsoft Entra ID:
Pass individual metadata keys:
endpoint for the endpoint
If needed: azureClientId, azureTenantId and azureClientSecret
Pass a connection string with AuthType=aad specified:
Microsoft Entra ID application: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;ClientId=<clientid>;ClientSecret=<clientsecret>;TenantId=<tenantid>;Version=1.0;
Note that you cannot use a connection string if your application’s ClientSecret contains a ; character.
Binding support
This component supports output binding with the following operations:
create
Additional information
By default the Azure SignalR output binding will broadcast messages to all connected users. To narrow the audience there are two options, both configurable in the Metadata property of the message:
group: Sends the message to a specific Azure SignalR group
user: Sends the message to a specific Azure SignalR user
Applications publishing to an Azure SignalR output binding should send a message with the following contract:
{"data":{"Target":"<enter message name>","Arguments":[{"sender":"dapr","text":"Message from dapr output binding"}]},"metadata":{"group":"chat123"},"operation":"create"}
For more information on integrating Azure SignalR into a solution, check the documentation.
Detailed documentation on the Azure Storage Queues binding component
Component format
To set up the Azure Storage Queues binding, create a component of type bindings.azure.storagequeues. See this guide on how to create and apply a binding configuration.
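A minimal component sketch with placeholder values (the field names follow the table below):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.storagequeues
  version: v1
  metadata:
  - name: accountName
    value: "account1"
  - name: accountKey
    value: "***********"
  - name: queueName
    value: "myqueue"
```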
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field
Required
Binding support
Details
Example
accountName
Y
Input/Output
The name of the Azure Storage account
"account1"
accountKey
Y*
Input/Output
The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication.
"access-key"
queueName
Y
Input/Output
The name of the Azure Storage queue
"myqueue"
pollingInterval
N
Output
Set the interval to poll Azure Storage Queues for new messages, as a Go duration value. Default: "10s"
"30s"
ttlInSeconds
N
Output
Parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See also
"60"
decodeBase64
N
Input
Configuration to decode base64 content received from the Storage Queue into a string. Defaults to false
true, false
encodeBase64
N
Output
If enabled base64 encodes the data payload before uploading to Azure storage queues. Default false.
true, false
endpoint
N
Input/Output
Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port.
"http://127.0.0.1:10001" or "https://accountName.queue.example.com"
initialVisibilityDelay
N
Output
Sets a delay before a message becomes visible in the queue after being added. It can also be specified per message by setting the initialVisibilityDelay property in the invocation request’s metadata. Defaults to 0 seconds.
"30s"
visibilityTimeout
N
Input
Allows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds.
"100s"
direction
N
Input/Output
Direction of the binding.
"input", "output", "input, output"
Microsoft Entra ID authentication
The Azure Storage Queue binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
Specifying a TTL per message
Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.
To set time to live at message level use the metadata section in the request body during the binding invocation.
An initial visibility delay can be defined on queue level or at the message level. The value defined at message level overwrites any value set at a queue level.
To set an initial visibility delay value at the message level, use the metadata section in the request body during the binding invocation.
Detailed documentation on the Cloudflare Queues component
Component format
This output binding for Dapr allows interacting with Cloudflare Queues to publish new messages. It is currently not possible to consume messages from a Queue using Dapr.
To setup a Cloudflare Queues binding, create a component of type bindings.cloudflare.queues. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.cloudflare.queues
  version: v1
  # Increase the initTimeout if Dapr is managing the Worker for you
  initTimeout: "120s"
  metadata:
    # Name of the existing Cloudflare Queue (required)
    - name: queueName
      value: ""
    # Name of the Worker (required)
    - name: workerName
      value: ""
    # PEM-encoded private Ed25519 key (required)
    - name: key
      value: |
        -----BEGIN PRIVATE KEY-----
        MC4CAQ...
        -----END PRIVATE KEY-----
    # Cloudflare account ID (required to have Dapr manage the Worker)
    - name: cfAccountID
      value: ""
    # API token for Cloudflare (required to have Dapr manage the Worker)
    - name: cfAPIToken
      value: ""
    # URL of the Worker (required if the Worker has been pre-created outside of Dapr)
    - name: workerUrl
      value: ""
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field
Required
Binding support
Details
Example
queueName
Y
Output
Name of the existing Cloudflare Queue
"mydaprqueue"
key
Y
Output
Ed25519 private key, PEM-encoded
See example above
cfAccountID
Y/N
Output
Cloudflare account ID. Required to have Dapr manage the worker.
"456789abcdef8b5588f3d134f74ac"def
cfAPIToken
Y/N
Output
API token for Cloudflare. Required to have Dapr manage the Worker.
"secret-key"
workerUrl
Y/N
Output
URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr.
"https://mydaprqueue.mydomain.workers.dev"
When you configure Dapr to create your Worker for you, you may need to set a longer value for the initTimeout property of the component, to allow enough time for the Worker script to be deployed. For example: initTimeout: "120s"
Binding support
This component supports output binding with the following operations:
publish (alias: create): Publish a message to the Queue.
The data passed to the binding is used as-is for the body of the message published to the Queue.
This operation does not accept any metadata property.
Create a Cloudflare Queue
To use this component, you must have a Cloudflare Queue created in your Cloudflare account.
# Authenticate if needed with `npx wrangler login` first
npx wrangler queues create <NAME>
# For example: `npx wrangler queues create myqueue`
Configuring the Worker
Because Cloudflare Queues can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Queue.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on workerd.
Important
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Queues bindings, and do not use the same Worker script for different Cloudflare components in Dapr (for example, the Workers KV state store and the Queues binding).
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
workerName: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the “workers.dev” domain configured for your Cloudflare account is mydomain.workers.dev and you set workerName to mydaprqueue, the Worker that Dapr deploys will be available at https://mydaprqueue.mydomain.workers.dev.
cfAccountID: ID of your Cloudflare account. You can find this in your browser’s URL bar after logging into the Cloudflare dashboard, with the ID being the hex string right after dash.cloudflare.com. For example, if the URL is https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef, the value for cfAccountID is 456789abcdef8b5588f3d134f74acdef.
cfAPIToken: API token with permission to create and edit Workers. You can create it from the “API Tokens” page in the “My Profile” section in the Cloudflare dashboard:
Click on “Create token”.
Select the “Edit Cloudflare Workers” template.
Follow the on-screen instructions to generate a new API token.
When Dapr is configured to manage the Worker for you, the Dapr runtime checks at startup that the Worker exists and is up to date. If the Worker doesn’t exist, or if it’s using an outdated version, Dapr creates or upgrades it for you automatically.
If you’d rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
Create a new folder where you’ll place the source code of the Worker, for example: daprworker.
If you haven’t already, authenticate with Wrangler (the Cloudflare Workers CLI) using: npx wrangler login.
Inside the newly-created folder, create a new wrangler.toml file with the contents below, filling in the missing information as appropriate:
# Name of your Worker, for example "mydaprqueue"
name = ""

# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"

[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprqueue".
TOKEN_AUDIENCE = ""

# Set the next two values to the name of your Queue, for example "myqueue".
# Note that they will both be set to the same value.
[[queues.producers]]
queue = ""
binding = ""
Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the public part of the key when deploying a Worker!
Copy the (pre-compiled and minified) code of the Worker in the worker.js file. You can do that with this command:
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-1.15"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
Deploy the Worker using Wrangler:
npx wrangler publish
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
workerName: Name of the Worker script. This is the value you set in the name property in the wrangler.toml file.
workerUrl: URL of the deployed Worker. The npx wrangler command will show the full URL to you, for example https://mydaprqueue.mydomain.workers.dev.
Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Cloudflare Queue). These include industry-standard measures such as:
All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you’re using an older version of OpenSSL.
Note for Mac users: on macOS, the “openssl” binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn’t support Ed25519 keys. If you’re using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using brew install openssl@3 and then replace openssl in the commands below with $(brew --prefix)/opt/openssl@3/bin/openssl.
You can generate a new Ed25519 key pair with OpenSSL using:
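For example, the following standard OpenSSL commands write the private key to private.pem and the public key to public.pem (the file names match the description below):
# Generate the Ed25519 private key
openssl genpkey -algorithm ed25519 -out private.pem
# Extract the public part of the key
openssl pkey -in private.pem -pubout -out public.pem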
Regardless of how you generated your key pair, with the instructions above you’ll have two files:
private.pem contains the private part of the key; use the contents of this file for the key property of the component’s metadata.
public.pem contains the public part of the key, which you’ll need only if you’re deploying a Worker manually (as per the instructions in the previous section).
Warning
Protect the private part of your key and treat it as a secret value!
Detailed documentation on the commercetools GraphQL binding component
Component format
To setup commercetools GraphQL binding create a component of type bindings.commercetools. See this guide on how to create and apply a binding configuration.
The valid cron schedule to use. See this for more details
"@every 15m"
direction
N
Input
The direction of the binding
"input"
Schedule Format
The Dapr cron binding supports the following formats:
Character
Descriptor
Acceptable values
1
Second
0 to 59, or *
2
Minute
0 to 59, or *
3
Hour
0 to 23, or * (UTC)
4
Day of the month
1 to 31, or *
5
Month
1 to 12, or *
6
Day of the week
0 to 7 (where 0 and 7 represent Sunday), or *
For example:
30 * * * * * - every 30 seconds
0 */15 * * * * - every 15 minutes
0 30 3-6,20-23 * * * - every hour on the half hour in the range 3-6am, 8-11pm
CRON_TZ=America/New_York 0 30 04 * * * - every day at 4:30am New York time
You can learn more about cron and the supported formats here
For ease of use, the Dapr cron binding also supports a few shortcuts:
@every 15s where s is seconds, m minutes, and h hours
@daily or @hourly which runs at that period from the time the binding is initialized
Listen to the cron binding
After setting up the cron binding, all you need to do is listen on an endpoint that matches the name of your component. Assume the [NAME] is scheduled. The binding invokes your application with an HTTP POST request. The example below shows how a simple Node.js Express application can receive calls on the /scheduled endpoint and write a message to the console.
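A minimal sketch, assuming Express is installed and the app listens on port 3000:
const express = require('express')
const app = express()
const port = 3000

app.use(express.json())

// The cron binding calls the endpoint named after the component on each trigger
app.post('/scheduled', (req, res) => {
  console.log('scheduled endpoint called', req.body)
  res.status(200).send()
})

app.listen(port, () => console.log(`App listening on port ${port}`))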
Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). true is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to false
true, false
encodeBase64
N
Output
Configuration to encode base64 file content before returning the content. (In case of opening a file with binary content.) true is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to false
The response body contains an array of objects, where each object represents a file in the bucket with the following structure:
[{"name":"file1.txt","data":"content of file1","attrs":{"bucket":"mybucket","name":"file1.txt","size":1234,...}},{"name":"file2.txt","data":"content of file2","attrs":{"bucket":"mybucket","name":"file2.txt","size":5678,...}}]
Each object in the array contains:
name: The name of the file
data: The content of the file
attrs: Object attributes from GCP Storage including metadata like creation time, size, content type, etc.
Delete object
To perform a delete object operation, invoke the GCP bucket binding with a POST method and the following JSON body:
maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
prefix - (optional) it can be used to filter objects starting with prefix.
delimiter - (optional) it can be used to restrict the results to only the objects in the given “directory”. Without the delimiter, the entire tree under the prefix is returned
Response
The response body contains the list of found objects.
The list of objects will be returned as JSON array in the following form:
Detailed documentation on the GraphQL binding component
Component format
To setup GraphQL binding create a component of type bindings.graphql. See this guide on how to create and apply a binding configuration. To separate normal config settings (e.g. endpoint) from headers, “header:” is used as a prefix on the header names.
This component supports output binding with the following operations:
query
mutation
query
The query operation is used for query statements, which return the metadata along with data in the form of an array of row values.
Request
in:=&dapr.InvokeBindingRequest{Name:"example.bindings.graphql",Operation:"query",Metadata:map[string]string{"query":`query { users { name } }`},}
To use a query that requires query variables, add a key-value pair to the metadata map, wherein every key corresponding to a query variable is the variable name prefixed with variable:
in:=&dapr.InvokeBindingRequest{Name:"example.bindings.graphql",Operation:"query",Metadata:map[string]string{"query":`query HeroNameAndFriends($episode: string!) { hero(episode: $episode) { name } }`,"variable:episode":"JEDI",}
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.http
  version: v1
  metadata:
    - name: url
      value: "http://something.com"
    #- name: maxResponseBodySize
    #  value: "100Mi" # OPTIONAL maximum amount of data to read from a response
    #- name: MTLSRootCA
    #  value: "/Users/somepath/root.pem" # OPTIONAL path to root CA or PEM-encoded string
    #- name: MTLSClientCert
    #  value: "/Users/somepath/client.pem" # OPTIONAL path to client cert or PEM-encoded string
    #- name: MTLSClientKey
    #  value: "/Users/somepath/client.key" # OPTIONAL path to client key or PEM-encoded string
    #- name: MTLSRenegotiation
    #  value: "RenegotiateOnceAsClient" # OPTIONAL one of: RenegotiateNever, RenegotiateOnceAsClient, RenegotiateFreelyAsClient
    #- name: securityToken # OPTIONAL <token to include as a header on HTTP requests>
    #  secretKeyRef:
    #    name: mysecret
    #    key: "mytoken"
    #- name: securityTokenHeader
    #  value: "Authorization: Bearer" # OPTIONAL <header name for the security token>
    #- name: errorIfNot2XX
    #  value: "false" # OPTIONAL
Path to the file: the absolute path to the file can be provided as a value for the field.
PEM encoded string: the PEM-encoded string can also be provided as a value for the field.
Note
Metadata fields MTLSRootCA, MTLSClientCert and MTLSClientKey are used to configure (m)TLS authentication.
To use mTLS authentication, you must provide all three fields. See mTLS for more details. You can also provide only MTLSRootCA, to enable HTTPS connection with a certificate signed by a custom CA. See HTTPS section for more details.
Binding support
This component supports output binding with the following HTTP methods/verbs:
create : For backward compatibility and treated like a post
get : Read data/records
head : Identical to get except that the server does not return a response body
post : Typically used to create records or send commands
put : Update data/records
patch : Sometimes used to update a subset of fields of a record
delete : Delete a data/record
options : Requests for information about the communication options available (not commonly used)
trace : Used to invoke a remote, application-layer loopback of the request message (not commonly used)
Request
Operation metadata fields
All of the operations above support the following metadata fields
Field
Required
Details
Example
path
N
The path to append to the base URL. Used for accessing specific URIs.
"/1234", "/search?lastName=Jones"
Field with a capitalized first letter
N
Any fields that have a capital first letter are sent as request headers
"Content-Type", "Accept"
Retrieving data
To retrieve data from the HTTP endpoint, invoke the HTTP binding with a GET method and the following JSON body:
{"operation":"get"}
Optionally, a path can be specified to interact with resource URIs:
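{"operation":"get","metadata":{"path":"/things/1234"}}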
The response body contains the data returned by the HTTP endpoint. The data field contains the HTTP response body as a byte slice (Base64 encoded via curl). The metadata field contains:
To send data to the HTTP endpoint, invoke the HTTP binding with a POST, PUT, or PATCH method and the following JSON body:
Note
Any metadata field that starts with a capital letter is passed as a request header.
For example, the default content type is application/json; charset=utf-8. This can be overridden by setting the Content-Type metadata field.
{"operation":"post","data":"content (default is JSON)","metadata":{"path":"/things","Content-Type":"application/json; charset=utf-8"}}
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.http
  version: v1
  metadata:
    - name: url
      value: https://my-secured-website.com # Use HTTPS
Install the TLS certificate in the sidecar
When the sidecar is not running inside a container, the TLS certificate can be directly installed on the host operating system.
Below is an example when the sidecar is running as a container. The SSL certificate is located on the host computer at /tmp/ssl/cert.pem.
version: '3'
services:
  my-app:
    # ...
  dapr-sidecar:
    image: "daprio/daprd:1.8.0"
    command: ["./daprd", "-app-id", "myapp", "-app-port", "3000"]
    volumes:
      - "./components/:/components"
      - "/tmp/ssl/:/certificates" # Mount the certificates folder to the sidecar container at /certificates
    environment:
      - "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
    depends_on:
      - my-app
The sidecar can read the TLS certificate from a variety of sources. See How-to: Mount Pod volumes to the Dapr sidecar for more. In this example, we store the TLS certificate as a Kubernetes secret.
The YAML below is an example of the Kubernetes deployment that mounts the above secret to the sidecar and sets SSL_CERT_DIR to install the certificates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/volume-mounts: "cert-vol:/certificates" # Mount the certificates folder to the sidecar container at /certificates
        dapr.io/env: "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
    spec:
      volumes:
        - name: cert-vol
          secret:
            secretName: myapp-cert
...
HTTPS binding support can also be configured using the MTLSRootCA metadata option. This will add the specified certificate to the list of trusted certificates for the binding. There’s no specific preference for either method. While the MTLSRootCA option is easy to use and doesn’t require any changes to the sidecar, it accepts only one certificate. If you need to trust multiple certificates, you need to install them in the sidecar by following the steps above.
Using mTLS or enabling client TLS authentication along with HTTPS
You can configure the HTTP binding to use mTLS or client TLS authentication along with HTTPS by providing the MTLSRootCA, MTLSClientCert, and MTLSClientKey metadata fields in the binding component.
These fields can be passed as a file path or as a PEM-encoded string:
If the file path is provided, the file is read and the contents are used.
If the PEM-encoded string is provided, the string is used as is.
When these fields are configured, the Dapr sidecar uses the provided certificate to authenticate itself with the server during the TLS handshake process.
If the remote server is enforcing TLS renegotiation, you also need to set the metadata field MTLSRenegotiation. This field accepts one of the following options:
Detailed documentation on the Huawei OBS binding component
Component format
To setup Huawei Object Storage Service (OBS) (output) binding create a component of type bindings.huawei.obs. See this guide on how to create and apply a binding configuration.
The response JSON body contains the statusCode and the versionId fields. The versionId will have a value returned only if the bucket versioning is enabled and an empty string otherwise.
Upload file
To upload a binary file (for example, .jpg, .zip), invoke the Huawei OBS binding with a POST method and the following JSON body:
Note: by default, a random UUID is generated if you don’t specify the key. See the example below for metadata support to set the destination file name. This API can also be used to upload a regular file, such as a plain text file.
The response JSON body contains the statusCode and the versionId fields. The versionId will have a value returned only if the bucket versioning is enabled and an empty string otherwise.
Get object
To perform a get file operation, invoke the Huawei OBS binding with a POST method and the following JSON body:
maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
prefix - (optional) limits the response to keys that begin with the specified prefix.
marker - (optional) marker is where you want Huawei OBS to start listing from. Huawei OBS starts listing after this specified key. Marker can be any key in the bucket. The marker value may then be used in a subsequent call to request the next set of list items.
delimiter - (optional) A delimiter is a character used to group keys. Object keys that contain the delimiter (after the prefix) are rolled up into a common group rather than being returned individually.
Detailed documentation on the Kafka binding component
Component format
To setup Kafka binding create a component of type bindings.kafka. See this guide on how to create and apply a binding configuration. For details on using secretKeyRef, see the guide on how to reference secrets in components.
All component metadata field values can carry templated metadata values, which are resolved on Dapr sidecar startup.
For example, you can choose to use {namespace} as the consumerGroup, to enable using the same appId in different namespaces using the same topics as described in this article.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-binding
spec:
  type: bindings.kafka
  version: v1
  metadata:
    - name: topics # Optional. Used for input bindings.
      value: "topic1,topic2"
    - name: brokers # Required.
      value: "localhost:9092,localhost:9093"
    - name: consumerGroup # Optional. Used for input bindings.
      value: "group1"
    - name: publishTopic # Optional. Used for output bindings.
      value: "topic3"
    - name: authRequired # Required.
      value: "true"
    - name: saslUsername # Required if authRequired is `true`.
      value: "user"
    - name: saslPassword # Required if authRequired is `true`.
      secretKeyRef:
        name: kafka-secrets
        key: "saslPasswordSecret"
    - name: saslMechanism
      value: "SHA-512"
    - name: initialOffset # Optional. Used for input bindings.
      value: "newest"
    - name: maxMessageBytes # Optional.
      value: "1024"
    - name: heartbeatInterval # Optional.
      value: 5s
    - name: sessionTimeout # Optional.
      value: 15s
    - name: version # Optional.
      value: "2.0.0"
    - name: direction
      value: "input, output"
    - name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
      value: http://localhost:8081
    - name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
      value: XYAXXAZ
    - name: schemaRegistryAPISecret # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.
      value: "ABCDEFGMEADFF"
    - name: schemaCachingEnabled # Optional. When using Schema Registry Avro serialization/deserialization. Enables caching for schemas.
      value: true
    - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
      value: 5m
    - name: escapeHeaders # Optional.
      value: false
A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes.
"my-dapr-app"
consumerGroup
N
Input
A Kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic.
"group1"
consumeRetryEnabled
N
Input/Output
Enable consume retry by setting to "true". Defaults to "false" in the Kafka binding component.
"true", "false"
publishTopic
Y
Output
The topic to publish to.
"mytopic"
authRequired
N
Deprecated
Enable SASL authentication with the Kafka brokers.
"true", "false"
authType
Y
Input/Output
Configure or disable authentication. Supported values: none, password, mtls, or oidc
"password", "none"
saslUsername
N
Input/Output
The SASL username used for authentication. Only required if authRequired is set to "true".
"adminuser"
saslPassword
N
Input/Output
The SASL password used for authentication. Can be secretKeyRef to use a secret reference. Only required if authRequired is set to "true".
"", "KeFg23!"
saslMechanism
N
Input/Output
The SASL authentication mechanism you’d like to use. Only required if authType is set to "password". If not provided, defaults to PLAINTEXT, which could break some services, like Amazon Managed Streaming for Apache Kafka.
"SHA-512", "SHA-256", "PLAINTEXT"
initialOffset
N
Input
The initial offset to use if no offset was previously committed. Should be “newest” or “oldest”. Defaults to “newest”.
"oldest"
maxMessageBytes
N
Input/Output
The maximum size in bytes allowed for a single Kafka message. Defaults to 1024.
"2048"
oidcTokenEndpoint
N
Input/Output
Full URL to an OAuth2 identity provider access token endpoint. Required when authType is set to oidc
"https://identity.example.com/v1/token"
oidcClientID
N
Input/Output
The OAuth2 client ID that has been provisioned in the identity provider. Required when authType is set to oidc
"dapr-kafka"
oidcClientSecret
N
Input/Output
The OAuth2 client secret that has been provisioned in the identity provider: Required when authType is set to oidc
"KeFg23!"
oidcScopes
N
Input/Output
Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when authType is set to oidc. Defaults to "openid"
"openid,kafka-prod"
version
N
Input/Output
Kafka cluster version. Defaults to 2.0.0. Note that this must be set to 1.0.0 when using Azure Event Hubs with Kafka.
"1.0.0"
direction
N
Input/Output
The direction of the binding.
"input", "output", "input, output"
oidcExtensions
N
Input/Output
String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token
{"cluster":"kafka","poolid":"kafkapool"}
schemaRegistryURL
N
Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
http://localhost:8081
schemaRegistryAPIKey
N
When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key.
XYAXXAZ
schemaRegistryAPISecret
N
When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.
ABCDEFGMEADFF
schemaCachingEnabled
N
When using Schema Registry Avro serialization/deserialization. Enables caching for schemas. Default is true
true
schemaLatestVersionCacheTTL
N
When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min
5m
clientConnectionTopicMetadataRefreshInterval
N
Input/Output
The interval for the client connection’s topic metadata to be refreshed with the broker as a Go duration. Defaults to 9m.
"4m"
clientConnectionKeepAliveInterval
N
Input/Output
The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely.
"4m"
consumerFetchDefault
N
Input/Output
The default number of message bytes to fetch from the broker in each request. Default is "1048576" bytes.
"2097152"
heartbeatInterval
N
Input
The interval between heartbeats to the consumer coordinator. The value should be set to at most 1/3 of the sessionTimeout value. Defaults to "3s".
"5s"
sessionTimeout
N
Input
The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s".
"20s"
escapeHeaders
N
Input
Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is false.
true
Note
The metadata version must be set to 1.0.0 when using Azure Event Hubs with Kafka.
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
Detailed documentation on the Kitex binding component
Overview
The binding for Kitex mainly utilizes the generic-call feature in Kitex. Learn more from the official documentation around Kitex generic-call.
Currently, Kitex only supports Thrift generic calls. The implementation integrated into components-contrib adopts binary generic calls.
The InvokeRequest.Metadata for bindings.kitex requires the client to fill in four required items when making a call:
hostPorts
destService
methodName
version
Field
Required
Binding support
Details
Example
hostPorts
Y
Output
IP address and port information of the Kitex server (Thrift)
"127.0.0.1:8888"
destService
Y
Output
Service name of the Kitex server (Thrift)
"echo"
methodName
Y
Output
Method name under a specific service name of the Kitex server (Thrift)
"echo"
version
Y
Output
Kitex version
"0.5.0"
Binding support
This component supports output binding with the following operations:
get
Example
When using Kitex binding:
The client needs to pass in the correct Thrift-encoded binary
The server needs to be a Thrift Server.
The kitex_output_test can be used as a reference.
For example, the variable reqData needs to be encoded by the Thrift protocol before sending, and the returned data needs to be decoded by the Thrift protocol.
The period of time to refresh the event list from the Kubernetes API server. Defaults to "10"
"15"
direction
N
Input
The direction of the binding
"input"
kubeconfigPath
N
Input
The path to the kubeconfig file. If not specified, the binding uses the default in-cluster config value
"/path/to/kubeconfig"
Binding support
This component supports input binding interface.
Output format
Output received from the binding is of format bindings.ReadResponse with the Data field populated with the following structure:
{"event":"","oldVal":{"metadata":{"name":"hello-node.162c2661c524d095","namespace":"kube-events","selfLink":"/api/v1/namespaces/kube-events/events/hello-node.162c2661c524d095",...},"involvedObject":{"kind":"Deployment","namespace":"kube-events",...},"reason":"ScalingReplicaSet","message":"Scaled up replica set hello-node-7bf657c596 to 1",...},"newVal":{"metadata":{"creationTimestamp":"null"},"involvedObject":{},"source":{},"firstTimestamp":"null","lastTimestamp":"null","eventTime":"null",...}}
Three different event types are available:
Add : Only the newVal field is populated, oldVal field is an empty v1.Event, event is add
Delete : Only the oldVal field is populated, newVal field is an empty v1.Event, event is delete
Update : Both the oldVal and newVal fields are populated, event is update
Required permissions
For consuming events from Kubernetes, permissions need to be assigned to a User/Group/ServiceAccount using the RBAC Auth mechanism of Kubernetes.
Role
One of the rules needs to be of the form below to give permissions to get, watch and list events. API Groups can be as restrictive as needed.
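For example, a minimal Role of that form might look like this (the name is a placeholder):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <ROLENAME>
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "watch", "list"]
A RoleBinding then grants this Role to the ServiceAccount used by the application: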
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <NAME>
subjects:
  - kind: ServiceAccount
    name: default # or as need be, can be changed
roleRef:
  kind: Role
  name: <ROLENAME> # same as the one above
  apiGroup: ""
Detailed documentation on the Local Storage binding component
Component format
To set up the Local Storage binding, create a component of type bindings.localstorage. See this guide on how to create and apply a binding configuration.
The response body contains the value stored in the file.
List files
To perform a list files operation, invoke the Local Storage binding with a POST method and the following JSON body:
{"operation":"list"}
If you only want to list the files beneath a particular directory below the rootPath, specify the relative directory name as the fileName in the metadata.
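For example, to list only the contents of a my/cool/directory folder under the rootPath (path is illustrative):
{"operation":"list","metadata":{"fileName":"my/cool/directory"}}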
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field
Required
Binding support
Details
Example
url
Y
Input/Output
Address of the MQTT broker. Can be secretKeyRef to use a secret reference. Use the tcp:// URI scheme for non-TLS communication. Use the ssl:// URI scheme for TLS communication.
"tcp://[username][:password]@host.domain[:port]"
topic
Y
Input/Output
The topic to listen on or send events to.
"mytopic"
consumerID
Y
Input/Output
The client ID used to connect to the MQTT broker.
"myMqttClientApp"
retain
N
Input/Output
Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false".
"true", "false"
cleanSession
N
Input/Output
Sets the clean_session flag in the connection message to the MQTT broker if "true". Defaults to "false".
"true", "false"
caCert
Required for using TLS
Input/Output
Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates.
See example below
clientCert
Required for using TLS
Input/Output
TLS client certificate in PEM format. Must be used with clientKey.
See example below
clientKey
Required for using TLS
Input/Output
TLS client key in PEM format. Must be used with clientCert. Can be secretKeyRef to use a secret reference.
See example below
backOffMaxRetries
N
Input
The maximum number of retries to process the message before returning an error. Defaults to "0", which means that no retries will be attempted. "-1" can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries.
"3"
direction
N
Input/Output
The direction of the binding
"input", "output", "input, output"
Communication using TLS
To configure communication using TLS, ensure that the MQTT broker (e.g. emqx) is configured to support certificates and provide the caCert, clientCert, clientKey metadata in the component configuration. For example:
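A minimal sketch of such a component (the bindings.mqtt3 type name, broker URL, and certificate values are illustrative placeholders):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-binding
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
    - name: url
      value: "ssl://host.domain:8883" # TLS requires the ssl:// scheme
    - name: topic
      value: "topic1"
    - name: consumerID
      value: "myMqttClientApp"
    - name: caCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientKey
      value: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----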
Note that while the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
Consuming a shared topic
When consuming a shared topic, each consumer must have a unique identifier. If you run multiple instances of an application, configure the component’s consumerID metadata with a {uuid} tag, which gives each instance a randomly generated consumerID value on start up. For example:
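A minimal sketch (the bindings.mqtt3 type name and connection values are illustrative):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-binding
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
    - name: consumerID
      value: "{uuid}" # expanded to a random UUID per instance on startup
    - name: url
      value: "tcp://admin:public@localhost:1883"
    - name: topic
      value: "topic1"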
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.mysql
  version: v1
  metadata:
    - name: url # Required, define DB connection in DSN format
      value: "<CONNECTION_STRING>"
    - name: pemPath # Optional
      value: "<PEM PATH>"
    - name: maxIdleConns
      value: "<MAX_IDLE_CONNECTIONS>"
    - name: maxOpenConns
      value: "<MAX_OPEN_CONNECTIONS>"
    - name: connMaxLifetime
      value: "<CONNECTION_MAX_LIFE_TIME>"
    - name: connMaxIdleTime
      value: "<CONNECTION_MAX_IDLE_TIME>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Note that you cannot use a secret for just the username/password. If you use a secret, it must contain the complete connection string.
Spec metadata fields
Field
Required
Binding support
Details
Example
url
Y
Output
Represents the DB connection in Data Source Name (DSN) format. See here for SSL details
"user:password@tcp(localhost:3306)/dbname"
pemPath
Y
Output
Path to the PEM file. Used with SSL connection
"path/to/pem/file"
maxIdleConns
N
Output
The max idle connections. Integer greater than 0
"10"
maxOpenConns
N
Output
The max open connections. Integer greater than 0
"10"
connMaxLifetime
N
Output
The max connection lifetime. Duration string
"12s"
connMaxIdleTime
N
Output
The max connection idle time. Duration string
"12s"
SSL connection
If your server requires SSL, your connection string must end with &tls=custom, for example:
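"<user>:<password>@tcp(<server>:3306)/<database>?allowNativePasswords=true&tls=custom"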
You must replace the <PEM PATH> with a full path to the PEM file. If you are using Azure Database for MySQL, see the Azure documentation on SSL database connections for information on how to download the required certificate. The connection to MySQL requires a minimum TLS version of 1.2.
Multiple statements
By default, the MySQL Go driver only supports one SQL statement per query/command.
To allow multiple statements in one query, you need to add multiStatements=true to the query string, for example:
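"<user>:<password>@tcp(<server>:3306)/<database>?multiStatements=true"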
While this allows batch queries, it also greatly increases the risk of SQL injections. Only the result of the first query is returned; all other results are silently discarded.
Binding support
This component supports output binding with the following operations:
exec
query
close
Parametrized queries
This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.
For example:
-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';

-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = ?;
exec
The exec operation can be used for DDL operations (like table creation), as well as INSERT, UPDATE, DELETE operations which return only metadata (e.g. number of affected rows).
The params property is a string containing a JSON-encoded array of parameters.
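Request (table name and values are illustrative):
{"operation":"exec","metadata":{"sql":"INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)","params":"[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"}}
Response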
{"metadata":{"operation":"exec","duration":"294µs","start-time":"2020-09-24T11:13:46.405097Z","end-time":"2020-09-24T11:13:46.414519Z","rows-affected":"1","sql":"INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)"}}
query
The query operation is used for SELECT statements, which return the metadata along with data in the form of an array of row values.
The params property is a string containing a JSON-encoded array of parameters.
Request
{"operation":"query","metadata":{"sql":"SELECT * FROM foo WHERE id < $1","params":"[3]"}}
Response
{"metadata":{"operation":"query","duration":"432µs","start-time":"2020-09-24T11:13:46.405097Z","end-time":"2020-09-24T11:13:46.420566Z","sql":"SELECT * FROM foo WHERE id < ?"},"data":[{column_name:value,column_name:value,...},{column_name:value,column_name:value,...},{column_name:value,column_name:value,...},]}
Here column_name is the name of the column returned by the query, and value is the value of that column. Note that values are returned as strings or numbers (language-specific data types).
close
The close operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Authenticate using a connection string
The following metadata options are required to authenticate using a PostgreSQL connection string.
Field
Required
Details
Example
connectionString
Y
The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string.
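"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"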
Authenticate using individual connection parameters
In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.
Field
Required
Details
Example
host
Y
The host name or IP address of the PostgreSQL server
"localhost"
hostaddr
N
The IP address of the PostgreSQL server (alternative to host)
"127.0.0.1"
port
Y
The port number of the PostgreSQL server
"5432"
database
Y
The name of the database to connect to
"my_db"
user
Y
The PostgreSQL user to connect as
"postgres"
password
Y
The password for the PostgreSQL user
"example"
sslRootCert
N
Path to the SSL root certificate file
"/path/to/ca.crt"
Note
When using individual connection parameters, these will override the ones present in the connectionString.
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field
Required
Details
Example
useAzureAD
Y
Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID.
"true"
connectionString
Y
The connection string for the PostgreSQL database. This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password.
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam database role.
Authentication is based on the AWS authentication configuration file, or the provided AccessKey/SecretKey.
The AWS authentication token is dynamically rotated before its expiration time.
Field
Required
Details
Example
useAWSIAM
Y
Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases.
"true"
connectionString
Y
The connection string for the PostgreSQL database. This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
awsRegion
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to.
"us-east-1"
awsAccessKey
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account
"AKIAIOSFODNN7EXAMPLE"
awsSecretKey
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key
"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionToken
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials.
"TOKEN"
Other metadata options
Field
Required
Binding support
Details
Example
timeout
N
Output
Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s
"30s", 30
maxConns
N
Output
Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs.
"4"
connectionMaxIdleTime
N
Output
Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose.
"5m"
queryExecMode
N
Output
Controls the default mode for executing queries. By default, Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol.
"simple_protocol"
URL format
The PostgreSQL binding uses the pgx connection pool internally, so the connectionString parameter can be any valid connection string, in either DSN or URL format:
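For example (credentials and host are illustrative):
# DSN format
user=dapr password=secret host=dapr.example.com port=5432 dbname=my_dapr sslmode=verify-ca
# URL format
postgres://dapr:secret@dapr.example.com:5432/my_dapr?sslmode=verify-ca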
Both methods also support connection pool configuration variables:
pool_min_conns: integer 0 or greater
pool_max_conns: integer greater than 0
pool_max_conn_lifetime: duration string
pool_max_conn_idle_time: duration string
pool_health_check_period: duration string
Binding support
This component supports output binding with the following operations:
exec
query
close
Parametrized queries
This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.
For example:
-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';

-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = $1;
exec
The exec operation can be used for DDL operations (like table creation), as well as INSERT, UPDATE, DELETE operations which return only metadata (e.g. number of affected rows).
The params property is a string containing a JSON-encoded array of parameters.
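Request (table name and values are illustrative):
{"operation":"exec","metadata":{"sql":"INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)","params":"[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"}}
Response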
{"metadata":{"operation":"exec","duration":"294µs","start-time":"2020-09-24T11:13:46.405097Z","end-time":"2020-09-24T11:13:46.414519Z","rows-affected":"1","sql":"INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)"}}
query
The query operation is used for SELECT statements, which return the metadata along with data in the form of an array of row values.
The params property is a string containing a JSON-encoded array of parameters.
Request
{"operation":"query","metadata":{"sql":"SELECT * FROM foo WHERE id < $1","params":"[3]"}}
Response
{"metadata":{"operation":"query","duration":"432µs","start-time":"2020-09-24T11:13:46.405097Z","end-time":"2020-09-24T11:13:46.420566Z","sql":"SELECT * FROM foo WHERE id < $1"},"data":"[
[0,\"test-0\",\"2020-09-24T04:13:46Z\"],
[1,\"test-1\",\"2020-09-24T04:13:46Z\"],
[2,\"test-2\",\"2020-09-24T04:13:46Z\"]
]"}
close
The close operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.
Detailed documentation on the Postmark binding component
Component format
To setup Postmark binding create a component of type bindings.postmark. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: postmark
spec:
  type: bindings.postmark
  metadata:
    - name: accountToken
      value: "YOUR_ACCOUNT_TOKEN" # required, this is your Postmark account token
    - name: serverToken
      value: "YOUR_SERVER_TOKEN" # required, this is your Postmark server token
    - name: emailFrom
      value: "testapp@dapr.io" # optional
    - name: emailTo
      value: "dave@dapr.io" # optional
    - name: subject
      value: "Hello!" # optional
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field
Required
Binding support
Details
Example
accountToken
Y
Output
The Postmark account token; this should be considered a secret value
"account token"
serverToken
Y
Output
The Postmark server token; this should be considered a secret value
"server token"
emailFrom
N
Output
If set, this specifies the ‘from’ email address of the email message
"me@example.com"
emailTo
N
Output
If set, this specifies the ’to’ email address of the email message
"me@example.com"
emailCc
N
Output
If set, this specifies the ‘cc’ email address of the email message
"me@example.com"
emailBcc
N
Output
If set, this specifies the ‘bcc’ email address of the email message
"me@example.com"
subject
N
Output
If set, this specifies the subject of the email message
"subject of the email"
You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom, emailTo, subject, etc.)
Combined, the optional metadata properties in the component configuration and the request payload should at least contain the emailFrom, emailTo, and subject fields, as these are required to successfully send an email.
Binding support
This component supports output binding with the following operations:
create
Example request payload
{"operation":"create","metadata":{"emailTo":"changeme@example.net","subject":"An email from Dapr Postmark binding"},"data":"<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"}
Determines whether the topic will be an exclusive topic or not. Defaults to "false"
"true", "false"
maxPriority
N
Input/Output
Parameter to set the priority queue. If this parameter is omitted, the queue is created as a general queue instead of a priority queue. Value between 1 and 255. See also
"1", "10"
contentType
N
Input/Output
The content type of the message. Defaults to “text/plain”.
"text/plain", "application/cloudevent+json" and so on
reconnectWaitInSeconds
N
Input/Output
Represents the duration in seconds that the client should wait before attempting to reconnect to the server after a disconnection occurs. Defaults to "5".
caCert
Required for using TLS
Input/Output
The CA certificate to use for TLS connection. Defaults to null.
"-----BEGIN CERTIFICATE-----\nMI..."
clientCert
N
Input/Output
The client certificate to use for TLS connection. Defaults to null.
"-----BEGIN CERTIFICATE-----\nMI..."
clientKey
N
Input/Output
The client key to use for TLS connection. Defaults to null.
"-----BEGIN PRIVATE KEY-----\nMI..."
direction
N
Input/Output
The direction of the binding.
"input", "output", "input, output"
Binding support
This component supports both input and output binding interfaces.
This component supports output binding with the following operations:
create
Specifying a TTL per message
Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.
To set time to live at message level use the metadata section in the request body during the binding invocation.
Priority can be defined at the message level. If the maxPriority parameter is set, high-priority messages take precedence over low-priority messages.
To set priority at message level use the metadata section in the request body during the binding invocation.
If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to "false"
"true", "false"
clientCert
N
Output
The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here
"----BEGIN CERTIFICATE-----\nMIIC..."
clientKey
N
Output
The content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here
"----BEGIN PRIVATE KEY-----\nMIIE..."
failover
N
Output
Property to enable failover configuration. Requires sentinelMasterName to be set. Defaults to "false"
"true", "false"
redeliverInterval
N
Output
The interval between checking for pending messages for redelivery. Defaults to "60s". "0" disables redelivery.
"30s"
processingTimeout
N
Output
The amount of time a message must be pending before attempting to redeliver it. Defaults to "15s". "0" disables redelivery.
"30s"
redisType
N
Output
The type of Redis. There are two valid values: "node" for single-node mode and "cluster" for Redis Cluster mode. Defaults to "node".
"cluster"
redisDB
N
Output
Database selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0".
"0"
redisMaxRetries
N
Output
Maximum number of times to retry commands before giving up. Default is to not retry failed commands.
"5"
redisMinRetryInterval
N
Output
Minimum backoff for redis commands between each retry. Default is "8ms"; "-1" disables backoff.
"8ms"
redisMaxRetryInterval
N
Output
Maximum backoff for redis commands between each retry. Default is "512ms";"-1" disables backoff.
"5s"
dialTimeout
N
Output
Dial timeout for establishing new connections. Defaults to "5s".
"5s"
readTimeout
N
Output
Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout.
"3s"
writeTimeout
N
Output
Timeout for socket writes. If reached, Redis commands will fail with a timeout instead of blocking. Default is readTimeout.
"3s"
poolSize
N
Output
Maximum number of socket connections. Default is 10 connections per CPU, as reported by runtime.NumCPU.
"20"
poolTimeout
N
Output
Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second.
"5s"
maxConnAge
N
Output
Connection age at which the client retires (closes) the connection. Default is to not close aged connections.
"30m"
minIdleConns
N
Output
Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0".
"2"
idleCheckFrequency
N
Output
Frequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper.
"-1"
idleTimeout
N
Output
Amount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check.
"10m"
Binding support
This component supports output binding with the following operations:
create
get
delete
create
You can store a record in Redis using the create operation. This sets a key to hold a value. If the key already exists, the value is overwritten.
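Request (key and value are illustrative):
{"operation":"create","metadata":{"key":"key1"},"data":{"Hello":"World"}}
Response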
An HTTP 204 (No Content) and empty body is returned if successful.
get
You can get a record in Redis using the get operation. This gets a key that was previously set.
This takes an optional parameter delete, which is false by default. When it is set to true, this operation uses the GETDEL operation of Redis: it returns the value which was previously set and then deletes it.
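Request (key is illustrative):
{"operation":"get","metadata":{"key":"key1"},"data":{}}
The response body contains the value previously stored for the key.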
delete
You can delete a record in Redis using the delete operation. Returns success whether the key exists or not.
Request
{"operation":"delete","metadata":{"key":"key1"}}
Response
An HTTP 204 (No Content) and empty body is returned if successful.
Create a Redis instance
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later.
Note: Dapr does not support Redis >= 7. It is recommended to use Redis 6
The Dapr CLI will automatically create and set up a Redis Streams instance for you. The Redis instance will be installed via Docker when you run dapr init, and the component file will be created in the default components directory: $HOME/.dapr/components on Mac/Linux or %USERPROFILE%\.dapr\components on Windows.
You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.
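A typical sequence using the Bitnami chart (the release name redis is an assumption that matches the redis-master host used below):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis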
Run kubectl get pods to see the Redis containers now running in your cluster.
Add redis-master:6379 as the redisHost in your redis.yaml file. For example:
metadata:
  - name: redisHost
    value: redis-master:6379
Next, we’ll get our Redis password, which is slightly different depending on the OS we’re using:
Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which will create a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your Redis password in a text file called password.txt. Copy the password and delete the two files.
Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the output password.
Add this password as the redisPassword value in your redis.yaml file. For example:
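A sketch of the resulting metadata section (the password value is a placeholder for the one you just copied):
metadata:
- name: redisHost
  value: redis-master:6379
- name: redisPassword
  value: "<PASSWORD-FROM-PREVIOUS-STEP>"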
Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.
For the Host name:
Navigate to the resource’s Overview page.
Copy the Host name value.
For your access key:
Navigate to Settings > Access Keys.
Copy and save your key.
Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.
If you’re running a sample, add the host and key to the provided redis.yaml.
If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.
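As a sketch, the resulting metadata entries might look like this (host name and key are placeholders for the values from the previous steps):
metadata:
- name: redisHost
  value: "<HOST-NAME-FROM-PREVIOUS-STEP>:6379"
- name: redisPassword
  value: "<KEY-FROM-PREVIOUS-STEP>"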
Note: In a production-grade application, follow secret management instructions to securely manage your secrets.
Enable Entra ID support:
Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
Set useEntraID to "true" to implement Entra ID support for Azure Cache for Redis.
Set enableTLS to "true" to support TLS.
Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity have the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
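A minimal sketch of the relevant metadata entries for this configuration, assuming Entra ID is already enabled on the cache (the host name is a placeholder):
metadata:
- name: redisHost
  value: "<HOST-NAME>:6379"
- name: useEntraID
  value: "true"
- name: enableTLS
  value: "true"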
Detailed documentation on the RethinkDB binding component
Component format
The RethinkDB state store supports transactions, which means it can be used to support Dapr actors. Dapr persists only the actor's current state, which doesn't allow users to track how the actor's state may have changed over time.
To enable users to track changes to actor state, this binding leverages RethinkDB's built-in capability to monitor a table and emit change events containing both the old and new state. The binding creates a subscription on the Dapr state table and streams these changes using the Dapr input binding interface.
To setup RethinkDB statechange binding create a component of type bindings.rethinkdb.statechange. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: changes
spec:
  type: bindings.rethinkdb.statechange
  version: v1
  metadata:
  - name: address
    value: "<REPLACE-RETHINKDB-ADDRESS>" # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015
  - name: database
    value: "<REPLACE-RETHINKDB-DB-NAME>" # Required, e.g. dapr (alpha-numerics only)
  - name: direction
    value: "<DIRECTION-OF-RETHINKDB-BINDING>"
The response body contains the value stored in the file.
List files
To perform a list files operation, invoke the SFTP binding with a POST method and the following JSON body:
{"operation":"list"}
If you only want to list the files beneath a particular directory below the rootPath, specify the relative directory name as the fileName in the metadata.
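For example, to list only the files under a directory relative to the rootPath (the directory name is illustrative):
{"operation":"list","metadata":{"fileName":"my-directory"}}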
The example configuration shown above contains a username and password as plain-text strings. It is recommended to use a secret store for the secrets, as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example
--- | --- | --- | --- | ---
host | Y | Output | The host where your SMTP server runs | "smtphost"
port | Y | Output | The port your SMTP server listens on | "9999"
user | Y | Output | The user to authenticate against the SMTP server | "user"
password | Y | Output | The password of the user | "password"
skipTLSVerify | N | Output | If set to true, the SMTP server's TLS certificate will not be verified. Defaults to "false" | "true", "false"
emailFrom | N | Output | If set, this specifies the email address of the sender. See also | "me@example.com"
emailTo | N | Output | If set, this specifies the email address of the receiver. See also | "me@example.com"
emailCc | N | Output | If set, this specifies the email address to CC in. See also | "me@example.com"
emailBcc | N | Output | If set, this specifies the email address to BCC in. See also | "me@example.com"
subject | N | Output | If set, this specifies the subject of the email message. See also | "subject of mail"
priority | N | Output | If set, this specifies the priority (X-Priority) of the email message, from 1 (lowest) to 5 (highest) (default value: 3). See also | "1"
Binding support
This component supports output binding with the following operations:
create
Example request
You can specify any of the following optional metadata properties with each request:
emailFrom
emailTo
emailCC
emailBCC
subject
priority
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom, emailTo and subject fields.
The emailTo, emailCC and emailBCC fields can contain multiple email addresses separated by a semicolon.
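For example, a create request combining several of these properties might look like this sketch (addresses are illustrative; note the semicolon-separated recipients):
{"operation":"create","metadata":{"emailTo":"dave@example.com;jill@example.com","emailCC":"me@example.com","subject":"Hello from the Dapr SMTP binding"},"data":"Testing the Dapr SMTP binding"}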
Detailed documentation on the Twilio SendGrid binding component
Component format
To setup Twilio SendGrid binding create a component of type bindings.twilio.sendgrid. See this guide on how to create and apply a binding configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: sendgrid
spec:
  type: bindings.twilio.sendgrid
  version: v1
  metadata:
  - name: emailFrom
    value: "testapp@dapr.io" # optional
  - name: emailFromName
    value: "test app" # optional
  - name: emailTo
    value: "dave@dapr.io" # optional
  - name: emailToName
    value: "dave" # optional
  - name: subject
    value: "Hello!" # optional
  - name: emailCc
    value: "jill@dapr.io" # optional
  - name: emailBcc
    value: "bob@dapr.io" # optional
  - name: dynamicTemplateId
    value: "d-123456789" # optional
  - name: dynamicTemplateData
    value: '{"customer":{"name":"John Smith"}}' # optional
  - name: apiKey
    value: "YOUR_API_KEY" # required, this is your SendGrid key
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field | Required | Binding support | Details | Example
--- | --- | --- | --- | ---
apiKey | Y | Output | SendGrid API key; this should be considered a secret value | "apikey"
emailFrom | N | Output | If set, this specifies the 'from' email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com"
emailFromName | N | Output | If set, this specifies the 'from' name of the email message. Optional field, see below | "me"
emailTo | N | Output | If set, this specifies the 'to' email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com"
emailToName | N | Output | If set, this specifies the 'to' name of the email message. Optional field, see below | "me"
emailCc | N | Output | If set, this specifies the 'cc' email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com"
emailBcc | N | Output | If set, this specifies the 'bcc' email address of the email message. Only a single email address is allowed. Optional field, see below | "me@example.com"
subject | N | Output | If set, this specifies the subject of the email message. Optional field, see below | "subject of the email"
Binding support
This component supports output binding with the following operations:
create
Example request payload
You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom, emailTo, subject, etc.)
{"operation":"create","metadata":{"emailTo":"changeme@example.net","subject":"An email from Dapr SendGrid binding"},"data":"<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"}
Dynamic templates
If a dynamic template is used, a dynamicTemplateId needs to be provided and then the dynamicTemplateData is used:
{"operation":"create","metadata":{"emailTo":"changeme@example.net","subject":"An template email from Dapr SendGrid binding","dynamicTemplateId":"d-123456789","dynamicTemplateData":"{\"customer\":{\"name\":\"John Smith\"}}"}}
Detailed documentation on the WebAssembly binding component
Overview
With WebAssembly, you can safely run code compiled in other languages. Runtimes
execute WebAssembly Modules (Wasm), which are most often binaries with a .wasm
extension.
The Wasm Binding allows you to invoke a program compiled to Wasm by passing
command-line args or environment variables to it, similar to how you would with
a normal subprocess. For example, you can satisfy an invocation using Python,
even though Dapr is written in Go and is running on a platform that doesn’t have
Python installed!
The Wasm binary must be a program compiled with the WebAssembly System
Interface (WASI). The binary can be a program you’ve written such as in Go, or
an interpreter you use to run inlined scripts, such as Python.
Minimally, you must specify a Wasm binary compiled with the canonical WASI
version wasi_snapshot_preview1 (a.k.a. wasip1), often abbreviated to wasi.
Note: If compiling in Go 1.21+, this is GOOS=wasip1 GOARCH=wasm. In TinyGo, Rust, and Zig, this is the target wasm32-wasi.
You can also re-use an existing binary. For example, Wasm Language Runtimes
distributes interpreters (including PHP, Python, and Ruby) already compiled to
WASI.
Wasm binaries are loaded from a URL. For example, the URL file://rewrite.wasm
loads rewrite.wasm from the current directory of the process. On Kubernetes,
see How to: Mount Pod volumes to the Dapr sidecar
to configure a filesystem mount that can contain Wasm binaries.
It is also possible to fetch the Wasm binary from a remote URL. In this case,
the URL must point exactly to one Wasm binary. For example:
http://example.com/rewrite.wasm, or
https://example.com/rewrite.wasm.
Dapr uses wazero to run these binaries, because it has no
dependencies. This allows use of WebAssembly with no installation process
except Dapr itself.
The Wasm output binding supports making HTTP client calls using the wasi-http specification.
You can find example code for making HTTP calls in a variety of languages here:
If you just want to make an HTTP call, it is simpler to use the service invocation API. However, if you need to add your own logic - for example, filtering or calling to multiple API endpoints - consider using Wasm.
Component format
To configure a Wasm binding, create a component of type
bindings.wasm. See this guide
on how to create and apply a binding configuration.
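A minimal component sketch, assuming a module named hello.wasm in the Dapr process's working directory (the component name is illustrative):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: bindings.wasm
  version: v1
  metadata:
  - name: url
    value: "file://hello.wasm"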
The URL of the resource including the Wasm binary to instantiate. The supported schemes include file://, http://, and https://. The path of a file:// URL is relative to the Dapr process unless it begins with /.
true
file://hello.wasm, https://example.com/hello.wasm
Binding support
This component supports output binding with the following operations:
execute
Example request
The data field, if present, will be the program's STDIN. You can optionally pass metadata properties with each request:
args - any CLI arguments, comma-separated. This excludes the program name.
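For example, an execute request passing two illustrative arguments and STDIN data might look like:
{"operation":"execute","metadata":{"args":"--greeting,hello"},"data":"input for stdin"}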
deployments - a list of deployed resources, e.g. processes
metadata - deployment metadata, each deployment has only one metadata
process- metadata of a deployed process
bpmnProcessId - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific
process definition
version - the assigned process version
processDefinitionKey - the assigned key, which acts as a unique identifier for this process
resourceName - the resource name from which this process was parsed
decision - metadata of a deployed decision
dmnDecisionId - the dmn decision ID, as parsed during deployment; together with the version forms a unique identifier for a specific
decision
dmnDecisionName - the dmn name of the decision, as parsed during deployment
version - the assigned decision version
decisionKey - the assigned decision key, which acts as a unique identifier for this decision
dmnDecisionRequirementsId - the dmn ID of the decision requirements graph that this decision is part of, as parsed during deployment
decisionRequirementsKey - the assigned key of the decision requirements graph that this decision is part of
decisionRequirements - metadata of a deployed decision requirements
dmnDecisionRequirementsId - the dmn decision requirements ID, as parsed during deployment; together with the version forms a unique
identifier for a specific set of decision requirements
dmnDecisionRequirementsName - the dmn name of the decision requirements, as parsed during deployment
version - the assigned decision requirements version
decisionRequirementsKey - the assigned decision requirements key, which acts as a unique identifier for this decision requirements
resourceName - the resource name from which this decision requirements was parsed
create-instance
The create-instance operation creates and starts an instance of the specified process. The process definition to use to create the instance can be
specified either using its unique key (as returned by the deploy-process operation), or using the BPMN process ID and a version.
Note that only processes with none start events can be started through this command.
Typically, process creation and execution are decoupled. This means that the command creates a new process instance and immediately responds with
the process instance ID. The execution of the process occurs after the response is sent. However, there are use cases that need to collect the results
of a process when its execution is complete. By defining the withResult property, the command allows you to "synchronously" execute processes and receive
the results via a set of variables. The response is sent when the process execution is complete.
bpmnProcessId - the BPMN process ID of the process definition to instantiate
processDefinitionKey - the unique key identifying the process definition to instantiate
version - (optional, default: latest version) the version of the process to instantiate
variables - (optional) JSON document that will instantiate the variables for the root variable scope of the
process instance; it must be a JSON object, as variables will be mapped in a
key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and
"b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a
valid argument, as the root of the JSON document is an array and not an object
withResult - (optional, default: false) if set to true, the process will be instantiated and executed synchronously
requestTimeout - (optional, only used if withResult=true) the request will be closed if the process is not completed before the
requestTimeout. If requestTimeout = 0, the generic requestTimeout configured in the gateway is used.
fetchVariables - (optional, only used if withResult=true) list of names of variables to be included in variables property of the response.
If empty, all visible variables in the root scope will be returned.
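As a sketch, a create-instance request using the BPMN process ID with synchronous execution might look like this (IDs and variable names are illustrative):
{"data":{"bpmnProcessId":"products-process","variables":{"productId":"some-product-id"},"withResult":true,"fetchVariables":["productId"]},"operation":"create-instance"}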
Response
The binding returns a JSON with the following response:
processDefinitionKey - the key of the process definition which was used to create the process instance
bpmnProcessId - the BPMN process ID of the process definition which was used to create the process instance
version - the version of the process definition which was used to create the process instance
processInstanceKey - the unique identifier of the created process instance
variables - (optional, only if withResult=true was used in the request) JSON document consisting of the visible variables in the root scope;
returned as a serialized JSON document
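An illustrative response sketch (keys and values are made up for illustration; variables is a serialized JSON string):
{"processDefinitionKey":2251799813685895,"bpmnProcessId":"products-process","version":3,"processInstanceKey":2251799813687851,"variables":"{\"productId\":\"some-product-id\"}"}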
cancel-instance
The cancel-instance operation cancels a running process instance.
To perform a cancel-instance operation, invoke the Zeebe command binding with a POST method, and the following JSON body:
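A sketch of such a body, using the processInstanceKey obtained during instance creation (the key value is illustrative):
{"data":{"processInstanceKey":2251799813687851},"operation":"cancel-instance"}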
elementInstanceKey - the unique identifier of a particular element; can be the process instance key (as
obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
local - (optional, default: false) if true, the variables will be merged strictly into the local scope (as indicated by
elementInstanceKey); this means the variables are not propagated to upper scopes.
For example, let's say we have two scopes, '1' and '2', each having effective variables as:
1 => { "foo": 2 }, and 2 => { "bar": 1 }. If we send an update request with
elementInstanceKey = 2, variables { "foo": 5 }, and local is true, then scope 1 will
be unchanged, and scope 2 will now be { "bar": 1, "foo": 5 }. If local was false, however,
then scope 1 would be { "foo": 5 }, and scope 2 would be { "bar": 1 }
variables - a JSON serialized document describing variables as key value pairs; the root of the document must be an object
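Based on the fields above, a set-variables request might look like this sketch (keys and variables are illustrative):
{"data":{"elementInstanceKey":2251799813687851,"local":false,"variables":{"productId":"some-product-id"}},"operation":"set-variables"}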
Response
The binding returns a JSON with the following response:
{"key":2251799813687896}
The response values are:
key - the unique key of the set variables command
resolve-incident
The resolve-incident operation resolves an incident.
To perform a resolve-incident operation, invoke the Zeebe command binding with a POST method, and the following JSON body:
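A sketch of such a body; the incidentKey field is an assumption here, since the parameter list for this operation falls outside this excerpt (the key value is illustrative):
{"data":{"incidentKey":2251799813686123},"operation":"resolve-incident"}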
correlationKey - (optional) the correlation key of the message
timeToLive - (optional) how long the message should be buffered on the broker
messageId - (optional) the unique ID of the message; can be omitted. Only useful to ensure that only one message with the given ID will ever
be published (during its lifetime)
variables - (optional) the message variables as a JSON document; to be valid, the root of the document must be an object, e.g. { "a": "foo" }.
[ "foo" ] would not be valid
Response
The binding returns a JSON with the following response:
{"key":2251799813688225}
The response values are:
key - the unique ID of the message that was published
activate-jobs
The activate-jobs operation iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to
the client as they are activated.
To perform an activate-jobs operation, invoke the Zeebe command binding with a POST method, and the following JSON body:
jobType - the job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="fetch-products" />)
maxJobsToActivate - the maximum jobs to activate by this request
timeout - (optional, default: 5 minutes) a job returned after this call will not be activated by another call until the timeout has been reached
workerName - (optional, default: default) the name of the worker activating the jobs, mostly used for logging purposes
fetchVariables - (optional) a list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the
scope of the job will be returned
requestTimeout - (optional) the request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0,
a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated.
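Putting the fields together, an activate-jobs request might look like this sketch (values are illustrative):
{"data":{"jobType":"fetch-products","maxJobsToActivate":5,"timeout":"5m","workerName":"products-worker","fetchVariables":["productId"]},"operation":"activate-jobs"}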
Response
The binding returns a JSON with the following response:
jobKey - the unique job identifier, as obtained from the activate jobs response
variables - (optional) a JSON document representing the variables in the current task scope
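Assuming these fields belong to the complete-job operation (the operation heading falls outside this excerpt), a request sketch could be:
{"data":{"jobKey":2251799813686172,"variables":{"productId":"some-product-id"}},"operation":"complete-job"}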
Response
The binding does not return a response body.
fail-job
The fail-job operation marks the job as failed; if the retries argument is positive, then the job will be immediately activatable again, and a
worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the
job will not be activatable until the incident is resolved.
To perform a fail-job operation, invoke the Zeebe command binding with a POST method, and the following JSON body:
jobKey - the unique job identifier, as obtained when activating the job
retries - the amount of retries the job should have left
errorMessage - (optional) a message describing why the job failed; this is particularly useful if a job runs out of retries and an
incident is raised, as this message can help explain why an incident was raised
retryBackOff - (optional) the back-off timeout for the next retry
variables - (optional) JSON document that will instantiate the variables at the local scope of the
job's associated task; it must be a JSON object, as variables will be mapped in a
key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and
"b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a
valid argument, as the root of the JSON document is an array and not an object.
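Putting the fields together, a fail-job request might look like this sketch (values are illustrative):
{"data":{"jobKey":2251799813685739,"retries":5,"errorMessage":"Unexpected network error","retryBackOff":"30s"},"operation":"fail-job"}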
Response
The binding does not return a response body.
update-job-retries
The update-job-retries operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the
underlying problem be solved.
To perform a update-job-retries operation, invoke the Zeebe command binding with a POST method, and the following JSON body:
jobKey - the unique job identifier, as obtained through the activate-jobs operation
retries - the new amount of retries for the job; must be positive
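For example (the key value is illustrative):
{"data":{"jobKey":2251799813686172,"retries":10},"operation":"update-job-retries"}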
Response
The binding does not return a response body.
throw-error
The throw-error operation throws an error to indicate that a business error occurred while processing the job. The error is identified
by an error code and is handled by an error catch event in the process with the same error code.
To perform a throw-error operation, invoke the Zeebe command binding with a POST method, and the following JSON body:
{"data":{"jobKey":2251799813686172,"errorCode":"product-fetch-error","errorMessage":"The product could not be fetched","variables":{"productId":"some-product-id","productName":"some-product-name","productKey":"some-product-key"}},"operation":"throw-error"}
The data parameters are:
jobKey - the unique job identifier, as obtained when activating the job
errorCode - the error code that will be matched with an error catch event
errorMessage - (optional) an error message that provides additional context
variables - (optional) JSON document that will instantiate the variables at the local scope of the
job’s associated task; it must be a JSON object, as variables will be mapped in a
key-value fashion. e.g. { “a”: 1, “b”: 2 } will create two variables, named “a” and
“b” respectively, with their associated values. [{ “a”: 1, “b”: 2 }] would not be a
valid argument, as the root of the JSON document is an array and not an object.
Detailed documentation on the Zeebe JobWorker binding component
Component format
To setup Zeebe JobWorker binding create a component of type bindings.zeebe.jobworker. See this guide on how to create and apply a binding configuration.
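A minimal component sketch; the gatewayAddr field name is an assumption carried over from the Zeebe command binding, so verify it against your Dapr version:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: zeebe-jobworker
spec:
  type: bindings.zeebe.jobworker
  version: v1
  metadata:
  - name: gatewayAddr
    value: "<host>:<port>"
  - name: jobType
    value: "fetch-products"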
Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds
"45s"
Field | Required | Binding support | Details | Example
--- | --- | --- | --- | ---
usePlainTextConnection | N | Input | Whether to use a plain text connection or not | "true", "false"
caCertificatePath | N | Input | The path to the CA cert | "/path/to/ca-cert"
workerName | N | Input | The name of the worker activating the jobs, mostly used for logging purposes | "products-worker"
workerTimeout | N | Input | A job returned after this call will not be activated by another call until the timeout has been reached; defaults to 5 minutes | "5m"
requestTimeout | N | Input | The request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated. Defaults to 10 seconds | "30s"
jobType | Y | Input | The job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="fetch-products" />) | "fetch-products"
maxJobsActive | N | Input | Set the maximum number of jobs which will be activated for this worker at the same time. Defaults to 32 | "32"
concurrency | N | Input | The maximum number of concurrent spawned goroutines to complete jobs. Defaults to 4 | "4"
pollInterval | N | Input | Set the maximal interval between polling for new jobs. Defaults to 100 milliseconds | "100ms"
pollThreshold | N | Input | Set the threshold of buffered activated jobs before polling for new jobs, i.e. threshold * maxJobsActive. Defaults to 0.3 | "0.3"
fetchVariables | N | Input | A list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned | "productId", "productName", "productKey"
autocomplete | N | Input | Indicates if a job should be autocompleted or not. If not set, all jobs will be auto-completed by default. Disable it if the worker should manually complete or fail the job with either a business error or an incident | "true", "false"
retryBackOff | N | Input | The back-off timeout for the next retry if a job fails | "15s"
direction | N | Input | The direction of the binding | "input"
Binding support
This component supports input binding interfaces.
Input binding
Variables
The Zeebe process engine handles the process state as well as process variables, which can be passed
on process instantiation or which can be updated or created during process execution. These variables
can be passed to a registered job worker by defining the variable names as a comma-separated list in
the fetchVariables metadata field. The process engine will then pass these variables with their current
values to the job worker implementation.
If the binding registers the three variables productId, productName and productKey, then the worker will
be called with the following JSON body:
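For example, such a call might carry a body like this (the values are illustrative):
{"productId":"some-product-id","productName":"some-product-name","productKey":"some-product-key"}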
Note: if the fetchVariables metadata field is not passed, all process variables will be passed to the worker.
Headers
The Zeebe process engine has the ability to pass custom task headers to a job worker. These headers can be defined for every
service task.
Task headers will be passed by the binding as metadata (HTTP headers) to the job worker.
The binding will also pass the following job-related variables as metadata. The values will be passed as strings. The table also contains the
original data type, so that it can be converted back to the equivalent data type in the programming language used for the worker.
Metadata | Data type | Description
--- | --- | ---
X-Zeebe-Job-Key | int64 | The key, a unique identifier for the job
X-Zeebe-Job-Type | string | The type of the job (should match what was requested)
X-Zeebe-Process-Instance-Key | int64 | The job's process instance key
X-Zeebe-Bpmn-Process-Id | string | The bpmn process ID of the job process definition
X-Zeebe-Process-Definition-Version | int32 | The version of the job process definition
X-Zeebe-Process-Definition-Key | int64 | The key of the job process definition
X-Zeebe-Element-Id | string | The associated task element ID
X-Zeebe-Element-Instance-Key | int64 | The unique key identifying the associated task, unique within the scope of the process instance
X-Zeebe-Worker | string | The name of the worker which activated this job
X-Zeebe-Retries | int32 | The number of retries left for this job (should always be positive)
X-Zeebe-Deadline | int64 | When the job can be activated again, sent as a UNIX epoch timestamp
X-Zeebe-Autocomplete | bool | The autocomplete status that is defined in the binding metadata