Bindings component specs

The supported external bindings that interface with Dapr

The following tables list the input and output bindings supported by the Dapr bindings building block, grouped by provider. Learn how to set up different input and output binding components for Dapr bindings.

Table headers to note:

| Header | Description | Example |
| --- | --- | --- |
| Status | Component certification status | Alpha, Beta, Stable |
| Component version | The version of the component | v1 |
| Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |

Every binding component has its own set of properties. Click the name link to see the component specification for each binding.

Generic

| Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
| --- | :-: | :-: | --- | --- | --- |
| [Apple Push Notifications (APN)](/reference/components-reference/supported-bindings/apns/) | | ✅ | Alpha | v1 | 1.0 |
| [commercetools GraphQL](/reference/components-reference/supported-bindings/commercetools/) | | ✅ | Alpha | v1 | 1.8 |
| [Cron (Scheduler)](/reference/components-reference/supported-bindings/cron/) | ✅ | | Stable | v1 | 1.10 |
| [GraphQL](/reference/components-reference/supported-bindings/graghql/) | | ✅ | Alpha | v1 | 1.0 |
| [HTTP](/reference/components-reference/supported-bindings/http/) | | ✅ | Stable | v1 | 1.0 |
| [Huawei OBS](/reference/components-reference/supported-bindings/huawei-obs/) | | ✅ | Alpha | v1 | 1.8 |
| [InfluxDB](/reference/components-reference/supported-bindings/influxdb/) | | ✅ | Beta | v1 | 1.7 |
| [Kafka](/reference/components-reference/supported-bindings/kafka/) | ✅ | ✅ | Stable | v1 | 1.8 |
| [Kitex](/reference/components-reference/supported-bindings/kitex/) | | ✅ | Alpha | v1 | 1.11 |
| [KubeMQ](/reference/components-reference/supported-bindings/kubemq/) | ✅ | ✅ | Beta | v1 | 1.10 |
| [Kubernetes Events](/reference/components-reference/supported-bindings/kubernetes-binding/) | ✅ | | Alpha | v1 | 1.0 |
| [Local Storage](/reference/components-reference/supported-bindings/localstorage/) | | ✅ | Stable | v1 | 1.9 |
| [MQTT3](/reference/components-reference/supported-bindings/mqtt3/) | ✅ | ✅ | Beta | v1 | 1.7 |
| [MySQL &amp; MariaDB](/reference/components-reference/supported-bindings/mysql/) | | ✅ | Alpha | v1 | 1.0 |
| [PostgreSQL](/reference/components-reference/supported-bindings/postgresql/) | | ✅ | Stable | v1 | 1.9 |
| [Postmark](/reference/components-reference/supported-bindings/postmark/) | | ✅ | Alpha | v1 | 1.0 |
| [RabbitMQ](/reference/components-reference/supported-bindings/rabbitmq/) | ✅ | ✅ | Stable | v1 | 1.9 |
| [Redis](/reference/components-reference/supported-bindings/redis/) | | ✅ | Stable | v1 | 1.9 |
| [RethinkDB](/reference/components-reference/supported-bindings/rethinkdb/) | ✅ | | Beta | v1 | 1.9 |
| [SendGrid](/reference/components-reference/supported-bindings/sendgrid/) | | ✅ | Alpha | v1 | 1.0 |
| [SFTP](/reference/components-reference/supported-bindings/sftp/) | | ✅ | Alpha | v1 | 1.15 |
| [SMTP](/reference/components-reference/supported-bindings/smtp/) | | ✅ | Alpha | v1 | 1.0 |
| [Twilio](/reference/components-reference/supported-bindings/twilio/) | | ✅ | Alpha | v1 | 1.0 |
| [Wasm](/reference/components-reference/supported-bindings/wasm/) | | ✅ | Alpha | v1 | 1.11 |

Alibaba Cloud

| Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
| --- | :-: | :-: | --- | --- | --- |
| [Alibaba Cloud DingTalk](/reference/components-reference/supported-bindings/alicloud-dingtalk/) | ✅ | ✅ | Alpha | v1 | 1.2 |
| [Alibaba Cloud OSS](/reference/components-reference/supported-bindings/alicloudoss/) | | ✅ | Alpha | v1 | 1.0 |
| [Alibaba Cloud SLS](/reference/components-reference/supported-bindings/alicloudsls/) | | ✅ | Alpha | v1 | 1.9 |
| [Alibaba Cloud Tablestore](/reference/components-reference/supported-bindings/alicloudtablestore/) | | ✅ | Alpha | v1 | 1.5 |

Amazon Web Services (AWS)

| Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
| --- | :-: | :-: | --- | --- | --- |
| [AWS DynamoDB](/reference/components-reference/supported-bindings/dynamodb/) | | ✅ | Alpha | v1 | 1.0 |
| [AWS Kinesis](/reference/components-reference/supported-bindings/kinesis/) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [AWS S3](/reference/components-reference/supported-bindings/s3/) | | ✅ | Stable | v1 | 1.11 |
| [AWS SES](/reference/components-reference/supported-bindings/ses/) | | ✅ | Alpha | v1 | 1.4 |
| [AWS SNS](/reference/components-reference/supported-bindings/sns/) | | ✅ | Alpha | v1 | 1.0 |
| [AWS SQS](/reference/components-reference/supported-bindings/sqs/) | ✅ | ✅ | Alpha | v1 | 1.0 |

Cloudflare

| Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
| --- | :-: | :-: | --- | --- | --- |
| [Cloudflare Queues](/reference/components-reference/supported-bindings/cloudflare-queues/) | | ✅ | Alpha | v1 | 1.10 |

Google Cloud Platform (GCP)

| Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
| --- | :-: | :-: | --- | --- | --- |
| [GCP Cloud Pub/Sub](/reference/components-reference/supported-bindings/gcppubsub/) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [GCP Storage Bucket](/reference/components-reference/supported-bindings/gcpbucket/) | | ✅ | Alpha | v1 | 1.0 |

Microsoft Azure

| Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
| --- | :-: | :-: | --- | --- | --- |
| [Azure Blob Storage](/reference/components-reference/supported-bindings/blobstorage/) | | ✅ | Stable | v1 | 1.0 |
| [Azure Cosmos DB (Gremlin API)](/reference/components-reference/supported-bindings/cosmosdbgremlinapi/) | | ✅ | Alpha | v1 | 1.5 |
| [Azure CosmosDB](/reference/components-reference/supported-bindings/cosmosdb/) | | ✅ | Stable | v1 | 1.7 |
| [Azure Event Grid](/reference/components-reference/supported-bindings/eventgrid/) | ✅ | ✅ | Beta | v1 | 1.7 |
| [Azure Event Hubs](/reference/components-reference/supported-bindings/eventhubs/) | ✅ | ✅ | Stable | v1 | 1.8 |
| [Azure OpenAI](/reference/components-reference/supported-bindings/openai/) | ✅ | ✅ | Alpha | v1 | 1.11 |
| [Azure Service Bus Queues](/reference/components-reference/supported-bindings/servicebusqueues/) | ✅ | ✅ | Stable | v1 | 1.7 |
| [Azure SignalR](/reference/components-reference/supported-bindings/signalr/) | | ✅ | Alpha | v1 | 1.0 |
| [Azure Storage Queues](/reference/components-reference/supported-bindings/storagequeues/) | ✅ | ✅ | Stable | v1 | 1.0 |

Zeebe (Camunda Cloud)

| Component | Input Binding | Output Binding | Status | Component version | Since runtime version |
| --- | :-: | :-: | --- | --- | --- |
| [Zeebe Command](/reference/components-reference/supported-bindings/zeebe-command/) | | ✅ | Stable | v1 | 1.2 |
| [Zeebe Job Worker](/reference/components-reference/supported-bindings/zeebe-jobworker/) | ✅ | | Stable | v1 | 1.2 |

1 - Alibaba Cloud DingTalk binding spec

Detailed documentation on the Alibaba Cloud DingTalk binding component

Setup Dapr component

To set up an Alibaba Cloud DingTalk binding, create a component of type bindings.dingtalk.webhook. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.dingtalk.webhook
  version: v1
  metadata:
  - name: id
    value: "test_webhook_id"
  - name: url
    value: "https://oapi.dingtalk.com/robot/send?access_token=******"
  - name: secret
    value: "****************"
  - name: direction
    value: "input, output"
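For self-hosted Dapr, the component takes effect once the spec file is in the sidecar's components directory at startup. A minimal sketch, assuming self-hosted mode with the default components path (~/.dapr/components) and a hypothetical component name myDingTalk:

```shell
# Write the component spec into the default self-hosted components
# directory; Dapr loads every spec found there at sidecar startup.
# The component name (myDingTalk) and file name (dingtalk.yaml) are
# hypothetical examples.
mkdir -p ~/.dapr/components
cat > ~/.dapr/components/dingtalk.yaml <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myDingTalk
spec:
  type: bindings.dingtalk.webhook
  version: v1
  metadata:
  - name: id
    value: "test_webhook_id"
  - name: url
    value: "https://oapi.dingtalk.com/robot/send?access_token=******"
EOF
ls ~/.dapr/components/
```

On Kubernetes, the same file is applied as a resource with `kubectl apply -f dingtalk.yaml` instead.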

Spec metadata fields

| Field | Required | Binding support | Details | Example |
| --- | --- | --- | --- | --- |
| id | Y | Input/Output | Unique id | "test_webhook_id" |
| url | Y | Input/Output | DingTalk's Webhook url | "https://oapi.dingtalk.com/robot/send?access_token=******" |
| secret | N | Input/Output | The secret of DingTalk's Webhook | "****************" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create
  • get

Example

Follow the instructions here on setting the data of the payload:

curl -X POST http://localhost:3500/v1.0/bindings/myDingTalk \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "msgtype": "text",
          "text": {
            "content": "Hi"
          }
        },
        "operation": "create"
      }'
curl -X POST http://localhost:3500/v1.0/bindings/myDingTalk \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "msgtype": "text",
          "text": {
            "content": "Hi"
          }
        },
        "operation": "get"
      }'

2 - Alibaba Cloud Log Storage Service binding spec

Detailed documentation on the Alibaba Cloud Log Storage binding component

Component format

To set up an Alibaba Cloud SLS binding, create a component of type bindings.alicloud.sls. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: alicloud.sls
spec:
  type: bindings.alicloud.sls
  version: v1
  metadata:
  - name: AccessKeyID
    value: "[accessKey-id]"
  - name: AccessKeySecret
    value: "[accessKey-secret]"
  - name: Endpoint
    value: "[endpoint]"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
| --- | --- | --- | --- | --- |
| AccessKeyID | Y | Output | Access key ID credential. | |
| AccessKeySecret | Y | Output | Access key credential secret. | |
| Endpoint | Y | Output | Alicloud SLS endpoint. | |

Binding support

This component supports output binding with the following operations:

  • create

Request format

To perform a log store operation, invoke the binding with a POST method and the following JSON body:

{
    "metadata":{
        "project":"your-sls-project-name",
        "logstore":"your-sls-logstore-name",
        "topic":"your-sls-topic-name",
        "source":"your-sls-source"
    },
    "data":{
        "custom-log-field":"any other log info"
    },
    "operation":"create"
}

Example

Windows:

curl -X POST -H "Content-Type: application/json" -d "{\"metadata\":{\"project\":\"project-name\",\"logstore\":\"logstore-name\",\"topic\":\"topic-name\",\"source\":\"source-name\"},\"data\":{\"log-field\":\"log info\"}}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Linux/macOS:

curl -X POST -H "Content-Type: application/json" -d '{"metadata":{"project":"project-name","logstore":"logstore-name","topic":"topic-name","source":"source-name"},"data":{"log-field":"log info"}}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response format

Because the Alibaba Cloud SLS producer API is asynchronous, this binding returns no response. There is no callback interface to report success or failure; failures are only recorded in the console log.

3 - Alibaba Cloud Object Storage Service binding spec

Detailed documentation on the Alibaba Cloud Object Storage binding component

Component format

To set up an Alibaba Cloud Object Storage binding, create a component of type bindings.alicloud.oss. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: alicloudobjectstorage
spec:
  type: bindings.alicloud.oss
  version: v1
  metadata:
  - name: endpoint
    value: "[endpoint]"
  - name: accessKeyID
    value: "[key-id]"
  - name: accessKey
    value: "[access-key]"
  - name: bucket
    value: "[bucket]"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
| --- | --- | --- | --- | --- |
| endpoint | Y | Output | Alicloud OSS endpoint. | https://oss-cn-hangzhou.aliyuncs.com |
| accessKeyID | Y | Output | Access key ID credential. | |
| accessKey | Y | Output | Access key credential. | |
| bucket | Y | Output | Name of the storage bucket. | |

Binding support

This component supports output binding with the following operations:

  • create

Create object

To perform a create object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Example

Saving to a randomly generated UUID file

Windows:

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Linux/macOS:

curl -d '{ "operation": "create", "data": "Hello World" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Saving to a specific file

Windows:

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-key\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Linux/macOS:

curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-key" } }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Metadata information

Object key

By default, the Alicloud OSS output binding will auto-generate a UUID as the object key. You can set the key with the following metadata:

{
    "data": "file content",
    "metadata": {
        "key": "my-key"
    },
    "operation": "create"
}
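Hand-escaping JSON inside curl commands (as in the Windows examples above) is error-prone. A small sketch that builds the create-object request body programmatically instead, assuming python3 is on the PATH:

```shell
# Build the request body with python3 so quoting is handled for us,
# then the result can be passed to curl unchanged.
body=$(python3 -c 'import json; print(json.dumps({"operation": "create", "data": "Hello World", "metadata": {"key": "my-key"}}))')
echo "$body"
# curl -d "$body" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```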

4 - Alibaba Cloud Tablestore binding spec

Detailed documentation on the Alibaba Cloud Tablestore binding component

Component format

To set up an Alibaba Cloud Tablestore binding, create a component of type bindings.alicloud.tablestore. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mytablestore
spec:
  type: bindings.alicloud.tablestore
  version: v1
  metadata:
  - name: endpoint
    value: "[endpoint]"
  - name: accessKeyID
    value: "[key-id]"
  - name: accessKey
    value: "[access-key]"
  - name: instanceName
    value: "[instance]"
  - name: tableName
    value: "[table]"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
| --- | --- | --- | --- | --- |
| endpoint | Y | Output | Alicloud Tablestore endpoint. | https://tablestore-cn-hangzhou.aliyuncs.com |
| accessKeyID | Y | Output | Access key ID credential. | |
| accessKey | Y | Output | Access key credential. | |
| instanceName | Y | Output | Name of the instance. | |
| tableName | Y | Output | Name of the table. | |

Binding support

This component supports output binding with the following operations:

  • create
  • delete
  • list
  • get

Create object

To perform a create object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "create",
  "data": "YOUR_CONTENT",
  "metadata": {
    "primaryKeys": "pk1"
  }
} 

Delete object

To perform a delete object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
   "primaryKeys": "pk1",
   "columnToGet": "name,age,date"
  },
  "data": {
    "pk1": "data1"
  }
} 

List objects

To perform a list objects operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "list",
  "metadata": {
    "primaryKeys": "pk1",
    "columnToGet": "name,age,date"
  },
  "data": {
    "pk1": "data1",
    "pk2": "data2"
  }
} 

Get object

To perform a get object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "primaryKeys": "pk1"
  },
  "data": {
    "pk1": "data1"
  }
} 
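
The bodies above differ only in the operation field and in which of metadata/data they carry; a hypothetical helper for building them (the field names come from the examples above, everything else is illustrative):

```python
import json

def tablestore_request(operation, primary_keys, data=None, columns=None):
    """Build a request body for the Alibaba Cloud Tablestore output binding.

    operation: one of "create", "get", "delete", "list".
    primary_keys: comma-separated primary key names, e.g. "pk1".
    data: dict of primary-key values (and, for "create", the row content).
    columns: comma-separated column names to fetch ("columnToGet").
    """
    body = {"operation": operation, "metadata": {"primaryKeys": primary_keys}}
    if columns is not None:
        body["metadata"]["columnToGet"] = columns
    if data is not None:
        body["data"] = data
    return body

# A "get" request for the row whose pk1 is "data1", fetching three columns:
req = tablestore_request("get", "pk1", data={"pk1": "data1"}, columns="name,age,date")
print(json.dumps(req))
```

POST the resulting JSON to the binding endpoint (for example via the Dapr HTTP API) as shown in the curl examples elsewhere in this document.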

5 - Apple Push Notification Service binding spec

Detailed documentation on the Apple Push Notification Service binding component

Component format

To set up the Apple Push Notifications binding, create a component of type bindings.apns. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.apns
  version: v1
  metadata:
    - name: development
      value: "<bool>"
    - name: key-id
      value: "<APPLE_KEY_ID>"
    - name: team-id
      value: "<APPLE_TEAM_ID>"
    - name: private-key
      secretKeyRef:
        name: <SECRET>
        key: "<SECRET-KEY-NAME>"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| development | Y | Output | Tells the binding which APNs service to use. Set to "true" to use the development service or "false" to use the production service. Default: "true" | "true" |
| key-id | Y | Output | The identifier for the private key from the Apple Developer Portal | "private-key-id" |
| team-id | Y | Output | The identifier for the organization or author from the Apple Developer Portal | "team-id" |
| private-key | Y | Output | A PKCS #8-formatted private key. It is intended that the private key is stored in the secret store and not exposed directly in the configuration. See here for more details | "pem file" |

Private key

The APNS binding needs a cryptographic private key in order to generate authentication tokens for the APNS service. The private key can be generated from the Apple Developer Portal and is provided as a PKCS #8 file with the private key stored in PEM format. The private key should be stored in the Dapr secret store and not stored directly in the binding’s configuration file.

A sample configuration file for the APNS binding is shown below:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: apns
spec:
  type: bindings.apns
  metadata:
  - name: development
    value: false
  - name: key-id
    value: PUT-KEY-ID-HERE
  - name: team-id
    value: PUT-APPLE-TEAM-ID-HERE
  - name: private-key
    secretKeyRef:
      name: apns-secrets
      key: private-key

If using Kubernetes, a sample secret configuration may look like this:

apiVersion: v1
kind: Secret
metadata:
    name: apns-secrets
stringData:
    private-key: |
        -----BEGIN PRIVATE KEY-----
        KEY-DATA-GOES-HERE
        -----END PRIVATE KEY-----

Binding support

This component supports output binding with the following operations:

  • create

Push notification format

The APNS binding is a pass-through wrapper over the Apple Push Notification Service. The APNS binding will send the request directly to the APNS service without any translation. It is therefore important to understand the payload for push notifications expected by the APNS service. The payload format is documented here.

Request format

{
    "data": {
        "aps": {
            "alert": {
                "title": "New Updates!",
                "body": "There are new updates for your review"
            }
        }
    },
    "metadata": {
        "device-token": "PUT-DEVICE-TOKEN-HERE",
        "apns-push-type": "alert",
        "apns-priority": "10",
        "apns-topic": "com.example.helloworld"
    },
    "operation": "create"
}

The data object contains a complete push notification specification as described in the Apple documentation. The data object will be sent directly to the APNs service.

Besides the device-token value, the HTTP headers specified in the Apple documentation can be sent as metadata fields and will be included in the HTTP request to the APNs service.
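
The request shown above can be assembled programmatically; a hypothetical helper (the token, topic, and alert text are placeholders):

```python
import json

def apns_push(device_token, title, body, topic, priority="10", push_type="alert"):
    """Build a request body for the Dapr APNs output binding.

    The "data" object is passed through to APNs unchanged; every metadata
    field other than device-token becomes an HTTP header on the request
    that Dapr sends to the APNs service.
    """
    return {
        "operation": "create",
        "data": {"aps": {"alert": {"title": title, "body": body}}},
        "metadata": {
            "device-token": device_token,
            "apns-push-type": push_type,
            "apns-priority": priority,
            "apns-topic": topic,
        },
    }

print(json.dumps(apns_push("PUT-DEVICE-TOKEN-HERE", "New Updates!",
                           "There are new updates for your review",
                           "com.example.helloworld")))
```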

Response format

{
    "messageID": "UNIQUE-ID-FOR-NOTIFICATION"
}

6 - AWS DynamoDB binding spec

Detailed documentation on the AWS DynamoDB binding component

Component format

To set up the AWS DynamoDB binding, create a component of type bindings.aws.dynamodb. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: "items"
  - name: region
    value: "us-west-2"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "*****************"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| table | Y | Output | The DynamoDB table name | "items" |
| region | Y | Output | The specific AWS region the AWS DynamoDB instance is deployed in | "us-east-1" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |

Binding support

This component supports output binding with the following operations:

  • create
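
The create operation writes the JSON document in data as an item in the configured table. A hedged sketch of such a request body (the attribute names and values are illustrative; the item must include the table's key attributes):

```python
import json

# Hypothetical request body for a table whose partition key is "id".
payload = {
    "operation": "create",
    "data": {"id": "order-123", "quantity": 3, "status": "pending"},
}
print(json.dumps(payload))
```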

7 - AWS Kinesis binding spec

Detailed documentation on the AWS Kinesis binding component

Component format

To set up the AWS Kinesis binding, create a component of type bindings.aws.kinesis. See this guide on how to create and apply a binding configuration.

See this for instructions on how to set up an AWS Kinesis data stream. See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.kinesis
  version: v1
  metadata:
  - name: streamName
    value: "KINESIS_STREAM_NAME" # Kinesis stream name
  - name: consumerName
    value: "KINESIS_CONSUMER_NAME" # Kinesis consumer name
  - name: mode
    value: "shared" # shared - Shared throughput or extended - Extended/Enhanced fanout
  - name: region
    value: "AWS_REGION" #replace
  - name: accessKey
    value: "AWS_ACCESS_KEY" # replace
  - name: secretKey
    value: "AWS_SECRET_KEY" #replace
  - name: sessionToken
    value: "*****************"
  - name: direction
    value: "input, output"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| mode | N | Input | The Kinesis stream mode. shared - Shared throughput, extended - Extended/Enhanced fanout methods. More details are here. Defaults to "shared" | "shared", "extended" |
| streamName | Y | Input/Output | The AWS Kinesis Stream Name | "stream" |
| consumerName | Y | Input | The AWS Kinesis Consumer Name | "myconsumer" |
| region | Y | Output | The specific AWS region the AWS Kinesis instance is deployed in | "us-east-1" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

8 - AWS S3 binding spec

Detailed documentation on the AWS S3 binding component

Component format

To set up an AWS S3 binding, create a component of type bindings.aws.s3. This binding works with other S3-compatible services, such as Minio. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: region
    value: "us-west-2"
  - name: endpoint
    value: "s3.us-west-2.amazonaws.com"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "mysession"
  - name: decodeBase64
    value: "<bool>"
  - name: encodeBase64
    value: "<bool>"
  - name: forcePathStyle
    value: "<bool>"
  - name: disableSSL
    value: "<bool>"
  - name: insecureSSL
    value: "<bool>"
  - name: storageClass
    value: "<string>"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| bucket | Y | Output | The name of the S3 bucket to write to | "bucket" |
| region | Y | Output | The specific AWS region | "us-east-1" |
| endpoint | N | Output | The specific AWS endpoint | "s3.us-east-1.amazonaws.com" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
| forcePathStyle | N | Output | Currently Amazon S3 SDK supports virtual hosted-style and path-style access. "true" is path-style format like "https://&lt;endpoint&gt;/&lt;your bucket&gt;/&lt;key&gt;". "false" is hosted-style format like "https://&lt;your bucket&gt;.&lt;endpoint&gt;/&lt;key&gt;". Defaults to "false" | "true", "false" |
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to "false" | "true", "false" |
| encodeBase64 | N | Output | Configuration to encode base64 file content before returning the content. (In case of opening a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to "false" | "true", "false" |
| disableSSL | N | Output | Allows connecting to non-https:// endpoints. Defaults to "false" | "true", "false" |
| insecureSSL | N | Output | When connecting to https:// endpoints, accepts invalid or self-signed certificates. Defaults to "false" | "true", "false" |
| storageClass | N | Output | The desired storage class for objects during the create operation. Valid AWS storage class types can be found here | STANDARD_IA |

S3 Bucket Creation

Using with Minio

Minio is a service that exposes local storage as S3-compatible object storage, and it’s a popular alternative to S3, especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:

  1. Set endpoint to the address of the Minio server, including protocol (http:// or https://) and the optional port at the end. For example, http://minio.local:9000 (the values depend on your environment).
  2. forcePathStyle must be set to true
  3. The value for region is not important; you can set it to us-east-1.
  4. Depending on your environment, you may need to set disableSSL to true if you’re connecting to Minio using a non-secure connection (using the http:// protocol). If you are using a secure connection (https:// protocol) but with a self-signed certificate, you may need to set insecureSSL to true.
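
Putting the four tweaks above together, a hypothetical component for a local Minio server might look like this (the endpoint, bucket name, and credentials are placeholders for your environment):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: minio-s3
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: region
    value: "us-east-1"        # any value works for Minio
  - name: endpoint
    value: "http://minio.local:9000"
  - name: accessKey
    value: "minioadmin"
  - name: secretKey
    value: "minioadmin"
  - name: forcePathStyle
    value: "true"             # required for Minio
  - name: disableSSL
    value: "true"             # only for http:// endpoints
```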

For local development, the LocalStack project can be used to emulate AWS S3. Follow these instructions to run LocalStack.

To run LocalStack locally from the command line using Docker, use a docker-compose.yaml similar to the following:

version: "3.8"

services:
  localstack:
    container_name: "cont-aws-s3"
    image: localstack/localstack:1.4.0
    ports:
      - "127.0.0.1:4566:4566"
    environment:
      - DEBUG=1
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "<PATH>/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh"  # init hook
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

To use the S3 component, you need an existing bucket. The example above uses a LocalStack Initialization Hook to set up the bucket.
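
The Initialization Hook is a script that LocalStack runs once the service is ready; a sketch of what the mounted init-aws.sh might contain (awslocal is LocalStack's wrapper around the AWS CLI, and the bucket name matches the component example in this section):

```shell
#!/bin/bash
# Runs inside the LocalStack container when the service reports ready.
awslocal s3 mb s3://conformance-test-docker
```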

To use LocalStack with your S3 binding, you need to provide the endpoint configuration in the component metadata. The endpoint is unnecessary when running against production AWS.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
    name: aws-s3
    namespace: default
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
    - name: bucket
      value: conformance-test-docker
    - name: endpoint
      value: "http://localhost:4566"
    - name: accessKey
      value: "my-access"
    - name: secretKey
      value: "my-secret"
    - name: region
      value: "us-east-1"

To use the S3 component, you need to use an existing bucket. Follow the AWS documentation for creating a bucket.

Binding support

This component supports output binding with the following operations:

  • create
  • presign
  • get
  • delete
  • list

Create object

To perform a create operation, invoke the AWS S3 binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the object key. See below for metadata support to set the key.

{
  "operation": "create",
  "data": "YOUR_CONTENT",
  "metadata": { 
    "storageClass": "STANDARD_IA",
    "tags": "project=sashimi,year=2024"
  }
}

For example, you can provide a storage class or tags while using the create operation with a Linux curl command:

curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "storageClass": "STANDARD_IA", "tags": "project=sashimi,year=2024" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Share object with a presigned URL

To presign an object with a specified time-to-live, use the presignTTL metadata key on a create request. Valid values for presignTTL are Go duration strings.

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"presignTTL\": \"15m\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "presignTTL": "15m" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response

The response body contains the following example JSON:

{
    "location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>",
    "versionID":"<version ID if Bucket Versioning is enabled>",
    "presignURL": "https://<your bucket>.s3.<your region>.amazonaws.com/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWZ7B6WCRGMKFGQ%2F20180210%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180210T171315Z&X-Amz-Expires=1800&X-Amz-Signature=12b74b0788aa036bc7c3d03b3f20c61f1f91cc9ad8873e3314255dc479a25351&X-Amz-SignedHeaders=host"
}

Examples

Save text to a randomly generated UUID file

On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a file to an object

To upload a file, encode it as Base64 and let the binding know to decode it:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: mybucket
  - name: region
    value: us-west-2
  - name: endpoint
    value: s3.us-west-2.amazonaws.com
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "mysession"
  - name: decodeBase64
    value: "<bool>"
  - name: forcePathStyle
    value: "<bool>"

Then you can upload it as you would normally:

curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "key": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
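The YOUR_BASE_64_CONTENT placeholder in these requests can be produced with any standard library; a minimal Python sketch (the key name is illustrative):

```python
import base64
import json

def create_request(raw, key):
    """Build a create request whose data is the Base64 encoding of raw bytes."""
    return {
        "operation": "create",
        "data": base64.b64encode(raw).decode("ascii"),
        "metadata": {"key": key},
    }

req = create_request(b"hello", "my-test-file.jpg")
print(json.dumps(req))  # data is "aGVsbG8="
```
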
Upload from file path

To upload a file from a supplied path (relative or absolute), use the filePath metadata key on a create request with an empty data field.

curl -d "{ \"operation\": \"create\", \"metadata\": { \"filePath\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "metadata": { "filePath": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body will contain the following JSON:

{
    "location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>",
    "versionID":"<version ID if Bucket Versioning is enabled>"
}

Presign an existing object

To presign an existing S3 object with a specified time-to-live, use the presignTTL and key metadata keys on a presign request. Valid values for presignTTL are Go duration strings.

curl -d "{ \"operation\": \"presign\", \"metadata\": { \"presignTTL\": \"15m\", \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "presign", "metadata": { "presignTTL": "15m", "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response

The response body contains the following example JSON:

{
    "presignURL": "https://<your bucket>.s3.<your region>.amazonaws.com/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWZ7B6WCRGMKFGQ%2F20180210%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180210T171315Z&X-Amz-Expires=1800&X-Amz-Signature=12b74b0788aa036bc7c3d03b3f20c61f1f91cc9ad8873e3314255dc479a25351&X-Amz-SignedHeaders=host"
}
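
Valid presignTTL values are Go duration strings; a hypothetical client-side sketch that validates the TTL before issuing a presign request (the parser covers only the h/m/s units used in these examples):

```python
import json
import re
from datetime import timedelta

_UNITS = {"s": "seconds", "m": "minutes", "h": "hours"}

def parse_go_duration(s):
    """Parse a subset of Go duration strings ("90s", "15m", "1h30m")."""
    parts = re.findall(r"(\d+)([smh])", s)
    if not parts or "".join(n + u for n, u in parts) != s:
        raise ValueError(f"unsupported duration: {s!r}")
    return timedelta(**{_UNITS[u]: int(n) for n, u in parts})

def presign_request(key, ttl="15m"):
    """Build a presign request body, failing fast on a malformed TTL."""
    parse_go_duration(ttl)
    return {"operation": "presign", "metadata": {"key": key, "presignTTL": ttl}}

print(json.dumps(presign_request("my-test-file.txt")))
```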

Get object

To perform a get file operation, invoke the AWS S3 binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Example

curl -d "{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the object.

Delete object

To perform a delete object operation, invoke the AWS S3 binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Examples

Delete object
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body will be returned if successful.

List objects

To perform a list object operation, invoke the S3 binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 10,
    "prefix": "file",
    "marker": "hvlcCQFSOD5TD",
    "delimiter": "i0FvxAn2EOEL6"
  }
}

The data parameters are:

  • maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
  • prefix - (optional) limits the response to keys that begin with the specified prefix.
  • marker - (optional) marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. Marker can be any key in the bucket. The marker value may then be used in a subsequent call to request the next set of list items.
  • delimiter - (optional) A delimiter is a character you use to group keys.
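
The marker and NextMarker fields support pagination across truncated responses; a hedged sketch of the client-side loop (fetch is a stand-in for however you invoke the binding and decode its JSON response):

```python
def list_all_keys(fetch, prefix=""):
    """Page through a bucket using the list operation's marker fields.

    fetch is any callable that posts the given body to the binding and
    returns the decoded JSON response shown below.
    """
    keys, marker = [], ""
    while True:
        body = {"operation": "list",
                "data": {"maxResults": 1000, "prefix": prefix, "marker": marker}}
        resp = fetch(body)
        keys.extend(obj["Key"] for obj in resp.get("Contents") or [])
        if not resp.get("IsTruncated"):
            return keys
        marker = resp["NextMarker"]

# Exercise the loop with a canned two-page response:
pages = [
    {"Contents": [{"Key": "a"}], "IsTruncated": True, "NextMarker": "a"},
    {"Contents": [{"Key": "b"}], "IsTruncated": False},
]
print(list_all_keys(lambda body: pages.pop(0)))  # ['a', 'b']
```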

Response

The response body contains the list of found objects.

The list of objects will be returned as JSON array in the following form:

{
	"CommonPrefixes": null,
	"Contents": [
		{
			"ETag": "\"7e94cc9b0f5226557b05a7c2565dd09f\"",
			"Key": "hpNdFUxruNuwm",
			"LastModified": "2021-08-16T06:44:14Z",
			"Owner": {
				"DisplayName": "owner name",
				"ID": "owner id"
			},
			"Size": 6916,
			"StorageClass": "STANDARD"
		}
	],
	"Delimiter": "",
	"EncodingType": null,
	"IsTruncated": true,
	"Marker": "hvlcCQFSOD5TD",
	"MaxKeys": 1,
	"Name": "mybucketdapr",
	"NextMarker": "hzaUPWjmvyi9W",
	"Prefix": ""
}

9 - AWS SES binding spec

Detailed documentation on the AWS SES binding component

Component format

To set up the AWS SES binding, create a component of type bindings.aws.ses. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ses
spec:
  type: bindings.aws.ses
  version: v1
  metadata:
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: region
    value: "eu-west-1"
  - name: sessionToken
    value: mysession
  - name: emailFrom
    value: "sender@example.com"
  - name: emailTo
    value: "receiver@example.com"
  - name: emailCc
    value: "cc@example.com"
  - name: emailBcc
    value: "bcc@example.com"
  - name: subject
    value: "subject"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| region | N | Output | The specific AWS region | "eu-west-1" |
| accessKey | N | Output | The AWS Access Key to access this resource | "key" |
| secretKey | N | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |
| emailFrom | N | Output | If set, this specifies the email address of the sender. See also | "me@example.com" |
| emailTo | N | Output | If set, this specifies the email address of the receiver. See also | "me@example.com" |
| emailCc | N | Output | If set, this specifies the email address to CC in. See also | "me@example.com" |
| emailBcc | N | Output | If set, this specifies the email address to BCC in. See also | "me@example.com" |
| subject | N | Output | If set, this specifies the subject of the email message. See also | "subject of mail" |

Binding support

This component supports output binding with the following operations:

  • create

Example request

You can specify any of the following optional metadata properties with each request:

  • emailFrom
  • emailTo
  • emailCc
  • emailBcc
  • subject

When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom, emailTo, and subject fields.

The emailTo, emailCc and emailBcc fields can contain multiple email addresses separated by a semicolon.

Example:

{
  "operation": "create",
  "metadata": {
    "emailTo": "dapr-smtp-binding@example.net",
    "emailCc": "cc1@example.net",
    "subject": "Email subject"
  },
  "data": "Testing Dapr SMTP Binding"
}
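
Since per-request metadata is combined with the component's configured metadata, and recipient fields are semicolon-separated, a hypothetical helper might look like this (the addresses are placeholders):

```python
import json

def ses_request(body, component_metadata, **overrides):
    """Merge per-request fields over the component's configured metadata."""
    metadata = {**component_metadata, **overrides}
    return {"operation": "create", "metadata": metadata, "data": body}

component = {"emailFrom": "sender@example.com", "subject": "default subject"}
req = ses_request("Testing Dapr SMTP Binding", component,
                  emailTo="a@example.net;b@example.net", subject="Email subject")
print(json.dumps(req))
print(req["metadata"]["emailTo"].split(";"))  # two recipients
```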


10 - AWS SNS binding spec

Detailed documentation on the AWS SNS binding component

Component format

To set up the AWS SNS binding, create a component of type bindings.aws.sns. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.sns
  version: v1
  metadata:
  - name: topicArn
    value: "mytopic"
  - name: region
    value: "us-west-2"
  - name: endpoint
    value: "sns.us-west-2.amazonaws.com"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "*****************"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| topicArn | Y | Output | The SNS topic name | "arn:::topicarn" |
| region | Y | Output | The specific AWS region | "us-east-1" |
| endpoint | N | Output | The specific AWS endpoint | "sns.us-east-1.amazonaws.com" |
| accessKey | Y | Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Output | The AWS session token to use | "sessionToken" |

Binding support

This component supports output binding with the following operations:

  • create

11 - AWS SQS binding spec

Detailed documentation on the AWS SQS binding component

Component format

To set up the AWS SQS binding, create a component of type bindings.aws.sqs. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.sqs
  version: v1
  metadata:
  - name: queueName
    value: "items"
  - name: region
    value: "us-west-2"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "*****************"
  - name: direction 
    value: "input, output"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| queueName | Y | Input/Output | The SQS queue name | "myqueue" |
| region | Y | Input/Output | The specific AWS region | "us-east-1" |
| accessKey | Y | Input/Output | The AWS Access Key to access this resource | "key" |
| secretKey | Y | Input/Output | The AWS Secret Access Key to access this resource | "secretAccessKey" |
| sessionToken | N | Input/Output | The AWS session token to use | "sessionToken" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

12 - Azure Blob Storage binding spec

Detailed documentation on the Azure Blob Storage binding component

Component format

To set up the Azure Blob Storage binding, create a component of type bindings.azure.blobstorage. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: myStorageAccountName
  - name: accountKey
    value: ***********
  - name: containerName
    value: container1
# - name: decodeBase64
#   value: <bool>
# - name: getBlobRetryCount
#   value: <integer>
# - name: publicAccessLevel
#   value: <publicAccessLevel>

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| accountName | Y | Input/Output | The name of the Azure Storage account | "myexampleaccount" |
| accountKey | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication. | "access-key" |
| containerName | Y | Output | The name of the Blob Storage container to write to | myexamplecontainer |
| endpoint | N | Input/Output | Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port. | "http://127.0.0.1:10000" |
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). Defaults to false | true, false |
| getBlobRetryCount | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader. Defaults to 10 | 1, 2 |
| publicAccessLevel | N | Output | Specifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to none | blob, container, none |

Microsoft Entra ID authentication

The Azure Blob Storage binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Binding support

This component supports output binding with the following operations:

  • create
  • get
  • delete
  • list

The Blob storage component’s input binding triggers and pushes events using Azure Event Grid.

Refer to the Reacting to Blob storage events guide for more set up and more information.

Create blob

To perform a create blob operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the blob name. See below for metadata support to set the name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Examples

Save text to a randomly generated UUID blob

On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific blob
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"blobName\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "blobName": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a file to a blob

To upload a file, encode it as Base64 and let the binding know to decode it:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: myStorageAccountName
  - name: accountKey
    value: ***********
  - name: containerName
    value: container1
  - name: decodeBase64
    value: true

Then you can upload it as you would normally:

curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"blobName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "blobName": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body will contain the following JSON:

{
   "blobURL": "https://<your account name>.blob.core.windows.net/<your container name>/<filename>"
}

Get blob

To perform a get blob operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "blobName": "myblob",
    "includeMetadata": "true"
  }
}

The metadata parameters are:

  • blobName - the name of the blob
  • includeMetadata - (optional) defines whether the user-defined metadata should be returned or not. Defaults to: false

Example

curl -d "{ \"operation\": \"get\", \"metadata\": { \"blobName\": \"myblob\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "blobName": "myblob" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the blob object. If enabled, the user defined metadata will be returned as HTTP headers in the form:

Metadata.key1: value1
Metadata.key2: value2
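
These headers can be folded back into a dictionary on the client; a sketch (real HTTP header lookups may be case-insensitive, which this simple version ignores):

```python
def user_metadata(headers):
    """Extract user-defined metadata from a get response's HTTP headers."""
    prefix = "Metadata."
    return {k[len(prefix):]: v for k, v in headers.items() if k.startswith(prefix)}

headers = {"Content-Type": "text/plain",
           "Metadata.key1": "value1", "Metadata.key2": "value2"}
print(user_metadata(headers))  # {'key1': 'value1', 'key2': 'value2'}
```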

Delete blob

To perform a delete blob operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "blobName": "myblob"
  }
}

The metadata parameters are:

  • blobName - the name of the blob
  • deleteSnapshots - (optional) required if the blob has associated snapshots. Specify one of the following two options:
    • include: Delete the base blob and all of its snapshots
    • only: Delete only the blob’s snapshots and not the blob itself

Examples

Delete blob
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Delete blob snapshots only
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"only\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "only" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Delete blob including snapshots
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"include\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "include" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) response with an empty body is returned if successful.

List blobs

To perform a list blobs operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 10,
    "prefix": "file",
    "marker": "2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC01NS03NzgtMjEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--",
    "include": {
      "snapshots": false,
      "metadata": true,
      "uncommittedBlobs": false,
      "copy": false,
      "deleted": false
    }
  }
}

The data parameters are:

  • maxResults - (optional) specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResults, the server returns up to 5,000 items.
  • prefix - (optional) filters the results to return only blobs whose names begin with the specified prefix.
  • marker - (optional) a string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items.
  • include - (optional) Specifies one or more datasets to include in the response:
    • snapshots: Specifies that snapshots should be included in the enumeration. Snapshots are listed from oldest to newest in the response. Defaults to: false
    • metadata: Specifies that blob metadata be returned in the response. Defaults to: false
    • uncommittedBlobs: Specifies that blobs for which blocks have been uploaded, but which have not been committed using Put Block List, be included in the response. Defaults to: false
    • copy: Version 2012-02-12 and newer. Specifies that metadata related to any current or previous Copy Blob operation should be included in the response. Defaults to: false
    • deleted: Version 2017-07-29 and newer. Specifies that soft deleted blobs should be included in the response. Defaults to: false
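Because an incomplete listing returns a marker, clients typically loop until the marker comes back empty. A minimal Python sketch of that pagination pattern; the fetch_page callable stands in for an actual binding invocation and is purely illustrative:

```python
def list_all_blobs(fetch_page, page_size=100):
    """Drain a paginated 'list' operation by following the marker.

    fetch_page(data) mimics invoking the binding with the given request
    'data' dict and returns a tuple of (blobs, next_marker).
    """
    blobs, marker = [], ""
    while True:
        page, marker = fetch_page({"maxResults": page_size, "marker": marker})
        blobs.extend(page)
        if not marker:  # an empty marker means the listing is complete
            return blobs
```

In a real application, fetch_page would POST the data dict to the Dapr bindings endpoint and read the Metadata.marker response header described below.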

Response

The response body contains the list of found blobs as well as the following HTTP headers:

Metadata.marker: 2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC0zNC04NjctMTEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--
Metadata.number: 10

  • marker - the next marker which can be used in a subsequent call to request the next set of list items. See the marker description on the data property of the binding input.
  • number - the number of blobs found

The list of blobs will be returned as JSON array in the following form:

[
  {
    "XMLName": {
      "Space": "",
      "Local": "Blob"
    },
    "Name": "file-08-07-2021-09-38-13-776-1.txt",
    "Deleted": false,
    "Snapshot": "",
    "Properties": {
      "XMLName": {
        "Space": "",
        "Local": "Properties"
      },
      "CreationTime": "2021-07-08T07:38:16Z",
      "LastModified": "2021-07-08T07:38:16Z",
      "Etag": "0x8D941E3593C6573",
      "ContentLength": 1,
      "ContentType": "application/octet-stream",
      "ContentEncoding": "",
      "ContentLanguage": "",
      "ContentMD5": "xMpCOKC5I4INzFCab3WEmw==",
      "ContentDisposition": "",
      "CacheControl": "",
      "BlobSequenceNumber": null,
      "BlobType": "BlockBlob",
      "LeaseStatus": "unlocked",
      "LeaseState": "available",
      "LeaseDuration": "",
      "CopyID": null,
      "CopyStatus": "",
      "CopySource": null,
      "CopyProgress": null,
      "CopyCompletionTime": null,
      "CopyStatusDescription": null,
      "ServerEncrypted": true,
      "IncrementalCopy": null,
      "DestinationSnapshot": null,
      "DeletedTime": null,
      "RemainingRetentionDays": null,
      "AccessTier": "Hot",
      "AccessTierInferred": true,
      "ArchiveStatus": "",
      "CustomerProvidedKeySha256": null,
      "AccessTierChangeTime": null
    },
    "Metadata": null
  }
]

Metadata information

By default, the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to the blob. This behavior is configurable via the metadata property of the message (all fields are optional).

Applications publishing to an Azure Blob Storage output binding should send a message with the following format:

{
    "data": "file content",
    "metadata": {
        "blobName"           : "filename.txt",
        "contentType"        : "text/plain",
        "contentMD5"         : "vZGKbMRDAnMs4BIwlXaRvQ==",
        "contentEncoding"    : "UTF-8",
        "contentLanguage"    : "en-us",
        "contentDisposition" : "attachment",
        "cacheControl"       : "no-cache",
        "custom"             : "hello-world"
    },
    "operation": "create"
}
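A small Python sketch of assembling such a message, mirroring the default UUID-naming behavior described above. The helper is illustrative and not part of a Dapr SDK:

```python
import uuid


def build_create_message(content, blob_name=None, content_type=None, **custom_metadata):
    """Build a 'create' message for the Azure Blob Storage output binding.

    When blob_name is omitted, a UUID is generated here for illustration,
    matching the binding's own default naming behavior.
    """
    metadata = dict(custom_metadata)  # any custom key-value pairs
    metadata["blobName"] = blob_name or str(uuid.uuid4())
    if content_type:
        metadata["contentType"] = content_type
    return {"data": content, "metadata": metadata, "operation": "create"}
```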

13 - Azure Cosmos DB (Gremlin API) binding spec

Detailed documentation on the Azure Cosmos DB (Gremlin API) binding component

Component format

To set up an Azure Cosmos DB (Gremlin API) binding, create a component of type bindings.azure.cosmosdb.gremlinapi. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.cosmosdb.gremlinapi
  version: v1
  metadata:
  - name: url
    value: "wss://******.gremlin.cosmos.azure.com:443/"
  - name: masterKey
    value: "*****"
  - name: username
    value: "*****"

Spec metadata fields

Field Required Binding support Details Example
url Y Output The Cosmos DB url for Gremlin APIs "wss://******.gremlin.cosmos.azure.com:443/"
masterKey Y Output The Cosmos DB account master key "masterKey"
username Y Output The username of the Cosmos DB database "/dbs/<database_name>/colls/<graph_name>"

For more information see Quickstart: Azure Cosmos Graph DB using Gremlin.

Binding support

This component supports output binding with the following operations:

  • query

Request payload sample

{
  "data": {
    "gremlin": "g.V().count()"
    },
  "operation": "query"
}

14 - Azure Cosmos DB (SQL API) binding spec

Detailed documentation on the Azure Cosmos DB (SQL API) binding component

Component format

To set up an Azure Cosmos DB binding, create a component of type bindings.azure.cosmosdb. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: "https://******.documents.azure.com:443/"
  - name: masterKey
    value: "*****"
  - name: database
    value: "OrderDb"
  - name: collection
    value: "Orders"
  - name: partitionKey
    value: "<message>"

Spec metadata fields

Field Required Binding support Details Example
url Y Output The Cosmos DB url "https://******.documents.azure.com:443/"
masterKey Y Output The Cosmos DB account master key "master-key"
database Y Output The name of the Cosmos DB database "OrderDb"
collection Y Output The name of the container inside the database. "Orders"
partitionKey Y Output The name of the key to extract from the payload (document to be created) that is used as the partition key. This name must match the partition key specified upon creation of the Cosmos DB container. "OrderId", "message"

For more information see Azure Cosmos DB resource model.

Microsoft Entra ID authentication

The Azure Cosmos DB binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

You can read additional information for setting up Cosmos DB with Azure AD authentication in the section below.

Binding support

This component supports output binding with the following operations:

  • create

Best Practices for Production Use

Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the Cosmos DB documentation)

Therefore several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:

  • Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by scoping your components to specific applications.
  • Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
  • Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
  • Increase the initTimeout value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is 5s and should be increased. When using Kubernetes, increasing this value may also require an update to your Readiness and Liveness probes.
spec:
  type: bindings.azure.cosmosdb
  version: v1
  initTimeout: 5m
  metadata:

Data format

The output binding create operation requires the following keys to exist in the payload of every document to be created:

  • id: a unique ID for the document to be created
  • <partitionKey>: the name of the partition key specified via the spec.partitionKey in the component definition. This must also match the partition key specified upon creation of the Cosmos DB container.
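A minimal Python sketch of validating a document against these requirements before invoking create; the helper name is hypothetical:

```python
def validate_cosmos_document(doc, partition_key):
    """Check that a document satisfies the binding's 'create' requirements:
    a unique 'id' plus the partition key field configured in spec.partitionKey.
    """
    missing = [key for key in ("id", partition_key) if key not in doc]
    if missing:
        raise ValueError(f"document is missing required keys: {missing}")
    return doc
```

For example, with partitionKey set to "message" in the component definition, every payload must carry both an "id" and a "message" field.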

Setting up Cosmos DB for authenticating with Azure AD

When using the Dapr Cosmos DB binding and authenticating with Azure AD, you need to perform a few additional steps to set up your environment.

Prerequisites:

  • You need a Service Principal created as per the instructions in the authenticating to Azure page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for azureClientId in the metadata).
  • Azure CLI
  • jq
  • The scripts below are optimized for a bash or zsh shell

When using the Cosmos DB binding, you don’t need to create stored procedures as you do in the case of the Cosmos DB state store.

Granting your Azure AD application access to Cosmos DB

You can find more information on the official documentation, including instructions to assign more granular permissions.

In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you’re going to use a built-in role, “Cosmos DB Built-in Data Contributor”, which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.

# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"

az cosmosdb sql role assignment create \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --scope "/" \
  --principal-id "$PRINCIPAL_ID" \
  --role-definition-id "$ROLE_ID"

15 - Azure Event Grid binding spec

Detailed documentation on the Azure Event Grid binding component

Component format

To set up an Azure Event Grid binding, create a component of type bindings.azure.eventgrid. See this guide on how to create and apply a binding configuration.

See this for the documentation for Azure Event Grid.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.eventgrid
  version: v1
  metadata:
  # Required Output Binding Metadata
  - name: accessKey
    value: "[AccessKey]"
  - name: topicEndpoint
    value: "[TopicEndpoint]"
  # Required Input Binding Metadata
  - name: azureTenantId
    value: "[AzureTenantId]"
  - name: azureSubscriptionId
    value: "[AzureSubscriptionId]"
  - name: azureClientId
    value: "[ClientId]"
  - name: azureClientSecret
    value: "[ClientSecret]"
  - name: subscriberEndpoint
    value: "[SubscriberEndpoint]"
  - name: handshakePort
    # Make sure to pass this as a string, with quotes around the value
    value: "[HandshakePort]"
  - name: scope
    value: "[Scope]"
  # Optional Input Binding Metadata
  - name: eventSubscriptionName
    value: "[EventSubscriptionName]"
  # Optional metadata
  - name: direction
    value: "input, output"

Spec metadata fields

Field Required Binding support Details Example
accessKey Y Output The Access Key to be used for publishing an Event Grid Event to a custom topic "accessKey"
topicEndpoint Y Output The topic endpoint in which this output binding should publish events "topic-endpoint"
azureTenantId Y Input The Azure tenant ID of the Event Grid resource "tenantId"
azureSubscriptionId Y Input The Azure subscription ID of the Event Grid resource "subscriptionId"
azureClientId Y Input The client ID that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages "clientId"
azureClientSecret Y Input The client secret that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages "clientSecret"
subscriberEndpoint Y Input The HTTPS endpoint of the webhook Event Grid sends events (formatted as Cloud Events) to. If you’re not re-writing URLs on ingress, it should be in the form of "https://[YOUR HOSTNAME]/<path>". If testing on your local machine, you can use something like ngrok to create a public endpoint. "https://[YOUR HOSTNAME]/<path>"
handshakePort Y Input The container port that the input binding listens on when receiving events on the webhook "9000"
scope Y Input The identifier of the resource to which the event subscription needs to be created or updated. See the scope section for more details "/subscriptions/{subscriptionId}/"
eventSubscriptionName N Input The name of the event subscription. Event subscription names must be between 3 and 64 characters long and should use alphanumeric letters only "name"
direction N Input/Output The direction of the binding "input", "output", "input, output"

Scope

Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, a resource group, a top-level resource belonging to a resource provider namespace, or an Event Grid topic. For example:

  • /subscriptions/{subscriptionId}/ for a subscription
  • /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName} for a resource group
  • /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} for a resource
  • /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName} for an Event Grid topic

Values in braces {} should be replaced with actual values.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create: publishes a message on the Event Grid topic

Receiving events

You can use the Event Grid binding to receive events from a variety of sources and actions. Learn more about all of the available event sources and handlers that work with Event Grid.

In the following table, you can find the list of Dapr components that can raise events.

Event sources Dapr components
Azure Blob Storage Azure Blob Storage binding
Azure Blob Storage state store
Azure Cache for Redis Redis binding
Redis pub/sub
Azure Event Hubs Azure Event Hubs pub/sub
Azure Event Hubs binding
Azure IoT Hub Azure Event Hubs pub/sub
Azure Event Hubs binding
Azure Service Bus Azure Service Bus binding
Azure Service Bus pub/sub topics and queues
Azure SignalR Service SignalR binding

Microsoft Entra ID credentials

The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:

  • Creating an event subscription when Dapr is started (and updating it if the Dapr configuration changes)
  • Authenticating messages delivered by Event Grid to your application.

Requirements:

For the first purpose, you will need to create an Azure Service Principal. After creating it, take note of the Microsoft Entra ID application’s clientID (a UUID), and run the following script with the Azure CLI:

# Set the client ID of the app you created
CLIENT_ID="..."
# Scope of the resource, usually in the format:
# `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}`
SCOPE="..."

# First ensure that Azure Resource Manager provider is registered for Event Grid
az provider register --namespace "Microsoft.EventGrid"
az provider show --namespace "Microsoft.EventGrid" --query "registrationState"
# Give the SP needed permissions so that it can create event subscriptions to Event Grid
az role assignment create --assignee "$CLIENT_ID" --role "EventGrid EventSubscription Contributor" --scopes "$SCOPE"

For the second purpose, first download a script:

curl -LO "https://raw.githubusercontent.com/dapr/components-contrib/master/.github/infrastructure/conformance/azure/setup-eventgrid-sp.ps1"

Then, using PowerShell (pwsh), run:

# Set the client ID of the app you created
$clientId = "..."

# Authenticate with the Microsoft Graph
# You may need to add the -TenantId flag to the next command if needed
Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
./setup-eventgrid-sp.ps1 $clientId

Note: if your directory does not have a Service Principal for the application “Microsoft.EventGrid”, you may need to run the command Connect-MgGraph and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant’s admin to sign in and run this PowerShell command: New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7" (the UUID is a constant)

Testing locally

  • Install ngrok
  • Run locally using a custom port, for example 9000, for handshakes
# Using port 9000 as an example
ngrok http --host-header=localhost 9000
  • Configure the ngrok’s HTTPS endpoint and the custom port to input binding metadata
  • Run Dapr
# Using default ports for .NET core web api and Dapr as an example
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 -- dotnet run

Testing on Kubernetes

Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren’t accepted. In order to enable traffic from the public internet to your app’s Dapr sidecar you need an ingress controller enabled with Dapr. There’s a good article on this topic: Kubernetes NGINX ingress controller with Dapr.

To get started, first create a dapr-annotations.yaml file for Dapr annotations:

controller:
  podAnnotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nginx-ingress"
    dapr.io/app-port: "80"

Then install the NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yaml -n default
# Get the public IP for the ingress controller
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'

If deploying to Azure Kubernetes Service, you can follow the official Microsoft documentation for rest of the steps:

  • Add an A record to your DNS zone
  • Install cert-manager
  • Create a CA cluster issuer

The final step for enabling communication between Event Grid and Dapr is to define the http and custom ports for your app’s service, and an ingress, in Kubernetes. This example uses a .NET Core web api and Dapr default ports and custom port 9000 for handshakes.

# dotnetwebapi.yaml
kind: Service
apiVersion: v1
metadata:
  name: dotnetwebapi
  labels:
    app: dotnetwebapi
spec:
  selector:
    app: dotnetwebapi
  ports:
    - name: webapi
      protocol: TCP
      port: 80
      targetPort: 80
    - name: dapr-eventgrid
      protocol: TCP
      port: 9000
      targetPort: 9000
  type: ClusterIP

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eventgrid-input-rule
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - dapr.<your custom domain>
      secretName: dapr-tls
  rules:
    - host: dapr.<your custom domain>
      http:
        paths:
          - path: /api/events
            pathType: Prefix
            backend:
              service:
                name: dotnetwebapi
                port:
                  number: 9000

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetwebapi
  labels:
    app: dotnetwebapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dotnetwebapi
  template:
    metadata:
      labels:
        app: dotnetwebapi
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "dotnetwebapi"
        dapr.io/app-port: "5000"
    spec:
      containers:
      - name: webapi
        image: <your container image>
        ports:
        - containerPort: 5000
        imagePullPolicy: Always

Deploy the binding and app (including ingress) to Kubernetes

# Deploy Dapr components
kubectl apply -f eventgrid.yaml
# Deploy your app and Nginx ingress
kubectl apply -f dotnetwebapi.yaml

Note: This manifest deploys everything to Kubernetes’ default namespace.

Troubleshooting possible issues with Nginx controller

After the initial deployment, the “Daprized” Nginx controller can malfunction. To check the logs and fix the issue (if it exists), follow these steps.

$ kubectl get pods -l app=nginx-ingress

NAME                                                   READY   STATUS    RESTARTS   AGE
nginx-nginx-ingress-controller-649df94867-fp6mg        2/2     Running   0          51m
nginx-nginx-ingress-default-backend-6d96c457f6-4nbj5   1/1     Running   0          55m

$ kubectl logs nginx-nginx-ingress-controller-649df94867-fp6mg nginx-ingress-controller

# If you see 503s logged from calls to webhook endpoint '/api/events' restart the pod
# .."OPTIONS /api/events HTTP/1.1" 503..

$ kubectl delete pod nginx-nginx-ingress-controller-649df94867-fp6mg

# Check the logs again - it should start returning 200
# .."OPTIONS /api/events HTTP/1.1" 200..

16 - Azure Event Hubs binding spec

Detailed documentation on the Azure Event Hubs binding component

Component format

To set up an Azure Event Hubs binding, create a component of type bindings.azure.eventhubs. See this guide on how to create and apply a binding configuration.

See this for instructions on how to set up an Event Hub.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.eventhubs
  version: v1
  metadata:
    # Hub name ("topic")
    - name: eventHub
      value: "mytopic"
    - name: consumerGroup
      value: "myapp"
    # Either connectionString or eventHubNamespace is required
    # Use connectionString when *not* using Microsoft Entra ID
    - name: connectionString
      value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
    # Use eventHubNamespace when using Microsoft Entra ID
    - name: eventHubNamespace
      value: "namespace"
    - name: enableEntityManagement
      value: "false"
    - name: enableInOrderMessageDelivery
      value: "false"
    # The following four properties are needed only if enableEntityManagement is set to true
    - name: resourceGroupName
      value: "test-rg"
    - name: subscriptionID
      value: "value of Azure subscription ID"
    - name: partitionCount
      value: "1"
    - name: messageRetentionInDays
      value: "3"
    # Checkpoint store attributes
    - name: storageAccountName
      value: "myeventhubstorage"
    - name: storageAccountKey
      value: "112233445566778899"
    - name: storageContainerName
      value: "myeventhubstoragecontainer"
    # Alternative to passing storageAccountKey
    - name: storageConnectionString
      value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
    # Optional metadata
    - name: getAllMessageProperties
      value: "true"
    - name: direction
      value: "input, output"

Spec metadata fields

Field Required Binding support Details Example
eventHub Y* Input/Output The name of the Event Hubs hub (“topic”). Required if using Microsoft Entra ID authentication or if the connection string doesn’t contain an EntityPath value mytopic
connectionString Y* Input/Output Connection string for the Event Hub or the Event Hub namespace.
* Mutually exclusive with eventHubNamespace field.
* Required when not using Microsoft Entra ID Authentication
"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}" or "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"
eventHubNamespace Y* Input/Output The Event Hub Namespace name.
* Mutually exclusive with connectionString field.
* Required when using Microsoft Entra ID Authentication
"namespace"
enableEntityManagement N Input/Output Boolean value to allow management of the EventHub namespace and storage account. Default: false "true", "false"
enableInOrderMessageDelivery N Input/Output Boolean value to allow messages to be delivered in the order in which they were posted. This assumes partitionKey is set when publishing or posting to ensure ordering across partitions. Default: false "true", "false"
resourceGroupName N Input/Output Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled "test-rg"
subscriptionID N Input/Output Azure subscription ID value. Required when entity management is enabled "azure subscription id"
partitionCount N Input/Output Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: "1" "2"
messageRetentionInDays N Input/Output Number of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: "1" "90"
consumerGroup Y Input The name of the Event Hubs Consumer Group to listen on "group1"
storageAccountName Y Input Storage account name to use for the checkpoint store. "myeventhubstorage"
storageAccountKey Y* Input Storage account key for the checkpoint store account.
* When using Microsoft Entra ID, it’s possible to omit this if the service principal has access to the storage account too.
"112233445566778899"
storageConnectionString Y* Input Connection string for the checkpoint store, alternative to specifying storageAccountKey "DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"
storageContainerName Y Input Storage container name for the storage account name. "myeventhubstoragecontainer"
getAllMessageProperties N Input When set to true, retrieves all user/app/custom properties from the Event Hub message and forwards them in the returned event metadata. Default setting is "false". "true", "false"
direction N Input/Output The direction of the binding. "input", "output", "input, output"

Microsoft Entra ID authentication

The Azure Event Hubs pub/sub component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Binding support

This component supports output binding with the following operations:

  • create: publishes a new message to Azure Event Hubs

Input Binding to Azure IoT Hub Events

Azure IoT Hub provides an endpoint that is compatible with Event Hubs, so Dapr apps can create input bindings to read Azure IoT Hub events using the Event Hubs bindings component.

The device-to-cloud events created by Azure IoT Hub devices will contain additional IoT Hub System Properties, and the Azure Event Hubs binding for Dapr will return the following as part of the response metadata:

System Property Name Description & Routing Query Keyword
iothub-connection-auth-generation-id The connectionDeviceGenerationId of the device that sent the message. See IoT Hub device identity properties.
iothub-connection-auth-method The connectionAuthMethod used to authenticate the device that sent the message.
iothub-connection-device-id The deviceId of the device that sent the message. See IoT Hub device identity properties.
iothub-connection-module-id The moduleId of the device that sent the message. See IoT Hub device identity properties.
iothub-enqueuedtime The enqueuedTime in RFC3339 format that the device-to-cloud message was received by IoT Hub.
message-id The user-settable AMQP messageId.

For example, the headers of an HTTP Read() response would contain:

{
  'user-agent': 'fasthttp',
  'host': '127.0.0.1:3000',
  'content-type': 'application/json',
  'content-length': '120',
  'iothub-connection-device-id': 'my-test-device',
  'iothub-connection-auth-generation-id': '637618061680407492',
  'iothub-connection-auth-method': '{"scope":"module","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}',
  'iothub-connection-module-id': 'my-test-module-a',
  'iothub-enqueuedtime': '2021-07-13T22:08:09Z',
  'message-id': 'my-custom-message-id',
  'x-opt-sequence-number': '35',
  'x-opt-enqueued-time': '2021-07-13T22:08:09Z',
  'x-opt-offset': '21560',
  'traceparent': '00-4655608164bc48b985b42d39865f3834-ed6cf3697c86e7bd-01'
}
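A hedged Python sketch of separating the IoT Hub system properties (the iothub-* keys) from the remaining metadata of such a response; the helper is illustrative only:

```python
IOTHUB_PREFIX = "iothub-"


def split_iothub_metadata(headers):
    """Separate IoT Hub system properties (iothub-* headers, as listed in the
    table above) from the rest of the response metadata."""
    system, other = {}, {}
    for name, value in headers.items():
        target = system if name.lower().startswith(IOTHUB_PREFIX) else other
        target[name] = value
    return system, other
```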

17 - Azure OpenAI binding spec

Detailed documentation on the Azure OpenAI binding component

Component format

To set up an Azure OpenAI binding, create a component of type bindings.azure.openai. See this guide on how to create and apply a binding configuration. See this for the documentation for Azure OpenAI Service.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.openai
  version: v1
  metadata:
  - name: apiKey # Required
    value: "1234567890abcdef"
  - name: endpoint # Required
    value: "https://myopenai.openai.azure.com"

Spec metadata fields

Field Required Binding support Details Example
endpoint Y Output Azure OpenAI service endpoint URL. "https://myopenai.openai.azure.com"
apiKey Y* Output The access key of the Azure OpenAI service. Only required when not using Microsoft Entra ID authentication. "1234567890abcdef"
azureTenantId Y* Input The tenant ID of the Azure OpenAI resource. Only required when apiKey is not provided. "tenantId"
azureClientId Y* Input The client ID that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided. "clientId"
azureClientSecret Y* Input The client secret that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided. "clientSecret"

Microsoft Entra ID authentication

The Azure OpenAI binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Example Configuration

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.openai
  version: v1
  metadata:
  - name: endpoint
    value: "https://myopenai.openai.azure.com"
  - name: azureTenantId
    value: "***"
  - name: azureClientId
    value: "***"
  - name: azureClientSecret
    value: "***"

Binding support

This component supports output binding with the following operations:

Completion API

To call the completion API with a prompt, invoke the Azure OpenAI binding with a POST method and the following JSON body:

{
  "operation": "completion",
  "data": {
    "deploymentId": "my-model",
    "prompt": "A dog is",
    "maxTokens":5
    }
}

The data parameters are:

  • deploymentId - string that specifies the model deployment ID to use.
  • prompt - string that specifies the prompt to generate completions for.
  • maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for completion API.
  • temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for completion API.
  • topP - (optional) an alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top topP probability mass. Defaults to 1.0 for completion API.
  • n - (optional) defines the number of completions to generate. Defaults to 1 for completion API.
  • presencePenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for completion API.
  • frequencyPenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for completion API.

Read more about the importance and usage of these parameters in the Azure OpenAI API documentation.

Examples

curl -d '{ "data": {"deploymentId": "my-model", "prompt": "A dog is ", "maxTokens":15}, "operation": "completion" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

[
  {
    "finish_reason": "length",
    "index": 0,
    "text": " a pig in a dress.\n\nSun, Oct 20, 2013"
  },
  {
    "finish_reason": "length",
    "index": 1,
    "text": " the only thing on earth that loves you\n\nmore than he loves himself.\"\n\n"
  }
]
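The same completion request can be issued programmatically through the Dapr HTTP API. The following Python sketch builds the request body and posts it to the sidecar; the binding name openaibinding and the default sidecar port 3500 are assumptions for illustration.

```python
import json
import urllib.request

DAPR_PORT = 3500                 # default Dapr HTTP port (assumption)
BINDING_NAME = "openaibinding"   # hypothetical component name

def build_completion_request(deployment_id, prompt, max_tokens=16, temperature=1.0):
    """Assemble the JSON body for the binding's completion operation."""
    return {
        "operation": "completion",
        "data": {
            "deploymentId": deployment_id,
            "prompt": prompt,
            "maxTokens": max_tokens,
            "temperature": temperature,
        },
    }

def invoke_binding(body):
    """POST the body to the Dapr sidecar (requires a running sidecar)."""
    req = urllib.request.Request(
        f"http://localhost:{DAPR_PORT}/v1.0/bindings/{BINDING_NAME}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_completion_request("my-model", "A dog is", max_tokens=15)
```

Calling `invoke_binding(body)` returns the same JSON array shown in the response above.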

Chat Completion API

To perform a chat-completion operation, invoke the Azure OpenAI binding with a POST method and the following JSON body:

{
    "operation": "chat-completion",
    "data": {
        "deploymentId": "my-model",
        "messages": [
            {
                "role": "system",
                "message": "You are a bot that gives really short replies"
            },
            {
                "role": "user",
                "message": "Tell me a joke"
            }
        ],
        "n": 2,
        "maxTokens": 30,
        "temperature": 1.2
    }
}

The data parameters are:

  • deploymentId - string that specifies the model deployment ID to use.
  • messages - array of messages that will be used to generate chat completions. Each message is of the form:
    • role - string that specifies the role of the message. Can be either user, system or assistant.
    • message - string that specifies the conversation message for the role.
  • maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for the chat completion API.
  • temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for the chat completion API.
  • topP - (optional) an alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top topP probability mass. Defaults to 1.0 for the chat completion API.
  • n - (optional) defines the number of completions to generate. Defaults to 1 for the chat completion API.
  • presencePenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for the chat completion API.
  • frequencyPenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for the chat completion API.
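As a sanity check on the shape of the request, the following Python sketch assembles a chat-completion body. Note that each entry uses the key message, as the binding expects, rather than content as in the raw Azure OpenAI REST API; the helper name is hypothetical.

```python
def build_chat_request(deployment_id, messages, n=1, max_tokens=16, temperature=1.0):
    """Assemble the JSON body for the binding's chat-completion operation.

    Note: each entry uses the key "message" (as the binding expects),
    not "content" as in the raw Azure OpenAI REST API.
    """
    allowed_roles = {"user", "system", "assistant"}
    for m in messages:
        if m["role"] not in allowed_roles:
            raise ValueError(f"unsupported role: {m['role']}")
    return {
        "operation": "chat-completion",
        "data": {
            "deploymentId": deployment_id,
            "messages": messages,
            "n": n,
            "maxTokens": max_tokens,
            "temperature": temperature,
        },
    }

request = build_chat_request(
    "my-model",
    [
        {"role": "system", "message": "You are a bot that gives really short replies"},
        {"role": "user", "message": "Tell me a joke"},
    ],
    n=2,
    max_tokens=30,
    temperature=1.2,
)
```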

Example

curl -d '{
  "data": {
      "deploymentId": "my-model",
      "messages": [
          {
              "role": "system",
              "message": "You are a bot that gives really short replies"
          },
          {
              "role": "user",
              "message": "Tell me a joke"
          }
      ],
      "n": 2,
      "maxTokens": 30,
      "temperature": 1.2
  },
  "operation": "chat-completion"
}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

[
  {
    "finish_reason": "stop",
    "index": 0,
    "message": {
      "content": "Why was the math book sad? Because it had too many problems.",
      "role": "assistant"
    }
  },
  {
    "finish_reason": "stop",
    "index": 1,
    "message": {
      "content": "Why did the tomato turn red? Because it saw the salad dressing!",
      "role": "assistant"
    }
  }
]

Get Embedding API

The get-embedding operation returns a vector representation of a given input that can be easily consumed by machine learning models and other algorithms. To perform a get-embedding operation, invoke the Azure OpenAI binding with a POST method and the following JSON body:

{
    "operation": "get-embedding",
    "data": {
        "deploymentId": "my-model",
        "message": "The capital of France is Paris."
    }
}

The data parameters are:

  • deploymentId - string that specifies the model deployment ID to use.
  • message - string that specifies the text to embed.

Example

curl -d '{
  "data": {
      "deploymentId": "embeddings",
      "message": "The capital of France is Paris."
  },
  "operation": "get-embedding"
}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

[0.018574921,-0.00023652936,-0.0057790717,.... (1536 floats total for ada)]
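Embedding vectors are typically compared with cosine similarity. Below is a minimal, self-contained sketch using toy 3-dimensional vectors in place of the 1536-float vectors the binding returns.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional stand-ins for the 1536-float vectors the binding returns.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.0, 0.1]
```

Vectors pointing in the same direction score close to 1.0; unrelated vectors score near 0.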

Learn more about the Azure OpenAI output binding

Watch the following Community Call presentation to learn more about the Azure OpenAI output binding.

18 - Azure Service Bus Queues binding spec

Detailed documentation on the Azure Service Bus Queues binding component

Component format

To set up the Azure Service Bus Queues binding, create a component of type bindings.azure.servicebusqueues. See this guide on how to create and apply a binding configuration.

Connection String Authentication

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: connectionString # Required when not using Azure Authentication.
    value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
  - name: queueName
    value: "queue1"
  # - name: timeoutInSec # Optional
  #   value: "60"
  # - name: handlerTimeoutInSec # Optional
  #   value: "60"
  # - name: disableEntityManagement # Optional
  #   value: "false"
  # - name: maxDeliveryCount # Optional
  #   value: "3"
  # - name: lockDurationInSec # Optional
  #   value: "60"
  # - name: lockRenewalInSec # Optional
  #   value: "20"
  # - name: maxActiveMessages # Optional
  #   value: "10000"
  # - name: maxConcurrentHandlers # Optional
  #   value: "10"
  # - name: defaultMessageTimeToLiveInSec # Optional
  #   value: "10"
  # - name: autoDeleteOnIdleInSec # Optional
  #   value: "3600"
  # - name: minConnectionRecoveryInSec # Optional
  #   value: "2"
  # - name: maxConnectionRecoveryInSec # Optional
  #   value: "300"
  # - name: maxRetriableErrorsPerSec # Optional
  #   value: "10"
  # - name: publishMaxRetries # Optional
  #   value: "5"
  # - name: publishInitialRetryIntervalInMs # Optional
  #   value: "500"
  # - name: direction
  #   value: "input, output"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|---|---|---|---|---|
| connectionString | Y | Input/Output | The Service Bus connection string. Required unless using Microsoft Entra ID authentication. | "Endpoint=sb://************" |
| queueName | Y | Input/Output | The Service Bus queue name. Queue names are case-insensitive and will always be forced to lowercase. | "queuename" |
| timeoutInSec | N | Input/Output | Timeout for all invocations to the Azure Service Bus endpoint, in seconds. Note that this option impacts network calls and is unrelated to the TTL applied to messages. Default: "60" | "60" |
| namespaceName | N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | "namespace.servicebus.windows.net" |
| disableEntityManagement | N | Input/Output | When set to true, queues and subscriptions do not get created automatically. Default: "false" | "true", "false" |
| lockDurationInSec | N | Input/Output | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | "30" |
| autoDeleteOnIdleInSec | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: "0" (disabled) | "3600" |
| defaultMessageTimeToLiveInSec | N | Input/Output | Default message time to live, in seconds. Used during subscription creation only. | "10" |
| maxDeliveryCount | N | Input/Output | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | "10" |
| minConnectionRecoveryInSec | N | Input/Output | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: "2" | "5" |
| maxConnectionRecoveryInSec | N | Input/Output | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: "300" (5 minutes) | "600" |
| handlerTimeoutInSec | N | Input | Timeout for invoking the app's handler. Default: "0" (no timeout) | "30" |
| lockRenewalInSec | N | Input | Defines the frequency at which buffered message locks will be renewed. Default: "20" | "20" |
| maxActiveMessages | N | Input | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: "1" | "2000" |
| maxConcurrentHandlers | N | Input | Defines the maximum number of concurrent message handlers; set to 0 for unlimited. Default: "1" | "10" |
| maxRetriableErrorsPerSec | N | Input | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: "10" | "10" |
| publishMaxRetries | N | Output | The max number of retries when Azure Service Bus responds with "too busy", in order to throttle messages. Default: "5" | "5" |
| publishInitialRetryIntervalInMs | N | Output | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: "500" | "500" |
| direction | N | Input/Output | The direction of the binding | "input", "output", "input, output" |

Microsoft Entra ID authentication

The Azure Service Bus Queues binding component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Example Configuration

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: azureTenantId
    value: "***"
  - name: azureClientId
    value: "***"
  - name: azureClientSecret
    value: "***"
  - name: namespaceName
    # Required when using Azure Authentication.
    # Must be a fully-qualified domain name
    value: "servicebusnamespace.servicebus.windows.net"
  - name: queueName
    value: queue1
  - name: ttlInSeconds
    value: 60

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create: publishes a message to the specified queue

Message metadata

Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only), and others can be set by the client when publishing a message through the binding's create operation.

Sending a message with metadata

To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here.

  • metadata.MessageId
  • metadata.CorrelationId
  • metadata.SessionId
  • metadata.Label
  • metadata.ReplyTo
  • metadata.PartitionKey
  • metadata.To
  • metadata.ContentType
  • metadata.ScheduledEnqueueTimeUtc
  • metadata.ReplyToSessionId
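For HTTP invocations, these settable fields are passed as query parameters prefixed with metadata.. The following Python sketch builds such a URL; the component name myServiceBusQueue and port 3500 are placeholders.

```python
from urllib.parse import urlencode

DAPR_PORT = 3500                    # default Dapr HTTP port (assumption)
BINDING_NAME = "myServiceBusQueue"  # placeholder component name

def binding_url_with_metadata(metadata):
    """Build the binding invocation URL with metadata as query parameters."""
    query = urlencode({f"metadata.{key}": value for key, value in metadata.items()})
    return f"http://localhost:{DAPR_PORT}/v1.0/bindings/{BINDING_NAME}?{query}"

url = binding_url_with_metadata({
    "MessageId": "order-1234",
    "CorrelationId": "session-42",
})
```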

Receiving a message with metadata

When Dapr calls your application, it attaches Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata. In addition to the settable metadata listed above, you can also access the following read-only message metadata.

  • metadata.DeliveryCount
  • metadata.LockedUntilUtc
  • metadata.LockToken
  • metadata.EnqueuedTimeUtc
  • metadata.SequenceNumber

To find out more details on the purpose of any of these metadata properties refer to the official Azure Service Bus documentation.

In addition, all entries of ApplicationProperties from the original Azure Service Bus message are appended as metadata.<application property's name>.

Specifying a TTL per message

Time to live can be defined on a per-queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at the queue level.

To set time to live at message level use the metadata section in the request body during the binding invocation: the field name is ttlInSeconds.

curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        },
        "operation": "create"
      }'

Schedule a message

A message can be scheduled for delayed processing.

To schedule a message, use the metadata section in the request body during the binding invocation: the field name is ScheduledEnqueueTimeUtc.

The supported timestamp formats are RFC1123 and RFC3339.

curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ScheduledEnqueueTimeUtc": "Tue, 02 Jan 2024 15:04:05 GMT"
        },
        "operation": "create"
      }'
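Both accepted timestamp formats can be produced with the Python standard library. A short sketch that schedules a message ten minutes ahead (the offset is arbitrary, for illustration):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

# Schedule delivery ten minutes from now (arbitrary offset for illustration).
when = datetime.now(timezone.utc) + timedelta(minutes=10)

# RFC 1123, e.g. "Tue, 02 Jan 2024 15:04:05 GMT"
rfc1123 = format_datetime(when, usegmt=True)

# RFC 3339, e.g. "2024-01-02T15:04:05Z"
rfc3339 = when.strftime("%Y-%m-%dT%H:%M:%SZ")

metadata = {"ScheduledEnqueueTimeUtc": rfc1123}
```

Either string can be supplied as the ScheduledEnqueueTimeUtc metadata value in the binding invocation.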

19 - Azure SignalR binding spec

Detailed documentation on the Azure SignalR binding component

Component format

To set up the Azure SignalR binding, create a component of type bindings.azure.signalr. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.signalr
  version: v1
  metadata:
  - name: connectionString
    value: "Endpoint=https://<your-azure-signalr>.service.signalr.net;AccessKey=<your-access-key>;Version=1.0;"
  - name: hub  # Optional
    value: "<hub name>"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|---|---|---|---|---|
| connectionString | Y | Output | The Azure SignalR connection string | "Endpoint=https://<your-azure-signalr>.service.signalr.net;AccessKey=<your-access-key>;Version=1.0;" |
| hub | N | Output | Defines the hub to which the message will be sent. The hub can be dynamically defined as a metadata value when publishing to an output binding (key is "hub") | "myhub" |
| endpoint | N | Output | Endpoint of Azure SignalR; required if not included in the connectionString or if using Microsoft Entra ID | "https://<your-azure-signalr>.service.signalr.net" |
| accessKey | N | Output | Access key | "your-access-key" |

Microsoft Entra ID authentication

The Azure SignalR binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.

You have two options to authenticate this component with Microsoft Entra ID:

  • Pass individual metadata keys:
    • endpoint for the endpoint
    • If needed: azureClientId, azureTenantId and azureClientSecret
  • Pass a connection string with AuthType=aad specified:
    • System-assigned managed identity: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;Version=1.0;
    • User-assigned managed identity: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;ClientId=<clientid>;Version=1.0;
    • Microsoft Entra ID application: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;ClientId=<clientid>;ClientSecret=<clientsecret>;TenantId=<tenantid>;Version=1.0;
      Note that you cannot use a connection string if your application’s ClientSecret contains a ; character.

Binding support

This component supports output binding with the following operations:

  • create

Additional information

By default the Azure SignalR output binding will broadcast messages to all connected users. To narrow the audience there are two options, both configurable in the Metadata property of the message:

  • group: Sends the message to a specific Azure SignalR group
  • user: Sends the message to a specific Azure SignalR user

Applications publishing to an Azure SignalR output binding should send a message with the following contract:

{
    "data": {
        "Target": "<enter message name>",
        "Arguments": [
            {
                "sender": "dapr",
                "text": "Message from dapr output binding"
            }
        ]
    },
    "metadata": {
        "group": "chat123"
    },
    "operation": "create"
}
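The contract above can be assembled with a small helper, optionally narrowing the audience via the group or user metadata keys. The helper and the target name are illustrative; a minimal Python sketch:

```python
def build_signalr_message(target, arguments, group=None, user=None):
    """Build an Azure SignalR output binding payload.

    Without group/user metadata the message is broadcast to all
    connected users.
    """
    body = {
        "operation": "create",
        "data": {"Target": target, "Arguments": arguments},
    }
    metadata = {}
    if group is not None:
        metadata["group"] = group
    if user is not None:
        metadata["user"] = user
    if metadata:
        body["metadata"] = metadata
    return body

msg = build_signalr_message(
    "chatMessage",  # hypothetical message name
    [{"sender": "dapr", "text": "Message from dapr output binding"}],
    group="chat123",
)
```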

For more information on integrating Azure SignalR into a solution, check the documentation.

20 - Azure Storage Queues binding spec

Detailed documentation on the Azure Storage Queues binding component

Component format

To set up the Azure Storage Queues binding, create a component of type bindings.azure.storagequeues. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.storagequeues
  version: v1
  metadata:
  - name: accountName
    value: "account1"
  - name: accountKey
    value: "***********"
  - name: queueName
    value: "myqueue"
# - name: pollingInterval
#   value: "30s"
# - name: ttlInSeconds
#   value: "60"
# - name: decodeBase64
#   value: "false"
# - name: encodeBase64
#   value: "false"
# - name: endpoint
#   value: "http://127.0.0.1:10001"
# - name: visibilityTimeout
#   value: "30s"
# - name: initialVisibilityDelay
#   value: "30s"
# - name: direction 
#   value: "input, output"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|---|---|---|---|---|
| accountName | Y | Input/Output | The name of the Azure Storage account | "account1" |
| accountKey | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication. | "access-key" |
| queueName | Y | Input/Output | The name of the Azure Storage queue | "myqueue" |
| pollingInterval | N | Input | Set the interval to poll Azure Storage Queues for new messages, as a Go duration value. Default: "10s" | "30s" |
| ttlInSeconds | N | Output | Parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See also the section on specifying a TTL per message below. | "60" |
| decodeBase64 | N | Input | Configuration to decode base64 content received from the Storage Queue into a string. Defaults to false | true, false |
| encodeBase64 | N | Output | If enabled, base64-encodes the data payload before uploading to Azure Storage Queues. Default false. | true, false |
| endpoint | N | Input/Output | Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port. | "http://127.0.0.1:10001" or "https://accountName.queue.example.com" |
| initialVisibilityDelay | N | Input | Sets a delay before a message becomes visible in the queue after being added. It can also be specified per message by setting the initialVisibilityDelay property in the invocation request's metadata. Defaults to 0 seconds. | "30s" |
| visibilityTimeout | N | Input | Allows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds. | "100s" |
| direction | N | Input/Output | Direction of the binding. | "input", "output", "input, output" |

Microsoft Entra ID authentication

The Azure Storage Queue binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

Specifying a TTL per message

Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.

To set time to live at message level use the metadata section in the request body during the binding invocation.

The field name is ttlInSeconds.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        },
        "operation": "create"
      }'

Specifying an initial visibility delay per message

An initial visibility delay can be defined on queue level or at the message level. The value defined at message level overwrites any value set at a queue level.

To set an initial visibility delay value at the message level, use the metadata section in the request body during the binding invocation.

The field name is initialVisibilityDelay.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "initialVisibilityDelay": "30"
        },
        "operation": "create"
      }'

21 - Cloudflare Queues bindings spec

Detailed documentation on the Cloudflare Queues component

Component format

This output binding for Dapr allows interacting with Cloudflare Queues to publish new messages. It is currently not possible to consume messages from a Queue using Dapr.

To set up a Cloudflare Queues binding, create a component of type bindings.cloudflare.queues. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.cloudflare.queues
  version: v1
  # Increase the initTimeout if Dapr is managing the Worker for you
  initTimeout: "120s"
  metadata:
    # Name of the existing Cloudflare Queue (required)
    - name: queueName
      value: ""
    # Name of the Worker (required)
    - name: workerName
      value: ""
    # PEM-encoded private Ed25519 key (required)
    - name: key
      value: |
        -----BEGIN PRIVATE KEY-----
        MC4CAQ...
        -----END PRIVATE KEY-----
    # Cloudflare account ID (required to have Dapr manage the Worker)
    - name: cfAccountID
      value: ""
    # API token for Cloudflare (required to have Dapr manage the Worker)
    - name: cfAPIToken
      value: ""
    # URL of the Worker (required if the Worker has been pre-created outside of Dapr)
    - name: workerUrl
      value: ""

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|---|---|---|---|---|
| queueName | Y | Output | Name of the existing Cloudflare Queue | "mydaprqueue" |
| key | Y | Output | Ed25519 private key, PEM-encoded | See example above |
| cfAccountID | Y/N | Output | Cloudflare account ID. Required to have Dapr manage the Worker. | "456789abcdef8b5588f3d134f74acdef" |
| cfAPIToken | Y/N | Output | API token for Cloudflare. Required to have Dapr manage the Worker. | "secret-key" |
| workerUrl | Y/N | Output | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr. | "https://mydaprqueue.mydomain.workers.dev" |

When you configure Dapr to create your Worker for you, you may need to set a longer value for the initTimeout property of the component, to allow enough time for the Worker script to be deployed. For example: initTimeout: "120s"

Binding support

This component supports output binding with the following operations:

  • publish (alias: create): Publish a message to the Queue.
    The data passed to the binding is used as-is for the body of the message published to the Queue.
    This operation does not accept any metadata property.
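Since the data is used as-is for the message body and no metadata is accepted, the invoke body is minimal. A Python sketch with a hypothetical helper:

```python
def build_publish_request(payload):
    """Build the invoke body for the Cloudflare Queues binding.

    The payload is used as-is for the message body; the publish
    operation accepts no metadata properties.
    """
    return {"operation": "publish", "data": payload}

body = build_publish_request({"orderId": 123, "status": "created"})
```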

Create a Cloudflare Queue

To use this component, you must have a Cloudflare Queue created in your Cloudflare account.

You can create a new Queue in one of two ways:

  • Using the Cloudflare dashboard

  • Using the Wrangler CLI:

    # Authenticate if needed with `npx wrangler login` first
    npx wrangler queues create <NAME>
    # For example: `npx wrangler queues create myqueue`
    

Configuring the Worker

Because Cloudflare Queues can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Queue.

Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on workerd.

If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:

  • workerName: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the “workers.dev” domain configured for your Cloudflare account is mydomain.workers.dev and you set workerName to mydaprqueue, the Worker that Dapr deploys will be available at https://mydaprqueue.mydomain.workers.dev.
  • cfAccountID: ID of your Cloudflare account. You can find this in your browser’s URL bar after logging into the Cloudflare dashboard, with the ID being the hex string right after dash.cloudflare.com. For example, if the URL is https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef, the value for cfAccountID is 456789abcdef8b5588f3d134f74acdef.
  • cfAPIToken: API token with permission to create and edit Workers. You can create it from the “API Tokens” page in the “My Profile” section in the Cloudflare dashboard:
    1. Click on “Create token”.
    2. Select the “Edit Cloudflare Workers” template.
    3. Follow the on-screen instructions to generate a new API token.

When Dapr is configured to manage the Worker for you, it checks at startup that the Worker exists and is up to date. If the Worker doesn't exist, or if it's using an outdated version, Dapr creates or upgrades it automatically.

If you’d rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.

To manually provision a Worker script, you will need to have Node.js installed on your local machine.

  1. Create a new folder where you’ll place the source code of the Worker, for example: daprworker.
  2. If you haven’t already, authenticate with Wrangler (the Cloudflare Workers CLI) using: npx wrangler login.
  3. Inside the newly-created folder, create a new wrangler.toml file with the contents below, filling in the missing information as appropriate:
# Name of your Worker, for example "mydaprqueue"
name = ""

# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"

[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprqueue".
TOKEN_AUDIENCE = ""

# Set the next two values to the name of your Queue, for example "myqueue".
# Note that they will both be set to the same value.
[[queues.producers]]
queue = ""
binding = ""

Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the public part of the key when deploying a Worker!

  4. Copy the (pre-compiled and minified) code of the Worker into the worker.js file. You can do that with this command:
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-1.15"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
  5. Deploy the Worker using Wrangler:
npx wrangler publish

Once your Worker has been deployed, you will need to initialize the component with these two metadata options:

  • workerName: Name of the Worker script. This is the value you set in the name property in the wrangler.toml file.
  • workerUrl: URL of the deployed Worker. The npx wrangler command will show the full URL to you, for example https://mydaprqueue.mydomain.workers.dev.

Generate an Ed25519 key pair

All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Cloudflare Queue). These include industry-standard measures such as:

  • All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
  • All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
  • The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).

To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.

Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you’re using an older version of OpenSSL.

Note for Mac users: on macOS, the “openssl” binary that ships with Apple is actually based on LibreSSL, which at the time of writing doesn’t support Ed25519 keys. If you’re using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew with brew install openssl@3 and then replace openssl in the commands below with $(brew --prefix)/opt/openssl@3/bin/openssl.

You can generate a new Ed25519 key pair with OpenSSL using:

openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

On macOS, using openssl@3 from Homebrew:

$(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
$(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem

If you don’t have the step CLI already, install it following the official instructions.

Next, you can generate a new Ed25519 key pair with the step CLI using:

step crypto keypair \
  public.pem private.pem \
  --kty OKP --curve Ed25519 \
  --insecure --no-password

Regardless of how you generated your key pair, with the instructions above you’ll have two files:

  • private.pem contains the private part of the key; use the contents of this file for the key property of the component’s metadata.
  • public.pem contains the public part of the key, which you’ll need only if you’re deploying a Worker manually (as per the instructions in the previous section).
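If you deploy the Worker manually, wrangler.toml expects the public key flattened onto a single line with \n escapes, as in the PUBLIC_KEY example above. A minimal Python sketch of that transformation (an illustrative helper, not part of Dapr):

```python
# Flatten a PEM-encoded public key into the single-line, \n-escaped
# form expected by the PUBLIC_KEY setting in wrangler.toml.
def flatten_pem(pem: str) -> str:
    return pem.strip().replace("\n", "\\n")

pem = """-----BEGIN PUBLIC KEY-----
MCowB...=
-----END PUBLIC KEY-----"""
print(flatten_pem(pem))
# -----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----
```

Paste the printed value between the quotes of the PUBLIC_KEY setting.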

22 - commercetools GraphQL binding spec

Detailed documentation on the commercetools GraphQL binding component

Component format

To set up the commercetools GraphQL binding, create a component of type bindings.commercetools. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.commercetools
  version: v1
  metadata:
  - name: region # required.
    value: "region"
  - name: provider # required.
    value: "gcp"
  - name: projectKey # required.
    value: "<project-key>"
  - name: clientID # required.
    value: "*****************"
  - name: clientSecret # required.
    value: "*****************"
  - name: scopes # required.
    value: "<project-scopes>"

Spec metadata fields

Field Required Binding support Details Example
region Y Output The region of the commercetools project "europe-west1"
provider Y Output The cloud provider, either gcp or aws "gcp", "aws"
projectKey Y Output The commercetools project key
clientID Y Output The commercetools client ID for the project
clientSecret Y Output The commercetools client secret for the project
scopes Y Output The commercetools scopes for the project "manage_project:project-key"

For more information see commercetools - Creating an API Client and commercetools - Regions.

Binding support

This component supports output binding with the following operations:

  • create

23 - Cron binding spec

Detailed documentation on the cron binding component

Component format

To set up the cron binding, create a component of type bindings.cron. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.cron
  version: v1
  metadata:
  - name: schedule
    value: "@every 15m" # valid cron schedule
  - name: direction
    value: "input"

Spec metadata fields

Field Required Binding support Details Example
schedule Y Input The valid cron schedule to use. See this for more details "@every 15m"
direction N Input The direction of the binding "input"

Schedule Format

The Dapr cron binding supports the following formats:

Character Descriptor Acceptable values
1 Second 0 to 59, or *
2 Minute 0 to 59, or *
3 Hour 0 to 23, or * (UTC)
4 Day of the month 1 to 31, or *
5 Month 1 to 12, or *
6 Day of the week 0 to 7 (where 0 and 7 represent Sunday), or *

For example:

  • 30 * * * * * - every 30 seconds
  • 0 */15 * * * * - every 15 minutes
  • 0 30 3-6,20-23 * * * - every hour on the half hour in the range 3-6am, 8-11pm
  • CRON_TZ=America/New_York 0 30 04 * * * - every day at 4:30am New York time

You can learn more about cron and the supported formats here

For ease of use, the Dapr cron binding also supports a few shortcuts:

  • @every 15s where s is seconds, m minutes, and h hours
  • @daily or @hourly which runs at that period from the time the binding is initialized
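The six-field format above can be made explicit programmatically. A small illustrative Python sketch (not part of Dapr) that labels each field of a schedule:

```python
# Label the six fields of a Dapr cron expression. "0 30 3-6,20-23 * * *"
# is one of the example schedules above (on the half hour, 3-6am and 8-11pm).
FIELDS = ["second", "minute", "hour", "day of month", "month", "day of week"]

def describe(schedule: str) -> dict:
    parts = schedule.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

print(describe("0 30 3-6,20-23 * * *")["hour"])  # 3-6,20-23
```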

Listen to the cron binding

After setting up the cron binding, listen on an endpoint that matches the name of your component. Assume the [NAME] is scheduled. The Dapr sidecar invokes the endpoint with an HTTP POST request. The below example shows how a simple Node.js Express application can receive calls on the /scheduled endpoint and write a message to the console.

app.post('/scheduled', async function(req, res){
    console.log("scheduled endpoint called", req.body)
    res.status(200).send()
});

When running this code, note that the /scheduled endpoint is called every fifteen minutes by the Dapr sidecar.

Binding support

This component supports the input binding interface.

24 - GCP Pub/Sub binding spec

Detailed documentation on the GCP Pub/Sub binding component

Component format

To set up the GCP Pub/Sub binding, create a component of type bindings.gcp.pubsub. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.gcp.pubsub
  version: v1
  metadata:
  - name: topic
    value: "topic1"
  - name: subscription
    value: "subscription1"
  - name: type
    value: "service_account"
  - name: project_id
    value: "project_111"
  - name: private_key_id
    value: "*************"
  - name: client_email
    value: "name@domain.com"
  - name: client_id
    value: "1111111111111111"
  - name: auth_uri
    value: "https://accounts.google.com/o/oauth2/auth"
  - name: token_uri
    value: "https://oauth2.googleapis.com/token"
  - name: auth_provider_x509_cert_url
    value: "https://www.googleapis.com/oauth2/v1/certs"
  - name: client_x509_cert_url
    value: "https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com"
  - name: private_key
    value: "PRIVATE KEY"
  - name: direction
    value: "input, output"

Spec metadata fields

Field Required Binding support Details Example
topic Y Output GCP Pub/Sub topic name "topic1"
subscription N Input GCP Pub/Sub subscription name "name1"
type Y Output GCP credentials type service_account
project_id Y Output GCP project id projectId
private_key_id N Output GCP private key id "privateKeyId"
private_key Y Output GCP credentials private key. Replace with x509 cert 12345-12345
client_email Y Output GCP client email "client@email.com"
client_id N Output GCP client id 0123456789-0123456789
auth_uri N Output Google account OAuth endpoint https://accounts.google.com/o/oauth2/auth
token_uri N Output Google account token uri https://oauth2.googleapis.com/token
auth_provider_x509_cert_url N Output GCP credentials cert url https://www.googleapis.com/oauth2/v1/certs
client_x509_cert_url N Output GCP credentials project x509 cert url https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com
direction N Input/Output The direction of the binding. "input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create
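This section does not include an invocation example, so here is a hedged sketch of the JSON request body for the create (publish) operation; the payload shown is hypothetical, and the target topic comes from the component metadata rather than the request:

```python
import json

# Build the request body for the create (publish) operation.
body = json.dumps({
    "operation": "create",
    "data": {"orderId": "1234"},  # hypothetical message payload
})
print(body)
```

You would POST this body to http://localhost:&lt;dapr-port&gt;/v1.0/bindings/&lt;binding-name&gt;, as in the other binding examples in this document.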

25 - GCP Storage Bucket binding spec

Detailed documentation on the GCP Storage Bucket binding component

Component format

To set up the GCP Storage Bucket binding, create a component of type bindings.gcp.bucket. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.gcp.bucket
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: type
    value: "service_account"
  - name: project_id
    value: "project_111"
  - name: private_key_id
    value: "*************"
  - name: client_email
    value: "name@domain.com"
  - name: client_id
    value: "1111111111111111"
  - name: auth_uri
    value: "https://accounts.google.com/o/oauth2/auth"
  - name: token_uri
    value: "https://oauth2.googleapis.com/token"
  - name: auth_provider_x509_cert_url
    value: "https://www.googleapis.com/oauth2/v1/certs"
  - name: client_x509_cert_url
    value: "https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com"
  - name: private_key
    value: "PRIVATE KEY"
  - name: decodeBase64
    value: "<bool>"
  - name: encodeBase64
    value: "<bool>"

Spec metadata fields

Field Required Binding support Details Example
bucket Y Output The bucket name "mybucket"
project_id Y Output GCP project ID projectId
type N Output The GCP credentials type "service_account"
private_key_id N Output If using explicit credentials, this field should contain the private_key_id field from the service account json document "privateKeyId"
private_key N Output If using explicit credentials, this field should contain the private_key field from the service account json. Replace with x509 cert 12345-12345
client_email N Output If using explicit credentials, this field should contain the client_email field from the service account json "client@email.com"
client_id N Output If using explicit credentials, this field should contain the client_id field from the service account json 0123456789-0123456789
auth_uri N Output If using explicit credentials, this field should contain the auth_uri field from the service account json https://accounts.google.com/o/oauth2/auth
token_uri N Output If using explicit credentials, this field should contain the token_uri field from the service account json https://oauth2.googleapis.com/token
auth_provider_x509_cert_url N Output If using explicit credentials, this field should contain the auth_provider_x509_cert_url field from the service account json https://www.googleapis.com/oauth2/v1/certs
client_x509_cert_url N Output If using explicit credentials, this field should contain the client_x509_cert_url field from the service account json https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com
decodeBase64 N Output Configuration to decode base64 file content before saving to bucket storage (useful when saving a file with binary content). true is the only allowed positive value; other variations such as "True" or "1" are not accepted. Defaults to false true, false
encodeBase64 N Output Configuration to encode base64 file content before returning the content (useful when opening a file with binary content). true is the only allowed positive value; other variations such as "True" or "1" are not accepted. Defaults to false true, false

GCP Credentials

Since the GCP Storage Bucket component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained further in the Authenticate to GCP Cloud services using client libraries guide. Also, see how to Set up Application Default Credentials.

Binding support

This component supports output binding with the following operations:

Create file

To perform a create operation, invoke the GCP Storage Bucket binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the object name. See the metadata parameters below to set the name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

The metadata parameters are:

  • key - (optional) the name of the object
  • decodeBase64 - (optional) configuration to decode base64 file content before saving to storage

Examples

Save text to a random generated UUID file

On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Upload a file

To upload a file, pass the file contents as the data payload; for binary content, you may want to encode it as Base64, for example.

Then you can upload it as you would normally:

curl -d "{ \"operation\": \"create\", \"data\": \"(YOUR_FILE_CONTENTS)\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "$(cat my-test-file.jpg)", "metadata": { "key": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
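The decodeBase64 metadata option described above pairs naturally with this: encode the binary content, and have the binding decode it back before writing the object. An illustrative Python sketch (the key name is hypothetical):

```python
import base64
import json

def create_request(content: bytes, key: str) -> str:
    # Base64-encode binary content and set decodeBase64 so the binding
    # decodes it back to the original bytes before writing the object.
    return json.dumps({
        "operation": "create",
        "data": base64.b64encode(content).decode("ascii"),
        "metadata": {"key": key, "decodeBase64": "true"},
    })

print(create_request(b"\x89PNG...", "my-test-file.png"))
```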

Response

The response body will contain the following JSON:

{
    "objectURL":"https://storage.googleapis.com/<your bucket>/<key>"
}

Get object

To perform a get file operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object
  • encodeBase64 - (optional) configuration to encode base64 file content before returning the content.

Example

curl -d "{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the object.

Bulk get objects

To perform a bulk get operation that retrieves all bucket files at once, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "bulkGet"
}

The metadata parameters are:

  • encodeBase64 - (optional) configuration to encode base64 file content before returning the content for all files

Example

curl -d "{ \"operation\": \"bulkGet\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "bulkGet" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains an array of objects, where each object represents a file in the bucket with the following structure:

[
  {
    "name": "file1.txt",
    "data": "content of file1",
    "attrs": {
      "bucket": "mybucket",
      "name": "file1.txt",
      "size": 1234,
      ...
    }
  },
  {
    "name": "file2.txt",
    "data": "content of file2",
    "attrs": {
      "bucket": "mybucket",
      "name": "file2.txt",
      "size": 5678,
      ...
    }
  }
]

Each object in the array contains:

  • name: The name of the file
  • data: The content of the file
  • attrs: Object attributes from GCP Storage including metadata like creation time, size, content type, etc.
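As a quick illustration of consuming that array, the sketch below walks a bulkGet response with the shape shown above (the file names, contents, and sizes are the sample values, not real output):

```python
import json

# Walk a bulkGet response: "data" holds each file's content and
# "attrs" the object attributes from GCP Storage.
raw = """[
  {"name": "file1.txt", "data": "content of file1", "attrs": {"size": 1234}},
  {"name": "file2.txt", "data": "content of file2", "attrs": {"size": 5678}}
]"""
for obj in json.loads(raw):
    print(f'{obj["name"]}: {obj["attrs"]["size"]} bytes')
```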

Delete object

To perform a delete object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Examples

Delete object
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body will be returned if successful.

List objects

To perform a list object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 10,
    "prefix": "file",
    "delimiter": "i0FvxAn2EOEL6"
  }
}

The data parameters are:

  • maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
  • prefix - (optional) filters the results to objects whose names start with prefix.
  • delimiter - (optional) restricts the results to only the objects in the given “directory”. Without the delimiter, the entire tree under the prefix is returned.
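The delimiter semantics mirror how cloud object stores simulate directories in a flat key space. An illustrative sketch (not part of Dapr) of how prefix and delimiter narrow a listing:

```python
# A delimiter hides keys that live deeper in the "directory" tree;
# a prefix keeps only keys that start with it.
def list_keys(keys, prefix="", delimiter=""):
    results = []
    for key in keys:
        if not key.startswith(prefix):
            continue
        if delimiter and delimiter in key[len(prefix):]:
            continue  # belongs to a nested "directory"
        results.append(key)
    return results

keys = ["file1.txt", "file2.txt", "backup/file3.txt"]
print(list_keys(keys, prefix="file"))  # ['file1.txt', 'file2.txt']
print(list_keys(keys, delimiter="/"))  # ['file1.txt', 'file2.txt']
```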

Response

The response body contains the list of found objects.

The list of objects will be returned as JSON array in the following form:

[
	{
		"Bucket": "<your bucket>",
		"Name": "02WGzEdsUWNlQ",
		"ContentType": "image/png",
		"ContentLanguage": "",
		"CacheControl": "",
		"EventBasedHold": false,
		"TemporaryHold": false,
		"RetentionExpirationTime": "0001-01-01T00:00:00Z",
		"ACL": null,
		"PredefinedACL": "",
		"Owner": "",
		"Size": 5187,
		"ContentEncoding": "",
		"ContentDisposition": "",
		"MD5": "aQdLBCYV0BxA51jUaxc3pQ==",
		"CRC32C": 1058633505,
		"MediaLink": "https://storage.googleapis.com/download/storage/v1/b/<your bucket>/o/02WGzEdsUWNlQ?generation=1631553155678071&alt=media",
		"Metadata": null,
		"Generation": 1631553155678071,
		"Metageneration": 1,
		"StorageClass": "STANDARD",
		"Created": "2021-09-13T17:12:35.679Z",
		"Deleted": "0001-01-01T00:00:00Z",
		"Updated": "2021-09-13T17:12:35.679Z",
		"CustomerKeySHA256": "",
		"KMSKeyName": "",
		"Prefix": "",
		"Etag": "CPf+mpK5/PICEAE="
	}
]

Copy objects

To perform a copy object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "copy",
  "metadata": {
    "destinationBucket": "destination-bucket-name"
  }
}

The metadata parameters are:

  • destinationBucket - the name of the destination bucket (required)

Move objects

To perform a move object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "move",
  "metadata": {
    "destinationBucket": "destination-bucket-name"
  }
}

The metadata parameters are:

  • destinationBucket - the name of the destination bucket (required)

Rename objects

To perform a rename object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "rename",
  "metadata": {
    "newName": "object-new-name"
  }
}

The metadata parameters are:

  • newName - the new name of the object (required)

26 - GraphQL binding spec

Detailed documentation on the GraphQL binding component

Component format

To set up the GraphQL binding, create a component of type bindings.graphql. See this guide on how to create and apply a binding configuration. To separate normal config settings (e.g. endpoint) from headers, “header:” is used as a prefix on the header names.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: example.bindings.graphql
spec:
  type: bindings.graphql
  version: v1
  metadata:
    - name: endpoint
      value: "http://localhost:8080/v1/graphql"
    - name: header:x-hasura-access-key
      value: "adminkey"
    - name: header:Cache-Control
      value: "no-cache"

Spec metadata fields

Field Required Binding support Details Example
endpoint Y Output GraphQL endpoint string See here for more details "http://localhost:4000/graphql/graphql"
header:[HEADERKEY] N Output GraphQL header. Specify the header key in the name, and the header value in the value. "no-cache" (see above)
variable:[VARIABLEKEY] N Output GraphQL query variable. Specify the variable name in the name, and the variable value in the value. "123" (see below)

Endpoint and Header format

The GraphQL binding uses GraphQL client internally.

Binding support

This component supports output binding with the following operations:

  • query
  • mutation

query

The query operation is used for query statements, which return the metadata along with data in the form of an array of row values.

Request

in := &dapr.InvokeBindingRequest{
    Name:      "example.bindings.graphql",
    Operation: "query",
    Metadata:  map[string]string{"query": `query { users { name } }`},
}

To use a query that requires query variables, add a key-value pair to the metadata map for each variable, where the key is the variable name prefixed with variable:

in := &dapr.InvokeBindingRequest{
    Name:      "example.bindings.graphql",
    Operation: "query",
    Metadata: map[string]string{
        "query":            `query HeroNameAndFriends($episode: string!) { hero(episode: $episode) { name } }`,
        "variable:episode": "JEDI",
    },
}

27 - HTTP binding spec

Detailed documentation on the HTTP binding component

Alternative

The service invocation API allows invoking non-Dapr HTTP endpoints and is the recommended approach. Read “How-To: Invoke Non-Dapr Endpoints using HTTP” for more information.

Setup Dapr component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.http
  version: v1
  metadata:
    - name: url
      value: "http://something.com"
    #- name: maxResponseBodySize
    #  value: "100Mi" # OPTIONAL maximum amount of data to read from a response
    #- name: MTLSRootCA
    #  value: "/Users/somepath/root.pem" # OPTIONAL path to root CA or PEM-encoded string
    #- name: MTLSClientCert
    #  value: "/Users/somepath/client.pem" # OPTIONAL path to client cert or PEM-encoded string
    #- name: MTLSClientKey
    #  value: "/Users/somepath/client.key" # OPTIONAL path to client key or PEM-encoded string
    #- name: MTLSRenegotiation
    #  value: "RenegotiateOnceAsClient" # OPTIONAL one of: RenegotiateNever, RenegotiateOnceAsClient, RenegotiateFreelyAsClient
    #- name: securityToken # OPTIONAL <token to include as a header on HTTP requests>
    #  secretKeyRef:
    #    name: mysecret
    #    key: "mytoken"
    #- name: securityTokenHeader
    #  value: "Authorization: Bearer" # OPTIONAL <header name for the security token>
    #- name: errorIfNot2XX
    #  value: "false" # OPTIONAL

Spec metadata fields

Field Required Binding support Details Example
url Y Output The base URL of the HTTP endpoint to invoke http://host:port/path, http://myservice:8000/customers
maxResponseBodySize N Output Maximum length of the response to read. A whole number is interpreted as bytes; units such as Ki, Mi, Gi (SI) or k, M, G (decimal) can be added. Defaults to 100Mi "1Gi", "100Mi", "1000000" (bytes)
MTLSRootCA N Output Path to root CA certificate or PEM-encoded string
MTLSClientCert N Output Path to client certificate or PEM-encoded string
MTLSClientKey N Output Path client private key or PEM-encoded string
MTLSRenegotiation N Output Type of mTLS renegotiation to be used RenegotiateOnceAsClient
securityToken N Output The value of a token to be added to an HTTP request as a header. Used together with securityTokenHeader
securityTokenHeader N Output The name of the header for securityToken on a HTTP request
errorIfNot2XX N Output If a binding error should be thrown when the response is not in the 2xx range. Defaults to true

The values for MTLSRootCA, MTLSClientCert and MTLSClientKey can be provided in three ways:

  • Secret store reference:

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: <NAME>
    spec:
      type: bindings.http
      version: v1
      metadata:
      - name: url
        value: http://something.com
      - name: MTLSRootCA
        secretKeyRef:
          name: mysecret
          key: myrootca
    auth:
      secretStore: <NAME_OF_SECRET_STORE_COMPONENT>
    
  • Path to the file: the absolute path to the file can be provided as a value for the field.

  • PEM encoded string: the PEM-encoded string can also be provided as a value for the field.

Binding support

This component supports output binding with the following HTTP methods/verbs:

  • create : For backward compatibility and treated like a post
  • get : Read data/records
  • head : Identical to get except that the server does not return a response body
  • post : Typically used to create records or send commands
  • put : Update data/records
  • patch : Sometimes used to update a subset of fields of a record
  • delete : Delete a data/record
  • options : Requests for information about the communication options available (not commonly used)
  • trace : Used to invoke a remote, application-layer loop-back of the request message (not commonly used)

Request

Operation metadata fields

All of the operations above support the following metadata fields

Field Required Details Example
path N The path to append to the base URL. Used for accessing specific URIs. "/1234", "/search?lastName=Jones"
Field with a capitalized first letter N Any fields that have a capital first letter are sent as request headers "Content-Type", "Accept"
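The capitalization rule in the table above can be sketched as a small routine; this is an illustrative model of the convention, not Dapr's implementation:

```python
# Split binding metadata as the table above describes: fields whose first
# letter is capitalized become HTTP request headers; the rest (such as
# "path") configure the request itself.
def split_metadata(metadata: dict):
    headers = {k: v for k, v in metadata.items() if k[:1].isupper()}
    options = {k: v for k, v in metadata.items() if not k[:1].isupper()}
    return headers, options

headers, options = split_metadata({
    "path": "/things/1234",
    "Content-Type": "application/json",
})
print(headers)  # {'Content-Type': 'application/json'}
```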

Retrieving data

To retrieve data from the HTTP endpoint, invoke the HTTP binding with a GET method and the following JSON body:

{
  "operation": "get"
}

Optionally, a path can be specified to interact with resource URIs:

{
  "operation": "get",
  "metadata": {
    "path": "/things/1234"
  }
}

Response

The response body contains the data returned by the HTTP endpoint. The data field contains the HTTP response body as a byte slice (Base64 encoded via curl). The metadata field contains:

Field Required Details Example
statusCode Y The HTTP status code 200, 404, 503
status Y The status description "200 OK", "201 Created"
Field with a capitalized first letter N Any fields that have a capital first letter are returned as response headers "Content-Type"

Example

Requesting the base URL

curl -d "{ \"operation\": \"get\" }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Requesting a specific path

curl -d "{ \"operation\": \"get\", \"metadata\": { \"path\": \"/things/1234\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "path": "/things/1234" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Sending and updating data

To send data to the HTTP endpoint, invoke the HTTP binding with a POST, PUT, or PATCH method and the following JSON body:

{
  "operation": "post",
  "data": "content (default is JSON)",
  "metadata": {
    "path": "/things",
    "Content-Type": "application/json; charset=utf-8"
  }
}

Example

Posting a new record

curl -d "{ \"operation\": \"post\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"path\": \"/things\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "post", "data": "YOUR_BASE_64_CONTENT", "metadata": { "path": "/things" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Using HTTPS

The HTTP binding can also be used with HTTPS endpoints by configuring the Dapr sidecar to trust the server’s SSL certificate.

  1. Update the binding URL to use https instead of http.
  2. If you need to add a custom TLS certificate, refer to How-To: Install certificates in the Dapr sidecar to install the TLS certificates in the sidecar.

Example

Update the binding component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.http
  version: v1
  metadata:
  - name: url
    value: https://my-secured-website.com # Use HTTPS

Install the TLS certificate in the sidecar

When the sidecar is not running inside a container, the TLS certificate can be directly installed on the host operating system.

Below is an example when the sidecar is running as a container. The SSL certificate is located on the host computer at /tmp/ssl/cert.pem.

version: '3'
services:
  my-app:
    # ...
  dapr-sidecar:
    image: "daprio/daprd:1.8.0"
    command: [
      "./daprd",
      "-app-id", "myapp",
      "-app-port", "3000",
    ]
    volumes:
        - "./components/:/components"
        - "/tmp/ssl/:/certificates" # Mount the certificates folder to the sidecar container at /certificates
    environment:
      - "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
    depends_on:
      - my-app

The sidecar can read the TLS certificate from a variety of sources. See How-to: Mount Pod volumes to the Dapr sidecar for more. In this example, we store the TLS certificate as a Kubernetes secret.

kubectl create secret generic myapp-cert --from-file /tmp/ssl/cert.pem

The YAML below is an example of the Kubernetes deployment that mounts the above secret to the sidecar and sets SSL_CERT_DIR to install the certificates.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/volume-mounts: "cert-vol:/certificates" # Mount the certificates folder to the sidecar container at /certificates
        dapr.io/env: "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
    spec:
      volumes:
        - name: cert-vol
          secret:
            secretName: myapp-cert
...

Invoke the binding securely

curl -d "{ \"operation\": \"get\" }" \
      https://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get" }' \
      https://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Using mTLS or enabling client TLS authentication along with HTTPS

You can configure the HTTP binding to use mTLS or client TLS authentication along with HTTPS by providing the MTLSRootCA, MTLSClientCert, and MTLSClientKey metadata fields in the binding component.

These fields can be passed as a file path or as a PEM-encoded string:

  • If the file path is provided, the file is read and the contents are used.
  • If the PEM-encoded string is provided, the string is used as is.

When these fields are configured, the Dapr sidecar uses the provided certificate to authenticate itself with the server during the TLS handshake process.

If the remote server is enforcing TLS renegotiation, you also need to set the metadata field MTLSRenegotiation. This field accepts one of the following options:

  • RenegotiateNever
  • RenegotiateOnceAsClient
  • RenegotiateFreelyAsClient

For more details see the Go RenegotiationSupport documentation.

You can use this when the server with which the HTTP binding is configured to communicate requires mTLS or client TLS authentication.

28 - Huawei OBS binding spec

Detailed documentation on the Huawei OBS binding component

Component format

To set up the Huawei Object Storage Service (OBS) output binding, create a component of type bindings.huawei.obs. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.huawei.obs
  version: v1
  metadata:
  - name: bucket
    value: "<your-bucket-name>"
  - name: endpoint
    value: "<obs-bucket-endpoint>"
  - name: accessKey
    value: "<your-access-key>"
  - name: secretKey
    value: "<your-secret-key>"
  # optional fields
  - name: region
    value: "<your-bucket-region>"

Spec metadata fields

Field Required Binding support Details Example
bucket Y Output The name of the Huawei OBS bucket to write to "My-OBS-Bucket"
endpoint Y Output The specific Huawei OBS endpoint "obs.cn-north-4.myhuaweicloud.com"
accessKey Y Output The Huawei Access Key (AK) to access this resource "************"
secretKey Y Output The Huawei Secret Key (SK) to access this resource "************"
region N Output The specific Huawei region of the bucket "cn-north-4"

Binding support

This component supports output binding with the following operations:

Create file

To perform a create operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the destination file name. See the examples below for using metadata to set the destination file name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Examples

Save text to a randomly generated UUID file

On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response JSON body contains the statusCode and the versionId fields. The versionId has a value only if bucket versioning is enabled; otherwise it is an empty string.
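The same create request can be issued from code; below is a minimal Python sketch, where the binding name, port, and function names are illustrative placeholders. It builds the JSON body described above and posts it to the Dapr sidecar's bindings endpoint:

```python
import json
import urllib.request

def make_create_request(data, key=None):
    """Build the JSON body for the create operation.

    If key is omitted, the binding generates a random UUID as the
    destination file name.
    """
    body = {"operation": "create", "data": data}
    if key is not None:
        body["metadata"] = {"key": key}
    return body

def invoke_binding(dapr_port, binding_name, body):
    """POST the request body to the Dapr sidecar's bindings endpoint."""
    req = urllib.request.Request(
        f"http://localhost:{dapr_port}/v1.0/bindings/{binding_name}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or "{}")

# Example (requires a running Dapr sidecar):
# invoke_binding(3500, "my-obs-binding",
#                make_create_request("Hello World", key="my-test-file.txt"))
```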

Upload file

To upload a binary file (for example, .jpg, .zip), invoke the Huawei OBS binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated if you don’t specify the key. See the example below for using metadata to set the destination file name. This API can also be used to upload a regular file, such as a plain text file.

{
  "operation": "upload",
  "metadata": {
     "key": "DESTINATION_FILE_NAME"
   },
  "data": {
     "sourceFile": "PATH_TO_YOUR_SOURCE_FILE"
   }
}

Example

curl -d "{ \"operation\": \"upload\", \"data\": { \"sourceFile\": \".\my-test-file.jpg\" }, \"metadata\": { \"key\": \"my-test-file.jpg\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "upload", "data": { "sourceFile": "./my-test-file.jpg" }, "metadata": { "key": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response JSON body contains the statusCode and the versionId fields. The versionId has a value only if bucket versioning is enabled; otherwise it is an empty string.

Get object

To perform a get file operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Example

curl -d "{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the object.

Delete object

To perform a delete object operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Examples

Delete object
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body are returned if successful.

List objects

To perform a list object operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 5,
    "prefix": "dapr-",
    "marker": "obstest",
    "delimiter": "jpg"
  }
}

The data parameters are:

  • maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
  • prefix - (optional) limits the response to keys that begin with the specified prefix.
  • marker - (optional) the key after which Huawei OBS starts listing. The marker can be any key in the bucket; the last key of a response can be used as the marker of a subsequent call to request the next set of list items.
  • delimiter - (optional) a character used to group keys. Keys that contain the same string between the prefix and the first occurrence of the delimiter are grouped together under a single result element.

Example

curl -d "{ \"operation\": \"list\", \"data\": { \"maxResults\": 5, \"prefix\": \"dapr-\", \"marker\": \"obstest\", \"delimiter\": \"jpg\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "data": { "maxResults": 5, "prefix": "dapr-", "marker": "obstest", "delimiter": "jpg" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the list of found objects.
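The marker parameter described above enables paging through a large bucket. A hedged Python sketch that builds successive list request bodies (the helper name is illustrative; only the request shape follows the operation described above):

```python
def make_list_request(max_results=None, prefix=None, marker=None, delimiter=None):
    """Build the JSON body for the list operation.

    All parameters are optional; only the ones provided are sent.
    """
    data = {}
    if max_results is not None:
        data["maxResults"] = max_results
    if prefix is not None:
        data["prefix"] = prefix
    if marker is not None:
        data["marker"] = marker
    if delimiter is not None:
        data["delimiter"] = delimiter
    return {"operation": "list", "data": data}

# To page through a bucket, feed the last key of each response back in
# as the marker of the next request:
first_page = make_list_request(max_results=5, prefix="dapr-")
next_page = make_list_request(max_results=5, prefix="dapr-", marker="dapr-0005")
```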

29 - InfluxDB binding spec

Detailed documentation on the InfluxDB binding component

Component format

To set up an InfluxDB binding, create a component of type bindings.influx. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.influx
  version: v1
  metadata:
  - name: url # Required
    value: "<INFLUX-DB-URL>"
  - name: token # Required
    value: "<TOKEN>"
  - name: org # Required
    value: "<ORG>"
  - name: bucket # Required
    value: "<BUCKET>"

Spec metadata fields

Field Required Binding support Details Example
url Y Output The URL for the InfluxDB instance "http://localhost:8086"
token Y Output The authorization token for InfluxDB "mytoken"
org Y Output The InfluxDB organization "myorg"
bucket Y Output Bucket name to write to "mybucket"

Binding support

This component supports output binding with the following operations:

  • create
  • query

Query

In order to query InfluxDB, use a query operation along with a raw key in the call’s metadata, with the query as the value:

curl -X POST http://localhost:3500/v1.0/bindings/myInfluxBinding \
  -H "Content-Type: application/json" \
  -d "{
        \"metadata\": {
          \"raw\": "SELECT * FROM 'sith_lords'"
        },
        \"operation\": \"query\"
      }"

30 - Kafka binding spec

Detailed documentation on the Kafka binding component

Component format

To set up a Kafka binding, create a component of type bindings.kafka. See this guide on how to create and apply a binding configuration. For details on using secretKeyRef, see the guide on how to reference secrets in components.

All component metadata field values can carry templated metadata values, which are resolved on Dapr sidecar startup. For example, you can choose to use {namespace} as the consumerGroup, to enable using the same appId in different namespaces using the same topics as described in this article.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-binding
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: topics # Optional. Used for input bindings.
    value: "topic1,topic2"
  - name: brokers # Required.
    value: "localhost:9092,localhost:9093"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: publishTopic # Optional. Used for output bindings.
    value: "topic3"
  - name: authRequired # Required.
    value: "true"
  - name: saslUsername # Required if authRequired is `true`.
    value: "user"
  - name: saslPassword # Required if authRequired is `true`.
    secretKeyRef:
      name: kafka-secrets
      key: "saslPasswordSecret"
  - name: saslMechanism
    value: "SHA-512"
  - name: initialOffset # Optional. Used for input bindings.
    value: "newest"
  - name: maxMessageBytes # Optional.
    value: "1024"
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: "2.0.0"
  - name: direction
    value: "input, output"
  - name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
    value: http://localhost:8081
  - name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
    value: XYAXXAZ
  - name: schemaRegistryAPISecret # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.
    value: "ABCDEFGMEADFF"
  - name: schemaCachingEnabled # Optional. When using Schema Registry Avro serialization/deserialization. Enables caching for schemas.
    value: true
  - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
    value: 5m
  - name: escapeHeaders # Optional.
    value: false

Spec metadata fields

Field Required Binding support Details Example
topics N Input A comma-separated string of topics. "mytopic1,topic2"
brokers Y Input/Output A comma-separated string of Kafka brokers. "localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093"
clientID N Input/Output A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. "my-dapr-app"
consumerGroup N Input A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. "group1"
consumeRetryEnabled N Input/Output Enable consume retry by setting to "true". Default to false in Kafka binding component. "true", "false"
publishTopic Y Output The topic to publish to. "mytopic"
authRequired N Deprecated Enable SASL authentication with the Kafka brokers. "true", "false"
authType Y Input/Output Configure or disable authentication. Supported values: none, password, mtls, or oidc "password", "none"
saslUsername N Input/Output The SASL username used for authentication. Only required if authType is set to "password". "adminuser"
saslPassword N Input/Output The SASL password used for authentication. Can be secretKeyRef to use a secret reference. Only required if authType is set to "password". "", "KeFg23!"
saslMechanism N Input/Output The SASL authentication mechanism you’d like to use. Only required if authtype is set to "password". If not provided, defaults to PLAINTEXT, which could cause a break for some services, like Amazon Managed Service for Kafka. "SHA-512", "SHA-256", "PLAINTEXT"
initialOffset N Input The initial offset to use if no offset was previously committed. Should be “newest” or “oldest”. Defaults to “newest”. "oldest"
maxMessageBytes N Input/Output The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. "2048"
oidcTokenEndpoint N Input/Output Full URL to an OAuth2 identity provider access token endpoint. Required when authType is set to oidc "https://identity.example.com/v1/token"
oidcClientID N Input/Output The OAuth2 client ID that has been provisioned in the identity provider. Required when authType is set to oidc "dapr-kafka"
oidcClientSecret N Input/Output The OAuth2 client secret that has been provisioned in the identity provider: Required when authType is set to oidc "KeFg23!"
oidcScopes N Input/Output Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when authType is set to oidc. Defaults to "openid" "openid,kafka-prod"
version N Input/Output Kafka cluster version. Defaults to 2.0.0. Note that this must be set to 1.0.0 when using Azure Event Hubs with Kafka. "1.0.0"
direction N Input/Output The direction of the binding. "input", "output", "input, output"
oidcExtensions N Input/Output String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token {"cluster":"kafka","poolid":"kafkapool"}
schemaRegistryURL N Input/Output Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. http://localhost:8081
schemaRegistryAPIKey N Input/Output When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. XYAXXAZ
schemaRegistryAPISecret N Input/Output When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. ABCDEFGMEADFF
schemaCachingEnabled N Input/Output When using Schema Registry Avro serialization/deserialization. Enables caching for schemas. Default is true true
schemaLatestVersionCacheTTL N Input/Output When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min 5m
clientConnectionTopicMetadataRefreshInterval N Input/Output The interval for the client connection’s topic metadata to be refreshed with the broker as a Go duration. Defaults to 9m. "4m"
clientConnectionKeepAliveInterval N Input/Output The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. "4m"
consumerFetchDefault N Input/Output The default number of message bytes to fetch from the broker in each request. Default is "1048576" bytes. "2097152"
heartbeatInterval N Input The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the sessionTimeout value. Defaults to "3s". "5s"
sessionTimeout N Input The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". "20s"
escapeHeaders N Input Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is false. true

Note

The metadata version must be set to 1.0.0 when using Azure EventHubs with Kafka.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

Authentication

Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. Learn more about Kafka’s authentication method for both the Kafka binding and Kafka pub/sub components.

Specifying a partition key

When invoking the Kafka binding, it's possible to provide an optional partition key by using the metadata section in the request body.

The field name is partitionKey.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "partitionKey": "key1"
        },
        "operation": "create"
      }'

Response

An HTTP 204 (No Content) and empty body will be returned if successful.
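The partition-key request above can also be built programmatically; a minimal Python sketch (the helper name is illustrative). Setting the same partitionKey for related messages routes them to the same partition, which preserves their relative ordering in Kafka:

```python
def make_kafka_create_request(message, partition_key=None):
    """Build the body for the Kafka binding's create operation.

    partitionKey is optional; when set, Kafka routes the message to
    the partition that the key hashes to.
    """
    body = {"operation": "create", "data": message}
    if partition_key is not None:
        body["metadata"] = {"partitionKey": partition_key}
    return body

req = make_kafka_create_request({"message": "Hi"}, partition_key="key1")
```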

31 - Kitex

Detailed documentation on the Kitex binding component

Overview

The binding for Kitex mainly utilizes the generic-call feature in Kitex. Learn more from the official documentation around Kitex generic-call. Currently, Kitex only supports Thrift generic calls. The implementation integrated into components-contrib adopts binary generic calls.

Component format

To set up a Kitex binding, create a component of type bindings.kitex. See the How-to: Use output bindings to interface with external resources guide on creating and applying a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: bindings.kitex
spec:
  type: bindings.kitex
  version: v1
  metadata: 
  - name: hostPorts
    value: "127.0.0.1:8888"
  - name: destService
    value: "echo"
  - name: methodName
    value: "echo"
  - name: version
    value: "0.5.0"

Spec metadata fields

The InvokeRequest.Metadata for bindings.kitex requires the client to fill in four required items when making a call:

  • hostPorts
  • destService
  • methodName
  • version
Field Required Binding support Details Example
hostPorts Y Output IP address and port information of the Kitex server (Thrift) "127.0.0.1:8888"
destService Y Output Service name of the Kitex server (Thrift) "echo"
methodName Y Output Method name under a specific service name of the Kitex server (Thrift) "echo"
version Y Output Kitex version "0.5.0"

Binding support

This component supports output binding with the following operations:

  • get

Example

When using Kitex binding:

  • The client needs to pass in the correct Thrift-encoded binary
  • The server needs to be a Thrift Server.

The kitex_output_test can be used as a reference. For example, the variable reqData needs to be encoded by the Thrift protocol before sending, and the returned data needs to be decoded by the Thrift protocol.

Request

{
  "operation": "get",
  "metadata": {
    "hostPorts": "127.0.0.1:8888",
    "destService": "echo",
    "methodName": "echo",
    "version":"0.5.0"
  },
  "data": reqdata
}

32 - KubeMQ binding spec

Detailed documentation on the KubeMQ binding component

Component format

To set up a KubeMQ binding, create a component of type bindings.kubemq. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: binding-topic
spec:
  type: bindings.kubemq
  version: v1
  metadata:
    - name: address
      value: "localhost:50000"
    - name: channel
      value: "queue1"
    - name: direction
      value: "input, output"

Spec metadata fields

Field Required Details Example
address Y Address of the KubeMQ server "localhost:50000"
channel Y The Queue channel name "queue1"
authToken N Auth JWT token for connection. Check out KubeMQ Authentication "ew..."
autoAcknowledged N Sets whether a received queue message is automatically acknowledged "true" or "false" (default is "false")
pollMaxItems N Sets the number of messages to poll on every connection "1"
pollTimeoutSeconds N Sets the time in seconds for each poll interval "3600"
direction N The direction of the binding "input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

Create a KubeMQ broker

  1. Obtain KubeMQ Key.
  2. Wait for an email confirmation with your Key

You can run a KubeMQ broker with Docker:

docker run -d -p 8080:8080 -p 50000:50000 -p 9090:9090 -e KUBEMQ_TOKEN=<your-key> kubemq/kubemq

You can then interact with the server using the client port: localhost:50000

To run KubeMQ on Kubernetes instead, use the same key and run the following kubectl commands:

kubectl apply -f https://deploy.kubemq.io/init
kubectl apply -f https://deploy.kubemq.io/key/<your-key>

Install KubeMQ CLI

Go to KubeMQ CLI and download the latest version of the CLI.

Browse KubeMQ Dashboard

Open a browser and navigate to http://localhost:8080

With KubeMQCTL installed, run the following command:

kubemqctl get dashboard

Or, with kubectl installed, run port-forward command:

kubectl port-forward svc/kubemq-cluster-api -n kubemq 8080:8080

KubeMQ Documentation

Visit KubeMQ Documentation for more information.

33 - Kubernetes Events binding spec

Detailed documentation on the Kubernetes Events binding component

Component format

To set up a Kubernetes Events binding, create a component of type bindings.kubernetes. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.kubernetes
  version: v1
  metadata:
  - name: namespace
    value: "<NAMESPACE>"
  - name: resyncPeriodInSec
    value: "<seconds>"
  - name: direction
    value: "input"

Spec metadata fields

Field Required Binding support Details Example
namespace Y Input The Kubernetes namespace to read events from "default"
resyncPeriodInSec N Input The period of time to refresh event list from Kubernetes API server. Defaults to "10" "15"
direction N Input The direction of the binding "input"
kubeconfigPath N Input The path to the kubeconfig file. If not specified, the binding uses the default in-cluster config value "/path/to/kubeconfig"

Binding support

This component supports input binding interface.

Output format

Output received from the binding is of format bindings.ReadResponse with the Data field populated with the following structure:

 {
   "event": "",
   "oldVal": {
     "metadata": {
       "name": "hello-node.162c2661c524d095",
       "namespace": "kube-events",
       "selfLink": "/api/v1/namespaces/kube-events/events/hello-node.162c2661c524d095",
       ...
     },
     "involvedObject": {
       "kind": "Deployment",
       "namespace": "kube-events",
       ...
     },
     "reason": "ScalingReplicaSet",
     "message": "Scaled up replica set hello-node-7bf657c596 to 1",
     ...
   },
   "newVal": {
     "metadata": { "creationTimestamp": "null" },
     "involvedObject": {},
     "source": {},
     "firstTimestamp": "null",
     "lastTimestamp": "null",
     "eventTime": "null",
     ...
   }
 }

Three different event types are available:

  • Add : Only the newVal field is populated, oldVal field is an empty v1.Event, event is add
  • Delete : Only the oldVal field is populated, newVal field is an empty v1.Event, event is delete
  • Update : Both the oldVal and newVal fields are populated, event is update
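A consumer of this binding can branch on the event field. A hedged Python sketch of such a handler (the function name is illustrative; the field semantics follow the three event types above):

```python
def classify_event(payload):
    """Return the event type and the relevant v1.Event object(s).

    Per the semantics above: add populates only newVal, delete
    populates only oldVal, and update populates both.
    """
    event = payload.get("event")
    if event == "add":
        return event, payload["newVal"]
    if event == "delete":
        return event, payload["oldVal"]
    if event == "update":
        return event, (payload["oldVal"], payload["newVal"])
    raise ValueError(f"unknown event type: {event!r}")

kind, obj = classify_event(
    {"event": "add", "newVal": {"reason": "ScalingReplicaSet"}, "oldVal": {}}
)
```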

Required permissions

To consume events from Kubernetes, permissions need to be assigned to a User/Group/ServiceAccount using the RBAC Auth mechanism of Kubernetes.

Role

One of the rules needs to take the form below to grant permissions to get, watch, and list events. API groups can be as restrictive as needed.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <ROLENAME>
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "watch", "list"]

RoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <NAME>
subjects:
- kind: ServiceAccount
  name: default # or as need be, can be changed
roleRef:
  kind: Role
  name: <ROLENAME> # same as the one above
  apiGroup: ""

34 - Local Storage binding spec

Detailed documentation on the Local Storage binding component

Component format

To set up the Local Storage binding, create a component of type bindings.localstorage. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.localstorage
  version: v1
  metadata:
  - name: rootPath
    value: "<string>"

Spec metadata fields

Field Required Binding support Details Example
rootPath Y Output The root path anchor to which files can be read / saved "/temp/files"

Binding support

This component supports output binding with the following operations:

Create file

To perform a create file operation, invoke the Local Storage binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the file name. See Metadata information below to set the file name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Examples

Save text to a randomly generated UUID file

On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"fileName\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "fileName": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a binary file

To upload a file, encode it as Base64. The binding should automatically detect the Base64 encoding.

curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body will contain the following JSON:

{
   "fileName": "<filename>"
}
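For the binary case above, the Base64 payload can be produced in code rather than by hand; a minimal Python sketch (the helper name and file name are placeholders):

```python
import base64

def make_binary_create_request(raw_bytes, file_name):
    """Build the create body for the Local Storage binding,
    Base64-encoding the content as the binding expects."""
    return {
        "operation": "create",
        "data": base64.b64encode(raw_bytes).decode("ascii"),
        "metadata": {"fileName": file_name},
    }

req = make_binary_create_request(b"\x89PNG...", "my-test-file.jpg")
```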

Get file

To perform a get file operation, invoke the Local Storage binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "fileName": "myfile"
  }
}

Example

curl -d "{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"myfile\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "fileName": "myfile" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the file.

List files

To perform a list files operation, invoke the Local Storage binding with a POST method and the following JSON body:

{
  "operation": "list"
}

If you only want to list the files beneath a particular directory below the rootPath, specify the relative directory name as the fileName in the metadata.

{
  "operation": "list",
  "metadata": {
    "fileName": "my/cool/directory"
  }
}

Example

curl -d "{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response is a JSON array of file names.

Delete file

To perform a delete file operation, invoke the Local Storage binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "fileName": "myfile"
  }
}

Example

curl -d "{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body will be returned if successful.

Metadata information

By default, the Local Storage output binding auto-generates a UUID as the file name. You can set the file name through the metadata property of the message.

{
    "data": "file content",
    "metadata": {
        "fileName": "filename.txt"
    },
    "operation": "create"
}

35 - MQTT3 binding spec

Detailed documentation on the MQTT3 binding component

Component format

To set up an MQTT3 binding, create a component of type bindings.mqtt3. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
    - name: url
      value: "tcp://[username][:password]@host.domain[:port]"
    - name: topic
      value: "mytopic"
    - name: consumerID
      value: "myapp"
    # Optional
    - name: retain
      value: "false"
    - name: cleanSession
      value: "false"
    - name: backOffMaxRetries
      value: "0"
    - name: direction
      value: "input, output"

Spec metadata fields

Field Required Binding support Details Example
url Y Input/Output Address of the MQTT broker. Can be secretKeyRef to use a secret reference.
Use the tcp:// URI scheme for non-TLS communication.
Use the ssl:// URI scheme for TLS communication.
"tcp://[username][:password]@host.domain[:port]"
topic Y Input/Output The topic to listen on or send events to. "mytopic"
consumerID Y Input/Output The client ID used to connect to the MQTT broker. "myMqttClientApp"
retain N Input/Output Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false". "true", "false"
cleanSession N Input/Output Sets the clean_session flag in the connection message to the MQTT broker if "true". Defaults to "false". "true", "false"
caCert Required for using TLS Input/Output Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. See example below
clientCert Required for using TLS Input/Output TLS client certificate in PEM format. Must be used with clientKey. See example below
clientKey Required for using TLS Input/Output TLS client key in PEM format. Must be used with clientCert. Can be secretKeyRef to use a secret reference. See example below
backOffMaxRetries N Input The maximum number of retries to process the message before returning an error. Defaults to "0", which means that no retries will be attempted. "-1" can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. "3"
direction N Input/Output The direction of the binding "input", "output", "input, output"

Communication using TLS

To configure communication using TLS, ensure that the MQTT broker (e.g. emqx) is configured to support certificates and provide the caCert, clientCert, clientKey metadata in the component configuration. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-binding
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
    - name: url
      value: "ssl://host.domain[:port]"
    - name: topic
      value: "topic1"
    - name: consumerID
      value: "myapp"
    # TLS configuration
    - name: caCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientKey
      secretKeyRef:
        name: myMqttClientKey
        key: myMqttClientKey
    # Optional
    - name: retain
      value: "false"
    - name: cleanSession
      value: "false"
    - name: backoffMaxRetries
      value: "0"

Note that while the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.

Consuming a shared topic

When consuming a shared topic, each consumer must have a unique identifier. If you run multiple instances of an application, configure the component’s consumerID metadata with a {uuid} tag, which gives each instance a randomly generated consumerID value on startup. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-binding
  namespace: default
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
  - name: consumerID
    value: "{uuid}"
  - name: url
    value: "tcp://admin:public@localhost:1883"
  - name: topic
    value: "topic1"
  - name: retain
    value: "false"
  - name: cleanSession
    value: "true"
  - name: backoffMaxRetries
    value: "0"

In this case, the value of the consumer ID is random every time Dapr restarts, so you should set cleanSession to true as well.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create: publishes a new message

Set topic per-request

You can override the topic in component metadata on a per-request basis:

{
  "operation": "create",
  "metadata": {
    "topic": "myTopic"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}

Set retain property per-request

You can override the retain property in component metadata on a per-request basis:

{
  "operation": "create",
  "metadata": {
    "retain": "true"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}

36 - MySQL & MariaDB binding spec

Detailed documentation on the MySQL binding component

Component format

The MySQL binding allows connecting to both MySQL and MariaDB databases. In this document, we refer to “MySQL” to indicate both databases.

To set up a MySQL binding, create a component of type bindings.mysql. See this guide on how to create and apply a binding configuration.

The MySQL binding uses Go-MySQL-Driver internally.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.mysql
  version: v1
  metadata:
    - name: url # Required, define DB connection in DSN format
      value: "<CONNECTION_STRING>"
    - name: pemPath # Optional
      value: "<PEM PATH>"
    - name: maxIdleConns
      value: "<MAX_IDLE_CONNECTIONS>"
    - name: maxOpenConns
      value: "<MAX_OPEN_CONNECTIONS>"
    - name: connMaxLifetime
      value: "<CONNECTION_MAX_LIFE_TIME>"
    - name: connMaxIdleTime
      value: "<CONNECTION_MAX_IDLE_TIME>"

Spec metadata fields

Field Required Binding support Details Example
url Y Output Represents the DB connection in Data Source Name (DSN) format. See the SSL connection section for details "user:password@tcp(localhost:3306)/dbname"
pemPath Y Output Path to the PEM file. Used with SSL connection "path/to/pem/file"
maxIdleConns N Output The max idle connections. Integer greater than 0 "10"
maxOpenConns N Output The max open connections. Integer greater than 0 "10"
connMaxLifetime N Output The max connection lifetime. Duration string "12s"
connMaxIdleTime N Output The max connection idle time. Duration string "12s"

SSL connection

If your server requires SSL, your connection string must end with &tls=custom, for example:

"<user>:<password>@tcp(<server>:3306)/<database>?allowNativePasswords=true&tls=custom"

You must replace <PEM PATH> with the full path to the PEM file. If you are using Azure Database for MySQL, see the Azure documentation on SSL database connections for information on how to download the required certificate. The connection to MySQL requires a minimum TLS version of 1.2.

Multiple statements

By default, the MySQL Go driver only supports one SQL statement per query/command.

To allow multiple statements in one query, add multiStatements=true to the query string, for example:

"<user>:<password>@tcp(<server>:3306)/<database>?multiStatements=true"

While this allows batch queries, it also greatly increases the risk of SQL injection. Only the result of the first query is returned; all other results are silently discarded.

Binding support

This component supports output binding with the following operations:

  • exec
  • query
  • close

Parametrized queries

This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.

For example:

-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';

-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = ?;

exec

The exec operation can be used for DDL operations (like table creation), as well as INSERT, UPDATE, DELETE operations which return only metadata (e.g. number of affected rows).

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "exec",
  "metadata": {
    "sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)",
    "params": "[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"
  }
}

Response

{
  "metadata": {
    "operation": "exec",
    "duration": "294µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.414519Z",
    "rows-affected": "1",
    "sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)"
  }
}
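A common pitfall is that params must carry a JSON-encoded string, not a nested JSON array. A minimal Python sketch of assembling the exec request body, using the values from the example above:

import json

# params is a JSON-encoded array *string*, not a JSON array.
params = json.dumps([1, "demo", "2020-09-24T11:45:05Z07:00"])

request = {
    "operation": "exec",
    "metadata": {
        "sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)",
        "params": params,
    },
}
body = json.dumps(request)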

query

The query operation is used for SELECT statements, which return the metadata along with the data in the form of an array of row values.

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "query",
  "metadata": {
    "sql": "SELECT * FROM foo WHERE id < ?",
    "params": "[3]"
  }
}

Response

{
  "metadata": {
    "operation": "query",
    "duration": "432µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.420566Z",
    "sql": "SELECT * FROM foo WHERE id < ?"
  },
  "data": [
    {column_name: value, column_name: value, ...},
    {column_name: value, column_name: value, ...},
    {column_name: value, column_name: value, ...},
  ]
}

Here column_name is the name of a column returned by the query, and value is the value of that column. Note that values are returned as strings or numbers (language-specific data types).

close

The close operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.

Request

{
  "operation": "close"
}

37 - PostgreSQL binding spec

Detailed documentation on the PostgreSQL binding component

Component format

To set up a PostgreSQL binding, create a component of type bindings.postgresql. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.postgresql
  version: v1
  metadata:
    # Connection string
    - name: connectionString
      value: "<CONNECTION STRING>"

Spec metadata fields

Authenticate using a connection string

The following metadata options are required to authenticate using a PostgreSQL connection string.

Field Required Details Example
connectionString Y The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string. "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"

Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

Field Required Details Example
host Y The host name or IP address of the PostgreSQL server "localhost"
hostaddr N The IP address of the PostgreSQL server (alternative to host) "127.0.0.1"
port Y The port number of the PostgreSQL server "5432"
database Y The name of the database to connect to "my_db"
user Y The PostgreSQL user to connect as "postgres"
password Y The password for the PostgreSQL user "example"
sslRootCert N Path to the SSL root certificate file "/path/to/ca.crt"

Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.

Field Required Details Example
useAzureAD Y Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID. "true"
connectionString Y The connection string for the PostgreSQL database.
This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password.
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require"
azureTenantId N ID of the Microsoft Entra ID tenant "cd4b2887-304c-…"
azureClientId N Client ID (application ID) "c7dd251f-811f-…"
azureClientSecret N Client secret (application password) "Ecy3X…"

Authenticate using AWS IAM

Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. The user specified in the connection string must already exist in the DB and be an AWS IAM-enabled user granted the rds_iam database role. Authentication is based on the AWS authentication configuration file or the provided AccessKey/SecretKey. The AWS authentication token is dynamically rotated before its expiration time.

Field Required Details Example
useAWSIAM Y Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. "true"
connectionString Y The connection string for the PostgreSQL database.
This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
awsRegion N This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to. "us-east-1"
awsAccessKey N This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account "AKIAIOSFODNN7EXAMPLE"
awsSecretKey N This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionToken N This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials. "TOKEN"

Other metadata options

Field Required Binding support Details Example
timeout N Output Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s "30s", 30
maxConns N Output Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. "4"
connectionMaxIdleTime N Output Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose. "5m"
queryExecMode N Output Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use exec or simple_protocol. "simple_protocol"

URL format

The PostgreSQL binding uses the pgx connection pool internally, so the connectionString parameter can be any valid connection string, in either DSN or URL format:

Example DSN

user=dapr password=secret host=dapr.example.com port=5432 dbname=my_dapr sslmode=verify-ca

Example URL

postgres://dapr:secret@dapr.example.com:5432/my_dapr?sslmode=verify-ca

Both methods also support connection pool configuration variables:

  • pool_min_conns: integer 0 or greater
  • pool_max_conns: integer greater than 0
  • pool_max_conn_lifetime: duration string
  • pool_max_conn_idle_time: duration string
  • pool_health_check_period: duration string
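
For example, a URL that also caps the pool size (pool_max_conns=10 is an illustrative value):

postgres://dapr:secret@dapr.example.com:5432/my_dapr?sslmode=verify-ca&pool_max_conns=10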

Binding support

This component supports output binding with the following operations:

  • exec
  • query
  • close

Parametrized queries

This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.

For example:

-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';

-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = $1;

exec

The exec operation can be used for DDL operations (like table creation), as well as INSERT, UPDATE, DELETE operations which return only metadata (e.g. number of affected rows).

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "exec",
  "metadata": {
    "sql": "INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)",
    "params": "[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"
  }
}

Response

{
  "metadata": {
    "operation": "exec",
    "duration": "294µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.414519Z",
    "rows-affected": "1",
    "sql": "INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)"
  }
}

query

The query operation is used for SELECT statements, which return the metadata along with the data in the form of an array of row values.

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "query",
  "metadata": {
    "sql": "SELECT * FROM foo WHERE id < $1",
    "params": "[3]"
  }
}

Response

{
  "metadata": {
    "operation": "query",
    "duration": "432µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.420566Z",
    "sql": "SELECT * FROM foo WHERE id < $1"
  },
  "data": "[
    [0,\"test-0\",\"2020-09-24T04:13:46Z\"],
    [1,\"test-1\",\"2020-09-24T04:13:46Z\"],
    [2,\"test-2\",\"2020-09-24T04:13:46Z\"]
  ]"
}
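In the response above, the data field is itself a JSON-encoded string of row arrays, so it must be decoded a second time before use. A minimal Python sketch, using a shortened version of the value from the example above:

import json

# "data" in the query response is a JSON-encoded string of row arrays;
# a second json.loads turns it into usable rows.
response_data = '[[0,"test-0","2020-09-24T04:13:46Z"],[1,"test-1","2020-09-24T04:13:46Z"]]'
rows = json.loads(response_data)
first_id = rows[0][0]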

close

The close operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.

Request

{
  "operation": "close"
}

38 - Postmark binding spec

Detailed documentation on the Postmark binding component

Component format

To set up a Postmark binding, create a component of type bindings.postmark. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: postmark
spec:
  type: bindings.postmark
  metadata:
  - name: accountToken
    value: "YOUR_ACCOUNT_TOKEN" # required, this is your Postmark account token
  - name: serverToken
    value: "YOUR_SERVER_TOKEN" # required, this is your Postmark server token
  - name: emailFrom
    value: "testapp@dapr.io" # optional
  - name: emailTo
    value: "dave@dapr.io" # optional
  - name: subject
    value: "Hello!" # optional

Spec metadata fields

Field Required Binding support Details Example
accountToken Y Output The Postmark account token; this should be considered a secret value "account token"
serverToken Y Output The Postmark server token; this should be considered a secret value "server token"
emailFrom N Output If set, this specifies the ‘from’ email address of the email message "me@example.com"
emailTo N Output If set, this specifies the ’to’ email address of the email message "me@example.com"
emailCc N Output If set, this specifies the ‘cc’ email address of the email message "me@example.com"
emailBcc N Output If set, this specifies the ‘bcc’ email address of the email message "me@example.com"
subject N Output If set, this specifies the subject of the email message "Hello from Dapr"

You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom, emailTo, subject, etc.)

Combined, the optional metadata properties in the component configuration and the request payload should at least contain the emailFrom, emailTo and subject fields, as these are required to send an email successfully.

Binding support

This component supports output binding with the following operations:

  • create

Example request payload

{
  "operation": "create",
  "metadata": {
    "emailTo": "changeme@example.net",
    "subject": "An email from Dapr Postmark binding"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}

39 - RabbitMQ binding spec

Detailed documentation on the RabbitMQ binding component

Component format

To set up a RabbitMQ binding, create a component of type bindings.rabbitmq. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.rabbitmq
  version: v1
  metadata:
  - name: queueName
    value: "queue1"
  - name: host
    value: "amqp://[username][:password]@host.domain[:port]"
  - name: durable
    value: "true"
  - name: deleteWhenUnused
    value: "false"
  - name: ttlInSeconds
    value: "60"
  - name: prefetchCount
    value: "0"
  - name: exclusive
    value: "false"
  - name: maxPriority
    value: "5"
  - name: contentType
    value: "text/plain"
  - name: reconnectWaitInSeconds
    value: "5"
  - name: externalSasl
    value: "false"
  - name: caCert
    value: "null"
  - name: clientCert
    value: "null"
  - name: clientKey
    value: "null"
  - name: direction 
    value: "input, output"

Spec metadata fields

When a new RabbitMQ message gets published, all values from the associated metadata are added to the message’s header values.

Field Required Binding support Details Example
queueName Y Input/Output The RabbitMQ queue name "myqueue"
host Y Input/Output The RabbitMQ host address "amqp://[username][:password]@host.domain[:port]" or with TLS: "amqps://[username][:password]@host.domain[:port]"
durable N Output Tells RabbitMQ to persist message in storage. Defaults to "false" "true", "false"
deleteWhenUnused N Input/Output Enables or disables auto-delete. Defaults to "false" "true", "false"
ttlInSeconds N Output Set the default message time to live at RabbitMQ queue level. If this parameter is omitted, messages won’t expire, continuing to exist on the queue until processed. See also 60
prefetchCount N Input Set the Channel Prefetch Setting (QoS). If this parameter is omitted, QoS is set to 0, meaning no limit 0
exclusive N Input/Output Determines whether the topic will be an exclusive topic or not. Defaults to "false" "true", "false"
maxPriority N Input/Output Parameter to set the priority queue. If this parameter is omitted, queue will be created as a general queue instead of a priority queue. Value between 1 and 255. See also "1", "10"
contentType N Input/Output The content type of the message. Defaults to “text/plain”. "text/plain", "application/cloudevent+json" and so on
reconnectWaitInSeconds N Input/Output Represents the duration in seconds that the client should wait before attempting to reconnect to the server after a disconnection occurs. Defaults to "5". "5", "10"
externalSasl N Input/Output With TLS, should the username be taken from an additional field (e.g. CN.) See RabbitMQ Authentication Mechanisms. Defaults to "false". "true", "false"
caCert N Input/Output The CA certificate to use for TLS connection. Defaults to null. "-----BEGIN CERTIFICATE-----\nMI..."
clientCert N Input/Output The client certificate to use for TLS connection. Defaults to null. "-----BEGIN CERTIFICATE-----\nMI..."
clientKey N Input/Output The client key to use for TLS connection. Defaults to null. "-----BEGIN PRIVATE KEY-----\nMI..."
direction N Input/Output The direction of the binding. "input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

Specifying a TTL per message

Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.

To set time to live at message level use the metadata section in the request body during the binding invocation.

The field name is ttlInSeconds.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d "{
        \"data\": {
          \"message\": \"Hi\"
        },
        \"metadata\": {
          \"ttlInSeconds\": \"60\"
        },
        \"operation\": \"create\"
      }"
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        },
        "operation": "create"
      }'

Specifying a priority per message

Priority can be defined at the message level. If the maxPriority parameter is set, high-priority messages take precedence over low-priority messages.

To set priority at message level use the metadata section in the request body during the binding invocation.

The field name is priority.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d "{
        \"data\": {
          \"message\": \"Hi\"
        },
        \"metadata\": {
          \"priority\": \"5\"
        },
        \"operation\": \"create\"
      }"
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "priority": "5"
        },
        "operation": "create"
      }'

40 - Redis binding spec

Detailed documentation on the Redis binding component

Component format

To set up a Redis binding, create a component of type bindings.redis. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.redis
  version: v1
  metadata:
  - name: redisHost
    value: "<address>:6379"
  - name: redisPassword
    value: "**************"
  - name: useEntraID
    value: "true"
  - name: enableTLS
    value: "<bool>"

Spec metadata fields

Field Required Binding support Details Example
redisHost Y Output The Redis host address "localhost:6379"
redisPassword N Output The Redis password "password"
redisUsername N Output Username for the Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly. "username"
useEntraID N Output Implements EntraID support for Azure Cache for Redis. Before enabling this:
  • The redisHost name must be specified in the form of "server:port"
  • TLS must be enabled
Learn more about this setting under Create a Redis instance > Azure Cache for Redis
"true", "false"
enableTLS N Output If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to "false" "true", "false"
clientCert N Output The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here "----BEGIN CERTIFICATE-----\nMIIC..."
clientKey N Output The content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here "----BEGIN PRIVATE KEY-----\nMIIE..."
failover N Output Property to enable failover configuration. Requires sentinelMasterName to be set. Defaults to "false" "true", "false"
sentinelMasterName N Output The sentinel master name. See Redis Sentinel Documentation "", "127.0.0.1:6379"
redeliverInterval N Output The interval between checks for pending messages to redeliver. Defaults to "60s". "0" disables redelivery. "30s"
processingTimeout N Output The amount of time a message must be pending before attempting to redeliver it. Defaults to "15s". "0" disables redelivery. "30s"
redisType N Output The type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node". "cluster"
redisDB N Output Database selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0". "0"
redisMaxRetries N Output Maximum number of times to retry commands before giving up. Default is to not retry failed commands. "5"
redisMinRetryInterval N Output Minimum backoff for redis commands between each retry. Default is "8ms"; "-1" disables backoff. "8ms"
redisMaxRetryInterval N Output Maximum backoff for redis commands between each retry. Default is "512ms";"-1" disables backoff. "5s"
dialTimeout N Output Dial timeout for establishing new connections. Defaults to "5s". "5s"
readTimeout N Output Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout. "3s"
writeTimeout N Output Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Defaults is readTimeout. "3s"
poolSize N Output Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. "20"
poolTimeout N Output Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. "5s"
maxConnAge N Output Connection age at which the client retires (closes) the connection. Default is to not close aged connections. "30m"
minIdleConns N Output Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0". "2"
idleCheckFrequency N Output Frequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper. "-1"
idleTimeout N Output Amount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check. "10m"

Binding support

This component supports output binding with the following operations:

  • create
  • get
  • delete

create

You can store a record in Redis using the create operation. This sets a key to hold a value. If the key already exists, the value is overwritten.

Request

{
  "operation": "create",
  "metadata": {
    "key": "key1"
  },
  "data": {
    "Hello": "World",
    "Lorem": "Ipsum"
  }
}

Response

An HTTP 204 (No Content) and empty body is returned if successful.

get

You can get a record in Redis using the get operation. This gets a key that was previously set.

This takes an optional parameter delete, which defaults to false. When set to true, this operation uses the GETDEL operation of Redis: it returns the value that was previously set and then deletes it.

Request

{
  "operation": "get",
  "metadata": {
    "key": "key1"
  },
  "data": {
  }
}

Response

{
  "data": {
    "Hello": "World",
    "Lorem": "Ipsum"
  }
}

Request with delete flag

{
  "operation": "get",
  "metadata": {
    "key": "key1",
    "delete": "true"
  },
  "data": {
  }
}

delete

You can delete a record in Redis using the delete operation. Returns success whether the key exists or not.

Request

{
  "operation": "delete",
  "metadata": {
    "key": "key1"
  }
}

Response

An HTTP 204 (No Content) and empty body is returned if successful.

Create a Redis instance

Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later.

Note: Dapr does not support Redis >= 7. It is recommended to use Redis 6

The Dapr CLI will automatically create and set up a Redis Streams instance for you. The Redis instance will be installed via Docker when you run dapr init, and the component file will be created in the default components directory: $HOME/.dapr/components (Mac/Linux) or %USERPROFILE%\.dapr\components (Windows).

You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.

  1. Install Redis into your cluster.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install redis bitnami/redis --set image.tag=6.2
    
  2. Run kubectl get pods to see the Redis containers now running in your cluster.

  3. Add redis-master:6379 as the redisHost in your redis.yaml file. For example:

        metadata:
        - name: redisHost
          value: redis-master:6379
    
  4. Next, we’ll get our Redis password, which is slightly different depending on the OS we’re using:

    • Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which will create a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.

    • Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the password from the output.

    Add this password as the redisPassword value in your redis.yaml file. For example:

        - name: redisPassword
          value: "lhDOkwTlp0"
    
  1. Create an Azure Cache for Redis instance using the official Microsoft documentation.

  2. Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.

    • For the Host name:
      • Navigate to the resource’s Overview page.
      • Copy the Host name value.
    • For your access key:
      • Navigate to Settings > Access Keys.
      • Copy and save your key.
  3. Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.

    • If you’re running a sample, add the host and key to the provided redis.yaml.
    • If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
  4. Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.

    Note: In a production-grade application, follow secret management instructions to securely manage your secrets.

  5. Enable EntraID support:

    • Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
    • Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
  6. Set enableTLS to "true" to support TLS.

Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.

41 - RethinkDB binding spec

Detailed documentation on the RethinkDB binding component

Component format

The RethinkDB state store supports transactions, which means it can be used to support Dapr actors. Dapr persists only the actor’s current state, which doesn’t allow users to track how an actor’s state may have changed over time.

To enable users to track changes in actor state, this binding leverages RethinkDB’s built-in capability to monitor a RethinkDB table and emit change events containing both the old and new state. The binding creates a subscription on the Dapr state table and streams these changes using the Dapr input binding interface.

To set up the RethinkDB statechange binding, create a component of type bindings.rethinkdb.statechange. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: changes
spec:
  type: bindings.rethinkdb.statechange
  version: v1
  metadata:
  - name: address
    value: "<REPLACE-RETHINKDB-ADDRESS>" # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015
  - name: database
    value: "<REPLACE-RETHINKDB-DB-NAME>" # Required, e.g. dapr (alpha-numerics only)
  - name: direction 
    value: "<DIRECTION-OF-RETHINKDB-BINDING>"

Spec metadata fields

Field Required Binding support Details Example
address Y Input Address of the RethinkDB server "127.0.0.1:28015", "rethinkdb.default.svc.cluster.local:28015"
database Y Input RethinkDB database name "dapr"
direction N Input Direction of the binding "input"

Binding support

This component only supports the input binding interface.

42 - SFTP binding spec

Detailed documentation on the Secure File Transfer Protocol (SFTP) binding component

Component format

To set up the SFTP binding, create a component of type bindings.sftp. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.sftp
  version: v1
  metadata:
  - name: rootPath
    value: "<string>"
  - name: address
    value: "<string>"
  - name: username
    value: "<string>"
  - name: password
    value: "*****************"
  - name: privateKey
    value: "*****************"
  - name: privateKeyPassphrase
    value: "*****************"
  - name: hostPublicKey
    value: "*****************"
  - name: knownHostsFile
    value: "<string>"
  - name: insecureIgnoreHostKey
    value: "<bool>"

Spec metadata fields

Field Required Binding support Details Example
rootPath Y Output Root path for default working directory "/path"
address Y Output Address of SFTP server "localhost:22"
username Y Output Username for authentication "username"
password N Output Password for username/password authentication "password"
privateKey N Output Private key for public key authentication
"|-
-----BEGIN OPENSSH PRIVATE KEY-----
*****************
-----END OPENSSH PRIVATE KEY-----"
privateKeyPassphrase N Output Private key passphrase for public key authentication "passphrase"
hostPublicKey N Output Host public key for host validation "ecdsa-sha2-nistp256 *** root@openssh-server"
knownHostsFile N Output Known hosts file for host validation "/path/file"
insecureIgnoreHostKey N Output Allows skipping host validation. Defaults to "false" "true", "false"

Binding support

This component supports output binding with the following operations:

Create file

To perform a create file operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "create",
  "data": "<YOUR_BASE_64_CONTENT>",
  "metadata": {
    "fileName": "<filename>"
  }
}
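The data field must be the Base64-encoded file content. A minimal Python sketch of assembling this body (the helper name is illustrative, not part of the binding API):

```python
import base64
import json

def build_create_request(file_name: str, content: bytes) -> str:
    """Assemble the JSON body for an SFTP 'create' operation.

    The binding expects the raw file content Base64-encoded in 'data'.
    """
    return json.dumps({
        "operation": "create",
        "data": base64.b64encode(content).decode("ascii"),
        "metadata": {"fileName": file_name},
    })

body = build_create_request("my-test-file.txt", b"hello")
```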

Example

curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

{
   "fileName": "<filename>"
}

Get file

To perform a get file operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "fileName": "<filename>"
  }
}

Example

curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the file.

List files

To perform a list files operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "list"
}

If you only want to list the files beneath a particular directory below the rootPath, specify the relative directory name as the fileName in the metadata.

{
  "operation": "list",
  "metadata": {
    "fileName": "my/cool/directory"
  }
}

Example

curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response is a JSON array of file names.

Delete file

To perform a delete file operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "fileName": "myfile"
  }
}

Example

curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) response with an empty body is returned if successful.

43 - SMTP binding spec

Detailed documentation on the SMTP binding component

Component format

To set up the SMTP binding, create a component of type bindings.smtp. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: smtp
spec:
  type: bindings.smtp
  version: v1
  metadata:
  - name: host
    value: "smtp host"
  - name: port
    value: "smtp port"
  - name: user
    value: "username"
  - name: password
    value: "password"
  - name: skipTLSVerify
    value: true|false
  - name: emailFrom
    value: "sender@example.com"
  - name: emailTo
    value: "receiver@example.com"
  - name: emailCC
    value: "cc@example.com"
  - name: emailBCC
    value: "bcc@example.com"
  - name: subject
    value: "subject"
  - name: priority
    value: "[value 1-5]"

Spec metadata fields

Field Required Binding support Details Example
host Y Output The host where your SMTP server runs "smtphost"
port Y Output The port your SMTP server listens on "9999"
user Y Output The user to authenticate against the SMTP server "user"
password Y Output The password of the user "password"
skipTLSVerify N Output If set to true, the SMTP server's TLS certificate will not be verified. Defaults to "false" "true", "false"
emailFrom N Output If set, this specifies the email address of the sender "me@example.com"
emailTo N Output If set, this specifies the email address of the receiver "me@example.com"
emailCc N Output If set, this specifies the email address to CC in "me@example.com"
emailBcc N Output If set, this specifies the email address to BCC in "me@example.com"
subject N Output If set, this specifies the subject of the email message "subject of mail"
priority N Output If set, this specifies the priority (X-Priority) of the email message, from 1 (lowest) to 5 (highest); default: 3 "1"

Binding support

This component supports output binding with the following operations:

  • create

Example request

You can specify any of the following optional metadata properties with each request:

  • emailFrom
  • emailTo
  • emailCC
  • emailBCC
  • subject
  • priority

When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom, emailTo and subject fields.

The emailTo, emailCC and emailBCC fields can contain multiple email addresses separated by a semicolon.
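The combination rules above can be sketched as follows. This is an illustration of the documented behavior, not the binding's actual implementation; it assumes request-level metadata takes precedence over component-level metadata:

```python
REQUIRED_FIELDS = ("emailFrom", "emailTo", "subject")

def resolve_email_metadata(component: dict, request: dict) -> dict:
    """Combine component and request metadata for one send.

    Request values override component values; the merged result must
    contain at least emailFrom, emailTo, and subject. Recipient fields
    may carry several addresses separated by semicolons.
    """
    merged = {**component, **request}
    missing = [f for f in REQUIRED_FIELDS if not merged.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    for field in ("emailTo", "emailCC", "emailBCC"):
        if field in merged:
            merged[field] = [a.strip() for a in merged[field].split(";")]
    return merged
```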

Example:

{
  "operation": "create",
  "metadata": {
    "emailTo": "dapr-smtp-binding@example.net",
    "emailCC": "cc1@example.net; cc2@example.net",
    "subject": "Email subject",
    "priority": "1"
  },
  "data": "Testing Dapr SMTP Binding"
}


44 - Twilio SendGrid binding spec

Detailed documentation on the Twilio SendGrid binding component

Component format

To set up the Twilio SendGrid binding, create a component of type bindings.twilio.sendgrid. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: sendgrid
spec:
  type: bindings.twilio.sendgrid
  version: v1
  metadata:
  - name: emailFrom
    value: "testapp@dapr.io" # optional
  - name: emailFromName
    value: "test app" # optional
  - name: emailTo
    value: "dave@dapr.io" # optional
  - name: emailToName
    value: "dave" # optional
  - name: subject
    value: "Hello!" # optional
  - name: emailCc
    value: "jill@dapr.io" # optional
  - name: emailBcc
    value: "bob@dapr.io" # optional
  - name: dynamicTemplateId
    value: "d-123456789" # optional
  - name: dynamicTemplateData
    value: '{"customer":{"name":"John Smith"}}' # optional
  - name: apiKey
    value: "YOUR_API_KEY" # required, this is your SendGrid key

Spec metadata fields

Field Required Binding support Details Example
apiKey Y Output SendGrid API key; this should be considered a secret value "apikey"
emailFrom N Output If set, this specifies the 'from' email address of the email message. Only a single email address is allowed. Optional field, see below "me@example.com"
emailFromName N Output If set, this specifies the 'from' name of the email message. Optional field, see below "me"
emailTo N Output If set, this specifies the 'to' email address of the email message. Only a single email address is allowed. Optional field, see below "me@example.com"
emailToName N Output If set, this specifies the 'to' name of the email message. Optional field, see below "me"
emailCc N Output If set, this specifies the 'cc' email address of the email message. Only a single email address is allowed. Optional field, see below "me@example.com"
emailBcc N Output If set, this specifies the 'bcc' email address of the email message. Only a single email address is allowed. Optional field, see below "me@example.com"
subject N Output If set, this specifies the subject of the email message. Optional field, see below "subject of the email"

Binding support

This component supports output binding with the following operations:

  • create

Example request payload

You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom, emailTo, subject, etc.)

{
  "operation": "create",
  "metadata": {
    "emailTo": "changeme@example.net",
    "subject": "An email from Dapr SendGrid binding"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}

Dynamic templates

If a dynamic template is used, a dynamicTemplateId needs to be provided and then the dynamicTemplateData is used:

{
  "operation": "create",
  "metadata": {
    "emailTo": "changeme@example.net",
    "subject": "A template email from Dapr SendGrid binding",
    "dynamicTemplateId": "d-123456789",
    "dynamicTemplateData": "{\"customer\":{\"name\":\"John Smith\"}}"
  }
}
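Note that dynamicTemplateData is itself a serialized JSON string inside the metadata, not a nested object. A Python sketch of producing such a request (the helper name is illustrative):

```python
import json

def build_template_request(email_to: str, subject: str,
                           template_id: str, data: dict) -> str:
    """Build a 'create' request using a SendGrid dynamic template.

    dynamicTemplateData must be passed as a JSON string, so the
    nested dict is serialized a second time.
    """
    return json.dumps({
        "operation": "create",
        "metadata": {
            "emailTo": email_to,
            "subject": subject,
            "dynamicTemplateId": template_id,
            "dynamicTemplateData": json.dumps(data),
        },
    })
```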

45 - Twilio SMS binding spec

Detailed documentation on the Twilio SMS binding component

Component format

To set up the Twilio SMS binding, create a component of type bindings.twilio.sms. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.twilio.sms
  version: v1
  metadata:
  - name: toNumber # required.
    value: "111-111-1111"
  - name: fromNumber # required.
    value: "222-222-2222"
  - name: accountSid # required.
    value: "*****************"
  - name: authToken # required.
    value: "*****************"

Spec metadata fields

Field Required Binding support Details Example
toNumber Y Output The target number to send the SMS to "111-111-1111"
fromNumber Y Output The sender phone number "222-222-2222"
accountSid Y Output The Twilio account SID "account sid"
authToken Y Output The Twilio auth token "auth token"

Binding support

This component supports output binding with the following operations:

  • create

46 - Wasm

Detailed documentation on the WebAssembly binding component

Overview

With WebAssembly, you can safely run code compiled in other languages. Runtimes execute WebAssembly Modules (Wasm), which are most often binaries with a .wasm extension.

The Wasm Binding allows you to invoke a program compiled to Wasm by passing commandline args or environment variables to it, similar to how you would with a normal subprocess. For example, you can satisfy an invocation using Python, even though Dapr is written in Go and is running on a platform that doesn’t have Python installed!

The Wasm binary must be a program compiled with the WebAssembly System Interface (WASI). The binary can be a program you’ve written such as in Go, or an interpreter you use to run inlined scripts, such as Python.

Minimally, you must specify a Wasm binary compiled with the canonical WASI version wasi_snapshot_preview1 (a.k.a. wasip1), often abbreviated to wasi.

Note: If compiling in Go 1.21+, this is GOOS=wasip1 GOARCH=wasm. In TinyGo, Rust, and Zig, this is the target wasm32-wasi.

You can also re-use an existing binary. For example, Wasm Language Runtimes distributes interpreters (including PHP, Python, and Ruby) already compiled to WASI.

Wasm binaries are loaded from a URL. For example, the URL file://rewrite.wasm loads rewrite.wasm from the current directory of the process. On Kubernetes, see How to: Mount Pod volumes to the Dapr sidecar to configure a filesystem mount that can contain Wasm binaries. It is also possible to fetch the Wasm binary from a remote URL. In this case, the URL must point exactly to one Wasm binary. For example:

  • http://example.com/rewrite.wasm, or
  • https://example.com/rewrite.wasm.

Dapr uses wazero to run these binaries, because it has no dependencies. This allows use of WebAssembly with no installation process except Dapr itself.

The Wasm output binding supports making HTTP client calls using the wasi-http specification.

Component format

To configure a Wasm binding, create a component of type bindings.wasm. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: bindings.wasm
  version: v1
  metadata:
    - name: url
      value: "file://uppercase.wasm"

Spec metadata fields

Field Details Required Example
url The URL of the resource including the Wasm binary to instantiate. The supported schemes include file://, http://, and https://. The path of a file:// URL is relative to the Dapr process unless it begins with /. true file://hello.wasm, https://example.com/hello.wasm

Binding support

This component supports output binding with the following operations:

  • execute

Example request

The data field, if present, will be the program's STDIN. You can optionally pass metadata properties with each request:

  • args any CLI arguments, comma-separated. This excludes the program name.
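Since args is a single comma-separated string, the binding effectively splits it on commas to build the argument vector (the program name itself is excluded). A rough sketch of that interpretation:

```python
def parse_args(args_metadata: str) -> list[str]:
    """Split the comma-separated 'args' metadata into CLI arguments.

    Note that in this sketch a comma inside an argument cannot be
    escaped, so individual arguments must not contain commas.
    """
    if not args_metadata:
        return []
    return args_metadata.split(",")
```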

For example, consider binding the url to a Ruby interpreter, such as from webassembly-language-runtimes:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: bindings.wasm
  version: v1
  metadata:
  - name: url
    value: "https://github.com/vmware-labs/webassembly-language-runtimes/releases/download/ruby%2F3.2.0%2B20230215-1349da9/ruby-3.2.0-slim.wasm"

Assuming that you wanted to start your Dapr at port 3500 with the Wasm Binding, you’d run:

$ dapr run --app-id wasm --dapr-http-port 3500 --resources-path components

The following request responds with Hello "salaboy":

$ curl -X POST http://localhost:3500/v1.0/bindings/wasm -d'
{
  "operation": "execute",
  "metadata": {
    "args": "-ne,print \"Hello \"; print"
  },
  "data": "salaboy"
}'

47 - Zeebe command binding spec

Detailed documentation on the Zeebe command binding component

Component format

To set up the Zeebe command binding, create a component of type bindings.zeebe.command. See this guide on how to create and apply a binding configuration.

See the Zeebe documentation for more information.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.zeebe.command
  version: v1
  metadata:
  - name: gatewayAddr
    value: "<host>:<port>"
  - name: gatewayKeepAlive
    value: "45s"
  - name: usePlainTextConnection
    value: "true"
  - name: caCertificatePath
    value: "/path/to/ca-cert"

Spec metadata fields

Field Required Binding support Details Example
gatewayAddr Y Output Zeebe gateway address "localhost:26500"
gatewayKeepAlive N Output Sets how often keep-alive messages should be sent to the gateway. Defaults to 45 seconds "45s"
usePlainTextConnection N Output Whether to use a plain text connection or not "true", "false"
caCertificatePath N Output The path to the CA cert "/path/to/ca-cert"

Binding support

This component supports output binding with the following operations:

  • topology
  • deploy-process
  • deploy-resource
  • create-instance
  • cancel-instance
  • set-variables
  • resolve-incident
  • publish-message
  • activate-jobs
  • complete-job
  • fail-job
  • update-job-retries
  • throw-error

Output binding

The Zeebe client used in this binding communicates with the Zeebe gateway over gRPC. Please consult the gRPC API reference for more information.

topology

The topology operation obtains the current topology of the cluster the gateway is part of.

To perform a topology operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {},
  "operation": "topology"
}
Response

The binding returns a JSON with the following response:

{
  "brokers": [
    {
      "nodeId": null,
      "host": "172.18.0.5",
      "port": 26501,
      "partitions": [
        {
          "partitionId": 1,
          "role": null,
          "health": null
        }
      ],
      "version": "0.26.0"
    }
  ],
  "clusterSize": 1,
  "partitionsCount": 1,
  "replicationFactor": 1,
  "gatewayVersion": "0.26.0"
}

The response values are:

  • brokers - list of brokers part of this cluster
    • nodeId - unique (within a cluster) node ID for the broker
    • host - hostname of the broker
    • port - port for the broker
    • partitions - list of partitions managed or replicated on this broker
      • partitionId - the unique ID of this partition
      • role - the role of the broker for this partition
      • health - the health of this partition
    • version - broker version
  • clusterSize - how many nodes are in the cluster
  • partitionsCount - how many partitions are spread across the cluster
  • replicationFactor - configured replication factor for this cluster
  • gatewayVersion - gateway version

deploy-process

Deprecated alias of ‘deploy-resource’.

deploy-resource

The deploy-resource operation deploys a single resource to Zeebe. A resource can be a process (BPMN) or a decision and a decision requirement (DMN).

To perform a deploy-resource operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": "YOUR_FILE_CONTENT",
  "metadata": {
    "fileName": "products-process.bpmn"
  },
  "operation": "deploy-resource"
}

The metadata parameters are:

  • fileName - the name of the resource file
Response

The binding returns a JSON with the following response:

{
  "key": 2251799813685252,
  "deployments": [
    {
      "Metadata": {
        "Process": {
          "bpmnProcessId": "products-process",
          "version": 2,
          "processDefinitionKey": 2251799813685251,
          "resourceName": "products-process.bpmn"
        }
      }
    }
  ]
}

When a DMN resource is deployed, the metadata contains decision and decision requirements entries instead:

{
  "key": 2251799813685253,
  "deployments": [
    {
      "Metadata": {
        "Decision": {
          "dmnDecisionId": "products-approval",
          "dmnDecisionName": "Products approval",
          "version": 1,
          "decisionKey": 2251799813685252,
          "dmnDecisionRequirementsId": "Definitions_0c98xne",
          "decisionRequirementsKey": 2251799813685251
        }
      }
    },
    {
      "Metadata": {
        "DecisionRequirements": {
          "dmnDecisionRequirementsId": "Definitions_0c98xne",
          "dmnDecisionRequirementsName": "DRD",
          "version": 1,
          "decisionRequirementsKey": 2251799813685251,
          "resourceName": "products-approval.dmn"
        }
      }
    }
  ]
}

The response values are:

  • key - the unique key identifying the deployment
  • deployments - a list of deployed resources, e.g. processes
    • metadata - deployment metadata, each deployment has only one metadata
      • process - metadata of a deployed process
        • bpmnProcessId - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific process definition
        • version - the assigned process version
        • processDefinitionKey - the assigned key, which acts as a unique identifier for this process
        • resourceName - the resource name from which this process was parsed
      • decision - metadata of a deployed decision
        • dmnDecisionId - the dmn decision ID, as parsed during deployment; together with the version forms a unique identifier for a specific decision
        • dmnDecisionName - the dmn name of the decision, as parsed during deployment
        • version - the assigned decision version
        • decisionKey - the assigned decision key, which acts as a unique identifier for this decision
        • dmnDecisionRequirementsId - the dmn ID of the decision requirements graph that this decision is part of, as parsed during deployment
        • decisionRequirementsKey - the assigned key of the decision requirements graph that this decision is part of
      • decisionRequirements - metadata of a deployed decision requirements
        • dmnDecisionRequirementsId - the dmn decision requirements ID, as parsed during deployment; together with the version forms a unique identifier for a specific decision requirements graph
        • dmnDecisionRequirementsName - the dmn name of the decision requirements, as parsed during deployment
        • version - the assigned decision requirements version
        • decisionRequirementsKey - the assigned decision requirements key, which acts as a unique identifier for this decision requirements
        • resourceName - the resource name from which this decision requirements was parsed

create-instance

The create-instance operation creates and starts an instance of the specified process. The process definition to use to create the instance can be specified either using its unique key (as returned by the deploy-process operation), or using the BPMN process ID and a version.

Note that only processes with none start events can be started through this command.

Typically, process creation and execution are decoupled: the command creates a new process instance and immediately responds with the process instance ID, while execution continues after the response is sent. However, some use cases need to collect the results of a process once its execution is complete. By setting the withResult property, the command executes the process "synchronously" and returns the results via a set of variables. In that case, the response is sent when the process execution is complete.

For more information please visit the official documentation.

To perform a create-instance operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "bpmnProcessId": "products-process",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "create-instance"
}

Alternatively, the process definition can be referenced by its key:

{
  "data": {
    "processDefinitionKey": 2251799813685895,
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "create-instance"
}

To execute the process synchronously and receive its results, set withResult:

{
  "data": {
    "bpmnProcessId": "products-process",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    },
    "withResult": true,
    "requestTimeout": "30s",
    "fetchVariables": ["productId"]
  },
  "operation": "create-instance"
}

The data parameters are:

  • bpmnProcessId - the BPMN process ID of the process definition to instantiate
  • processDefinitionKey - the unique key identifying the process definition to instantiate
  • version - (optional, default: latest version) the version of the process to instantiate
  • variables - (optional) JSON document that will instantiate the variables for the root variable scope of the process instance; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object
  • withResult - (optional, default: false) if set to true, the process will be instantiated and executed synchronously
  • requestTimeout - (optional, only used if withResult=true) the request will be closed if the process is not completed before this timeout. If requestTimeout = 0, the generic requestTimeout configured in the gateway is used.
  • fetchVariables - (optional, only used if withResult=true) list of names of variables to be included in variables property of the response. If empty, all visible variables in the root scope will be returned.
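The constraint that variables must have a JSON object at its root can be checked client-side before sending the command. A small illustrative sketch:

```python
import json

def validate_variables(doc: str) -> dict:
    """Ensure a variables document has an object at its root.

    '{"a": 1, "b": 2}' is accepted; '[{"a": 1}]' or '"a"' is rejected,
    because variables are mapped in a key-value fashion.
    """
    parsed = json.loads(doc)
    if not isinstance(parsed, dict):
        raise ValueError("the root of the variables document must be a JSON object")
    return parsed
```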
Response

The binding returns a JSON with the following response:

{
  "processDefinitionKey": 2251799813685895,
  "bpmnProcessId": "products-process",
  "version": 3,
  "processInstanceKey": 2251799813687851,
  "variables": "{\"productId\":\"some-product-id\"}"
}

The response values are:

  • processDefinitionKey - the key of the process definition which was used to create the process instance
  • bpmnProcessId - the BPMN process ID of the process definition which was used to create the process instance
  • version - the version of the process definition which was used to create the process instance
  • processInstanceKey - the unique identifier of the created process instance
  • variables - (optional, only if withResult=true was used in the request) JSON document consists of visible variables in the root scope; returned as a serialized JSON document

cancel-instance

The cancel-instance operation cancels a running process instance.

To perform a cancel-instance operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "processInstanceKey": 2251799813687851
  },
  "operation": "cancel-instance"
}

The data parameters are:

  • processInstanceKey - the process instance key
Response

The binding does not return a response body.

set-variables

The set-variables operation creates or updates variables for an element instance (e.g. process instance, flow element instance).

To perform a set-variables operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "elementInstanceKey": 2251799813687880,
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "set-variables"
}

The data parameters are:

  • elementInstanceKey - the unique identifier of a particular element; can be the process instance key (as obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
  • local - (optional, default: false) if true, the variables are merged strictly into the local scope (as indicated by elementInstanceKey); this means the variables are not propagated to upper scopes. For example, say we have two scopes, '1' and '2', with effective variables 1 => { "foo" : 2 } and 2 => { "bar" : 1 }. If we send an update request with elementInstanceKey = 2, variables { "foo" : 5 }, and local set to true, then scope 1 is unchanged and scope 2 becomes { "bar" : 1, "foo" : 5 }. If local were false, scope 1 would become { "foo" : 5 } and scope 2 would remain { "bar" : 1 }
  • variables - a JSON serialized document describing variables as key value pairs; the root of the document must be an object
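The scope semantics of the local flag can be illustrated with a small simulation. This is a sketch of the documented behavior, not Zeebe's implementation; scopes and their parent links are modeled as plain dicts:

```python
def set_variables(scopes, parents, element_key, variables, local=False):
    """Apply a set-variables command to a tree of scopes.

    scopes: scope key -> dict of variables; parents: scope key -> parent key.
    With local=True every variable is written into the target scope only.
    Otherwise each variable propagates up to the nearest enclosing scope
    that already defines it (staying in the root scope if none does).
    """
    for name, value in variables.items():
        if local:
            scopes[element_key][name] = value
            continue
        current = element_key
        while name not in scopes[current] and parents.get(current) is not None:
            current = parents[current]
        scopes[current][name] = value
    return scopes
```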
Response

The binding returns a JSON with the following response:

{
  "key": 2251799813687896
}

The response values are:

  • key - the unique key of the set variables command

resolve-incident

The resolve-incident operation resolves an incident.

To perform a resolve-incident operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "incidentKey": 2251799813686123
  },
  "operation": "resolve-incident"
}

The data parameters are:

  • incidentKey - the unique ID of the incident to resolve
Response

The binding does not return a response body.

publish-message

The publish-message operation publishes a single message. Messages are published to specific partitions computed from their correlation keys.

To perform a publish-message operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "messageName": "product-message",
    "correlationKey": "2",
    "timeToLive": "1m",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "publish-message"
}

The data parameters are:

  • messageName - the name of the message
  • correlationKey - (optional) the correlation key of the message
  • timeToLive - (optional) how long the message should be buffered on the broker
  • messageId - (optional) the unique ID of the message; can be omitted, and is only useful to ensure that just one message with the given ID will ever be published (during its lifetime)
  • variables - (optional) the message variables as a JSON document; to be valid, the root of the document must be an object, e.g. { "a": "foo" }. [ "foo" ] would not be valid
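The messageId deduplication guarantee can be modeled with a toy broker-side buffer. This is illustrative only; the real broker enforces the guarantee per message lifetime:

```python
class MessageBuffer:
    """Toy model of broker-side message deduplication by messageId."""

    def __init__(self):
        self.messages = []
        self.seen_ids = set()

    def publish(self, message_name, correlation_key, message_id=None):
        # During a message's lifetime, a second publish with the same
        # ID is rejected; messages without an ID are never deduplicated.
        if message_id is not None:
            if message_id in self.seen_ids:
                raise ValueError(f"message id {message_id!r} already published")
            self.seen_ids.add(message_id)
        self.messages.append((message_name, correlation_key))
        return len(self.messages)  # stand-in for the returned key
```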
Response

The binding returns a JSON with the following response:

{
  "key": 2251799813688225
}

The response values are:

  • key - the unique ID of the message that was published

activate-jobs

The activate-jobs operation iterates through all known partitions round-robin, activates jobs up to the requested maximum, and streams them back to the client as they are activated.

To perform an activate-jobs operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobType": "fetch-products",
    "maxJobsToActivate": 5,
    "timeout": "5m",
    "workerName": "products-worker",
    "fetchVariables": [
      "productId",
      "productName",
      "productKey"
    ],
    "requestTimeout": "30s"
  },
  "operation": "activate-jobs"
}

The data parameters are:

  • jobType - the job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="fetch-products" />)
  • maxJobsToActivate - the maximum jobs to activate by this request
  • timeout - (optional, default: 5 minutes) a job returned after this call will not be activated by another call until the timeout has been reached
  • workerName - (optional, default: default) the name of the worker activating the jobs, mostly used for logging purposes
  • fetchVariables - (optional) a list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned
  • requestTimeout - (optional) the request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated.
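The round-robin activation described above can be sketched as follows (a simplified model; the real gateway streams jobs back as partitions respond):

```python
from collections import deque

def activate_jobs(partitions, max_jobs_to_activate):
    """Round-robin over partitions, activating up to the requested maximum.

    partitions: list of deques holding pending jobs, one per partition.
    Returns the jobs in activation order.
    """
    activated = []
    queues = deque(p for p in partitions if p)
    while queues and len(activated) < max_jobs_to_activate:
        queue = queues.popleft()
        activated.append(queue.popleft())
        if queue:  # partition still has jobs; give it another turn later
            queues.append(queue)
    return activated
```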
Response

The binding returns a JSON with the following response:

[
  {
    "key": 2251799813685267,
    "type": "fetch-products",
    "processInstanceKey": 2251799813685260,
    "bpmnProcessId": "products",
    "processDefinitionVersion": 1,
    "processDefinitionKey": 2251799813685249,
    "elementId": "Activity_test",
    "elementInstanceKey": 2251799813685266,
    "customHeaders": "{\"process-header-1\":\"1\",\"process-header-2\":\"2\"}",
    "worker": "test", 
    "retries": 1,
    "deadline": 1694091934039,
    "variables":"{\"productId\":\"some-product-id\"}"
  }
]

The response values are:

  • key - the key, a unique identifier for the job
  • type - the type of the job (should match what was requested)
  • processInstanceKey - the job’s process instance key
  • bpmnProcessId - the bpmn process ID of the job process definition
  • processDefinitionVersion - the version of the job process definition
  • processDefinitionKey - the key of the job process definition
  • elementId - the associated task element ID
  • elementInstanceKey - the unique key identifying the associated task, unique within the scope of the process instance
  • customHeaders - a set of custom headers defined during modelling; returned as a serialized JSON document
  • worker - the name of the worker which activated this job
  • retries - the number of retries left for this job (should always be positive)
  • deadline - when the job can be activated again, sent as a UNIX epoch timestamp
  • variables - computed at activation time, consisting of all visible variables to the task scope; returned as a serialized JSON document

complete-job

The complete-job operation completes a job with the given payload, which allows completing the associated service task.

To perform a complete-job operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813686172,
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "complete-job"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained from the activate jobs response
  • variables - (optional) a JSON document representing the variables in the current task scope
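Each job returned by activate-jobs carries a key field, which is then passed as jobKey to complete-job. A minimal sketch of that flow (the `post` callable stands in for an HTTP POST to the Dapr sidecar, e.g. `POST http://localhost:3500/v1.0/bindings/zeebe-command` with a placeholder binding name):

```python
def build_complete_job_request(job_key, variables=None):
    """Build the complete-job body; job_key is the "key" field of an activated job."""
    data = {"jobKey": job_key}
    if variables is not None:
        data["variables"] = variables  # variables set in the current task scope
    return {"data": data, "operation": "complete-job"}


def complete_all(activated_jobs, post):
    """Complete every job from an activate-jobs response via the given sender."""
    for job in activated_jobs:
        # "key" comes from the activate-jobs response shown above.
        post(build_complete_job_request(job["key"]))
```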
Response

The binding does not return a response body.

fail-job

The fail-job operation marks the job as failed. If the retries argument is positive, the job becomes immediately activatable again and a worker can retry processing it. If it is zero or negative, however, an incident is raised, tagged with the given errorMessage, and the job will not be activatable until the incident is resolved.

To perform a fail-job operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813685739,
    "retries": 5,
    "errorMessage": "some error occurred",
    "retryBackOff": "30s",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "fail-job"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained when activating the job
  • retries - the amount of retries the job should have left
  • errorMessage - (optional) a message describing why the job failed; this is particularly useful if a job runs out of retries and an incident is raised, as the message can help explain why the incident occurred
  • retryBackOff - (optional) the back-off timeout for the next retry
  • variables - (optional) JSON document that will instantiate the variables at the local scope of the job’s associated task; it must be a JSON object, as variables are mapped in a key-value fashion. For example, { "a": 1, "b": 2 } creates two variables, named "a" and "b", with their associated values. [{ "a": 1, "b": 2 }] is not a valid argument, as the root of the JSON document is an array, not an object.
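When failing a job, a worker typically decrements the retries it received at activation and may grow the retryBackOff between attempts. Exponential back-off is one common strategy, not something the binding mandates; a sketch with a hypothetical base of 15 seconds:

```python
def retry_back_off(attempt, base_seconds=15):
    """Exponential back-off rendered as a duration string, e.g. "30s" for attempt 1."""
    return f"{base_seconds * 2 ** attempt}s"


def build_fail_job_request(job_key, retries_left, error_message, attempt=0):
    """Build the fail-job body described above."""
    return {
        "data": {
            "jobKey": job_key,
            "retries": retries_left,  # zero or negative raises an incident
            "errorMessage": error_message,
            "retryBackOff": retry_back_off(attempt),
        },
        "operation": "fail-job",
    }
```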
Response

The binding does not return a response body.

update-job-retries

The update-job-retries operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, once the underlying problem has been solved.

To perform an update-job-retries operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813686172,
    "retries": 10
  },
  "operation": "update-job-retries"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained through the activate-jobs operation
  • retries - the new amount of retries for the job; must be positive
Response

The binding does not return a response body.

throw-error

The throw-error operation throws an error to indicate that a business error occurred while processing the job. The error is identified by an error code and is handled by an error catch event in the process with the same error code.

To perform a throw-error operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813686172,
    "errorCode": "product-fetch-error",
    "errorMessage": "The product could not be fetched",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "throw-error"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained when activating the job
  • errorCode - the error code that will be matched with an error catch event
  • errorMessage - (optional) an error message that provides additional context
  • variables - (optional) JSON document that will instantiate the variables at the local scope of the job’s associated task; it must be a JSON object, as variables are mapped in a key-value fashion. For example, { "a": 1, "b": 2 } creates two variables, named "a" and "b", with their associated values. [{ "a": 1, "b": 2 }] is not a valid argument, as the root of the JSON document is an array, not an object.
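Since the errorCode must match an error catch event modelled in the process, a worker often maps its application-level failures to a fixed set of codes. A sketch, where the exception names and codes are hypothetical:

```python
# Hypothetical mapping from application failure names to BPMN error codes
# that must exist as error catch events in the process model.
ERROR_CODES = {
    "ProductNotFound": "product-fetch-error",
}


def build_throw_error_request(job_key, failure_name, message, variables=None):
    """Build the throw-error body, falling back to a catch-all code."""
    body = {
        "data": {
            "jobKey": job_key,
            "errorCode": ERROR_CODES.get(failure_name, "unknown-error"),
            "errorMessage": message,
        },
        "operation": "throw-error",
    }
    if variables is not None:
        body["data"]["variables"] = variables
    return body
```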
Response

The binding does not return a response body.

48 - Zeebe JobWorker binding spec

Detailed documentation on the Zeebe JobWorker binding component

Component format

To set up the Zeebe JobWorker binding, create a component of type bindings.zeebe.jobworker. See this guide on how to create and apply a binding configuration.

See this for Zeebe JobWorker documentation.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.zeebe.jobworker
  version: v1
  metadata:
  - name: gatewayAddr
    value: "<host>:<port>"
  - name: gatewayKeepAlive
    value: "45s"
  - name: usePlainTextConnection
    value: "true"
  - name: caCertificatePath
    value: "/path/to/ca-cert"
  - name: workerName
    value: "products-worker"
  - name: workerTimeout
    value: "5m"
  - name: requestTimeout
    value: "15s"
  - name: jobType
    value: "fetch-products"
  - name: maxJobsActive
    value: "32"
  - name: concurrency
    value: "4"
  - name: pollInterval
    value: "100ms"
  - name: pollThreshold
    value: "0.3"
  - name: fetchVariables
    value: "productId, productName, productKey"
  - name: autocomplete
    value: "true"
  - name: retryBackOff
    value: "30s"
  - name: direction
    value: "input"

Spec metadata fields

| Field | Required | Binding support | Details | Example |
|-------|----------|-----------------|---------|---------|
| gatewayAddr | Y | Input | Zeebe gateway address | "localhost:26500" |
| gatewayKeepAlive | N | Input | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | "45s" |
| usePlainTextConnection | N | Input | Whether to use a plain text connection or not | "true", "false" |
| caCertificatePath | N | Input | The path to the CA cert | "/path/to/ca-cert" |
| workerName | N | Input | The name of the worker activating the jobs, mostly used for logging purposes | "products-worker" |
| workerTimeout | N | Input | A job returned after this call will not be activated by another call until the timeout has been reached; defaults to 5 minutes | "5m" |
| requestTimeout | N | Input | The request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated. Defaults to 10 seconds | "30s" |
| jobType | Y | Input | The job type, as defined in the BPMN process (e.g. `<zeebe:taskDefinition type="fetch-products" />`) | "fetch-products" |
| maxJobsActive | N | Input | The maximum number of jobs which will be activated for this worker at the same time. Defaults to 32 | "32" |
| concurrency | N | Input | The maximum number of concurrent spawned goroutines to complete jobs. Defaults to 4 | "4" |
| pollInterval | N | Input | The maximal interval between polling for new jobs. Defaults to 100 milliseconds | "100ms" |
| pollThreshold | N | Input | The threshold of buffered activated jobs before polling for new jobs, i.e. threshold * maxJobsActive. Defaults to 0.3 | "0.3" |
| fetchVariables | N | Input | A list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned | "productId", "productName", "productKey" |
| autocomplete | N | Input | Indicates if a job should be autocompleted or not. If not set, all jobs are auto-completed by default. Disable it if the worker should manually complete or fail the job with either a business error or an incident | "true", "false" |
| retryBackOff | N | Input | The back-off timeout for the next retry if a job fails | "15s" |
| direction | N | Input | The direction of the binding | "input" |

Binding support

This component supports input binding interfaces.

Input binding

Variables

The Zeebe process engine manages process state as well as process variables, which can be passed on process instantiation or created and updated during process execution. These variables can be passed to a registered job worker by defining the variable names as a comma-separated list in the fetchVariables metadata field. The process engine then passes these variables, with their current values, to the job worker implementation.

If the binding registers the three variables productId, productName and productKey, the worker is called with the following JSON body:

{
  "productId": "some-product-id",
  "productName": "some-product-name",
  "productKey": "some-product-key"
}

Note: if the fetchVariables metadata field is not set, all process variables are passed to the worker.

Headers

The Zeebe process engine has the ability to pass custom task headers to a job worker. These headers can be defined for every service task. Task headers will be passed by the binding as metadata (HTTP headers) to the job worker.

The binding also passes the following job-related variables as metadata. The values are passed as strings. The table also lists the original data type, so that the value can be converted back to the equivalent data type in the programming language used by the worker.

| Metadata | Data type | Description |
|----------|-----------|-------------|
| X-Zeebe-Job-Key | int64 | The key, a unique identifier for the job |
| X-Zeebe-Job-Type | string | The type of the job (should match what was requested) |
| X-Zeebe-Process-Instance-Key | int64 | The job’s process instance key |
| X-Zeebe-Bpmn-Process-Id | string | The bpmn process ID of the job process definition |
| X-Zeebe-Process-Definition-Version | int32 | The version of the job process definition |
| X-Zeebe-Process-Definition-Key | int64 | The key of the job process definition |
| X-Zeebe-Element-Id | string | The associated task element ID |
| X-Zeebe-Element-Instance-Key | int64 | The unique key identifying the associated task, unique within the scope of the process instance |
| X-Zeebe-Worker | string | The name of the worker which activated this job |
| X-Zeebe-Retries | int32 | The amount of retries left to this job (should always be positive) |
| X-Zeebe-Deadline | int64 | When the job can be activated again, sent as a UNIX epoch timestamp |
| X-Zeebe-Autocomplete | bool | The autocomplete status that is defined in the binding metadata |
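Because all metadata values arrive as strings, a worker that needs the original types has to convert them back. A sketch of such a conversion, using the types from the table above (the helper name is illustrative):

```python
# Expected types per the table above; every value arrives as a string.
_ZEEBE_HEADER_TYPES = {
    "X-Zeebe-Job-Key": int,
    "X-Zeebe-Job-Type": str,
    "X-Zeebe-Process-Instance-Key": int,
    "X-Zeebe-Bpmn-Process-Id": str,
    "X-Zeebe-Process-Definition-Version": int,
    "X-Zeebe-Process-Definition-Key": int,
    "X-Zeebe-Element-Id": str,
    "X-Zeebe-Element-Instance-Key": int,
    "X-Zeebe-Worker": str,
    "X-Zeebe-Retries": int,
    "X-Zeebe-Deadline": int,
    "X-Zeebe-Autocomplete": lambda s: s.lower() == "true",
}


def typed_job_metadata(headers):
    """Convert the string metadata sent by the binding back to typed values."""
    return {
        name: _ZEEBE_HEADER_TYPES[name](value)
        for name, value in headers.items()
        if name in _ZEEBE_HEADER_TYPES  # custom task headers pass through elsewhere
    }
```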