Developing applications with Dapr
- 1: Building blocks
- 1.1: Service invocation
- 1.1.1: Service invocation overview
- 1.1.2: How-To: Invoke services using HTTP
- 1.1.3: How-To: Invoke services using gRPC
- 1.1.4: How-To: Invoke Non-Dapr Endpoints using HTTP
- 1.1.5: How to: Service invocation across namespaces
- 1.2: Publish & subscribe messaging
- 1.2.1: Publish and subscribe overview
- 1.2.2: How to: Publish a message and subscribe to a topic
- 1.2.3: Publishing & subscribing messages with CloudEvents
- 1.2.4: Publishing & subscribing messages without CloudEvents
- 1.2.5: How-To: Route messages to different event handlers
- 1.2.6: Declarative, streaming, and programmatic subscription types
- 1.2.7: Dead Letter Topics
- 1.2.8: How to: Set up pub/sub namespace consumer groups
- 1.2.9: How to: Horizontally scale subscribers with StatefulSets
- 1.2.10: Scope Pub/sub topic access
- 1.2.11: Message Time-to-Live (TTL)
- 1.2.12: Publish and subscribe to bulk messages
- 1.3: Workflow
- 1.3.1: Workflow overview
- 1.3.2: Features and concepts
- 1.3.3: Workflow patterns
- 1.3.4: Workflow architecture
- 1.3.5: How to: Author a workflow
- 1.3.6: How to: Manage workflows
- 1.4: State management
- 1.4.1: State management overview
- 1.4.2: How-To: Save and get state
- 1.4.3: How-To: Query state
- 1.4.4: How-To: Build a stateful service
- 1.4.5: How-To: Enable the transactional outbox pattern
- 1.4.6: How-To: Share state between applications
- 1.4.7: How-To: Encrypt application state
- 1.4.8: Work with backend state stores
- 1.4.8.1: Azure Cosmos DB
- 1.4.8.2: Redis
- 1.4.8.3: SQL server
- 1.4.9: State Time-to-Live (TTL)
- 1.5: Bindings
- 1.5.1: Bindings overview
- 1.5.2: How-To: Trigger your application with input bindings
- 1.5.3: How-To: Use output bindings to interface with external resources
- 1.6: Actors
- 1.6.1: Actors overview
- 1.6.2: Actor runtime features
- 1.6.3: Actor runtime configuration parameters
- 1.6.4: Namespaced actors
- 1.6.5: Actors timers and reminders
- 1.6.6: How to: Enable partitioning of actor reminders
- 1.6.7: How-to: Interact with virtual actors using scripting
- 1.6.8: How-to: Enable and use actor reentrancy in Dapr
- 1.7: Secrets management
- 1.7.1: Secrets management overview
- 1.7.2: How To: Retrieve a secret
- 1.7.3: How To: Use secret scoping
- 1.8: Configuration
- 1.9: Distributed lock
- 1.9.1: Distributed lock overview
- 1.9.2: How-To: Use a lock
- 1.10: Cryptography
- 1.10.1: Cryptography overview
- 1.10.2: How to: Use the cryptography APIs
- 1.11: Jobs
- 1.11.1: Jobs overview
- 1.11.2: Features and concepts
- 1.11.3: How-To: Schedule and handle triggered jobs
- 1.12: Conversation
- 2: Dapr Software Development Kits (SDKs)
- 2.1: Dapr .NET SDK
- 2.1.1: Getting started with the Dapr client .NET SDK
- 2.1.1.1: DaprClient usage
- 2.1.2: Dapr actors .NET SDK
- 2.1.2.1: The IActorProxyFactory interface
- 2.1.2.2: Author & run actors
- 2.1.2.3: Actor serialization in the .NET SDK
- 2.1.2.4: How to: Run and use virtual actors in the .NET SDK
- 2.1.3: Dapr Workflow .NET SDK
- 2.1.4: Dapr AI .NET SDK
- 2.1.5: Dapr Jobs .NET SDK
- 2.1.5.1: How to: Author and manage Dapr Jobs in the .NET SDK
- 2.1.5.2: DaprJobsClient usage
- 2.1.6: Dapr Cryptography .NET SDK
- 2.1.7: Dapr Messaging .NET SDK
- 2.1.7.1: How to: Author and manage Dapr streaming subscriptions in the .NET SDK
- 2.1.7.2: DaprPublishSubscribeClient usage
- 2.1.8: Best Practices for the Dapr .NET SDK
- 2.1.8.1: Error Model in the Dapr .NET SDK
- 2.1.8.2: Experimental Attributes
- 2.1.8.3: Dapr source code analyzers and generators
- 2.1.9: Developing applications with the Dapr .NET SDK
- 2.1.9.1: Dapr .NET SDK Development with Dapr CLI
- 2.1.9.2: Dapr .NET SDK Development with Docker-Compose
- 2.1.9.3: Dapr .NET SDK Development with .NET Aspire
- 2.1.10: How to troubleshoot and debug with the Dapr .NET SDK
- 2.1.10.1: Troubleshoot Pub/Sub with the .NET SDK
- 2.2: Dapr Go SDK
- 2.2.1: Getting started with the Dapr client Go SDK
- 2.2.2: Getting started with the Dapr Service (Callback) SDK for Go
- 2.3: Dapr Java SDK
- 2.3.1: AI
- 2.3.2: Getting started with the Dapr client Java SDK
- 2.3.2.1: Properties
- 2.3.3: Jobs
- 2.3.4: Workflow
- 2.3.5: Getting started with the Dapr and Spring Boot
- 2.4: JavaScript SDK
- 2.4.1: JavaScript Client SDK
- 2.4.2: JavaScript Server SDK
- 2.4.3: JavaScript SDK for Actors
- 2.4.4: Logging in JavaScript SDK
- 2.4.5: JavaScript Examples
- 2.4.6: How to: Author and manage Dapr Workflow in the JavaScript SDK
- 2.5: Dapr PHP SDK
- 2.5.1: Virtual Actors
- 2.5.1.1: Production Reference: Actors
- 2.5.2: The App
- 2.5.2.1: Unit Testing
- 2.5.3: Custom Serialization
- 2.5.4: Publish and Subscribe with PHP
- 2.5.5: State Management with PHP
- 2.6: Dapr Python SDK
- 2.6.1: Getting started with the Dapr client Python SDK
- 2.6.2: Getting started with the Dapr actor Python SDK
- 2.6.3: Dapr Python SDK extensions
- 2.7: Dapr Rust SDK
- 3: Dapr Agents
- 3.1: Introduction
- 3.2: Getting Started
- 3.3: Why Dapr Agents
- 3.4: Core Concepts
- 3.5: Agentic Patterns
- 3.6: Integrations
- 3.7: Quickstarts
- 4: Error codes
- 4.1: Errors overview
- 4.2: Error codes reference guide
- 4.3: Handling HTTP error codes
- 4.4: Handling gRPC error codes
- 5: Local development
- 5.1: IDE support
- 5.1.1: Visual Studio Code integration with Dapr
- 5.1.1.1: Dapr Visual Studio Code extension overview
- 5.1.1.2: How-To: Debug Dapr applications with Visual Studio Code
- 5.1.1.3: Developing Dapr applications with Dev Containers
- 5.1.2: IntelliJ
- 5.2: Multi-App Run
- 5.3: How to: Use the gRPC interface in your Dapr application
- 5.4: Serialization in Dapr's SDKs
- 6: Debugging Dapr applications and the Dapr control plane
- 7: Integrations
- 7.1: Integrations with AWS
- 7.1.1: Authenticating to AWS
- 7.2: Integrations with Azure
- 7.2.1: Authenticate to Azure
- 7.2.1.1: Authenticating to Azure
- 7.2.1.2: How to: Generate a new Microsoft Entra ID application and Service Principal
- 7.2.1.3: How to: Use managed identities
- 7.2.2: Dapr integration policies for Azure API Management
- 7.2.3: Dapr extension for Azure Functions runtime
- 7.2.4: Dapr extension for Azure Kubernetes Service (AKS)
- 7.3: Integrations with Diagrid
- 7.4: How to: Autoscale a Dapr app with KEDA
- 7.5: How to: Use the Dapr CLI in a GitHub Actions workflow
- 7.6: How to: Use the Dapr Kubernetes Operator
- 7.7: How to: Integrate with Kratix
- 7.8: How to: Integrate with Argo CD
- 8: Components
- 8.1: Pluggable components
- 8.1.1: Pluggable components overview
- 8.1.2: How to: Implement pluggable components
- 8.1.3: Pluggable components SDKs
- 8.1.3.1: Getting started with the Dapr pluggable components .NET SDK
- 8.1.3.1.1: Implementing a .NET input/output binding component
- 8.1.3.1.2: Implementing a .NET pub/sub component
- 8.1.3.1.3: Implementing a .NET state store component
- 8.1.3.1.4: Advanced uses of the Dapr pluggable components .NET SDK
- 8.1.3.1.4.1: Application Environment of a .NET Dapr pluggable component
- 8.1.3.1.4.2: Lifetimes of .NET Dapr pluggable components
- 8.1.3.1.4.3: Multiple services in a .NET Dapr pluggable component
- 8.1.3.2: Getting started with the Dapr pluggable components Go SDK
- 8.1.3.2.1: Implementing a Go input/output binding component
- 8.1.3.2.2: Implementing a Go pub/sub component
- 8.1.3.2.3: Implementing a Go state store component
- 8.1.3.2.4: Advanced uses of the Dapr pluggable components Go SDK
- 8.2: How to: Author middleware components
1 - Building blocks
Get a high-level overview of Dapr building blocks in the Concepts section.

1.1 - Service invocation
More about Dapr Service Invocation
Learn more about how to use Dapr Service Invocation:
- Try the Service Invocation quickstart.
- Explore service invocation via any of the supporting Dapr SDKs.
- Review the Service Invocation API reference documentation.
1.1.1 - Service invocation overview
Using service invocation, your application can reliably and securely communicate with other applications using the standard gRPC or HTTP protocols.
In many microservice-based applications, multiple services need the ability to communicate with one another. This inter-service communication requires that application developers handle problems like:
- Service discovery. How do I discover my different services?
- Standardizing API calls between services. How do I invoke methods between services?
- Secure inter-service communication. How do I call other services securely with encryption and apply access control on the methods?
- Mitigating request timeouts or failures. How do I handle retries and transient errors?
- Implementing observability and tracing. How do I use tracing to see a call graph with metrics to diagnose issues in production?
Service invocation API
Dapr addresses these challenges by providing a service invocation API that acts similar to a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, metrics, error handling, encryption and more.
Dapr uses a sidecar architecture. To invoke an application using Dapr:
- You use the invoke API on the Dapr instance.
- Each application communicates with its own instance of Dapr.
- The Dapr instances discover and communicate with each other.
The following overview video and demo demonstrate how Dapr service invocation works.
The diagram below is an overview of how Dapr’s service invocation works between two Dapr-ized applications.

- Service A makes an HTTP or gRPC call targeting Service B. The call goes to the local Dapr sidecar.
- Dapr discovers Service B’s location using the name resolution component which is running on the given hosting platform.
- Dapr forwards the message to Service B’s Dapr sidecar.
- Note: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars can be either HTTP or gRPC.
- Service B’s Dapr sidecar forwards the request to the specified endpoint (or method) on Service B. Service B then runs its business logic code.
- Service B sends a response to Service A. The response goes to Service B’s sidecar.
- Dapr forwards the response to Service A’s Dapr sidecar.
- Service A receives the response.
You can also call non-Dapr HTTP endpoints using the service invocation API. For example, you may only use Dapr in part of an overall application, may not have access to the code to migrate an existing application to use Dapr, or simply need to call an external HTTP service. Read “How-To: Invoke Non-Dapr Endpoints using HTTP” for more information.
Features
Service invocation provides several features to make it easy for you to call methods between applications or to call external HTTP endpoints.
HTTP and gRPC service invocation
- HTTP: If you’re already using HTTP protocols in your application, using the Dapr HTTP header might be the easiest way to get started. You don’t need to change your existing endpoint URLs; just add the dapr-app-id header and you’re ready to go. For more information, see Invoke Services using HTTP.
- gRPC: Dapr allows users to keep their own proto services and work natively with gRPC. This means that you can use service invocation to call your existing gRPC apps without having to include any Dapr SDKs or custom gRPC services. For more information, see the how-to tutorial for Dapr and gRPC.
Service-to-service security
With the Dapr Sentry service, all calls between Dapr applications can be made secure with mutual (mTLS) authentication on hosted platforms, including automatic certificate rollover.
For more information read the service-to-service security article.
Resiliency including retries
In the event of call failures and transient errors, service invocation provides a resiliency feature that performs automatic retries with backoff time periods. To find out more, see the Resiliency article here.
Tracing and metrics with observability
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications. This is especially important in production scenarios, providing call graphs and metrics on the calls between your services. For more information read about observability.
Access control
With access policies, applications can control:
- Which applications are allowed to call them.
- What applications are authorized to do.
For example, you can restrict sensitive applications holding personnel information from being accessed by unauthorized applications. Combined with service-to-service secure communication, you can provide for soft multi-tenancy deployments.
For more information read the access control allow lists for service invocation article.
Namespace scoping
You can scope applications to namespaces for deployment and security and call between services deployed to different namespaces. For more information, read the Service invocation across namespaces article.
Round robin load balancing with mDNS
Dapr provides round robin load balancing of service invocation requests with the mDNS protocol, for example with a single machine or with multiple, networked, physical machines.
The diagram below shows an example of how this works. If you have 1 instance of an application with app ID FrontEnd and 3 instances of an application with app ID Cart, and you call from the FrontEnd app to the Cart app, Dapr round-robins between the 3 instances. These instances can be on the same machine or on different machines.

Note: App ID is unique per application, not application instance. Regardless of how many instances of that application exist (due to scaling), all of them will share the same app ID.
Swappable service discovery
Dapr can run on a variety of hosting platforms. To enable swappable service discovery with service invocation, Dapr uses name resolution components. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster.
Self-hosted machines can use the mDNS name resolution component. As an alternative, you can use the SQLite name resolution component to run Dapr on single-node environments and for local development scenarios. Dapr sidecars that are part of the cluster store their information in a SQLite database on the local machine.
The Consul name resolution component is particularly suited to multi-machine deployments and can be used in any hosting environment, including Kubernetes, multiple VMs, or self-hosted.
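As an illustrative sketch, the name resolution component is selected in the Dapr Configuration resource. For example, a self-hosted sidecar could be pointed at the SQLite component like this (the connectionString path is an assumption for this example):
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "sqlite"
    version: "v1"
    configuration:
      connectionString: "/home/user/.dapr/nr.db"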
Streaming for HTTP service invocation
You can handle data as a stream in HTTP service invocation. This can offer improvements in performance and memory utilization when using Dapr to invoke another service using HTTP with large request or response bodies.
The diagram below demonstrates the six steps of data flow.

- Request: “App A” to “Dapr sidecar A”
- Request: “Dapr sidecar A” to “Dapr sidecar B”
- Request: “Dapr sidecar B” to “App B”
- Response: “App B” to “Dapr sidecar B”
- Response: “Dapr sidecar B” to “Dapr sidecar A”
- Response: “Dapr sidecar A” to “App A”
Example Architecture
Following the above call sequence, suppose you have the applications as described in the Hello World tutorial, where a Python app invokes a Node.js app. In such a scenario, the Python app would be “Service A” and the Node.js app would be “Service B”.
The diagram below shows sequence 1-7 again on a local machine showing the API calls:

- The Node.js app has a Dapr app ID of nodeapp. The Python app invokes the Node.js app’s neworder method by POSTing to http://localhost:3500/v1.0/invoke/nodeapp/method/neworder, which first goes to the Python app’s local Dapr sidecar.
- Dapr discovers the Node.js app’s location using the name resolution component (in this case mDNS while self-hosted), which runs on your local machine.
- Dapr forwards the request to the Node.js app’s sidecar using the location it just received.
- The Node.js app’s sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, logging the incoming message and then persisting the order ID to Redis (not shown in the diagram).
- The Node.js app sends a response to the Python app through the Node.js sidecar.
- Dapr forwards the response to the Python Dapr sidecar.
- The Python app receives the response.
Try out service invocation
Quickstarts & tutorials
The Dapr docs contain multiple quickstarts that leverage the service invocation building block in different example architectures. To get a straightforward understanding of the service invocation API and its features, we recommend starting with our quickstarts:
| Quickstart/tutorial | Description |
| --- | --- |
| Service invocation quickstart | This quickstart gets you interacting directly with the service invocation building block. |
| Hello world tutorial | This tutorial shows how to use both the service invocation and state management building blocks, all running locally on your machine. |
| Hello world Kubernetes tutorial | This tutorial walks through using Dapr in Kubernetes and covers both the service invocation and state management building blocks. |
Start using service invocation directly in your app
Want to skip the quickstarts? Not a problem. You can try out the service invocation building block directly in your application to securely communicate with other services. After Dapr is installed, you can begin using the service invocation API in the following ways.
Invoke services using:
- HTTP and gRPC service invocation (recommended set up method)
  - HTTP - Allows you to just add the dapr-app-id header and you’re ready to get started. Read more on this here, Invoke Services using HTTP.
  - gRPC - For gRPC based applications, the service invocation API is also available. Run the gRPC server, then invoke services using the Dapr CLI. Read more on this in Configuring Dapr to use gRPC and Invoke services using gRPC.
- Direct call to the API - In addition to proxying, there’s also an option to directly call the service invocation API to invoke a GET endpoint. Just update your address URL to localhost:<dapr-http-port> and you’ll be able to directly call the API. You can also read more on this in the Invoke Services using HTTP docs linked above under HTTP proxying.
- SDKs - If you’re using a Dapr SDK, you can directly use service invocation through the SDK. Select the SDK you need and use the Dapr client to invoke a service, as sketched below. Read more on this in Dapr SDKs.
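A minimal sketch with the Python SDK might look like the following (the app ID, method name, and payload are illustrative):
from dapr.clients import DaprClient

# Invoke the "orders" method on the app with ID "order-processor".
# The client finds the local sidecar via the DAPR_HTTP_PORT/DAPR_GRPC_PORT environment variables.
with DaprClient() as client:
    resp = client.invoke_method(
        app_id="order-processor",
        method_name="orders",
        data=b'{"orderId": 100}',
        http_verb="POST",
    )
    print(resp.text())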
For quick testing, try using the Dapr CLI for service invocation:
- Dapr CLI command - Once the Dapr CLI is set up, use the dapr invoke --method <method-name> command along with the method flag and the method of interest. Read more on this in Dapr CLI.
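For example, a hypothetical call to an orders method on an app with the ID order-processor (both names illustrative):
dapr invoke --app-id order-processor --method orders --data '{"orderId": 100}'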
Next steps
- Read the service invocation API specification. This reference guide for service invocation describes how to invoke methods on other services.
- Understand the service invocation performance numbers.
- Take a look at observability. Here you can dig into Dapr’s monitoring tools like tracing, metrics and logging.
- Read up on our security practices around mTLS encryption, token authentication, and endpoint authorization.
1.1.2 - How-To: Invoke services using HTTP
This article demonstrates how to deploy services, each with a unique application ID, so that other services can discover and call endpoints on them using service invocation over HTTP.

Note
If you haven’t already, try out the service invocation quickstart for a quick walk-through on how to use the service invocation API.
Choose an ID for your service
Dapr allows you to assign a global, unique ID for your app. This ID encapsulates the state for your application, regardless of the number of instances it may have.
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- python3 checkout/app.py
dapr run --app-id order-processor --app-port 8001 --app-protocol http --dapr-http-port 3501 -- python3 order-processor/app.py
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting --app-protocol https:
dapr run --app-id checkout --app-protocol https --dapr-http-port 3500 -- python3 checkout/app.py
dapr run --app-id order-processor --app-port 8001 --app-protocol https --dapr-http-port 3501 -- python3 order-processor/app.py
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- npm start
dapr run --app-id order-processor --app-port 5001 --app-protocol http --dapr-http-port 3501 -- npm start
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting --app-protocol https:
dapr run --app-id checkout --dapr-http-port 3500 --app-protocol https -- npm start
dapr run --app-id order-processor --app-port 5001 --dapr-http-port 3501 --app-protocol https -- npm start
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- dotnet run
dapr run --app-id order-processor --app-port 7001 --app-protocol http --dapr-http-port 3501 -- dotnet run
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting --app-protocol https:
dapr run --app-id checkout --dapr-http-port 3500 --app-protocol https -- dotnet run
dapr run --app-id order-processor --app-port 7001 --dapr-http-port 3501 --app-protocol https -- dotnet run
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
dapr run --app-id order-processor --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting --app-protocol https:
dapr run --app-id checkout --dapr-http-port 3500 --app-protocol https -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
dapr run --app-id order-processor --app-port 9001 --dapr-http-port 3501 --app-protocol https -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
dapr run --app-id checkout --dapr-http-port 3500 -- go run .
dapr run --app-id order-processor --app-port 6006 --app-protocol http --dapr-http-port 3501 -- go run .
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting --app-protocol https:
dapr run --app-id checkout --dapr-http-port 3500 --app-protocol https -- go run .
dapr run --app-id order-processor --app-port 6006 --dapr-http-port 3501 --app-protocol https -- go run .
Set an app-id when deploying to Kubernetes
In Kubernetes, set the dapr.io/app-id annotation on your pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: <language>-app
namespace: default
labels:
app: <language>-app
spec:
replicas: 1
selector:
matchLabels:
app: <language>-app
template:
metadata:
labels:
app: <language>-app
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "order-processor"
dapr.io/app-port: "6001"
...
If your app uses a TLS connection, you can tell Dapr to invoke your app over TLS with the app-protocol: "https" annotation (full list here). Note that Dapr does not validate TLS certificates presented by the app.
Invoke the service
To invoke an application using Dapr, you can use the invoke API on any Dapr instance. The sidecar programming model encourages each application to interact with its own instance of Dapr. The Dapr sidecars discover and communicate with one another.
Below are code examples that leverage Dapr SDKs for service invocation.
#dependencies
import random
from time import sleep
import logging
import json
import requests

#code
logging.basicConfig(level=logging.INFO)
base_url = "http://localhost:3500"  #the checkout app's Dapr sidecar HTTP endpoint
#the dapr-app-id header routes the request to the order-processor app
headers = {"dapr-app-id": "order-processor", "content-type": "application/json"}
while True:
    sleep(random.randrange(50, 5000) / 1000)
    orderId = random.randint(1, 1000)
    order = {"orderId": orderId}
    #Invoke a service
    result = requests.post(
        url='%s/orders' % (base_url),
        data=json.dumps(order),
        headers=headers
    )
    logging.info('Order requested: ' + str(orderId))
    logging.info('Result: ' + str(result))
//dependencies
import axios from "axios";

//code
const daprHost = "127.0.0.1";
const daprHttpPort = "3500"; //the checkout app's Dapr sidecar HTTP port

var main = async function() {
    for(var i = 0; i < 10; i++) {
        await sleep(5000);
        var orderId = Math.floor(Math.random() * (1000 - 1) + 1);
        await start(orderId).catch((e) => {
            console.error(e);
            process.exit(1);
        });
    }
}

//Invoke a service: the dapr-app-id header tells Dapr which app to route the request to
async function start(orderId) {
    const axiosConfig = {
        headers: {
            "dapr-app-id": "order-processor"
        }
    };
    const result = await axios.post(`http://${daprHost}:${daprHttpPort}/orders`, orderId, axiosConfig);
    console.log("Order requested: " + orderId);
    console.log("Result: " + result.config.data);
}

function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

main();
//dependencies
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Dapr.Client;

//code
namespace EventService
{
    public record Order(string OrderId);

    class Program
    {
        static async Task Main(string[] args)
        {
            while(true)
            {
                await Task.Delay(5000);
                var random = new Random();
                var orderId = random.Next(1, 1000);
                //Using Dapr SDK to invoke a method on the app with ID "order-processor"
                var order = new Order(orderId.ToString());
                var httpClient = DaprClient.CreateInvokeHttpClient();
                var response = await httpClient.PostAsJsonAsync("http://order-processor/orders", order);
                var result = await response.Content.ReadAsStringAsync();
                Console.WriteLine("Order requested: " + orderId);
                Console.WriteLine("Result: " + result);
            }
        }
    }
}
//dependencies
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.json.JSONObject;
import org.springframework.boot.autoconfigure.SpringBootApplication;

//code
@SpringBootApplication
public class CheckoutServiceApplication {
    //Dapr sidecar HTTP endpoint; the dapr-app-id header below routes the call to the target app
    private static final String DAPR_URL = "http://localhost:3500/orders";

    private static final HttpClient httpClient = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_2)
            .connectTimeout(Duration.ofSeconds(10))
            .build();

    public static void main(String[] args) throws InterruptedException, IOException {
        while (true) {
            TimeUnit.MILLISECONDS.sleep(5000);
            Random random = new Random();
            int orderId = random.nextInt(1000 - 1) + 1;
            // Create a Map to represent the request body
            Map<String, Object> requestBody = new HashMap<>();
            requestBody.put("orderId", orderId);
            // Add other fields to the requestBody Map as needed
            HttpRequest request = HttpRequest.newBuilder()
                    .POST(HttpRequest.BodyPublishers.ofString(new JSONObject(requestBody).toString()))
                    .uri(URI.create(DAPR_URL))
                    .header("Content-Type", "application/json")
                    .header("dapr-app-id", "order-processor")
                    .build();
            HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Order requested: " + orderId);
            System.out.println("Result: " + response.body());
        }
    }
}
package main

import (
	"fmt"
	"io"
	"log"
	"math/rand"
	"net/http"
	"os"
	"time"
)

func main() {
	daprHttpPort := os.Getenv("DAPR_HTTP_PORT")
	if daprHttpPort == "" {
		daprHttpPort = "3500"
	}
	client := &http.Client{
		Timeout: 15 * time.Second,
	}
	for i := 0; i < 10; i++ {
		time.Sleep(5 * time.Second)
		orderId := rand.Intn(1000-1) + 1
		url := fmt.Sprintf("http://localhost:%s/checkout/%v", daprHttpPort, orderId)
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			panic(err)
		}
		// Adding target app id as part of the header
		req.Header.Add("dapr-app-id", "order-processor")
		// Invoking a service
		resp, err := client.Do(req)
		if err != nil {
			log.Fatal(err.Error())
		}
		b, err := io.ReadAll(resp.Body)
		resp.Body.Close() // close the body inside the loop to avoid leaking connections
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b))
	}
}
Additional URL formats
To invoke a ‘GET’ endpoint:
curl http://localhost:3602/v1.0/invoke/checkout/method/checkout/100
To avoid changing URL paths as much as possible, Dapr provides the following ways to call the service invocation API:
- Change the address in the URL to localhost:<dapr-http-port>.
- Add a dapr-app-id header to specify the ID of the target service, or alternatively pass the ID via HTTP Basic Auth: http://dapr-app-id:<service-id>@localhost:3602/path.
For example, the following command:
curl http://localhost:3602/v1.0/invoke/checkout/method/checkout/100 -X POST
is equivalent to:
curl -H 'dapr-app-id: checkout' 'http://localhost:3602/checkout/100' -X POST
or:
curl 'http://dapr-app-id:checkout@localhost:3602/checkout/100' -X POST
Using CLI:
dapr invoke --app-id checkout --method checkout/100
Including a query string in the URL
You can also append a query string or a fragment to the end of the URL and Dapr will pass it through unchanged. This means that if you need to pass additional arguments in your service invocation that aren’t part of a payload or the path, you can do so by appending a ? to the end of the URL, followed by key/value pairs separated by = signs and delimited by &. For example:
curl 'http://dapr-app-id:checkout@localhost:3602/checkout/100?basket=1234&key=abc' -X POST
Namespaces
When running on namespace supported platforms, you include the namespace of the target app in the app ID. For example, following the <app>.<namespace> format, use checkout.production.
Using this example, invoking the service with a namespace would look like:
curl http://localhost:3602/v1.0/invoke/checkout.production/method/checkout/100 -X POST
See the Cross namespace API spec for more information on namespaces.
View traces and logs
Our example above showed you how to directly invoke a different service running locally or in Kubernetes. Dapr:
- Outputs metrics, tracing, and logging information.
- Allows you to visualize a call graph between services and log errors.
- Optionally logs the payload body.
For more information on tracing and logs, see the observability article.
Related Links
1.1.3 - How-To: Invoke services using gRPC
This article describes how to use Dapr to connect services using gRPC.
By using Dapr’s gRPC proxying capability, you can use your existing proto-based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following Dapr service invocation benefits to developers:
- Mutual authentication
- Tracing
- Metrics
- Access lists
- Network level resiliency
- API token based authentication
Dapr allows proxying all kinds of gRPC invocations, including unary and stream-based ones.
Step 1: Run a gRPC server
The following example is taken from the “hello world” grpc-go example. Although this example is in Go, the same concepts apply to all programming languages supported by gRPC.
package main
import (
"context"
"log"
"net"
"google.golang.org/grpc"
pb "google.golang.org/grpc/examples/helloworld/helloworld"
)
const (
port = ":50051"
)
// server is used to implement helloworld.GreeterServer.
type server struct {
pb.UnimplementedGreeterServer
}
// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
log.Printf("Received: %v", in.GetName())
return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}
func main() {
lis, err := net.Listen("tcp", port)
if err != nil {
log.Fatalf("failed to listen: %v", err)
}
s := grpc.NewServer()
pb.RegisterGreeterServer(s, &server{})
log.Printf("server listening at %v", lis.Addr())
if err := s.Serve(lis); err != nil {
log.Fatalf("failed to serve: %v", err)
}
}
This Go app implements the Greeter proto service and exposes a SayHello method.
Run the gRPC server using the Dapr CLI
dapr run --app-id server --app-port 50051 -- go run main.go
Using the Dapr CLI, we’re assigning a unique ID, server, to the app using the --app-id flag.
Step 2: Invoke the service
The following example shows you how to discover the Greeter service using Dapr from a gRPC client.
Notice that instead of invoking the target service directly at port 50051, the client is invoking its local Dapr sidecar over port 50007, which then provides all the capabilities of service invocation including service discovery, tracing, mTLS, and retries.
package main
import (
"context"
"log"
"time"
"google.golang.org/grpc"
pb "google.golang.org/grpc/examples/helloworld/helloworld"
"google.golang.org/grpc/metadata"
)
const (
address = "localhost:50007"
)
func main() {
// Set up a connection to the server.
conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
if err != nil {
log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
c := pb.NewGreeterClient(conn)
ctx, cancel := context.WithTimeout(context.Background(), time.Second*2)
defer cancel()
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
r, err := c.SayHello(ctx, &pb.HelloRequest{Name: "Darth Tyrannus"})
if err != nil {
log.Fatalf("could not greet: %v", err)
}
log.Printf("Greeting: %s", r.GetMessage())
}
The following line tells Dapr to discover and invoke an app named server:
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
All languages supported by gRPC allow for adding metadata. Here are a few examples:
Metadata headers = new Metadata();
// Metadata keys require a marshaller; the value is attached separately
Metadata.Key<String> appIdKey = Metadata.Key.of("dapr-app-id", Metadata.ASCII_STRING_MARSHALLER);
headers.put(appIdKey, "server");
GreeterService.ServiceBlockingStub stub = GreeterService.newBlockingStub(channel);
stub = MetadataUtils.attachHeaders(stub, headers);
stub.sayHello(HelloRequest.newBuilder().setName("Darth Malak").build());
var metadata = new Metadata
{
{ "dapr-app-id", "server" }
};
var call = client.SayHello(new HelloRequest { Name = "Darth Nihilus" }, metadata);
metadata = (('dapr-app-id', 'server'),)
response = stub.SayHello(HelloRequest(name='Darth Revan'), metadata=metadata)
const metadata = new grpc.Metadata();
metadata.add('dapr-app-id', 'server');
client.sayHello({ name: "Darth Malgus" }, metadata)
metadata = { 'dapr-app-id' => 'server' }
response = service.sayHello({ 'name' => 'Darth Bane' }, metadata)
grpc::ClientContext context;
context.AddMetadata("dapr-app-id", "server");
Run the client using the Dapr CLI
dapr run --app-id client --dapr-grpc-port 50007 -- go run main.go
View telemetry
If you’re running Dapr locally with Zipkin installed, open the browser at http://localhost:9411 and view the traces between the client and server.
Deploying to Kubernetes
Set the following Dapr annotations on your deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: grpc-app
namespace: default
labels:
app: grpc-app
spec:
replicas: 1
selector:
matchLabels:
app: grpc-app
template:
metadata:
labels:
app: grpc-app
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "server"
dapr.io/app-protocol: "grpc"
dapr.io/app-port: "50051"
...
The dapr.io/app-protocol: "grpc" annotation tells Dapr to invoke the app using gRPC.
If your app uses a TLS connection, you can tell Dapr to invoke your app over TLS with the app-protocol: "grpcs" annotation (full list here). Note that Dapr does not validate TLS certificates presented by the app.
Namespaces
When running on namespace supported platforms, you include the namespace of the target app in the app ID: myApp.production.
For example, invoking the gRPC server on a different namespace:
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server.production")
See the Cross namespace API spec for more information on namespaces.
Step 3: View traces and logs
The example above showed you how to directly invoke a different service running locally or in Kubernetes. Dapr outputs metrics, tracing and logging information allowing you to visualize a call graph between services, log errors and optionally log the payload body.
For more information on tracing and logs see the observability article.
Proxying of streaming RPCs
When using Dapr to proxy streaming RPC calls using gRPC, you must set an additional metadata option dapr-stream with the value true.
For example:
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-stream", "true")
Metadata headers = new Metadata();
headers.put(Metadata.Key.of("dapr-app-id", Metadata.ASCII_STRING_MARSHALLER), "server");
headers.put(Metadata.Key.of("dapr-stream", Metadata.ASCII_STRING_MARSHALLER), "true");
var metadata = new Metadata
{
{ "dapr-app-id", "server" },
{ "dapr-stream", "true" }
};
metadata = (('dapr-app-id', 'server'), ('dapr-stream', 'true'),)
const metadata = new grpc.Metadata();
metadata.add('dapr-app-id', 'server');
metadata.add('dapr-stream', 'true');
metadata = { 'dapr-app-id' => 'server', 'dapr-stream' => 'true' }
grpc::ClientContext context;
context.AddMetadata("dapr-app-id", "server");
context.AddMetadata("dapr-stream", "true");
Streaming gRPCs and Resiliency
Currently, resiliency policies are not supported for service invocation via gRPC.
When proxying streaming gRPCs, due to their long-lived nature, resiliency policies are applied on the “initial handshake” only. As a consequence:
- If the stream is interrupted after the initial handshake, it will not be automatically re-established by Dapr. Your application will be notified that the stream has ended, and will need to recreate it.
- Retry policies only impact the initial connection “handshake”. If your resiliency policy includes retries, Dapr will detect failures in establishing the initial connection to the target app and will retry until it succeeds (or until the number of retries defined in the policy is exhausted).
- Likewise, timeouts defined in resiliency policies only apply to the initial “handshake”. After the connection has been established, timeouts do not impact the stream anymore.
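To make this concrete, here is a sketch of a resiliency policy (the policy names and values are illustrative) whose retry and timeout settings would apply only while a proxied stream to the target app is being established:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    timeouts:
      general: 5s
    retries:
      fiveRetries:
        policy: constant
        duration: 2s
        maxRetries: 5
  targets:
    apps:
      server:   # applies to service invocation calls targeting app ID "server"
        timeout: general
        retry: fiveRetries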
Related Links
Community call demo
Watch this video on how to use Dapr’s gRPC proxying capability:
1.1.4 - How-To: Invoke Non-Dapr Endpoints using HTTP
This article demonstrates how to call a non-Dapr endpoint using Dapr over HTTP.
Using Dapr’s service invocation API, you can communicate with endpoints that either use or do not use Dapr. Using Dapr to call endpoints that do not use Dapr not only provides a consistent API, but also the following Dapr service invocation benefits:
- Ability to apply resiliency policies
- Call observability with tracing & metrics
- Security access control through scoping
- Ability to utilize middleware pipeline components
- Service discovery
- Authentication through the use of headers
HTTP service invocation to external services or non-Dapr endpoints
Sometimes you need to call a non-Dapr HTTP endpoint. For example:
- You may choose to only use Dapr in part of your overall application, including brownfield development
- You may not have access to the code to migrate an existing application to use Dapr
- You need to call an external HTTP service.
By defining an HTTPEndpoint resource, you declaratively define a way to interact with a non-Dapr endpoint. You then use the service invocation URL to invoke non-Dapr endpoints. Alternatively, you can place a non-Dapr Fully Qualified Domain Name (FQDN) endpoint URL directly into the service invocation URL.
Order of precedence between HttpEndpoint, FQDN URL, and appId
When using service invocation, the Dapr runtime follows a precedence order:
1. Is this a named HTTPEndpoint resource?
2. Is this an FQDN URL with an http:// or https:// prefix?
3. Is this an appID?
Service invocation and non-Dapr HTTP endpoint
The diagram below is an overview of how Dapr’s service invocation works when invoking non-Dapr endpoints.

- Service A makes an HTTP call targeting Service B, a non-Dapr endpoint. The call goes to the local Dapr sidecar.
- Dapr discovers Service B’s location using the HTTPEndpoint or FQDN URL, then forwards the message to Service B.
- Service B sends a response to Service A’s Dapr sidecar.
- Service A receives the response.
Using an HTTPEndpoint resource or FQDN URL for non-Dapr endpoints
There are two ways to invoke a non-Dapr endpoint when communicating either to Dapr applications or non-Dapr applications. A Dapr application can invoke a non-Dapr endpoint by providing one of the following:
A named HTTPEndpoint resource, including defining an HTTPEndpoint resource type. See the HTTPEndpoint reference guide for an example.
localhost:3500/v1.0/invoke/<HTTPEndpoint-name>/method/<my-method>
For example, with an HTTPEndpoint resource called “palpatine” and a method called “Order66”, this would be:
curl http://localhost:3500/v1.0/invoke/palpatine/method/order66
A FQDN URL to the non-Dapr endpoint.
localhost:3500/v1.0/invoke/<URL>/method/<my-method>
For example, with an FQDN resource called https://darthsidious.starwars, this would be:
curl http://localhost:3500/v1.0/invoke/https://darthsidious.starwars/method/order66
Using appId when calling Dapr enabled applications
AppIDs are always used to call Dapr applications with the appID and my-method. Read the How-To: Invoke services using HTTP guide for more information. For example:
localhost:3500/v1.0/invoke/<appID>/method/<my-method>
curl http://localhost:3602/v1.0/invoke/orderprocessor/method/checkout
TLS authentication
Using the HTTPEndpoint resource allows you to use any combination of a root certificate, client certificate and private key according to the authentication requirements of the remote endpoint.
Example using root certificate
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
name: "external-http-endpoint-tls"
spec:
baseUrl: https://service-invocation-external:443
headers:
- name: "Accept-Language"
value: "en-US"
clientTLS:
rootCA:
secretKeyRef:
name: dapr-tls-client
key: ca.crt
Example using client certificate and private key
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
name: "external-http-endpoint-tls"
spec:
baseUrl: https://service-invocation-external:443
headers:
- name: "Accept-Language"
value: "en-US"
clientTLS:
certificate:
secretKeyRef:
name: dapr-tls-client
key: tls.crt
privateKey:
secretKeyRef:
name: dapr-tls-key
key: tls.key
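The secrets referenced by these examples must exist in the environment Dapr reads secrets from. On Kubernetes, for instance, you might create them like this (a sketch; the secret names and file paths simply match the examples above):
kubectl create secret generic dapr-tls-client --from-file=ca.crt=./ca.crt --from-file=tls.crt=./tls.crt
kubectl create secret generic dapr-tls-key --from-file=tls.key=./tls.key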
Related Links
Community call demo
Watch this video on how to use service invocation to call non-Dapr endpoints.
1.1.5 - How to: Service invocation across namespaces
In this article, you’ll learn how you can call between services deployed to different namespaces. By default, service invocation supports invoking services within the same namespace by simply referencing the app ID (nodeapp):
localhost:3500/v1.0/invoke/nodeapp/method/neworder
Service invocation also supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace. You can specify both:
- The app ID (nodeapp), and
- The namespace the app runs in (production).
Example 1
Call the neworder method on the nodeapp in the production namespace:
localhost:3500/v1.0/invoke/nodeapp.production/method/neworder
When calling an application in a namespace using service invocation, you qualify it with the namespace. This proves useful in cross-namespace calls in a Kubernetes cluster.
Example 2
Call the ping method on myapp scoped to the production namespace:
https://localhost:3500/v1.0/invoke/myapp.production/method/ping
Example 3
Call the same ping method as example 2 using a curl command from an external DNS address (in this case, api.demo.dapr.team) and supply the Dapr API token for authentication:
MacOS/Linux:
curl -i -d '{ "message": "hello" }' \
-H "Content-type: application/json" \
-H "dapr-api-token: ${API_TOKEN}" \
https://api.demo.dapr.team/v1.0/invoke/myapp.production/method/ping
1.2 - Publish & subscribe messaging
More about Dapr Pub/sub
Learn more about how to use Dapr Pub/sub:
- Try the Pub/sub quickstart.
- Explore pub/sub via any of the supporting Dapr SDKs.
- Review the Pub/sub API reference documentation.
- Browse the supported pub/sub component specs.
1.2.1 - Publish and subscribe overview
Publish and subscribe (pub/sub) enables microservices to communicate with each other using messages for event-driven architectures.
- The producer, or publisher, writes messages to an input channel and sends them to a topic, unaware which application will receive them.
- The consumer, or subscriber, subscribes to the topic and receives messages from an output channel, unaware which service produced these messages.
An intermediary message broker copies each message from a publisher’s input channel to an output channel for all subscribers interested in that message. This pattern is especially useful when you need to decouple microservices from one another.

Pub/sub API
The pub/sub API in Dapr:
- Provides a platform-agnostic API to send and receive messages.
- Offers at-least-once message delivery guarantee.
- Integrates with various message brokers and queuing systems.
The specific message broker used by your service is pluggable and configured as a Dapr pub/sub component at runtime. This removes the dependency from your service and makes your service more portable and flexible to changes.
When using pub/sub in Dapr:
- Your service makes a network call to a Dapr pub/sub building block API.
- The pub/sub building block makes calls into a Dapr pub/sub component that encapsulates a specific message broker.
- To receive messages on a topic, Dapr subscribes to the pub/sub component on behalf of your service with a topic and delivers the messages to an endpoint on your service when they arrive.
The following overview video and demo demonstrate how Dapr pub/sub works.
In the diagram below, a “shipping” service and an “email” service have both subscribed to topics published by a “cart” service. Each service loads pub/sub component configuration files that point to the same pub/sub message broker component; for example: Redis Streams, NATS Streaming, Azure Service Bus, or GCP pub/sub.

In the diagram below, the Dapr API posts an “order” topic from the publishing “cart” service to “order” endpoints on the “shipping” and “email” subscribing services.

View the complete list of pub/sub components that Dapr supports.
Features
The pub/sub API building block brings several features to your application.
Sending messages using Cloud Events
To enable message routing and provide additional context with each message between services, Dapr uses the CloudEvents 1.0 specification as its message format. Any message sent by an application to a topic using Dapr is automatically wrapped in a CloudEvents envelope, using the Content-Type header value for the datacontenttype attribute.
For more information, read about messaging with CloudEvents, or sending raw messages without CloudEvents.
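For illustration, a message published to an orders topic might reach the subscriber wrapped in an envelope roughly like the following (the field values are invented for this example):
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "checkout",
  "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
  "datacontenttype": "application/json",
  "pubsubname": "order-pub-sub",
  "topic": "orders",
  "data": { "orderId": 100 }
}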
Communication with applications not using Dapr and CloudEvents
If one of your applications uses Dapr while another doesn’t, you can disable the CloudEvent wrapping for a publisher or subscriber. This allows partial adoption of Dapr pub/sub in applications that cannot adopt Dapr all at once.
For more information, read how to use pub/sub without CloudEvents.
Setting message content types
When publishing a message, it’s important to specify the content type of the data being sent. Unless specified, Dapr will assume text/plain.
- HTTP client: the content type can be set in a Content-Type header.
- gRPC client and SDK: have a dedicated content type parameter.
Message delivery
In principle, Dapr considers a message successfully delivered once the subscriber processes the message and responds with a non-error response. For more granular control, Dapr’s pub/sub API also provides explicit statuses, defined in the response payload, with which the subscriber indicates specific handling instructions to Dapr (for example, RETRY or DROP).
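For HTTP subscribers, these statuses are returned in the JSON body of the subscriber’s response. For example, to ask Dapr to redeliver a message:
{
  "status": "RETRY"
}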
Receiving messages with topic subscriptions
Dapr applications can subscribe to published topics via three subscription types that support the same features: declarative, streaming and programmatic.
| Subscription type | Description |
| --- | --- |
| Declarative | The subscription is defined in an external file. The declarative approach removes the Dapr dependency from your code and allows for existing applications to subscribe to topics, without having to change code. |
| Streaming | The subscription is defined in the user code. Streaming subscriptions are dynamic, meaning they allow for adding or removing subscriptions at runtime. They do not require a subscription endpoint in your application (that is required by both programmatic and declarative subscriptions), making them easy to configure in code. Streaming subscriptions also do not require an app to be configured with the sidecar to receive messages. With streaming subscriptions, since messages are sent to a message handler code, there is no concept of routes or bulk subscriptions. |
| Programmatic | The subscription is defined in the user code. The programmatic approach implements the static subscription and requires an endpoint in your code. |
For more information, read about the subscriptions in Subscription Types.
Reloading topic subscriptions
To reload topic subscriptions that are defined programmatically or declaratively, the Dapr sidecar needs to be restarted.
The Dapr sidecar can be made to dynamically reload changed declarative topic subscriptions without restarting by enabling the HotReload feature gate.
Hot reloading of topic subscriptions is currently a preview feature.
In-flight messages are unaffected when reloading a subscription.
Message routing
Dapr provides a content-based routing pattern. Pub/sub routing is an implementation of this pattern that allows developers to use expressions to route CloudEvents based on their contents to different URIs/paths and event handlers in your application. If no route matches, an optional default route is used. This is useful as your application expands to support multiple event versions or special cases.
This feature is available to both the declarative and programmatic subscription approaches.
For more information on message routing, read the Dapr pub/sub API reference.
Handling failed messages with dead letter topics
Sometimes, messages can’t be processed because of a variety of possible issues, such as erroneous conditions within the producer or consumer application or an unexpected state change that causes an issue with your application code. Dapr allows developers to set dead letter topics to deal with messages that cannot be delivered to an application. This feature is available on all pub/sub components and prevents consumer applications from endlessly retrying a failed message. For more information, read about dead letter topics.
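As a sketch, a declarative subscription can name a dead letter topic to which undeliverable messages are forwarded (this reuses the illustrative names from this page; the dead letter topic name is an assumption):
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: order-pub-sub
  deadLetterTopic: poisonMessages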
Enabling the outbox pattern
Dapr enables developers to use the outbox pattern for achieving a single transaction across a transactional state store and any message broker. For more information, read How to enable transactional outbox messaging.
Namespace consumer groups
Dapr solves multi-tenancy at-scale with namespaces for consumer groups. Simply include the "{namespace}" value in your component metadata for consumer groups to allow multiple namespaces with applications of the same app-id to publish and subscribe to the same message broker.
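For a broker that takes its consumer group from a consumerID metadata field, this might look like the following sketch (the component type and broker address are assumptions for the example):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: order-pub-sub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: "localhost:9092"
  - name: consumerID
    value: "{namespace}"   # resolved to the app's namespace at runtime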
At-least-once guarantee
Dapr guarantees at-least-once semantics for message delivery. When an application publishes a message to a topic using the pub/sub API, Dapr ensures the message is delivered at least once to every subscriber.
Even if the message fails to deliver, or your application crashes, Dapr attempts to redeliver the message until successful delivery.
All Dapr pub/sub components support the at-least-once guarantee.
Consumer groups and competing consumers pattern
Dapr handles the burden of dealing with consumer groups and the competing consumers pattern. In the competing consumers pattern, multiple application instances using a single consumer group compete for the message. Dapr enforces the competing consumers pattern when replicas use the same app-id without explicit consumer group overrides.
When multiple instances of the same application (with the same app-id) subscribe to a topic, Dapr delivers each message to only one instance of that application. This concept is illustrated in the diagram below.

Similarly, if two different applications (with different app-ids) subscribe to the same topic, Dapr delivers each message to only one instance of each application.
Not all Dapr pub/sub components support the competing consumers pattern; check the specification of your pub/sub component to confirm support.
Scoping topics for added security
By default, all topic messages associated with an instance of a pub/sub component are available to every application configured with that component. You can limit which application can publish or subscribe to topics with Dapr topic scoping. For more information, read: pub/sub topic scoping.
Message Time-to-Live (TTL)
Dapr can set a time-to-live (TTL) on a per-message basis, meaning that if a message is not read from the pub/sub component within the configured TTL, the message is discarded. This prevents a build-up of unread messages: if a message has been in the queue longer than the configured TTL, it is marked as dead. For more information, read pub/sub message TTL.
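Per-message TTL can be set with the ttlInSeconds metadata at publish time; for example (component and topic names illustrative):
curl -X POST "http://localhost:3500/v1.0/publish/order-pub-sub/orders?metadata.ttlInSeconds=120" \
  -H "Content-Type: application/json" \
  -d '{"orderId": 100}'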
Publish and subscribe to bulk messages
Dapr supports sending and receiving multiple messages in a single request. When writing applications that need to send or receive a large number of messages, using bulk operations allows achieving high throughput by reducing the overall number of requests. For more information, read pub/sub bulk messages.
Scaling subscribers with StatefulSets
When running on Kubernetes, subscribers can have a sticky consumerID per instance when using StatefulSets in combination with the {podName} marker. See how to horizontally scale subscribers with StatefulSets.
Try out pub/sub
Quickstarts and tutorials
Want to put the Dapr pub/sub API to the test? Walk through the following quickstart and tutorials to see pub/sub in action:
| Quickstart/tutorial | Description |
| --- | --- |
| Pub/sub quickstart | Send and receive messages using the publish and subscribe API. |
| Pub/sub tutorial | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
Start using pub/sub directly in your app
Want to skip the quickstarts? Not a problem. You can try out the pub/sub building block directly in your application to publish messages and subscribe to a topic. After Dapr is installed, you can begin using the pub/sub API starting with the pub/sub how-to guide.
Next steps
- Learn about messaging with CloudEvents and when you might want to send messages without CloudEvents.
- Follow How-To: Configure pub/sub components with multiple namespaces.
- Review the list of pub/sub components.
- Read the API reference.
1.2.2 - How to: Publish a message and subscribe to a topic
Now that you’ve learned what the Dapr pub/sub building block provides, learn how it can work in your service. The below code example loosely describes an application that processes orders with two services, each with Dapr sidecars:
- A checkout service using Dapr to subscribe to the topic in the message queue.
- An order processing service using Dapr to publish a message to RabbitMQ.

Dapr automatically wraps the user payload in a CloudEvents v1.0 compliant envelope, using the Content-Type header value for the datacontenttype attribute. Learn more about messages with CloudEvents.
The following example demonstrates how your applications publish and subscribe to a topic called orders.
Note
If you haven’t already, try out the pub/sub quickstart for a quick walk-through on how to use pub/sub.
Set up the Pub/Sub component
The first step is to set up the pub/sub component:
When you run dapr init, Dapr creates a default Redis pubsub.yaml and runs a Redis container on your local machine, located:
- On Windows, under %UserProfile%\.dapr\components\pubsub.yaml
- On Linux/MacOS, under ~/.dapr/components/pubsub.yaml
With the pubsub.yaml component, you can easily swap out underlying components without application code changes. In this example, RabbitMQ is used.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: order-pub-sub
spec:
type: pubsub.rabbitmq
version: v1
metadata:
- name: host
value: "amqp://localhost:5672"
- name: durable
value: "false"
- name: deletedWhenUnused
value: "false"
- name: autoAck
value: "false"
- name: reconnectWait
value: "0"
- name: concurrency
value: parallel
scopes:
- orderprocessing
- checkout
You can override this file with another pubsub component by creating a components directory (in this example, myComponents) containing the file and using the flag --resources-path with the dapr run CLI command.
To deploy this into a Kubernetes cluster, fill in the metadata connection details of the pub/sub component in the YAML below, save as pubsub.yaml, and run kubectl apply -f pubsub.yaml.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: order-pub-sub
spec:
type: pubsub.rabbitmq
version: v1
metadata:
- name: connectionString
value: "amqp://localhost:5672"
- name: protocol
value: amqp
- name: hostname
value: localhost
- name: username
value: username
- name: password
value: password
- name: durable
value: "false"
- name: deletedWhenUnused
value: "false"
- name: autoAck
value: "false"
- name: reconnectWait
value: "0"
- name: concurrency
value: parallel
scopes:
- orderprocessing
- checkout
dapr run --app-id myapp --resources-path ./myComponents -- dotnet run
dapr run --app-id myapp --resources-path ./myComponents -- mvn spring-boot:run
dapr run --app-id myapp --resources-path ./myComponents -- python3 app.py
dapr run --app-id myapp --resources-path ./myComponents -- go run app.go
dapr run --app-id myapp --resources-path ./myComponents -- npm start
Subscribe to topics
Dapr provides three methods by which you can subscribe to topics:
- Declaratively, where subscriptions are defined in an external file.
- Streaming, where subscriptions are defined in user code.
- Programmatically, where subscriptions are defined in user code.
Learn more in the declarative, streaming, and programmatic subscriptions doc. This example demonstrates a declarative subscription.
Create a file named subscription.yaml and paste the following:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: order-pub-sub
scopes:
- orderprocessing
- checkout
The example above shows an event subscription to topic orders, for the pubsub component order-pub-sub.
- The route field tells Dapr to send all topic messages to the /checkout endpoint in the app.
- The scopes field enables this subscription for apps with IDs orderprocessing and checkout.
Place subscription.yaml in the same directory as your pubsub.yaml component. When Dapr starts up, it loads subscriptions along with the components.
Note
This feature is currently in preview. Dapr can be made to “hot reload” declarative subscriptions, whereby updates are picked up automatically without needing a restart. This is enabled via the HotReload feature gate. To prevent reprocessing or loss of unprocessed messages, in-flight messages between Dapr and your application are unaffected during hot reload events.
Below are code examples that leverage Dapr SDKs to subscribe to the topic you defined in subscription.yaml.
using System.Collections.Generic;
using System.Threading.Tasks;
using System;
using Microsoft.AspNetCore.Mvc;
using Dapr;
using Dapr.Client;
namespace CheckoutService.Controllers;
[ApiController]
public sealed class CheckoutServiceController : ControllerBase
{
//Subscribe to a topic called "orders" from the "order-pub-sub" component
[Topic("order-pub-sub", "orders")]
[HttpPost("checkout")]
public void GetCheckout([FromBody] int orderId)
{
Console.WriteLine("Subscriber received : " + orderId);
}
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 --app-protocol https dotnet run
//dependencies
import io.dapr.Topic;
import io.dapr.client.domain.CloudEvent;
import org.springframework.web.bind.annotation.*;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
//code
@RestController
public class CheckoutServiceController {
private static final Logger log = LoggerFactory.getLogger(CheckoutServiceController.class);
//Subscribe to a topic
@Topic(name = "orders", pubsubName = "order-pub-sub")
@PostMapping(path = "/checkout")
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
try {
log.info("Subscriber received: " + cloudEvent.getData());
} catch (Exception e) {
throw new RuntimeException(e);
}
});
}
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 mvn spring-boot:run
#dependencies
from cloudevents.sdk.event import v1
from dapr.ext.grpc import App
import logging
import json
#code
app = App()
logging.basicConfig(level = logging.INFO)
#Subscribe to a topic
@app.subscribe(pubsub_name='order-pub-sub', topic='orders')
def mytopic(event: v1.Event) -> None:
    data = json.loads(event.Data())
    logging.info('Subscriber received: ' + str(data))

app.run(6002)
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --app-protocol grpc -- python3 CheckoutService.py
//dependencies
import (
"log"
"net/http"
"context"
"github.com/dapr/go-sdk/service/common"
daprd "github.com/dapr/go-sdk/service/http"
)
//code
var sub = &common.Subscription{
PubsubName: "order-pub-sub",
Topic: "orders",
Route: "/checkout",
}
func main() {
s := daprd.NewService(":6002")
//Subscribe to a topic
if err := s.AddTopicEventHandler(sub, eventHandler); err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
if err := s.Start(); err != nil && err != http.ErrServerClosed {
log.Fatalf("error listenning: %v", err)
}
}
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
log.Printf("Subscriber received: %s", e.Data)
return false, nil
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 go run CheckoutService.go
//dependencies
import { DaprServer, CommunicationProtocolEnum } from '@dapr/dapr';
//code
const daprHost = "127.0.0.1";
const serverHost = "127.0.0.1";
const serverPort = "6002";
start().catch((e) => {
console.error(e);
process.exit(1);
});
async function start(orderId) {
const server = new DaprServer({
serverHost,
serverPort,
communicationProtocol: CommunicationProtocolEnum.HTTP,
clientOptions: {
daprHost,
daprPort: process.env.DAPR_HTTP_PORT,
},
});
//Subscribe to a topic
await server.pubsub.subscribe("order-pub-sub", "orders", async (orderId) => {
console.log(`Subscriber received: ${JSON.stringify(orderId)}`)
});
await server.start();
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 npm start
Publish a message
Start an instance of Dapr with an app-id called orderprocessing:
dapr run --app-id orderprocessing --dapr-http-port 3601
Then publish a message to the orders topic:
dapr publish --publish-app-id orderprocessing --pubsub order-pub-sub --topic orders --data '{"orderId": "100"}'
curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders -H "Content-Type: application/json" -d '{"orderId": "100"}'
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"orderId": "100"}' -Uri 'http://localhost:3601/v1.0/publish/order-pub-sub/orders'
Below are code examples that leverage Dapr SDKs to publish to a topic.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Dapr.Client;
using System.Threading;
const string PUBSUB_NAME = "order-pub-sub";
const string TOPIC_NAME = "orders";
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();
var random = new Random();
var client = app.Services.GetRequiredService<DaprClient>();
while(true) {
await Task.Delay(TimeSpan.FromSeconds(5));
var orderId = random.Next(1,1000);
var source = new CancellationTokenSource();
var cancellationToken = source.Token;
//Using Dapr SDK to publish a topic
await client.PublishEventAsync(PUBSUB_NAME, TOPIC_NAME, orderId, cancellationToken);
Console.WriteLine("Published data: " + orderId);
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the publisher application:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --app-protocol https dotnet run
//dependencies
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.Metadata;
import static java.util.Collections.singletonMap;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Random;
import java.util.concurrent.TimeUnit;
//code
@SpringBootApplication
public class OrderProcessingServiceApplication {
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
public static void main(String[] args) throws InterruptedException{
String MESSAGE_TTL_IN_SECONDS = "1000";
String TOPIC_NAME = "orders";
String PUBSUB_NAME = "order-pub-sub";
while(true) {
TimeUnit.MILLISECONDS.sleep(5000);
Random random = new Random();
int orderId = random.nextInt(1000-1) + 1;
DaprClient client = new DaprClientBuilder().build();
//Using Dapr SDK to publish a topic
client.publishEvent(
PUBSUB_NAME,
TOPIC_NAME,
orderId,
singletonMap(Metadata.TTL_IN_SECONDS, MESSAGE_TTL_IN_SECONDS)).block();
log.info("Published data:" + orderId);
}
}
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the publisher application:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 mvn spring-boot:run
#dependencies
import random
from time import sleep
import requests
import logging
import json
from dapr.clients import DaprClient
#code
logging.basicConfig(level = logging.INFO)

while True:
    sleep(random.randrange(50, 5000) / 1000)
    orderId = random.randint(1, 1000)
    PUBSUB_NAME = 'order-pub-sub'
    TOPIC_NAME = 'orders'
    with DaprClient() as client:
        #Using Dapr SDK to publish to a topic
        result = client.publish_event(
            pubsub_name=PUBSUB_NAME,
            topic_name=TOPIC_NAME,
            data=json.dumps(orderId),
            data_content_type='application/json',
        )
    logging.info('Published data: ' + str(orderId))
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the publisher application:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --app-protocol grpc python3 OrderProcessingService.py
//dependencies
import (
"context"
"log"
"math/rand"
"time"
"strconv"
dapr "github.com/dapr/go-sdk/client"
)
//code
var (
PUBSUB_NAME = "order-pub-sub"
TOPIC_NAME = "orders"
)
func main() {
for i := 0; i < 10; i++ {
time.Sleep(5 * time.Second)
orderId := rand.Intn(1000-1) + 1
client, err := dapr.NewClient()
if err != nil {
panic(err)
}
defer client.Close()
ctx := context.Background()
//Using Dapr SDK to publish a topic
if err := client.PublishEvent(ctx, PUBSUB_NAME, TOPIC_NAME, []byte(strconv.Itoa(orderId)));
err != nil {
panic(err)
}
log.Println("Published data: " + strconv.Itoa(orderId))
}
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the publisher application:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go
//dependencies
import { DaprServer, DaprClient, CommunicationProtocolEnum } from '@dapr/dapr';
const daprHost = "127.0.0.1";
var main = async function() {
    for (var i = 0; i < 10; i++) {
        // Await the sleep so the loop actually pauses between publishes
        await sleep(5000);
        var orderId = Math.floor(Math.random() * (1000 - 1) + 1);
        await start(orderId).catch((e) => {
            console.error(e);
            process.exit(1);
        });
    }
}
async function start(orderId) {
const PUBSUB_NAME = "order-pub-sub"
const TOPIC_NAME = "orders"
const client = new DaprClient({
daprHost,
daprPort: process.env.DAPR_HTTP_PORT,
communicationProtocol: CommunicationProtocolEnum.HTTP
});
console.log("Published data:" + orderId)
//Using Dapr SDK to publish a topic
await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId);
}
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
main();
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the publisher application:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 npm start
Message acknowledgement and retries
In order to tell Dapr that a message was processed successfully, return a 200 OK response. If Dapr receives any other return status code than 200, or if your app crashes, Dapr will attempt to redeliver the message following at-least-once semantics.
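For illustration, here is a minimal sketch of an HTTP subscriber endpoint, assuming a Flask app and a hypothetical process_order helper (neither is from the examples above): returning 200 acknowledges the message, while any other status code triggers redelivery.
from flask import Flask, request, jsonify

app = Flask(__name__)

def process_order(data):
    # Hypothetical business logic; raise an exception to simulate a failure
    print(f'Processing order: {data}', flush=True)

@app.route('/checkout', methods=['POST'])
def checkout_subscriber():
    event = request.json
    try:
        process_order(event['data'])
        # 200 OK tells Dapr the message was processed successfully
        return jsonify({'success': True}), 200
    except Exception:
        # Any non-200 response (or a crash) causes Dapr to redeliver
        return jsonify({'success': False}), 500

app.run(port=6002)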
Demo video
Watch this demo video to learn more about pub/sub messaging with Dapr.
Next steps
- Try the pub/sub tutorial.
- Learn about messaging with CloudEvents and when you might want to send messages without CloudEvents.
- Review the list of pub/sub components.
- Read the API reference.
1.2.3 - Publishing & subscribing messages with Cloudevents
To enable message routing and provide additional context with each message, Dapr uses the CloudEvents 1.0 specification as its message format. Any message sent by an application to a topic using Dapr is automatically wrapped in a CloudEvents envelope, using the Content-Type header value for the datacontenttype attribute.
Dapr uses CloudEvents to provide additional context to the event payload, enabling features like:
- Tracing
- Content-type for proper deserialization of event data
- Verification of sender application
You can choose any of three methods for publishing a CloudEvent via pub/sub:
- Send a pub/sub event, which is then wrapped by Dapr in a CloudEvent envelope.
- Replace specific CloudEvents attributes provided by Dapr by overriding the standard CloudEvent properties.
- Write your own CloudEvent envelope as part of the pub/sub event.
Dapr-generated CloudEvents example
Sending a publish operation to Dapr automatically wraps it in a CloudEvent envelope containing the following fields:
- id
- source
- specversion
- type
- traceparent
- traceid
- tracestate
- topic
- pubsubname
- time
- datacontenttype (optional)
The following example demonstrates a CloudEvent generated by Dapr for a publish operation to the orders topic that includes:
- A W3C traceid unique to the message
- The data and the fields for the CloudEvent where the data content is serialized as JSON
{
"topic": "orders",
"pubsubname": "order_pub_sub",
"traceid": "00-113ad9c4e42b27583ae98ba698d54255-e3743e35ff56f219-01",
"tracestate": "",
"data": {
"orderId": 1
},
"id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
"specversion": "1.0",
"datacontenttype": "application/json; charset=utf-8",
"source": "checkout",
"type": "com.dapr.event.sent",
"time": "2020-09-23T06:23:21Z",
"traceparent": "00-113ad9c4e42b27583ae98ba698d54255-e3743e35ff56f219-01"
}
As another example of a v1.0 CloudEvent, the following shows data as XML content in a CloudEvent message serialized as JSON:
{
"topic": "orders",
"pubsubname": "order_pub_sub",
"traceid": "00-113ad9c4e42b27583ae98ba698d54255-e3743e35ff56f219-01",
"tracestate": "",
"data" : "<note><to></to><from>user2</from><message>Order</message></note>",
"id" : "id-1234-5678-9101",
"specversion" : "1.0",
"datacontenttype" : "text/xml",
"subject" : "Test XML Message",
"source" : "https://example.com/message",
"type" : "xml.message",
"time" : "2020-09-23T06:23:21Z"
}
Replace Dapr generated CloudEvents values
Dapr automatically generates several CloudEvent properties. You can replace these generated CloudEvent properties by providing the following optional metadata key/value pairs:
- cloudevent.id: overrides id
- cloudevent.source: overrides source
- cloudevent.type: overrides type
- cloudevent.traceid: overrides traceid
- cloudevent.tracestate: overrides tracestate
- cloudevent.traceparent: overrides traceparent
The ability to replace CloudEvents properties using these metadata properties applies to all pub/sub components.
Example
For example, to replace the source and id values from the CloudEvent example above in code:
with DaprClient() as client:
    order = {'orderId': i}
    # Publish an event/message using Dapr PubSub
    result = client.publish_event(
        pubsub_name='order_pub_sub',
        topic_name='orders',
        data=json.dumps(order),
        publish_metadata={'cloudevent.id': 'd99b228f-6c73-4e78-8c4d-3f80a043d317', 'cloudevent.source': 'payment'}
    )

    # or

    cloud_event = {
        'specversion': '1.0',
        'type': 'com.example.event',
        'source': 'payment',
        'id': 'd99b228f-6c73-4e78-8c4d-3f80a043d317',
        'data': {'orderId': i},
        'datacontenttype': 'application/json',
        # ...
    }

    # Set the data content type to 'application/cloudevents+json'
    result = client.publish_event(
        pubsub_name='order_pub_sub',
        topic_name='orders',
        data=json.dumps(cloud_event),
        data_content_type='application/cloudevents+json',
    )
var order = new Order(i);
using var client = new DaprClientBuilder().Build();
// Override cloudevent metadata
var metadata = new Dictionary<string,string>() {
{ "cloudevent.source", "payment" },
{ "cloudevent.id", "d99b228f-6c73-4e78-8c4d-3f80a043d317" }
};
// Publish an event/message using Dapr PubSub
await client.PublishEventAsync("order_pub_sub", "orders", order, metadata);
Console.WriteLine("Published data: " + order);
await Task.Delay(TimeSpan.FromSeconds(1));
The JSON payload then reflects the new source and id values:
{
"topic": "orders",
"pubsubname": "order_pub_sub",
"traceid": "00-113ad9c4e42b27583ae98ba698d54255-e3743e35ff56f219-01",
"tracestate": "",
"data": {
"orderId": 1
},
"id": "d99b228f-6c73-4e78-8c4d-3f80a043d317",
"specversion": "1.0",
"datacontenttype": "application/json; charset=utf-8",
"source": "payment",
"type": "com.dapr.event.sent",
"time": "2020-09-23T06:23:21Z",
"traceparent": "00-113ad9c4e42b27583ae98ba698d54255-e3743e35ff56f219-01"
}
Important
While you can replace traceid/traceparent and tracestate, doing this may interfere with tracing events and report inconsistent results in tracing tools. It’s recommended to use Open Telemetry for distributed traces. Learn more about distributed tracing.
Publish your own CloudEvent
If you want to use your own CloudEvent, make sure to specify the datacontenttype as application/cloudevents+json.
If the CloudEvent that was authored by the app does not contain the minimum required fields in the CloudEvent specification, the message is rejected. Dapr adds the following fields to the CloudEvent if they are missing:
- time
- traceid
- traceparent
- tracestate
- topic
- pubsubname
- source
- type
- specversion
You can add additional fields to a custom CloudEvent that are not part of the official CloudEvent specification. Dapr will pass these fields as-is.
Example
Publish a CloudEvent to the orders topic:
dapr publish --publish-app-id orderprocessing --pubsub order-pub-sub --topic orders --data '{\"orderId\": \"100\"}'
Publish a CloudEvent to the orders topic:
curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders -H "Content-Type: application/cloudevents+json" -d '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}'
Publish a CloudEvent to the orders topic:
Invoke-RestMethod -Method Post -ContentType 'application/cloudevents+json' -Body '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}' -Uri 'http://localhost:3601/v1.0/publish/order-pub-sub/orders'
Event deduplication
When using CloudEvents created by Dapr, the envelope contains an id field which can be used by the app to perform message deduplication. Dapr does not handle deduplication automatically. Dapr supports using message brokers that natively enable message deduplication.
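As a minimal sketch of app-side deduplication keyed on that id (assuming an in-memory set, which would not survive restarts or scale across replicas; a shared store with a TTL would be the more typical choice):
processed_ids = set()

def handle_event(envelope: dict) -> None:
    event_id = envelope['id']
    if event_id in processed_ids:
        # Duplicate delivery; skip reprocessing
        return
    processed_ids.add(event_id)
    print('Processing event:', envelope['data'], flush=True)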
Next steps
- Learn why you might not want to use CloudEvents
- Try out the pub/sub Quickstart
- List of pub/sub components
- Read the API reference
1.2.4 - Publishing & subscribing messages without CloudEvents
When adding Dapr to your application, some services may still need to communicate via pub/sub messages not encapsulated in CloudEvents, due to either compatibility reasons or some apps not using Dapr. These are referred to as “raw” pub/sub messages. Dapr enables apps to publish and subscribe to raw events not wrapped in a CloudEvent for compatibility and to send data that is not JSON serializable.
Publishing raw messages
Dapr apps are able to publish raw events to pub/sub topics without CloudEvent encapsulation, for compatibility with non-Dapr apps.

Warning
Not using CloudEvents disables support for tracing, event deduplication per messageId, content-type metadata, and any other features built using the CloudEvent schema.
To disable CloudEvent wrapping, set the rawPayload metadata to true as part of the publishing request. This allows subscribers to receive these messages without having to parse the CloudEvent schema.
curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/TOPIC_A?metadata.rawPayload=true -H "Content-Type: application/json" -d '{"order-number": "345"}'
using Dapr.Client;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers().AddDapr();
var app = builder.Build();
app.MapPost("/publish", async (DaprClient daprClient) =>
{
var message = new Message(
Guid.NewGuid().ToString(),
$"Hello at {DateTime.UtcNow}",
DateTime.UtcNow
);
await daprClient.PublishEventAsync(
"pubsub", // pubsub name
"messages", // topic name
message, // message data
new Dictionary<string, string>
{
{ "rawPayload", "true" },
{ "content-type", "application/json" }
}
);
return Results.Ok(message);
});
app.Run();
import json
from dapr.clients import DaprClient

with DaprClient() as d:
    req_data = {
        'order-number': '345'
    }
    # Publish the message as a raw payload, skipping the CloudEvent envelope
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic_name='TOPIC_A',
        data=json.dumps(req_data),
        publish_metadata={'rawPayload': 'true'}
    )
    # Print the request
    print(req_data, flush=True)
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->run(function(\DI\FactoryInterface $factory) {
$publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
$publisher->topic('TOPIC_A')->publish('data', ['rawPayload' => 'true']);
});
Subscribing to raw messages
Dapr apps can subscribe to raw messages from pub/sub topics, even if they weren’t published as CloudEvents. However, the subscribing Dapr process still wraps these raw messages in a CloudEvent before delivering them to the subscribing application.

Programmatically subscribe to raw events
When subscribing programmatically, add the additional metadata entry for rawPayload to allow the subscriber to receive a message that is not wrapped by a CloudEvent. For .NET, this metadata entry is called rawPayload.
When using raw payloads the message is always base64 encoded with content type application/octet-stream (see the decoding sketch after the language examples below).
using System.Text.Json;
using System.Text.Json.Serialization;
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/dapr/subscribe", () =>
{
var subscriptions = new[]
{
new
{
pubsubname = "pubsub",
topic = "messages",
route = "/messages",
metadata = new Dictionary<string, string>
{
{ "rawPayload", "true" },
{ "content-type", "application/json" }
}
}
};
return Results.Ok(subscriptions);
});
app.MapPost("/messages", async (HttpContext context) =>
{
using var reader = new StreamReader(context.Request.Body);
var json = await reader.ReadToEndAsync();
Console.WriteLine($"Raw message received: {json}");
return Results.Ok();
});
app.Run();
import flask
from flask import request, jsonify
from flask_cors import CORS
import json
import sys
app = flask.Flask(__name__)
CORS(app)
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'deathStarStatus',
                      'route': 'dsstatus',
                      'metadata': {
                          'rawPayload': 'true',
                      }}]
    return jsonify(subscriptions)

@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
    print(request.json, flush=True)
    return json.dumps({'success': True}), 200, {'ContentType': 'application/json'}

app.run()
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [
new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'deathStarStatus', route: '/dsstatus', metadata: [ 'rawPayload' => 'true'] ),
]]));
$app->post('/dsstatus', function(
#[\Dapr\Attributes\FromBody]
\Dapr\PubSub\CloudEvent $cloudEvent,
\Psr\Log\LoggerInterface $logger
) {
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
return ['status' => 'SUCCESS'];
}
);
$app->start();
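Because the raw payload is delivered base64 encoded inside the CloudEvent envelope, the subscriber has to decode it before use. Below is a minimal sketch, assuming a Flask endpoint, that the binary payload arrives in the envelope's data_base64 member (the CloudEvents JSON member for binary data), and that the publisher sent JSON:
import base64
import json
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
    envelope = request.json
    # Decode the base64-encoded raw message back into bytes
    raw = base64.b64decode(envelope['data_base64'])
    # Assumption for this sketch: the publisher sent JSON
    payload = json.loads(raw)
    print(payload, flush=True)
    return jsonify({'success': True}), 200

app.run()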
Declaratively subscribe to raw events
Similarly, you can subscribe to raw events declaratively by adding the rawPayload metadata entry to your subscription specification.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: myevent-subscription
spec:
  topic: deathStarStatus
  routes:
    default: /dsstatus
  pubsubname: pubsub
  metadata:
    isRawPayload: "true"
scopes:
- app1
- app2
Next steps
- Learn more about publishing and subscribing messages
- List of pub/sub components
- Read the API reference
- Read the .NET sample on how to consume Kafka messages without CloudEvents
1.2.5 - How-To: Route messages to different event handlers
Pub/sub routing is an implementation of content-based routing, a messaging pattern that utilizes a DSL instead of imperative application code. With pub/sub routing, you use expressions to route CloudEvents (based on their contents) to different URIs/paths and event handlers in your application. If no route matches, then an optional default route is used. This proves useful as your applications expand to support multiple event versions or special cases.
While routing can be implemented with code, keeping routing rules external from the application can improve portability.
This feature is available to both the declarative and programmatic subscription approaches; however, it does not apply to streaming subscriptions.
Declarative subscription
For declarative subscriptions, use dapr.io/v2alpha1 as the apiVersion. Here is an example of subscriptions.yaml using routing:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: myevent-subscription
spec:
  pubsubname: pubsub
  topic: inventory
  routes:
    rules:
    - match: event.type == "widget"
      path: /widgets
    - match: event.type == "gadget"
      path: /gadgets
    default: /products
scopes:
- app1
- app2
Programmatic subscription
In the programmatic approach, the routes structure is returned instead of route. The JSON structure matches the declarative YAML:
import flask
from flask import request, jsonify
from flask_cors import CORS
import json
import sys
app = flask.Flask(__name__)
CORS(app)
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [
        {
            'pubsubname': 'pubsub',
            'topic': 'inventory',
            'routes': {
                'rules': [
                    {
                        'match': 'event.type == "widget"',
                        'path': '/widgets'
                    },
                    {
                        'match': 'event.type == "gadget"',
                        'path': '/gadgets'
                    },
                ],
                'default': '/products'
            }
        }]
    return jsonify(subscriptions)

@app.route('/products', methods=['POST'])
def ds_subscriber():
    print(request.json, flush=True)
    return json.dumps({'success': True}), 200, {'ContentType': 'application/json'}

app.run()
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
const port = 3000
app.get('/dapr/subscribe', (req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "inventory",
routes: {
rules: [
{
match: 'event.type == "widget"',
path: '/widgets'
},
{
match: 'event.type == "gadget"',
path: '/gadgets'
},
],
default: '/products'
}
}
]);
})
app.post('/products', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
[Topic("pubsub", "inventory", "event.type ==\"widget\"", 1)]
[HttpPost("widgets")]
public async Task<ActionResult<Stock>> HandleWidget(Widget widget, [FromServices] DaprClient daprClient)
{
// Logic
return stock;
}
[Topic("pubsub", "inventory", "event.type ==\"gadget\"", 2)]
[HttpPost("gadgets")]
public async Task<ActionResult<Stock>> HandleGadget(Gadget gadget, [FromServices] DaprClient daprClient)
{
// Logic
return stock;
}
[Topic("pubsub", "inventory")]
[HttpPost("products")]
public async Task<ActionResult<Stock>> HandleProduct(Product product, [FromServices] DaprClient daprClient)
{
// Logic
return stock;
}
package main
import (
"encoding/json"
"fmt"
"log"
"net/http"
"github.com/gorilla/mux"
)
const appPort = 3000
type subscription struct {
PubsubName string `json:"pubsubname"`
Topic string `json:"topic"`
Metadata map[string]string `json:"metadata,omitempty"`
Routes routes `json:"routes"`
}
type routes struct {
Rules []rule `json:"rules,omitempty"`
Default string `json:"default,omitempty"`
}
type rule struct {
Match string `json:"match"`
Path string `json:"path"`
}
// This handles /dapr/subscribe
func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
t := []subscription{
{
PubsubName: "pubsub",
Topic: "inventory",
Routes: routes{
Rules: []rule{
{
Match: `event.type == "widget"`,
Path: "/widgets",
},
{
Match: `event.type == "gadget"`,
Path: "/gadgets",
},
},
Default: "/products",
},
},
}
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(t)
}
func main() {
router := mux.NewRouter().StrictSlash(true)
router.HandleFunc("/dapr/subscribe", configureSubscribeHandler).Methods("GET")
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", appPort), router))
}
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [
    new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'inventory', routes: [
        'rules' => [
            ['match' => 'event.type == "widget"', 'path' => '/widgets'],
            ['match' => 'event.type == "gadget"', 'path' => '/gadgets'],
        ],
        'default' => '/products',
    ]),
]]));
$app->post('/products', function(
#[\Dapr\Attributes\FromBody]
\Dapr\PubSub\CloudEvent $cloudEvent,
\Psr\Log\LoggerInterface $logger
) {
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
return ['status' => 'SUCCESS'];
}
);
$app->start();
Common Expression Language (CEL)
In these examples, depending on the event.type, the application will be called on:
- /widgets
- /gadgets
- /products
The expressions are written as Common Expression Language (CEL) where event represents the cloud event. Any of the attributes from the CloudEvents core specification can be referenced in the expression.
Example expressions
Match “important” messages:
has(event.data.important) && event.data.important == true
Match deposits greater than $10,000:
event.type == "deposit" && int(event.data.amount) > 10000
Note
By default, the numeric values are written as double-precision floating-point. There are no automatic arithmetic conversions for numeric values. In this case, if event.data.amount is not cast as integer, the match is not performed. For more information, see the CEL documentation.
Match multiple versions of a message:
event.type == "mymessage.v1"
event.type == "mymessage.v2"
CloudEvent attributes
For reference, the following attributes are from the CloudEvents specification.
Event Data
data
As defined by the term data, CloudEvents may include domain-specific information about the occurrence. When present, this information will be encapsulated within data.
- Description: The event payload. This specification places no restriction on the information type. It is encoded into a media format, specified by the datacontenttype attribute (e.g. application/json), and adheres to the dataschema format when those respective attributes are present.
- Constraints:
  - OPTIONAL
Limitation
Currently, you can only access the attributes inside data if it is nested JSON values and not JSON escaped in a string.
REQUIRED Attributes
The following attributes are required in all CloudEvents:
id
- Type: String
- Description: Identifies the event. Producers must ensure that source + id are unique for each distinct event. If a duplicate event is re-sent (e.g. due to a network error), it may have the same id. Consumers may assume that events with identical source and id are duplicates.
- Constraints:
  - REQUIRED
  - Must be a non-empty string
  - Must be unique within the scope of the producer
- Examples:
  - An event counter maintained by the producer
  - A UUID
source
- Type: URI-reference
- Description: Identifies the context in which an event happened. Often this includes information such as:
  - The type of the event source
  - The organization publishing the event
  - The process that produced the event
  The exact syntax and semantics behind the data encoded in the URI is defined by the event producer. Producers must ensure that source + id are unique for each distinct event. An application may:
  - Assign a unique source to each distinct producer, making it easier to produce unique IDs and preventing other producers from having the same source.
  - Use UUIDs, URNs, DNS authorities, or an application-specific scheme to create unique source identifiers.
  A source may include more than one producer. In this case, the producers must collaborate to ensure that source + id are unique for each distinct event.
- Constraints:
  - REQUIRED
  - Must be a non-empty URI-reference
  - An absolute URI is RECOMMENDED
- Examples:
  - Internet-wide unique URI with a DNS authority:
    - https://github.com/cloudevents
    - mailto:cncf-wg-serverless@lists.cncf.io
  - Universally-unique URN with a UUID:
    - urn:uuid:6e8bc430-9c3a-11d9-9669-0800200c9a66
  - Application-specific identifiers:
    - /cloudevents/spec/pull/123
    - /sensors/tn-1234567/alerts
    - 1-555-123-4567
specversion
- Type: String
- Description: The version of the CloudEvents specification used by the event. This enables the interpretation of the context. Compliant event producers must use a value of 1.0 when referring to this version of the specification.
  Currently, this attribute only includes the ‘major’ and ‘minor’ version numbers. This allows patch changes to the specification to be made without changing this property’s value in the serialization.
  Note: for ‘release candidate’ releases, a suffix might be used for testing purposes.
- Constraints:
  - REQUIRED
  - Must be a non-empty string
type
- Type: String
- Description: Contains a value describing the event type related to the originating occurrence. Often, this attribute is used for routing, observability, policy enforcement, etc. The format is producer-defined and might include information like the version of the type. See Versioning of CloudEvents in the Primer for more information.
- Constraints:
  - REQUIRED
  - Must be a non-empty string
  - Should be prefixed with a reverse-DNS name. The prefixed domain dictates the organization, which defines the semantics of this event type.
- Examples:
  - com.github.pull_request.opened
  - com.example.object.deleted.v2
OPTIONAL Attributes
The following attributes are optional in CloudEvents. See the Notational Conventions section for more information on the definition of OPTIONAL.
datacontenttype
- Type: String per RFC 2046
- Description: Content type of the data value. This attribute enables data to carry any type of content, whereby format and encoding might differ from that of the chosen event format. For example, an event rendered using the JSON envelope format might carry an XML payload in data. The consumer is informed by this attribute being set to "application/xml".
  The rules for how data content is rendered for different datacontenttype values are defined in the event format specifications. For example, the JSON event format defines the relationship in section 3.1.
  For some binary mode protocol bindings, this field is directly mapped to the respective protocol’s content-type metadata property. You can find normative rules for the binary mode and the content-type metadata mapping in the respective protocol.
  In some event formats, you may omit the datacontenttype attribute. For example, if a JSON format event has no datacontenttype attribute, it’s implied that the data is a JSON value conforming to the "application/json" media type. In other words: a JSON-format event with no datacontenttype is exactly equivalent to one with datacontenttype="application/json".
  When translating an event message with no datacontenttype attribute to a different format or protocol binding, the target datacontenttype should be set explicitly to the implied datacontenttype of the source.
- Constraints:
  - OPTIONAL
  - If present, must adhere to the format specified in RFC 2046
- For Media Type examples, see IANA Media Types
dataschema
- Type: URI
- Description: Identifies the schema that data adheres to. Incompatible changes to the schema should be reflected by a different URI. See Versioning of CloudEvents in the Primer for more information.
- Constraints:
  - OPTIONAL
  - If present, must be a non-empty URI
subject
- Type: String
- Description: This describes the event subject in the context of the event producer (identified by source). In publish-subscribe scenarios, a subscriber will typically subscribe to events emitted by a source. The source identifier alone might not be sufficient as a qualifier for any specific event if the source context has internal sub-structure.
  Identifying the subject of the event in context metadata (opposed to only in the data payload) is helpful in generic subscription filtering scenarios, where middleware is unable to interpret the data content. In the example below, the subscriber might only be interested in blobs with names ending with ‘.jpg’ or ‘.jpeg’. With the subject attribute, you can construct a simple and efficient string-suffix filter for that subset of events.
- Constraints:
  - OPTIONAL
  - If present, must be a non-empty string
- Example:
  A subscriber might register interest for when new blobs are created inside a blob-storage container. In this case:
  - The event source identifies the subscription scope (storage container)
  - The event type identifies the “blob created” event
  - The event id uniquely identifies the event instance to distinguish separately created occurrences of a same-named blob.
  The name of the newly created blob is carried in subject:
  - source: https://example.com/storage/tenant/container
  - subject: mynewfile.jpg
time
- Type: Timestamp
- Description: Timestamp of when the occurrence happened. If the time of the occurrence cannot be determined, then this attribute may be set to some other time (such as the current time) by the CloudEvents producer. However, all producers for the same source must be consistent in this respect. In other words, either they all use the actual time of the occurrence or they all use the same algorithm to determine the value used.
- Constraints:
  - OPTIONAL
  - If present, must adhere to the format specified in RFC 3339
Limitation
Currently, comparisons to time (e.g. before or after “now”) are not supported.
Community call demo
Watch this video on how to use message routing with pub/sub:
Next steps
- Try the pub/sub routing sample.
- Learn about topic scoping and message time-to-live.
- Configure pub/sub components with multiple namespaces.
- Review the list of pub/sub components.
- Read the API reference.
1.2.6 - Declarative, streaming, and programmatic subscription types
Pub/sub API subscription types
Dapr applications can subscribe to published topics via three subscription types that support the same features: declarative, streaming and programmatic.
Subscription type | Description |
---|---|
Declarative | Subscription is defined in an external file. The declarative approach removes the Dapr dependency from your code and allows for existing applications to subscribe to topics, without having to change code. |
Streaming | Subscription is defined in the application code. Streaming subscriptions are dynamic, meaning they allow for adding or removing subscriptions at runtime. They do not require a subscription endpoint in your application (that is required by both programmatic and declarative subscriptions), making them easy to configure in code. Streaming subscriptions also do not require an app to be configured with the sidecar to receive messages. |
Programmatic | Subscription is defined in the application code. The programmatic approach implements the static subscription and requires an endpoint in your code. |
The examples below demonstrate pub/sub messaging between a checkout app and an orderprocessing app via the orders topic. The examples demonstrate the same Dapr pub/sub component used first declaratively, then programmatically.
Declarative subscriptions
Note
This feature is currently in preview. Dapr can be made to “hot reload” declarative subscriptions, whereby updates are picked up automatically without needing a restart. This is enabled via the HotReload feature gate. To prevent reprocessing or loss of unprocessed messages, in-flight messages between Dapr and your application are unaffected during hot reload events.
You can subscribe declaratively to a topic using an external component file. This example uses a YAML component file named subscription.yaml:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order
spec:
  topic: orders
  routes:
    default: /orders
  pubsubname: pubsub
scopes:
- orderprocessing
Here the subscription called order:
- Uses the pub/sub component called pubsub to subscribe to the topic called orders.
- Sets the route field to send all topic messages to the /orders endpoint in the app.
- Sets the scopes field to scope this subscription for access only by apps with ID orderprocessing.
When running Dapr, set the YAML component file path to point Dapr to the component.
dapr run --app-id myapp --resources-path ./myComponents -- dotnet run
dapr run --app-id myapp --resources-path ./myComponents -- mvn spring-boot:run
dapr run --app-id myapp --resources-path ./myComponents -- python3 app.py
dapr run --app-id myapp --resources-path ./myComponents -- npm start
dapr run --app-id myapp --resources-path ./myComponents -- go run app.go
In Kubernetes, apply the component to the cluster:
kubectl apply -f subscription.yaml
In your application code, subscribe to the topic specified in the Dapr pub/sub component.
//Subscribe to a topic
[HttpPost("orders")]
public void getCheckout([FromBody] int orderId)
{
Console.WriteLine("Subscriber received : " + orderId);
}
import io.dapr.client.domain.CloudEvent;
//Subscribe to a topic
@PostMapping(path = "/orders")
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
try {
log.info("Subscriber received: " + cloudEvent.getData());
}
});
}
from cloudevents.sdk.event import v1
#Subscribe to a topic
@app.route('/orders', methods=['POST'])
def checkout(event: v1.Event) -> None:
    data = json.loads(event.Data())
    logging.info('Subscriber received: ' + str(data))
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
// listen to the declarative route
app.post('/orders', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
//Subscribe to a topic
var sub = &common.Subscription{
PubsubName: "pubsub",
Topic: "orders",
Route: "/orders",
}
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
log.Printf("Subscriber received: %s", e.Data)
return false, nil
}
The /orders endpoint matches the route defined in the subscription, and this is where Dapr sends all topic messages.
Streaming subscriptions
Streaming subscriptions are subscriptions defined in application code that can be dynamically stopped and started at runtime. Messages are pulled by the application from Dapr. This means no endpoint is needed to subscribe to a topic, and it’s possible to subscribe without any app configured on the sidecar at all. Any number of pubsubs and topics can be subscribed to at once. As messages are sent to the given message handler code, there is no concept of routes or bulk subscriptions.
Note: Only a single pubsub/topic pair per application may be subscribed at a time.
The example below shows the different ways to stream subscribe to a topic.
You can use the SubscribeAsync method on the DaprPublishSubscribeClient to configure the message handler used to pull messages from the stream.
using System.Text;
using Dapr.Messaging.PublishSubscribe;
using Dapr.Messaging.PublishSubscribe.Extensions;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprPubSubClient();
var app = builder.Build();
var messagingClient = app.Services.GetRequiredService<DaprPublishSubscribeClient>();
//Create a dynamic streaming subscription and subscribe with a timeout of 30 seconds and 10 seconds for message handling
var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(30));
var subscription = await messagingClient.SubscribeAsync("pubsub", "myTopic",
new DaprSubscriptionOptions(new MessageHandlingPolicy(TimeSpan.FromSeconds(10), TopicResponseAction.Retry)),
HandleMessageAsync, cancellationTokenSource.Token);
await Task.Delay(TimeSpan.FromMinutes(1));
//When you're done with the subscription, simply dispose of it
await subscription.DisposeAsync();
return;
//Process each message returned from the subscription
Task<TopicResponseAction> HandleMessageAsync(TopicMessage message, CancellationToken cancellationToken = default)
{
try
{
//Do something with the message
Console.WriteLine(Encoding.UTF8.GetString(message.Data.Span));
return Task.FromResult(TopicResponseAction.Success);
}
catch
{
return Task.FromResult(TopicResponseAction.Retry);
}
}
Learn more about streaming subscriptions using the .NET SDK client.
You can use the subscribe method, which returns a Subscription object and allows you to pull messages from the stream by calling the next_message method. This runs in the main thread and may block it while waiting for messages.
import time
from dapr.clients import DaprClient
from dapr.clients.grpc.subscription import StreamInactiveError
counter = 0

def process_message(message):
    global counter
    counter += 1
    # Process the message here
    print(f'Processing message: {message.data()} from {message.topic()}...')
    return 'success'

def main():
    with DaprClient() as client:
        global counter
        subscription = client.subscribe(
            pubsub_name='pubsub', topic='orders', dead_letter_topic='orders_dead'
        )
        try:
            while counter < 5:
                try:
                    message = subscription.next_message()
                except StreamInactiveError as e:
                    print('Stream is inactive. Retrying...')
                    time.sleep(1)
                    continue
                if message is None:
                    print('No message received within timeout period.')
                    continue
                # Process the message
                response_status = process_message(message)
                if response_status == 'success':
                    subscription.respond_success(message)
                elif response_status == 'retry':
                    subscription.respond_retry(message)
                elif response_status == 'drop':
                    subscription.respond_drop(message)
        finally:
            print("Closing subscription...")
            subscription.close()

if __name__ == '__main__':
    main()
You can also use the subscribe_with_handler method, which accepts a callback function executed for each message received from the stream. This runs in a separate thread, so it doesn’t block the main thread.
import time
from dapr.clients import DaprClient
from dapr.clients.grpc._response import TopicEventResponse
counter = 0

def process_message(message):
    # Process the message here
    global counter
    counter += 1
    print(f'Processing message: {message.data()} from {message.topic()}...')
    return TopicEventResponse('success')

def main():
    with DaprClient() as client:
        # This will start a new thread that will listen for messages
        # and process them in the `process_message` function
        close_fn = client.subscribe_with_handler(
            pubsub_name='pubsub', topic='orders', handler_fn=process_message,
            dead_letter_topic='orders_dead'
        )
        while counter < 5:
            time.sleep(1)
        print("Closing subscription...")
        close_fn()

if __name__ == '__main__':
    main()
Learn more about streaming subscriptions using the Python SDK client.
package main
import (
"context"
"log"
"github.com/dapr/go-sdk/client"
)
func main() {
cl, err := client.NewClient()
if err != nil {
log.Fatal(err)
}
sub, err := cl.Subscribe(context.Background(), client.SubscriptionOptions{
PubsubName: "pubsub",
Topic: "orders",
})
if err != nil {
panic(err)
}
// Close must always be called.
defer sub.Close()
for {
msg, err := sub.Receive()
if err != nil {
panic(err)
}
// Process the event
// We _MUST_ always signal the result of processing the message, else the
// message will not be considered as processed and will be redelivered or
// dead lettered.
// msg.Retry()
// msg.Drop()
if err := msg.Success(); err != nil {
panic(err)
}
}
}
or
package main
import (
"context"
"log"
"github.com/dapr/go-sdk/client"
"github.com/dapr/go-sdk/service/common"
)
func main() {
cl, err := client.NewClient()
if err != nil {
log.Fatal(err)
}
stop, err := cl.SubscribeWithHandler(context.Background(),
client.SubscriptionOptions{
PubsubName: "pubsub",
Topic: "orders",
},
eventHandler,
)
if err != nil {
panic(err)
}
// Stop must always be called.
defer stop()
<-make(chan struct{})
}
func eventHandler(e *common.TopicEvent) common.SubscriptionResponseStatus {
	// Process message here.
	// Other possible responses:
	// common.SubscriptionResponseStatusRetry
	// common.SubscriptionResponseStatusDrop
	return common.SubscriptionResponseStatusSuccess
}
Demo
Watch this video for an overview on streaming subscriptions:
Programmatic subscriptions
The dynamic programmatic approach returns the routes JSON structure within the code, unlike the declarative approach’s route YAML structure.
Note: Programmatic subscriptions are only read once during application start-up. You cannot dynamically add new programmatic subscriptions at runtime, only add new ones at compile time.
In the example below, you define the values found in the declarative YAML subscription above within the application code.
[Topic("pubsub", "orders")]
[HttpPost("/orders")]
public async Task<ActionResult<Order>>Checkout(Order order, [FromServices] DaprClient daprClient)
{
// Logic
return order;
}
or
// Dapr subscription in [Topic] routes orders topic to this route
app.MapPost("/orders", [Topic("pubsub", "orders")] (Order order) => {
Console.WriteLine("Subscriber received : " + order);
return Results.Ok(order);
});
Both of the handlers defined above also need to be mapped to configure the dapr/subscribe
endpoint. This is done in the application startup code while defining endpoints.
app.UseEndpoints(endpoints =>
{
endpoints.MapSubscribeHandler();
});
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
@Topic(name = "orders", pubsubName = "pubsub")
@PostMapping(path = "/orders")
public Mono<Void> handleMessage(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
try {
System.out.println("Subscriber received: " + cloudEvent.getData());
System.out.println("Subscriber received: " + OBJECT_MAPPER.writeValueAsString(cloudEvent));
} catch (Exception e) {
throw new RuntimeException(e);
}
});
}
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [
        {
            'pubsubname': 'pubsub',
            'topic': 'orders',
            'routes': {
                'rules': [
                    {
                        'match': 'event.type == "order"',
                        'path': '/orders'
                    },
                ],
                'default': '/orders'
            }
        }]
    return jsonify(subscriptions)

@app.route('/orders', methods=['POST'])
def ds_subscriber():
    print(request.json, flush=True)
    return json.dumps({'success': True}), 200, {'ContentType': 'application/json'}

app.run()
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
const port = 3000
app.get('/dapr/subscribe', (req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "orders",
routes: {
rules: [
{
match: 'event.type == "order"',
path: '/orders'
},
],
default: '/orders'
}
}
]);
})
app.post('/orders', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
package main
import (
"encoding/json"
"fmt"
"log"
"net/http"
"github.com/gorilla/mux"
)
const appPort = 3000
type subscription struct {
PubsubName string `json:"pubsubname"`
Topic string `json:"topic"`
Metadata map[string]string `json:"metadata,omitempty"`
Routes routes `json:"routes"`
}
type routes struct {
Rules []rule `json:"rules,omitempty"`
Default string `json:"default,omitempty"`
}
type rule struct {
Match string `json:"match"`
Path string `json:"path"`
}
// This handles /dapr/subscribe
func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
t := []subscription{
{
PubsubName: "pubsub",
Topic: "orders",
Routes: routes{
Rules: []rule{
{
Match: `event.type == "order"`,
Path: "/orders",
},
},
Default: "/orders",
},
},
}
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(t)
}
func main() {
router := mux.NewRouter().StrictSlash(true)
router.HandleFunc("/dapr/subscribe", configureSubscribeHandler).Methods("GET")
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", appPort), router))
}
Next Steps
- Try out the pub/sub Quickstart
- Follow: How-To: Configure pub/sub components with multiple namespaces
- Learn more about declarative and programmatic subscription methods.
- Learn about topic scoping
- Learn about message TTL
- Learn more about pub/sub with and without CloudEvent
- List of pub/sub components
- Read the pub/sub API reference
1.2.7 - Dead Letter Topics
Introduction
There are times when applications might not be able to handle messages for a variety of reasons. For example, there could be transient issues retrieving data needed to process a message, or the app's business logic may fail and return an error. Dead letter topics are used to forward messages that cannot be delivered to a subscribing app. This eases the pressure on apps by freeing them from dealing with these failed messages, allowing developers to write code that reads from the dead letter topic and either fixes the message and resends it, or abandons it completely.
Dead letter topics are typically used along with a retry resiliency policy and a dead letter subscription that handles the required logic for dealing with the messages forwarded from the dead letter topic.
When a dead letter topic is set, any message that failed to be delivered to an app for a configured topic is put on the dead letter topic to be forwarded to a subscription that handles these messages. This could be the same app or a completely different one.
Dapr enables dead letter topics for all of its pub/sub components, even if the underlying system does not support this feature natively. For example, the AWS SNS component has a dead letter queue and RabbitMQ has dead letter topics. You will need to ensure that you configure components like this appropriately.
The diagram below is an example of how dead letter topics work. First a message is sent from a publisher on an orders topic. Dapr receives the message on behalf of a subscriber application; however, the orders topic message fails to be delivered to the /checkout endpoint on the application, even after retries. As a result of the failure to deliver, the message is forwarded to the poisonMessages topic, which delivers it to the /failedMessages endpoint to be processed, in this case on the same application. The failedMessages processing code could drop the message or resend a new message.

Configuring a dead letter topic with a declarative subscription
The following YAML shows how to configure a subscription with a dead letter topic named poisonMessages for messages consumed from the orders topic. This subscription is scoped to an app with the ID checkout.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: pubsub
  deadLetterTopic: poisonMessages
scopes:
- checkout
Configuring a dead letter topic with a streaming subscription
var deadLetterTopic = "poisonMessages"
sub, err := cl.Subscribe(context.Background(), client.SubscriptionOptions{
PubsubName: "pubsub",
Topic: "orders",
DeadLetterTopic: &deadLetterTopic,
})
Configuring a dead letter topic with programmatic subscription
The JSON returned from the /subscribe endpoint shows how to configure a dead letter topic named poisonMessages for messages consumed from the orders topic.
app.get('/dapr/subscribe', (_req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "orders",
route: "/checkout",
deadLetterTopic: "poisonMessages"
}
]);
});
Retries and dead letter topics
By default, when a dead letter topic is set, any failing message immediately goes to the dead letter topic. As a result, it is recommended to always have a retry policy set when using dead letter topics in a subscription. To enable the retry of a message before sending it to the dead letter topic, apply a retry resiliency policy to the pub/sub component.
This example shows how to set a constant retry policy named pubsubRetry, with 10 maximum delivery attempts applied every 5 seconds for the pubsub pub/sub component.
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 10
  targets:
    components:
      pubsub:
        inbound:
          retry: pubsubRetry
Configuring a subscription for handling the dead letter topic
Remember to now configure a subscription to handle the dead letter topic. For example, you can create another declarative subscription to receive these messages on the same or a different application. The example below shows the checkout application subscribing to the poisonMessages topic with another subscription, sending these messages to be handled by the /failedMessages endpoint.
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: deadlettertopics
spec:
  topic: poisonMessages
  routes:
    rules:
      - match:
        path: /failedMessages
  pubsubname: pubsub
scopes:
- checkout
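The handler behind the /failedMessages endpoint is ordinary application code. As a minimal sketch, assuming a plain HTTP server and that the dead-lettered message arrives as a CloudEvent JSON body (the port and field handling here are illustrative, not part of the Dapr API):

package main

import (
    "encoding/json"
    "io"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/failedMessages", func(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // Dead-lettered messages are delivered like any other subscription,
        // wrapped as CloudEvents. Inspect, log, fix-and-republish, or drop.
        var event map[string]interface{}
        if err := json.Unmarshal(body, &event); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        log.Printf("dead-lettered message received: id=%v data=%v", event["id"], event["data"])
        w.WriteHeader(http.StatusOK) // returning 200 acknowledges (drops) the message
    })
    log.Fatal(http.ListenAndServe(":6002", nil))
}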
Demo
Watch this video for an overview of dead letter topics:
Next steps
- For more information on resiliency policies, read Resiliency overview.
- For more information on topic subscriptions, read Declarative, streaming, and programmatic subscription methods.
1.2.8 - How to: Set up pub/sub namespace consumer groups
You’ve set up Dapr’s pub/sub API building block, and your applications are publishing and subscribing to topics smoothly, using a centralized message broker. What if you’d like to perform simple A/B testing, blue/green deployments, or even canary deployments for your applications? Even when using Dapr, this can prove difficult.
Dapr solves multi-tenancy at-scale with its pub/sub namespace consumer groups construct.
Without namespace consumer groups
Let’s say you have a Kubernetes cluster with two applications (App1 and App2) deployed to the same namespace (namespace-a). App2 publishes to a topic called order, while App1 subscribes to the topic called order. This creates two consumer groups, named after your applications (App1 and App2).

In order to perform simple testing and deployments while using a centralized message broker, you create another namespace with two applications of the same app-id, App1 and App2. Dapr creates consumer groups using the app-id of individual applications, so the consumer group names will remain App1 and App2.

To avoid this, you’d then need to have something “creep” into your code to change the app-id, depending on the namespace in which you’re running. This workaround is cumbersome and a significant pain point.
With namespace consumer groups
Not only does Dapr allow you to change the behavior of a consumer group with a consumerID set to UUID or pod name values, it also provides a namespace construct that lives in the pub/sub component metadata. For example, using Redis as your message broker:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: consumerID
    value: "{namespace}"
By configuring consumerID with the {namespace} value, you’ll be able to use the same app-id with the same topics from different namespaces.

In the diagram above, you have two namespaces, each with applications of the same app-id, publishing and subscribing to the same centralized message broker orders. This time, however, Dapr has created consumer group names prefixed with the namespace in which they’re running.
Without you needing to change your code/app-id, the namespace consumer group allows you to:
- Add more namespaces
- Keep the same topics
- Keep the same app-id across namespaces
- Have your entire deployment pipeline remain intact
Simply include the "{namespace}" consumer group construct in your component metadata. You don’t need to encode the namespace in the metadata. Dapr understands the namespace it is running in and completes the namespace value for you, like a dynamic metadata value injected by the runtime.
Note
If you add the namespace consumer group to your metadata afterwards, Dapr updates everything for you. This means that you can add the namespace metadata value to existing pub/sub deployments.
Demo
Watch this video for an overview on pub/sub multi-tenancy:
Next steps
- Learn more about configuring Pub/Sub components with multiple namespaces pub/sub namespaces.
1.2.9 - How to: Horizontally scale subscribers with StatefulSets
Unlike Deployments, where Pods are ephemeral, StatefulSets allow the deployment of stateful applications on Kubernetes by keeping a sticky identity for each Pod.
Below is an example of a StatefulSet with Dapr:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: python-subscriber
spec:
  selector:
    matchLabels:
      app: python-subscriber # has to match .spec.template.metadata.labels
  serviceName: "python-subscriber"
  replicas: 3
  template:
    metadata:
      labels:
        app: python-subscriber # has to match .spec.selector.matchLabels
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "python-subscriber"
        dapr.io/app-port: "5001"
    spec:
      containers:
      - name: python-subscriber
        image: ghcr.io/dapr/samples/pubsub-python-subscriber:latest
        ports:
        - containerPort: 5001
        imagePullPolicy: Always
When subscribing to a pub/sub topic via Dapr, the application can define the consumerID, which determines the subscriber’s position in the queue or topic. With the sticky identity of StatefulSet Pods, you can have a unique consumerID per Pod, allowing each instance of the horizontally scaled subscriber application to consume independently. Dapr keeps track of the name of each Pod, which can be used when declaring components using the {podName} marker.
When scaling the number of subscribers of a given topic, each Dapr pub/sub component has unique settings that determine the behavior. Usually, there are two options for multiple consumers:
- Broadcast: each message published to the topic will be consumed by all subscribers.
- Shared: a message is consumed by any subscriber (but not all).
Kafka isolates each subscriber by consumerID, with its own position in the topic. When an instance restarts, it reuses the same consumerID and continues from its last known position, without skipping messages. The component below demonstrates how a Kafka component can be used by multiple Pods:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092
  - name: consumerID
    value: "{podName}"
  - name: authRequired
    value: "false"
The MQTT3 protocol has shared topics, allowing multiple subscribers to “compete” for messages from the topic, meaning a message is only processed by one of them. For example:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt3
  version: v1
  metadata:
  - name: consumerID
    value: "{podName}"
  - name: cleanSession
    value: "true"
  - name: url
    value: "tcp://admin:public@localhost:1883"
  - name: qos
    value: 1
  - name: retain
    value: "false"
Next steps
- Try the pub/sub tutorial.
- Learn about messaging with CloudEvents and when you might want to send messages without CloudEvents.
- Review the list of pub/sub components.
- Read the API reference.
1.2.10 - Scope Pub/sub topic access
Introduction
Namespaces or component scopes can be used to limit component access to particular applications. Adding application scopes to a component limits its use to only the applications with the specified IDs.
In addition to this general component scope, the following can be limited for pub/sub components:
- Which topics can be used (published or subscribed)
- Which applications are allowed to publish to specific topics
- Which applications are allowed to subscribe to specific topics
This is called pub/sub topic scoping.
Pub/sub scopes are defined for each pub/sub component. You may have a pub/sub component named pubsub that has one set of scopes, and another pubsub2 with a different set.
To use this topic scoping, the following metadata properties can be set for a pub/sub component:
- spec.metadata.publishingScopes
  - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to publish to that list of topics
  - If nothing is specified in publishingScopes (default behavior), all apps can publish to all topics
  - To deny an app the ability to publish to any topic, leave the topics list blank (app1=;app2=topic2)
  - For example, app1=topic1;app2=topic2,topic3;app3= will allow app1 to publish to topic1 and nothing else, app2 to publish to topic2 and topic3 only, and app3 to publish to nothing.
- spec.metadata.subscriptionScopes
  - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to subscribe to that list of topics
  - If nothing is specified in subscriptionScopes (default behavior), all apps can subscribe to all topics
  - For example, app1=topic1;app2=topic2,topic3 will allow app1 to subscribe to topic1 only and app2 to subscribe to topic2 and topic3
- spec.metadata.allowedTopics
  - A comma-separated list of allowed topics for all applications.
  - If allowedTopics is not set (default behavior), all topics are valid. subscriptionScopes and publishingScopes still take effect if present.
  - publishingScopes or subscriptionScopes can be used in conjunction with allowedTopics to add granular limitations.
- spec.metadata.protectedTopics
  - A comma-separated list of protected topics for all applications.
  - If a topic is marked as protected, then an application must be explicitly granted publish or subscribe permissions through publishingScopes or subscriptionScopes to publish/subscribe to it.
These metadata properties can be used for all pub/sub components. The following examples use Redis as the pub/sub component.
Example 1: Scope topic access
Limiting which applications can publish/subscribe to topics can be useful if you have topics which contain sensitive information, and only a subset of your applications are allowed to publish or subscribe to these.
It can also be applied to all topics so that you always have a “ground truth” for which applications are using which topics as publishers/subscribers.
Here is an example of three applications and three topics:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: publishingScopes
    value: "app1=topic1;app2=topic2,topic3;app3="
  - name: subscriptionScopes
    value: "app2=;app3=topic1"
The table below shows which applications are allowed to publish into the topics:
| | topic1 | topic2 | topic3 |
|---|---|---|---|
| app1 | ✅ | | |
| app2 | | ✅ | ✅ |
| app3 | | | |
The table below shows which applications are allowed to subscribe to the topics:
| | topic1 | topic2 | topic3 |
|---|---|---|---|
| app1 | ✅ | ✅ | ✅ |
| app2 | | | |
| app3 | ✅ | | |
Note: If an application is not listed (e.g. app1 in subscriptionScopes), it is allowed to subscribe to all topics. Because allowedTopics is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above.
Example 2: Limit allowed topics
A topic is created if a Dapr application sends a message to it. In some scenarios this topic creation should be governed. For example:
- A bug in a Dapr application when generating the topic name can lead to an unlimited number of topics being created
- Streamlining the topic names and total count prevents unlimited growth of topics
In these situations, allowedTopics can be used.
Here is an example of three allowed topics:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: allowedTopics
    value: "topic1,topic2,topic3"
All applications can use these topics, but only these topics; no others are allowed.
Example 3: Combine allowedTopics and scopes
Sometimes you want to combine both scopes, so that you have only a fixed set of allowed topics while also scoping them to certain applications.
Here is an example of three applications and two topics:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: allowedTopics
    value: "A,B"
  - name: publishingScopes
    value: "app1=A"
  - name: subscriptionScopes
    value: "app1=;app2=A"
Note: The third application is not listed, because if an app is not specified inside the scopes, it is allowed to use all topics.
The table below shows which application is allowed to publish into the topics:
| | A | B | C |
|---|---|---|---|
| app1 | ✅ | | |
| app2 | ✅ | ✅ | |
| app3 | ✅ | ✅ | |
The table below shows which application is allowed to subscribe to the topics:
| | A | B | C |
|---|---|---|---|
| app1 | | | |
| app2 | ✅ | | |
| app3 | ✅ | ✅ | |
Example 4: Mark topics as protected
If your topic involves sensitive data, each new application must be explicitly listed in the publishingScopes and subscriptionScopes to ensure it cannot read from or write to that topic. Alternatively, you can designate the topic as ‘protected’ (using protectedTopics) and grant access only to specific applications that genuinely require it.
Here is an example of three applications and three topics, two of which are protected:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: protectedTopics
    value: "A,B"
  - name: publishingScopes
    value: "app1=A,B;app2=B"
  - name: subscriptionScopes
    value: "app1=A,B;app2=B"
In the example above, topics A and B are marked as protected. As a result, even though app3 is not listed under publishingScopes or subscriptionScopes, it cannot interact with these topics.
The table below shows which application is allowed to publish into the topics:
| | A | B | C |
|---|---|---|---|
| app1 | ✅ | ✅ | |
| app2 | | ✅ | |
| app3 | | | ✅ |
The table below shows which application is allowed to subscribe to the topics:
| | A | B | C |
|---|---|---|---|
| app1 | ✅ | ✅ | |
| app2 | | ✅ | |
| app3 | | | ✅ |
Demo
Next steps
- Learn how to configure pub/sub components with multiple namespaces
- Learn about message time-to-live
- List of pub/sub components
- Read the API reference
1.2.11 - Message Time-to-Live (TTL)
Introduction
Dapr enables per-message time-to-live (TTL). This means that applications can set time-to-live per message, and subscribers do not receive those messages after expiration.
All Dapr pub/sub components are compatible with message TTL, as Dapr handles the TTL logic within the runtime. Simply set the ttlInSeconds metadata when publishing a message.
In some components, such as Kafka, time-to-live can be configured in the topic via retention.ms as per the documentation. With message TTL in Dapr, applications using Kafka can now set time-to-live per message in addition to per topic.
Native message TTL support
When message time-to-live has native support in the pub/sub component, Dapr simply forwards the time-to-live configuration without adding any extra logic, keeping behavior predictable. This is helpful when expired messages are handled differently by the component. For example, with Azure Service Bus, expired messages are stored in the dead letter queue and are not simply deleted.
Note
You can also set message TTL for a given message broker at creation time. Look at the specific characteristics of the component that you are using to see if this is suitable.
Supported components
Azure Service Bus
Azure Service Bus supports entity level time-to-live. This means that messages have a default time-to-live but can also be set with a shorter timespan at publishing time. Dapr propagates the time-to-live metadata for the message and lets Azure Service Bus handle the expiration directly.
Non-Dapr subscribers
If messages are consumed by subscribers not using Dapr, the expired messages are not automatically dropped, as expiration is handled by the Dapr runtime when a Dapr sidecar receives a message. However, subscribers can programmatically drop expired messages by adding logic to handle the expiration attribute in the cloud event, which follows the RFC3339 format.
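As a minimal sketch of that logic, assuming the consumed message body is CloudEvent JSON carrying an expiration attribute (the field handling and sample payload here are illustrative):

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

// isExpired reports whether a CloudEvent carries an "expiration"
// attribute (RFC3339) that is already in the past.
func isExpired(rawEvent []byte) (bool, error) {
    var event struct {
        Expiration string `json:"expiration"`
    }
    if err := json.Unmarshal(rawEvent, &event); err != nil {
        return false, err
    }
    if event.Expiration == "" {
        return false, nil // no TTL set; the message never expires
    }
    expiresAt, err := time.Parse(time.RFC3339, event.Expiration)
    if err != nil {
        return false, err
    }
    return time.Now().After(expiresAt), nil
}

func main() {
    msg := []byte(`{"id":"1234","expiration":"2024-01-01T00:00:00Z","data":"hello"}`)
    expired, err := isExpired(msg)
    if err != nil {
        panic(err)
    }
    fmt.Println("expired:", expired) // a non-Dapr subscriber would drop expired messages
}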
When non-Dapr subscribers use components such as Azure Service Bus, which natively handle message TTL, they do not receive expired messages. Here, no extra logic is needed.
Example
Message TTL can be set in the metadata as part of the publishing request:
curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/TOPIC_A?metadata.ttlInSeconds=120 -H "Content-Type: application/json" -d '{"order-number": "345"}'
import json

from dapr.clients import DaprClient

with DaprClient() as d:
    req_data = {
        'order-number': '345'
    }

    # Create a typed message with content type and body
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic='TOPIC_A',
        data=json.dumps(req_data),
        publish_metadata={'ttlInSeconds': '120'}
    )

    # Print the request
    print(req_data, flush=True)
<?php

require_once __DIR__.'/vendor/autoload.php';

$app = \Dapr\App::create();
$app->run(function(\DI\FactoryInterface $factory) {
    $publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
    $publisher->topic('TOPIC_A')->publish('data', ['ttlInSeconds' => '120']);
});
See this guide for a reference on the pub/sub API.
Next steps
- Learn about topic scoping
- Learn how to configure pub/sub components with multiple namespaces
- List of pub/sub components
- Read the API reference
1.2.12 - Publish and subscribe to bulk messages
alpha
The bulk publish and subscribe APIs are in alpha stage.
With the bulk publish and subscribe APIs, you can publish and subscribe to multiple messages in a single request. When writing applications that need to send or receive a large number of messages, using bulk operations allows achieving high throughput by reducing the overall number of requests between the Dapr sidecar, the application, and the underlying pub/sub broker.
Publishing messages in bulk
Restrictions when publishing messages in bulk
The bulk publish API allows you to publish multiple messages to a topic in a single request. It is non-transactional, i.e., from a single bulk request, some messages can succeed and some can fail. If any of the messages fail to publish, the bulk publish operation returns a list of failed messages.
The bulk publish operation also does not guarantee any ordering of messages.
Example
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprPreviewClient;
import io.dapr.client.domain.BulkPublishResponse;
import io.dapr.client.domain.BulkPublishResponseFailedEntry;
import java.util.ArrayList;
import java.util.List;

class BulkPublisher {
  private static final String PUBSUB_NAME = "my-pubsub-name";
  private static final String TOPIC_NAME = "topic-a";

  public void publishMessages() {
    try (DaprPreviewClient client = (new DaprClientBuilder()).buildPreviewClient()) {
      // Create a list of messages to publish
      List<String> messages = new ArrayList<>();
      for (int i = 0; i < 10; i++) {
        String message = String.format("This is message #%d", i);
        messages.add(message);
      }

      // Publish the list of messages using the bulk publish API
      BulkPublishResponse<String> res = client.publishEvents(PUBSUB_NAME, TOPIC_NAME, "text/plain", messages).block();
    }
  }
}
import { DaprClient } from "@dapr/dapr";

const pubSubName = "my-pubsub-name";
const topic = "topic-a";

async function start() {
  const client = new DaprClient();

  // Publish multiple messages to a topic.
  await client.pubsub.publishBulk(pubSubName, topic, ["message 1", "message 2", "message 3"]);

  // Publish multiple messages to a topic with explicit bulk publish messages.
  const bulkPublishMessages = [
    {
      entryID: "entry-1",
      contentType: "application/json",
      event: { hello: "foo message 1" },
    },
    {
      entryID: "entry-2",
      contentType: "application/cloudevents+json",
      event: {
        specversion: "1.0",
        source: "/some/source",
        type: "example",
        id: "1234",
        data: "foo message 2",
        datacontenttype: "text/plain",
      },
    },
    {
      entryID: "entry-3",
      contentType: "text/plain",
      event: "foo message 3",
    },
  ];
  await client.pubsub.publishBulk(pubSubName, topic, bulkPublishMessages);
}

start().catch((e) => {
  console.error(e);
  process.exit(1);
});
using System;
using System.Collections.Generic;
using Dapr.Client;

const string PubsubName = "my-pubsub-name";
const string TopicName = "topic-a";

IReadOnlyList<object> BulkPublishData = new List<object>() {
    new { Id = "17", Amount = 10m },
    new { Id = "18", Amount = 20m },
    new { Id = "19", Amount = 30m }
};

using var client = new DaprClientBuilder().Build();

var res = await client.BulkPublishEventAsync(PubsubName, TopicName, BulkPublishData);
if (res == null) {
    throw new Exception("null response from dapr");
}

if (res.FailedEntries.Count > 0)
{
    Console.WriteLine("Some events failed to be published!");
    foreach (var failedEntry in res.FailedEntries)
    {
        Console.WriteLine("EntryId: " + failedEntry.Entry.EntryId + " Error message: " +
                          failedEntry.ErrorMessage);
    }
}
else
{
    Console.WriteLine("Published all events!");
}
import requests
import json

base_url = "http://localhost:3500/v1.0-alpha1/publish/bulk/{}/{}"
pubsub_name = "my-pubsub-name"
topic_name = "topic-a"
payload = [
    {
        "entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
        "event": "first text message",
        "contentType": "text/plain"
    },
    {
        "entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
        "event": {
            "message": "second JSON message"
        },
        "contentType": "application/json"
    }
]

response = requests.post(base_url.format(pubsub_name, topic_name), json=payload)
print(response.status_code)
package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
)

const (
    pubsubName = "my-pubsub-name"
    topicName  = "topic-a"
    baseUrl    = "http://localhost:3500/v1.0-alpha1/publish/bulk/%s/%s"
)

func main() {
    url := fmt.Sprintf(baseUrl, pubsubName, topicName)
    payload := strings.NewReader(`[
        {
            "entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
            "event": "first text message",
            "contentType": "text/plain"
        },
        {
            "entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
            "event": {
                "message": "second JSON message"
            },
            "contentType": "application/json"
        }
    ]`)

    client := &http.Client{}
    req, err := http.NewRequest("POST", url, payload)
    if err != nil {
        panic(err)
    }
    req.Header.Add("Content-Type", "application/json")

    res, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer res.Body.Close()

    // Print the status code and any response body from the sidecar.
    body, _ := io.ReadAll(res.Body)
    fmt.Println(res.StatusCode, string(body))
}
curl -X POST http://localhost:3500/v1.0-alpha1/publish/bulk/my-pubsub-name/topic-a \
  -H 'Content-Type: application/json' \
  -d '[
        {
            "entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
            "event": "first text message",
            "contentType": "text/plain"
        },
        {
            "entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
            "event": {
                "message": "second JSON message"
            },
            "contentType": "application/json"
        }
      ]'
Invoke-RestMethod -Method Post -ContentType 'application/json' -Uri 'http://localhost:3500/v1.0-alpha1/publish/bulk/my-pubsub-name/topic-a' `
-Body '[
        {
            "entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
            "event": "first text message",
            "contentType": "text/plain"
        },
        {
            "entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
            "event": {
                "message": "second JSON message"
            },
            "contentType": "application/json"
        }
      ]'
Subscribing to messages in bulk
The bulk subscribe API allows you to subscribe to multiple messages from a topic in a single request. As covered in How to: Publish & Subscribe to topics, there are three ways to subscribe to topic(s):
- Declaratively - subscriptions are defined in an external file.
- Programmatically - subscriptions are defined in code.
- Streaming - not supported for bulk subscribe, as messages are sent to handler code.
To bulk subscribe to topic(s), use the bulkSubscribe spec attribute, as in the following example:
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: order-pub-sub
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 100
    maxAwaitDurationMs: 40
scopes:
- orderprocessing
- checkout
In the example above, bulkSubscribe is optional. If you use bulkSubscribe, then:
- enabled is mandatory and enables or disables bulk subscriptions on this topic.
- You can optionally configure the maximum number of messages (maxMessagesCount) delivered in a bulk message. For components that do not support bulk subscribe, the default value of maxMessagesCount is 100, i.e. for default bulk events between the app and Dapr. Refer to How components handle publishing and subscribing to bulk messages. If a component supports bulk subscribe, the default value for this parameter can be found in that component’s documentation.
- You can optionally provide the maximum duration to wait (maxAwaitDurationMs) before a bulk message is sent to the app. For components that do not support bulk subscribe, the default value of maxAwaitDurationMs is 1000, i.e. for default bulk events between the app and Dapr. Refer to How components handle publishing and subscribing to bulk messages. If a component supports bulk subscribe, the default value for this parameter can be found in that component’s documentation.
The application receives an EntryId associated with each entry (individual message) in the bulk message. This EntryId must be used by the app to communicate the status of that particular entry. If the app fails to notify on an EntryId status, it’s considered a RETRY.
A JSON-encoded payload body with the processing status against each entry needs to be sent:
{
  "statuses": [
    {
      "entryId": "<entryId1>",
      "status": "<status>"
    },
    {
      "entryId": "<entryId2>",
      "status": "<status>"
    }
  ]
}
Possible status values:
| Status | Description |
|---|---|
| SUCCESS | Message is processed successfully |
| RETRY | Message to be retried by Dapr |
| DROP | Warning is logged and message is dropped |
Refer to Expected HTTP Response for Bulk Subscribe for further insights on response.
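As a rough sketch of what such an endpoint could look like over plain HTTP (assuming, per the bulk subscribe reference, that the request envelope carries an entries array where each entry has an entryId; the struct and port names here are illustrative):

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

type bulkEntry struct {
    EntryID string          `json:"entryId"`
    Event   json.RawMessage `json:"event"`
}

type bulkRequest struct {
    Entries []bulkEntry `json:"entries"`
}

type entryStatus struct {
    EntryID string `json:"entryId"`
    Status  string `json:"status"` // SUCCESS, RETRY, or DROP
}

func main() {
    http.HandleFunc("/checkout", func(w http.ResponseWriter, r *http.Request) {
        var req bulkRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        statuses := make([]entryStatus, 0, len(req.Entries))
        for _, entry := range req.Entries {
            log.Printf("processing entry %s", entry.EntryID)
            // Report the per-entry outcome; a failed entry would get RETRY.
            statuses = append(statuses, entryStatus{EntryID: entry.EntryID, Status: "SUCCESS"})
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string][]entryStatus{"statuses": statuses})
    })
    log.Fatal(http.ListenAndServe(":6002", nil))
}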
Example
The following code examples demonstrate how to use Bulk Subscribe.
import io.dapr.Topic;
import io.dapr.client.domain.BulkSubscribeAppResponse;
import io.dapr.client.domain.BulkSubscribeAppResponseEntry;
import io.dapr.client.domain.BulkSubscribeAppResponseStatus;
import io.dapr.client.domain.BulkSubscribeMessage;
import io.dapr.client.domain.BulkSubscribeMessageEntry;
import io.dapr.client.domain.CloudEvent;
import io.dapr.springboot.annotations.BulkSubscribe;
import java.util.ArrayList;
import java.util.List;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import reactor.core.publisher.Mono;

class BulkSubscriber {
  @BulkSubscribe()
  // @BulkSubscribe(maxMessagesCount = 100, maxAwaitDurationMs = 40)
  @Topic(name = "topicbulk", pubsubName = "orderPubSub")
  @PostMapping(path = "/topicbulk")
  public Mono<BulkSubscribeAppResponse> handleBulkMessage(
          @RequestBody(required = false) BulkSubscribeMessage<CloudEvent<String>> bulkMessage) {
    return Mono.fromCallable(() -> {
      List<BulkSubscribeAppResponseEntry> entries = new ArrayList<BulkSubscribeAppResponseEntry>();
      for (BulkSubscribeMessageEntry<?> entry : bulkMessage.getEntries()) {
        try {
          CloudEvent<?> cloudEvent = (CloudEvent<?>) entry.getEvent();
          System.out.printf("Bulk Subscriber got: %s\n", cloudEvent.getData());
          entries.add(new BulkSubscribeAppResponseEntry(entry.getEntryId(), BulkSubscribeAppResponseStatus.SUCCESS));
        } catch (Exception e) {
          e.printStackTrace();
          entries.add(new BulkSubscribeAppResponseEntry(entry.getEntryId(), BulkSubscribeAppResponseStatus.RETRY));
        }
      }
      return new BulkSubscribeAppResponse(entries);
    });
  }
}
import { DaprServer } from "@dapr/dapr";

const pubSubName = "orderPubSub";
const topic = "topicbulk";

const daprHost = process.env.DAPR_HOST || "127.0.0.1";
const daprPort = process.env.DAPR_HTTP_PORT || "3502";
const serverHost = process.env.SERVER_HOST || "127.0.0.1";
const serverPort = process.env.APP_PORT || 5001;

async function start() {
  const server = new DaprServer({
    serverHost,
    serverPort,
    clientOptions: {
      daprHost,
      daprPort,
    },
  });

  // Subscribe to messages on a topic with the default bulk config.
  await server.pubsub.bulkSubscribeWithDefaultConfig(pubSubName, topic, (data) => console.log("Subscriber received: " + JSON.stringify(data)));

  // Subscribe to messages on a topic with a specific maxMessagesCount and maxAwaitDurationMs.
  await server.pubsub.bulkSubscribeWithConfig(pubSubName, topic, (data) => console.log("Subscriber received: " + JSON.stringify(data)), 100, 40);

  await server.start();
}
using Microsoft.AspNetCore.Mvc;
using Dapr.AspNetCore;
using Dapr;

namespace DemoApp.Controllers;

[ApiController]
[Route("[controller]")]
public class BulkMessageController : ControllerBase
{
    private readonly ILogger<BulkMessageController> logger;

    public BulkMessageController(ILogger<BulkMessageController> logger)
    {
        this.logger = logger;
    }

    [BulkSubscribe("messages", 10, 10)]
    [Topic("pubsub", "messages")]
    public ActionResult<BulkSubscribeAppResponse> HandleBulkMessages([FromBody] BulkSubscribeMessage<BulkMessageModel<BulkMessageModel>> bulkMessages)
    {
        List<BulkSubscribeAppResponseEntry> responseEntries = new List<BulkSubscribeAppResponseEntry>();
        logger.LogInformation($"Received {bulkMessages.Entries.Count()} messages");
        foreach (var message in bulkMessages.Entries)
        {
            try
            {
                logger.LogInformation($"Received a message with data '{message.Event.Data.MessageData}'");
                responseEntries.Add(new BulkSubscribeAppResponseEntry(message.EntryId, BulkSubscribeAppResponseStatus.SUCCESS));
            }
            catch (Exception e)
            {
                logger.LogError(e.Message);
                responseEntries.Add(new BulkSubscribeAppResponseEntry(message.EntryId, BulkSubscribeAppResponseStatus.RETRY));
            }
        }
        return new BulkSubscribeAppResponse(responseEntries);
    }

    public class BulkMessageModel
    {
        public string MessageData { get; set; }
    }
}
Currently, you can only bulk subscribe in Python using an HTTP client.
import json
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    # Define the bulk subscribe configuration
    subscriptions = [{
        "pubsubname": "pubsub",
        "topic": "TOPIC_A",
        "route": "/checkout",
        "bulkSubscribe": {
            "enabled": True,
            "maxMessagesCount": 3,
            "maxAwaitDurationMs": 40
        }
    }]
    print('Dapr pub/sub is subscribed to: ' + json.dumps(subscriptions))
    return jsonify(subscriptions)

# Define the endpoint to handle incoming messages
@app.route('/checkout', methods=['POST'])
def checkout():
    messages = request.json
    print(messages)
    for message in messages:
        print(f"Received message: {message}")
    return json.dumps({'success': True}), 200, {'ContentType': 'application/json'}

if __name__ == '__main__':
    app.run(port=5000)
How components handle publishing and subscribing to bulk messages
For event publish/subscribe, two kinds of network transfers are involved.
- From/To App To/From Dapr.
- From/To Dapr To/From Pubsub Broker.
These are the opportunities where optimization is possible. When optimized, Bulk requests are made, which reduce the overall number of calls and thus increases throughput and provides better latency.
On enabling bulk publish and/or bulk subscribe, the communication between the app and the Dapr sidecar (the first transfer above) is optimized for all components.
Optimization from the Dapr sidecar to the pub/sub broker depends on a number of factors. For example:
- The broker must inherently support bulk pub/sub
- The Dapr component must be updated to support the use of bulk APIs provided by the broker
Currently, the following components are updated to support this level of optimization:
| Component | Bulk Publish | Bulk Subscribe |
|---|---|---|
| Kafka | Yes | Yes |
| Azure Service Bus | Yes | Yes |
| Azure Event Hubs | Yes | Yes |
Demos
Watch the following demos and presentations about bulk pub/sub.
KubeCon Europe 2023 presentation
Dapr Community Call #77 presentation
Related links
- List of supported pub/sub components
- Read the API reference
1.3 - Workflow
More about Dapr Workflow
Learn more about how to use Dapr Workflow:
- Try the Workflow quickstart.
- Explore workflow via any of the supporting Dapr SDKs.
- Review the Workflow API reference documentation.
1.3.1 - Workflow overview
Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way. Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices. Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings.
The durable, resilient Dapr Workflow capability:
- Offers a built-in workflow runtime for driving Dapr Workflow execution.
- Provides SDKs for authoring workflows in code, using any language.
- Provides HTTP and gRPC APIs for managing workflows (start, query, pause/resume, raise event, terminate, purge).
- Integrates with any other workflow runtime via workflow components.

Some example scenarios that Dapr Workflow can perform are:
- Order processing involving orchestration between inventory management, payment systems, and shipping services.
- HR onboarding workflows coordinating tasks across multiple departments and participants.
- Orchestrating the roll-out of digital menu updates in a national restaurant chain.
- Image processing workflows involving API-based classification and storage.
Features
Workflows and activities
With Dapr Workflow, you can write activities and then orchestrate those activities in a workflow. Workflow activities are:
- The basic unit of work in a workflow
- Used for calling other (Dapr) services, interacting with state stores, and pub/sub brokers.
Learn more about workflow activities.
Child workflows
In addition to activities, you can write workflows to schedule other workflows as child workflows. A child workflow has its own instance ID, history, and status that is independent of the parent workflow that started it, except for the fact that terminating the parent workflow terminates all of the child workflows created by it. Child workflows also support automatic retry policies.
Learn more about child workflows.
Timers and reminders
As with Dapr actors, you can schedule reminder-like durable delays for any time range.
Learn more about workflow timers and reminders
Workflow HTTP calls to manage a workflow
When you create an application with workflow code and run it with Dapr, you can call specific workflows that reside in the application. Each individual workflow can be:
- Started or terminated through a POST request
- Triggered to deliver a named event through a POST request
- Paused and then resumed through a POST request
- Purged from your state store through a POST request
- Queried for workflow status through a GET request
Learn more about how to manage a workflow using HTTP calls.
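As a quick illustration, here is a minimal sketch of starting a workflow over HTTP from Go, assuming a locally running sidecar on port 3500 and the beta workflow HTTP API; the workflow name OrderProcessingWorkflow and the instance ID are hypothetical:

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
)

func main() {
    // Hypothetical workflow name and instance ID, for illustration only.
    url := "http://localhost:3500/v1.0-beta1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=order-1234"

    // The request body becomes the workflow input.
    resp, err := http.Post(url, "application/json", strings.NewReader(`{"orderId": "1234"}`))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.StatusCode, string(body))
}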
Workflow patterns
Dapr Workflow simplifies complex, stateful coordination requirements in microservice architectures. The following sections describe several application patterns that can benefit from Dapr Workflow.
Learn more about different types of workflow patterns
Workflow SDKs
The Dapr Workflow authoring SDKs are language-specific SDKs that contain types and functions to implement workflow logic. The workflow logic lives in your application and is orchestrated by the Dapr Workflow engine running in the Dapr sidecar via a gRPC stream.
Supported SDKs
You can use the following SDKs to author a workflow.
Language stack | Package |
---|---|
Python | dapr-ext-workflow |
JavaScript | DaprWorkflowClient |
.NET | Dapr.Workflow |
Java | io.dapr.workflows |
Go | workflow |
Try out workflows
Quickstarts and tutorials
Want to put workflows to the test? Walk through the following quickstart and tutorials to see workflows in action:
Quickstart/tutorial | Description |
---|---|
Workflow quickstart | Run a workflow application with four workflow activities to see Dapr Workflow in action |
Workflow Python SDK example | Learn how to create a Dapr Workflow and invoke it using the Python dapr-ext-workflow package. |
Workflow JavaScript SDK example | Learn how to create a Dapr Workflow and invoke it using the JavaScript SDK. |
Workflow .NET SDK example | Learn how to create a Dapr Workflow and invoke it using ASP.NET Core web APIs. |
Workflow Java SDK example | Learn how to create a Dapr Workflow and invoke it using the Java io.dapr.workflows package. |
Workflow Go SDK example | Learn how to create a Dapr Workflow and invoke it using the Go workflow package. |
Start using workflows directly in your app
Want to skip the quickstarts? Not a problem. You can try out the workflow building block directly in your application. After Dapr is installed, you can begin using workflows, starting with how to author a workflow.
Limitations
- State stores: Due to underlying limitations in some database choices, more commonly NoSQL databases, you might run into limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request.
Watch the demo
Watch this video for an overview on Dapr Workflow:
Next steps
Workflow features and concepts >>
Related links
- Workflow API reference
- Try out the full SDK examples:
1.3.2 - Features and concepts
Now that you’ve learned about the workflow building block at a high level, let’s deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflow exposes several core features and concepts which are common across all supported languages.
Note
For more information on how workflow state is managed, see the workflow architecture guide.
Workflows
Dapr Workflows are functions you write that define a series of tasks to be executed in a particular order. The Dapr Workflow engine takes care of scheduling and execution of the tasks, including managing failures and retries. If the app hosting your workflows is scaled out across multiple machines, the workflow engine may also load balance the execution of workflows and their tasks across multiple machines.
There are several different kinds of tasks that a workflow can schedule, including:
- Activities for executing custom logic
- Durable timers for putting the workflow to sleep for arbitrary lengths of time
- Child workflows for breaking larger workflows into smaller pieces
- External event waiters for blocking workflows until they receive external event signals
These tasks are described in more detail in their corresponding sections.
Workflow identity
Each workflow you define has a type name, and individual executions of a workflow require a unique instance ID. Workflow instance IDs can be generated by your app code, which is useful when workflows correspond to business entities like documents or jobs, or can be auto-generated UUIDs. A workflow’s instance ID is useful for debugging and also for managing workflows using the Workflow APIs.
Only one workflow instance with a given ID can exist at any given time. However, if a workflow instance completes or fails, its ID can be reused by a new workflow instance. Note, however, that the new workflow instance effectively replaces the old one in the configured state store.
Workflow replay
Dapr Workflows maintain their execution state by using a technique known as event sourcing. Instead of storing the current state of a workflow as a snapshot, the workflow engine manages an append-only log of history events that describe the various steps that a workflow has taken. When using the workflow SDK, these history events are stored automatically whenever the workflow “awaits” for the result of a scheduled task.
When a workflow “awaits” a scheduled task, it unloads itself from memory until the task completes. Once the task completes, the workflow engine schedules the workflow function to run again. This second workflow function execution is known as a replay.
When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that already completed, instead of scheduling that task again, the workflow engine:
- Returns the stored result of the completed task to the workflow.
- Continues execution until the next “await” point.
This “replay” behavior continues until the workflow function completes or fails with an error.
Using this replay technique, a workflow is able to resume execution from any “await” point as if it had never been unloaded from memory. Even the values of local variables from previous runs can be restored without the workflow engine knowing anything about what data they stored. This ability to restore state makes Dapr Workflows durable and fault tolerant.
Note
The workflow replay behavior described here requires that workflow function code be deterministic. Deterministic workflow functions take the exact same actions when provided the exact same inputs. Learn more about the limitations around deterministic workflow code.
Infinite loops and eternal workflows
As discussed in the workflow replay section, workflows maintain an append-only event-sourced history log of all their operations. To avoid runaway resource usage, workflows must limit the number of operations they schedule. For example, ensure your workflow doesn’t:
- Use infinite loops in its implementation
- Schedule thousands of tasks.
You can use the following two techniques to write workflows that may need to schedule extreme numbers of tasks:
Use the continue-as-new API:
Each workflow SDK exposes a continue-as-new API that workflows can invoke to restart themselves with a new input and history. The continue-as-new API is especially ideal for implementing “eternal workflows”, like monitoring agents, which would otherwise be implemented using a while (true)-like construct. Using continue-as-new is a great way to keep the workflow history size small. The continue-as-new API truncates the existing history, replacing it with a new history.
Use child workflows:
Each workflow SDK exposes an API for creating child workflows. A child workflow behaves like any other workflow, except that it’s scheduled by a parent workflow. Child workflows have:
- Their own history
- The benefit of distributing workflow function execution across multiple machines.
If a workflow needs to schedule thousands of tasks or more, it’s recommended that those tasks be distributed across child workflows so that no single workflow’s history size grows too large.
Updating workflow code
Because workflows are long-running and durable, updating workflow code must be done with extreme care. As discussed in the workflow determinism limitation section, workflow code must be deterministic. Updates to workflow code must preserve this determinism if there are any non-completed workflow instances in the system. Otherwise, updates to workflow code can result in runtime failures the next time those workflows execute.
Workflow activities
Workflow activities are the basic unit of work in a workflow and are the tasks that get orchestrated in the business process. For example, you might create a workflow to process an order. The tasks may involve checking the inventory, charging the customer, and creating a shipment. Each task would be a separate activity. These activities may be executed serially, in parallel, or some combination of both.
Unlike workflows, activities aren’t restricted in the type of work you can do in them. Activities are frequently used to make network calls or run CPU intensive operations. An activity can also return data back to the workflow.
The Dapr Workflow engine guarantees that each called activity is executed at least once as part of a workflow’s execution. Because activities only guarantee at-least-once execution, it’s recommended that activity logic be implemented as idempotent whenever possible.
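For example, here is a minimal Go sketch of an idempotent activity. The in-memory dedupe store and charge function are hypothetical, used only to illustrate the pattern; in practice the dedupe record could live in a Dapr state store:

package main

import "fmt"

// Hypothetical dedupe store; in practice this would be durable storage.
var processedCharges = map[string]bool{}

// ChargeCustomer is safe to run more than once for the same chargeID:
// a repeated delivery of the same activity call becomes a no-op.
func ChargeCustomer(chargeID string, amountCents int) error {
    if processedCharges[chargeID] {
        return nil // already charged; at-least-once redelivery is harmless
    }
    fmt.Printf("charging %d cents (charge %s)\n", amountCents, chargeID)
    processedCharges[chargeID] = true
    return nil
}

func main() {
    _ = ChargeCustomer("order-1234", 4999)
    _ = ChargeCustomer("order-1234", 4999) // redelivered call: no double charge
}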
Child workflows
In addition to activities, workflows can schedule other workflows as child workflows. A child workflow has its own instance ID, history, and status that is independent of the parent workflow that started it.
Child workflows have many benefits:
- You can split large workflows into a series of smaller child workflows, making your code more maintainable.
- You can distribute workflow logic across multiple compute nodes concurrently, which is useful if your workflow logic otherwise needs to coordinate a lot of tasks.
- You can reduce memory usage and CPU overhead by keeping the history of the parent workflow smaller.
The return value of a child workflow is its output. If a child workflow fails with an exception, then that exception is surfaced to the parent workflow, just like it is when an activity task fails with an exception. Child workflows also support automatic retry policies.
Terminating a parent workflow terminates all of the child workflows created by the workflow instance. See the terminate workflow api for more information.
Durable timers
Dapr Workflows allow you to schedule reminder-like durable delays for any time range, including minutes, days, or even years. These durable timers can be scheduled by workflows to implement simple delays or to set up ad-hoc timeouts on other async tasks. More specifically, a durable timer can be set to trigger on a particular date or after a specified duration. There are no limits to the maximum duration of durable timers, which are internally backed by internal actor reminders. For example, a workflow that tracks a 30-day free subscription to a service could be implemented using a durable timer that fires 30-days after the workflow is created. Workflows can be safely unloaded from memory while waiting for a durable timer to fire.
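For instance, using the Go workflow context API that appears in the code samples later in this document, a 30-day durable delay might look roughly like the sketch below (ctx is the workflow context and time is the standard library package; error handling is simplified):

// Inside a Go workflow function: sleep durably for 30 days.
// The workflow can be safely unloaded from memory while the timer is pending.
if err := ctx.CreateTimer(30 * 24 * time.Hour).Await(nil); err != nil {
    // Handle the timer failure (for example, return the error).
}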
Note
Some APIs in the workflow authoring SDK may internally schedule durable timers to implement internal timeout behavior.
Retry policies
Workflows support durable retry policies for activities and child workflows. Workflow retry policies are separate and distinct from Dapr resiliency policies in the following ways.
- Workflow retry policies are configured by the workflow author in code, whereas Dapr Resiliency policies are configured by the application operator in YAML.
- Workflow retry policies are durable and maintain their state across application restarts, whereas Dapr Resiliency policies are not durable and must be re-applied after application restarts.
- Workflow retry policies are triggered by unhandled errors/exceptions in activities and child workflows, whereas Dapr Resiliency policies are triggered by operation timeouts and connectivity faults.
Retries are internally implemented using durable timers. This means that workflows can be safely unloaded from memory while waiting for a retry to fire, conserving system resources. This also means that delays between retries can be arbitrarily long, including minutes, hours, or even days.
Note
The actions performed by a retry policy are saved into a workflow’s history. Care must be taken not to change the behavior of a retry policy after a workflow has already been executed. Otherwise, the workflow may behave unexpectedly when replayed. See the notes on updating workflow code for more information.
It’s possible to use both workflow retry policies and Dapr Resiliency policies together. For example, if a workflow activity uses a Dapr client to invoke a service, the Dapr client uses the configured resiliency policy. See Quickstart: Service-to-service resiliency for more information with an example. However, if the activity itself fails for any reason, including exhausting the retries on the resiliency policy, then the workflow’s retry policy kicks in.
Note
Using workflow retry policies and resiliency policies together can result in unexpected behavior. For example, if a workflow activity exhausts its configured resiliency policy, the workflow engine will still retry the activity according to the workflow retry policy. This can result in the activity being retried more times than expected.
Because workflow retry policies are configured in code, the exact developer experience may vary depending on the version of the workflow SDK. In general, workflow retry policies can be configured with the following parameters.
| Parameter | Description |
|---|---|
| Maximum number of attempts | The maximum number of times to execute the activity or child workflow. |
| First retry interval | The amount of time to wait before the first retry. |
| Backoff coefficient | The coefficient used to determine the rate of increase of back-off. For example, a coefficient of 2 doubles the wait of each subsequent retry. |
| Maximum retry interval | The maximum amount of time to wait before each subsequent retry. |
| Retry timeout | The overall timeout for retries, regardless of any configured max number of attempts. |
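To make the interaction of these parameters concrete, here is a plain-Go illustration of the delay schedule they describe (this is not the Dapr SDK API, just a sketch of the arithmetic):

package main

import (
    "fmt"
    "time"
)

// retryDelays computes the wait before each retry: the first interval,
// multiplied by the backoff coefficient after every attempt, capped at
// the maximum retry interval, for up to maxAttempts-1 retries.
func retryDelays(maxAttempts int, firstInterval time.Duration, coeff float64, maxInterval time.Duration) []time.Duration {
    delays := []time.Duration{}
    delay := firstInterval
    for attempt := 1; attempt < maxAttempts; attempt++ {
        if delay > maxInterval {
            delay = maxInterval
        }
        delays = append(delays, delay)
        delay = time.Duration(float64(delay) * coeff)
    }
    return delays
}

func main() {
    // 5 attempts, 1s first retry, backoff coefficient 2, 10s cap:
    // waits of 1s, 2s, 4s, 8s before attempts 2 through 5.
    fmt.Println(retryDelays(5, time.Second, 2, 10*time.Second))
}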
External events
Sometimes workflows will need to wait for events that are raised by external systems. For example, an approval workflow may require a human to explicitly approve an order request within an order processing workflow if the total cost exceeds some threshold. Another example is a trivia game orchestration workflow that pauses while waiting for all participants to submit their answers to trivia questions. These mid-execution inputs are referred to as external events.
External events have a name and a payload and are delivered to a single workflow instance. Workflows can create “wait for external event” tasks that subscribe to external events and await those tasks to block execution until the event is received. The workflow can then read the payload of these events and make decisions about which next steps to take. External events can be processed serially or in parallel. External events can be raised by other workflows or by workflow code.
Workflows can also wait for multiple external event signals of the same name, in which case they are dispatched to the corresponding workflow tasks in a first-in, first-out (FIFO) manner. If a workflow receives an external event signal but has not yet created a “wait for external event” task, the event will be saved into the workflow’s history and consumed immediately after the workflow requests the event.
Learn more about external system interaction.
Workflow backend
Dapr Workflow relies on the Durable Task Framework for Go (a.k.a. durabletask-go) as the core engine for executing workflows. This engine is designed to support multiple backend implementations. For example, the durabletask-go repo includes a SQLite implementation and the Dapr repo includes an Actors implementation.
By default, Dapr Workflow supports the Actors backend, which is stable and scalable. However, you can choose a different backend supported in Dapr Workflow. For example, SQLite (TBD, future release) could be a backend option for local development and testing.
The backend implementation is largely decoupled from the workflow core engine or the programming model that you see. The backend primarily impacts:
- How workflow state is stored
- How workflow execution is coordinated across replicas
In that sense, it’s similar to Dapr’s state store abstraction, except designed specifically for workflow. All APIs and programming model features are the same, regardless of which backend is used.
Purging
Workflow state can be purged from a state store, purging all its history and removing all metadata related to a specific workflow instance. The purge capability is used for workflows that have run to a COMPLETED, FAILED, or TERMINATED state.
Learn more in the workflow API reference guide.
Limitations
Workflow determinism and code restraints
To take advantage of the workflow replay technique, your workflow code needs to be deterministic. For your workflow code to be deterministic, you may need to work around some limitations.
Workflow functions must call deterministic APIs.
APIs that generate random numbers, random UUIDs, or the current date are non-deterministic. To work around this limitation, you can:
- Use these APIs in activity functions, or
- (Preferred) Use built-in equivalent APIs offered by the SDK. For example, each authoring SDK provides an API for retrieving the current time in a deterministic manner.
For example, instead of this:
// DON'T DO THIS!
DateTime currentTime = DateTime.UtcNow;
Guid newIdentifier = Guid.NewGuid();
string randomString = GetRandomString();
// DON'T DO THIS!
Instant currentTime = Instant.now();
UUID newIdentifier = UUID.randomUUID();
String randomString = getRandomString();
// DON'T DO THIS!
const currentTime = new Date();
const newIdentifier = uuidv4();
const randomString = getRandomString();
// DON'T DO THIS!
const currentTime = time.Now()
Do this:
// Do this!!
DateTime currentTime = context.CurrentUtcDateTime;
Guid newIdentifier = context.NewGuid();
string randomString = await context.CallActivityAsync<string>(nameof("GetRandomString")); //Use "nameof" to prevent specifying an activity name that does not exist in your application
// Do this!!
Instant currentTime = context.getCurrentInstant();
Guid newIdentifier = context.newGuid();
String randomString = context.callActivity(GetRandomString.class.getName(), String.class).await();
// Do this!!
const currentTime = context.getCurrentUtcDateTime();
const randomString = yield context.callActivity(getRandomString);
const currentTime = ctx.CurrentUTCDateTime()
Workflow functions must only interact indirectly with external state.
External data includes any data that isn’t stored in the workflow state. Workflows must not interact with global variables, environment variables, or the file system, or make network calls.
Instead, workflows should interact with external state indirectly using workflow inputs, activity tasks, and through external event handling.
For example, instead of this:
// DON'T DO THIS!
string configuration = Environment.GetEnvironmentVariable("MY_CONFIGURATION")!;
string data = await new HttpClient().GetStringAsync("https://example.com/api/data");
// DON'T DO THIS!
String configuration = System.getenv("MY_CONFIGURATION");
HttpRequest request = HttpRequest.newBuilder().uri(new URI("https://postman-echo.com/post")).GET().build();
HttpResponse<String> response = HttpClient.newBuilder().build().send(request, HttpResponse.BodyHandlers.ofString());
// DON'T DO THIS!
// Accessing an Environment Variable (Node.js)
const configuration = process.env.MY_CONFIGURATION;
fetch('https://postman-echo.com/get')
  .then(response => response.text())
  .then(data => {
    console.log(data);
  })
  .catch(error => {
    console.error('Error:', error);
  });
// DON'T DO THIS!
resp, err := http.Get("http://example.com/api/data")
Do this:
// Do this!!
string configuration = workflowInput.Configuration; // imaginary workflow input argument
string data = await context.CallActivityAsync<string>(nameof("MakeHttpCall"), "https://example.com/api/data");
// Do this!!
String configuration = ctx.getInput(InputType.class).getConfiguration(); // imaginary workflow input argument
String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data", String.class).await();
// Do this!!
const configuration = workflowInput.getConfiguration(); // imaginary workflow input argument
const data = yield ctx.callActivity(makeHttpCall, "https://example.com/api/data");
// Do this!!
err := ctx.CallActivity(MakeHttpCallActivity, workflow.ActivityInput("https://example.com/api/data")).Await(&output)
Workflow functions must execute only on the workflow dispatch thread.
The implementation of each language SDK requires that all workflow function operations operate on the same thread (goroutine, etc.) that the function was scheduled on. Workflow functions must never:
- Schedule background threads, or
- Use APIs that schedule a callback function to run on another thread.
Failure to follow this rule could result in undefined behavior. Any background processing should instead be delegated to activity tasks, which can be scheduled to run serially or concurrently.
For example, instead of this:
// DON'T DO THIS!
Task t = Task.Run(() => context.CallActivityAsync("DoSomething"));
await context.CreateTimer(5000).ConfigureAwait(false);
// DON'T DO THIS!
new Thread(() -> {
  ctx.callActivity(DoSomethingActivity.class.getName()).await();
}).start();
ctx.createTimer(Duration.ofSeconds(5)).await();
Don’t declare a JavaScript workflow as async. The Node.js runtime doesn’t guarantee that asynchronous functions are deterministic.
// DON'T DO THIS!
go func() {
    err := ctx.CallActivity(DoSomething).Await(nil)
}()
err := ctx.CreateTimer(time.Second).Await(nil)
Do this:
// Do this!!
Task t = context.CallActivityAsync(nameof("DoSomething"));
await context.CreateTimer(5000).ConfigureAwait(true);
// Do this!!
ctx.callActivity(DoSomethingActivity.class.getName()).await();
ctx.createTimer(Duration.ofSeconds(5)).await();
Since the Node.js runtime doesn’t guarantee that asynchronous functions are deterministic, always declare JavaScript workflow as synchronous generator functions.
// Do this!
task := ctx.CallActivity(DoSomething)
task.Await(nil)
Updating workflow code
Make sure updates you make to the workflow code maintain its determinism. Here are a couple of examples of code updates that can break workflow determinism:
- Changing workflow function signatures: Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.
- Changing the number or order of workflow tasks: This causes a workflow instance’s history to no longer match the code and may result in runtime errors or other unexpected behavior.
To work around these constraints:
- Instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates.
- Upstream code that creates workflows should only be updated to create instances of the new workflows.
- Leave the old code around to ensure that existing workflow instances can continue to run without interruption. If and when it’s known that all instances of the old workflow logic have completed, then the old workflow code can be safely deleted.
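For example, a minimal Python sketch of this side-by-side versioning approach (the workflow and activity names here are hypothetical, and the v2 logic is illustrative only):
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity(name='process_order')
def process_order(ctx, order_id: str) -> str:
    return f'processed {order_id}'

@wfr.activity(name='validate_order')
def validate_order(ctx, order_id: str) -> str:
    return order_id

@wfr.workflow(name='order_workflow_v1')
def order_workflow_v1(ctx: wf.DaprWorkflowContext, order_id: str):
    # Old logic stays as-is so in-flight instances can finish replaying correctly.
    result = yield ctx.call_activity(process_order, input=order_id)
    return result

@wfr.workflow(name='order_workflow_v2')
def order_workflow_v2(ctx: wf.DaprWorkflowContext, order_id: str):
    # Updated logic lives in a new definition registered under a new name.
    validated = yield ctx.call_activity(validate_order, input=order_id)
    result = yield ctx.call_activity(process_order, input=validated)
    return result

# Upstream starter code is updated to schedule only the new version.
wf_client = wf.DaprWorkflowClient()
instance_id = wf_client.schedule_new_workflow(workflow=order_workflow_v2, input='order-123')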
Next steps
Workflow patterns >>
Related links
- Try out Dapr Workflow using the quickstart
- Workflow overview
- Workflow API reference
1.3.3 - Workflow patterns
Dapr Workflows simplify complex, stateful coordination requirements in microservice architectures. The following sections describe several application patterns that can benefit from Dapr Workflows.
Task chaining
In the task chaining pattern, multiple steps in a workflow are run in succession, and the output of one step may be passed as the input to the next step. Task chaining workflows typically involve creating a sequence of operations that need to be performed on some data, such as filtering, transforming, and reducing.

In some cases, the steps of the workflow may need to be orchestrated across multiple microservices. For increased reliability and scalability, you’re also likely to use queues to trigger the various steps.
While the pattern is simple, there are many complexities hidden in the implementation. For example:
- What happens if one of the microservices is unavailable for an extended period of time?
- Can failed steps be automatically retried?
- If not, how do you facilitate the rollback of previously completed steps, if applicable?
- Implementation details aside, is there a way to visualize the workflow so that other engineers can understand what it does and how it works?
Dapr Workflow solves these complexities by allowing you to implement the task chaining pattern concisely as a simple function in the programming language of your choice, as shown in the following example.
import dapr.ext.workflow as wf
def task_chain_workflow(ctx: wf.DaprWorkflowContext, wf_input: int):
try:
result1 = yield ctx.call_activity(step1, input=wf_input)
result2 = yield ctx.call_activity(step2, input=result1)
result3 = yield ctx.call_activity(step3, input=result2)
except Exception as e:
yield ctx.call_activity(error_handler, input=str(e))
raise
return [result1, result2, result3]
def step1(ctx, activity_input):
print(f'Step 1: Received input: {activity_input}.')
# Do some work
return activity_input + 1
def step2(ctx, activity_input):
print(f'Step 2: Received input: {activity_input}.')
# Do some work
return activity_input * 2
def step3(ctx, activity_input):
print(f'Step 3: Received input: {activity_input}.')
# Do some work
return activity_input ** 2
def error_handler(ctx, error):
print(f'Executing error handler: {error}.')
# Do some compensating work
Note Workflow retry policies will be available in a future version of the Python SDK.
import { DaprWorkflowClient, WorkflowActivityContext, WorkflowContext, WorkflowRuntime, TWorkflow } from "@dapr/dapr";
async function start() {
// Update the gRPC client and worker to use a local address and port
const daprHost = "localhost";
const daprPort = "50001";
const workflowClient = new DaprWorkflowClient({
daprHost,
daprPort,
});
const workflowRuntime = new WorkflowRuntime({
daprHost,
daprPort,
});
const hello = async (_: WorkflowActivityContext, name: string) => {
return `Hello ${name}!`;
};
const sequence: TWorkflow = async function* (ctx: WorkflowContext): any {
const cities: string[] = [];
const result1 = yield ctx.callActivity(hello, "Tokyo");
cities.push(result1);
const result2 = yield ctx.callActivity(hello, "Seattle");
cities.push(result2);
const result3 = yield ctx.callActivity(hello, "London");
cities.push(result3);
return cities;
};
workflowRuntime.registerWorkflow(sequence).registerActivity(hello);
// Wrap the worker startup in a try-catch block to handle any errors during startup
try {
await workflowRuntime.start();
console.log("Workflow runtime started successfully");
} catch (error) {
console.error("Error starting workflow runtime:", error);
}
// Schedule a new orchestration
try {
const id = await workflowClient.scheduleNewWorkflow(sequence);
console.log(`Orchestration scheduled with ID: ${id}`);
// Wait for orchestration completion
const state = await workflowClient.waitForWorkflowCompletion(id, undefined, 30);
console.log(`Orchestration completed! Result: ${state?.serializedOutput}`);
} catch (error) {
console.error("Error scheduling or waiting for orchestration:", error);
}
await workflowRuntime.stop();
await workflowClient.stop();
// stop the dapr sidecar
process.exit(0);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
// Exponential backoff retry policy that survives long outages
var retryOptions = new WorkflowTaskOptions
{
RetryPolicy = new WorkflowRetryPolicy(
firstRetryInterval: TimeSpan.FromMinutes(1),
backoffCoefficient: 2.0,
maxRetryInterval: TimeSpan.FromHours(1),
maxNumberOfAttempts: 10),
};
try
{
var result1 = await context.CallActivityAsync<string>("Step1", wfInput, retryOptions);
var result2 = await context.CallActivityAsync<byte[]>("Step2", result1, retryOptions);
var result3 = await context.CallActivityAsync<long[]>("Step3", result2, retryOptions);
return string.Join(", ", result3);
}
catch (TaskFailedException) // Task failures are surfaced as TaskFailedException
{
// Retries expired - apply custom compensation logic
await context.CallActivityAsync<long[]>("MyCompensation", options: retryOptions);
throw;
}
Note In the example above, "Step1", "Step2", "Step3", and "MyCompensation" represent workflow activities, which are functions in your code that actually implement the steps of the workflow. For brevity, these activity implementations are left out of this example.
public class ChainWorkflow extends Workflow {
@Override
public WorkflowStub create() {
return ctx -> {
StringBuilder sb = new StringBuilder();
String wfInput = ctx.getInput(String.class);
String result1 = ctx.callActivity("Step1", wfInput, String.class).await();
String result2 = ctx.callActivity("Step2", result1, String.class).await();
String result3 = ctx.callActivity("Step3", result2, String.class).await();
String result = sb.append(result1).append(',').append(result2).append(',').append(result3).toString();
ctx.complete(result);
};
}
}
class Step1 implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
Logger logger = LoggerFactory.getLogger(Step1.class);
logger.info("Starting Activity: " + ctx.getName());
// Do some work
return null;
}
}
class Step2 implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
Logger logger = LoggerFactory.getLogger(Step2.class);
logger.info("Starting Activity: " + ctx.getName());
// Do some work
return null;
}
}
class Step3 implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
Logger logger = LoggerFactory.getLogger(Step3.class);
logger.info("Starting Activity: " + ctx.getName());
// Do some work
return null;
}
}
func TaskChainWorkflow(ctx *workflow.WorkflowContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return "", err
}
var result1 int
if err := ctx.CallActivity(Step1, workflow.ActivityInput(input)).Await(&result1); err != nil {
return nil, err
}
var result2 int
if err := ctx.CallActivity(Step2, workflow.ActivityInput(input)).Await(&result2); err != nil {
return nil, err
}
var result3 int
if err := ctx.CallActivity(Step3, workflow.ActivityInput(input)).Await(&result3); err != nil {
return nil, err
}
return []int{result1, result2, result3}, nil
}
func Step1(ctx workflow.ActivityContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return "", err
}
fmt.Printf("Step 1: Received input: %s", input)
return input + 1, nil
}
func Step2(ctx workflow.ActivityContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return "", err
}
fmt.Printf("Step 2: Received input: %s", input)
return input * 2, nil
}
func Step3(ctx workflow.ActivityContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return "", err
}
fmt.Printf("Step 3: Received input: %s", input)
return int(math.Pow(float64(input), 2)), nil
}
As you can see, the workflow is expressed as a simple series of statements in the programming language of your choice. This allows any engineer in the organization to quickly understand the end-to-end flow without necessarily needing to understand the end-to-end system architecture.
Behind the scenes, the Dapr Workflow runtime:
- Takes care of executing the workflow and ensuring that it runs to completion.
- Saves progress automatically.
- Automatically resumes the workflow from the last completed step if the workflow process itself fails for any reason.
- Enables error handling to be expressed naturally in your target programming language, allowing you to implement compensation logic easily.
- Provides built-in retry configuration primitives to simplify the process of configuring complex retry policies for individual steps in the workflow.
Fan-out/fan-in
In the fan-out/fan-in design pattern, you execute multiple tasks simultaneously across potentially multiple workers, wait for them to finish, and perform some aggregation on the result.

In addition to the challenges mentioned in the previous pattern, there are several important questions to consider when implementing the fan-out/fan-in pattern manually:
- How do you control the degree of parallelism?
- How do you know when to trigger subsequent aggregation steps?
- What if the number of parallel steps is dynamic?
Dapr Workflows provides a way to express the fan-out/fan-in pattern as a simple function, as shown in the following example:
import time
from typing import List
import dapr.ext.workflow as wf
def batch_processing_workflow(ctx: wf.DaprWorkflowContext, wf_input: int):
# get a batch of N work items to process in parallel
work_batch = yield ctx.call_activity(get_work_batch, input=wf_input)
# schedule N parallel tasks to process the work items and wait for all to complete
parallel_tasks = [ctx.call_activity(process_work_item, input=work_item) for work_item in work_batch]
outputs = yield wf.when_all(parallel_tasks)
# aggregate the results and send them to another activity
total = sum(outputs)
yield ctx.call_activity(process_results, input=total)
def get_work_batch(ctx, batch_size: int) -> List[int]:
return [i + 1 for i in range(batch_size)]
def process_work_item(ctx, work_item: int) -> int:
print(f'Processing work item: {work_item}.')
time.sleep(5)
result = work_item * 2
print(f'Work item {work_item} processed. Result: {result}.')
return result
def process_results(ctx, final_result: int):
print(f'Final result: {final_result}.')
import {
Task,
DaprWorkflowClient,
WorkflowActivityContext,
WorkflowContext,
WorkflowRuntime,
TWorkflow,
} from "@dapr/dapr";
// Wrap the entire code in an immediately-invoked async function
async function start() {
// Update the gRPC client and worker to use a local address and port
const daprHost = "localhost";
const daprPort = "50001";
const workflowClient = new DaprWorkflowClient({
daprHost,
daprPort,
});
const workflowRuntime = new WorkflowRuntime({
daprHost,
daprPort,
});
function getRandomInt(min: number, max: number): number {
return Math.floor(Math.random() * (max - min + 1)) + min;
}
async function getWorkItemsActivity(_: WorkflowActivityContext): Promise<string[]> {
const count: number = getRandomInt(2, 10);
console.log(`generating ${count} work items...`);
const workItems: string[] = Array.from({ length: count }, (_, i) => `work item ${i}`);
return workItems;
}
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
async function processWorkItemActivity(context: WorkflowActivityContext, item: string): Promise<number> {
console.log(`processing work item: ${item}`);
// Simulate some work that takes a variable amount of time
const sleepTime = Math.random() * 5000;
await sleep(sleepTime);
// Return a result for the given work item, which is also a random number in this case
// For more information about random numbers in workflow please check
// https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-code-constraints?tabpane=csharp#random-numbers
return Math.floor(Math.random() * 11);
}
const workflow: TWorkflow = async function* (ctx: WorkflowContext): any {
const tasks: Task<any>[] = [];
const workItems = yield ctx.callActivity(getWorkItemsActivity);
for (const workItem of workItems) {
tasks.push(ctx.callActivity(processWorkItemActivity, workItem));
}
const results: number[] = yield ctx.whenAll(tasks);
const sum: number = results.reduce((accumulator, currentValue) => accumulator + currentValue, 0);
return sum;
};
workflowRuntime.registerWorkflow(workflow);
workflowRuntime.registerActivity(getWorkItemsActivity);
workflowRuntime.registerActivity(processWorkItemActivity);
// Wrap the worker startup in a try-catch block to handle any errors during startup
try {
await workflowRuntime.start();
console.log("Worker started successfully");
} catch (error) {
console.error("Error starting worker:", error);
}
// Schedule a new orchestration
try {
const id = await workflowClient.scheduleNewWorkflow(workflow);
console.log(`Orchestration scheduled with ID: ${id}`);
// Wait for orchestration completion
const state = await workflowClient.waitForWorkflowCompletion(id, undefined, 30);
console.log(`Orchestration completed! Result: ${state?.serializedOutput}`);
} catch (error) {
console.error("Error scheduling or waiting for orchestration:", error);
}
// stop worker and client
await workflowRuntime.stop();
await workflowClient.stop();
// stop the dapr sidecar
process.exit(0);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
// Get a list of N work items to process in parallel.
object[] workBatch = await context.CallActivityAsync<object[]>("GetWorkBatch", null);
// Schedule the parallel tasks, but don't wait for them to complete yet.
var parallelTasks = new List<Task<int>>(workBatch.Length);
for (int i = 0; i < workBatch.Length; i++)
{
Task<int> task = context.CallActivityAsync<int>("ProcessWorkItem", workBatch[i]);
parallelTasks.Add(task);
}
// Everything is scheduled. Wait here until all parallel tasks have completed.
await Task.WhenAll(parallelTasks);
// Aggregate all N outputs and publish the result.
int sum = parallelTasks.Sum(t => t.Result);
await context.CallActivityAsync("PostResults", sum);
public class FaninoutWorkflow extends Workflow {
@Override
public WorkflowStub create() {
return ctx -> {
// Get a list of N work items to process in parallel.
Object[] workBatch = ctx.callActivity("GetWorkBatch", Object[].class).await();
// Schedule the parallel tasks, but don't wait for them to complete yet.
List<Task<Integer>> tasks = Arrays.stream(workBatch)
.map(workItem -> ctx.callActivity("ProcessWorkItem", workItem, int.class))
.collect(Collectors.toList());
// Everything is scheduled. Wait here until all parallel tasks have completed.
List<Integer> results = ctx.allOf(tasks).await();
// Aggregate all N outputs and publish the result.
int sum = results.stream().mapToInt(Integer::intValue).sum();
ctx.complete(sum);
};
}
}
func BatchProcessingWorkflow(ctx *workflow.WorkflowContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return 0, err
}
var workBatch []int
if err := ctx.CallActivity(GetWorkBatch, workflow.ActivityInput(input)).Await(&workBatch); err != nil {
return 0, err
}
parallelTasks := workflow.NewTaskSlice(len(workBatch))
for i, workItem := range workBatch {
parallelTasks[i] = ctx.CallActivity(ProcessWorkItem, workflow.ActivityInput(workItem))
}
var outputs int
for _, task := range parallelTasks {
var output int
err := task.Await(&output)
if err == nil {
outputs += output
} else {
return 0, err
}
}
if err := ctx.CallActivity(ProcessResults, workflow.ActivityInput(outputs)).Await(nil); err != nil {
return 0, err
}
return 0, nil
}
func GetWorkBatch(ctx workflow.ActivityContext) (any, error) {
var batchSize int
if err := ctx.GetInput(&batchSize); err != nil {
return 0, err
}
batch := make([]int, batchSize)
for i := 0; i < batchSize; i++ {
batch[i] = i
}
return batch, nil
}
func ProcessWorkItem(ctx workflow.ActivityContext) (any, error) {
var workItem int
if err := ctx.GetInput(&workItem); err != nil {
return 0, err
}
fmt.Printf("Processing work item: %d\n", workItem)
time.Sleep(time.Second * 5)
result := workItem * 2
fmt.Printf("Work item %d processed. Result: %d\n", workItem, result)
return result, nil
}
func ProcessResults(ctx workflow.ActivityContext) (any, error) {
var finalResult int
if err := ctx.GetInput(&finalResult); err != nil {
return 0, err
}
fmt.Printf("Final result: %d\n", finalResult)
return finalResult, nil
}
The key takeaways from this example are:
- The fan-out/fan-in pattern can be expressed as a simple function using ordinary programming constructs
- The number of parallel tasks can be static or dynamic
- The workflow itself is capable of aggregating the results of parallel executions
Furthermore, the execution of the workflow is durable. If a workflow starts 100 parallel task executions and only 40 complete before the process crashes, the workflow restarts itself automatically and only schedules the remaining 60 tasks.
It’s possible to go further and limit the degree of concurrency using simple, language-specific constructs. The sample code below illustrates how to restrict the degree of fan-out to just 5 concurrent activity executions:
//Revisiting the earlier example...
// Get a list of N work items to process in parallel.
object[] workBatch = await context.CallActivityAsync<object[]>("GetWorkBatch", null);
const int MaxParallelism = 5;
var results = new List<int>();
var inFlightTasks = new HashSet<Task<int>>();
foreach(var workItem in workBatch)
{
if (inFlightTasks.Count >= MaxParallelism)
{
var finishedTask = await Task.WhenAny(inFlightTasks);
results.Add(finishedTask.Result);
inFlightTasks.Remove(finishedTask);
}
inFlightTasks.Add(context.CallActivityAsync<int>("ProcessWorkItem", workItem));
}
results.AddRange(await Task.WhenAll(inFlightTasks));
var sum = results.Sum(t => t);
await context.CallActivityAsync("PostResults", sum);
With the release of 1.16, it’s even easier to process workflow activities in parallel while putting an upper cap on concurrency by using the following extension methods on the WorkflowContext:
//Revisiting the earlier example...
// Get a list of work items to process
var workBatch = await context.CallActivityAsync<object[]>("GetWorkBatch", null);
// Process deterministically in parallel with an upper cap of 5 activities at a time
var results = await context.ProcessInParallelAsync(workBatch, workItem => context.CallActivityAsync<int>("ProcessWorkItem", workItem), maxConcurrency: 5);
var sum = results.Sum(t => t);
await context.CallActivityAsync("PostResults", sum);
Limiting the degree of concurrency in this way can be useful for limiting contention against shared resources. For example, if the activities need to call into external resources that have their own concurrency limits, like databases or external APIs, it can be useful to ensure that no more than a specified number of activities call that resource concurrently.
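For reference, the same in-flight-set technique can be sketched in Python using the SDK’s when_any and when_all helpers. This sketch reuses the get_work_batch, process_work_item, and process_results activities from the earlier Python example; the workflow name and the cap of 5 are illustrative:
import dapr.ext.workflow as wf

MAX_PARALLELISM = 5

def throttled_batch_workflow(ctx: wf.DaprWorkflowContext, wf_input: int):
    work_batch = yield ctx.call_activity(get_work_batch, input=wf_input)
    in_flight = []
    results = []
    for work_item in work_batch:
        # Once the cap is reached, wait for any one task to complete
        # before scheduling the next activity.
        if len(in_flight) >= MAX_PARALLELISM:
            finished = yield wf.when_any(in_flight)
            results.append(finished.get_result())
            in_flight.remove(finished)
        in_flight.append(ctx.call_activity(process_work_item, input=work_item))
    # Drain whatever is still in flight, then aggregate.
    results.extend((yield wf.when_all(in_flight)))
    yield ctx.call_activity(process_results, input=sum(results))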
Async HTTP APIs
Asynchronous HTTP APIs are typically implemented using the Asynchronous Request-Reply pattern. Implementing this pattern traditionally involves the following:
- A client sends a request to an HTTP API endpoint (the start API)
- The start API writes a message to a backend queue, which triggers the start of a long-running operation
- Immediately after scheduling the backend operation, the start API returns an HTTP 202 response to the client with an identifier that can be used to poll for status
- The status API queries a database that contains the status of the long-running operation
- The client repeatedly polls the status API either until some timeout expires or it receives a “completion” response
The end-to-end flow is illustrated in the following diagram.

The challenge with implementing the asynchronous request-reply pattern is that it involves the use of multiple APIs and state stores. It also involves implementing the protocol correctly so that the client knows how to automatically poll for status and when the operation is complete.
The Dapr Workflow HTTP API supports the asynchronous request-reply pattern out of the box, without requiring you to write any code or do any state management.
The following curl commands illustrate how the workflow APIs support this pattern.
curl -X POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 -d '{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}'
The previous command will result in the following response JSON:
{"instanceID":"12345678"}
The HTTP client can then construct the status query URL using the workflow instance ID and poll it repeatedly until it sees the “COMPLETED”, “FAILED”, or “TERMINATED” status in the payload.
curl http://localhost:3500/v1.0/workflows/dapr/12345678
The following is an example of what an in-progress workflow status might look like.
{
"instanceID": "12345678",
"workflowName": "OrderProcessingWorkflow",
"createdAt": "2023-05-03T23:22:11.143069826Z",
"lastUpdatedAt": "2023-05-03T23:22:22.460025267Z",
"runtimeStatus": "RUNNING",
"properties": {
"dapr.workflow.custom_status": "",
"dapr.workflow.input": "{\"Name\":\"Paperclips\",\"Quantity\":1,\"TotalCost\":9.95}"
}
}
As you can see from the previous example, the workflow’s runtime status is RUNNING, which lets the client know that it should continue polling.
If the workflow has completed, the status might look as follows.
{
"instanceID": "12345678",
"workflowName": "OrderProcessingWorkflow",
"createdAt": "2023-05-03T23:30:11.381146313Z",
"lastUpdatedAt": "2023-05-03T23:30:52.923870615Z",
"runtimeStatus": "COMPLETED",
"properties": {
"dapr.workflow.custom_status": "",
"dapr.workflow.input": "{\"Name\":\"Paperclips\",\"Quantity\":1,\"TotalCost\":9.95}",
"dapr.workflow.output": "{\"Processed\":true}"
}
}
As you can see from the previous example, the runtime status of the workflow is now COMPLETED, which means the client can stop polling for updates.
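As a rough client-side illustration, a polling loop might look like the following Python sketch (the helper name, interval, and timeout are ours; it assumes a sidecar listening on localhost:3500):
import time
import requests

TERMINAL_STATUSES = {'COMPLETED', 'FAILED', 'TERMINATED'}

def poll_workflow_status(instance_id: str, interval: float = 2.0, timeout: float = 300.0) -> dict:
    url = f'http://localhost:3500/v1.0/workflows/dapr/{instance_id}'
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(url).json()
        # Keep polling while the workflow reports RUNNING (or another non-terminal state).
        if status['runtimeStatus'] in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
    raise TimeoutError(f'Workflow {instance_id} did not reach a terminal state in time')

final_state = poll_workflow_status('12345678')
print(final_state.get('properties', {}).get('dapr.workflow.output'))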
Monitor
The monitor pattern is a recurring process that typically:
- Checks the status of a system
- Takes some action based on that status, e.g. sends a notification
- Sleeps for some period of time
- Repeats
The following diagram provides a rough illustration of this pattern.

Depending on the business needs, there may be a single monitor or there may be multiple monitors, one for each business entity (for example, a stock). Furthermore, the amount of time to sleep may need to change, depending on the circumstances. These requirements make using cron-based scheduling systems impractical.
Dapr Workflow supports this pattern natively by allowing you to implement eternal workflows. Rather than writing infinite while-loops (which is an anti-pattern), Dapr Workflow exposes a continue-as-new API that workflow authors can use to restart a workflow function from the beginning with a new input.
from dataclasses import dataclass
from datetime import timedelta
import random
import dapr.ext.workflow as wf
@dataclass
class JobStatus:
job_id: str
is_healthy: bool
def status_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus):
# poll a status endpoint associated with this job
status = yield ctx.call_activity(check_status, input=job)
if not ctx.is_replaying:
print(f"Job '{job.job_id}' is {status}.")
if status == "healthy":
job.is_healthy = True
next_sleep_interval = 60 # check less frequently when healthy
else:
if job.is_healthy:
job.is_healthy = False
ctx.call_activity(send_alert, input=f"Job '{job.job_id}' is unhealthy!")
next_sleep_interval = 5 # check more frequently when unhealthy
yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=next_sleep_interval))
# restart from the beginning with a new JobStatus input
ctx.continue_as_new(job)
def check_status(ctx, _) -> str:
return random.choice(["healthy", "unhealthy"])
def send_alert(ctx, message: str):
print(f'*** Alert: {message}')
const statusMonitorWorkflow: TWorkflow = async function* (ctx: WorkflowContext): any {
let duration;
const status = yield ctx.callActivity(checkStatusActivity);
if (status === "healthy") {
// Check less frequently when in a healthy state
// set duration to 1 hour
duration = 60 * 60;
} else {
yield ctx.callActivity(alertActivity, "job unhealthy");
// Check more frequently when in an unhealthy state
// set duration to 5 minutes
duration = 5 * 60;
}
// Put the workflow to sleep until the determined time
yield ctx.createTimer(duration);
// Restart from the beginning with the updated state
ctx.continueAsNew();
};
public override async Task<object> RunAsync(WorkflowContext context, MyEntityState myEntityState)
{
TimeSpan nextSleepInterval;
var status = await context.CallActivityAsync<string>("GetStatus");
if (status == "healthy")
{
myEntityState.IsHealthy = true;
// Check less frequently when in a healthy state
nextSleepInterval = TimeSpan.FromMinutes(60);
}
else
{
if (myEntityState.IsHealthy)
{
myEntityState.IsHealthy = false;
await context.CallActivityAsync("SendAlert", myEntityState);
}
// Check more frequently when in an unhealthy state
nextSleepInterval = TimeSpan.FromMinutes(5);
}
// Put the workflow to sleep until the determined time
await context.CreateTimer(nextSleepInterval);
// Restart from the beginning with the updated state
context.ContinueAsNew(myEntityState);
return null;
}
This example assumes you have a predefined MyEntityState class with a boolean IsHealthy property.
public class MonitorWorkflow extends Workflow {
@Override
public WorkflowStub create() {
return ctx -> {
Duration nextSleepInterval;
var status = ctx.callActivity(DemoWorkflowStatusActivity.class.getName(), DemoStatusActivityOutput.class).await();
var isHealthy = status.getIsHealthy();
if (isHealthy) {
// Check less frequently when in a healthy state
nextSleepInterval = Duration.ofMinutes(60);
} else {
ctx.callActivity(DemoWorkflowAlertActivity.class.getName()).await();
// Check more frequently when in an unhealthy state
nextSleepInterval = Duration.ofMinutes(5);
}
// Put the workflow to sleep until the determined time
try {
ctx.createTimer(nextSleepInterval).await();
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
// Restart from the beginning with the updated state
ctx.continueAsNew();
};
}
}
type JobStatus struct {
JobID string `json:"job_id"`
IsHealthy bool `json:"is_healthy"`
}
func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) {
var sleepInterval time.Duration
var job JobStatus
if err := ctx.GetInput(&job); err != nil {
return "", err
}
var status string
if err := ctx.CallActivity(CheckStatus, workflow.ActivityInput(job)).Await(&status); err != nil {
return "", err
}
if status == "healthy" {
job.IsHealthy = true
sleepInterval = time.Minute * 60
} else {
if job.IsHealthy {
job.IsHealthy = false
err := ctx.CallActivity(SendAlert, workflow.ActivityInput(fmt.Sprintf("Job '%s' is unhealthy!", job.JobID))).Await(nil)
if err != nil {
return "", err
}
}
sleepInterval = time.Minute * 5
}
if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil {
return "", err
}
ctx.ContinueAsNew(job, false)
return "", nil
}
func CheckStatus(ctx workflow.ActivityContext) (any, error) {
statuses := []string{"healthy", "unhealthy"}
return statuses[rand.Intn(len(statuses))], nil
}
func SendAlert(ctx workflow.ActivityContext) (any, error) {
var message string
if err := ctx.GetInput(&message); err != nil {
return "", err
}
fmt.Printf("*** Alert: %s", message)
return "", nil
}
A workflow implementing the monitor pattern can loop forever or it can terminate itself gracefully by not calling continue-as-new.
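For example, a minimal Python sketch that reuses the JobStatus and check_status pieces from the example above and stops monitoring when a hypothetical "deleted" status is observed, simply by returning instead of calling continue_as_new:
from datetime import timedelta
import dapr.ext.workflow as wf

def terminating_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus):
    status = yield ctx.call_activity(check_status, input=job)
    if status == 'deleted':
        # Returning without calling continue_as_new completes the workflow gracefully.
        return f"Monitoring of job '{job.job_id}' stopped: the job no longer exists"
    yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=5))
    ctx.continue_as_new(job)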
Note
This pattern can also be expressed using actors and reminders. The difference is that this workflow is expressed as a single function with inputs and state stored in local variables. Workflows can also execute a sequence of actions with stronger reliability guarantees, if necessary.
External system interaction
In some cases, a workflow may need to pause and wait for an external system to perform some action. For example, a workflow may need to pause and wait for a payment to be received. In this case, a payment system might publish an event to a pub/sub topic on receipt of a payment, and a listener on that topic can raise an event to the workflow using the raise event workflow API.
Another very common scenario is when a workflow needs to pause and wait for a human, for example when approving a purchase order. Dapr Workflow supports this event pattern via the external events feature.
Here’s an example workflow for a purchase order involving a human:
- A workflow is triggered when a purchase order is received.
- A rule in the workflow determines that a human needs to perform some action. For example, the purchase order cost exceeds a certain auto-approval threshold.
- The workflow sends a notification requesting a human action. For example, it sends an email with an approval link to a designated approver.
- The workflow pauses and waits for the human to either approve or reject the order by clicking on a link.
- If the approval isn’t received within the specified time, the workflow resumes and performs some compensation logic, such as canceling the order.
The following diagram illustrates this flow.

The following example code shows how this pattern can be implemented using Dapr Workflow.
from dataclasses import dataclass
from datetime import timedelta
import dapr.ext.workflow as wf
@dataclass
class Order:
cost: float
product: str
quantity: int
def __str__(self):
return f'{self.product} ({self.quantity})'
@dataclass
class Approval:
approver: str
@staticmethod
def from_dict(dict):
return Approval(**dict)
def purchase_order_workflow(ctx: wf.DaprWorkflowContext, order: Order):
# Orders under $1000 are auto-approved
if order.cost < 1000:
return "Auto-approved"
# Orders of $1000 or more require manager approval
yield ctx.call_activity(send_approval_request, input=order)
# Approvals must be received within 24 hours or they will be canceled.
approval_event = ctx.wait_for_external_event("approval_received")
timeout_event = ctx.create_timer(timedelta(hours=24))
winner = yield wf.when_any([approval_event, timeout_event])
if winner == timeout_event:
return "Cancelled"
# The order was approved
yield ctx.call_activity(place_order, input=order)
approval_details = Approval.from_dict(approval_event.get_result())
return f"Approved by '{approval_details.approver}'"
def send_approval_request(_, order: Order) -> None:
print(f'*** Sending approval request for order: {order}')
def place_order(_, order: Order) -> None:
print(f'*** Placing order: {order}')
import {
Task,
DaprWorkflowClient,
WorkflowActivityContext,
WorkflowContext,
WorkflowRuntime,
TWorkflow,
} from "@dapr/dapr";
import * as readlineSync from "readline-sync";
// Wrap the entire code in an immediately-invoked async function
async function start() {
class Order {
cost: number;
product: string;
quantity: number;
constructor(cost: number, product: string, quantity: number) {
this.cost = cost;
this.product = product;
this.quantity = quantity;
}
}
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
// Update the gRPC client and worker to use a local address and port
const daprHost = "localhost";
const daprPort = "50001";
const workflowClient = new DaprWorkflowClient({
daprHost,
daprPort,
});
const workflowRuntime = new WorkflowRuntime({
daprHost,
daprPort,
});
// Activity function that sends an approval request to the manager
const sendApprovalRequest = async (_: WorkflowActivityContext, order: Order) => {
// Simulate some work that takes an amount of time
await sleep(3000);
console.log(`Sending approval request for order: ${order.product}`);
};
// Activity function that places an order
const placeOrder = async (_: WorkflowActivityContext, order: Order) => {
console.log(`Placing order: ${order.product}`);
};
// Orchestrator function that represents a purchase order workflow
const purchaseOrderWorkflow: TWorkflow = async function* (ctx: WorkflowContext, order: Order): any {
// Orders under $1000 are auto-approved
if (order.cost < 1000) {
return "Auto-approved";
}
// Orders of $1000 or more require manager approval
yield ctx.callActivity(sendApprovalRequest, order);
// Approvals must be received within 24 hours or they will be canceled.
const tasks: Task<any>[] = [];
const approvalEvent = ctx.waitForExternalEvent("approval_received");
const timeoutEvent = ctx.createTimer(24 * 60 * 60);
tasks.push(approvalEvent);
tasks.push(timeoutEvent);
const winner = yield ctx.whenAny(tasks);
if (winner == timeoutEvent) {
return "Cancelled";
}
yield ctx.callActivity(placeOrder, order);
const approvalDetails = approvalEvent.getResult();
return `Approved by ${approvalDetails.approver}`;
};
workflowRuntime
.registerWorkflow(purchaseOrderWorkflow)
.registerActivity(sendApprovalRequest)
.registerActivity(placeOrder);
// Wrap the worker startup in a try-catch block to handle any errors during startup
try {
await workflowRuntime.start();
console.log("Worker started successfully");
} catch (error) {
console.error("Error starting worker:", error);
}
// Schedule a new orchestration
try {
const cost = readlineSync.questionInt("Cost of your order:");
const approver = readlineSync.question("Approver of your order:");
const timeout = readlineSync.questionInt("Timeout for your order in seconds:");
const order = new Order(cost, "MyProduct", 1);
const id = await workflowClient.scheduleNewWorkflow(purchaseOrderWorkflow, order);
console.log(`Orchestration scheduled with ID: ${id}`);
// prompt for approval asynchronously
promptForApproval(approver, workflowClient, id);
// Wait for orchestration completion
const state = await workflowClient.waitForWorkflowCompletion(id, undefined, timeout + 2);
console.log(`Orchestration completed! Result: ${state?.serializedOutput}`);
} catch (error) {
console.error("Error scheduling or waiting for orchestration:", error);
}
// stop worker and client
await workflowRuntime.stop();
await workflowClient.stop();
// stop the dapr sidecar
process.exit(0);
}
async function promptForApproval(approver: string, workflowClient: DaprWorkflowClient, id: string) {
if (readlineSync.keyInYN("Press [Y] to approve the order... Y/yes, N/no")) {
const approvalEvent = { approver: approver };
await workflowClient.raiseEvent(id, "approval_received", approvalEvent);
} else {
return "Order rejected";
}
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
{
// ...(other steps)...
// Require orders over a certain threshold to be approved
if (order.TotalCost > OrderApprovalThreshold)
{
try
{
// Request human approval for this order
await context.CallActivityAsync(nameof(RequestApprovalActivity), order);
// Pause and wait for a human to approve the order
ApprovalResult approvalResult = await context.WaitForExternalEventAsync<ApprovalResult>(
eventName: "ManagerApproval",
timeout: TimeSpan.FromDays(3));
if (approvalResult == ApprovalResult.Rejected)
{
// The order was rejected, end the workflow here
return new OrderResult(Processed: false);
}
}
catch (TaskCanceledException)
{
// An approval timeout results in automatic order cancellation
return new OrderResult(Processed: false);
}
}
// ...(other steps)...
// End the workflow with a success result
return new OrderResult(Processed: true);
}
Note In the example above, RequestApprovalActivity is the name of a workflow activity to invoke and ApprovalResult is an enumeration defined by the workflow app. For brevity, these definitions were left out of the example code.
public class ExternalSystemInteractionWorkflow extends Workflow {
@Override
public WorkflowStub create() {
return ctx -> {
// ...other steps...
Integer orderCost = ctx.getInput(int.class);
// Require orders over a certain threshold to be approved
if (orderCost > ORDER_APPROVAL_THRESHOLD) {
try {
// Request human approval for this order
ctx.callActivity("RequestApprovalActivity", orderCost, Void.class).await();
// Pause and wait for a human to approve the order
boolean approved = ctx.waitForExternalEvent("ManagerApproval", Duration.ofDays(3), boolean.class).await();
if (!approved) {
// The order was rejected, end the workflow here
ctx.complete("Process reject");
}
} catch (TaskCanceledException e) {
// An approval timeout results in automatic order cancellation
ctx.complete("Process cancel");
}
}
// ...other steps...
// End the workflow with a success result
ctx.complete("Process approved");
};
}
}
type Order struct {
Cost float64 `json:"cost"`
Product string `json:"product"`
Quantity int `json:"quantity"`
}
type Approval struct {
Approver string `json:"approver"`
}
func PurchaseOrderWorkflow(ctx *workflow.WorkflowContext) (any, error) {
var order Order
if err := ctx.GetInput(&order); err != nil {
return "", err
}
// Orders under $1000 are auto-approved
if order.Cost < 1000 {
return "Auto-approved", nil
}
// Orders of $1000 or more require manager approval
if err := ctx.CallActivity(SendApprovalRequest, workflow.ActivityInput(order)).Await(nil); err != nil {
return "", err
}
// Approvals must be received within 24 hours or they will be cancelled
var approval Approval
if err := ctx.WaitForExternalEvent("approval_received", time.Hour*24).Await(&approval); err != nil {
// Assuming that a timeout has taken place - in any case; an error.
return "error/cancelled", err
}
// The order was approved
if err := ctx.CallActivity(PlaceOrder, workflow.ActivityInput(order)).Await(nil); err != nil {
return "", err
}
return fmt.Sprintf("Approved by %s", approval.Approver), nil
}
func SendApprovalRequest(ctx workflow.ActivityContext) (any, error) {
var order Order
if err := ctx.GetInput(&order); err != nil {
return "", err
}
fmt.Printf("*** Sending approval request for order: %v\n", order)
return "", nil
}
func PlaceOrder(ctx workflow.ActivityContext) (any, error) {
var order Order
if err := ctx.GetInput(&order); err != nil {
return "", err
}
fmt.Printf("*** Placing order: %v", order)
return "", nil
}
The code that delivers the event to resume the workflow execution is external to the workflow. Workflow events can be delivered to a waiting workflow instance using the raise event workflow management API, as shown in the following example:
from dapr.clients import DaprClient
from dataclasses import asdict
with DaprClient() as d:
d.raise_workflow_event(
instance_id=instance_id,
workflow_component="dapr",
event_name="approval_received",
event_data=asdict(Approval("Jane Doe")))
import { DaprClient } from "@dapr/dapr";
public async raiseEvent(workflowInstanceId: string, eventName: string, eventPayload?: any) {
await this._innerClient.raiseOrchestrationEvent(workflowInstanceId, eventName, eventPayload);
}
// Raise the workflow event to the waiting workflow
await daprClient.RaiseWorkflowEventAsync(
instanceId: orderId,
workflowComponent: "dapr",
eventName: "ManagerApproval",
eventData: ApprovalResult.Approved);
System.out.println("**SendExternalMessage: RestartEvent**");
client.raiseEvent(restartingInstanceId, "RestartEvent", "RestartEventPayload");
func raiseEvent() {
daprClient, err := client.NewClient()
if err != nil {
log.Fatalf("failed to initialize the client")
}
err = daprClient.RaiseEventWorkflow(context.Background(), &client.RaiseEventWorkflowRequest{
InstanceID: "instance_id",
WorkflowComponent: "dapr",
EventName: "approval_received",
EventData: Approval{
Approver: "Jane Doe",
},
})
if err != nil {
log.Fatalf("failed to raise event on workflow")
}
log.Println("raised an event on specified workflow")
}
External events don’t have to be directly triggered by humans. They can also be triggered by other systems. For example, a workflow may need to pause and wait for a payment to be received. In this case, a payment system might publish an event to a pub/sub topic on receipt of a payment, and a listener on that topic can raise an event to the workflow using the raise event workflow API.
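For instance, a hypothetical Python subscriber that bridges a pub/sub payment event to a waiting workflow might look like the following sketch. The pubsub component name, topic, payload shape, and event name are all assumptions; it also assumes the workflow instance was started with the order ID as its instance ID:
from dapr.clients import DaprClient
from flask import Flask, jsonify, request

app = Flask(__name__)

# Programmatic subscription: tell Dapr which topic this app listens to.
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    return jsonify([{'pubsubname': 'pubsub', 'topic': 'payments', 'route': '/payments'}])

# When a payment event arrives, raise an external event on the waiting workflow.
@app.route('/payments', methods=['POST'])
def on_payment_received():
    event = request.json
    order_id = event['data']['order_id']  # assumed payload shape
    with DaprClient() as d:
        d.raise_workflow_event(
            instance_id=order_id,           # assumes instance ID == order ID
            workflow_component='dapr',
            event_name='payment_received',  # assumed event name the workflow waits on
            event_data=event['data'])
    return jsonify({'status': 'SUCCESS'})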
Next steps
Workflow architecture >>
Related links
- Try out Dapr Workflows using the quickstart
- Workflow overview
- Workflow API reference
1.3.4 - Workflow architecture
Dapr Workflows allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside the Dapr sidecar and orchestrates workflow code deployed as part of your application. Dapr Workflows are built on top of Dapr Actors, which provide durability and scalability for workflow execution.
This article describes:
- The architecture of the Dapr Workflow engine
- How the workflow engine interacts with application code
- How the workflow engine fits into the overall Dapr architecture
- How different workflow backends can work with the workflow engine
For more information on how to author Dapr Workflows in your application, see How to: Author a workflow.
The Dapr Workflow engine is internally powered by Dapr’s actor runtime. The following diagram illustrates the Dapr Workflow architecture in Kubernetes mode:

To use the Dapr Workflow building block, you write workflow code in your application using the Dapr Workflow SDK, which internally connects to the sidecar using a gRPC stream. This registers the workflow and any workflow activities (tasks that workflows can schedule).
The engine is embedded directly into the sidecar and implemented using the durabletask-go
framework library. This framework allows you to swap out different storage providers, including a storage provider created for Dapr that leverages internal actors behind the scenes. Since Dapr Workflows use actors, you can store workflow state in state stores.
Sidecar interactions
When a workflow application starts up, it uses a workflow authoring SDK to send a gRPC request to the Dapr sidecar and get back a stream of workflow work items, following the server streaming RPC pattern. These work items can be anything from “start a new X workflow” (where X is the type of a workflow) to “schedule activity Y with input Z to run on behalf of workflow X”.
The workflow app executes the appropriate workflow code and then sends a gRPC request back to the sidecar with the execution results.

All interactions happen over a single gRPC channel and are initiated by the application, which means the application doesn’t need to open any inbound ports. The details of these interactions are internally handled by the language-specific Dapr Workflow authoring SDK.
Differences between workflow and actor sidecar interactions
If you’re familiar with Dapr actors, you may notice a few differences in terms of how sidecar interactions work for workflows compared to actors.
| Actors | Workflows |
| --- | --- |
| Actors can interact with the sidecar using either HTTP or gRPC. | Workflows only use gRPC. Due to the workflow gRPC protocol’s complexity, an SDK is required when implementing workflows. |
| Actor operations are pushed to application code from the sidecar. This requires the application to listen on a particular app port. | For workflows, operations are pulled from the sidecar by the application using a streaming protocol. The application doesn’t need to listen on any ports to run workflows. |
| Actors explicitly register themselves with the sidecar. | Workflows do not register themselves with the sidecar. The embedded engine doesn’t keep track of workflow types. This responsibility is instead delegated to the workflow application and its SDK. |
Workflow distributed tracing
The durabletask-go
core used by the workflow engine writes distributed traces using Open Telemetry SDKs. These traces are captured automatically by the Dapr sidecar and exported to the configured Open Telemetry provider, such as Zipkin.
Each workflow instance managed by the engine is represented as one or more spans. There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers.
Workflow activity code currently does not have access to the trace context.
Internal workflow actors
There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine:
dapr.internal.{namespace}.{appID}.workflow
dapr.internal.{namespace}.{appID}.activity
The {namespace}
value is the Dapr namespace and defaults to default
if no namespace is configured. The {appID}
value is the app’s ID. For example, if you have a workflow app named “wfapp”, then the type of the workflow actor would be dapr.internal.default.wfapp.workflow
and the type of the activity actor would be dapr.internal.default.wfapp.activity
.
The following diagram demonstrates how internal workflow actors operate in a Kubernetes scenario:

Just like user-defined actors, internal workflow actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. However, unlike actors that live in application code, these internal actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist.
Note
The internal workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK. If an app never registers a workflow, then the internal workflow actors are never registered.
Workflow actors
There are two different types of actors used with workflows: workflow actors and activity actors. Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
Each workflow actor saves its state using the following keys in the configured state store:
| Key | Description |
| --- | --- |
| inbox-NNNNNN | A workflow’s inbox is effectively a FIFO queue of messages that drive a workflow’s execution. Example messages include workflow creation messages, activity task completion messages, etc. Each message is stored in its own key in the state store with the name inbox-NNNNNN where NNNNNN is a 6-digit number indicating the ordering of the messages. These state keys are removed once the corresponding messages are consumed by the workflow. |
| history-NNNNNN | A workflow’s history is an ordered list of events that represent a workflow’s execution history. Each key in the history holds the data for a single history event. Like an append-only log, workflow history events are only added and never removed (except when a workflow performs a “continue as new” operation, which purges all history and restarts a workflow with a new input). |
| customStatus | Contains a user-defined workflow status value. There is exactly one customStatus key for each workflow actor instance. |
| metadata | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |
Warning
Workflow actor state remains in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. To address this, either purge workflows using their ID or directly delete entries in the workflow DB store.
The following diagram illustrates the typical lifecycle of a workflow actor.

To summarize:
- A workflow actor is activated when it receives a new message.
- New messages then trigger the associated workflow code (in your application) to run and return an execution result back to the workflow actor.
- Once the result is received, the actor schedules any tasks as necessary.
- After scheduling, the actor updates its state in the state store.
- Finally, the actor goes idle until it receives another message. During this idle time, the sidecar may decide to unload the workflow actor from memory.
Activity actors
Activity actors are responsible for managing the state and placement of all workflow activity invocations. A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow. The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of 876bf371
and is the third activity to be scheduled by the workflow, it’s ID will be 876bf371::2
where 2
is the sequence number.
Each activity actor stores a single key into the state store:
| Key | Description |
| --- | --- |
| activityState | The key contains the activity invocation payload, which includes the serialized activity input data. This key is deleted automatically after the activity invocation has completed. |
The following diagram illustrates the typical lifecycle of an activity actor.

Activity actors are short-lived:
- Activity actors are activated when a workflow actor schedules an activity task.
- Activity actors then immediately call into the workflow application to invoke the associated activity code.
- Once the activity code has finished running and has returned its result, the activity actor sends a message to the parent workflow actor with the execution results.
- Once the results are sent, the workflow is triggered to move forward to its next step.
Reminder usage and execution guarantees
Dapr Workflow ensures workflow fault tolerance by using actor reminders to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor will create a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder will reactivate the corresponding actor and the execution will be retried.

Important
Too many active reminders in a cluster may result in performance issues. If your application is already using actors and reminders heavily, be mindful of the additional load that Dapr Workflows may add to your system.
State store usage
Dapr Workflows use actors internally to drive the execution of workflows. Like any actors, these internal workflow actors store their state in the configured state store. Any state store that supports actors implicitly supports Dapr Workflow.
As discussed in the workflow actors section, workflows save their state incrementally by appending to a history log. The history log for a workflow is distributed across multiple state store keys so that each “checkpoint” only needs to append the newest entries.
The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state. Sequential workflows will therefore make smaller batch updates to the state store, while fan-out/fan-in workflows will require larger batches. The size of the batch is also impacted by the size of inputs and outputs when workflows invoke activities or child workflows.

Different state store implementations may implicitly put restrictions on the types of workflows you can author. For example, the Azure Cosmos DB state store limits item sizes to 2 MB of UTF-8 encoded JSON. The input or output payload of an activity or child workflow is stored as a single record in the state store, so an item limit of 2 MB means that workflow and activity inputs and outputs can’t exceed 2 MB of JSON-serialized data.
Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow.
Workflow state can be purged from a state store, including all its history. Each Dapr SDK exposes APIs for purging all metadata related to specific workflow instances.
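For example, with the Python SDK, purging a completed instance is a single call (a minimal sketch, assuming the DaprWorkflowClient purge API and the instance ID from the earlier example):
from dapr.ext.workflow import DaprWorkflowClient

wf_client = DaprWorkflowClient()
# Removes the workflow's history, inbox, and metadata keys from the state store.
wf_client.purge_workflow(instance_id='12345678')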
Workflow scalability
Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors. The placement service:
- Doesn’t distinguish between workflow actors and actors you define in your application
- Will load balance workflows using the same algorithms that it uses for actors
The expected scalability of a workflow is determined by the following factors:
- The number of machines used to host your workflow application
- The CPU and memory resources available on the machines running workflows
- The scalability of the state store configured for actors
- The scalability of the actor placement service and the reminder subsystem
The implementation details of the workflow code in the target application also play a role in the scalability of individual workflow instances. Each workflow instance executes on a single node at a time, but a workflow can schedule activities and child workflows which run on other nodes.
Workflows can also schedule these activities and child workflows to run in parallel, allowing a single workflow to potentially distribute compute tasks across all available nodes in the cluster.

Important
Currently, there are no global limits imposed on workflow and activity concurrency. A runaway workflow could therefore potentially consume all resources in a cluster if it attempts to schedule too many tasks in parallel. Use care when authoring Dapr Workflows that schedule large batches of work in parallel.
Also, the Dapr Workflow engine requires that all instances of each workflow app register the exact same set of workflows and activities. In other words, it’s not possible to scale certain workflows or activities independently. All workflows and activities within an app must be scaled together.
Workflows don’t control the specifics of how load is distributed across the cluster. For example, if a workflow schedules 10 activity tasks to run in parallel, all 10 tasks may run on as many as 10 different compute nodes or as few as a single compute node. The actual scale behavior is determined by the actor placement service, which manages the distribution of the actors that represent each of the workflow’s tasks.
Workflow backend
The workflow backend is responsible for orchestrating and preserving the state of workflows. At any given time, only one backend can be supported. You can configure the workflow backend as a component, similar to any other component in Dapr. Configuration requires:
- Specifying the type of workflow backend.
- Providing the configuration specific to that backend.
For instance, the following sample demonstrates how to define an actor backend component. Dapr Workflow currently supports only the actor backend, and users are not required to define an actor backend component to use it.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: actorbackend
spec:
type: workflowbackend.actor
version: v1
Workflow latency
In order to provide guarantees around durability and resiliency, Dapr Workflows frequently write to the state store and rely on reminders to drive execution. Dapr Workflows therefore may not be appropriate for latency-sensitive workloads. Expected sources of high latency include:
- Latency from the state store when persisting workflow state.
- Latency from the state store when rehydrating workflows with large histories.
- Latency caused by too many active reminders in the cluster.
- Latency caused by high CPU usage in the cluster.
See the Reminder usage and execution guarantees section for more details on how the design of workflow actors may impact execution latency.
Next steps
Author workflows >>
Related links
- Workflow overview
- Workflow API reference
- Try out the Workflow quickstart
1.3.5 - How to: Author a workflow
This article provides a high-level overview of how to author workflows that are executed by the Dapr Workflow engine.
Note
If you haven’t already, try out the workflow quickstart for a quick walk-through on how to use workflows.
Author workflows as code
Dapr Workflow logic is implemented using general purpose programming languages, allowing you to:
- Use your preferred programming language (no need to learn a new DSL or YAML schema).
- Have access to the language’s standard libraries.
- Build your own libraries and abstractions.
- Use debuggers and examine local variables.
- Write unit tests for your workflows, just like any other part of your application logic.
The Dapr sidecar doesn’t load any workflow definitions. Rather, the sidecar simply drives the execution of the workflows, leaving all the workflow activities to be part of the application.
Write the workflow activities
Workflow activities are the basic unit of work in a workflow and are the tasks that get orchestrated in the business process.
Define the workflow activities you’d like your workflow to perform. Activities are function definitions that can take inputs and return outputs. The following example creates a counter activity called hello_act that notifies users of the current counter value. hello_act takes a WorkflowActivityContext parameter along with the workflow input.
@wfr.activity(name='hello_act')
def hello_act(ctx: WorkflowActivityContext, wf_input):
    global counter
    counter += wf_input
    print(f'New counter value is: {counter}!', flush=True)
Define the workflow activities you’d like your workflow to perform. Activities are wrapped in the WorkflowActivityContext class, which implements the workflow activities.
export default class WorkflowActivityContext {
  private readonly _innerContext: ActivityContext;

  constructor(innerContext: ActivityContext) {
    if (!innerContext) {
      throw new Error("ActivityContext cannot be undefined");
    }
    this._innerContext = innerContext;
  }

  public getWorkflowInstanceId(): string {
    return this._innerContext.orchestrationId;
  }

  public getWorkflowActivityId(): number {
    return this._innerContext.taskId;
  }
}
Define the workflow activities you’d like your workflow to perform. Activities are a class definition and can take inputs and outputs. Activities also participate in dependency injection, like binding to a Dapr client.
The activities called in the example below are:
- NotifyActivity: Receives notification of a new order.
- ReserveInventoryActivity: Checks for sufficient inventory to meet the new order.
- ProcessPaymentActivity: Processes payment for the order. Includes NotifyActivity to send notification of a successful order.
NotifyActivity
public class NotifyActivity : WorkflowActivity<Notification, object>
{
    //...
    public NotifyActivity(ILoggerFactory loggerFactory)
    {
        this.logger = loggerFactory.CreateLogger<NotifyActivity>();
    }
    //...
}
See the full NotifyActivity.cs workflow activity example.
ReserveInventoryActivity
public class ReserveInventoryActivity : WorkflowActivity<InventoryRequest, InventoryResult>
{
    //...
    public ReserveInventoryActivity(ILoggerFactory loggerFactory, DaprClient client)
    {
        this.logger = loggerFactory.CreateLogger<ReserveInventoryActivity>();
        this.client = client;
    }
    //...
}
See the full ReserveInventoryActivity.cs workflow activity example.
ProcessPaymentActivity
public class ProcessPaymentActivity : WorkflowActivity<PaymentRequest, object>
{
    //...
    public ProcessPaymentActivity(ILoggerFactory loggerFactory)
    {
        this.logger = loggerFactory.CreateLogger<ProcessPaymentActivity>();
    }
    //...
}
See the full ProcessPaymentActivity.cs workflow activity example.
Define the workflow activities you’d like your workflow to perform. Activities are wrapped in the public DemoWorkflowActivity class, which implements the workflow activities.
@JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY)
public class DemoWorkflowActivity implements WorkflowActivity {

  @Override
  public DemoActivityOutput run(WorkflowActivityContext ctx) {
    Logger logger = LoggerFactory.getLogger(DemoWorkflowActivity.class);
    logger.info("Starting Activity: " + ctx.getName());

    var message = ctx.getInput(DemoActivityInput.class).getMessage();
    var newMessage = message + " World!, from Activity";
    logger.info("Message Received from input: " + message);
    logger.info("Sending message to output: " + newMessage);

    logger.info("Sleeping for 5 seconds to simulate long running operation...");
    try {
      TimeUnit.SECONDS.sleep(5);
    } catch (InterruptedException e) {
      throw new RuntimeException(e);
    }

    logger.info("Activity finished");
    var output = new DemoActivityOutput(message, newMessage);
    logger.info("Activity returned: " + output);
    return output;
  }
}
Define each workflow activity you’d like your workflow to perform. The activity input can be unmarshalled from the context with ctx.GetInput. Activities should be defined as taking a ctx workflow.ActivityContext parameter and returning an interface and an error.
func TestActivity(ctx workflow.ActivityContext) (any, error) {
    var input int
    if err := ctx.GetInput(&input); err != nil {
        return "", err
    }
    // Do something here
    return "result", nil
}
Write the workflow
Next, register and call the activities in a workflow.
The hello_world_wf function takes a DaprWorkflowContext parameter along with the workflow input, and returns an output. It also includes yield statements that do the heavy lifting of the workflow and call the workflow activities.
@wfr.workflow(name='hello_world_wf')
def hello_world_wf(ctx: DaprWorkflowContext, wf_input):
    print(f'{wf_input}')
    yield ctx.call_activity(hello_act, input=1)
    yield ctx.call_activity(hello_act, input=10)
    yield ctx.call_activity(hello_retryable_act, retry_policy=retry_policy)
    yield ctx.call_child_workflow(child_retryable_wf, retry_policy=retry_policy)
    # Use when_any to handle both the external event and a timeout
    event = ctx.wait_for_external_event(event_name)
    timeout = ctx.create_timer(timedelta(seconds=30))
    winner = yield when_any([event, timeout])
    if winner == timeout:
        print('Workflow timed out waiting for event')
        return 'Timeout'
    yield ctx.call_activity(hello_act, input=100)
    yield ctx.call_activity(hello_act, input=1000)
    return 'Completed'
Next, register the workflow with the WorkflowRuntime class and start the workflow runtime.
export default class WorkflowRuntime {
  //..

  // Register workflow implementation for handling orchestrations
  public registerWorkflow(workflow: TWorkflow): WorkflowRuntime {
    const name = getFunctionName(workflow);
    const workflowWrapper = (ctx: OrchestrationContext, input: any): any => {
      const workflowContext = new WorkflowContext(ctx);
      return workflow(workflowContext, input);
    };
    this.worker.addNamedOrchestrator(name, workflowWrapper);
    return this;
  }

  // Register workflow activities
  public registerActivity(fn: TWorkflowActivity<TInput, TOutput>): WorkflowRuntime {
    const name = getFunctionName(fn);
    const activityWrapper = (ctx: ActivityContext, input: TInput): TOutput => {
      const wfActivityContext = new WorkflowActivityContext(ctx);
      return fn(wfActivityContext, input);
    };
    this.worker.addNamedActivity(name, activityWrapper);
    return this;
  }

  // Start the workflow runtime processing items and block.
  public async start() {
    await this.worker.start();
  }
}
The OrderProcessingWorkflow class is derived from a base class called Workflow with input and output parameter types. It also includes a RunAsync method that does the heavy lifting of the workflow and calls the workflow activities.
class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
{
    public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
    {
        //...
        await context.CallActivityAsync(
            nameof(NotifyActivity),
            new Notification($"Received order {orderId} for {order.Name} at {order.TotalCost:c}"));
        //...
        InventoryResult result = await context.CallActivityAsync<InventoryResult>(
            nameof(ReserveInventoryActivity),
            new InventoryRequest(RequestId: orderId, order.Name, order.Quantity));
        //...
        await context.CallActivityAsync(
            nameof(ProcessPaymentActivity),
            new PaymentRequest(RequestId: orderId, order.TotalCost, "USD"));
        await context.CallActivityAsync(
            nameof(NotifyActivity),
            new Notification($"Order {orderId} processed successfully!"));
        // End the workflow with a success result
        return new OrderResult(Processed: true);
    }
}
See the full workflow example in OrderProcessingWorkflow.cs.
Next, register the workflow with the WorkflowRuntimeBuilder and start the workflow runtime.
public class DemoWorkflowWorker {
  public static void main(String[] args) throws Exception {
    // Register the Workflow with the builder.
    WorkflowRuntimeBuilder builder = new WorkflowRuntimeBuilder().registerWorkflow(DemoWorkflow.class);
    builder.registerActivity(DemoWorkflowActivity.class);

    // Build and then start the workflow runtime, pulling and executing tasks
    try (WorkflowRuntime runtime = builder.build()) {
      System.out.println("Start workflow runtime");
      runtime.start();
    }
    System.exit(0);
  }
}
Define your workflow function with the parameter ctx *workflow.WorkflowContext and return any and error. Invoke your defined activities from within your workflow.
func TestWorkflow(ctx *workflow.WorkflowContext) (any, error) {
    var input int
    if err := ctx.GetInput(&input); err != nil {
        return nil, err
    }
    var output string
    if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil {
        return nil, err
    }
    if err := ctx.WaitForExternalEvent("testEvent", time.Second*60).Await(&output); err != nil {
        return nil, err
    }
    if err := ctx.CreateTimer(time.Second).Await(nil); err != nil {
        return nil, err
    }
    return output, nil
}
Write the application
Finally, compose the application using the workflow.
In the following example, for a basic Python hello world application using the Python SDK, your project code would include:
- The dapr.ext.workflow package, which provides the Python SDK workflow capabilities.
- A builder with extensions called:
  - WorkflowRuntime: Allows you to register the workflow runtime.
  - DaprWorkflowContext: Allows you to create workflows.
  - WorkflowActivityContext: Allows you to create workflow activities.
- API calls. In the example below, these calls start, pause, resume, purge, and complete the workflow.
from datetime import timedelta
from time import sleep

from dapr.ext.workflow import (
    WorkflowRuntime,
    DaprWorkflowContext,
    WorkflowActivityContext,
    RetryPolicy,
    DaprWorkflowClient,
    when_any,
)
from dapr.conf import Settings
from dapr.clients.exceptions import DaprInternalError

settings = Settings()

counter = 0
retry_count = 0
child_orchestrator_count = 0
child_orchestrator_string = ''
child_act_retry_count = 0
instance_id = 'exampleInstanceID'
child_instance_id = 'childInstanceID'
workflow_name = 'hello_world_wf'
child_workflow_name = 'child_wf'
input_data = 'Hi Counter!'
event_name = 'event1'
event_data = 'eventData'
non_existent_id_error = 'no such instance exists'

retry_policy = RetryPolicy(
    first_retry_interval=timedelta(seconds=1),
    max_number_of_attempts=3,
    backoff_coefficient=2,
    max_retry_interval=timedelta(seconds=10),
    retry_timeout=timedelta(seconds=100),
)

wfr = WorkflowRuntime()


@wfr.workflow(name='hello_world_wf')
def hello_world_wf(ctx: DaprWorkflowContext, wf_input):
    print(f'{wf_input}')
    yield ctx.call_activity(hello_act, input=1)
    yield ctx.call_activity(hello_act, input=10)
    yield ctx.call_activity(hello_retryable_act, retry_policy=retry_policy)
    yield ctx.call_child_workflow(child_retryable_wf, retry_policy=retry_policy)
    # Use when_any to handle both the external event and a timeout
    event = ctx.wait_for_external_event(event_name)
    timeout = ctx.create_timer(timedelta(seconds=30))
    winner = yield when_any([event, timeout])
    if winner == timeout:
        print('Workflow timed out waiting for event')
        return 'Timeout'
    yield ctx.call_activity(hello_act, input=100)
    yield ctx.call_activity(hello_act, input=1000)
    return 'Completed'


@wfr.activity(name='hello_act')
def hello_act(ctx: WorkflowActivityContext, wf_input):
    global counter
    counter += wf_input
    print(f'New counter value is: {counter}!', flush=True)


@wfr.activity(name='hello_retryable_act')
def hello_retryable_act(ctx: WorkflowActivityContext):
    global retry_count
    if (retry_count % 2) == 0:
        print(f'Retry count value is: {retry_count}!', flush=True)
        retry_count += 1
        raise ValueError('Retryable Error')
    print(f'Retry count value is: {retry_count}! This print statement verifies retry', flush=True)
    retry_count += 1


@wfr.workflow(name='child_retryable_wf')
def child_retryable_wf(ctx: DaprWorkflowContext):
    global child_orchestrator_string, child_orchestrator_count
    if not ctx.is_replaying:
        child_orchestrator_count += 1
        print(f'Appending {child_orchestrator_count} to child_orchestrator_string!', flush=True)
        child_orchestrator_string += str(child_orchestrator_count)
    yield ctx.call_activity(
        act_for_child_wf, input=child_orchestrator_count, retry_policy=retry_policy
    )
    if child_orchestrator_count < 3:
        raise ValueError('Retryable Error')


@wfr.activity(name='act_for_child_wf')
def act_for_child_wf(ctx: WorkflowActivityContext, inp):
    global child_orchestrator_string, child_act_retry_count
    inp_char = chr(96 + inp)
    print(f'Appending {inp_char} to child_orchestrator_string!', flush=True)
    child_orchestrator_string += inp_char
    if child_act_retry_count % 2 == 0:
        child_act_retry_count += 1
        raise ValueError('Retryable Error')
    child_act_retry_count += 1


def main():
    wfr.start()
    wf_client = DaprWorkflowClient()

    print('==========Start Counter Increase as per Input:==========')
    wf_client.schedule_new_workflow(
        workflow=hello_world_wf, input=input_data, instance_id=instance_id
    )
    wf_client.wait_for_workflow_start(instance_id)

    # Sleep to let the workflow run initial activities
    sleep(12)

    assert counter == 11
    assert retry_count == 2
    assert child_orchestrator_string == '1aa2bb3cc'

    # Pause Test
    wf_client.pause_workflow(instance_id=instance_id)
    metadata = wf_client.get_workflow_state(instance_id=instance_id)
    print(f'Get response from {workflow_name} after pause call: {metadata.runtime_status.name}')

    # Resume Test
    wf_client.resume_workflow(instance_id=instance_id)
    metadata = wf_client.get_workflow_state(instance_id=instance_id)
    print(f'Get response from {workflow_name} after resume call: {metadata.runtime_status.name}')

    sleep(2)  # Give the workflow time to reach the event wait state
    wf_client.raise_workflow_event(instance_id=instance_id, event_name=event_name, data=event_data)

    print('========= Waiting for Workflow completion', flush=True)
    try:
        state = wf_client.wait_for_workflow_completion(instance_id, timeout_in_seconds=30)
        if state.runtime_status.name == 'COMPLETED':
            print('Workflow completed! Result: {}'.format(state.serialized_output.strip('"')))
        else:
            print(f'Workflow failed! Status: {state.runtime_status.name}')
    except TimeoutError:
        print('*** Workflow timed out!')

    wf_client.purge_workflow(instance_id=instance_id)
    try:
        wf_client.get_workflow_state(instance_id=instance_id)
    except DaprInternalError as err:
        if non_existent_id_error in err._message:
            print('Instance Successfully Purged')

    wfr.shutdown()


if __name__ == '__main__':
    main()
The following example is a basic JavaScript application using the JavaScript SDK. As in this example, your project code would include:
- A builder with extensions called:
  - WorkflowRuntime: Allows you to register workflows and workflow activities.
  - DaprWorkflowContext: Allows you to create workflows.
  - WorkflowActivityContext: Allows you to create workflow activities.
- API calls. In the example below, these calls start, terminate, get status, pause, resume, raise event, and purge the workflow.
import { TaskHubGrpcClient } from "@microsoft/durabletask-js";
import { WorkflowState } from "./WorkflowState";
import { generateApiTokenClientInterceptors, generateEndpoint, getDaprApiToken } from "../internal/index";
import { TWorkflow } from "../../types/workflow/Workflow.type";
import { getFunctionName } from "../internal";
import { WorkflowClientOptions } from "../../types/workflow/WorkflowClientOption";
/** DaprWorkflowClient class defines client operations for managing workflow instances. */
export default class DaprWorkflowClient {
private readonly _innerClient: TaskHubGrpcClient;
/** Initialize a new instance of the DaprWorkflowClient.
*/
constructor(options: Partial<WorkflowClientOptions> = {}) {
const grpcEndpoint = generateEndpoint(options);
options.daprApiToken = getDaprApiToken(options);
this._innerClient = this.buildInnerClient(grpcEndpoint.endpoint, options);
}
private buildInnerClient(hostAddress: string, options: Partial<WorkflowClientOptions>): TaskHubGrpcClient {
let innerOptions = options?.grpcOptions;
if (options.daprApiToken !== undefined && options.daprApiToken !== "") {
innerOptions = {
...innerOptions,
interceptors: [generateApiTokenClientInterceptors(options), ...(innerOptions?.interceptors ?? [])],
};
}
return new TaskHubGrpcClient(hostAddress, innerOptions);
}
/**
* Schedule a new workflow using the DurableTask client.
*/
public async scheduleNewWorkflow(
workflow: TWorkflow | string,
input?: any,
instanceId?: string,
startAt?: Date,
): Promise<string> {
if (typeof workflow === "string") {
return await this._innerClient.scheduleNewOrchestration(workflow, input, instanceId, startAt);
}
return await this._innerClient.scheduleNewOrchestration(getFunctionName(workflow), input, instanceId, startAt);
}
/**
* Terminate the workflow associated with the provided instance id.
*
* @param {string} workflowInstanceId - Workflow instance id to terminate.
* @param {any} output - The optional output to set for the terminated workflow instance.
*/
public async terminateWorkflow(workflowInstanceId: string, output: any) {
await this._innerClient.terminateOrchestration(workflowInstanceId, output);
}
/**
* Fetch workflow instance metadata from the configured durable store.
*/
public async getWorkflowState(
workflowInstanceId: string,
getInputsAndOutputs: boolean,
): Promise<WorkflowState | undefined> {
const state = await this._innerClient.getOrchestrationState(workflowInstanceId, getInputsAndOutputs);
if (state !== undefined) {
return new WorkflowState(state);
}
}
/**
* Waits for a workflow to start running
*/
public async waitForWorkflowStart(
workflowInstanceId: string,
fetchPayloads = true,
timeoutInSeconds = 60,
): Promise<WorkflowState | undefined> {
const state = await this._innerClient.waitForOrchestrationStart(
workflowInstanceId,
fetchPayloads,
timeoutInSeconds,
);
if (state !== undefined) {
return new WorkflowState(state);
}
}
/**
* Waits for a workflow to complete running
*/
public async waitForWorkflowCompletion(
workflowInstanceId: string,
fetchPayloads = true,
timeoutInSeconds = 60,
): Promise<WorkflowState | undefined> {
const state = await this._innerClient.waitForOrchestrationCompletion(
workflowInstanceId,
fetchPayloads,
timeoutInSeconds,
);
if (state != undefined) {
return new WorkflowState(state);
}
}
/**
* Sends an event notification message to an awaiting workflow instance
*/
public async raiseEvent(workflowInstanceId: string, eventName: string, eventPayload?: any) {
this._innerClient.raiseOrchestrationEvent(workflowInstanceId, eventName, eventPayload);
}
/**
* Purges the workflow instance state from the workflow state store.
*/
public async purgeWorkflow(workflowInstanceId: string): Promise<boolean> {
const purgeResult = await this._innerClient.purgeOrchestration(workflowInstanceId);
if (purgeResult !== undefined) {
return purgeResult.deletedInstanceCount > 0;
}
return false;
}
/**
* Closes the inner DurableTask client and shutdown the GRPC channel.
*/
public async stop() {
await this._innerClient.stop();
}
}
In the following Program.cs example, for a basic ASP.NET order processing application using the .NET SDK, your project code would include:
- A NuGet package called Dapr.Workflow to receive the .NET SDK capabilities.
- A builder with an extension method called AddDaprWorkflow, which allows you to register workflows and workflow activities (tasks that workflows can schedule).
- HTTP API calls:
  - One for submitting a new order.
  - One for checking the status of an existing order.
using Dapr.Workflow;
//...

// Dapr Workflows are registered as part of the service configuration
builder.Services.AddDaprWorkflow(options =>
{
    // Note that it's also possible to register a lambda function as the workflow
    // or activity implementation instead of a class.
    options.RegisterWorkflow<OrderProcessingWorkflow>();

    // These are the activities that get invoked by the workflow(s).
    options.RegisterActivity<NotifyActivity>();
    options.RegisterActivity<ReserveInventoryActivity>();
    options.RegisterActivity<ProcessPaymentActivity>();
});

WebApplication app = builder.Build();

// POST starts new order workflow instance
app.MapPost("/orders", async (DaprWorkflowClient client, [FromBody] OrderPayload orderInfo) =>
{
    if (orderInfo?.Name == null)
    {
        return Results.BadRequest(new
        {
            message = "Order data was missing from the request",
            example = new OrderPayload("Paperclips", 99.95),
        });
    }
    //...
});

// GET fetches state for order workflow to report status
app.MapGet("/orders/{orderId}", async (string orderId, DaprWorkflowClient client) =>
{
    WorkflowState state = await client.GetWorkflowStateAsync(orderId, true);
    if (!state.Exists)
    {
        return Results.NotFound($"No order with ID = '{orderId}' was found.");
    }

    var httpResponsePayload = new
    {
        details = state.ReadInputAs<OrderPayload>(),
        status = state.RuntimeStatus.ToString(),
        result = state.ReadOutputAs<OrderResult>(),
    };
    //...
}).WithName("GetOrderInfoEndpoint");

app.Run();
As in the following example, a hello-world application using the Java SDK and Dapr Workflow would include:
- A Java package called io.dapr.workflows.client to receive the Java SDK client capabilities.
- An import of io.dapr.workflows.Workflow.
- The DemoWorkflow class, which extends Workflow.
- Creating the workflow with input and output.
- API calls. In the example below, these calls start and call the workflow activities.
package io.dapr.examples.workflows;

import com.microsoft.durabletask.CompositeTaskFailedException;
import com.microsoft.durabletask.Task;
import com.microsoft.durabletask.TaskCanceledException;
import io.dapr.workflows.Workflow;
import io.dapr.workflows.WorkflowStub;

import java.time.Duration;
import java.util.Arrays;
import java.util.List;

/**
 * Implementation of the DemoWorkflow for the server side.
 */
public class DemoWorkflow extends Workflow {
  @Override
  public WorkflowStub create() {
    return ctx -> {
      ctx.getLogger().info("Starting Workflow: " + ctx.getName());
      // ...
      ctx.getLogger().info("Calling Activity...");
      var input = new DemoActivityInput("Hello Activity!");
      var output = ctx.callActivity(DemoWorkflowActivity.class.getName(), input, DemoActivityOutput.class).await();
      // ...
    };
  }
}
As in the following example, a hello-world application using the Go SDK and Dapr Workflow would include:
- A Go package called client to receive the Go SDK client capabilities.
- The TestWorkflow method.
- Creating the workflow with input and output.
- API calls. In the example below, these calls start and call the workflow activities.
package main
import (
"context"
"fmt"
"log"
"time"
"github.com/dapr/go-sdk/client"
"github.com/dapr/go-sdk/workflow"
)
var stage = 0
const (
workflowComponent = "dapr"
)
func main() {
w, err := workflow.NewWorker()
if err != nil {
log.Fatal(err)
}
fmt.Println("Worker initialized")
if err := w.RegisterWorkflow(TestWorkflow); err != nil {
log.Fatal(err)
}
fmt.Println("TestWorkflow registered")
if err := w.RegisterActivity(TestActivity); err != nil {
log.Fatal(err)
}
fmt.Println("TestActivity registered")
// Start workflow runner
if err := w.Start(); err != nil {
log.Fatal(err)
}
fmt.Println("runner started")
daprClient, err := client.NewClient()
if err != nil {
log.Fatalf("failed to intialise client: %v", err)
}
defer daprClient.Close()
ctx := context.Background()
// Start workflow test
respStart, err := daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
WorkflowName: "TestWorkflow",
Options: nil,
Input: 1,
SendRawInput: false,
})
if err != nil {
log.Fatalf("failed to start workflow: %v", err)
}
fmt.Printf("workflow started with id: %v\n", respStart.InstanceID)
// Pause workflow test
err = daprClient.PauseWorkflow(ctx, &client.PauseWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to pause workflow: %v", err)
}
respGet, err := daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
if respGet.RuntimeStatus != workflow.StatusSuspended.String() {
log.Fatalf("workflow not paused: %v", respGet.RuntimeStatus)
}
fmt.Printf("workflow paused\n")
// Resume workflow test
err = daprClient.ResumeWorkflow(ctx, &client.ResumeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to resume workflow: %v", err)
}
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
if respGet.RuntimeStatus != workflow.StatusRunning.String() {
log.Fatalf("workflow not running")
}
fmt.Println("workflow resumed")
fmt.Printf("stage: %d\n", stage)
// Raise Event Test
err = daprClient.RaiseEventWorkflow(ctx, &client.RaiseEventWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
EventName: "testEvent",
EventData: "testData",
SendRawData: false,
})
if err != nil {
fmt.Printf("failed to raise event: %v", err)
}
fmt.Println("workflow event raised")
time.Sleep(time.Second) // allow workflow to advance
fmt.Printf("stage: %d\n", stage)
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
fmt.Printf("workflow status: %v\n", respGet.RuntimeStatus)
// Purge workflow test
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to purge workflow: %v", err)
}
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil && respGet != nil {
log.Fatal("failed to purge workflow")
}
fmt.Println("workflow purged")
fmt.Printf("stage: %d\n", stage)
// Terminate workflow test
respStart, err = daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
WorkflowName: "TestWorkflow",
Options: nil,
Input: 1,
SendRawInput: false,
})
if err != nil {
log.Fatalf("failed to start workflow: %v", err)
}
fmt.Printf("workflow started with id: %s\n", respStart.InstanceID)
err = daprClient.TerminateWorkflow(ctx, &client.TerminateWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to terminate workflow: %v", err)
}
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
if respGet.RuntimeStatus != workflow.StatusTerminated.String() {
log.Fatal("failed to terminate workflow")
}
fmt.Println("workflow terminated")
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err == nil || respGet != nil {
log.Fatalf("failed to purge workflow: %v", err)
}
fmt.Println("workflow purged")
stage = 0
fmt.Println("workflow client test")
wfClient, err := workflow.NewClient()
if err != nil {
log.Fatalf("[wfclient] faield to initialize: %v", err)
}
id, err := wfClient.ScheduleNewWorkflow(ctx, "TestWorkflow", workflow.WithInstanceID("a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9"), workflow.WithInput(1))
if err != nil {
log.Fatalf("[wfclient] failed to start workflow: %v", err)
}
fmt.Printf("[wfclient] started workflow with id: %s\n", id)
metadata, err := wfClient.FetchWorkflowMetadata(ctx, id)
if err != nil {
log.Fatalf("[wfclient] failed to get worfklow: %v", err)
}
fmt.Printf("[wfclient] workflow status: %v\n", metadata.RuntimeStatus.String())
if stage != 1 {
log.Fatalf("Workflow assertion failed while validating the wfclient. Stage 1 expected, current: %d", stage)
}
fmt.Printf("[wfclient] stage: %d\n", stage)
// raise event
if err := wfClient.RaiseEvent(ctx, id, "testEvent", workflow.WithEventPayload("testData")); err != nil {
log.Fatalf("[wfclient] failed to raise event: %v", err)
}
fmt.Println("[wfclient] event raised")
// Sleep to allow the workflow to advance
time.Sleep(time.Second)
if stage != 2 {
log.Fatalf("Workflow assertion failed while validating the wfclient. Stage 2 expected, current: %d", stage)
}
fmt.Printf("[wfclient] stage: %d\n", stage)
// stop workflow
if err := wfClient.TerminateWorkflow(ctx, id); err != nil {
log.Fatalf("[wfclient] failed to terminate workflow: %v", err)
}
fmt.Println("[wfclient] workflow terminated")
if err := wfClient.PurgeWorkflow(ctx, id); err != nil {
log.Fatalf("[wfclient] failed to purge workflow: %v", err)
}
fmt.Println("[wfclient] workflow purged")
// stop workflow runtime
if err := w.Shutdown(); err != nil {
log.Fatalf("failed to shutdown runtime: %v", err)
}
fmt.Println("workflow worker successfully shutdown")
}
func TestWorkflow(ctx *workflow.WorkflowContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return nil, err
}
var output string
if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil {
return nil, err
}
err := ctx.WaitForExternalEvent("testEvent", time.Second*60).Await(&output)
if err != nil {
return nil, err
}
if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil {
return nil, err
}
return output, nil
}
func TestActivity(ctx workflow.ActivityContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return "", err
}
stage += input
return fmt.Sprintf("Stage: %d", stage), nil
}
Important
Because of how replay-based workflows execute, you’ll write logic that does things like I/O and interacting with systems inside activities. Meanwhile, the workflow method is just for orchestrating those activities.
Next steps
Now that you’ve authored a workflow, learn how to manage it.
Manage workflows >>
Related links
- Workflow overview
- Workflow API reference
- Try out the full SDK examples:
1.3.6 - How to: Manage workflows
Now that you’ve authored the workflow and its activities in your application, you can start, terminate, and get information about the workflow using HTTP API calls. For more information, read the workflow API reference.
Manage your workflow within your code. In the workflow example from the Author a workflow guide, the workflow is registered in the code using the following APIs:
- schedule_new_workflow: Start an instance of a workflow
- get_workflow_state: Get information on the status of the workflow
- pause_workflow: Pauses or suspends a workflow instance that can later be resumed
- resume_workflow: Resumes a paused workflow instance
- raise_workflow_event: Raise an event on a workflow
- purge_workflow: Removes all metadata related to a specific workflow instance
- wait_for_workflow_completion: Waits for a workflow instance to complete and returns its final state
from dapr.ext.workflow import DaprWorkflowClient

# Same parameters as in the authoring example; hello_world_wf is the
# workflow registered in the Author a workflow guide
instance_id = 'exampleInstanceID'
workflow_name = 'hello_world_wf'
input_data = 'Hi Counter!'
event_name = 'event1'
event_data = 'eventData'

wf_client = DaprWorkflowClient()

# Start the workflow
wf_client.schedule_new_workflow(
    workflow=hello_world_wf, input=input_data, instance_id=instance_id
)
# Get info on the workflow
wf_client.get_workflow_state(instance_id=instance_id)
# Pause the workflow
wf_client.pause_workflow(instance_id=instance_id)
metadata = wf_client.get_workflow_state(instance_id=instance_id)
# Resume the workflow
wf_client.resume_workflow(instance_id=instance_id)
# Raise an event on the workflow
wf_client.raise_workflow_event(instance_id=instance_id, event_name=event_name, data=event_data)
# Wait for workflow completion
wf_client.wait_for_workflow_completion(instance_id, timeout_in_seconds=30)
# Purge the workflow
wf_client.purge_workflow(instance_id=instance_id)
Manage your workflow within your code. In the workflow example from the Author a workflow guide, the workflow is registered in the code using the following APIs:
- client.workflow.start: Start an instance of a workflow
- client.workflow.get: Get information on the status of the workflow
- client.workflow.pause: Pauses or suspends a workflow instance that can later be resumed
- client.workflow.resume: Resumes a paused workflow instance
- client.workflow.purge: Removes all metadata related to a specific workflow instance
- client.workflow.terminate: Terminate or stop a particular instance of a workflow
import { DaprClient } from "@dapr/dapr";

async function printWorkflowStatus(client: DaprClient, instanceId: string) {
  const workflow = await client.workflow.get(instanceId);
  console.log(
    `Workflow ${workflow.workflowName}, created at ${workflow.createdAt.toUTCString()}, has status ${
      workflow.runtimeStatus
    }`,
  );
  console.log(`Additional properties: ${JSON.stringify(workflow.properties)}`);
  console.log("--------------------------------------------------\n\n");
}

async function start() {
  const client = new DaprClient();

  // Start a new workflow instance
  const instanceId = await client.workflow.start("OrderProcessingWorkflow", {
    Name: "Paperclips",
    TotalCost: 99.95,
    Quantity: 4,
  });
  console.log(`Started workflow instance ${instanceId}`);
  await printWorkflowStatus(client, instanceId);

  // Pause a workflow instance
  await client.workflow.pause(instanceId);
  console.log(`Paused workflow instance ${instanceId}`);
  await printWorkflowStatus(client, instanceId);

  // Resume a workflow instance
  await client.workflow.resume(instanceId);
  console.log(`Resumed workflow instance ${instanceId}`);
  await printWorkflowStatus(client, instanceId);

  // Terminate a workflow instance
  await client.workflow.terminate(instanceId);
  console.log(`Terminated workflow instance ${instanceId}`);
  await printWorkflowStatus(client, instanceId);

  // Wait for the workflow to complete, 30 seconds!
  await new Promise((resolve) => setTimeout(resolve, 30000));
  await printWorkflowStatus(client, instanceId);

  // Purge a workflow instance
  await client.workflow.purge(instanceId);
  console.log(`Purged workflow instance ${instanceId}`);

  // This will throw an error because the workflow instance no longer exists.
  await printWorkflowStatus(client, instanceId);
}

start().catch((e) => {
  console.error(e);
  process.exit(1);
});
Manage your workflow within your code. In the OrderProcessingWorkflow example from the Author a workflow guide, the workflow is registered in the code. You can now start, terminate, and get information about a running workflow:
string orderId = "exampleOrderId";
OrderPayload input = new OrderPayload("Paperclips", 99.95);
Dictionary<string, string>? workflowOptions = null; // This is an optional parameter

// Start the workflow using the orderId as our workflow ID. This returns a string containing the instance ID for the particular workflow instance, whether we provide it ourselves or not.
await daprWorkflowClient.ScheduleNewWorkflowAsync(nameof(OrderProcessingWorkflow), orderId, input, workflowOptions);

// Get information on the workflow. This response contains information such as the status of the workflow, when it started, and more!
WorkflowState currentState = await daprWorkflowClient.GetWorkflowStateAsync(orderId, true);

// Raise an event (an incoming purchase order) that your workflow will wait for
await daprWorkflowClient.RaiseEventAsync(orderId, "incoming-purchase-order", input);

// Pause
await daprWorkflowClient.SuspendWorkflowAsync(orderId);

// Resume
await daprWorkflowClient.ResumeWorkflowAsync(orderId);

// Terminate the workflow
await daprWorkflowClient.TerminateWorkflowAsync(orderId);

// Purge the workflow, removing all inbox and history information from the associated instance
await daprWorkflowClient.PurgeInstanceAsync(orderId);
Manage your workflow within your code. In the workflow example from the Java SDK, the workflow is registered in the code using the following APIs:
- scheduleNewWorkflow: Starts a new workflow instance
- getInstanceState: Get information on the status of the workflow
- waitForInstanceStart: Waits for the workflow instance to start running
- raiseEvent: Raises events/tasks for the running workflow instance
- waitForInstanceCompletion: Waits for the workflow to complete its tasks
- terminateWorkflow: Terminates the workflow
- purgeInstance: Removes all metadata related to a specific workflow instance
package io.dapr.examples.workflows;

import io.dapr.workflows.client.DaprWorkflowClient;
import io.dapr.workflows.client.WorkflowInstanceStatus;

import java.time.Duration;
import java.util.concurrent.TimeoutException;

// ...
public class DemoWorkflowClient {
  // ...
  public static void main(String[] args) throws InterruptedException {
    DaprWorkflowClient client = new DaprWorkflowClient();
    try (client) {
      // Start a workflow
      String instanceId = client.scheduleNewWorkflow(DemoWorkflow.class, "input data");

      // Get status information on the workflow
      WorkflowInstanceStatus workflowMetadata = client.getInstanceState(instanceId, true);

      // Wait for the workflow instance to start
      try {
        WorkflowInstanceStatus waitForInstanceStartResult =
            client.waitForInstanceStart(instanceId, Duration.ofSeconds(60), true);
      } catch (TimeoutException e) {
        // Handle the timeout
      }

      // Raise an event for the workflow; you can raise several events in parallel
      client.raiseEvent(instanceId, "TestEvent", "TestEventPayload");
      client.raiseEvent(instanceId, "event1", "TestEvent 1 Payload");
      client.raiseEvent(instanceId, "event2", "TestEvent 2 Payload");
      client.raiseEvent(instanceId, "event3", "TestEvent 3 Payload");

      // Wait for the workflow to complete running through its tasks
      try {
        WorkflowInstanceStatus waitForInstanceCompletionResult =
            client.waitForInstanceCompletion(instanceId, Duration.ofSeconds(60), true);
      } catch (TimeoutException e) {
        // Handle the timeout
      }

      // Terminate the workflow instance
      client.terminateWorkflow(instanceId, null);

      // Purge the workflow instance, removing all metadata associated with it
      boolean purgeResult = client.purgeInstance(instanceId);

      System.exit(0);
    }
  }
}
Manage your workflow within your code. In the workflow example from the Go SDK, the workflow is registered in the code using the following APIs:
- StartWorkflow: Starts a new workflow instance
- GetWorkflow: Get information on the status of the workflow
- PauseWorkflow: Pauses or suspends a workflow instance that can later be resumed
- RaiseEventWorkflow: Raises events/tasks for the running workflow instance
- ResumeWorkflow: Resumes a paused workflow instance
- PurgeWorkflow: Removes all metadata related to a specific workflow instance
- TerminateWorkflow: Terminates the workflow
// Start workflow
type StartWorkflowRequest struct {
    InstanceID        string // Optional instance identifier
    WorkflowComponent string
    WorkflowName      string
    Options           map[string]string // Optional metadata
    Input             any               // Optional input
    SendRawInput      bool              // Set to True in order to disable serialization on the input
}

type StartWorkflowResponse struct {
    InstanceID string
}

// Get the workflow status
type GetWorkflowRequest struct {
    InstanceID        string
    WorkflowComponent string
}

type GetWorkflowResponse struct {
    InstanceID    string
    WorkflowName  string
    CreatedAt     time.Time
    LastUpdatedAt time.Time
    RuntimeStatus string
    Properties    map[string]string
}

// Purge workflow
type PurgeWorkflowRequest struct {
    InstanceID        string
    WorkflowComponent string
}

// Terminate workflow
type TerminateWorkflowRequest struct {
    InstanceID        string
    WorkflowComponent string
}

// Pause workflow
type PauseWorkflowRequest struct {
    InstanceID        string
    WorkflowComponent string
}

// Resume workflow
type ResumeWorkflowRequest struct {
    InstanceID        string
    WorkflowComponent string
}

// Raise an event for the running workflow
type RaiseEventWorkflowRequest struct {
    InstanceID        string
    WorkflowComponent string
    EventName         string
    EventData         any
    SendRawData       bool // Set to True in order to disable serialization on the data
}
Manage your workflow using HTTP calls. The example below plugs in the properties from the Author a workflow example with a random instance ID number.
Start workflow
To start your workflow with an ID 12345678, run:
curl -X POST "http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678"
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
Terminate workflow
To terminate your workflow with an ID 12345678, run:
curl -X POST "http://localhost:3500/v1.0/workflows/dapr/12345678/terminate"
Raise an event
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following “raise event” API to deliver a named event to a specific workflow instance.
curl -X POST "http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>"
An eventName can be any arbitrary string.
Pause or resume a workflow
To plan for down-time, wait for inputs, and more, you can pause and then resume a workflow. To pause a workflow with an ID 12345678 until triggered to resume, run:
curl -X POST "http://localhost:3500/v1.0/workflows/dapr/12345678/pause"
To resume a workflow with an ID 12345678, run:
curl -X POST "http://localhost:3500/v1.0/workflows/dapr/12345678/resume"
Purge a workflow
The purge API can be used to permanently delete workflow metadata from the underlying state store, including any stored inputs, outputs, and workflow history records. This is often useful for implementing data retention policies and for freeing resources.
Only workflow instances in the COMPLETED, FAILED, or TERMINATED state can be purged. If the workflow is in any other state, calling purge returns an error.
curl -X POST "http://localhost:3500/v1.0/workflows/dapr/12345678/purge"
Get information about a workflow
To fetch workflow information (outputs and inputs) with an ID 12345678, run:
curl -X GET "http://localhost:3500/v1.0/workflows/dapr/12345678"
Learn more about these HTTP calls in the workflow API reference guide.
Next steps
Try out the full SDK examples:
1.4 - State management
More about Dapr State Management
Learn more about how to use Dapr State Management:
- Try the State Management quickstart.
- Explore state management via any of the supporting Dapr SDKs.
- Review the State Management API reference documentation.
- Browse the supported state management component specs.
1.4.1 - State management overview
Your application can use Dapr’s state management API to save, read, and query key/value pairs in the supported state stores. Using a state store component, you can build stateful, long running applications that save and retrieve their state (like a shopping cart or a game’s session state). For example, you can:
- Use HTTP POST to save or query key/value pairs.
- Use HTTP GET to read a specific key and have its value returned.
The following overview video and demo demonstrate how Dapr state management works.
Features
With the state management API building block, your application can leverage features that are typically complicated and error-prone to build, including:
- Setting the choices on concurrency control and data consistency.
- Performing bulk CRUD update operations, including multiple transactional operations.
- Querying and filtering the key/value data.
These are the features available as part of the state management API:
Pluggable state stores
Dapr data stores are modeled as components, which can be swapped out without any changes to your service code. See supported state stores to see the list.
Configurable state store behaviors
With Dapr, you can include additional metadata in a state operation request that describes how you expect the request to be handled. You can attach:
- Concurrency requirements
- Consistency requirements
By default, your application should assume a data store is eventually consistent and uses a last-write-wins concurrency pattern.
Not all stores are created equal. To ensure your application’s portability, you can query the metadata capabilities of the store and make your code adaptive to different store capabilities.
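For example, the sidecar’s metadata API lists every registered component along with its declared capabilities (such as ETAG, TRANSACTIONAL, TTL, or QUERY_API), which your code can inspect at startup. A quick check from the command line (assuming the default sidecar port):
curl http://localhost:3500/v1.0/metadata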
Concurrency
Dapr supports Optimistic Concurrency Control (OCC) using ETags. When a state value is requested, Dapr always attaches an ETag property to the returned state. When the user code:
- Updates a state, it’s expected to attach the ETag through the request body.
- Deletes a state, it’s expected to attach the ETag through the If-Match header.
The write operation succeeds when the provided ETag matches the ETag in the state store.
Why Dapr chooses optimistic concurrency control (OCC)
Data update conflicts are rare in many applications, since clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, mismatched ETags may cause a request rejection. It’s recommended you use a retry policy in your code to compensate for conflicts when using ETags.
If your application omits ETags in writing requests, Dapr skips ETag checks while handling the requests. This enables the last-write-wins pattern, compared to the first-write-wins pattern with ETags.
Note on ETags
For stores that don’t natively support ETags, the corresponding Dapr state store implementation is expected to simulate ETags and follow the Dapr state management API specification when handling states. Since Dapr state store implementations are technically clients to the underlying data store, simulation should be straightforward, using the concurrency control mechanisms provided by the store.
Read the API reference to learn how to set concurrency options.
Consistency
Dapr supports both strong consistency and eventual consistency, with eventual consistency as the default behavior.
- Strong consistency: Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request.
- Eventual consistency: Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
Read the API reference to learn how to set consistency options.
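For instance, a single write can request strong consistency through the options field on the request body (a sketch against the default statestore component):
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order_1", "value": "250", "options": { "consistency": "strong" } }]'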
Setting content type
State store components may maintain and manipulate data differently, depending on the content type. Dapr supports passing content type in state management API as part of request metadata.
Setting the content type is optional, and the component decides whether to make use of it. Dapr only provides the means of passing this information to the component.
- With the HTTP API: Set content type via the URL query parameter metadata.contentType. For example, http://localhost:3500/v1.0/state/store?metadata.contentType=application/json.
- With the gRPC API: Set content type by adding the key/value pair "contentType" : <content type> to the request metadata.
Multiple operations
Dapr supports two types of multi-read or multi-write operations: bulk or transactional. Read the API reference to learn how to use the bulk and multi options.
Bulk read operations
You can group multiple read requests into a bulk (or batch) operation. In the bulk operation, Dapr submits the read requests as individual requests to the underlying data store, and returns them as a single result.
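A sketch of a bulk read over the HTTP API, assuming the default statestore component; the optional parallelism field caps how many requests Dapr sends to the store at once:
curl -X POST http://localhost:3500/v1.0/state/statestore/bulk \
  -H "Content-Type: application/json" \
  -d '{ "keys": ["order_1", "order_2"], "parallelism": 10 }'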
Transactional operations
You can group write, update, and delete operations into a request, which are then handled as an atomic transaction. The request will succeed or fail as a transactional set of operations.
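A sketch of a transactional request over the HTTP API (assuming the statestore component supports transactions); both operations commit together or not at all:
curl -X POST http://localhost:3500/v1.0/state/statestore/transaction \
  -H "Content-Type: application/json" \
  -d '{
        "operations": [
          { "operation": "upsert", "request": { "key": "order_1", "value": "250" } },
          { "operation": "delete", "request": { "key": "order_2" } }
        ]
      }'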
Actor state
Transactional state stores can be used to store actor state. To specify which state store to use for actors, set the value of the property actorStateStore to true in the state store component’s metadata section. Actor state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the state API reference and the actors API reference to learn more about state stores for actors.
Time to Live (TTL) on actor state
You should always set the TTL metadata field (ttlInSeconds), or the equivalent API call in your chosen SDK, when saving actor state to ensure that state is eventually removed. Read the actors overview for more information.
State encryption
Dapr supports automatic client encryption of application state with support for key rotations. This is supported on all Dapr state stores. For more info, read the How-To: Encrypt application state topic.
Shared state between applications
Different applications’ needs vary when it comes to sharing state. In one scenario, you may want to encapsulate all state within a given application and have Dapr manage the access for you. In another scenario, you may want two applications working on the same state to get and save the same keys.
Dapr enables states to be:
- Isolated to an application.
- Shared in a state store between applications.
- Shared between multiple applications across different state stores.
For more details read How-To: Share state between applications.
Enabling the outbox pattern
Dapr enables developers to use the outbox pattern for achieving a single transaction across a transactional state store and any message broker. For more information, read How to enable transactional outbox messaging.
Querying state
There are two ways to query the state:
- Using the state management query API provided in Dapr runtime.
- Querying state store directly with the store’s native SDK.
Query API
Using the optional state management query API, you can query the key/value data saved in state stores, regardless of underlying database or storage technology. With the state management query API, you can filter, sort, and paginate the key/value data. For more details read How-To: Query state.
Querying state store directly
Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the underlying state store. For example, to get all state keys associated with an application ID “myApp” in Redis, use:
KEYS "myApp*"
Note on direct queries
Since you aren’t calling through the Dapr runtime, direct queries of the state store are not governed by Dapr concurrency control. What you see are snapshots of committed data, acceptable for read-only queries across multiple actors. Writes should be done via the Dapr state management or actors APIs.
Querying actor state
If the data store supports SQL queries, you can query an actor’s state using SQL queries. For example:
SELECT * FROM StateTable WHERE Id='<app-id>||<actor-type>||<actor-id>||<key>'
You can also avoid the common turn-based concurrency limitations of actor frameworks by performing aggregate queries across actor instances. For example, to calculate the average temperature of all thermometer actors, use:
SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||temperature'
State Time-to-Live (TTL)
Dapr enables per-request time-to-live (TTL) on state set operations. This means that applications can set a TTL on each stored state item, and those states cannot be retrieved after expiration.
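A sketch of setting a TTL on a single write via request metadata (assuming the statestore component supports TTL); after 120 seconds, the key can no longer be retrieved:
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order_1", "value": "250", "metadata": { "ttlInSeconds": "120" } }]'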
State management API
The state management API can be found in the state management API reference, which describes how to retrieve, save, delete, and query state values by providing keys.
Try out state management
Quickstarts and tutorials
Want to put the Dapr state management API to the test? Walk through the following quickstart and tutorials to see state management in action:
| Quickstart/tutorial | Description |
| --- | --- |
| State management quickstart | Create stateful applications using the state management API. |
| Hello World | Recommended. Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
| Hello World Kubernetes | Recommended. Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
Start using state management directly in your app
Want to skip the quickstarts? Not a problem. You can try out the state management building block directly in your application. After Dapr is installed, you can begin using the state management API starting with the state management how-to guide.
Next steps
- Start working through the state management how-to guides, starting with:
- Review the list of state store components
- Read the state management API reference
- Read the actors API reference
1.4.2 - How-To: Save and get state
State management is one of the most common needs of any new, legacy, monolith, or microservice application. Dealing with and testing different database libraries and handling retries and faults can be both difficult and time consuming.
In this guide, you’ll learn the basics of using the key/value state API to allow an application to save, get, and delete state.
The code example below loosely describes an application that processes orders with an order processing service which has a Dapr sidecar. The order processing service uses Dapr to store state in a Redis state store.
Set up a state store
A state store component represents a resource that Dapr uses to communicate with a database.
For the purpose of this guide we’ll use a Redis state store, but any state store from the supported list will work.
When you run dapr init in self-hosted mode, Dapr creates a default Redis statestore.yaml and runs a Redis state store on your local machine, located:
- On Windows, under %UserProfile%\.dapr\components\statestore.yaml
- On Linux/MacOS, under ~/.dapr/components/statestore.yaml
With the statestore.yaml component, you can easily swap out underlying components without application code changes.
To deploy this into a Kubernetes cluster, fill in the metadata connection details of your state store component in the YAML below, save as statestore.yaml, and run kubectl apply -f statestore.yaml.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
Important
Set an app-id, as the state keys are prefixed with this value. If you don’t set an app-id, one is generated for you at runtime. The next time you run the command, a new app-id is generated and you will no longer have access to the previously saved state.
Save and retrieve a single state
The following example shows how to save and retrieve a single key/value pair using the Dapr state management API.
using System.Text;
using System.Threading.Tasks;
using Dapr.Client;

// Name of the state store component (added here so the sample compiles)
const string DAPR_STORE_NAME = "statestore";

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();

var random = new Random();

//Resolve the DaprClient from its dependency injection registration
using var client = app.Services.GetRequiredService<DaprClient>();

while (true)
{
    await Task.Delay(TimeSpan.FromSeconds(5));
    var orderId = random.Next(1, 1000);

    //Using Dapr SDK to save and get state
    await client.SaveStateAsync(DAPR_STORE_NAME, "order_1", orderId.ToString());
    await client.SaveStateAsync(DAPR_STORE_NAME, "order_2", orderId.ToString());
    var result = await client.GetStateAsync<string>(DAPR_STORE_NAME, "order_1");
    Console.WriteLine($"Result after get: {result}");
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run
//dependencies
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.State;
import io.dapr.client.domain.TransactionalStateOperation;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
import java.util.Random;
import java.util.concurrent.TimeUnit;

//code
@SpringBootApplication
public class OrderProcessingServiceApplication {
    private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
    private static final String STATE_STORE_NAME = "statestore";

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            TimeUnit.MILLISECONDS.sleep(5000);
            Random random = new Random();
            int orderId = random.nextInt(1000 - 1) + 1;
            DaprClient client = new DaprClientBuilder().build();
            //Using Dapr SDK to save and get state
            client.saveState(STATE_STORE_NAME, "order_1", Integer.toString(orderId)).block();
            client.saveState(STATE_STORE_NAME, "order_2", Integer.toString(orderId)).block();
            Mono<State<String>> result = client.getState(STATE_STORE_NAME, "order_1", String.class);
            log.info("Result after get: " + result.block().getValue());
        }
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 mvn spring-boot:run
#dependencies
import random
from time import sleep
import requests
import logging
from dapr.clients import DaprClient
from dapr.clients.grpc._state import StateItem
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType

#code
logging.basicConfig(level=logging.INFO)
DAPR_STORE_NAME = "statestore"

while True:
    sleep(random.randrange(50, 5000) / 1000)
    orderId = random.randint(1, 1000)
    with DaprClient() as client:
        #Using Dapr SDK to save and get state
        client.save_state(DAPR_STORE_NAME, "order_1", str(orderId))
        result = client.get_state(DAPR_STORE_NAME, "order_1")
        logging.info('Result after get: ' + result.data.decode('utf-8'))
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 -- python3 OrderProcessingService.py
// dependencies
package main

import (
    "context"
    "log"
    "math/rand"
    "strconv"
    "time"

    dapr "github.com/dapr/go-sdk/client"
)

// code
func main() {
    const STATE_STORE_NAME = "statestore"
    rand.Seed(time.Now().UnixMicro())

    client, err := dapr.NewClient()
    if err != nil {
        panic(err)
    }
    defer client.Close()
    ctx := context.Background()

    for i := 0; i < 10; i++ {
        orderId := rand.Intn(1000-1) + 1
        //Using Dapr SDK to save and get state
        err = client.SaveState(ctx, STATE_STORE_NAME, "order_1", []byte(strconv.Itoa(orderId)), nil)
        if err != nil {
            panic(err)
        }
        result, err := client.GetState(ctx, STATE_STORE_NAME, "order_1", nil)
        if err != nil {
            panic(err)
        }
        log.Println("Result after get:", string(result.Value))
        time.Sleep(2 * time.Second)
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go
//dependencies
import { DaprClient, CommunicationProtocolEnum } from '@dapr/dapr';

//code
const daprHost = "127.0.0.1";

async function main() {
    for (let i = 0; i < 10; i++) {
        await sleep(5000);
        const orderId = Math.floor(Math.random() * (1000 - 1) + 1);
        await start(orderId).catch((e) => {
            console.error(e);
            process.exit(1);
        });
    }
}

async function start(orderId) {
    const client = new DaprClient({
        daprHost,
        daprPort: process.env.DAPR_HTTP_PORT,
        communicationProtocol: CommunicationProtocolEnum.HTTP,
    });
    const STATE_STORE_NAME = "statestore";
    //Using Dapr SDK to save and get state
    await client.state.save(STATE_STORE_NAME, [
        {
            key: "order_1",
            value: orderId.toString()
        },
        {
            key: "order_2",
            value: orderId.toString()
        }
    ]);
    const result = await client.state.get(STATE_STORE_NAME, "order_1");
    console.log("Result after get: " + result);
}

function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

main();
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 npm start
Launch a Dapr sidecar:
dapr run --app-id orderprocessing --dapr-http-port 3601
In a separate terminal, save a key/value pair into your statestore:
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "order_1", "value": "250"}]' http://localhost:3601/v1.0/state/statestore
Now get the state you just saved:
curl http://localhost:3601/v1.0/state/statestore/order_1
Restart your sidecar and try retrieving state again to observe that state persists separately from the app.
Launch a Dapr sidecar:
dapr run --app-id orderprocessing --dapr-http-port 3601
In a separate terminal, save a key/value pair into your statestore:
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{"key": "order_1", "value": "250"}]' -Uri 'http://localhost:3601/v1.0/state/statestore'
Now get the state you just saved:
Invoke-RestMethod -Uri 'http://localhost:3601/v1.0/state/statestore/order_1'
Restart your sidecar and try retrieving state again to observe that state persists separately from the app.
Delete state
Below are code examples that leverage Dapr SDKs for deleting the state.
using Dapr.Client;
using System.Threading.Tasks;

const string DAPR_STORE_NAME = "statestore";

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();

//Resolve the DaprClient from the dependency injection registration
using var client = app.Services.GetRequiredService<DaprClient>();

//Use the DaprClient to delete the state
await client.DeleteStateAsync(DAPR_STORE_NAME, "order_1");
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run
//dependencies
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import org.springframework.boot.autoconfigure.SpringBootApplication;
//code
@SpringBootApplication
public class OrderProcessingServiceApplication {
    public static void main(String[] args) throws InterruptedException {
        String STATE_STORE_NAME = "statestore";
        //Using Dapr SDK to delete the state
        DaprClient client = new DaprClientBuilder().build();
        String storedEtag = client.getState(STATE_STORE_NAME, "order_1", String.class).block().getEtag();
        client.deleteState(STATE_STORE_NAME, "order_1", storedEtag, null).block();
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 mvn spring-boot:run
#dependencies
import logging
from dapr.clients import DaprClient

#code
logging.basicConfig(level=logging.INFO)
DAPR_STORE_NAME = "statestore"

#Using Dapr SDK to delete the state
with DaprClient() as client:
    client.delete_state(store_name=DAPR_STORE_NAME, key="order_1")
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 -- python3 OrderProcessingService.py
//dependencies
package main

import (
    "context"

    dapr "github.com/dapr/go-sdk/client"
)

//code
func main() {
    STATE_STORE_NAME := "statestore"
    //Using Dapr SDK to delete the state
    client, err := dapr.NewClient()
    if err != nil {
        panic(err)
    }
    defer client.Close()
    ctx := context.Background()

    if err := client.DeleteState(ctx, STATE_STORE_NAME, "order_1"); err != nil {
        panic(err)
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go
//dependencies
import { DaprClient, CommunicationProtocolEnum } from '@dapr/dapr';

//code
const daprHost = "127.0.0.1";

async function main() {
    const STATE_STORE_NAME = "statestore";
    //Using Dapr SDK to delete the state
    const client = new DaprClient({
        daprHost,
        daprPort: process.env.DAPR_HTTP_PORT,
        communicationProtocol: CommunicationProtocolEnum.HTTP,
    });
    await client.state.delete(STATE_STORE_NAME, "order_1");
}

main();
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 npm start
With the same Dapr instance running from above, run:
curl -X DELETE 'http://localhost:3601/v1.0/state/statestore/order_1'
Try getting state again. Note that no value is returned.
With the same Dapr instance running from above, run:
Invoke-RestMethod -Method Delete -Uri 'http://localhost:3601/v1.0/state/statestore/order_1'
Try getting state again. Note that no value is returned.
Save and retrieve multiple states
Below are code examples that leverage Dapr SDKs for saving and retrieving multiple states.
using Dapr.Client;
using System.Threading.Tasks;
const string DAPR_STORE_NAME = "statestore";
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();
//Resolve the DaprClient from the dependency injection registration
using var client = app.Services.GetRequiredService<DaprClient>();
IReadOnlyList<BulkStateItem> multipleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1);
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run
The above example returns a BulkStateItem with the serialized format of the value you saved to state. If you prefer that the value be deserialized by the SDK across each of your bulk response items, you can instead use the following:
using Dapr.Client;
using System.Threading.Tasks;

const string DAPR_STORE_NAME = "statestore";

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();

//Resolve the DaprClient from the dependency injection registration
using var client = app.Services.GetRequiredService<DaprClient>();

IReadOnlyList<BulkStateItem<Widget>> multipleStateResult = await client.GetBulkStateAsync<Widget>(DAPR_STORE_NAME, new List<string> { "widget_1", "widget_2" }, parallelism: 1);

record Widget(string Size, string Color);
//dependencies
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.State;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
import java.util.Arrays;
import java.util.List;

//code
@SpringBootApplication
public class OrderProcessingServiceApplication {
    private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);

    public static void main(String[] args) throws InterruptedException {
        String STATE_STORE_NAME = "statestore";
        //Using Dapr SDK to retrieve multiple states
        DaprClient client = new DaprClientBuilder().build();
        Mono<List<State<String>>> resultBulk = client.getBulkState(STATE_STORE_NAME,
                Arrays.asList("order_1", "order_2"), String.class);
        log.info("Result after bulk get: " + resultBulk.block());
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 mvn spring-boot:run
#dependencies
import logging
from dapr.clients import DaprClient
from dapr.clients.grpc._state import StateItem

#code
logging.basicConfig(level=logging.INFO)
DAPR_STORE_NAME = "statestore"
orderId = 100

#Using Dapr SDK to save and retrieve multiple states
with DaprClient() as client:
    client.save_bulk_state(store_name=DAPR_STORE_NAME, states=[StateItem(key="order_2", value=str(orderId))])
    result = client.get_bulk_state(store_name=DAPR_STORE_NAME, keys=["order_1", "order_2"], states_metadata={"metakey": "metavalue"}).items
    logging.info('Result after get bulk: ' + str(result))
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 -- python3 OrderProcessingService.py
// dependencies
package main

import (
    "context"
    "log"
    "math/rand"
    "strconv"
    "time"

    dapr "github.com/dapr/go-sdk/client"
)

// code
func main() {
    const STATE_STORE_NAME = "statestore"
    rand.Seed(time.Now().UnixMicro())

    client, err := dapr.NewClient()
    if err != nil {
        panic(err)
    }
    defer client.Close()
    ctx := context.Background()

    for i := 0; i < 10; i++ {
        orderId := rand.Intn(1000-1) + 1
        err = client.SaveState(ctx, STATE_STORE_NAME, "order_1", []byte(strconv.Itoa(orderId)), nil)
        if err != nil {
            panic(err)
        }
        //Using Dapr SDK to retrieve multiple states in one call
        keys := []string{"order_1", "order_2"}
        items, err := client.GetBulkState(ctx, STATE_STORE_NAME, keys, nil, 100)
        if err != nil {
            panic(err)
        }
        for _, item := range items {
            log.Println("Item from GetBulkState:", string(item.Value))
        }
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go
//dependencies
import { DaprClient, CommunicationProtocolEnum } from '@dapr/dapr';

//code
const daprHost = "127.0.0.1";

async function main() {
    const STATE_STORE_NAME = "statestore";
    const orderId = 100;
    //Using Dapr SDK to save and retrieve multiple states
    const client = new DaprClient({
        daprHost,
        daprPort: process.env.DAPR_HTTP_PORT,
        communicationProtocol: CommunicationProtocolEnum.HTTP,
    });
    await client.state.save(STATE_STORE_NAME, [
        {
            key: "order_1",
            value: orderId.toString()
        },
        {
            key: "order_2",
            value: orderId.toString()
        }
    ]);
    const result = await client.state.getBulk(STATE_STORE_NAME, ["order_1", "order_2"]);
    console.log("Result after bulk get: " + JSON.stringify(result));
}

main();
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 npm start
With the same Dapr instance running from above, save two key/value pairs into your statestore:
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "order_1", "value": "250"}, { "key": "order_2", "value": "550"}]' http://localhost:3601/v1.0/state/statestore
Now get the states you just saved:
curl -X POST -H "Content-Type: application/json" -d '{"keys":["order_1", "order_2"]}' http://localhost:3601/v1.0/state/statestore/bulk
With the same Dapr instance running from above, save two key/value pairs into your statestore:
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "order_1", "value": "250"}, { "key": "order_2", "value": "550"}]' -Uri 'http://localhost:3601/v1.0/state/statestore'
Now get the states you just saved:
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"keys":["order_1", "order_2"]}' -Uri 'http://localhost:3601/v1.0/state/statestore/bulk'
Perform state transactions
Note
State transactions require a state store that supports multi-item transactions. See the supported state stores page for a full list.
Below are code examples that leverage Dapr SDKs for performing state transactions.
using System.Text.Json;
using Dapr.Client;
using System.Threading.Tasks;

const string DAPR_STORE_NAME = "statestore";

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();

//Resolve the DaprClient from the dependency injection registration
using var client = app.Services.GetRequiredService<DaprClient>();

var random = new Random();
while (true)
{
    await Task.Delay(TimeSpan.FromSeconds(5));
    var orderId = random.Next(1, 1000);
    var requests = new List<StateTransactionRequest>
    {
        new StateTransactionRequest("order_3", JsonSerializer.SerializeToUtf8Bytes(orderId.ToString()), StateOperationType.Upsert),
        new StateTransactionRequest("order_2", null, StateOperationType.Delete)
    };
    var cancellationTokenSource = new CancellationTokenSource();
    var cancellationToken = cancellationTokenSource.Token;
    //Use the DaprClient to perform the state transactions
    await client.ExecuteStateTransactionAsync(DAPR_STORE_NAME, requests, cancellationToken: cancellationToken);
    Console.WriteLine($"Order requested: {orderId}");
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run
//dependencies
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.State;
import io.dapr.client.domain.TransactionalStateOperation;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;
//code
@SpringBootApplication
public class OrderProcessingServiceApplication {
    private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
    private static final String STATE_STORE_NAME = "statestore";

    public static void main(String[] args) throws InterruptedException {
        DaprClient client = new DaprClientBuilder().build();
        Random random = new Random();
        while (true) {
            TimeUnit.MILLISECONDS.sleep(5000);
            int orderId = random.nextInt(1000 - 1) + 1;
            List<TransactionalStateOperation<?>> operationList = new ArrayList<>();
            operationList.add(new TransactionalStateOperation<>(TransactionalStateOperation.OperationType.UPSERT,
                    new State<>("order_3", Integer.toString(orderId), "")));
            operationList.add(new TransactionalStateOperation<>(TransactionalStateOperation.OperationType.DELETE,
                    new State<>("order_2")));
            //Using Dapr SDK to perform the state transactions
            client.executeStateTransaction(STATE_STORE_NAME, operationList).block();
            log.info("Order requested: " + orderId);
        }
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 mvn spring-boot:run
#dependencies
import random
from time import sleep
import logging
from dapr.clients import DaprClient
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType

#code
logging.basicConfig(level=logging.INFO)
DAPR_STORE_NAME = "statestore"

with DaprClient() as client:
    while True:
        sleep(random.randrange(50, 5000) / 1000)
        orderId = random.randint(1, 1000)
        #Using Dapr SDK to perform the state transactions
        client.execute_state_transaction(store_name=DAPR_STORE_NAME, operations=[
            TransactionalStateOperation(
                operation_type=TransactionOperationType.upsert,
                key="order_3",
                data=str(orderId)),
            TransactionalStateOperation(
                operation_type=TransactionOperationType.delete,
                key="order_2",
                data=str(orderId))
        ])
        client.delete_state(store_name=DAPR_STORE_NAME, key="order_1")
        logging.info('Order requested: ' + str(orderId))
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 -- python3 OrderProcessingService.py
// dependencies
package main
import (
"context"
"log"
"math/rand"
"strconv"
"time"
dapr "github.com/dapr/go-sdk/client"
)
// code
func main() {
    const STATE_STORE_NAME = "statestore"
    rand.Seed(time.Now().UnixMicro())

    client, err := dapr.NewClient()
    if err != nil {
        panic(err)
    }
    defer client.Close()
    ctx := context.Background()

    for i := 0; i < 10; i++ {
        orderId := rand.Intn(1000-1) + 1
        err = client.SaveState(ctx, STATE_STORE_NAME, "order_1", []byte(strconv.Itoa(orderId)), nil)
        if err != nil {
            panic(err)
        }
        result, err := client.GetState(ctx, STATE_STORE_NAME, "order_1", nil)
        if err != nil {
            panic(err)
        }
        ops := make([]*dapr.StateOperation, 0)
        data1 := "data1"
        data2 := "data2"
        op1 := &dapr.StateOperation{
            Type: dapr.StateOperationTypeUpsert,
            Item: &dapr.SetStateItem{
                Key:   "key1",
                Value: []byte(data1),
            },
        }
        op2 := &dapr.StateOperation{
            Type: dapr.StateOperationTypeDelete,
            Item: &dapr.SetStateItem{
                Key:   "key2",
                Value: []byte(data2),
            },
        }
        ops = append(ops, op1, op2)
        meta := map[string]string{}
        //Using Dapr SDK to perform the state transaction
        err = client.ExecuteStateTransaction(ctx, STATE_STORE_NAME, meta, ops)
        if err != nil {
            panic(err)
        }
        log.Println("Result after get:", string(result.Value))
        time.Sleep(2 * time.Second)
    }
}
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go
//dependencies
import { DaprClient, CommunicationProtocolEnum } from '@dapr/dapr';

//code
const daprHost = "127.0.0.1";

async function main() {
    for (let i = 0; i < 10; i++) {
        await sleep(5000);
        const orderId = Math.floor(Math.random() * (1000 - 1) + 1);
        await start(orderId).catch((e) => {
            console.error(e);
            process.exit(1);
        });
    }
}

async function start(orderId) {
    const client = new DaprClient({
        daprHost,
        daprPort: process.env.DAPR_HTTP_PORT,
        communicationProtocol: CommunicationProtocolEnum.HTTP,
    });
    const STATE_STORE_NAME = "statestore";
    //Using Dapr SDK to perform the state transactions
    await client.state.transaction(STATE_STORE_NAME, [
        {
            operation: "upsert",
            request: {
                key: "order_3",
                value: orderId.toString()
            }
        },
        {
            operation: "delete",
            request: {
                key: "order_2"
            }
        }
    ]);
}

function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

main();
To launch a Dapr sidecar for the above example application, run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 npm start
With the same Dapr instance running from above, perform two state transactions:
curl -X POST -H "Content-Type: application/json" -d '{"operations": [{"operation":"upsert", "request": {"key": "order_1", "value": "250"}}, {"operation":"delete", "request": {"key": "order_2"}}]}' http://localhost:3601/v1.0/state/statestore/transaction
Now see the results of your state transactions:
curl -X POST -H "Content-Type: application/json" -d '{"keys":["order_1", "order_2"]}' http://localhost:3601/v1.0/state/statestore/bulk
With the same Dapr instance running from above, perform two state transactions:
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"operations": [{"operation":"upsert", "request": {"key": "order_1", "value": "250"}}, {"operation":"delete", "request": {"key": "order_2"}}]}' -Uri 'http://localhost:3601/v1.0/state/statestore/transaction'
Now see the results of your state transactions:
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"keys":["order_1", "order_2"]}' -Uri 'http://localhost:3601/v1.0/state/statestore/bulk'
Next steps
- Read the full State API reference
- Try one of the Dapr SDKs
- Build a stateful service
1.4.3 - How-To: Query state
alpha
The state query API is in alpha stage.
With the state query API, you can retrieve, filter, and sort the key/value data stored in state store components. The query API is not a replacement for a complete query language.
Even though the state store is a key/value store, the value might be a JSON document with its own hierarchy, keys, and values. The query API allows you to use those keys/values to retrieve corresponding documents.
Querying the state
Submit query requests via HTTP POST/PUT or gRPC. The body of the request is a JSON map with three entries:
- filter
- sort
- page
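Taken together, a complete request body has the following overall shape (a minimal sketch drawn from the examples later in this section; each entry is described in detail below):
{
  "filter": { "EQ": { "state": "CA" } },
  "sort": [ { "key": "person.id", "order": "DESC" } ],
  "page": { "limit": 3 }
}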
filter
The filter specifies the query conditions in the form of a tree, where each node represents either a unary or a multi-operand operation.
The following operations are supported:
| Operator | Operands | Description |
|---|---|---|
| EQ | key:value | key == value |
| NEQ | key:value | key != value |
| GT | key:value | key > value |
| GTE | key:value | key >= value |
| LT | key:value | key < value |
| LTE | key:value | key <= value |
| IN | key:[]value | key == value[0] OR key == value[1] OR … OR key == value[n] |
| AND | []operation | operation[0] AND operation[1] AND … AND operation[n] |
| OR | []operation | operation[0] OR operation[1] OR … OR operation[n] |
The key in the operand is similar to the JSONPath notation. Each dot in the key indicates a nested JSON structure. For example, consider this structure:
{
  "shape": {
    "name": "rectangle",
    "dimensions": {
      "height": 24,
      "width": 10
    },
    "color": {
      "name": "red",
      "code": "#FF0000"
    }
  }
}
To compare the value of the color code, the key will be shape.color.code.
If the filter section is omitted, the query returns all entries.
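For example, a query with no filter that simply pages through all entries could look like this (an illustrative sketch):
{
  "page": {
    "limit": 10
  }
}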
sort
The sort is an ordered array of key:order pairs, where:
- key is a key in the state store
- order is an optional string indicating sorting order: "ASC" for ascending, "DESC" for descending
If omitted, ascending order is the default.
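For example, a sort section that orders by state in descending order and then by person.id in the default ascending order looks like this:
"sort": [
  { "key": "state", "order": "DESC" },
  { "key": "person.id" }
]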
page
The page contains limit and token parameters, where:
- limit sets the page size
- token is an iteration token returned by the component, used in subsequent queries
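For example, a page section requesting batches of 3 records, resuming from a previously returned token, looks like this:
"page": {
  "limit": 3,
  "token": "3"
}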
Behind the scenes, this query request is translated into the native query language and executed by the state store component.
Example data and query
Let’s look at some real examples, ranging from simple to complex.
As a dataset, consider a collection of employee records containing employee ID, organization, state, and city. Notice that this dataset is an array of key/value pairs, where:
- key is the unique ID
- value is the JSON object with the employee record
To better illustrate functionality, the organization name (org) and employee ID (id) form a nested JSON person object.
Get started by creating an instance of MongoDB, which is your state store.
docker run -d --rm -p 27017:27017 --name mongodb mongo:5
Next, start a Dapr application. Refer to the component configuration file, which instructs Dapr to use MongoDB as its state store.
dapr run --app-id demo --dapr-http-port 3500 --resources-path query-api-examples/components/mongodb
Populate the state store with the employee dataset, so you can query it later.
curl -X POST -H "Content-Type: application/json" -d @query-api-examples/dataset.json http://localhost:3500/v1.0/state/statestore
Once populated, you can examine the data in the state store. In the image below, a section of the MongoDB UI displays employee records.

Each entry has the _id member as a concatenated object key, and the value member containing the JSON record.
The query API allows you to select records from this JSON structure.
Now you can run the example queries.
Example 1
First, find all employees in the state of California and sort them by their employee ID in descending order.
This is the query:
{
  "filter": {
    "EQ": { "state": "CA" }
  },
  "sort": [
    {
      "key": "person.id",
      "order": "DESC"
    }
  ]
}
An equivalent of this query in SQL is:
SELECT * FROM c WHERE
state = "CA"
ORDER BY
person.id DESC
Execute the query with the following command:
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query1.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query1.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
The query result is an array of matching key/value pairs in the requested order:
{
  "results": [
    {
      "key": "3",
      "data": {
        "person": {
          "org": "Finance",
          "id": 1071
        },
        "city": "Sacramento",
        "state": "CA"
      },
      "etag": "44723d41-deb1-4c23-940e-3e6896c3b6f7"
    },
    {
      "key": "7",
      "data": {
        "city": "San Francisco",
        "state": "CA",
        "person": {
          "id": 1015,
          "org": "Dev Ops"
        }
      },
      "etag": "0e69e69f-3dbc-423a-9db8-26767fcd2220"
    },
    {
      "key": "5",
      "data": {
        "state": "CA",
        "person": {
          "org": "Hardware",
          "id": 1007
        },
        "city": "Los Angeles"
      },
      "etag": "f87478fa-e5c5-4be0-afa5-f9f9d75713d8"
    },
    {
      "key": "9",
      "data": {
        "person": {
          "org": "Finance",
          "id": 1002
        },
        "city": "San Diego",
        "state": "CA"
      },
      "etag": "f5cf05cd-fb43-4154-a2ec-445c66d5f2f8"
    }
  ]
}
Example 2
Now, find all employees from the “Dev Ops” and “Hardware” organizations.
This is the query:
{
  "filter": {
    "IN": { "person.org": [ "Dev Ops", "Hardware" ] }
  }
}
An equivalent of this query in SQL is:
SELECT * FROM c WHERE
person.org IN ("Dev Ops", "Hardware")
Execute the query with the following command:
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query2.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query2.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
Similar to the previous example, the result is an array of matching key/value pairs.
Example 3
In this example, find:
- All employees from the “Dev Ops” department.
- Employees from the “Finance” department residing in the states of Washington and California.
In addition, sort the results first by state in descending alphabetical order, then by employee ID in ascending order. Let’s process up to 3 records at a time.
This is the query:
{
  "filter": {
    "OR": [
      {
        "EQ": { "person.org": "Dev Ops" }
      },
      {
        "AND": [
          {
            "EQ": { "person.org": "Finance" }
          },
          {
            "IN": { "state": [ "CA", "WA" ] }
          }
        ]
      }
    ]
  },
  "sort": [
    {
      "key": "state",
      "order": "DESC"
    },
    {
      "key": "person.id"
    }
  ],
  "page": {
    "limit": 3
  }
}
An equivalent of this query in SQL is:
SELECT * FROM c WHERE
person.org = "Dev Ops" OR
(person.org = "Finance" AND state IN ("CA", "WA"))
ORDER BY
state DESC,
person.id ASC
LIMIT 3
Execute the query with the following command:
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query3.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query3.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
Upon successful execution, the state store returns a JSON object with a list of matching records and the pagination token:
{
  "results": [
    {
      "key": "1",
      "data": {
        "person": {
          "org": "Dev Ops",
          "id": 1036
        },
        "city": "Seattle",
        "state": "WA"
      },
      "etag": "6f54ad94-dfb9-46f0-a371-e42d550adb7d"
    },
    {
      "key": "4",
      "data": {
        "person": {
          "org": "Dev Ops",
          "id": 1042
        },
        "city": "Spokane",
        "state": "WA"
      },
      "etag": "7415707b-82ce-44d0-bf15-6dc6305af3b1"
    },
    {
      "key": "10",
      "data": {
        "person": {
          "org": "Dev Ops",
          "id": 1054
        },
        "city": "New York",
        "state": "NY"
      },
      "etag": "26bbba88-9461-48d1-8a35-db07c374e5aa"
    }
  ],
  "token": "3"
}
The pagination token is used “as is” in the subsequent query to get the next batch of records:
{
  "filter": {
    "OR": [
      {
        "EQ": { "person.org": "Dev Ops" }
      },
      {
        "AND": [
          {
            "EQ": { "person.org": "Finance" }
          },
          {
            "IN": { "state": [ "CA", "WA" ] }
          }
        ]
      }
    ]
  },
  "sort": [
    {
      "key": "state",
      "order": "DESC"
    },
    {
      "key": "person.id"
    }
  ],
  "page": {
    "limit": 3,
    "token": "3"
  }
}
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query3-token.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query3-token.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
And the result of this query is:
{
  "results": [
    {
      "key": "9",
      "data": {
        "person": {
          "org": "Finance",
          "id": 1002
        },
        "city": "San Diego",
        "state": "CA"
      },
      "etag": "f5cf05cd-fb43-4154-a2ec-445c66d5f2f8"
    },
    {
      "key": "7",
      "data": {
        "city": "San Francisco",
        "state": "CA",
        "person": {
          "id": 1015,
          "org": "Dev Ops"
        }
      },
      "etag": "0e69e69f-3dbc-423a-9db8-26767fcd2220"
    },
    {
      "key": "3",
      "data": {
        "person": {
          "org": "Finance",
          "id": 1071
        },
        "city": "Sacramento",
        "state": "CA"
      },
      "etag": "44723d41-deb1-4c23-940e-3e6896c3b6f7"
    }
  ],
  "token": "6"
}
That way you can update the pagination token in the query and iterate through the results until no more records are returned.
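For instance, a small client-side loop can resubmit the query with each returned token until the result set comes back empty. The following Python sketch is illustrative only and assumes the query file layout used in the examples above:
import json
import requests

QUERY_URL = "http://localhost:3500/v1.0-alpha1/state/statestore/query"

# Load the base query (filter/sort/page) used in the examples above
with open("query-api-examples/query3.json") as f:
    query = json.load(f)

while True:
    response = requests.post(QUERY_URL, json=query).json()
    results = response.get("results", [])
    if not results:
        break  # no more records to fetch
    for item in results:
        print(item["key"], item["data"])
    token = response.get("token")
    if not token:
        break  # the component returned no continuation token
    # Reuse the token "as is" in the next request
    query["page"]["token"] = token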
Limitations
The state query API has the following limitations:
- To query actor states stored in a state store, you need to use the query API for the specific database. See querying actor state.
- The API does not work with Dapr's encrypted state stores capability. Since the encryption is done by the Dapr runtime and the data is stored encrypted, this effectively prevents server-side querying.
You can find additional information in the related links section.
Related links
- Refer to the query API reference.
- See the state store components that implement query support.
- View the state store query API implementation guide.
- See how to query Redis state store.
1.4.4 - How-To: Build a stateful service
In this article, you’ll learn how to create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models. Consuming the state management API frees developers from difficult state coordination, conflict resolution, and failure handling.
Set up a state store
A state store component represents a resource that Dapr uses to communicate with a database. For the purpose of this guide, we’ll use the default Redis state store.
Using the Dapr CLI
When you run dapr init in self-hosted mode, Dapr creates a default Redis statestore.yaml and runs a Redis state store on your local machine, located:
- On Windows, under %UserProfile%\.dapr\components\statestore.yaml
- On Linux/MacOS, under ~/.dapr/components/statestore.yaml
With the statestore.yaml component, you can easily swap out underlying components without application code changes.
See a list of supported state stores.
Kubernetes
See how to setup different state stores on Kubernetes.
Strong and eventual consistency
Using strong consistency, Dapr makes sure that the underlying state store:
- Returns the response once the data has been written to all replicas.
- Receives an ACK from a quorum before writing or deleting state.
For get requests, Dapr ensures the store returns the most up-to-date data consistently among replicas. The default is eventual consistency, unless specified otherwise in the request to the state API.
The following examples illustrate how to save, get, and delete state using strong consistency. The example is written in Python, but is applicable to any programming language.
Saving state
import requests

store_name = "redis-store"  # name of the state store as specified in the state store component yaml file
dapr_state_url = "http://localhost:3500/v1.0/state/{}".format(store_name)
state_req = [{"key": "key1", "value": "Some Data", "options": {"consistency": "strong"}}]
response = requests.post(dapr_state_url, json=state_req)
Getting state
import requests

store_name = "redis-store"  # name of the state store as specified in the state store component yaml file
dapr_state_url = "http://localhost:3500/v1.0/state/{}".format(store_name)
response = requests.get(dapr_state_url + "/key1", params={"consistency": "strong"})
print(response.headers['ETag'])
Deleting state
import requests

store_name = "redis-store"  # name of the state store as specified in the state store component yaml file
dapr_state_url = "http://localhost:3500/v1.0/state/{}".format(store_name)
response = requests.delete(dapr_state_url + "/key1", params={"consistency": "strong"})
If the concurrency option hasn't been specified, the default is last-write concurrency mode.
First-write-wins and last-write-wins
Dapr allows developers to opt in to two common concurrency patterns when working with data stores:
- First-write-wins: useful in situations where you have multiple instances of an application, all writing to the same key concurrently.
- Last-write-wins: Default mode for Dapr.
Dapr uses version numbers to determine whether a specific key has been updated. You can:
- Retain the version number when reading the data for a key.
- Use the version number during updates such as writes and deletes.
If the version information has changed since the version number was retrieved, an error is thrown, requiring you to perform another read to get the latest version information and state.
Dapr utilizes ETags to determine the state's version number. ETags are returned from state requests in an ETag header. Using ETags, your application knows that a resource has been updated since the last time it checked, because an ETag mismatch produces an error.
The following example shows how to:
- Get an ETag.
- Use the ETag to save state.
- Delete the state.
The following example is written in Python, but is applicable to any programming language.
import requests

store_name = "redis-store"  # name of the state store as specified in the state store component yaml file
dapr_state_url = "http://localhost:3500/v1.0/state/{}".format(store_name)

# Get the state and read its ETag from the response headers
response = requests.get(dapr_state_url + "/key1")
etag = response.headers['ETag']

# Save new state, passing the retrieved ETag and first-write-wins concurrency
new_state = [{"key": "key1", "value": "New Data", "etag": etag, "options": {"concurrency": "first-write"}}]
requests.post(dapr_state_url, json=new_state)

# Delete the state, passing the ETag in the If-Match header
response = requests.delete(dapr_state_url + "/key1", headers={"If-Match": etag})
Handling version mismatch failures
In the following example, you’ll see how to retry a save state operation when the version has changed:
import requests

store_name = "redis-store"  # name of the state store as specified in the state store component yaml file
dapr_state_url = "http://localhost:3500/v1.0/state/{}".format(store_name)

# This method saves the state and returns False if it fails to save
def save_state(data):
    try:
        response = requests.post(dapr_state_url, json=data)
        return response.ok
    except requests.RequestException:
        return False

# This method gets the state and returns the response, with the ETag in the header
def get_state(key):
    response = requests.get("{}/{}".format(dapr_state_url, key))
    return response

# Exit when the save state call is successful; it fails on an ETag mismatch
success = False
while not success:
    response = get_state("key1")
    etag = response.headers['ETag']
    new_state = [{"key": "key1", "value": "New Data", "etag": etag, "options": {"concurrency": "first-write"}}]
    success = save_state(new_state)
1.4.5 - How-To: Enable the transactional outbox pattern
The transactional outbox pattern is a well-known design pattern for sending notifications regarding changes in an application's state. It uses a single transaction that spans the database and the message broker delivering the notification.
Developers are faced with many difficult technical challenges when trying to implement this pattern on their own, which often involves writing error-prone central coordination managers that, at most, support a combination of one or two databases and message brokers.
For example, you can use the outbox pattern to:
- Write a new user record to an account database.
- Send a notification message that the account was successfully created.
With Dapr’s outbox support, you can notify subscribers when an application’s state is created or updated when calling Dapr’s transactions API.
The diagram below is an overview of how the outbox feature works:
- Service A saves/updates state to the state store using a transaction.
- A message is written to the broker under the same transaction. When the message is successfully delivered to the message broker, the transaction completes, ensuring the state and message are transacted together.
- The message broker delivers the message topic to any subscribers - in this case, Service B.

Requirements
The outbox feature can be used with any transactional state store supported by Dapr. All pub/sub brokers are supported with the outbox feature.
Learn more about the transactional methods you can use.
Note
Using a message broker that supports the competing consumer pattern (for example, Apache Kafka) is encouraged to reduce the chances of duplicate events.
Enable the outbox pattern
To enable the outbox feature, add the following required and optional fields on a state store component:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mysql-outbox
spec:
  type: state.mysql
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  - name: outboxPublishPubsub # Required
    value: "mypubsub"
  - name: outboxPublishTopic # Required
    value: "newOrder"
  - name: outboxPubsub # Optional
    value: "myOutboxPubsub"
  - name: outboxDiscardWhenMissingState # Optional. Defaults to false
    value: false
Metadata fields
| Name | Required | Default Value | Description |
|---|---|---|---|
| outboxPublishPubsub | Yes | N/A | Sets the name of the pub/sub component to deliver the notifications when publishing state changes |
| outboxPublishTopic | Yes | N/A | Sets the topic that receives the state changes on the pub/sub configured with outboxPublishPubsub. The message body will be a state transaction item for an insert or update operation |
| outboxPubsub | No | outboxPublishPubsub | Sets the pub/sub component used by Dapr to coordinate the state and pub/sub transactions. If not set, the pub/sub component configured with outboxPublishPubsub is used. This is useful if you want to separate the pub/sub component used to send the notification state changes from the one used to coordinate the transaction |
| outboxDiscardWhenMissingState | No | false | By setting outboxDiscardWhenMissingState to true, Dapr discards the transaction if it cannot find the state in the database and does not retry. This setting can be useful if the state store data has been deleted for any reason before Dapr was able to deliver the message and you would like Dapr to drop the items from the pub/sub and stop retrying to fetch the state |
Additional configurations
Combining outbox and non-outbox messages on the same state store
If you want to use the same state store for sending both outbox and non-outbox messages, simply define two state store components that connect to the same state store, where one has the outbox feature and the other does not.
MySQL state store without outbox
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mysql
spec:
  type: state.mysql
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
MySQL state store with outbox
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mysql-outbox
spec:
  type: state.mysql
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  - name: outboxPublishPubsub # Required
    value: "mypubsub"
  - name: outboxPublishTopic # Required
    value: "newOrder"
Shape the outbox pattern message
You can override the outbox pattern message published to the pub/sub broker by adding another transaction item that is not saved to the database and is explicitly marked as a projection. This item has a metadata key named outbox.projection added, with a value set to true. When added to the state array saved in a transaction, this payload is ignored when the state is written, and the data is instead used as the payload sent to the upstream subscriber.
To use this correctly, the key values must match between the operation on the state store and the message projection. If the keys do not match, the whole transaction fails.
If you have two or more outbox.projection enabled state items for the same key, the first one defined is used and the others are ignored.
Learn more about default and custom CloudEvent messages.
In the following Python SDK example of a state transaction, the value of "2" is saved to the database, but the value of "3" is published to the end-user topic.
#dependencies (assumed; the original snippet omitted its imports)
import asyncio

from dapr.aio.clients import DaprClient
from dapr.clients.grpc._state import StateItem, StateOptions

DAPR_STORE_NAME = "statestore"

async def main():
    client = DaprClient()

    # Define the first state operation to save the value "2"
    op1 = StateItem(
        key="key1",
        value=b"2"
    )

    # Define the second state operation to publish the value "3" with metadata
    op2 = StateItem(
        key="key1",
        value=b"3",
        options=StateOptions(
            metadata={
                "outbox.projection": "true"
            }
        )
    )

    # Create the list of state operations
    ops = [op1, op2]

    # Execute the state transaction
    await client.state.transaction(DAPR_STORE_NAME, operations=ops)
    print("State transaction executed.")

asyncio.run(main())
By setting the metadata item "outbox.projection" to "true" and making sure the key values match (key1):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
In the following JavaScript SDK example of a state transaction, the value of "2" is saved to the database, but the value of "3" is published to the end-user topic.
const { DaprClient, StateOperationType } = require('@dapr/dapr');

const DAPR_STORE_NAME = "statestore";

async function main() {
    const client = new DaprClient();

    // Define the first state operation to save the value "2"
    const op1 = {
        operation: StateOperationType.UPSERT,
        request: {
            key: "key1",
            value: "2"
        }
    };

    // Define the second state operation to publish the value "3" with metadata
    const op2 = {
        operation: StateOperationType.UPSERT,
        request: {
            key: "key1",
            value: "3",
            metadata: {
                "outbox.projection": "true"
            }
        }
    };

    // Create the list of state operations
    const ops = [op1, op2];

    // Execute the state transaction
    await client.state.transaction(DAPR_STORE_NAME, ops);
    console.log("State transaction executed.");
}

main().catch(err => {
    console.error(err);
});
By setting the metadata item "outbox.projection" to "true" and making sure the key values match (key1):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
In the following .NET SDK example of a state transaction, the value of "2" is saved to the database, but the value of "3" is published to the end-user topic.
using System.Text;
using Dapr.Client;

public class Program
{
    private const string DAPR_STORE_NAME = "statestore";

    public static async Task Main(string[] args)
    {
        var client = new DaprClientBuilder().Build();

        // Define the first state operation to save the value "2"
        var op1 = new StateTransactionRequest(
            key: "key1",
            value: Encoding.UTF8.GetBytes("2"),
            operationType: StateOperationType.Upsert
        );

        // Define the second state operation to publish the value "3" with metadata
        var metadata = new Dictionary<string, string>
        {
            { "outbox.projection", "true" }
        };
        var op2 = new StateTransactionRequest(
            key: "key1",
            value: Encoding.UTF8.GetBytes("3"),
            operationType: StateOperationType.Upsert,
            metadata: metadata
        );

        // Create the list of state operations
        var ops = new List<StateTransactionRequest> { op1, op2 };

        // Execute the state transaction
        await client.ExecuteStateTransactionAsync(DAPR_STORE_NAME, ops);
        Console.WriteLine("State transaction executed.");
    }
}
By setting the metadata item "outbox.projection" to "true" and making sure the key values match (key1):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
In the following Java SDK example of a state transaction, the value of "2" is saved to the database, but the value of "3" is published to the end-user topic.
public class Main {
    private static final String DAPR_STORE_NAME = "statestore";

    public static void main(String[] args) {
        try (DaprClient client = new DaprClientBuilder().build()) {
            // Define the first state operation to save the value "2"
            StateOperation<String> op1 = new StateOperation<>(
                    StateOperationType.UPSERT,
                    "key1",
                    "2"
            );

            // Define the second state operation to publish the value "3" with metadata
            Map<String, String> metadata = new HashMap<>();
            metadata.put("outbox.projection", "true");
            StateOperation<String> op2 = new StateOperation<>(
                    StateOperationType.UPSERT,
                    "key1",
                    "3",
                    metadata
            );

            // Create the list of state operations
            List<StateOperation<?>> ops = new ArrayList<>();
            ops.add(op1);
            ops.add(op2);

            // Execute the state transaction
            client.executeStateTransaction(DAPR_STORE_NAME, ops).block();
            System.out.println("State transaction executed.");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
By setting the metadata item "outbox.projection" to "true" and making sure the key values match (key1):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
In the following Go SDK example of a state transaction, the value of "2" is saved to the database, but the value of "3" is published to the end-user topic.
ops := make([]*dapr.StateOperation, 0)
op1 := &dapr.StateOperation{
    Type: dapr.StateOperationTypeUpsert,
    Item: &dapr.SetStateItem{
        Key:   "key1",
        Value: []byte("2"),
    },
}
op2 := &dapr.StateOperation{
    Type: dapr.StateOperationTypeUpsert,
    Item: &dapr.SetStateItem{
        Key:   "key1",
        Value: []byte("3"),
        // Override the data payload saved to the database
        Metadata: map[string]string{
            "outbox.projection": "true",
        },
    },
}
ops = append(ops, op1, op2)
meta := map[string]string{}
err := client.ExecuteStateTransaction(ctx, store, meta, ops)
By setting the metadata item "outbox.projection" to "true" and making sure the key values match (key1):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
You can pass the message override using the following HTTP request:
curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
  -H "Content-Type: application/json" \
  -d '{
        "operations": [
          {
            "operation": "upsert",
            "request": {
              "key": "order1",
              "value": {
                "orderId": "7hf8374s",
                "type": "book",
                "name": "The name of the wind"
              }
            }
          },
          {
            "operation": "upsert",
            "request": {
              "key": "order1",
              "value": {
                "orderId": "7hf8374s"
              },
              "metadata": {
                "outbox.projection": "true"
              },
              "contentType": "application/json"
            }
          }
        ]
      }'
By setting the metadata item "outbox.projection" to "true" and making sure the key values match (key1):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
Override Dapr-generated CloudEvent fields
You can override the Dapr-generated CloudEvent fields on the published outbox event with custom CloudEvent metadata.
#dependencies (assumed; the original snippet omitted its imports)
import asyncio

from dapr.aio.clients import DaprClient

async def execute_state_transaction():
    async with DaprClient() as client:
        # Define state operations
        ops = []
        op1 = {
            'operation': 'upsert',
            'request': {
                'key': 'key1',
                'value': b'2',  # Convert string to byte array
                'metadata': {
                    'cloudevent.id': 'unique-business-process-id',
                    'cloudevent.source': 'CustomersApp',
                    'cloudevent.type': 'CustomerCreated',
                    'cloudevent.subject': '123',
                    'my-custom-ce-field': 'abc'
                }
            }
        }
        ops.append(op1)

        # Execute state transaction
        store_name = 'your-state-store-name'
        try:
            await client.execute_state_transaction(store_name, ops)
            print('State transaction executed.')
        except Exception as e:
            print('Error executing state transaction:', e)

# Run the async function
if __name__ == "__main__":
    asyncio.run(execute_state_transaction())
const { DaprClient } = require('@dapr/dapr');

async function executeStateTransaction() {
    // Initialize Dapr client
    const daprClient = new DaprClient();

    // Define state operations
    const ops = [];
    const op1 = {
        operation: 'upsert',
        request: {
            key: 'key1',
            value: Buffer.from('2'),
            metadata: {
                'id': 'unique-business-process-id',
                'source': 'CustomersApp',
                'type': 'CustomerCreated',
                'subject': '123',
                'my-custom-ce-field': 'abc'
            }
        }
    };
    ops.push(op1);

    // Execute the state transaction against the configured store
    const storeName = 'your-state-store-name';
    await daprClient.state.transaction(storeName, ops);
    console.log('State transaction executed.');
}

executeStateTransaction();
using System.Text.Json;
using Dapr.Client;

public class StateOperationExample
{
    public async Task ExecuteStateTransactionAsync()
    {
        var daprClient = new DaprClientBuilder().Build();

        // Define the value "2" as a string and serialize it to a byte array
        var value = "2";
        var valueBytes = JsonSerializer.SerializeToUtf8Bytes(value);

        // Define the first state operation to save the value "2" with metadata
        // Override Cloudevent metadata
        var metadata = new Dictionary<string, string>
        {
            { "cloudevent.id", "unique-business-process-id" },
            { "cloudevent.source", "CustomersApp" },
            { "cloudevent.type", "CustomerCreated" },
            { "cloudevent.subject", "123" },
            { "my-custom-ce-field", "abc" }
        };
        var op1 = new StateTransactionRequest(
            key: "key1",
            value: valueBytes,
            operationType: StateOperationType.Upsert,
            metadata: metadata
        );

        // Create the list of state operations
        var ops = new List<StateTransactionRequest> { op1 };

        // Execute the state transaction
        var storeName = "your-state-store-name";
        await daprClient.ExecuteStateTransactionAsync(storeName, ops);
        Console.WriteLine("State transaction executed.");
    }

    public static async Task Main(string[] args)
    {
        var example = new StateOperationExample();
        await example.ExecuteStateTransactionAsync();
    }
}
public class StateOperationExample {
    public static void main(String[] args) {
        executeStateTransaction();
    }

    public static void executeStateTransaction() {
        // Build Dapr client
        try (DaprClient daprClient = new DaprClientBuilder().build()) {
            // Define the value "2"
            String value = "2";

            // Override CloudEvent metadata
            Map<String, String> metadata = new HashMap<>();
            metadata.put("cloudevent.id", "unique-business-process-id");
            metadata.put("cloudevent.source", "CustomersApp");
            metadata.put("cloudevent.type", "CustomerCreated");
            metadata.put("cloudevent.subject", "123");
            metadata.put("my-custom-ce-field", "abc");

            // Define state operations
            List<StateOperation<?>> ops = new ArrayList<>();
            StateOperation<String> op1 = new StateOperation<>(
                    StateOperationType.UPSERT,
                    "key1",
                    value,
                    metadata
            );
            ops.add(op1);

            // Execute state transaction
            String storeName = "your-state-store-name";
            daprClient.executeStateTransaction(storeName, ops).block();
            System.out.println("State transaction executed.");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
package main

import (
    "context"
    "log"

    dapr "github.com/dapr/go-sdk/client"
)

func main() {
    // Create a Dapr client
    client, err := dapr.NewClient()
    if err != nil {
        log.Fatalf("failed to create Dapr client: %v", err)
    }
    defer client.Close()

    ctx := context.Background()
    store := "your-state-store-name"

    // Define state operations
    ops := make([]*dapr.StateOperation, 0)
    op1 := &dapr.StateOperation{
        Type: dapr.StateOperationTypeUpsert,
        Item: &dapr.SetStateItem{
            Key:   "key1",
            Value: []byte("2"),
            // Override Cloudevent metadata
            Metadata: map[string]string{
                "cloudevent.id":      "unique-business-process-id",
                "cloudevent.source":  "CustomersApp",
                "cloudevent.type":    "CustomerCreated",
                "cloudevent.subject": "123",
                "my-custom-ce-field": "abc",
            },
        },
    }
    ops = append(ops, op1)

    // Metadata for the transaction (if any)
    meta := map[string]string{}

    // Execute state transaction
    err = client.ExecuteStateTransaction(ctx, store, meta, ops)
    if err != nil {
        log.Fatalf("failed to execute state transaction: %v", err)
    }
    log.Println("State transaction executed.")
}
curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
  -H "Content-Type: application/json" \
  -d '{
        "operations": [
          {
            "operation": "upsert",
            "request": {
              "key": "key1",
              "value": "2"
            }
          }
        ],
        "metadata": {
          "id": "unique-business-process-id",
          "source": "CustomersApp",
          "type": "CustomerCreated",
          "subject": "123",
          "my-custom-ce-field": "abc"
        }
      }'
Note
The data CloudEvent field is reserved for Dapr's use only, and is non-customizable.
1.4.6 - How-To: Share state between applications
Dapr provides different ways to share state between applications.
Different architectures might have different needs when it comes to sharing state. In one scenario, you may want to:
- Encapsulate all state within a given application
- Have Dapr manage the access for you
In a different scenario, you may need two applications working on the same state to get and save the same keys.
To enable state sharing, Dapr supports the following key prefix strategies:

| Key prefixes | Description |
|---|---|
| appid | The default strategy, allowing you to manage state only by the app with the specified appid. All state keys are prefixed with the appid and scoped to the application. |
| name | Uses the name of the state store component as the prefix. Multiple applications can share the same state for a given state store. |
| namespace | If set, this setting prefixes the appid key with the configured namespace, resulting in a key that is scoped to a given namespace. This allows apps in different namespaces with the same appid to reuse the same state store. If a namespace is not configured, the setting falls back to the appid strategy. For more information on namespaces in Dapr, see How-To: Scope components to one or more applications. |
| none | Uses no prefixing. Multiple applications share state across different state stores. |
Specifying a state prefix strategy
To specify a prefix strategy, add a metadata key named keyPrefix on a state component:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production
spec:
  type: state.redis
  version: v1
  metadata:
  - name: keyPrefix
    value: <key-prefix-strategy>
Examples
The following examples demonstrate what state retrieval looks like with each of the supported prefix strategies.
appid (default)
In the example below, a Dapr application with app id myApp is saving state into a state store named redis:
curl -X POST http://localhost:3500/v1.0/state/redis \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "darth",
          "value": "nihilus"
        }
      ]'
The key will be saved as myApp||darth.
namespace
A Dapr application running in namespace production with app id myApp is saving state into a state store named redis:
curl -X POST http://localhost:3500/v1.0/state/redis \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "darth",
          "value": "nihilus"
        }
      ]'
The key will be saved as production.myApp||darth.
name
In the example below, a Dapr application with app id myApp is saving state into a state store named redis:
curl -X POST http://localhost:3500/v1.0/state/redis \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "darth",
          "value": "nihilus"
        }
      ]'
The key will be saved as redis||darth.
none
In the example below, a Dapr application with app id myApp is saving state into a state store named redis:
curl -X POST http://localhost:3500/v1.0/state/redis \
-H "Content-Type: application/json"
-d '[
{
"key": "darth",
"value": "nihilus"
}
]'
The key will be saved as darth.
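Whichever prefix strategy is configured, Dapr applies it transparently: your application always reads and writes the unprefixed key. For example, to retrieve the value saved in the examples above, you would request the plain key (a sketch assuming the default appid strategy and a sidecar listening on HTTP port 3500):
curl http://localhost:3500/v1.0/state/redis/darth
Dapr resolves this to the prefixed key (myApp||darth in the appid case) internally and returns "nihilus".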
1.4.7 - How-To: Encrypt application state
Encrypt application state at rest to provide stronger security in enterprise workloads or regulated environments. Dapr offers automatic client-side encryption based on AES in Galois/Counter Mode (GCM), supporting keys of 128, 192, and 256 bits.
In addition to automatic encryption, Dapr supports primary and secondary encryption keys to make it easier for developers and ops teams to enable a key rotation strategy. This feature is supported by all Dapr state stores.
The encryption keys are always fetched from a secret, and cannot be supplied as plaintext values in the metadata section.
Enabling automatic encryption
Add the following metadata section to any Dapr supported state store:
metadata:
- name: primaryEncryptionKey
secretKeyRef:
name: mysecret
key: mykey # key is optional.
For example, this is the full YAML of a Redis encrypted state store:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: primaryEncryptionKey
secretKeyRef:
name: mysecret
key: mykey
You now have a Dapr state store configured to fetch the encryption key from a secret named mysecret, containing the actual encryption key in a key named mykey.
The actual encryption key must be a valid, hex-encoded encryption key. While 192-bit and 256-bit keys are supported, it’s recommended you use 128-bit encryption keys. Dapr errors and exits if the encryption key is invalid.
For example, you can generate a random, hex-encoded 128-bit (16-byte) key with:
openssl rand 16 | hexdump -v -e '/1 "%02x"'
# Result will be similar to "cb321007ad11a9d23f963bff600d58e0"
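If openssl isn’t available, a hex-encoded key of the same length can also be generated with the Python standard library (a sketch; any cryptographically secure random source works):
python3 -c "import secrets; print(secrets.token_hex(16))"
# Prints a 32-character hex string representing a 128-bit key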
Note that the secret store does not have to support keys.
Key rotation
To support key rotation, Dapr provides a way to specify a secondary encryption key:
metadata:
- name: primaryEncryptionKey
secretKeyRef:
name: mysecret
key: mykey
- name: secondaryEncryptionKey
secretKeyRef:
name: mysecret2
key: mykey2
When Dapr starts, it fetches the secrets containing the encryption keys listed in the metadata section. Dapr automatically knows which state item has been encrypted with which key, as it appends the secretKeyRef.name field to the end of the actual state key.
To rotate a key:
- Change the primaryEncryptionKey to point to a secret containing your new key.
- Move the old primary encryption key to the secondaryEncryptionKey.
New data will be encrypted using the new key, and any retrieved old data will be decrypted using the secondary key.
Any updates to data items encrypted with the old key will be re-encrypted using the new key.
Note
When you rotate a key, data encrypted with the old key is not automatically re-encrypted unless your application writes it again. If you remove the rotated key (the now-secondary encryption key), you will not be able to access data that was encrypted with that key.
Related links
1.4.8 - Work with backend state stores
Explore the Operations section to see a list of supported state stores and how to setup state store components.
1.4.8.1 - Azure Cosmos DB
Dapr doesn’t transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the state management spec). You can directly interact with the underlying store to manipulate the state data, such as:
- Querying states.
- Creating aggregated views.
- Making backups.
Note
Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the Azure Cosmos DB SQL API.
Connect to Azure Cosmos DB
To connect to your Cosmos DB instance, you can either:
- Use the Data Explorer on Azure Management Portal.
- Use various SDKs and tools.
Note
When you configure an Azure Cosmos DB for Dapr, specify the exact database and collection to use. The following Cosmos DB SQL API samples assume you’ve already connected to the right database and a collection named “states”.
List keys by App ID
To get all state keys associated with application “myapp”, use the query:
SELECT * FROM states WHERE CONTAINS(states.id, 'myapp||')
The above query returns all documents with an id containing “myapp||”, which is the prefix of the state keys.
Get specific state data
To get the state data by a key “balance” for the application “myapp”, use the query:
SELECT * FROM states WHERE states.id = 'myapp||balance'
Read the value field of the returned document. To get the state version/ETag, use the command:
SELECT states._etag FROM states WHERE states.id = 'myapp||balance'
Read actor state
To get all the state keys associated with an actor with the instance ID “leroy” of actor type “cat” belonging to the application with ID “mypets”, use the command:
SELECT * FROM states WHERE CONTAINS(states.id, 'mypets||cat||leroy||')
And to get a specific actor state such as “food”, use the command:
SELECT * FROM states WHERE states.id = 'mypets||cat||leroy||food'
Warning
You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. The only exception: it is often required to delete actor records in a state store, once you know that these are no longer in use, to prevent a build up of unused actor instances that may never be loaded again.
1.4.8.2 - Redis
Dapr doesn’t transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the state management spec). You can directly interact with the underlying store to manipulate the state data, such as:
- Querying states.
- Creating aggregated views.
- Making backups.
Note
The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation.
Connect to Redis
You can use the official redis-cli or any other Redis compatible tools to connect to the Redis state store to query Dapr states directly. If you are running Redis in a container, the easiest way to use redis-cli is via a container:
docker run --rm -it --link <name of the Redis container> redis redis-cli -h <name of the Redis container>
List keys by App ID
To get all state keys associated with application “myapp”, use the command:
KEYS myapp*
The above command returns a list of existing keys, for example:
1) "myapp||balance"
2) "myapp||amount"
Get specific state data
Dapr saves state values as hash values. Each hash value contains a “data” field, which contains:
- The state data.
- A “version” field, with an ever-incrementing version serving as the ETag.
For example, to get the state data by a key “balance” for the application “myapp”, use the command:
HGET myapp||balance data
To get the state version/ETag, use the command:
HGET myapp||balance version
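To inspect every field of a state item at once, including both the data and version fields, you can also dump the entire hash (a sketch; the exact contents depend on your data):
HGETALL myapp||balance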
Read actor state
To get all the state keys associated with an actor with the instance ID “leroy” of actor type “cat” belonging to the application with ID “mypets”, use the command:
KEYS mypets||cat||leroy*
To get a specific actor state such as “food”, use the command:
HGET mypets||cat||leroy||food value
Warning
You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. The only exception: it is often required to delete actor records in a state store, once you know that these are no longer in use, to prevent a build up of unused actor instances that may never be loaded again.
1.4.8.3 - SQL server
Dapr doesn’t transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the state management spec). You can directly interact with the underlying store to manipulate the state data, such as:
- Querying states.
- Creating aggregated views.
- Making backups.
Connect to SQL Server
The easiest way to connect to your SQL Server instance is to use one of the following:
- Azure Data Studio (Windows, macOS, Linux)
- SQL Server Management Studio (Windows)
Note
When you configure an Azure SQL database for Dapr, you need to specify the exact table name to use. The following Azure SQL samples assume you’ve already connected to the right database with a table named “states”.
List keys by App ID
To get all state keys associated with application “myapp”, use the query:
SELECT * FROM states WHERE [Key] LIKE 'myapp||%'
The above query returns all rows whose Key column contains “myapp||”, which is the prefix of the state keys.
Get specific state data
To get the state data by a key “balance” for the application “myapp”, use the query:
SELECT * FROM states WHERE [Key] = 'myapp||balance'
Read the Data field of the returned row. To get the state version/ETag, use the command:
SELECT [RowVersion] FROM states WHERE [Key] = 'myapp||balance'
Get filtered state data
To get all state data where the value “color” in json data equals to “blue”, use the query:
SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue'
Read actor state
To get all the state keys associated with an actor with the instance ID “leroy” of actor type “cat” belonging to the application with ID “mypets”, use the command:
SELECT * FROM states WHERE [Key] LIKE 'mypets||cat||leroy||%'
To get a specific actor state such as “food”, use the command:
SELECT * FROM states WHERE [Key] = 'mypets||cat||leroy||food'
Warning
You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. The only exception: it is often required to delete actor records in a state store, once you know that these are no longer in use, to prevent a build up of unused actor instances that may never be loaded again.
1.4.9 - State Time-to-Live (TTL)
Dapr enables a time-to-live (TTL) per state set request. This means that applications can set a time-to-live per stored state item, and these states cannot be retrieved after expiration.
For supported state stores, you simply set the ttlInSeconds metadata when saving state. Other state stores ignore this value. For some state stores, you can specify a default expiration on a per-table/container basis.
Native state TTL support
When state TTL has native support in the state store component, Dapr forwards the TTL configuration without adding any extra logic, maintaining predictable behavior. This is helpful when the expired state is handled differently by the component.
When a TTL is not specified, the default behavior of the state store is retained.
Explicit persistence bypassing globally defined TTL
Some state stores let you specify a default TTL that applies to all data, either by:
- Setting a global TTL value via a Dapr component, or
- Setting a global TTL value when creating the state store outside of Dapr.
When no specific TTL is set on a request, the data expires after that global TTL period of time. This expiration is handled by the state store itself and is not facilitated by Dapr.
In addition, all state stores also support the option to explicitly persist data. This means you can ignore the default database policy (which may have been set outside of Dapr or via a Dapr component) to indefinitely retain a given database record. You can do this by setting ttlInSeconds to the value of -1. This value tells the state store to ignore any TTL value set.
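For example, the following request persists a state item indefinitely, regardless of any default TTL configured on the store (a sketch assuming a state store named statestore and a sidecar on HTTP port 3500):
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "order_2", "value": "150", "metadata": { "ttlInSeconds": "-1" } }]' http://localhost:3500/v1.0/state/statestore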
Supported components
Refer to the TTL column in the state store components guide.
Example
You can set state TTL in the metadata as part of the state store set request:
#dependencies
from dapr.clients import DaprClient

#code
DAPR_STORE_NAME = "statestore"
orderId = 100  # example order value

with DaprClient() as client:
    client.save_state(DAPR_STORE_NAME, "order_1", str(orderId), state_metadata={
        'ttlInSeconds': '120'
    })
To launch a Dapr sidecar and run the above example application, you’d then run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 -- python3 OrderProcessingService.py
// dependencies
using Dapr.Client;

// code
var client = new DaprClientBuilder().Build();
await client.SaveStateAsync(storeName, stateKeyName, state, metadata: new Dictionary<string, string>() {
    {
        "ttlInSeconds", "120"
    }
});
To launch a Dapr sidecar and run the above example application, you’d then run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run
// dependencies
import (
dapr "github.com/dapr/go-sdk/client"
)
// code
md := map[string]string{"ttlInSeconds": "120"}
if err := client.SaveState(ctx, store, "key1", []byte("hello world"), md); err != nil {
panic(err)
}
To launch a Dapr sidecar and run the above example application, you’d then run a command similar to the following:
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run .
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "order_1", "value": "250", "metadata": { "ttlInSeconds": "120" } }]' http://localhost:3601/v1.0/state/statestore
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{"key": "order_1", "value": "250", "metadata": {"ttlInSeconds": "120"}}]' -Uri 'http://localhost:3601/v1.0/state/statestore'
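To observe the TTL in action, read the key back: before the 120 seconds elapse the value is returned, and after expiration the request returns an empty response (a sketch, reusing the sidecar from the examples above):
curl http://localhost:3601/v1.0/state/statestore/order_1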
Related links
- See the state API reference guide.
- Learn how to use key value pairs to persist a state.
- List of state store components.
- Read the API reference.
1.5 - Bindings
More about Dapr Bindings
Learn more about how to use Dapr Bindings:
- Try the Bindings quickstart.
- Explore input and output bindings via any of the supporting Dapr SDKs.
- Review the Bindings API reference documentation.
- Browse the supported input and output bindings component specs.
1.5.1 - Bindings overview
Using Dapr’s bindings API, you can trigger your app with events coming in from external systems and interface with external systems. With the bindings API, you can:
- Avoid the complexities of connecting to and polling from messaging systems, such as queues and message buses.
- Focus on business logic, instead of the implementation details of interacting with a system.
- Keep your code free from SDKs or libraries.
- Handle retries and failure recovery.
- Switch between bindings at runtime.
- Build portable applications with environment-specific bindings set-up and no required code changes.
For example, with bindings, your application can respond to incoming Twilio/SMS messages without:
- Adding or configuring a third-party Twilio SDK
- Worrying about polling from Twilio (or using WebSockets, etc.)

In the above diagram:
- The input binding triggers a method on your application.
- Execute output binding operations on the component, such as "create".
Bindings are developed independently of the Dapr runtime. You can view and contribute to the bindings.
Note
If you are using the HTTP Binding, then it is preferable to use service invocation instead. Read How-To: Invoke Non-Dapr Endpoints using HTTP for more information.
Input bindings
With input bindings, you can trigger your application when an event from an external resource occurs. An optional payload and metadata may be sent with the request.
The following overview video and demo demonstrate how Dapr input binding works.
To receive events from an input binding:
- Define the component YAML that describes the binding type and its metadata (connection info, etc.).
- Listen for the incoming event using:
- An HTTP endpoint
- The gRPC proto library to get incoming events.
Note
On startup, Dapr sends an OPTIONS request for all defined input bindings to the application. If the application wants to subscribe to the binding, Dapr expects a status code of 2xx or 405.
Read the Create an event-driven app using input bindings guide to get started with input bindings.
Output bindings
With output bindings, you can invoke external resources. An optional payload and metadata can be sent with the invocation request.
The following overview video and demo demonstrate how Dapr output binding works.
To invoke an output binding:
- Define the component YAML that describes the binding type and its metadata (connection info, etc.).
- Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload.
- Specify an output operation. Output operations depend on the binding component you use, and can include:
"create"
"update"
"delete"
"exec"
Read the Use output bindings to interface with external resources guide to get started with output bindings.
Binding directions (optional)
You can provide the direction metadata field to indicate the direction(s) supported by the binding component. In doing so, the Dapr sidecar avoids the "wait for the app to become ready" state, reducing the lifecycle dependency between the Dapr sidecar and the application:
"input"
"output"
"input, output"
Note
It is highly recommended that all input bindings include the direction property.
See a full example of the bindings direction metadata.
Try out bindings
Quickstarts and tutorials
Want to put the Dapr bindings API to the test? Walk through the following quickstart and tutorials to see bindings in action:
Quickstart/tutorial | Description |
---|---|
Bindings quickstart | Work with external systems using input bindings to respond to events and output bindings to call operations. |
Bindings tutorial | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
Start using bindings directly in your app
Want to skip the quickstarts? Not a problem. You can try out the bindings building block directly in your application to invoke output bindings and trigger input bindings. After Dapr is installed, you can begin using the bindings API starting with the input bindings how-to guide.
Next Steps
- Follow these guides on:
- Try out the bindings tutorial to experiment with binding to a Kafka queue.
- Read the bindings API specification
1.5.2 - How-To: Trigger your application with input bindings
With input bindings, you can trigger your application when an event from an external resource occurs. An external resource could be a queue, messaging pipeline, cloud-service, filesystem, etc. An optional payload and metadata may be sent with the request.
Input bindings are ideal for event-driven processing, data pipelines, or generally reacting to events and performing further processing. Dapr input bindings allow you to:
- Receive events without including specific SDKs or libraries
- Replace bindings without changing your code
- Focus on business logic and not the event resource implementation

This guide uses a Kafka binding as an example. You can find your preferred binding spec from the list of bindings components. In this guide:
- The example invokes the /binding endpoint with checkout, the name of the binding to invoke.
- The payload goes inside the mandatory data field, and can be any JSON serializable value.
- The operation field tells the binding what action it needs to take. For example, the Kafka binding supports the create operation.
Note
If you haven’t already, try out the bindings quickstart for a quick walk-through on how to use the bindings API.
Create a binding
Create a binding.yaml file and save it to a components sub-folder in your application directory.
Create a new binding component named checkout. Within the metadata section, configure the following Kafka-related properties:
- The topic to which you’ll publish the message
- The broker
When creating the binding component, specify the supported direction of the binding.
Use the --resources-path flag with the dapr run command to point to your custom resources directory.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: checkout
spec:
type: bindings.kafka
version: v1
metadata:
# Kafka broker connection setting
- name: brokers
value: localhost:9092
# consumer configuration: topic and consumer group
- name: topics
value: sample
- name: consumerGroup
value: group1
# publisher configuration: topic
- name: publishTopic
value: sample
- name: authRequired
value: false
- name: direction
value: input
To deploy into a Kubernetes cluster, run kubectl apply -f binding.yaml.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: checkout
spec:
type: bindings.kafka
version: v1
metadata:
# Kafka broker connection setting
- name: brokers
value: localhost:9092
# consumer configuration: topic and consumer group
- name: topics
value: sample
- name: consumerGroup
value: group1
# publisher configuration: topic
- name: publishTopic
value: sample
- name: authRequired
value: false
- name: direction
value: input
Listen for incoming events (input binding)
Configure your application to receive incoming events. If you’re using HTTP, you need to:
- Listen on a POST endpoint with the name of the binding, as specified in metadata.name in the binding.yaml file.
- Verify your application allows Dapr to make an OPTIONS request for this endpoint.
Below are code examples that leverage Dapr SDKs to demonstrate an input binding.
The following example demonstrates how to configure an input binding using ASP.NET Core controllers.
using System.Collections.Generic;
using System.Threading.Tasks;
using System;
using Microsoft.AspNetCore.Mvc;
namespace CheckoutService.controller;
[ApiController]
public sealed class CheckoutServiceController : ControllerBase
{
[HttpPost("/checkout")]
public ActionResult<string> getCheckout([FromBody] int orderId)
{
Console.WriteLine($"Received Message: {orderId}");
return $"CID{orderId}";
}
}
The following example demonstrates how to configure the same input binding using a minimal API approach:
app.MapPost("checkout", ([FromBody] int orderId) =>
{
Console.WriteLine($"Received Message: {orderId}");
return $"CID{orderId}"
});
//dependencies
import org.springframework.web.bind.annotation.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
//code
@RestController
@RequestMapping("/")
public class CheckoutServiceController {
private static final Logger log = LoggerFactory.getLogger(CheckoutServiceController.class);
@PostMapping(path = "/checkout")
public Mono<String> getCheckout(@RequestBody(required = false) byte[] body) {
return Mono.fromRunnable(() ->
log.info("Received Message: " + new String(body)));
}
}
#dependencies
import logging
from dapr.ext.grpc import App, BindingRequest
#code
app = App()
@app.binding('checkout')
def getCheckout(request: BindingRequest):
logging.basicConfig(level = logging.INFO)
logging.info('Received Message : ' + request.text())
app.run(6002)
//dependencies
import (
"encoding/json"
"log"
"net/http"
"github.com/gorilla/mux"
)
//code
func getCheckout(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
var orderId int
err := json.NewDecoder(r.Body).Decode(&orderId)
log.Println("Received Message: ", orderId)
if err != nil {
log.Printf("error parsing checkout input binding payload: %s", err)
w.WriteHeader(http.StatusOK)
return
}
}
func main() {
r := mux.NewRouter()
r.HandleFunc("/checkout", getCheckout).Methods("POST", "OPTIONS")
http.ListenAndServe(":6002", r)
}
//dependencies
import { DaprServer, CommunicationProtocolEnum } from '@dapr/dapr';
//code
const daprHost = "127.0.0.1";
const serverHost = "127.0.0.1";
const serverPort = "6002";
const daprPort = "3602";
start().catch((e) => {
console.error(e);
process.exit(1);
});
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
communicationProtocol: CommunicationProtocolEnum.HTTP,
clientOptions: {
daprHost,
daprPort,
}
});
await server.binding.receive('checkout', async (orderId) => console.log(`Received Message: ${JSON.stringify(orderId)}`));
await server.start();
}
ACK an event
Tell Dapr you’ve successfully processed an event in your application by returning a 200 OK response from your HTTP handler.
Reject an event
Tell Dapr the event was not processed correctly in your application and schedule it for redelivery by returning any response other than 200 OK. For example, a 500 Error.
Specify a custom route
By default, incoming events will be sent to an HTTP endpoint that corresponds to the name of the input binding. You can override this by setting the following metadata property in binding.yaml:
name: mybinding
spec:
type: bindings.rabbitmq
metadata:
- name: route
value: /onevent
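With this configuration, Dapr delivers events for mybinding to /onevent instead of /mybinding. A minimal sketch of a matching handler, assuming a Python Flask app serving the app port (6002 here is illustrative):
from flask import Flask, request

app = Flask(__name__)

# Dapr delivers events for "mybinding" to the custom /onevent route
@app.route('/onevent', methods=['POST', 'OPTIONS'])
def onevent():
    print(request.get_data(), flush=True)
    return '', 200  # returning 200 tells Dapr the event was processed

app.run(port=6002)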
Event delivery guarantees
Event delivery guarantees are controlled by the binding implementation. Depending on the binding implementation, the event delivery can be exactly once or at least once.
References
1.5.3 - How-To: Use output bindings to interface with external resources
With output bindings, you can invoke external resources. An optional payload and metadata can be sent with the invocation request.

This guide uses a Kafka binding as an example. You can find your preferred binding spec from the list of bindings components. In this guide:
- The example invokes the /binding endpoint with checkout, the name of the binding to invoke.
- The payload goes inside the mandatory data field, and can be any JSON serializable value.
- The operation field tells the binding what action it needs to take. For example, the Kafka binding supports the create operation.
Note
If you haven’t already, try out the bindings quickstart for a quick walk-through on how to use the bindings API.
Create a binding
Create a binding.yaml file and save it to a components sub-folder in your application directory.
Create a new binding component named checkout. Within the metadata section, configure the following Kafka-related properties:
- The topic to which you’ll publish the message
- The broker
When creating the binding component, specify the supported direction of the binding.
Use the --resources-path flag with dapr run to point to your custom resources directory.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: checkout
spec:
type: bindings.kafka
version: v1
metadata:
# Kafka broker connection setting
- name: brokers
value: localhost:9092
# consumer configuration: topic and consumer group
- name: topics
value: sample
- name: consumerGroup
value: group1
# publisher configuration: topic
- name: publishTopic
value: sample
- name: authRequired
value: false
- name: direction
value: output
To deploy the following binding.yaml file into a Kubernetes cluster, run kubectl apply -f binding.yaml.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: checkout
spec:
type: bindings.kafka
version: v1
metadata:
# Kafka broker connection setting
- name: brokers
value: localhost:9092
# consumer configuration: topic and consumer group
- name: topics
value: sample
- name: consumerGroup
value: group1
# publisher configuration: topic
- name: publishTopic
value: sample
- name: authRequired
value: false
- name: direction
value: output
Send an event (output binding)
The code examples below leverage Dapr SDKs to invoke the output bindings endpoint on a running Dapr instance.
Here’s an example of using a console app with top-level statements in .NET 6+:
using System.Text;
using System.Threading.Tasks;
using Dapr.Client;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();
const string BINDING_NAME = "checkout";
const string BINDING_OPERATION = "create";
var random = new Random();
using var daprClient = app.Services.GetRequiredService<DaprClient>();
while (true)
{
await Task.Delay(TimeSpan.FromSeconds(5));
var orderId = random.Next(1, 1000);
await daprClient.InvokeBindingAsync(BINDING_NAME, BINDING_OPERATION, orderId);
Console.WriteLine($"Sending message: {orderId}");
}
//dependencies
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.HttpExtension;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Random;
import java.util.concurrent.TimeUnit;
//code
@SpringBootApplication
public class OrderProcessingServiceApplication {
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
public static void main(String[] args) throws InterruptedException{
String BINDING_NAME = "checkout";
String BINDING_OPERATION = "create";
while(true) {
TimeUnit.MILLISECONDS.sleep(5000);
Random random = new Random();
int orderId = random.nextInt(1000-1) + 1;
DaprClient client = new DaprClientBuilder().build();
//Using Dapr SDK to invoke output binding
client.invokeBinding(BINDING_NAME, BINDING_OPERATION, orderId).block();
log.info("Sending message: " + orderId);
}
}
}
#dependencies
import random
from time import sleep
import requests
import logging
import json
from dapr.clients import DaprClient
#code
logging.basicConfig(level = logging.INFO)
BINDING_NAME = 'checkout'
BINDING_OPERATION = 'create'
while True:
sleep(random.randrange(50, 5000) / 1000)
orderId = random.randint(1, 1000)
with DaprClient() as client:
#Using Dapr SDK to invoke output binding
resp = client.invoke_binding(BINDING_NAME, BINDING_OPERATION, json.dumps(orderId))
logging.basicConfig(level = logging.INFO)
logging.info('Sending message: ' + str(orderId))
//dependencies
import (
"context"
"log"
"math/rand"
"time"
"strconv"
dapr "github.com/dapr/go-sdk/client"
)
//code
func main() {
	BINDING_NAME := "checkout"
	BINDING_OPERATION := "create"
	// Create the Dapr client once, outside the loop
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := context.Background()
	for i := 0; i < 10; i++ {
		time.Sleep(5 * time.Second)
		orderId := rand.Intn(1000-1) + 1
		//Using Dapr SDK to invoke output binding
		in := &dapr.InvokeBindingRequest{Name: BINDING_NAME, Operation: BINDING_OPERATION, Data: []byte(strconv.Itoa(orderId))}
		if err = client.InvokeOutputBinding(ctx, in); err != nil {
			panic(err)
		}
		log.Println("Sending message: " + strconv.Itoa(orderId))
	}
}
}
//dependencies
import { DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
//code
const daprHost = "127.0.0.1";
(async function () {
for (var i = 0; i < 10; i++) {
await sleep(2000);
const orderId = Math.floor(Math.random() * (1000 - 1) + 1);
try {
await sendOrder(orderId)
} catch (err) {
console.error(err);
process.exit(1);
}
}
})();
async function sendOrder(orderId) {
const BINDING_NAME = "checkout";
const BINDING_OPERATION = "create";
const client = new DaprClient({
daprHost,
daprPort: process.env.DAPR_HTTP_PORT,
communicationProtocol: CommunicationProtocolEnum.HTTP,
});
//Using Dapr SDK to invoke output binding
const result = await client.binding.send(BINDING_NAME, BINDING_OPERATION, orderId);
console.log("Sending message: " + orderId);
}
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
You can also invoke the output bindings endpoint using HTTP:
curl -X POST -H 'Content-Type: application/json' http://localhost:3601/v1.0/bindings/checkout -d '{ "data": 100, "operation": "create" }'
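The request body can also carry optional per-request metadata for the binding. For example, with the Kafka binding you can set a partition key on the outgoing message (a sketch; supported metadata fields vary per binding component):
curl -X POST -H 'Content-Type: application/json' http://localhost:3601/v1.0/bindings/checkout -d '{ "data": 100, "metadata": { "partitionKey": "order-100" }, "operation": "create" }'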
Watch this video on how to use bi-directional output bindings.
References
1.6 - Actors
More about Dapr Actors
Learn more about how to use Dapr Actors:
- Try the Actors quickstart.
- Explore actors via any of the Dapr SDKs.
- Review the Actors API reference documentation.
1.6.1 - Actors overview
The actor pattern describes actors as the lowest-level “unit of computation”. In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.
While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying runtime manages how, when and where each actor runs, and also routes messages between actors.
A large number of actors can execute simultaneously, and actors execute independently from each other.
Actors in Dapr
Dapr includes a runtime that specifically implements the Virtual Actor pattern. With Dapr’s implementation, you write your Dapr actors according to the actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.

The following overview video and demo demonstrate how actors in Dapr work.
Dapr actors vs. Dapr Workflow
Dapr actors builds on the state management and service invocation APIs to create stateful, long running objects with identity. Dapr Workflow and Dapr Actors are related, with workflows building on actors to provide a higher level of abstraction to orchestrate a set of actors, implementing common workflow patterns and managing the lifecycle of actors on your behalf.
Dapr actors are designed to provide a way to encapsulate state and behavior within a distributed system. An actor can be activated on demand by a client application. When an actor is activated, it is assigned a unique identity, which allows it to maintain its state across multiple invocations. This makes actors useful for building stateful, scalable, and fault-tolerant distributed applications.
On the other hand, Dapr Workflow provides a way to define and orchestrate complex workflows that involve multiple services and components within a distributed system. Workflows allow you to define a sequence of steps or tasks that need to be executed in a specific order, and can be used to implement business processes, event-driven workflows, and other similar scenarios.
As mentioned above, Dapr Workflow builds on Dapr Actors managing their activation and lifecycle.
When to use Dapr actors
As with any other technology decision, you should decide whether to use actors based on the problem you’re trying to solve. For example, if you were building a chat application, you might use Dapr actors to implement the chat rooms and the individual chat sessions between users, as each chat session needs to maintain its own state and be scalable and fault-tolerant.
Generally speaking, consider the actor pattern to model your problem or scenario if:
- Your problem space involves a large number (thousands or more) of small, independent, and isolated units of state and logic.
- You want to work with single-threaded objects that do not require significant interaction from external components, including querying state across a set of actors.
- Your actor instances won’t block callers with unpredictable delays by issuing I/O operations.
When to use Dapr Workflow
You would use Dapr Workflow when you need to define and orchestrate complex workflows that involve multiple services and components. For example, using the chat application example earlier, you might use Dapr Workflows to define the overall workflow of the application, such as how new users are registered, how messages are sent and received, and how the application handles errors and exceptions.
Learn more about Dapr Workflow and how to use workflows in your application.
Actor types and actor IDs
Actors are uniquely defined as an instance of an actor type, similar to how an object is an instance of a class. For example, you might have an actor type that implements the functionality of a calculator. There could be many actors of that type distributed across various nodes in a cluster.
Each actor is uniquely identified by an actor ID. An actor ID can be any string value you choose. If you do not provide an actor ID, Dapr generates a random string for you as an ID.
Features
Namespaced actors
Dapr supports namespaced actors. An actor type can be deployed into different namespaces. You can call instances of these actors in the same namespace.
Learn more about namespaced actors and how they work.
Actor lifetime
Since Dapr actors are virtual, they do not need to be explicitly created or destroyed. The Dapr actor runtime:
- Automatically activates an actor once it receives an initial request for that actor ID.
- Garbage-collects the in-memory object of unused actors.
- Maintains knowledge of the actor’s existence in case it’s reactivated later.
An actor’s state outlives the object’s lifetime, as state is stored in the configured state provider for Dapr runtime.
Learn more about actor lifetimes.
Distribution and failover
To provide scalability and reliability, Dapr distributes actor instances throughout the cluster and automatically migrates them to healthy nodes.
Learn more about Dapr actor placement.
Actor communication
You can invoke actor methods by calling them over HTTP, as shown in the general example below.

- The service calls the actor API on the sidecar.
- With the cached partitioning information from the placement service, the sidecar determines which actor service instance will host actor ID 3. The call is forwarded to the appropriate sidecar.
- The sidecar instance in pod 2 calls the service instance to invoke the actor and execute the actor method.
Learn more about calling actor methods.
Concurrency
The Dapr actor runtime provides a simple turn-based access model for accessing actor methods. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access.
State
Transactional state stores can be used to store actor state. Regardless of whether you intend to store any state in your actor, you must set the actorStateStore property to true in the state store component’s metadata section. Actor state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the state API reference and the actors API reference to learn more about state stores for actors.
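For example, a minimal sketch of a Redis component configured as the actor state store (assuming a local Redis on the default port):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: actorStateStore
    value: "true"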
Actor timers and reminders
Actors can schedule periodic work on themselves by registering either timers or reminders.
The functionality of timers and reminders is very similar. The main difference is that the Dapr actor runtime does not retain any information about timers after deactivation, while it persists reminder information using the Dapr actor state provider.
This distinction allows users to trade off between light-weight but stateless timers vs. more resource-demanding but stateful reminders.
The following overview video and demo demonstrate how actor timers and reminders work.
- Learn more about actor timers.
- Learn more about actor reminders.
- Learn more about timer and reminder error handling and failover.
Next steps
Actors features and concepts >>
Related links
- Actors API reference
- Refer to the Dapr SDK documentation and examples.
1.6.2 - Actor runtime features
Now that you’ve learned about the actor building block at a high level, let’s deep dive into the features and concepts included with actors in Dapr.
Actor lifetime
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actor runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr actor runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor’s existence should it need to be reactivated later.
Invocation of actor methods, timers, and reminders reset the actor idle time. For example, a reminder firing keeps the actor active.
- Actor reminders fire whether an actor is active or inactive. If fired for an inactive actor, it activates the actor first.
- Actor timers firing reset the idle time; however, timers only fire while the actor is active.
The idle timeout and scan interval the Dapr runtime uses to see if an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Dapr Actors implementation deviates at times from this model.
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again causes a new actor object to be constructed. An actor’s state outlives the object’s lifetime, as state is stored in the configured state provider for the Dapr runtime.
Distribution and failover
To provide scalability and reliability, actor instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.
Actors are distributed across the instances of the actor service, and those instances are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.
Actor placement service
The Dapr actor runtime manages the distribution scheme and key range settings for you via the actor Placement service. When a new instance of a service is created:
- The sidecar makes a call to the actor service to retrieve registered actor types and configuration settings.
- The corresponding Dapr runtime registers the actor types it can create.
- The Placement service calculates the partitioning across all the instances for a given actor type.
This partition data table for each actor type is updated and stored in each Dapr instance running in the environment and can change dynamically as new instances of actor services are created and destroyed.

When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below.

This simplifies some choices, but also carries some considerations:
- By default, actors are randomly placed into pods resulting in uniform distribution.
- Because actors are randomly placed, it should be expected that actor operations always require network communication, including serialization and deserialization of method call data, incurring latency and overhead.
Note
The Dapr actor Placement service is only used for actor placement and therefore is not needed if your services are not using Dapr actors. The Placement service can run in all hosting environments, including self-hosted and Kubernetes.
Actor communication
You can interact with Dapr to invoke the actor method by calling the HTTP endpoint.
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
You can provide any data for the actor method in the request body, and the response is returned in the response body, containing the data from the actor call.
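For example, a sketch of invoking a hypothetical feed method on an actor with ID leroy of actor type cat (the method name and payload are illustrative):
curl -X POST http://localhost:3500/v1.0/actors/cat/leroy/method/feed \
  -H "Content-Type: application/json" \
  -d '{"food": "fish"}'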
Another, and perhaps more convenient, way of interacting with actors is via SDKs. Dapr currently supports actors SDKs in .NET, Java, and Python.
Refer to Dapr Actor Features for more details.
Concurrency
The Dapr actor runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object’s code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.
Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations.

Reentrancy
To allow actors to “re-enter” and invoke methods on themselves, see Actor Reentrancy.
Turn-based access
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr actor runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
The Dapr actor runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.

Next steps
Timers and reminders >>Related links
1.6.3 - Actor runtime configuration parameters
You can modify the default Dapr actor runtime behavior using the following configuration parameters.
Parameter | Description | Default |
---|---|---|
entities | The actor types supported by this host. | N/A |
actorIdleTimeout | The timeout before deactivating an idle actor. Checks for timeouts occur every actorScanInterval interval. | 60 minutes |
actorScanInterval | The duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than actorIdleTimeout will be deactivated. | 30 seconds |
drainOngoingCallTimeout | The duration to wait when draining rebalanced actors. This specifies the timeout for the current active actor method to finish. If there is no current actor method call, this is ignored. | 60 seconds |
drainRebalancedActors | If true, Dapr will wait for drainOngoingCallTimeout duration to allow a current actor call to complete before trying to deactivate an actor. | true |
reentrancy (ActorReentrancyConfig ) | Configure the reentrancy behavior for an actor. If not provided, reentrancy is disabled. | disabled, false |
remindersStoragePartitions | Configure the number of partitions for actor’s reminders. If not provided, all reminders are saved as a single record in actor’s state store. | 0 |
entitiesConfig | Configure each actor type individually with an array of configurations. Any entity specified in the individual entity configurations must also be specified in the top level entities field. | N/A |
Examples
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
// Register actor runtime with DI
services.AddActors(options =>
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
// Configure default settings
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
options.ActorScanInterval = TimeSpan.FromSeconds(30);
options.DrainOngoingCallTimeout = TimeSpan.FromSeconds(60);
options.DrainRebalancedActors = true;
options.RemindersStoragePartitions = 7;
options.ReentrancyConfig = new() { Enabled = false };
// Add a configuration for a specific actor type.
// This actor type must have a matching value in the base level 'entities' field. If it does not, the configuration will be ignored.
// If there is a matching entity, the values here will be used to overwrite any values specified in the root configuration.
// In this example, `ReentrantActor` has reentrancy enabled; however, 'MyActor' will not have reentrancy enabled.
options.Actors.RegisterActor<ReentrantActor>(typeOptions: new()
{
ReentrancyConfig = new()
{
Enabled = true,
}
});
});
// Register additional services for use with actors
services.AddSingleton<BankService>();
}
import { CommunicationProtocolEnum, DaprClient, DaprServer } from "@dapr/dapr";
// Configure the actor runtime with the DaprClientOptions.
const clientOptions = {
actor: {
actorIdleTimeout: "1h",
actorScanInterval: "30s",
drainOngoingCallTimeout: "1m",
drainRebalancedActors: true,
reentrancy: {
enabled: true,
maxStackDepth: 32,
},
remindersStoragePartitions: 0,
},
};
// Use the options when creating DaprServer and DaprClient.
// Note, DaprServer creates a DaprClient internally, which needs to be configured with clientOptions.
const server = new DaprServer(serverHost, serverPort, daprHost, daprPort, clientOptions);
const client = new DaprClient(daprHost, daprPort, CommunicationProtocolEnum.HTTP, clientOptions);
See the documentation on writing actors with the JavaScript SDK.
from datetime import timedelta
from dapr.actor.runtime.config import ActorRuntimeConfig, ActorReentrancyConfig
from dapr.actor.runtime.runtime import ActorRuntime

ActorRuntime.set_actor_config(
    ActorRuntimeConfig(
        actor_idle_timeout=timedelta(hours=1),
        actor_scan_interval=timedelta(seconds=30),
        drain_ongoing_call_timeout=timedelta(minutes=1),
        drain_rebalanced_actors=True,
        reentrancy=ActorReentrancyConfig(enabled=False),
        reminders_storage_partitions=7
    )
)
// import io.dapr.actors.runtime.ActorRuntime;
// import java.time.Duration;
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
ActorRuntime.getInstance().getConfig().setDrainOngoingCallTimeout(Duration.ofSeconds(60));
ActorRuntime.getInstance().getConfig().setDrainBalancedActors(true);
ActorRuntime.getInstance().getConfig().setActorReentrancyConfig(false, null);
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
const (
defaultActorType = "basicType"
reentrantActorType = "reentrantType"
)
type daprConfig struct {
Entities []string `json:"entities,omitempty"`
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
ActorScanInterval string `json:"actorScanInterval,omitempty"`
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
Reentrancy config.ReentrancyConfig `json:"reentrancy,omitempty"`
EntitiesConfig []config.EntityConfig `json:"entitiesConfig,omitempty"`
}
var daprConfigResponse = daprConfig{
Entities: []string{defaultActorType, reentrantActorType},
ActorIdleTimeout: actorIdleTimeout,
ActorScanInterval: actorScanInterval,
DrainOngoingCallTimeout: drainOngoingCallTimeout,
DrainRebalancedActors: drainRebalancedActors,
Reentrancy: config.ReentrancyConfig{Enabled: false},
EntitiesConfig: []config.EntityConfig{
{
// Add a configuration for a specific actor type.
// This actor type must have a matching value in the base level 'entities' field. If it does not, the configuration will be ignored.
// If there is a matching entity, the values here will be used to overwrite any values specified in the root configuration.
// In this example, `reentrantActorType` has reentrancy enabled; however, 'defaultActorType' will not have reentrancy enabled.
Entities: []string{reentrantActorType},
Reentrancy: config.ReentrancyConfig{
Enabled: true,
MaxStackDepth: &maxStackDepth,
},
},
},
}
func configHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(daprConfigResponse)
}
Related links
1.6.4 - Namespaced actors
Namespacing in Dapr provides isolation, and thus multi-tenancy. With actor namespacing, the same actor type can be deployed into different namespaces. You can call instances of these actors in the same namespace.
Note
Each namespaced actor deployment must use its own separate state store, especially if the same actor type is used across namespaces. In other words, no namespace information is written as part of the actor record, and hence separate state stores are required for each namespace. See the Configuring actor state stores for namespacing section for examples.
Creating and configuring namespaces
You can use namespaces either in self-hosted mode or on Kubernetes.
In self-hosted mode, you can specify the namespace for a Dapr instance by setting the NAMESPACE environment variable.
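For example, a sketch of starting an actor application in the namespace-actorA namespace in self-hosted mode (the app id and run command are placeholders):
NAMESPACE=namespace-actorA dapr run --app-id myactorapp --app-port 6001 -- python3 myactorapp.py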
On Kubernetes, you can create and configure namespaces when deploying actor applications. For example, start with the following kubectl commands:
kubectl create namespace namespace-actorA
kubectl config set-context --current --namespace=namespace-actorA
Then, deploy your actor applications into this namespace (in the example, namespace-actorA).
Configuring actor state stores for namespacing
Each namespaced actor deployment must use its own separate state store. While you could use different physical databases for each actor namespace, some state store components provide a way to logically separate data by table, prefix, collection, and more. This allows you to use the same physical database for multiple namespaces, as long as you provide the logical separation in the Dapr component definition.
Some examples are provided below.
Example 1: By a prefix in etcd
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.etcd
version: v2
metadata:
- name: endpoints
value: localhost:2379
- name: keyPrefixPath
value: namespace-actorA
- name: actorStateStore
value: "true"
Example 2: By table name in SQLite
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.sqlite
version: v1
metadata:
- name: connectionString
value: "data.db"
- name: tableName
value: "namespace-actorA"
- name: actorStateStore
value: "true"
Example 3: By logical database number in Redis
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    secretKeyRef:
      name: redis-secret
      key: redis-password
  - name: actorStateStore
    value: "true"
  - name: redisDB
    value: "1"
auth:
  secretStore: <SECRET_STORE_NAME>
Check your state store component specs to see what they provide.
Note
Namespaced actors use the multi-tenant Placement service. With this control plane service, where each application deployment has its own namespace, sidecars belonging to an application in namespace “ActorA” won’t receive placement information for an application in namespace “ActorB”.
Next steps
1.6.5 - Actors timers and reminders
Actors can schedule periodic work on themselves by registering either timers or reminders.
The functionality of timers and reminders is very similar. The main difference is that the Dapr actor runtime does not retain any information about timers after deactivation, while it persists reminder information using the Dapr actor state provider.
This distinction allows users to trade off between light-weight but stateless timers vs. more resource-demanding but stateful reminders.
The scheduling configuration of timers and reminders is identical, as summarized below:
dueTime is an optional parameter that sets either the time at which, or the time interval after which, the callback is invoked for the first time. If dueTime is omitted, the callback is invoked immediately after timer/reminder registration.
Supported formats:
- RFC3339 date format, e.g. 2020-10-02T15:00:00Z
- time.Duration format, e.g. 2h30m
- ISO 8601 duration format, e.g. PT2H30M
period is an optional parameter that sets the time interval between two consecutive callback invocations. When specified in the ISO 8601-1 duration format, you can also configure the number of repetitions in order to limit the total number of callback invocations. If period is omitted, the callback will be invoked only once.
Supported formats:
- time.Duration format, e.g. 2h30m
- ISO 8601 duration format, e.g. PT2H30M, R5/PT1M30S
ttl is an optional parameter that sets either the time at which, or the time interval after which, the timer/reminder expires and is deleted. If ttl is omitted, no restrictions are applied.
Supported formats:
- RFC3339 date format, e.g. 2020-10-02T15:00:00Z
- time.Duration format, e.g. 2h30m
- ISO 8601 duration format, e.g. PT2H30M
The actor runtime validates the scheduling configuration and returns an error on invalid input.
When you specify both the number of repetitions in period and a ttl, the timer/reminder is stopped when either condition is met.
Actor timers
You can register a callback on an actor to be executed based on a timer.
The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.
The Dapr actor runtime saves changes made to the actor’s state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.
All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actor runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
You can create a timer for an actor by calling the HTTP/gRPC request to Dapr as shown below, or via Dapr SDK.
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
Examples
The timer parameters are specified in the request body.
The following request body configures a timer with a dueTime of 9 seconds and a period of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
{
"dueTime":"0h0m9s0ms",
"period":"0h0m3s0ms"
}
The following request body configures a timer with a period of 3 seconds (in ISO 8601 duration format). It also limits the number of invocations to 10. This means it will fire 10 times: first, immediately after registration, then every 3 seconds after that.
{
  "period":"R10/PT3S"
}
The following request body configures a timer with a period of 3 seconds (in ISO 8601 duration format) and a ttl of 20 seconds. This means it fires immediately after registration, then every 3 seconds after that, for a duration of 20 seconds.
{
"period":"PT3S",
"ttl":"20s"
}
The following request body configures a timer with a dueTime of 10 seconds, a period of 3 seconds, and a ttl of 10 seconds. It also limits the number of invocations to 4. This means it will first fire after 10 seconds, then every 3 seconds after that for a duration of 10 seconds, but no more than 4 times in total.
{
"dueTime":"10s",
"period":"R4/PT3S",
"ttl":"10s"
}
You can remove the actor timer by calling
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
Refer to the API spec for more details.
Actor reminders
Note
In Dapr v1.15, actor reminders are stored by default in the Scheduler service. When upgrading to Dapr v1.15, all existing reminders are automatically migrated to the Scheduler service as a one-time operation for each actor type, with no loss of reminders.
Reminders are a mechanism to trigger persistent callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors’ reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by calling the HTTP/gRPC request to Dapr as shown below, or via Dapr SDK.
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
The request structure for reminders is identical to that of timers. Please refer to the actor timers examples above.
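For instance, here is a minimal Python sketch (assuming a sidecar on the default port 3500; the actor type, actor ID, and reminder name are hypothetical) that registers a reminder which first fires after 10 seconds and then repeats every minute:
import requests

# Hypothetical actor type, actor ID, and reminder name for illustration.
url = "http://localhost:3500/v1.0/actors/MyActorType/actor-1/reminders/checkInventory"
body = {
    "dueTime": "10s",  # first invocation after 10 seconds
    "period": "1m"     # then every minute
}
resp = requests.post(url, json=body)
print(resp.status_code)  # a 2xx status indicates the reminder was registered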
Retrieve actor reminder
You can retrieve the actor reminder by calling
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
Remove the actor reminder
You can remove the actor reminder by calling
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
If an actor reminder is triggered and the app does not return a 2xx status code to the runtime (for example, because of a connection issue), actor reminders are retried up to three times with a backoff interval of one second between each attempt. Additional retries may be attempted in accordance with any optionally applied actor resiliency policy.
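To make the callback contract concrete, below is a minimal FastAPI sketch of an app-side reminder handler, assuming the actors API spec route for reminder delivery (PUT /actors/<actorType>/<actorId>/method/remind/<name>); the actor type and handler logic are illustrative:
from fastapi import FastAPI, Request

app = FastAPI()

# Dapr delivers reminder callbacks as PUT requests on this route (per the actors API spec).
# "MyActorType" and the handler body are illustrative assumptions.
@app.put("/actors/MyActorType/{actor_id}/method/remind/{reminder_name}")
async def handle_reminder(actor_id: str, reminder_name: str, request: Request):
    payload = await request.json()  # contains dueTime, period, and any registered data
    print(f"Reminder {reminder_name} fired for actor {actor_id}: {payload}")
    return {}  # any 2xx response tells the runtime the callback succeeded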
Refer to the API spec for more details.
Error handling
When an actor’s method completes successfully, the runtime will continue to invoke the method at the specified timer or reminder schedule. However, if the method throws an exception, the runtime catches it and logs the error message in the Dapr sidecar logs, without retrying.
To allow actors to recover from failures and retry after a crash or restart, you can persist an actor’s state by configuring a state store, like Redis or Azure Cosmos DB.
If an invocation of the method fails, the timer is not removed. Timers are only removed when:
- The sidecar crashes
- The executions run out
- You delete it explicitly
Reminder data serialization format
Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for internal reminders data for workflow via both the Placement and Scheduler services. Depending on throughput and size of the payload, this can result in significant performance improvements, giving developers a higher throughput and lower latency.
Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
Note
Protobuf serialization will become the default format in Dapr 1.14. Reminder data saved in protobuf format cannot be read in Dapr 1.12.x and earlier versions. It's recommended to test this feature in Dapr v1.13 and verify that it works as expected with your database before taking it into production.
Note
If you use protobuf serialization in Dapr v1.13 and need to downgrade to an earlier Dapr version, the reminder data will be incompatible with versions 1.12.x and earlier. Once you save your reminder data in protobuf format, you cannot move it back to JSON format.
Enabling protobuf serialization on Kubernetes
To use protobuf serialization for actor reminders on Kubernetes, use the following Helm value:
--set dapr_placement.maxActorApiLevel=20
Enabling protobuf serialization on self-hosted
To use protobuf serialization for actor reminders on self-hosted, use the following daprd
flag:
--max-api-level=20
Next steps
Configure actor runtime behavior >>
Related links
1.6.6 - How to: Enable partitioning of actor reminders
Warning
This feature is only relevant when using state store actor reminders, which are no longer enabled by default. As of v1.15, Dapr uses the far more performant Scheduler actor reminders by default. This page is only relevant if you are using the legacy state store actor reminders, enabled by setting the SchedulerReminders feature flag to false. It is highly recommended that you use the Scheduler actor reminders feature.
Actor reminders are persisted and continue to be triggered after sidecar restarts. Applications with multiple reminders registered can experience the following issues:
- Low throughput on reminders registration and de-registration
- Limited number of reminders registered based on the single record size limit on the state store
To sidestep these issues, applications can enable partitioning of actor reminders, so that reminder data is distributed across multiple keys in the state store.
- A metadata record in actors||<actor type>||metadata is used to store the persisted configuration for a given actor type.
- Multiple records store subsets of the reminders for the same actor type.
Key | Value |
---|---|
actors||<actor type>||metadata | { "id": <actor metadata identifier>, "actorRemindersMetadata": { "partitionCount": <number of partitions for reminders> } } |
actors||<actor type>||<actor metadata identifier>||reminders||1 | [ <reminder 1-1>, <reminder 1-2>, ... , <reminder 1-n> ] |
actors||<actor type>||<actor metadata identifier>||reminders||2 | [ <reminder 2-1>, <reminder 2-2>, ... , <reminder 2-m> ] |
If you need to change the number of partitions, Dapr’s sidecar will automatically redistribute the reminders’ set.
Configure the actor runtime to partition actor reminders
Similar to other actor configuration elements, the actor runtime provides the appropriate configuration to partition actor reminders via the actor’s endpoint for GET /dapr/config
. Select your preferred language for an actor runtime configuration example.
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
// Register actor runtime with DI
services.AddActors(options =>
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
// Configure default settings
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
options.ActorScanInterval = TimeSpan.FromSeconds(30);
options.RemindersStoragePartitions = 7;
});
// Register additional services for use with actors
services.AddSingleton<BankService>();
}
import { CommunicationProtocolEnum, DaprClient, DaprServer } from "@dapr/dapr";
// Configure the actor runtime with the DaprClientOptions.
const clientOptions = {
  actor: {
    // Use the same partition count as the other examples on this page
    remindersStoragePartitions: 7,
  },
};
const actor = builder.build(new ActorId("my-actor")); // "builder" is an actor proxy builder from the JS SDK; its construction is omitted here
// Register a reminder, it has a default callback: `receiveReminder`
await actor.registerActorReminder(
"reminder-id", // Unique name of the reminder.
Temporal.Duration.from({ seconds: 2 }), // DueTime
Temporal.Duration.from({ seconds: 1 }), // Period
Temporal.Duration.from({ seconds: 1 }), // TTL
100, // State to be sent to reminder callback.
);
// Delete the reminder
await actor.unregisterActorReminder("reminder-id");
See the documentation on writing actors with the JavaScript SDK.
from datetime import timedelta
from dapr.actor.runtime.config import ActorRuntimeConfig
from dapr.actor.runtime.runtime import ActorRuntime

ActorRuntime.set_actor_config(
    ActorRuntimeConfig(
        actor_idle_timeout=timedelta(hours=1),
        actor_scan_interval=timedelta(seconds=30),
        reminders_storage_partitions=7
    )
)
// import io.dapr.actors.runtime.ActorRuntime;
// import java.time.Duration;
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
type daprConfig struct {
Entities []string `json:"entities,omitempty"`
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
ActorScanInterval string `json:"actorScanInterval,omitempty"`
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
RemindersStoragePartitions int `json:"remindersStoragePartitions,omitempty"`
}
var daprConfigResponse = daprConfig{
[]string{defaultActorType},
actorIdleTimeout,
actorScanInterval,
drainOngoingCallTimeout,
drainRebalancedActors,
7,
}
func configHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(daprConfigResponse)
}
The following is an example of a valid configuration for reminder partitioning:
{
"entities": [ "MyActorType", "AnotherActorType" ],
"remindersStoragePartitions": 7
}
Handle configuration changes
To configure actor reminders partitioning, Dapr persists the actor type metadata in the actor’s state store. This allows the configuration changes to be applied globally, not just in a single sidecar instance.
In addition, you can only increase the number of partitions, not decrease. This allows Dapr to automatically redistribute the data on a rolling restart, where one or more partition configurations might be active.
Demo
Watch this video for a demo of actor reminder partitioning:
Next steps
Interact with virtual actors >>Related links
1.6.7 - How-to: Interact with virtual actors using scripting
Learn how to use virtual actors by calling HTTP/gRPC endpoints.
Invoke the actor method
You can interact with Dapr to invoke the actor method by calling the HTTP/gRPC endpoint.
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
Provide data for the actor method in the request body. The response for the request, which is the data returned by the actor method call, is in the response body. An example is sketched below.
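As a quick sketch (assuming a sidecar on the default port 3500; the actor type, actor ID, method name, and payload are hypothetical), invoking an actor method from Python might look like:
import requests

# Hypothetical actor type, actor ID, and method name for illustration.
url = "http://localhost:3500/v1.0/actors/MyActorType/actor-1/method/GetBalance"
resp = requests.post(url, json={"currency": "USD"})  # request body is passed to the method
print(resp.status_code, resp.text)  # response body holds the method's return value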
Refer to the Actors API spec for more details.
Note
Alternatively, you can use the Dapr SDKs to work with actors.
Save state with actors
You can interact with Dapr via HTTP/gRPC endpoints to save state reliably using the Dapr actor state management capability.
To use actors, your state store must support multi-item transactions. This means your state store component must implement the TransactionalStore
interface.
See the list of components that support transactions/actors. Only a single state store component can be used as the state store for all actors.
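For illustration, here is a minimal Python sketch of the actors state transaction endpoint, which commits multiple operations atomically (the actor type, actor ID, and keys are hypothetical):
import requests

# Hypothetical actor type, actor ID, and keys; both operations are applied as one transaction.
url = "http://localhost:3500/v1.0/actors/MyActorType/actor-1/state"
operations = [
    {"operation": "upsert", "request": {"key": "balance", "value": 42}},
    {"operation": "delete", "request": {"key": "stale-key"}},
]
resp = requests.post(url, json=operations)
print(resp.status_code)  # a 2xx status indicates the transaction was committed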
Next steps
Actor reentrancy >>
Related links
1.6.8 - How-to: Enable and use actor reentrancy in Dapr
A core tenet of the virtual actor pattern is the single-threaded nature of actor execution. Without reentrancy, the Dapr runtime locks on all actor requests. A second request wouldn’t be able to start until the first had completed. This means an actor cannot call itself, or have another actor call into it, even if it’s part of the same call chain.
Reentrancy solves this by allowing requests from the same chain, or context, to re-enter into an already locked actor. This proves useful in scenarios where:
- An actor wants to call a method on itself
- Actors are used in workflows to perform work, then call back onto the coordinating actor.
Examples of chains that reentrancy allows are shown below:
Actor A -> Actor A
Actor A -> Actor B -> Actor A
With reentrancy, you can perform more complex actor calls, without sacrificing the single-threaded behavior of virtual actors.

The maxStackDepth
parameter sets a value that controls how many reentrant calls can be made to the same actor. By default, this is set to 32, which is more than sufficient in most cases.
Configure the actor runtime to enable reentrancy
The reentrant actor must provide the appropriate configuration. This is done by the actor’s endpoint for GET /dapr/config
, similar to other actor configuration elements.
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<BankService>();
services.AddActors(options =>
{
options.Actors.RegisterActor<DemoActor>();
options.ReentrancyConfig = new Dapr.Actors.ActorReentrancyConfig()
{
Enabled = true,
MaxStackDepth = 32,
};
});
}
}
import { CommunicationProtocolEnum, DaprClient, DaprServer } from "@dapr/dapr";
// Configure the actor runtime with the DaprClientOptions.
const clientOptions = {
actor: {
reentrancy: {
enabled: true,
maxStackDepth: 32,
},
},
};
from fastapi import FastAPI
from dapr.ext.fastapi import DaprActor
from dapr.actor.runtime.config import ActorRuntimeConfig, ActorReentrancyConfig
from dapr.actor.runtime.runtime import ActorRuntime
from demo_actor import DemoActor
reentrancyConfig = ActorReentrancyConfig(enabled=True)
config = ActorRuntimeConfig(reentrancy=reentrancyConfig)
ActorRuntime.set_actor_config(config)
app = FastAPI(title=f'{DemoActor.__name__}Service')
actor = DaprActor(app)
@app.on_event("startup")
async def startup_event():
# Register DemoActor
await actor.register_actor(DemoActor)
@app.get("/MakeExampleReentrantCall")
def do_something_reentrant():
# invoke another actor here, reentrancy will be handled automatically
return
Here is a snippet of an actor written in Go providing the reentrancy configuration via the HTTP API. Reentrancy has not yet been included in the Go SDK.
type daprConfig struct {
Entities []string `json:"entities,omitempty"`
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
ActorScanInterval string `json:"actorScanInterval,omitempty"`
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
Reentrancy config.ReentrancyConfig `json:"reentrancy,omitempty"`
}
var daprConfigResponse = daprConfig{
[]string{defaultActorType},
actorIdleTimeout,
actorScanInterval,
drainOngoingCallTimeout,
drainRebalancedActors,
config.ReentrancyConfig{Enabled: true, MaxStackDepth: &maxStackDepth},
}
func configHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(daprConfigResponse)
}
Handle reentrant requests
The key to a reentrant request is the Dapr-Reentrancy-Id
header. The value of this header is used to match requests to their call chain and allow them to bypass the actor’s lock.
The header is generated by the Dapr runtime for any actor request that has a reentrant config specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below is an example of an actor handling a reentrant request:
func reentrantCallHandler(w http.ResponseWriter, r *http.Request) {
/*
* Omitted.
*/
req, _ := http.NewRequest("PUT", url, bytes.NewReader(nextBody))
reentrancyID := r.Header.Get("Dapr-Reentrancy-Id")
req.Header.Add("Dapr-Reentrancy-Id", reentrancyID)
client := http.Client{}
resp, err := client.Do(req)
/*
* Omitted.
*/
}
Demo
Watch this video on how to use actor reentrancy.
Next steps
Actors in the Dapr SDKs
Related links
1.7 - Secrets management
More about Dapr Secrets
Learn more about how to use Dapr Secrets:
- Try the Secrets quickstart.
- Explore secrets via any of the supporting Dapr SDKs.
- Review the Secrets API reference documentation.
- Browse the supported secrets component specs.
1.7.1 - Secrets management overview
Applications usually store sensitive information in secrets by using a dedicated secret store. For example, you authenticate databases, services, and external systems with connection strings, keys, tokens, and other application-level secrets stored in a secret store, such as AWS Secrets Manager, Azure Key Vault, Hashicorp Vault, etc.
To access these secret stores, the application imports the secret store SDK, often requiring a fair amount of unrelated boilerplate code. This poses an even greater challenge in multi-cloud scenarios, where different vendor-specific secret stores may be used.
Secrets management API
Dapr’s dedicated secrets building block API makes it easier for developers to consume application secrets from a secret store. To use Dapr’s secret store building block, you:
- Set up a component for a specific secret store solution.
- Retrieve secrets using the Dapr secrets API in the application code.
- Optionally, reference secrets in Dapr component files.
The following overview video and demo demonstrate how Dapr secrets management works.
Features
The secrets management API building block brings several features to your application.
Configure secrets without changing application code
You can call the secrets API in your application code to retrieve and use secrets from Dapr supported secret stores. Watch this video for an example of how the secrets management API can be used in your application.
For example, the diagram below shows an application requesting the secret called “mysecret” from a secret store called “vault” from a configured cloud secret store.

Applications can also use the secrets API to access secrets from a Kubernetes secret store. By default, Dapr enables a built-in Kubernetes secret store in Kubernetes mode, deployed via:
- The Helm defaults, or
- dapr init -k
If you are using another secret store, you can disable (not configure) the Dapr Kubernetes secret store by adding the annotation dapr.io/disable-builtin-k8s-secret-store: "true" to the deployment.yaml file, as sketched below. The default is false.
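For example, a minimal sketch of where this annotation goes in a Kubernetes Deployment's pod template (the deployment name and app ID are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/disable-builtin-k8s-secret-store: "true"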
In the example below, the application retrieves the same secret “mysecret” from a Kubernetes secret store.

In Azure, you can configure Dapr to retrieve secrets using managed identities to authenticate with Azure Key Vault. In the example below:
- An Azure Kubernetes Service (AKS) cluster is configured to use managed identities.
- Dapr uses pod identities to retrieve secrets from Azure Key Vault on behalf of the application.

In the examples above, the application code did not have to change to get the same secret. Dapr uses the secret management components via the secrets management building block API.
Try out the secrets API using one of our quickstarts or tutorials.
Reference secret stores in Dapr components
When configuring Dapr components such as state stores, you’re often required to include credentials in components files. Alternatively, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and recommended best practice, especially in production environments.
For more information, read referencing secret stores in components.
Limit access to secrets
To provide more granular control on access to secrets, Dapr provides the ability to define scopes and restrict access permissions. Learn more about using secret scoping.
Try out secrets management
Quickstarts and tutorials
Want to put the Dapr secrets management API to the test? Walk through the following quickstart and tutorials to see Dapr secrets in action:
Quickstart/tutorial | Description |
---|---|
Secrets management quickstart | Retrieve secrets in the application code from a configured secret store using the secrets management API. |
Secret Store tutorial | Demonstrates the use of Dapr Secrets API to access secret stores. |
Start managing secrets directly in your app
Want to skip the quickstarts? Not a problem. You can try out the secret management building block directly in your application to retrieve and manage secrets. After Dapr is installed, you can begin using the secrets management API starting with the secrets how-to guide.
Next steps
- Learn how to use secret scoping.
- Read the secrets API reference doc.
1.7.2 - How To: Retrieve a secret
Now that you’ve learned what the Dapr secrets building block provides, learn how it can work in your service. This guide demonstrates how to call the secrets API and retrieve secrets in your application code from a configured secret store.

Note
If you haven’t already, try out the secrets management quickstart for a quick walk-through on how to use the secrets API.
Set up a secret store
Before retrieving secrets in your application’s code, you must configure a secret store component. This example configures a secret store that uses a local JSON file to store secrets.
Warning
In a production-grade application, local secret stores are not recommended. Find alternatives to securely manage your secrets.
In your project directory, create a file named secrets.json
with the following contents:
{
"secret": "Order Processing pass key"
}
Create a new directory named components
. Navigate into that directory and create a component file named local-secret-store.yaml
with the following contents:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: localsecretstore
spec:
type: secretstores.local.file
version: v1
metadata:
- name: secretsFile
value: secrets.json #path to secrets file
- name: nestedSeparator
value: ":"
Warning
The path to the secret store JSON is relative to where you call dapr run.
For more information:
- See how to configure a different kind of secret store.
- Review supported secret stores to see specific details required for different secret store solutions.
Get a secret
Get the secret by calling the Dapr sidecar using the secrets API:
curl http://localhost:3601/v1.0/secrets/localsecretstore/secret
See a full API reference.
Calling the secrets API from your code
Now that you’ve set up the local secret store, call Dapr to get the secrets from your application code. Below are code examples that leverage Dapr SDKs for retrieving a secret.
using System;
using System.Threading.Tasks;
using Dapr.Client;
const string SECRET_STORE_NAME = "localsecretstore";
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();
//Resolve a DaprClient from DI
var daprClient = app.Services.GetRequiredService<DaprClient>();
//Use the Dapr SDK to get a secret
var secret = await daprClient.GetSecretAsync(SECRET_STORE_NAME, "secret");
Console.WriteLine($"Result: {string.Join(", ", secret)}");
//dependencies
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Map;
//code
@SpringBootApplication
public class OrderProcessingServiceApplication {
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
private static final ObjectMapper JSON_SERIALIZER = new ObjectMapper();
private static final String SECRET_STORE_NAME = "localsecretstore";
public static void main(String[] args) throws InterruptedException, JsonProcessingException {
DaprClient client = new DaprClientBuilder().build();
//Using Dapr SDK to get a secret
Map<String, String> secret = client.getSecret(SECRET_STORE_NAME, "secret").block();
log.info("Result: " + JSON_SERIALIZER.writeValueAsString(secret));
}
}
#dependencies
import logging
from dapr.clients import DaprClient
#code
logging.basicConfig(level = logging.INFO)
DAPR_STORE_NAME = "localsecretstore"
key = 'secret'
with DaprClient() as client:
#Using Dapr SDK to get a secret
secret = client.get_secret(store_name=DAPR_STORE_NAME, key=key)
logging.info('Result: ')
logging.info(secret.secret)
#Using Dapr SDK to get bulk secrets
secret = client.get_bulk_secret(store_name=DAPR_STORE_NAME)
logging.info('Result for bulk secret: ')
logging.info(sorted(secret.secrets.items()))
//dependencies
import (
"context"
"log"
dapr "github.com/dapr/go-sdk/client"
)
//code
func main() {
client, err := dapr.NewClient()
SECRET_STORE_NAME := "localsecretstore"
if err != nil {
panic(err)
}
defer client.Close()
ctx := context.Background()
//Using Dapr SDK to get a secret
secret, err := client.GetSecret(ctx, SECRET_STORE_NAME, "secret", nil)
if secret != nil {
log.Println("Result : ")
log.Println(secret)
}
//Using Dapr SDK to get bulk secrets
secretBulk, err := client.GetBulkSecret(ctx, SECRET_STORE_NAME, nil)
if secretBulk != nil {
log.Println("Result for bulk: ")
log.Println(secretBulk)
}
}
//dependencies
import { DaprClient, HttpMethod, CommunicationProtocolEnum } from '@dapr/dapr';
//code
const daprHost = "127.0.0.1";
async function main() {
const client = new DaprClient({
daprHost,
daprPort: process.env.DAPR_HTTP_PORT,
communicationProtocol: CommunicationProtocolEnum.HTTP,
});
const SECRET_STORE_NAME = "localsecretstore";
//Using Dapr SDK to get a secret
var secret = await client.secret.get(SECRET_STORE_NAME, "secret");
console.log("Result: " + secret);
//Using Dapr SDK to get bulk secrets
secret = await client.secret.getBulk(SECRET_STORE_NAME);
console.log("Result for bulk: " + secret);
}
main();
Related links
- Review the Dapr secrets API features.
- Learn how to use secrets scopes
- Read the secrets API reference and review the supported secrets.
- Learn how to set up different secret store components and how to reference secrets in your component.
1.7.3 - How To: Use secret scoping
Once you configure a secret store for your application, any secret defined within that store is accessible by default from the Dapr application.
You can limit the Dapr application’s access to specific secrets by defining secret scopes. Simply add a secret scope policy to the application configuration with restrictive permissions.
The secret scoping policy applies to any secret store, including:
- A local secret store
- A Kubernetes secret store
- A public cloud secret store
For details on how to set up a secret store, read How To: Retrieve a secret.
Watch this video for a demo on how to use secret scoping with your application.
Scenario 1 : Deny access to all secrets for a secret store
In this example, all secret access is denied to an application running on a Kubernetes cluster, which has a configured Kubernetes secret store named mycustomsecretstore
. Aside from the user-defined custom store, the example also configures the Kubernetes default store (named kubernetes
) to ensure all secrets are denied access. Learn more about the Kubernetes default secret store.
Define the following appconfig.yaml
configuration and apply it to the Kubernetes cluster using the command kubectl apply -f appconfig.yaml
.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
secrets:
scopes:
- storeName: kubernetes
defaultAccess: deny
- storeName: mycustomsecretstore
defaultAccess: deny
For applications that need to be denied access to the Kubernetes secret store, follow these instructions, and add the following annotation to the application pod:
dapr.io/config: appconfig
With this defined, the application no longer has access to any secrets in the Kubernetes secret store.
Scenario 2 : Allow access to only certain secrets in a secret store
This example uses a secret store named vault
. This could be a Hashicorp secret store component set on your application. To allow a Dapr application to have access to only secret1
and secret2
in the vault
secret store, define the following appconfig.yaml
:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
secrets:
scopes:
- storeName: vault
defaultAccess: deny
allowedSecrets: ["secret1", "secret2"]
The default access to the vault
secret store is deny
, while some secrets are accessible by the application, based on the allowedSecrets
list. Learn how to apply configuration to the sidecar.
Scenario 3: Deny access to certain sensitive secrets in a secret store
Define the following config.yaml
:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
secrets:
scopes:
- storeName: vault
defaultAccess: allow # this is the default value, line can be omitted
deniedSecrets: ["secret1", "secret2"]
This example configuration explicitly denies access to secret1
and secret2
from the secret store named vault
while allowing access to all other secrets. Learn how to apply configuration to the sidecar.
Permission priority
The allowedSecrets
and deniedSecrets
list values take priority over the defaultAccess
policy.
Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission |
---|---|---|---|---|
1 - Only default access | deny/allow | empty | empty | deny/allow |
2 - Default deny with allowed list | deny | [“s1”] | empty | only “s1” can be accessed |
3 - Default allow with denied list | allow | empty | [“s1”] | only “s1” cannot be accessed |
4 - Default allow with allowed list | allow | [“s1”] | empty | only “s1” can be accessed |
5 - Default deny with denied list | deny | empty | [“s1”] | deny |
6 - Default deny/allow with both lists | deny/allow | [“s1”] | [“s2”] | only “s1” can be accessed |
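For example, a configuration sketch matching scenario 6 in the table above, where both lists are set and only “s1” remains accessible regardless of the defaultAccess value:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      - storeName: vault
        defaultAccess: deny
        allowedSecrets: ["s1"]
        deniedSecrets: ["s2"]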
Related links
- List of secret stores
- Overview of secret stores
1.8 - Configuration
More about Dapr Configuration
Learn more about how to use Dapr Configuration:
- Try the Configuration quickstart.
- Explore configuration via any of the supporting Dapr SDKs.
- Review the Configuration API reference documentation.
- Browse the supported configuration component specs.
1.8.1 - Configuration overview
Consuming application configuration is a common task when writing applications. Frequently, configuration stores are used to manage this configuration data. A configuration item is often dynamic in nature and tightly coupled to the needs of the application that consumes it.
For example, application configuration can include:
- Names of secrets
- Different identifiers
- Partition or consumer IDs
- Names of databases to connect to, etc
Usually, configuration items are stored as key/value items in a state store or database. Developers or operators can change application configuration at runtime in the configuration store. Once changes are made, a service is notified to load the new configuration.
Configuration data is read-only from the application API perspective, with updates to the configuration store made through operator tooling. With Dapr’s configuration API, you can:
- Consume configuration items that are returned as read-only key/value pairs
- Subscribe to changes whenever a configuration item changes

Note
The Configuration API should not be confused with the Dapr sidecar and control plane configuration, which is used to set policies and settings on Dapr sidecar instances or the installed Dapr control plane.
Try out configuration
Quickstart
Want to put the Dapr configuration API to the test? Walk through the following quickstart to see the configuration API in action:
Quickstart | Description |
---|---|
Configuration quickstart | Get configuration items or subscribe to configuration changes using the configuration API. |
Start using the configuration API directly in your app
Want to skip the quickstarts? Not a problem. You can try out the configuration building block directly in your application to read and manage configuration data. After Dapr is installed, you can begin using the configuration API starting with the configuration how-to guide.
Watch the demo
Watch this demo of using the Dapr Configuration building block
Next steps
Follow these guides on:
1.8.2 - How-To: Manage configuration from a store
This example uses the Redis configuration store component to demonstrate how to retrieve a configuration item.

Note
If you haven’t already, try out the configuration quickstart for a quick walk-through on how to use the configuration API.
Create a configuration item in store
Create a configuration item in a supported configuration store. This can be a simple key-value item, with any key of your choice. As mentioned earlier, this example uses the Redis configuration store component.
Run Redis with Docker
docker run --name my-redis -p 6379:6379 -d redis:6
Save an item
Using the Redis CLI, connect to the Redis instance:
redis-cli -p 6379
Save a configuration item:
MSET orderId1 "101||1" orderId2 "102||1"
Configure a Dapr configuration store
Save the following component file to the default components folder on your machine. You can use this as the Dapr component YAML:
- For Kubernetes using
kubectl
. - When running with the Dapr CLI.
Note
Since the Redis configuration component has identical metadata to the Redis statestore.yaml component, you can simply copy/change the Redis state store component type if you already have a Redis statestore.yaml.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: configstore
spec:
type: configuration.redis
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: <PASSWORD>
Retrieve Configuration Items
Get configuration items
The following example shows how to get a saved configuration item using the Dapr Configuration API.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapr.Client;
const string CONFIG_STORE_NAME = "configstore";
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();
var client = app.Services.GetRequiredService<DaprClient>();
var configuration = await client.GetConfiguration(CONFIG_STORE_NAME, [ "orderId1", "orderId2" ]);
Console.WriteLine($"Got key=\n{configuration[0].Key} -> {configuration[0].Value}\n{configuration[1].Key} -> {configuration[1].Value}");
//dependencies
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprClient;
import io.dapr.client.domain.ConfigurationItem;
import io.dapr.client.domain.GetConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationRequest;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.util.ArrayList;
import java.util.List;
//code
private static final String CONFIG_STORE_NAME = "configstore";
public static void main(String[] args) throws Exception {
try (DaprClient client = (new DaprClientBuilder()).build()) {
List<String> keys = new ArrayList<>();
keys.add("orderId1");
keys.add("orderId2");
GetConfigurationRequest req = new GetConfigurationRequest(CONFIG_STORE_NAME, keys);
try {
Mono<List<ConfigurationItem>> items = client.getConfiguration(req);
items.block().forEach(System.out::println);
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
}
#dependencies
from dapr.clients import DaprClient
#code
with DaprClient() as d:
CONFIG_STORE_NAME = 'configstore'
keys = ['orderId1', 'orderId2']
#Startup time for dapr
d.wait(20)
configuration = d.get_configuration(store_name=CONFIG_STORE_NAME, keys=keys, config_metadata={})
print(f"Got key={configuration.items[0].key} value={configuration.items[0].value} version={configuration.items[0].version}")
package main
import (
"context"
"fmt"
dapr "github.com/dapr/go-sdk/client"
)
func main() {
ctx := context.Background()
client, err := dapr.NewClient()
if err != nil {
panic(err)
}
items, err := client.GetConfigurationItems(ctx, "configstore", []string{"orderId1", "orderId2"})
if err != nil {
panic(err)
}
for key, item := range items {
fmt.Printf("get config: key = %s value = %s version = %s",key,(*item).Value, (*item).Version)
}
}
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
// JS SDK does not support Configuration API over HTTP protocol yet
const protocol = CommunicationProtocolEnum.GRPC;
const host = process.env.DAPR_HOST ?? "localhost";
const port = process.env.DAPR_GRPC_PORT ?? 3500;
const DAPR_CONFIGURATION_STORE = "configstore";
const CONFIGURATION_ITEMS = ["orderId1", "orderId2"];
async function main() {
const client = new DaprClient(host, port, protocol);
// Get config items from the config store
try {
const config = await client.configuration.get(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS);
Object.keys(config.items).forEach((key) => {
console.log("Configuration for " + key + ":", JSON.stringify(config.items[key]));
});
} catch (error) {
console.log("Could not get config item, err:" + error);
process.exit(1);
}
}
main().catch((e) => console.error(e));
Launch a Dapr sidecar:
dapr run --app-id orderprocessing --dapr-http-port 3601
In a separate terminal, get the configuration item saved earlier:
curl http://localhost:3601/v1.0/configuration/configstore?key=orderId1
Launch a Dapr sidecar:
dapr run --app-id orderprocessing --dapr-http-port 3601
In a separate terminal, get the configuration item saved earlier:
Invoke-RestMethod -Uri 'http://localhost:3601/v1.0/configuration/configstore?key=orderId1'
Subscribe to configuration item updates
Below are code examples that leverage SDKs to subscribe to the keys [orderId1, orderId2] using the configstore store component.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapr.Client;
using System.Text.Json;
const string DAPR_CONFIGURATION_STORE = "configstore";
var CONFIGURATION_ITEMS = new List<string> { "orderId1", "orderId2" };
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();
var app = builder.Build();
var client = app.Services.GetRequiredService<DaprClient>();
// Subscribe for configuration changes
var subscribe = await client.SubscribeConfiguration(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS);
// Print configuration changes
await foreach (var items in subscribe.Source)
{
// First invocation when app subscribes to config changes only returns subscription id
if (items.Keys.Count == 0)
{
Console.WriteLine("App subscribed to config changes with subscription id: " + subscribe.Id);
continue;
}
var cfg = JsonSerializer.Serialize(items);
Console.WriteLine("Configuration update " + cfg);
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id orderprocessing -- dotnet run
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Dapr.Client;
using Dapr.Extensions.Configuration;
using System.Collections.Generic;
using System.Threading;
Console.WriteLine("Starting application.");
var builder = WebApplication.CreateBuilder(args);
// Unlike most other situations, we build a `DaprClient` here using its factory because we cannot rely on `IConfiguration`
// or other injected services to configure it because we haven't yet built the DI container.
var client = new DaprClientBuilder().Build();
// In a real-world application, you'd also add the following line to register the `DaprClient` with the DI container so
// it can be injected into other services. In this demonstration, it's not necessary as we're not injecting it anywhere.
// builder.Services.AddDaprClient();
// Get the initial value and continue to watch it for changes
builder.Configuration.AddDaprConfigurationStore("configstore", new List<string>() { "orderId1","orderId2" }, client, TimeSpan.FromSeconds(20));
builder.Configuration.AddStreamingDaprConfigurationStore("configstore", new List<string>() { "orderId1","orderId2" }, client, TimeSpan.FromSeconds(20));
await builder.Build().RunAsync();
Console.WriteLine("Closing application.");
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id orderprocessing -- dotnet run
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprClient;
import io.dapr.client.domain.ConfigurationItem;
import io.dapr.client.domain.GetConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationResponse;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.util.ArrayList;
import java.util.List;
//code
private static final String CONFIG_STORE_NAME = "configstore";
private static String subscriptionId = null;
public static void main(String[] args) throws Exception {
try (DaprClient client = (new DaprClientBuilder()).build()) {
// Subscribe for config changes
List<String> keys = new ArrayList<>();
keys.add("orderId1");
keys.add("orderId2");
Flux<SubscribeConfigurationResponse> subscription = client.subscribeConfiguration(CONFIG_STORE_NAME, keys);
// Read config changes for 20 seconds
subscription.subscribe((response) -> {
// First ever response contains the subscription id
if (response.getItems() == null || response.getItems().isEmpty()) {
subscriptionId = response.getSubscriptionId();
System.out.println("App subscribed to config changes with subscription id: " + subscriptionId);
} else {
response.getItems().forEach((k, v) -> {
System.out.println("Configuration update for " + k + ": {'value':'" + v.getValue() + "'}");
});
}
});
Thread.sleep(20000);
}
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id orderprocessing -- mvn spring-boot:run
#dependencies
from time import sleep
from dapr.clients import DaprClient
from dapr.clients.grpc._response import ConfigurationResponse
#code
def handler(id: str, resp: ConfigurationResponse):
for key in resp.items:
print(f"Subscribed item received key={key} value={resp.items[key].value} "
f"version={resp.items[key].version} "
f"metadata={resp.items[key].metadata}", flush=True)
def executeConfiguration():
with DaprClient() as d:
storeName = 'configstore'
keys = ['orderId1', 'orderId2']
id = d.subscribe_configuration(store_name=storeName, keys=keys,
handler=handler, config_metadata={})
print("Subscription ID is", id, flush=True)
sleep(20)
executeConfiguration()
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id orderprocessing -- python3 OrderProcessingService.py
package main
import (
"context"
"fmt"
"time"
dapr "github.com/dapr/go-sdk/client"
)
func main() {
ctx := context.Background()
client, err := dapr.NewClient()
if err != nil {
panic(err)
}
_, err = client.SubscribeConfigurationItems(ctx, "configstore", []string{"orderId1", "orderId2"}, func(id string, items map[string]*dapr.ConfigurationItem) {
for k, v := range items {
fmt.Printf("get updated config key = %s, value = %s version = %s \n", k, v.Value, v.Version)
}
})
if err != nil {
panic(err)
}
time.Sleep(20*time.Second)
}
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id orderprocessing -- go run main.go
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
// JS SDK does not support Configuration API over HTTP protocol yet
const protocol = CommunicationProtocolEnum.GRPC;
const host = process.env.DAPR_HOST ?? "localhost";
const port = process.env.DAPR_GRPC_PORT ?? 3500;
const DAPR_CONFIGURATION_STORE = "configstore";
const CONFIGURATION_ITEMS = ["orderId1", "orderId2"];
async function main() {
const client = new DaprClient(host, port, protocol);
// Subscribe to config updates
try {
const stream = await client.configuration.subscribeWithKeys(
DAPR_CONFIGURATION_STORE,
CONFIGURATION_ITEMS,
(config) => {
console.log("Configuration update", JSON.stringify(config.items));
}
);
// Unsubscribe to config updates and exit app after 20 seconds
setTimeout(() => {
stream.stop();
console.log("App unsubscribed to config changes");
process.exit(0);
}, 20000);
} catch (error) {
console.log("Error subscribing to config updates, err:" + error);
process.exit(1);
}
}
main().catch((e) => console.error(e));
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
dapr run --app-id orderprocessing --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
Unsubscribe from configuration item updates
After you’ve subscribed to watch configuration items, you will receive updates for all of the subscribed keys. To stop receiving updates, you need to explicitly call the unsubscribe API.
The following code examples show how to unsubscribe from configuration updates using the unsubscribe API.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapr.Client;
var builder = WebApplication.CreateBuilder();
builder.Services.AddDaprClient();
var app = builder.Build();
const string DAPR_CONFIGURATION_STORE = "configstore";
const string SubscriptionId = "abc123"; //Replace with the subscription identifier to unsubscribe from
var client = app.Services.GetRequiredService<DaprClient>();
await client.UnsubscribeConfiguration(DAPR_CONFIGURATION_STORE, SubscriptionId);
Console.WriteLine("App unsubscribed from config changes");
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprClient;
import io.dapr.client.domain.ConfigurationItem;
import io.dapr.client.domain.GetConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationRequest;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
//code
private static final String CONFIG_STORE_NAME = "configstore";
private static String subscriptionId = null;
public static void main(String[] args) throws Exception {
try (DaprClient client = (new DaprClientBuilder()).build()) {
// Unsubscribe from config changes
UnsubscribeConfigurationResponse unsubscribe = client
.unsubscribeConfiguration(subscriptionId, CONFIG_STORE_NAME).block();
if (unsubscribe.getIsUnsubscribed()) {
System.out.println("App unsubscribed to config changes");
} else {
System.out.println("Error unsubscribing to config updates, err:" + unsubscribe.getMessage());
}
} catch (Exception e) {
System.out.println("Error unsubscribing to config updates," + e.getMessage());
System.exit(1);
}
}
from dapr.clients import DaprClient
subscriptionID = ""
with DaprClient() as d:
isSuccess = d.unsubscribe_configuration(store_name='configstore', id=subscriptionID)
print(f"Unsubscribed successfully? {isSuccess}", flush=True)
package main
import (
    "context"
    "log"
    "time"

    dapr "github.com/dapr/go-sdk/client"
)
var DAPR_CONFIGURATION_STORE = "configstore"
var subscriptionID = ""
func main() {
client, err := dapr.NewClient()
if err != nil {
log.Panic(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if err := client.UnsubscribeConfigurationItems(ctx, DAPR_CONFIGURATION_STORE , subscriptionID); err != nil {
panic(err)
}
}
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
// JS SDK does not support Configuration API over HTTP protocol yet
const protocol = CommunicationProtocolEnum.GRPC;
const host = process.env.DAPR_HOST ?? "localhost";
const port = process.env.DAPR_GRPC_PORT ?? 3500;
const DAPR_CONFIGURATION_STORE = "configstore";
const CONFIGURATION_ITEMS = ["orderId1", "orderId2"];
async function main() {
const client = new DaprClient(host, port, protocol);
try {
const stream = await client.configuration.subscribeWithKeys(
DAPR_CONFIGURATION_STORE,
CONFIGURATION_ITEMS,
(config) => {
console.log("Configuration update", JSON.stringify(config.items));
}
);
setTimeout(() => {
// Unsubscribe to config updates
stream.stop();
console.log("App unsubscribed to config changes");
process.exit(0);
}, 20000);
} catch (error) {
console.log("Error subscribing to config updates, err:" + error);
process.exit(1);
}
}
main().catch((e) => console.error(e));
curl 'http://localhost:<DAPR_HTTP_PORT>/v1.0/configuration/configstore/<subscription-id>/unsubscribe'
Invoke-RestMethod -Uri 'http://localhost:<DAPR_HTTP_PORT>/v1.0/configuration/configstore/<subscription-id>/unsubscribe'
Next steps
1.9 - Distributed lock
More about Dapr Distributed Lock
Learn more about how to use Dapr Distributed Lock:
- Explore distributed locks via any of the supporting Dapr SDKs.
- Review the Distributed Lock API reference documentation.
- Browse the supported distributed locks component specs.
1.9.1 - Distributed lock overview
Introduction
Locks are used to provide mutually exclusive access to a resource. For example, you can use a lock to:
- Provide exclusive access to a database row, table, or an entire database
- Lock reading messages from a queue in a sequential manner
Any resource that is shared where updates occur can be the target for a lock. Locks are usually used on operations that mutate state, not on reads.
Each lock has a name. The application determines the resources that the named lock accesses. Typically, multiple instances of the same application use this named lock to exclusively access the resource and perform updates.
For example, in the competing consumer pattern, multiple instances of an application access a queue. You can decide that you want to lock the queue while the application is running its business logic.
In the diagram below, two instances of the same application, App1
, use the Redis lock component to take a lock on a shared resource.
- The first app instance acquires the named lock and gets exclusive access.
- The second app instance is unable to acquire the lock and therefore is not allowed to access the resource until the lock is released, either:
- Explicitly by the application through the unlock API, or
- After a period of time, due to a lease timeout.

This API is currently in Alpha state.
Features
Mutually exclusive access to a resource
At any given moment, only one instance of an application can hold a named lock. Locks are scoped to a Dapr app-id.
Deadlock free using leases
Dapr distributed locks use a lease-based locking mechanism. If an application acquires a lock, encounters an exception, and cannot free the lock, the lock is automatically released after a period of time using a lease. This prevents resource deadlocks in the event of application failures.
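To make the lease behavior concrete, here is a minimal Python sketch against the alpha HTTP lock endpoint shown later in this section (the lockstore component name, resource ID, and owner IDs are placeholders):
import requests

BASE = "http://localhost:3500/v1.0-alpha1"

# The first owner acquires the lock with a 60-second lease.
r1 = requests.post(f"{BASE}/lock/lockstore", json={
    "resourceId": "my_file_name", "lockOwner": "owner-1", "expiryInSeconds": 60})
print(r1.json())  # expected: {"success": true}

# A second owner is denied while the lease is held.
r2 = requests.post(f"{BASE}/lock/lockstore", json={
    "resourceId": "my_file_name", "lockOwner": "owner-2", "expiryInSeconds": 60})
print(r2.json())  # expected: {"success": false}

# If owner-1 crashes without unlocking, the lease expires after 60 seconds
# and the lock becomes available to other owners again.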
Demo
Watch this video for an overview of the distributed lock API:
Next steps
Follow these guides on:
1.9.2 - How-To: Use a lock
Now that you’ve learned what the Dapr distributed lock API building block provides, learn how it can work in your service. In this guide, an example application acquires a lock using the Redis lock component to demonstrate how to lock resources. For a list of supported lock stores, see this reference page.
In the diagram below, two instances of the same application acquire a lock, where one instance is successful and the other is denied.

The diagram below shows two instances of the same application, where one instance releases the lock and the other instance is then able to acquire the lock.

The diagram below shows two instances of different applications, acquiring different locks on the same resource.

Configure a lock component
Save the following component file to the default components folder on your machine.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: lockstore
spec:
type: lock.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: <PASSWORD>
Acquire lock
curl -X POST http://localhost:3500/v1.0-alpha1/lock/lockstore \
  -H 'Content-Type: application/json' \
  -d '{"resourceId":"my_file_name", "lockOwner":"random_id_abc123", "expiryInSeconds": 60}'
using System;
using System.Threading.Tasks;
using Dapr.Client;
namespace LockService
{
class Program
{
[Obsolete("Distributed Lock API is in Alpha, this can be removed once it is stable.")]
static async Task Main(string[] args)
{
string DAPR_LOCK_NAME = "lockstore";
string fileName = "my_file_name";
var client = new DaprClientBuilder().Build();
await using (var fileLock = await client.Lock(DAPR_LOCK_NAME, fileName, "random_id_abc123", 60))
{
if (fileLock.Success)
{
Console.WriteLine("Success");
}
else
{
Console.WriteLine($"Failed to lock {fileName}.");
}
}
}
}
}
package main

import (
    "context"
    "fmt"

    dapr "github.com/dapr/go-sdk/client"
)

func main() {
    client, err := dapr.NewClient()
    if err != nil {
        panic(err)
    }
    defer client.Close()
    ctx := context.Background()
    resp, err := client.TryLockAlpha1(ctx, "lockstore", &dapr.LockRequest{
        LockOwner:       "random_id_abc123",
        ResourceID:      "my_file_name",
        ExpiryInSeconds: 60,
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Success)
}
Unlock existing lock
curl -X POST http://localhost:3500/v1.0-alpha1/unlock/lockstore \
  -H 'Content-Type: application/json' \
  -d '{"resourceId":"my_file_name", "lockOwner":"random_id_abc123"}'
using System;
using System.Threading.Tasks;
using Dapr.Client;
namespace LockService
{
class Program
{
static async Task Main(string[] args)
{
string DAPR_LOCK_NAME = "lockstore";
var client = new DaprClientBuilder().Build();
var response = await client.Unlock(DAPR_LOCK_NAME, "my_file_name", "random_id_abc123");
Console.WriteLine(response.status);
}
}
}
package main

import (
    "context"
    "fmt"

    dapr "github.com/dapr/go-sdk/client"
)

func main() {
    client, err := dapr.NewClient()
    if err != nil {
        panic(err)
    }
    defer client.Close()
    ctx := context.Background()
    resp, err := client.UnlockAlpha1(ctx, "lockstore", &dapr.UnlockRequest{
        LockOwner:  "random_id_abc123",
        ResourceID: "my_file_name",
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Status)
}
Next steps
Read the distributed lock API overview to learn more.
1.10 - Cryptography
More about Dapr Cryptography
Learn more about how to use Dapr Cryptography:
- Try the Cryptography quickstart.
- Explore cryptography via any of the supporting Dapr SDKs.
- Browse the supported cryptography component specs.
1.10.1 - Cryptography overview
With the cryptography building block, you can leverage cryptography in a safe and consistent way. Dapr exposes APIs that allow you to perform operations, such as encrypting and decrypting messages, within key vaults or the Dapr sidecar, without exposing cryptographic keys to your application.
Why Cryptography?
Applications make extensive use of cryptography, which, when implemented correctly, can make solutions safer even when data is compromised. In certain cases, you may be required to use cryptography to comply with industry regulations (for example, in finance) or legal requirements (including privacy regulations such as GDPR).
However, leveraging cryptography correctly can be difficult. You need to:
- Pick the right algorithms and options
- Learn the proper way to manage and protect keys
- Navigate operational complexities when you want to limit access to cryptographic key material
One important requirement for security is limiting access to your cryptographic keys, which is often referred to as “raw key material”. Dapr can integrate with key vaults such as Azure Key Vault (with more components coming in the future), which store keys in secure enclaves and perform cryptographic operations in the vaults, without exposing keys to your application or Dapr.
Alternatively, you can configure Dapr to manage the cryptographic keys for you, performing operations within the sidecar, again without exposing raw key material to your application.
Cryptography in Dapr
With Dapr, you can perform cryptographic operations without exposing cryptographic keys to your application.

By using the cryptography building block, you can:
- More easily perform cryptographic operations in a safe way. Dapr provides safeguards against using unsafe algorithms, or using algorithms with unsafe options.
- Keep keys outside of applications. Applications never see the “raw key material”, but can request the vault to perform operations with the keys. When using the cryptographic engine of Dapr, operations are performed safely within the Dapr sidecar.
- Experience greater separation of concerns. By using external vaults or cryptographic components, only authorized teams can access private key materials.
- Manage and rotate keys more easily. Keys are managed in the vault and outside of the application, and they can be rotated without needing the developers to be involved (or even without restarting the apps).
- Enable better audit logging to monitor when operations are performed with keys in a vault.
Note
While both HTTP and gRPC are supported in the alpha release, using the gRPC APIs with the supported Dapr SDKs is the recommended approach for cryptography.
Features
Cryptographic components
The Dapr cryptography building block includes two kinds of components:
- Components that allow interacting with management services or vaults (“key vaults”). Similar to how Dapr offers an “abstraction layer” on top of various secret stores or state stores, these components allow interacting with various key vaults such as Azure Key Vault (with more coming in future Dapr releases). With these components, cryptographic operations on the private keys are performed within the vaults and Dapr never sees your private keys.
- Components based on Dapr’s own cryptographic engine. When key vaults are not available, you can leverage components based on Dapr’s own cryptographic engine. These components, which have .dapr. in the name, perform cryptographic operations within the Dapr sidecar, with keys stored on files, Kubernetes secrets, or other sources. Although the private keys are known by Dapr, they are still not available to your applications.
Both kinds of components, either those leveraging key vaults or using the cryptographic engine in Dapr, offer the same abstraction layer. This allows your solution to switch between various vaults and/or cryptography components as needed. For example, you can use a locally-stored key during development, and a cloud vault in production.
Cryptographic APIs
Cryptographic APIs allow encrypting and decrypting data using the Dapr Crypto Scheme v1. This is an opinionated encryption scheme designed to use modern, safe cryptographic standards, and processes data (even large files) efficiently as a stream.
Try out cryptography
Quickstarts and tutorials
Want to put the Dapr cryptography API to the test? Walk through the following quickstart and tutorials to see cryptography in action:
Quickstart/tutorial | Description |
---|---|
Cryptography quickstart | Encrypt and decrypt messages and large files using RSA and AES keys with the cryptography API. |
Start using cryptography directly in your app
Want to skip the quickstarts? Not a problem. You can try out the cryptography building block directly in your application to encrypt and decrypt data. After Dapr is installed, you can begin using the cryptography API starting with the cryptography how-to guide.
Demo
Watch this demo video of the Cryptography API from the Dapr Community Call #83:
Next steps
Use the cryptography API >>
Related links
1.10.2 - How to: Use the cryptography APIs
Now that you’ve read about Cryptography as a Dapr building block, let’s walk through using the cryptography APIs with the SDKs.
Note
Dapr cryptography is currently in alpha.
Encrypt
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt a stream of data, such as a file or a string:
# `encrypt` processes data as a stream and returns a readable stream with the encrypted message
def encrypt_decrypt_string(dapr: DaprClient):
message = 'The secret is "passw0rd"'
# Encrypt the message
resp = dapr.encrypt(
data=message.encode(),
options=EncryptOptions(
# Name of the cryptography component (required)
component_name=CRYPTO_COMPONENT_NAME,
# Key stored in the cryptography component (required)
key_name=RSA_KEY_NAME,
# Algorithm used for wrapping the key, which must be supported by the key named above.
# Options include: "RSA", "AES"
key_wrap_algorithm='RSA',
),
)
# The method returns a readable stream, which we read in full in memory
encrypt_bytes = resp.read()
print(f'Encrypted the message, got {len(encrypt_bytes)} bytes')
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt data in a buffer or a string:
// When passing data (a buffer or string), `encrypt` returns a Buffer with the encrypted message
const ciphertext = await client.crypto.encrypt(plaintext, {
// Name of the Dapr component (required)
componentName: "mycryptocomponent",
// Name of the key stored in the component (required)
keyName: "mykey",
// Algorithm used for wrapping the key, which must be supported by the key named above.
// Options include: "RSA", "AES"
keyWrapAlgorithm: "RSA",
});
The APIs can also be used with streams, to encrypt data more efficiently when it comes from a stream. The example below encrypts a file, writing to another file, using streams:
// `encrypt` can be used as a Duplex stream
await pipeline(
fs.createReadStream("plaintext.txt"),
await client.crypto.encrypt({
// Name of the Dapr component (required)
componentName: "mycryptocomponent",
// Name of the key stored in the component (required)
keyName: "mykey",
// Algorithm used for wrapping the key, which must be supported by the key named above.
// Options include: "RSA", "AES"
keyWrapAlgorithm: "RSA",
}),
fs.createWriteStream("ciphertext.out"),
);
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt data in a string or a byte array:
using var client = new DaprClientBuilder().Build();
const string componentName = "azurekeyvault"; //Change this to match your cryptography component
const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store
const string plainText = "This is the value we're going to encrypt today";
//Encode the string to a UTF-8 byte array and encrypt it
var plainTextBytes = Encoding.UTF8.GetBytes(plainText);
var encryptedBytesResult = await client.EncryptAsync(componentName, plainTextBytes, keyName, new EncryptionOptions(KeyWrapAlgorithm.Rsa));
Using the Dapr SDK in your project, you can encrypt a stream of data, such as a file.
out, err := sdkClient.Encrypt(context.Background(), rf, dapr.EncryptOptions{
// Name of the Dapr component (required)
ComponentName: "mycryptocomponent",
// Name of the key stored in the component (required)
KeyName: "mykey",
// Algorithm used for wrapping the key, which must be supported by the key named above.
// Options include: "RSA", "AES"
Algorithm: "RSA",
})
The following example puts the Encrypt API in context, with code that reads the file, encrypts it, then stores the result in another file.
// Input file, clear-text
rf, err := os.Open("input")
if err != nil {
panic(err)
}
defer rf.Close()
// Output file, encrypted
wf, err := os.Create("output.enc")
if err != nil {
panic(err)
}
defer wf.Close()
// Encrypt the data using Dapr
out, err := sdkClient.Encrypt(context.Background(), rf, dapr.EncryptOptions{
// These are the 3 required parameters
ComponentName: "mycryptocomponent",
KeyName: "mykey",
Algorithm: "RSA",
})
if err != nil {
panic(err)
}
// Read the stream and copy it to the out file
n, err := io.Copy(wf, out)
if err != nil {
panic(err)
}
fmt.Println("Written", n, "bytes")
The following example uses the Encrypt API to encrypt a string.
// Input string
rf := strings.NewReader("Amor, ch'a nullo amato amar perdona, mi prese del costui piacer sì forte, che, come vedi, ancor non m'abbandona")
// Encrypt the data using Dapr
enc, err := sdkClient.Encrypt(context.Background(), rf, dapr.EncryptOptions{
ComponentName: "mycryptocomponent",
KeyName: "mykey",
Algorithm: "RSA",
})
if err != nil {
panic(err)
}
// Read the encrypted data into a byte slice
encBytes, err := io.ReadAll(enc)
if err != nil {
panic(err)
}
Decrypt
To decrypt a stream of data, use decrypt.
def encrypt_decrypt_string(dapr: DaprClient):
message = 'The secret is "passw0rd"'
# ...
# Decrypt the encrypted data
resp = dapr.decrypt(
data=encrypt_bytes,
options=DecryptOptions(
# Name of the cryptography component (required)
component_name=CRYPTO_COMPONENT_NAME,
# Key stored in the cryptography component (required)
key_name=RSA_KEY_NAME,
),
)
# The method returns a readable stream, which we read in full in memory
decrypt_bytes = resp.read()
print(f'Decrypted the message, got {len(decrypt_bytes)} bytes')
print(decrypt_bytes.decode())
assert message == decrypt_bytes.decode()
Using the Dapr SDK, you can decrypt data in a buffer or using streams.
// When passing data as a buffer, `decrypt` returns a Buffer with the decrypted message
const plaintext = await client.crypto.decrypt(ciphertext, {
// Only required option is the component name
componentName: "mycryptocomponent",
});
// `decrypt` can also be used as a Duplex stream
await pipeline(
fs.createReadStream("ciphertext.out"),
await client.crypto.decrypt({
// Only required option is the component name
componentName: "mycryptocomponent",
}),
fs.createWriteStream("plaintext.out"),
);
To decrypt a string, use the DecryptAsync gRPC API in your project.
In the following example, we’ll take a byte array (such as from the example above) and decrypt it to a UTF-8 encoded string.
public async Task<string> DecryptBytesAsync(byte[] encryptedBytes)
{
using var client = new DaprClientBuilder().Build();
const string componentName = "azurekeyvault"; //Change this to match your cryptography component
const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store
var decryptedBytes = await client.DecryptAsync(componentName, encryptedBytes, keyName);
var decryptedString = Encoding.UTF8.GetString(decryptedBytes.ToArray());
return decryptedString;
}
To decrypt a file, use the Decrypt gRPC API in your project.
In the following example, out is a stream that can be written to a file or read in memory, as in the examples above.
out, err := sdkClient.Decrypt(context.Background(), rf, dapr.DecryptOptions{
// Only required option is the component name
ComponentName: "mycryptocomponent",
})
Next steps
1.11 - Jobs
1.11.1 - Jobs overview
Many applications require job scheduling, or the need to take an action in the future. The jobs API is an orchestrator for scheduling these future jobs, either at a specific time or for a specific interval.
Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the Scheduler service to schedule actor reminders.
Jobs in Dapr consist of the jobs API, which your application calls to schedule and manage jobs, and the Scheduler service, which stores and triggers them.
How it works
The jobs API is a job scheduler, not the executor which runs the job. The design guarantees at least once job execution with a bias towards durability and horizontal scaling over precision. This means:
- Guaranteed: A job is never invoked before the schedule time is due.
- Not guaranteed: A ceiling time on when the job is invoked after the due time is reached.
All job details and user-associated data for scheduled jobs are stored in an embedded Etcd database in the Scheduler service. You can use jobs to:
- Delay your pub/sub messaging. You can publish a message at a specific time in the future (for example: a week from today, or a specific UTC date/time).
- Schedule service invocation method calls between applications.
Scenarios
Job scheduling can prove helpful in the following scenarios:
Automated Database Backups: Ensure a database is backed up daily to prevent data loss. Schedule a backup script to run every night at 2 AM, which will create a backup of the database and store it in a secure location.
Regular Data Processing and ETL (Extract, Transform, Load): Process and transform raw data from various sources and load it into a data warehouse. Schedule ETL jobs to run at specific times (for example: hourly, daily) to fetch new data, process it, and update the data warehouse with the latest information.
Email Notifications and Reports: Receive daily sales reports and weekly performance summaries via email. Schedule a job that generates the required reports and sends them via email at 6 a.m. every day for daily reports and 8 a.m. every Monday for weekly summaries.
Maintenance Tasks and System Updates: Perform regular maintenance tasks such as clearing temporary files, updating software, and checking system health. Schedule various maintenance scripts to run at off-peak hours, such as weekends or late nights, to minimize disruption to users.
Batch Processing for Financial Transactions: Process a large number of transactions that need to be batched and settled at the end of each business day. Schedule batch processing jobs to run at 5 PM every business day, aggregating the day's transactions and performing necessary settlements and reconciliations.
Dapr’s jobs API ensures the tasks represented in these scenarios are performed consistently and reliably without manual intervention, improving efficiency and reducing the risk of errors.
Features
The main functionality of the Jobs API allows you to create, retrieve, and delete scheduled jobs. By default, when you create a job with a name that already exists, the operation fails unless you explicitly set the overwrite flag to true. This ensures that existing jobs are not accidentally modified or overwritten.
Schedule jobs across multiple replicas
When you create a job, it does not replace an existing job with the same name, unless you explicitly set the overwrite flag. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off: only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
Try out the jobs API
You can try out the jobs API in your application. After Dapr is installed, you can begin using the jobs API, starting with the How-to: Schedule jobs guide.
Next steps
1.11.2 - Features and concepts
Now that you’ve learned about the jobs building block at a high level, let’s deep dive into the features and concepts included with Dapr Jobs and the various SDKs. Dapr Jobs:
- Provides a robust and scalable API for scheduling operations to be triggered in the future.
- Exposes several capabilities which are common across all supported languages.
Job identity
All jobs are registered with a case-sensitive job name. These names are intended to be unique across all services interfacing with the Dapr runtime. The name is used as an identifier when creating and modifying the job as well as to indicate which job a triggered invocation is associated with.
Only one job can be associated with a name at any given time. By default, any attempt to create a new job using the same name as an existing job results in an error. However, if the overwrite flag is set to true, the new job overwrites the existing job with the same name.
Scheduling Jobs
A job can be scheduled using any of the following mechanisms:
- Intervals using Cron expressions, duration values, or period expressions
- Specific dates and times
For all time-based schedules, if a timestamp is provided with a time zone via the RFC3339 specification, that time zone is used. When not provided, the time zone used by the server running Dapr is used. In other words, do not assume that times run in UTC time zone, unless otherwise specified when scheduling the job.
Schedule using a Cron expression
When scheduling a job to execute on a specific interval using a Cron expression, the expression is written using 6 fields spanning the values specified in the table below:
seconds | minutes | hours | day of month | month | day of week |
---|---|---|---|---|---|
0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat |
Example 1
"0 30 * * * *"
triggers every hour on the half-hour mark.
Example 2
"0 15 3 * * *"
triggers every day at 03:15.
Schedule using a duration value
You can schedule jobs using a Go duration string, in which a string consists of a (possibly) signed sequence of decimal numbers, each with an optional fraction and a unit suffix. Valid time units are "ns", "us", "ms", "s", "m", or "h".
Example 1
"2h45m"
triggers every 2 hours and 45 minutes.
Example 2
"37m25s"
triggers every 37 minutes and 25 seconds.
Schedule using a period expression
The following period expressions are supported. The “@every” expression also accepts a Go duration string.
Entry | Description | Equivalent Cron expression |
---|---|---|
@every <duration> | Run every <duration> (e.g. “@every 1h30m”) | N/A |
@yearly (or @annually) | Run once a year, midnight, January 1st | 0 0 0 1 1 * |
@monthly | Run once a month, midnight, first of month | 0 0 0 1 * * |
@weekly | Run once a week, midnight on Sunday | 0 0 0 * * 0 |
@daily or @midnight | Run once a day at midnight | 0 0 0 * * * |
@hourly | Run once an hour at the beginning of the hour | 0 0 * * * * |
Schedule using a specific date/time
A job can also be scheduled to run at a particular point in time by providing a date using the RFC3339 specification.
Example 1
"2025-12-09T16:09:53+00:00"
Indicates that the job should be run on December 9, 2025 at 4:09:53 PM UTC.
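To put these schedule formats in context, the following .NET sketch schedules one job per style using the DaprJobsClient shown later in this guide. DaprJobSchedule.FromDuration appears later in this document; FromExpression and FromDateTime are assumed factory names on DaprJobSchedule, so treat this as a sketch rather than a definitive reference:
using Dapr.Jobs;
using Dapr.Jobs.Models;

// Assumes a DaprJobsClient resolved from dependency injection, as shown later in this guide
async Task ScheduleExamplesAsync(DaprJobsClient daprJobsClient)
{
    // Cron expression: every day at 03:15 (FromExpression is an assumed name that parses a schedule string)
    await daprJobsClient.ScheduleJobAsync("nightly-report", DaprJobSchedule.FromExpression("0 15 3 * * *"));

    // Duration value: every 2 hours and 45 minutes
    await daprJobsClient.ScheduleJobAsync("cache-refresh", DaprJobSchedule.FromDuration(new TimeSpan(2, 45, 0)));

    // Specific date/time (RFC3339): run once on December 9, 2025 at 16:09:53 UTC (FromDateTime is an assumed name)
    await daprJobsClient.ScheduleJobAsync("one-off-job", DaprJobSchedule.FromDateTime(DateTimeOffset.Parse("2025-12-09T16:09:53+00:00")));
}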
Scheduled triggers
When a scheduled Dapr job is triggered, the runtime sends a message back to the service that scheduled the job using either the HTTP or gRPC approach, depending on which is registered with Dapr when the service starts.
gRPC
When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following callback function:
Note: The following example is in Go, but applies to any programming language with gRPC support.
import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
...
func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
// Handle the triggered job
}
This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that you register the callback server, which invokes this function when a job is triggered:
...
js := &JobService{}
rtv1.RegisterAppCallbackAlphaServer(server, js)
In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly through this gRPC method.
HTTP
If a gRPC server isn’t registered with Dapr when the application starts up, Dapr instead triggers jobs by making a POST request to the endpoint /job/<job-name>. The body includes the following information about the job:
- Schedule: When the job triggers occur
- RepeatCount: An optional value indicating how often the job should repeat
- DueTime: An optional point in time representing either the one time when the job should execute (if not recurring) or the not-before time from which the schedule should take effect
- Ttl: An optional value indicating when the job should expire
- Payload: A collection of bytes containing data originally stored when the job was scheduled
- Overwrite: A flag to allow the requested job to overwrite an existing job with the same name, if it already exists
- FailurePolicy: An optional failure policy for the job
The DueTime and Ttl fields will reflect an RFC3339 timestamp value reflective of the time zone provided when the job was originally scheduled. If no time zone was provided, these values indicate the time zone used by the server running Dapr.
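For illustration only, a minimal ASP.NET Core endpoint that satisfies this contract might look like the following. The route shape (/job/<job-name>) comes from the documentation above; everything else is an assumption, and the .NET SDK's MapDaprScheduledJobHandler, shown in the next how-to, wraps this up for you:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Dapr POSTs to /job/<job-name> when a job triggers; the body carries the
// fields described above, including the original payload bytes.
app.MapPost("/job/{jobName}", async (string jobName, HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    var body = await reader.ReadToEndAsync();
    Console.WriteLine($"Job {jobName} triggered with body: {body}");
    return Results.Ok();
});

app.Run();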
1.11.3 - How-To: Schedule and handle triggered jobs
Now that you’ve learned what the jobs building block provides, let’s look at an example of how to use the API. The code example below describes an application that schedules jobs for a database backup application and handles them at trigger time, also known as the time the job was sent back to the application because it reached its dueTime.
Start the Scheduler service
When you run dapr init in either self-hosted mode or on Kubernetes, the Dapr Scheduler service is started.
Set up the Jobs API
In your code, set up and schedule jobs within your application.
The following .NET SDK code sample schedules the job named prod-db-backup. The job data contains information about the database that you’ll be seeking to back up regularly. Over the course of this example, you’ll:
- Define types used in the rest of the example
- Register an endpoint during application startup that handles all job trigger invocations on the service
- Register the job with Dapr
In the following example, you’ll create records that you’ll serialize and register alongside the job so the information is available when the job is triggered in the future:
- The name of the backup task (db-backup)
- The backup task’s Metadata, including:
- The database name (DBName)
- The database location (BackupLocation)
Create an ASP.NET Core project and add the latest version of Dapr.Jobs from NuGet.
Note: While it’s not strictly necessary for your project to use the Microsoft.NET.Sdk.Web SDK to create jobs, as of the time this documentation is authored, only the service that schedules a job receives trigger invocations for it. As those invocations expect an endpoint that can handle the job trigger and requires the Microsoft.NET.Sdk.Web SDK, it’s recommended that you use an ASP.NET Core project for this purpose.
Start by defining types to persist our backup job data and apply our own JSON property name attributes to the properties so they’re consistent with other language examples.
//Define the types that we'll represent the job data with
internal sealed record BackupJobData([property: JsonPropertyName("task")] string Task, [property: JsonPropertyName("metadata")] BackupMetadata Metadata);
internal sealed record BackupMetadata([property: JsonPropertyName("DBName")]string DatabaseName, [property: JsonPropertyName("BackupLocation")] string BackupLocation);
Next, set up a handler as part of your application setup that will be called anytime a job is triggered on your application. It’s the responsibility of this handler to identify how jobs should be processed based on the job name provided.
This works by registering a handler with ASP.NET Core at /job/<job-name>, where <job-name> is parameterized and passed into this handler delegate, meeting Dapr’s expectation that an endpoint is available to handle triggered named jobs.
Populate your Program.cs file with the following:
using System.Text;
using System.Text.Json;
using Dapr.Jobs;
using Dapr.Jobs.Extensions;
using Dapr.Jobs.Models;
using Dapr.Jobs.Models.Responses;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprJobsClient();
var app = builder.Build();
//Registers an endpoint to receive and process triggered jobs
var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(5));
app.MapDaprScheduledJobHandler(async (string jobName, ReadOnlyMemory<byte> jobPayload, ILogger logger, CancellationToken cancellationToken) => {
logger?.LogInformation("Received trigger invocation for job '{jobName}'", jobName);
switch (jobName)
{
case "prod-db-backup":
// Deserialize the job payload metadata
var jobData = JsonSerializer.Deserialize<BackupJobData>(jobPayload);
// Process the backup operation - we assume this is implemented elsewhere in your code
await BackupDatabaseAsync(jobData, cancellationToken);
break;
}
}, cancellationTokenSource.Token);
await app.RunAsync();
Finally, the job itself needs to be registered with Dapr so it can be triggered at a later point in time. You can do this by injecting a DaprJobsClient into a class and executing as part of an inbound operation to your application, but for this example’s purposes, it’ll go at the bottom of the Program.cs file you started above. Because you’ll be using the DaprJobsClient you registered with dependency injection, start by creating a scope so you can access it.
//Create a scope so we can access the registered DaprJobsClient
await using var scope = app.Services.CreateAsyncScope();
var daprJobsClient = scope.ServiceProvider.GetRequiredService<DaprJobsClient>();
//Create the payload we wish to present alongside our future job triggers
var jobData = new BackupJobData("db-backup", new BackupMetadata("my-prod-db", "/backup-dir"));
//Serialize our payload to UTF-8 bytes
var serializedJobData = JsonSerializer.SerializeToUtf8Bytes(jobData);
//Schedule our backup job to run every minute, but only repeat 10 times
await daprJobsClient.ScheduleJobAsync("prod-db-backup", DaprJobSchedule.FromDuration(TimeSpan.FromMinutes(1)),
serializedJobData, repeats: 10);
The following Go SDK code sample schedules the job named prod-db-backup. Job data is housed in a backup database ("my-prod-db") and is scheduled with ScheduleJobAlpha1. This provides the jobData, which includes:
- The backup Task name
- The backup task’s Metadata, including:
- The database name (DBName)
- The database location (BackupLocation)
package main
import (
//...
daprc "github.com/dapr/go-sdk/client"
"github.com/dapr/go-sdk/examples/dist-scheduler/api"
"github.com/dapr/go-sdk/service/common"
daprs "github.com/dapr/go-sdk/service/grpc"
)
func main() {
// Initialize the server
server, err := daprs.NewService(":50070")
// ...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
log.Fatalf("failed to register job event handler: %v", err)
}
log.Println("starting server")
go func() {
if err = server.Start(); err != nil {
log.Fatalf("failed to start server: %v", err)
}
}()
// ...
// Set up backup location
jobData, err := json.Marshal(&api.DBBackup{
Task: "db-backup",
Metadata: api.Metadata{
DBName: "my-prod-db",
BackupLocation: "/backup-dir",
},
},
)
// ...
}
The job is scheduled with a Schedule set and the desired number of Repeats. These settings determine the maximum number of times the job should be triggered and sent back to the app.
In this example, at trigger time, which is @every 1s according to the Schedule, this job is triggered and sent back to the application up to the maximum Repeats (10).
// ...
// Set up the job
job := daprc.Job{
Name: "prod-db-backup",
Schedule: "@every 1s",
Repeats: 10,
Data: &anypb.Any{
Value: jobData,
},
}
When a job is triggered, Dapr will automatically route the job to the event handler you set up during the server initialization. For example, in Go, you’d register the event handler like this:
...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
log.Fatalf("failed to register job event handler: %v", err)
}
Dapr takes care of the underlying routing. When the job is triggered, your prodDBBackupHandler function is called with the triggered job data. Here’s an example of handling the triggered job:
// ...
// At job trigger time this function is called
func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
var jobData common.Job
if err := json.Unmarshal(job.Data, &jobData); err != nil {
// ...
}
var jobPayload api.DBBackup
if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
// ...
}
fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload)
jobCount++
return nil
}
Run the Dapr sidecar
Once you’ve set up the Jobs API in your application, in a terminal window run the Dapr sidecar with the following command.
dapr run --app-id=distributed-scheduler \
--metrics-port=9091 \
--dapr-grpc-port 50001 \
--app-port 50070 \
--app-protocol grpc \
--log-level debug \
go run ./main.go
Next steps
1.12 - Conversation
1.12.1 - Conversation overview
Alpha
The conversation API is currently in alpha.
Dapr’s conversation API reduces the complexity of securely and reliably interacting with Large Language Models (LLMs) at scale. Whether you’re a developer who doesn’t have the necessary native SDKs or a polyglot shop that just wants to focus on the prompt aspects of LLM interactions, the conversation API provides one consistent API entry point to talk to underlying LLM providers.

In addition to enabling critical performance and security functionality (like prompt caching and PII scrubbing), you can also pair the conversation API with Dapr functionalities, like:
- Resiliency circuit breakers and retries to circumvent limit and token errors, or
- Middleware to authenticate requests coming to and from the LLM
Dapr provides observability by issuing metrics for your LLM interactions.
Features
The following features are out-of-the-box for all the supported conversation components.
Prompt caching
Prompt caching optimizes performance by storing and reusing prompts that are often repeated across multiple API calls. To significantly reduce latency and cost, Dapr stores frequent prompts in a local cache to be reused within your cluster, pod, or other environment, instead of reprocessing the information for every new request.
Personally identifiable information (PII) obfuscation
The PII obfuscation feature identifies and removes any form of sensitive user information from a conversation response. Simply enable PII obfuscation on input and output data to protect your privacy and scrub sensitive details that could be used to identify an individual.
The PII scrubber obfuscates the following user information:
- Phone number
- Email address
- IP address
- Street address
- Credit cards
- Social Security number
- ISBN
- Media Access Control (MAC) address
- Secure Hash Algorithm 1 (SHA-1) hex
- SHA-256 hex
- MD5 hex
Demo
Watch the demo presented during Diagrid’s Dapr v1.15 celebration to see how the conversation API works using the .NET SDK.
Try out conversation
Quickstarts and tutorials
Want to put the Dapr conversation API to the test? Walk through the following quickstart and tutorials to see it in action:
Quickstart/tutorial | Description |
---|---|
Conversation quickstart | Learn how to interact with Large Language Models (LLMs) using the conversation API. |
Start using the conversation API directly in your app
Want to skip the quickstarts? Not a problem. You can try out the conversation building block directly in your application. After Dapr is installed, you can begin using the conversation API starting with the how-to guide.
Next steps
1.12.2 - How-To: Converse with an LLM using the conversation API
Alpha
The conversation API is currently in alpha.
Let’s get started using the conversation API. In this guide, you’ll learn how to:
- Set up one of the available Dapr components (echo) that work with the conversation API.
- Add the conversation client to your application.
- Run the connection using dapr run.
Set up the conversation component
Create a new configuration file called conversation.yaml and save it to a components or config sub-folder in your application directory.
Select your preferred conversation component spec for your conversation.yaml file.
For this scenario, we use a simple echo component.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: echo
spec:
type: conversation.echo
version: v1
Use the OpenAI component
To interface with a real LLM, use one of the other supported conversation components, including OpenAI, Hugging Face, Anthropic, DeepSeek, and more.
For example, to swap out the echo mock component with an OpenAI component, replace the conversation.yaml file with the following. You’ll need to copy your API key into the component file.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: openai
spec:
type: conversation.openai
metadata:
- name: key
value: <REPLACE_WITH_YOUR_KEY>
- name: model
value: gpt-4-turbo
Connect the conversation client
The following examples use the Dapr SDK clients to invoke the conversation API.
using Dapr.AI.Conversation;
using Dapr.AI.Conversation.Extensions;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprConversationClient();
var app = builder.Build();
var conversationClient = app.Services.GetRequiredService<DaprConversationClient>();
var response = await conversationClient.ConverseAsync("conversation",
new List<DaprConversationInput>
{
new DaprConversationInput(
"Please write a witty haiku about the Dapr distributed programming framework at dapr.io",
DaprConversationRole.Generic)
});
Console.WriteLine("Received the following from the LLM:");
foreach (var resp in response.Outputs)
{
Console.WriteLine($"\t{resp.Result}");
}
package main
import (
"context"
"fmt"
dapr "github.com/dapr/go-sdk/client"
"log"
)
func main() {
client, err := dapr.NewClient()
if err != nil {
panic(err)
}
input := dapr.ConversationInput{
Content: "Please write a witty haiku about the Dapr distributed programming framework at dapr.io",
// Role: "", // Optional
// ScrubPII: false, // Optional
}
fmt.Printf("conversation input: %s\n", input.Content)
var conversationComponent = "echo"
request := dapr.NewConversationRequest(conversationComponent, []dapr.ConversationInput{input})
resp, err := client.ConverseAlpha1(context.Background(), request)
if err != nil {
log.Fatalf("err: %v", err)
}
fmt.Printf("conversation output: %s\n", resp.Outputs[0].Result)
}
use dapr::client::{ConversationInputBuilder, ConversationRequestBuilder};
use std::thread;
use std::time::Duration;
type DaprClient = dapr::Client<dapr::client::TonicClient>;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Sleep to allow for the server to become available
thread::sleep(Duration::from_secs(5));
// Set the Dapr address
let address = "https://127.0.0.1".to_string();
let mut client = DaprClient::connect(address).await?;
let input = ConversationInputBuilder::new("Please write a witty haiku about the Dapr distributed programming framework at dapr.io").build();
let conversation_component = "echo";
let request =
ConversationRequestBuilder::new(conversation_component, vec![input.clone()]).build();
println!("conversation input: {:?}", input.content);
let response = client.converse_alpha1(request).await?;
println!("conversation output: {:?}", response.outputs[0].result);
Ok(())
}
Run the conversation connection
Start the connection using the dapr run command. For example, for this scenario, we’re running dapr run on an application with the app ID conversation and pointing to our conversation YAML file in the ./config directory.
dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- dotnet run
dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- go run ./main.go
Expected output
- '== APP == conversation output: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
dapr run --app-id=conversation --resources-path ./config --dapr-grpc-port 3500 -- cargo run --example conversation
Expected output
- 'conversation input: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
- 'conversation output: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
Advanced features
The conversation API supports the following features:
Prompt caching: Allows developers to cache prompts in Dapr, leading to much faster response times and reducing costs on egress and on inserting the prompt into the LLM provider’s cache.
PII scrubbing: Allows for the obfuscation of data going in and out of the LLM.
To learn how to enable these features, see the conversation API reference guide.
Related links
Try out the conversation API using the full examples provided in the supported SDK repos.
Next steps
2 - Dapr Software Development Kits (SDKs)
The Dapr SDKs are the easiest way for you to get Dapr into your application. Choose your favorite language and get up and running with Dapr in minutes.
SDK packages
Select your preferred language below to learn more about client, server, actor, and workflow packages.
- Client: The Dapr client allows you to invoke Dapr building block APIs and perform each building block’s actions
- Server extensions: The Dapr service extensions allow you to create services that can be invoked by other services and subscribe to topics
- Actor: The Dapr Actor SDK allows you to build virtual actors with methods, state, timers, and persistent reminders
- Workflow: Dapr Workflow makes it easy for you to write long running business logic and integrations in a reliable way
SDK languages
Language | Status | Client | Server extensions | Actor | Workflow |
---|---|---|---|---|---|
.NET | Stable | ✔ | ASP.NET Core | ✔ | ✔ |
Python | Stable | ✔ | gRPC FastAPI Flask | ✔ | ✔ |
Java | Stable | ✔ | Spring Boot Quarkus | ✔ | ✔ |
Go | Stable | ✔ | ✔ | ✔ | ✔ |
PHP | Stable | ✔ | ✔ | ✔ | |
JavaScript | Stable | ✔ | ✔ | ✔ | |
C++ | In development | ✔ | |||
Rust | In development | ✔ | ✔ |
Further reading
2.1 - Dapr .NET SDK
Dapr offers a variety of packages to help with the development of .NET applications. Using them you can create .NET clients, servers, and virtual actors with Dapr.
Prerequisites
- Dapr CLI installed
- Initialized Dapr environment
- .NET 8 or .NET 9 installed
Installation
To get started with the Client .NET SDK, install the Dapr .NET SDK package:
dotnet add package Dapr.Client
Try it out
Put the Dapr .NET SDK to the test. Walk through the .NET quickstarts and tutorials to see Dapr in action:
SDK samples | Description |
---|---|
Quickstarts | Experience Dapr’s API building blocks in just a few minutes using the .NET SDK. |
SDK samples | Clone the SDK repo to try out some examples and get started. |
Pub/sub tutorial | See how Dapr .NET SDK works alongside other Dapr SDKs to enable pub/sub applications. |
Available packages
More information
Learn more about local development options, best practices, or browse NuGet packages to add to your existing .NET applications.
2.1.1 - Getting started with the Dapr client .NET SDK
The Dapr client package allows you to interact with other Dapr applications from a .NET application.
Note
If you haven’t already, try out one of the quickstarts for a quick walk-through on how to use the Dapr .NET SDK with an API building block.
Building blocks
The .NET SDK allows you to interface with all of the Dapr building blocks.
Invoke a service
HTTP
You can either use the DaprClient or System.Net.Http.HttpClient to invoke your services.
Note
You can also invoke a non-Dapr endpoint using either a named HTTPEndpoint or an FQDN URL to the non-Dapr environment.
using var client = new DaprClientBuilder()
.UseTimeout(TimeSpan.FromSeconds(2)) // Optionally, set a timeout
.Build();
// Invokes a POST method named "deposit" that takes input of type "Transaction"
var data = new { id = "17", amount = 99m };
var account = await client.InvokeMethodAsync<Account>("routing", "deposit", data, cancellationToken);
Console.WriteLine("Returned: id:{0} | Balance:{1}", account.Id, account.Balance);
var client = DaprClient.CreateInvokeHttpClient(appId: "routing");
// To set a timeout on the HTTP client:
client.Timeout = TimeSpan.FromSeconds(2);
var deposit = new Transaction { Id = "17", Amount = 99m };
var response = await client.PostAsJsonAsync("/deposit", deposit, cancellationToken);
var account = await response.Content.ReadFromJsonAsync<Account>(cancellationToken: cancellationToken);
Console.WriteLine("Returned: id:{0} | Balance:{1}", account.Id, account.Balance);
gRPC
You can use the DaprClient to invoke your services over gRPC.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(20));
var invoker = DaprClient.CreateInvocationInvoker(appId: myAppId, daprEndpoint: serviceEndpoint);
var client = new MyService.MyServiceClient(invoker);
try
{
var options = new CallOptions(cancellationToken: cts.Token, deadline: DateTime.UtcNow.AddSeconds(1));
await client.MyMethodAsync(new Empty(), options);
}
catch (RpcException ex) when (ex.StatusCode == StatusCode.DeadlineExceeded)
{
// Handle the deadline exceeded error
}
- For a full guide on service invocation visit How-To: Invoke a service.
Save & get application state
var client = new DaprClientBuilder().Build();
var state = new Widget() { Size = "small", Color = "yellow", };
await client.SaveStateAsync(storeName, stateKeyName, state, cancellationToken: cancellationToken);
Console.WriteLine("Saved State!");
state = await client.GetStateAsync<Widget>(storeName, stateKeyName, cancellationToken: cancellationToken);
Console.WriteLine($"Got State: {state.Size} {state.Color}");
await client.DeleteStateAsync(storeName, stateKeyName, cancellationToken: cancellationToken);
Console.WriteLine("Deleted State!");
Query State (Alpha)
var query = "{" +
"\"filter\": {" +
"\"EQ\": { \"value.Id\": \"1\" }" +
"}," +
"\"sort\": [" +
"{" +
"\"key\": \"value.Balance\"," +
"\"order\": \"DESC\"" +
"}" +
"]" +
"}";
var client = new DaprClientBuilder().Build();
var queryResponse = await client.QueryStateAsync<Account>("querystore", query, cancellationToken: cancellationToken);
Console.WriteLine($"Got {queryResponse.Results.Count}");
foreach (var account in queryResponse.Results)
{
Console.WriteLine($"Account: {account.Data.Id} has {account.Data.Balance}");
}
- For a full list of state operations visit How-To: Get & save state.
Publish messages
var client = new DaprClientBuilder().Build();
var eventData = new { Id = "17", Amount = 10m, };
await client.PublishEventAsync(pubsubName, "deposit", eventData, cancellationToken);
Console.WriteLine("Published deposit event!");
- For a full list of pub/sub operations visit How-To: Publish & subscribe.
- Visit .NET SDK examples for code samples and instructions to try out pub/sub
Interact with output bindings
using var client = new DaprClientBuilder().Build();
// Example payload for the Twilio SendGrid binding
var email = new
{
metadata = new
{
emailTo = "customer@example.com",
subject = "An email from Dapr SendGrid binding",
},
data = "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!",
};
await client.InvokeBindingAsync("send-email", "create", email);
- For a full guide on output bindings visit How-To: Use bindings.
Retrieve secrets
var client = new DaprClientBuilder().Build();
// Retrieve a key-value-pair-based secret - returns a Dictionary<string, string>
var secrets = await client.GetSecretAsync("mysecretstore", "key-value-pair-secret");
Console.WriteLine($"Got secret keys: {string.Join(", ", secrets.Keys)}");
// Retrieve a single-valued secret - returns a Dictionary<string, string>
// containing a single value with the secret name as the key
var data = await client.GetSecretAsync("mysecretstore", "single-value-secret");
var value = data["single-value-secret"];
Console.WriteLine("Got a secret value, I'm not going to print it, it's a secret!");
- For a full guide on secrets visit How-To: Retrieve secrets.
Get Configuration Keys
var client = new DaprClientBuilder().Build();
// Retrieve a specific set of keys.
var specificItems = await client.GetConfiguration("configstore", new List<string>() { "key1", "key2" });
Console.WriteLine($"Here are my values:\n{specificItems[0].Key} -> {specificItems[0].Value}\n{specificItems[1].Key} -> {specificItems[1].Value}");
// Retrieve all configuration items by providing an empty list.
var configItems = await client.GetConfiguration("configstore", new List<string>());
Console.WriteLine($"I got {configItems.Count} entries!");
foreach (var item in configItems)
{
Console.WriteLine($"{item.Key} -> {item.Value}");
}
Subscribe to Configuration Keys
var client = new DaprClientBuilder().Build();
// The Subscribe Configuration API returns a wrapper around an IAsyncEnumerable<IEnumerable<ConfigurationItem>>.
// Iterate through it by accessing its Source in a foreach loop. The loop will end when the stream is severed
// or if the cancellation token is cancelled.
var subscribeConfigurationResponse = await client.SubscribeConfiguration(store, keys, metadata, cts.Token);
await foreach (var items in subscribeConfigurationResponse.Source.WithCancellation(cts.Token))
{
foreach (var item in items)
{
Console.WriteLine($"{item.Key} -> {item.Value}")
}
}
Distributed lock (Alpha)
Acquire a lock
using System;
using System.Threading.Tasks;
using Dapr.Client;
namespace LockService
{
class Program
{
[Obsolete("Distributed Lock API is in Alpha, this can be removed once it is stable.")]
static async Task Main(string[] args)
{
var daprLockName = "lockstore";
var fileName = "my_file_name";
var client = new DaprClientBuilder().Build();
// Locking with this approach will also unlock it automatically, as this is a disposable object
await using (var fileLock = await client.Lock(daprLockName, fileName, "random_id_abc123", 60))
{
if (fileLock.Success)
{
Console.WriteLine("Success");
}
else
{
Console.WriteLine($"Failed to lock {fileName}.");
}
}
}
}
}
Unlock an existing lock
using System;
using System.Threading.Tasks;
using Dapr.Client;
namespace LockService
{
class Program
{
static async Task Main(string[] args)
{
var daprLockName = "lockstore";
var client = new DaprClientBuilder().Build();
var response = await client.Unlock(daprLockName, "my_file_name", "random_id_abc123");
Console.WriteLine(response.Status);
}
}
}
Sidecar APIs
Sidecar Health
The .NET SDK provides a way to poll for the sidecar health, as well as a convenience method to wait for the sidecar to be ready.
Poll for health
This health endpoint returns true when both the sidecar and your application are up (fully initialized).
var client = new DaprClientBuilder().Build();
var isDaprReady = await client.CheckHealthAsync();
if (isDaprReady)
{
// Execute Dapr dependent code.
}
Poll for health (outbound)
This health endpoint returns true when Dapr has initialized all its components, but may not have finished setting up a communication channel with your application.
This is best used when you want to utilize a Dapr component in your startup path, for instance, loading secrets from a secretstore.
var client = new DaprClientBuilder().Build();
var isDaprComponentsReady = await client.CheckOutboundHealthAsync();
if (isDaprComponentsReady)
{
// Execute Dapr component dependent code.
}
Wait for sidecar
The DaprClient also provides a helper method to wait for the sidecar to become healthy (components only). When using this method, it is recommended to include a CancellationToken to allow the request to time out. Below is an example of how this is used in the DaprSecretStoreConfigurationProvider.
// Wait for the Dapr sidecar to report healthy before attempting use Dapr components.
using (var tokenSource = new CancellationTokenSource(sidecarWaitTimeout))
{
await client.WaitForSidecarAsync(tokenSource.Token);
}
// Perform Dapr component operations here i.e. fetching secrets.
Shutdown the sidecar
var client = new DaprClientBuilder().Build();
await client.ShutdownSidecarAsync();
Related links
2.1.1.1 - DaprClient usage
Lifetime management
A DaprClient holds access to networking resources in the form of TCP sockets used to communicate with the Dapr sidecar. DaprClient implements IDisposable to support eager cleanup of resources.
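For a short-lived process, a using declaration is enough to release those sockets when the scope ends; a minimal sketch (the store and key names are illustrative):
using Dapr.Client;

// Disposed at the end of the scope, closing the underlying connections
using var client = new DaprClientBuilder().Build();
await client.SaveStateAsync("statestore", "mykey", new { value = 42 });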
Dependency Injection
The AddDaprClient() method will register the Dapr client with ASP.NET Core dependency injection. This method accepts an optional options delegate for configuring the DaprClient and a ServiceLifetime argument, allowing you to specify a different lifetime for the registered resources instead of the default Singleton value.
The following example assumes all default values are acceptable and is sufficient to register the DaprClient.
services.AddDaprClient();
The optional configuration delegates are used to configure DaprClient by specifying options on the provided DaprClientBuilder as in the following example:
services.AddDaprClient(daprBuilder => {
daprBuilder.UseJsonSerializerOptions(new JsonSerializerOptions {
WriteIndented = true,
MaxDepth = 8
});
daprBuilder.UseTimeout(TimeSpan.FromSeconds(30));
});
Another optional configuration delegate overload provides access to both the DaprClientBuilder as well as an IServiceProvider, allowing for more advanced configurations that may require injecting services from the dependency injection container.
services.AddSingleton<SampleService>();
services.AddDaprClient((serviceProvider, daprBuilder) => {
var sampleService = serviceProvider.GetRequiredService<SampleService>();
var timeoutValue = sampleService.TimeoutOptions;
daprBuilder.UseTimeout(timeoutValue);
});
Manual Instantiation
Rather than using dependency injection, a DaprClient can also be built using the static client builder.
For best performance, create a single long-lived instance of DaprClient and provide access to that shared instance throughout your application. DaprClient instances are thread-safe and intended to be shared.
Avoid creating a DaprClient per-operation and disposing it when the operation is complete (see the sketch below).
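Outside of dependency injection, one simple way to follow this guidance is a single shared instance created once at startup; a sketch with illustrative names:
using Dapr.Client;

public static class DaprClientHolder
{
    // One long-lived, thread-safe instance shared across the application
    public static DaprClient Client { get; } = new DaprClientBuilder().Build();
}

// Elsewhere in the application:
// await DaprClientHolder.Client.PublishEventAsync("pubsub", "orders", order);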
Configuring DaprClient
A DaprClient can be configured by invoking methods on the DaprClientBuilder class before calling .Build() to create the client. The settings for each DaprClient object are separate and cannot be changed after calling .Build().
var daprClient = new DaprClientBuilder()
.UseJsonSerializerSettings( ... ) // Configure JSON serializer
.Build();
By default, the DaprClientBuilder will prioritize the following locations, in the following order, to source the configuration values:
- The value provided to a method on the DaprClientBuilder (e.g. UseTimeout(TimeSpan.FromSeconds(30)))
- The value pulled from an optionally injected IConfiguration matching the name expected in the associated environment variable
- The value pulled from the associated environment variable
- Default values
Configuring on DaprClientBuilder
The DaprClientBuilder contains the following methods to set configuration options:
- UseHttpEndpoint(string): Sets the HTTP endpoint of the Dapr sidecar
- UseGrpcEndpoint(string): Sets the gRPC endpoint of the Dapr sidecar
- UseGrpcChannelOptions(GrpcChannelOptions): Sets the gRPC channel options used to connect to the Dapr sidecar
- UseHttpClientFactory(IHttpClientFactory): Configures the DaprClient to use a registered IHttpClientFactory when building HttpClient instances
- UseJsonSerializationOptions(JsonSerializerOptions): Used to configure JSON serialization
- UseDaprApiToken(string): Adds the provided token to every request to authenticate to the Dapr sidecar
- UseTimeout(TimeSpan): Specifies a timeout value used by the HttpClient when communicating with the Dapr sidecar
Configuring From IConfiguration
Rather than rely on sourcing configuration values directly from environment variables, or because the values are sourced from dependency-injected services, another option is to make these values available on IConfiguration.
For example, you might be registering your application in a multi-tenant environment and need to prefix the environment variables used. The following example shows how these values can be sourced from the environment variables to your IConfiguration when their keys are prefixed with test_:
var builder = WebApplication.CreateBuilder(args);
builder.Configuration.AddEnvironmentVariables("test_"); //Retrieves all environment variables that start with "test_" and removes the prefix when sourced from IConfiguration
builder.Services.AddDaprClient();
Configuring From Environment Variables
The SDK will read the following environment variables to configure the default values:
- DAPR_HTTP_ENDPOINT: used to find the HTTP endpoint of the Dapr sidecar, example: https://dapr-api.mycompany.com
- DAPR_GRPC_ENDPOINT: used to find the gRPC endpoint of the Dapr sidecar, example: https://dapr-grpc-api.mycompany.com
- DAPR_HTTP_PORT: if DAPR_HTTP_ENDPOINT is not set, this is used to find the HTTP local endpoint of the Dapr sidecar
- DAPR_GRPC_PORT: if DAPR_GRPC_ENDPOINT is not set, this is used to find the gRPC local endpoint of the Dapr sidecar
- DAPR_API_TOKEN: used to set the API Token
Note
If both DAPR_HTTP_ENDPOINT and DAPR_HTTP_PORT are specified, the port value from DAPR_HTTP_PORT will be ignored in favor of the port implicitly or explicitly defined on DAPR_HTTP_ENDPOINT. The same is true of both DAPR_GRPC_ENDPOINT and DAPR_GRPC_PORT.
Configuring gRPC channel options
Dapr’s use of CancellationToken for cancellation relies on the configuration of the gRPC channel options and this is enabled by default. If you need to configure these options yourself, make sure to enable the ThrowOperationCanceledOnCancellation setting.
var daprClient = new DaprClientBuilder()
.UseGrpcChannelOptions(new GrpcChannelOptions { ... ThrowOperationCanceledOnCancellation = true })
.Build();
Using cancellation with DaprClient
The APIs on DaprClient that perform asynchronous operations accept an optional CancellationToken
parameter. This follows a standard .NET idiom for cancellable operations. Note that when cancellation occurs, there is no guarantee that the remote endpoint stops processing the request, only that the client has stopped waiting for completion.
When an operation is cancelled, it will throw an OperationCanceledException.
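For example, you can bound any call with a timeout-based token and handle the cancellation; a minimal sketch (the store and key names are illustrative, and client is a DaprClient as in the examples above):
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
try
{
    // The client stops waiting after 5 seconds; the sidecar may still
    // finish processing the request on its side.
    await client.SaveStateAsync("mystatestore", "mykey", new { Color = "Green" }, cancellationToken: cts.Token);
}
catch (OperationCanceledException)
{
    // The operation was cancelled or timed out; decide whether to retry or surface the error.
}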
Understanding DaprClient JSON serialization
Many methods on DaprClient perform JSON serialization using the System.Text.Json serializer. Methods that accept an application data type as an argument will JSON serialize it, unless the documentation clearly states otherwise.
It is worth reading the System.Text.Json documentation if you have advanced requirements. The Dapr .NET SDK provides no unique serialization behavior or customizations - it relies on the underlying serializer to convert data to and from the application’s .NET types.
DaprClient is configured to use a serializer options object configured from JsonSerializerDefaults.Web. This means that DaprClient will use camelCase for property names, allow reading quoted numbers ("10.99"), and will bind properties case-insensitively. These are the same settings used with ASP.NET Core and the System.Net.Http.Json APIs, and are designed to follow interoperable web conventions.
System.Text.Json as of .NET 5.0 does not have good support for all F# language features built-in. If you are using F#, you may want to use one of the converter packages that add support for F#’s features, such as FSharp.SystemTextJson.
Simple guidance for JSON serialization
Your experience using JSON serialization and DaprClient will be smooth if you use a feature set that maps to JSON’s type system. These are general guidelines that will simplify your code where they can be applied.
- Avoid inheritance and polymorphism
- Do not attempt to serialize data with cyclic references
- Do not put complex or expensive logic in constructors or property accessors
- Use .NET types that map cleanly to JSON types (numeric types, strings, DateTime)
- Create your own classes for top-level messages, events, or state values so you can add properties in the future
- Design types with get/set properties OR use the supported pattern for immutable types with JSON (see the sketch below)
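As an example of that last guideline, a C# record gives you an immutable type that System.Text.Json can bind through its constructor; the type below is illustrative:
// Immutable: values are set through the constructor and exposed as get-only
// properties, which System.Text.Json binds automatically for records.
public sealed record OrderReceived(string OrderId, decimal Amount);

// With the JsonSerializerDefaults.Web settings used by DaprClient, an
// instance serializes as: {"orderId":"17","amount":10.99}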
Polymorphism and serialization
The System.Text.Json serializer used by DaprClient uses the declared type of values when performing serialization.
This section will use DaprClient.SaveStateAsync<TValue>(...) in examples, but the advice is applicable to any Dapr building block exposed by the SDK.
public class Widget
{
public string Color { get; set; }
}
...
// Storing a Widget value as JSON in the state store
Widget widget = new Widget() { Color = "Green", };
await client.SaveStateAsync("mystatestore", "mykey", widget);
In the example above, the type parameter TValue has its type argument inferred from the type of the widget variable. This is important because the System.Text.Json serializer will perform serialization based on the declared type of the value. The result is that the JSON value { "color": "Green" } will be stored.
Consider what happens when you try to use a derived type of Widget:
public class Widget
{
public string Color { get; set; }
}
public class SuperWidget : Widget
{
public bool HasSelfCleaningFeature { get; set; }
}
...
// Storing a SuperWidget value as JSON in the state store
Widget widget = new SuperWidget() { Color = "Green", HasSelfCleaningFeature = true, };
await client.SaveStateAsync("mystatestore", "mykey", widget);
In this example we’re using a SuperWidget but the variable’s declared type is Widget. Since the JSON serializer’s behavior is determined by the declared type, it only sees a simple Widget and will save the value { "color": "Green" } instead of { "color": "Green", "hasSelfCleaningFeature": true }.
If you want the properties of SuperWidget to be serialized, then the best option is to override the type argument with object. This will cause the serializer to include all data as it knows nothing about the type.
Widget widget = new SuperWidget() { Color = "Green", HasSelfCleaningFeature = true, };
await client.SaveStateAsync<object>("mystatestore", "mykey", widget);
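When reading the value back, deserialize into the concrete type you expect. A sketch, assuming the same store and key as above:
// Reading the state back as the derived type restores all serialized properties.
var restored = await client.GetStateAsync<SuperWidget>("mystatestore", "mykey");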
Error handling
Methods on DaprClient
will throw DaprException
or a subclass when a failure is encountered.
try
{
var widget = new Widget() { Color = "Green", };
await client.SaveStateAsync("mystatestore", "mykey", widget);
}
catch (DaprException ex)
{
// handle the exception, log, retry, etc.
}
The most common cases of failure will be related to:
- Incorrect configuration of a Dapr component
- Transient failures such as a networking problem
- Invalid data, such as a failure to deserialize JSON
In any of these cases you can examine more exception details through the .InnerException
property.
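For example, a sketch building on the snippet above that surfaces the underlying cause:
try
{
    var widget = new Widget() { Color = "Green", };
    await client.SaveStateAsync("mystatestore", "mykey", widget);
}
catch (DaprException ex)
{
    // The inner exception carries the underlying cause, such as an HTTP failure
    // from the sidecar or a JsonException raised during (de)serialization.
    Console.WriteLine($"Dapr operation failed: {ex.Message}. Cause: {ex.InnerException?.Message}");
}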
2.1.2 - Dapr actors .NET SDK
With the Dapr actor package, you can interact with Dapr virtual actors from a .NET application.
To get started, walk through the Dapr actors how-to guide.
2.1.2.1 - The IActorProxyFactory interface
Inside of an Actor
class or an ASP.NET Core project, the IActorProxyFactory
interface is recommended to create actor clients.
The AddActors(...)
method will register actor services with ASP.NET Core dependency injection.
- Outside of an actor instance: The IActorProxyFactory instance is available through dependency injection as a singleton service.
- Inside an actor instance: The IActorProxyFactory instance is available as a property (this.ProxyFactory).
The following is an example of creating a proxy inside an actor:
public async Task<MyData> GetDataAsync()
{
var proxy = this.ProxyFactory.CreateActorProxy<IOtherActor>(ActorId.CreateRandom(), "OtherActor");
await proxy.DoSomethingGreat();
return await this.StateManager.GetStateAsync<MyData>("my_data");
}
In this guide, you will learn how to use IActorProxyFactory
.
Tip
For a non-dependency-injected application, you can use the static methods on ActorProxy. These methods are error prone when you need to configure custom settings, so avoid them where possible.
Identifying an actor
All of the APIs on IActorProxyFactory
will require an actor type and actor id to communicate with an actor. For strongly-typed clients, you also need one of its interfaces.
- Actor type uniquely identifies the actor implementation across the whole application.
- Actor id uniquely identifies an instance of that type.
If you don’t have an actor id
and want to communicate with a new instance, create a random id with ActorId.CreateRandom()
. Since the random id is a cryptographically strong identifier, the runtime will create a new actor instance when you interact with it.
You can use the type ActorReference
to exchange an actor type and actor id with other actors as part of messages.
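For example, you can construct a reference, send it in a message, and bind it back to a proxy on the receiving side. A sketch; the Bind(...) member is assumed from the Dapr.Actors package, so verify it against your SDK version:
// Build a reference that can travel inside a message payload.
var reference = new ActorReference
{
    ActorId = ActorId.CreateRandom(),
    ActorType = "OtherActor",
};

// On the receiving side, turn the reference back into a usable proxy.
var other = (IOtherActor)reference.Bind(typeof(IOtherActor));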
Two styles of actor client
The actor client supports two different styles of invocation:
| Actor client style | Description |
| --- | --- |
| Strongly-typed | Strongly-typed clients are based on .NET interfaces and provide the typical benefits of strong-typing. They don't work with non-.NET actors. |
| Weakly-typed | Weakly-typed clients use the ActorProxy class. It is recommended to use these only when required for interop or other advanced reasons. |
Using a strongly-typed client
The following example uses the CreateActorProxy<>
method to create a strongly-typed client. CreateActorProxy<>
requires an actor interface type, and will return an instance of that interface.
// Create a proxy for IOtherActor to type OtherActor with a random id
var proxy = this.ProxyFactory.CreateActorProxy<IOtherActor>(ActorId.CreateRandom(), "OtherActor");
// Invoke a method defined by the interface to invoke the actor
//
// proxy is an implementation of IOtherActor so we can invoke its methods directly
await proxy.DoSomethingGreat();
Using a weakly-typed client
The following example uses the Create
method to create a weakly-typed client. Create
returns an instance of ActorProxy
.
// Create a proxy for type OtherActor with a random id
var proxy = this.ProxyFactory.Create(ActorId.CreateRandom(), "OtherActor");
// Invoke a method by name to invoke the actor
//
// proxy is an instance of ActorProxy.
await proxy.InvokeMethodAsync("DoSomethingGreat");
Since ActorProxy
is a weakly-typed proxy, you need to pass in the actor method name as a string.
You can also use ActorProxy
to invoke methods with both a request and a response message. Request and response messages will be serialized using the System.Text.Json
serializer.
// Create a proxy for type OtherActor with a random id
var proxy = this.ProxyFactory.Create(ActorId.CreateRandom(), "OtherActor");
// Invoke a method on the proxy to invoke the actor
//
// proxy is an instance of ActorProxy.
var request = new MyRequest() { Message = "Hi, it's me.", };
var response = await proxy.InvokeMethodAsync<MyRequest, MyResponse>("DoSomethingGreat", request);
When using a weakly-typed proxy, you must proactively define the correct actor method names and message types. When using a strongly-typed proxy, these names and types are defined for you as part of the interface definition.
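The MyRequest and MyResponse types used above are ordinary application-defined types; a minimal sketch:
public class MyRequest
{
    public string Message { get; set; }
}

public class MyResponse
{
    public string Message { get; set; }
}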
Actor method invocation exception details
The actor method invocation exception details are surfaced to the caller and the callee, providing an entry point to track down the issue. Exception details include:
- Method name
- Line number
- Exception type
- UUID
You use the UUID to match the exception on the caller and callee side. Below is an example of exception details:
Dapr.Actors.ActorMethodInvocationException: Remote Actor Method Exception, DETAILS: Exception: NotImplementedException, Method Name: ExceptionExample, Line Number: 14, Exception uuid: d291a006-84d5-42c4-b39e-d6300e9ac38b
Next steps
2.1.2.2 - Author & run actors
Author actors
ActorHost
The ActorHost
:
- Is a required constructor parameter of all actors
- Is provided by the runtime
- Must be passed to the base class constructor
- Contains all of the state that allows that actor instance to communicate with the runtime
internal class MyActor : Actor, IMyActor, IRemindable
{
public MyActor(ActorHost host) // Accept ActorHost in the constructor
: base(host) // Pass ActorHost to the base class constructor
{
}
}
Since the ActorHost
contains state unique to the actor, you don't need to pass the instance into other parts of your code. It's recommended to create your own instances of ActorHost only in tests.
Dependency injection
Actors support dependency injection of additional parameters into the constructor. Any other parameters you define will have their values satisfied from the dependency injection container.
internal class MyActor : Actor, IMyActor, IRemindable
{
public MyActor(ActorHost host, BankService bank) // Accept BankService in the constructor
: base(host)
{
...
}
}
An actor type should have a single public
constructor. The actor infrastructure uses the ActivatorUtilities
pattern for constructing actor instances.
You can register types with dependency injection in Startup.cs
to make them available. Read more about the different ways of registering your types.
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
...
// Register additional types with dependency injection.
services.AddSingleton<BankService>();
}
Each actor instance has its own dependency injection scope and remains in memory for some time after performing an operation. During that time, the dependency injection scope associated with the actor is also considered live. The scope will be released when the actor is deactivated.
If an actor injects an IServiceProvider
in the constructor, the actor will receive a reference to the IServiceProvider
associated with its scope. The IServiceProvider
can be used to resolve services dynamically in the future.
internal class MyActor : Actor, IMyActor, IRemindable
{
public MyActor(ActorHost host, IServiceProvider services) // Accept IServiceProvider in the constructor
: base(host)
{
...
}
}
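For instance, the actor can hold on to the provider and resolve a service on demand inside a method. A sketch reusing the BankService registration from above; DoWorkAsync and DoSomethingAsync are hypothetical names:
internal class MyActor : Actor, IMyActor
{
    private readonly IServiceProvider _services;

    public MyActor(ActorHost host, IServiceProvider services)
        : base(host)
    {
        _services = services;
    }

    public async Task DoWorkAsync()
    {
        // GetRequiredService comes from Microsoft.Extensions.DependencyInjection
        // and resolves from this actor's own dependency injection scope.
        var bank = _services.GetRequiredService<BankService>();
        await bank.DoSomethingAsync(); // hypothetical method on BankService
    }
}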
When using this pattern, avoid creating many instances of transient services which implement IDisposable
. Since the scope associated with an actor could be considered valid for a long time, you can accumulate many services in memory. See the dependency injection guidelines for more information.
IDisposable and actors
Actors can implement IDisposable
or IAsyncDisposable
. It’s recommended that you rely on dependency injection for resource management rather than implementing dispose functionality in application code. Dispose support is provided in the rare case where it is truly necessary.
Logging
Inside an actor class, you have access to an ILogger
instance through a property on the base Actor
class. This instance is connected to the ASP.NET Core logging system and should be used for all logging inside an actor. Read more about logging. You can configure a variety of different logging formats and output sinks.
Use structured logging with named placeholders like the example below:
public Task<MyData> GetDataAsync()
{
this.Logger.LogInformation("Getting state at {CurrentTime}", DateTime.UtcNow);
return this.StateManager.GetStateAsync<MyData>("my_data");
}
When logging, avoid using format strings like: $"Getting state at {DateTime.UtcNow}"
Logging should use the named placeholder syntax which offers better performance and integration with logging systems.
Using an explicit actor type name
By default, the type of the actor, as seen by clients, is derived from the name of the actor implementation class. The default name will be the class name (without namespace).
If desired, you can specify an explicit type name by attaching an ActorAttribute
attribute to the actor implementation class.
[Actor(TypeName = "MyCustomActorTypeName")]
internal class MyActor : Actor, IMyActor
{
// ...
}
In the example above, the name will be MyCustomActorTypeName
.
No change is needed to the code that registers the actor type with the runtime; providing the value via the attribute is all that is required.
Host actors on the server
Registering actors
Actor registration is part of ConfigureServices
in Startup.cs
. You can register services with dependency injection via the ConfigureServices
method. Registering the set of actor types is part of the registration of actor services.
Inside ConfigureServices
you can:
- Register the actor runtime (AddActors)
- Register actor types (options.Actors.RegisterActor<>)
- Configure actor runtime settings (options)
- Register additional service types for dependency injection into actors (services)
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
// Register actor runtime with DI
services.AddActors(options =>
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
// Configure default settings
options.ActorIdleTimeout = TimeSpan.FromMinutes(10);
options.ActorScanInterval = TimeSpan.FromSeconds(35);
options.DrainOngoingCallTimeout = TimeSpan.FromSeconds(35);
options.DrainRebalancedActors = true;
});
// Register additional services for use with actors
services.AddSingleton<BankService>();
}
Configuring JSON options
The actor runtime uses System.Text.Json for:
- Serializing data to the state store
- Handling requests from the weakly-typed client
By default, the actor runtime uses settings based on JsonSerializerDefaults.Web.
You can configure the JsonSerializerOptions
as part of ConfigureServices
:
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddActors(options =>
{
...
// Customize JSON options
options.JsonSerializerOptions = ...
});
}
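As an illustration, the elided assignment above might look like the following; the specific settings shown are hypothetical, not required:
// Start from the web defaults and adjust a single setting. Requires the
// System.Text.Json and System.Text.Json.Serialization namespaces.
options.JsonSerializerOptions = new JsonSerializerOptions(JsonSerializerDefaults.Web)
{
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
};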
Actors and routing
The ASP.NET Core hosting support for actors uses the endpoint routing system. The .NET SDK provides no support for hosting actors with the legacy routing system from early ASP.NET Core releases.
Since actors use endpoint routing, the actors HTTP handler is part of the middleware pipeline. The following is a minimal example of a Configure
method setting up the middleware pipeline with actors.
// in Startup.cs
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseRouting();
app.UseEndpoints(endpoints =>
{
// Register actors handlers that interface with the Dapr runtime.
endpoints.MapActorsHandlers();
});
}
The UseRouting
and UseEndpoints
calls are necessary to configure routing. Configure actors as part of the pipeline by adding MapActorsHandlers
inside the endpoint middleware.
This is a minimal example; it's valid for actors functionality to exist alongside:
- Controllers
- Razor Pages
- Blazor
- gRPC Services
- Dapr pub/sub handler
- other endpoints such as health checks
Problematic middleware
Certain middleware may interfere with the routing of Dapr requests to the actors handlers. In particular, the UseHttpsRedirection middleware is problematic for Dapr's default configuration. Dapr sends requests over unencrypted HTTP by default, which the UseHttpsRedirection middleware will block. This middleware cannot be used with Dapr at this time.
// in Startup.cs
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
// INVALID - this will block non-HTTPS requests
app.UseHttpsRedirection();
app.UseRouting();
app.UseEndpoints(endpoints =>
{
// Register actors handlers that interface with the Dapr runtime.
endpoints.MapActorsHandlers();
});
}
Next steps
2.1.2.3 - Actor serialization in the .NET SDK
Actor Serialization
The Dapr actor package enables you to use Dapr virtual actors within a .NET application with either a weakly- or strongly-typed client. Each utilizes a different serialization approach. This document will review the differences and convey a few key ground rules to understand in either scenario.
Please be advised that it is not a supported scenario to use the weakly- or strongly-typed actor clients interchangeably because of these different serialization approaches. The data persisted using one actor client will not be accessible using the other actor client, so it is important to pick one and use it consistently throughout your application.
Weakly-typed Dapr Actor client
In this section, you will learn how to configure your C# types so they are properly serialized and deserialized at runtime when using a weakly-typed actor client. These clients use string-based names of methods with request and response payloads that are serialized using the System.Text.Json serializer. Please note that this serialization framework is not specific to Dapr and is separately maintained by the .NET team within the .NET GitHub repository.
When using the weakly-typed Dapr Actor client to invoke methods from your various actors, it’s not necessary to independently serialize or deserialize the method payloads as this will happen transparently on your behalf by the SDK.
The client will use the latest version of System.Text.Json available for the version of .NET you're building against, and serialization is subject to all the capabilities and limitations described in the associated .NET documentation.
The serializer will be configured to use the JsonSerializerDefaults.Web default options unless overridden with a custom options configuration, which means the following are applied:
- Deserialization of the property name is performed in a case-insensitive manner
- Serialization of the property name is performed using camel casing unless the property is overridden with a [JsonPropertyName] attribute
- Deserialization will read numeric values from number and/or string values
Basic Serialization
In the following example, we present a simple class named Doodad, though it could just as well be a record.
public class Doodad
{
public Guid Id { get; set; }
public string Name { get; set; }
public int Count { get; set; }
}
By default, this will serialize using the names of the members as used in the type and whatever values it was instantiated with:
{"id": "a06ced64-4f42-48ad-84dd-46ae6a7e333d", "name": "DoodadName", "count": 5}
Override Serialized Property Name
The default property names can be overridden by applying the [JsonPropertyName]
attribute to desired properties.
Generally, this isn't going to be necessary for types you're persisting to the actor state, as you're not expected to read or write them independently of Dapr-associated functionality, but the following is provided just to clearly illustrate that it's possible.
Override Property Names on Classes
Here's an example demonstrating the use of JsonPropertyName to change the name of the first property after serialization. Note that the JsonPropertyName attribute on the Count property matches what it would be expected to serialize to anyway. This is largely just to demonstrate that applying the attribute won't negatively impact anything - in fact, it might be preferable if you later decide to change the default serialization options but still need to consistently access the properties serialized before that change, as JsonPropertyName will override those options.
public class Doodad
{
[JsonPropertyName("identifier")]
public Guid Id { get; set; }
public string Name { get; set; }
[JsonPropertyName("count")]
public int Count { get; set; }
}
This would serialize to the following:
{"identifier": "a06ced64-4f42-48ad-84dd-46ae6a7e333d", "name": "DoodadName", "count": 5}
Override Property Names on Records
Let's try doing the same thing with a positional record:
public record Thingy(string Name, [JsonPropertyName("count")] int Count);
Because an argument passed to a record's primary constructor can be applied to either a property or a field within the record, using the [JsonPropertyName] attribute may require specifying that you intend the attribute to apply to the property and not the field in some ambiguous cases. Should this be necessary, you'd indicate as much in the primary constructor with:
public record Thingy(string Name, [property: JsonPropertyName("count")] int Count);
If [property: ]
is applied to the [JsonPropertyName]
attribute where it’s not necessary, it will not negatively impact serialization or deserialization as the operation will
proceed normally as though it were a property (as it typically would if not marked as such).
Enumeration types
Enumerations, including flag enumerations, are serializable to JSON, but the value persisted may surprise you. Again, it's not expected that the developer should ever engage with the serialized data independently of Dapr, but the following information may at least help in diagnosing why a seemingly mild version migration isn't working as expected.
Take the following enum
type providing the various seasons in the year:
public enum Season
{
Spring,
Summer,
Fall,
Winter
}
We’ll go ahead and use a separate demonstration type that references our Season
and simultaneously illustrate how this works with records:
public record Engagement(string Name, Season TimeOfYear);
Given the following initialized instance:
var myEngagement = new Engagement("Ski Trip", Season.Winter);
This would serialize to the following JSON:
{"name": "Ski Trip", "season": 3}
It might be unexpected that our Season.Winter value was represented as 3, but this is because the serializer automatically uses numeric representations of the enum values, starting with zero for the first value and incrementing for each additional value. Again, if a migration were taking place and a developer had flipped the order of the enums, this would introduce a breaking change in your solution, as the serialized numeric values would point to different values when deserialized.
Fortunately, there is a JsonConverter available with System.Text.Json that will use a string-based value instead of the numeric value. The [JsonConverter] attribute needs to be applied to the enum type itself to enable this, but it will then be honored in any downstream serialization or deserialization operation that references the enum.
[JsonConverter(typeof(JsonStringEnumConverter<Season>))]
public enum Season
{
Spring,
Summer,
Fall,
Winter
}
Using the same values from our myEngagement
instance above, this would produce the following JSON instead:
{"name": "Ski Trip", "season": "Winter"}
As a result, the enum members can be shifted around without fear of introducing errors during deserialization.
Custom Enumeration Values
The System.Text.Json serialization platform doesn't, out of the box, support the use of [EnumMember] to allow you to change the value of an enum that's used during serialization or deserialization, but there are scenarios where this could be useful. Again, assume that you're tasked with refactoring the solution to apply some better names to your various enums. You're using the JsonStringEnumConverter<TType> detailed above, so you're saving the name of the enum member to state instead of a numeric value, but if you change the enum name, that will introduce a breaking change as the name will no longer match what's in state.
Do note that if you opt into using this approach, you should decorate all your enum members with the [EnumMember] attribute so that the values are consistently applied for each enum value instead of haphazardly. Nothing will validate this at build or runtime, but it is considered a best practice.
How can you specify the precise value persisted while still changing the name of the enum member in this scenario? Use a custom JsonConverter with a helper method that can pull the value out of the attached [EnumMember] attributes where provided. Add the following to your solution:
public sealed class EnumMemberJsonConverter<T> : JsonConverter<T> where T : struct, Enum
{
/// <summary>Reads and converts the JSON to type <typeparamref name="T" />.</summary>
/// <param name="reader">The reader.</param>
/// <param name="typeToConvert">The type to convert.</param>
/// <param name="options">An object that specifies serialization options to use.</param>
/// <returns>The converted value.</returns>
public override T Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
{
// Get the string value from the JSON reader
var value = reader.GetString();
// Loop through all the enum values
foreach (var enumValue in Enum.GetValues<T>())
{
// Get the value from the EnumMember attribute, if any
var enumMemberValue = GetValueFromEnumMember(enumValue);
// If the values match, return the enum value
if (value == enumMemberValue)
{
return enumValue;
}
}
// If no match found, throw an exception
throw new JsonException($"Invalid value for {typeToConvert.Name}: {value}");
}
/// <summary>Writes a specified value as JSON.</summary>
/// <param name="writer">The writer to write to.</param>
/// <param name="value">The value to convert to JSON.</param>
/// <param name="options">An object that specifies serialization options to use.</param>
public override void Write(Utf8JsonWriter writer, T value, JsonSerializerOptions options)
{
// Get the value from the EnumMember attribute, if any
var enumMemberValue = GetValueFromEnumMember(value);
// Write the value to the JSON writer
writer.WriteStringValue(enumMemberValue);
}
private static string GetValueFromEnumMember(T value)
{
MemberInfo[] member = typeof(T).GetMember(value.ToString(), BindingFlags.DeclaredOnly | BindingFlags.Static | BindingFlags.Public);
if (member.Length == 0)
return value.ToString();
object[] customAttributes = member[0].GetCustomAttributes(typeof(EnumMemberAttribute), false);
if (customAttributes.Length != 0)
{
EnumMemberAttribute enumMemberAttribute = (EnumMemberAttribute)customAttributes[0];
if (enumMemberAttribute.Value != null)
return enumMemberAttribute.Value;
}
return value.ToString();
}
}
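For the converter above to compile, the following namespaces are required:
using System;
using System.Reflection;
using System.Runtime.Serialization;
using System.Text.Json;
using System.Text.Json.Serialization;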
Now let's add a sample enum. We'll set a value that uses the lower-case version of each enum member to demonstrate this. Don't forget to decorate the enum with the JsonConverter attribute and reference our custom converter in place of the JsonStringEnumConverter used in the last section.
[JsonConverter(typeof(EnumMemberJsonConverter<Season>))]
public enum Season
{
[EnumMember(Value="spring")]
Spring,
[EnumMember(Value="summer")]
Summer,
[EnumMember(Value="fall")]
Fall,
[EnumMember(Value="winter")]
Winter
}
Let’s use our sample record from before. We’ll also add a [JsonPropertyName]
attribute just to augment the demonstration:
public record Engagement([property: JsonPropertyName("event")] string Name, Season TimeOfYear);
And finally, let’s initialize a new instance of this:
var myEngagement = new Engagement("Conference", Season.Fall);
This time, serialization will take into account the values from the attached [EnumMember]
attribute providing us a mechanism to refactor our application without necessitating
a complex versioning scheme for our existing enum values in the state.
{"event": "Conference", "season": "fall"}
Polymorphic Serialization
When working with polymorphic types in Dapr Actor clients, it is essential to handle serialization and deserialization correctly to ensure that the appropriate derived types are instantiated. Polymorphic serialization allows you to serialize objects of a base type while preserving the specific derived type information.
To enable polymorphic deserialization, you must use the [JsonPolymorphic] attribute on your base type and register each derived type with a [JsonDerivedType] attribute. Additionally, it is crucial to include the [AllowOutOfOrderMetadataProperties] attribute to ensure that metadata properties, such as $type, can be processed correctly by System.Text.Json even if they are not the first properties in the JSON object.
Example
[JsonPolymorphic]
[JsonDerivedType(typeof(DerivedSampleValue), nameof(DerivedSampleValue))]
[AllowOutOfOrderMetadataProperties]
public abstract class SampleValueBase
{
public string CommonProperty { get; set; }
}
public class DerivedSampleValue : SampleValueBase
{
public string SpecificProperty { get; set; }
}
In this example, the SampleValueBase class is marked with the [JsonPolymorphic], [JsonDerivedType], and [AllowOutOfOrderMetadataProperties] attributes. This setup ensures that the $type metadata property can be correctly identified and processed during deserialization, regardless of its position in the JSON object.
By following this approach, you can effectively manage polymorphic serialization and deserialization in your Dapr Actor clients, ensuring that the correct derived types are instantiated and used.
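As a usage sketch, the following invokes a hypothetical ProcessValue actor method with a polymorphic payload through the weakly-typed client; the $type metadata preserves the derived type across the round trip:
// The declared type is the base class, but the derived type survives
// serialization. proxy is an ActorProxy instance created as shown earlier.
SampleValueBase value = new DerivedSampleValue
{
    CommonProperty = "common",
    SpecificProperty = "specific",
};
var result = await proxy.InvokeMethodAsync<SampleValueBase, SampleValueBase>("ProcessValue", value);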
Strongly-typed Dapr Actor client
In this section, you will learn how to configure your classes and records so they are properly serialized and deserialized at runtime when using a strongly-typed actor client. These clients are implemented using .NET interfaces and are not compatible with Dapr Actors written using other languages.
This actor client serializes data using an engine called the Data Contract Serializer which converts your C# types to and from XML documents. This serialization framework is not specific to Dapr and is separately maintained by the .NET team within the .NET GitHub repository.
When sending or receiving primitives (like strings or ints), this serialization happens transparently and there’s no requisite preparation needed on your part. However, when working with complex types such as those you create, there are some important rules to take into consideration so this process works smoothly.
Serializable Types
There are several important considerations to keep in mind when using the Data Contract Serializer:
- By default, all types, read/write properties (after construction) and fields marked as publicly visible are serialized
- All types must either expose a public parameterless constructor or be decorated with the DataContractAttribute attribute
- Init-only setters are only supported with the use of the DataContractAttribute attribute
- Read-only fields, properties without both a getter and a setter, and internal properties or properties with private getters and setters are ignored during serialization
- Serialization is supported for types that use other complex types that are not themselves marked with the DataContractAttribute attribute through the use of the KnownTypesAttribute attribute
- If a type is marked with the DataContractAttribute attribute, all members you wish to serialize and deserialize must be decorated with the DataMemberAttribute attribute as well or they’ll be set to their default values
How does deserialization work?
The approach used for deserialization depends on whether or not the type is decorated with the DataContractAttribute attribute. If this attribute isn’t present, an instance of the type is created using the parameterless constructor. Each of the properties and fields are then mapped into the type using their respective setters and the instance is returned to the caller.
If the type is marked with [DataContract]
, the serializer instead uses reflection to read the metadata of the type and determine which properties or fields should be included based on whether or not they’re marked with the DataMemberAttribute attribute as it’s performed on an opt-in basis. It then allocates an uninitialized object in memory (avoiding the use of any constructors, parameterless or not) and then sets the value directly on each mapped property or field, even if private or uses init-only setters. Serialization callbacks are invoked as applicable throughout this process and then the object is returned to the caller.
Use of the serialization attributes is highly recommended as they grant more flexibility to override names and namespaces and generally use more of the modern C# functionality. While the default serializer can be relied on for primitive types, it’s not recommended for any of your own types, whether they be classes, structs or records. It’s recommended that if you decorate a type with the DataContractAttribute attribute, you also explicitly decorate each of the members you want to serialize or deserialize with the DataMemberAttribute attribute as well.
.NET Classes
Classes are fully supported in the Data Contract Serializer provided that the other rules detailed on this page and in the Data Contract Serializer documentation are also followed.
The most important thing to remember here is that you must either have a public parameterless constructor or you must decorate it with the appropriate attributes. Let’s review some examples to really clarify what will and won’t work.
In the following example, we present a simple class named Doodad. We don't provide an explicit constructor here, so the compiler will provide a default parameterless constructor. Because we're using supported primitive types (Guid, string, and int) and all our members have a public getter and setter, no attributes are required and we'll be able to use this class without issue when sending and receiving it from a Dapr actor method.
public class Doodad
{
public Guid Id { get; set; }
public string Name { get; set; }
public int Count { get; set; }
}
By default, this will serialize using the names of the members as used in the type and whatever values it was instantiated with:
<Doodad>
<Id>a06ced64-4f42-48ad-84dd-46ae6a7e333d</Id>
<Name>DoodadName</Name>
<Count>5</Count>
</Doodad>
So let's tweak it - let's add our own constructor and use only init-only setters on the members. This will fail to serialize and deserialize, not because of the init-only setters, but because there's no parameterless constructor.
// WILL NOT SERIALIZE PROPERLY!
public class Doodad
{
public Doodad(string name, int count)
{
Id = Guid.NewGuid();
Name = name;
Count = count;
}
public Guid Id { get; set; }
public string Name { get; init; }
public int Count { get; init; }
}
If we add a public parameterless constructor to the type, we’re good to go and this will work without further annotations.
public class Doodad
{
public Doodad()
{
}
public Doodad(string name, int count)
{
Id = Guid.NewGuid();
Name = name;
Count = count;
}
public Guid Id { get; set; }
public string Name { get; set; }
public int Count { get; set; }
}
But what if we don’t want to add this constructor? Perhaps you don’t want your developers to accidentally create an instance of this Doodad using an unintended constructor. That’s where the more flexible attributes are useful. If you decorate your type with a DataContractAttribute attribute, you can drop your parameterless constructor and it will work once again.
[DataContract]
public class Doodad
{
public Doodad(string name, int count)
{
Id = Guid.NewGuid();
Name = name;
Count = count;
}
public Guid Id { get; set; }
public string Name { get; set; }
public int Count { get; set; }
}
In the above example, we don’t need to also use the DataMemberAttribute attributes because again, we’re using built-in primitives that the serializer supports. But, we do get more flexibility if we use the attributes. From the DataContractAttribute attribute, we can specify our own XML namespace with the Namespace argument and, via the Name argument, change the name of the type as used when serialized into the XML document.
It's a recommended practice to apply the DataContractAttribute attribute to the type and the DataMemberAttribute attributes to all the members you want to serialize anyway - if they're not necessary and you're not changing the default values, they'll just be ignored, but they give you a mechanism to opt into serializing members that wouldn't otherwise have been included, such as those marked as private or that are themselves complex types or collections.
Note that if you do opt into serializing your private members, their values will be serialized into plain text - they can very well be viewed, intercepted, and potentially manipulated depending on how you're handling the data once serialized, so it's an important consideration whether you want to mark these members or not in your use case.
In the following example, we'll look at using the attributes to change the serialized names of some of the members as well as introduce the IgnoreDataMemberAttribute attribute. As the name indicates, this tells the serializer to skip this property even though it'd otherwise be eligible to serialize. Further, because we're decorating the type with the DataContractAttribute attribute, we can use init-only setters on the properties.
[DataContract(Name="Doodad")]
public class Doodad
{
public Doodad(string name = "MyDoodad", int count = 5)
{
Id = Guid.NewGuid();
Name = name;
Count = count;
}
[DataMember(Name = "id")]
public Guid Id { get; init; }
[IgnoreDataMember]
public string Name { get; init; }
[DataMember]
public int Count { get; init; }
}
When this is serialized, because we're changing the names of the serialized members, we can expect a new instance of Doodad created with the default values to be serialized as:
<Doodad>
<id>a06ced64-4f42-48ad-84dd-46ae6a7e333d</id>
<Count>5</Count>
</Doodad>
Classes in C# 12 - Primary Constructors
C# 12 brought us primary constructors on classes. Use of a primary constructor means the compiler will be prevented from creating the default implicit parameterless constructor. While a primary constructor on a class doesn’t generate any public properties, it does mean that if you pass this primary constructor any arguments or have non-primitive types in your class, you’ll either need to specify your own parameterless constructor or use the serialization attributes.
Here’s an example where we’re using the primary constructor to inject an ILogger to a field and add our own parameterless constructor without the need for any attributes.
public class Doodad(ILogger<Doodad> _logger)
{
public Doodad() {} //Our parameterless constructor
public Doodad(string name, int count)
{
Id = Guid.NewGuid();
Name = name;
Count = count;
}
public Guid Id { get; set; }
public string Name { get; set; }
public int Count { get; set; }
}
And using our serialization attributes (again, opting for init-only setters since we’re using the serialization attributes):
[DataContract]
public class Doodad(ILogger<Doodad> _logger)
{
public Doodad(string name, int count)
{
Id = Guid.NewGuid();
Name = name;
Count = count;
}
[DataMember]
public Guid Id { get; init; }
[DataMember]
public string Name { get; init; }
[DataMember]
public int Count { get; init; }
}
.NET Structs
Structs are supported by the Data Contract serializer provided that they are marked with the DataContractAttribute attribute and the members you wish to serialize are marked with the DataMemberAttribute attribute. Further, to support deserialization, the struct will also need to have a parameterless constructor. This works even if you define your own parameterless constructor as enabled in C# 10.
[DataContract]
public struct Doodad
{
[DataMember]
public int Count { get; set; }
}
.NET Records
Records were introduced in C# 9 and follow precisely the same rules as classes when it comes to serialization. We recommend that you decorate all your records with the DataContractAttribute attribute and the members you wish to serialize with DataMemberAttribute attributes so you don't experience any deserialization issues using this or other newer C# functionality. Because record classes use init-only setters for properties by default and encourage the use of the primary constructor, applying these attributes to your types ensures that the serializer can properly accommodate your types as-is.
Typically records are presented as a simple one-line statement using the new primary constructor concept:
public record Doodad(Guid Id, string Name, int Count);
This will throw an error encouraging the use of the serialization attributes as soon as you use it in a Dapr actor method invocation because there’s no parameterless constructor available nor is it decorated with the aforementioned attributes.
Here we add an explicit parameterless constructor and it won’t throw an error, but none of the values will be set during deserialization since they’re created with init-only setters. Because this doesn’t use the DataContractAttribute attribute or the DataMemberAttribute attribute on any members, the serializer will be unable to map the target members correctly during deserialization.
public record Doodad(Guid Id, string Name, int Count)
{
public Doodad() {}
}
This approach does without the additional constructor and instead relies on the serialization attributes. Because we mark the type with the DataContractAttribute attribute and decorate each member with its own DataMemberAttribute attribute, the serialization engine will be able to map from the XML document to our type without issue.
[DataContract]
public record Doodad(
[property: DataMember] Guid Id,
[property: DataMember] string Name,
[property: DataMember] int Count);
Supported Primitive Types
There are several types built into .NET that are considered primitive and eligible for serialization without additional effort on the part of the developer:
There are additional types that aren’t actually primitives but have similar built-in support:
Again, if you want to pass these types around via your actor methods, no additional consideration is necessary as they'll be serialized and deserialized without issue. Further, types that are themselves marked with the [SerializableAttribute](https://learn.microsoft.com/en-us/dotnet/api/system.serializableattribute) attribute will be serialized.
Enumeration Types
Enumerations, including flag enumerations are serializable if appropriately marked. The enum members you wish to be serialized must be marked with the EnumMemberAttribute attribute in order to be serialized. Passing a custom value into the optional Value argument on this attribute will allow you to specify the value used for the member in the serialized document instead of having the serializer derive it from the name of the member.
The enum type does not require that the type be decorated with the DataContractAttribute
attribute - only that the members you wish to serialize be decorated with the EnumMemberAttribute
attributes.
public enum Colors
{
[EnumMember]
Red,
[EnumMember(Value="g")]
Green,
Blue, //Even if used by a type, this value will not be serialized as it's not decorated with the EnumMember attribute
}
Collection Types
With regard to the data contract serializer, all collection types that implement the IEnumerable interface, including arrays and generic collections, are considered collections. Those types that implement IDictionary or the generic IDictionary<TKey, TValue> are considered dictionary collections; all others are list collections.
Not unlike other complex types, collection types must have a parameterless constructor available. Further, they must also have a method called Add so they can be properly serialized and deserialized. The types used by these collection types must themselves be marked with the DataContractAttribute
attribute or otherwise be serializable as described throughout this document.
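As an illustration, a generic list of data contract types satisfies these rules without further work (a sketch using the Doodad struct above):
using System.Collections.Generic;

// List<T> implements IEnumerable, has a parameterless constructor, and exposes
// an Add method, so it serializes as a list collection of Doodad values.
var doodads = new List<Doodad>
{
    new Doodad { Count = 1 },
    new Doodad { Count = 2 },
};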
Data Contract Versioning
As the data contract serializer is only used in Dapr with respect to serializing the values in the .NET SDK to and from the Dapr actor instances via the proxy methods, there’s little need to consider versioning of data contracts as the data isn’t being persisted between application versions using the same serializer. For those interested in learning more about data contract versioning visit here.
Known Types
Nesting your own complex types is easily accommodated by marking each of the types with the DataContractAttribute attribute. This informs the serializer as to how deserialization should be performed. But what if you’re working with polymorphic types and one of your members is a base class or interface with derived classes or other implementations? Here, you’ll use the KnownTypeAttribute attribute to give a hint to the serializer about how to proceed.
When you apply the KnownTypeAttribute attribute to a type, you are informing the data contract serializer about what subtypes it might encounter allowing it to properly handle the serialization and deserialization of these types, even when the actual type at runtime is different from the declared type.
[DataContract]
[KnownType(typeof(DerivedClass))]
public class BaseClass
{
//Members of the base class
}
[DataContract]
public class DerivedClass : BaseClass
{
//Additional members of the derived class
}
In this example, the BaseClass
is marked with [KnownType(typeof(DerivedClass))]
which tells the data contract serializer that DerivedClass
is a possible implementation of BaseClass
that it may need to serialize or deserialize. Without this attribute, the serializer would not be aware of the DerivedClass
when it encounters an instance of BaseClass
that is actually of type DerivedClass
and this could lead to a serialization exception because the serializer would not know how to handle the derived type. By specifying all possible derived types as known types, you ensure that the serializer can process the type and its members correctly.
For more information and examples about using [KnownType]
, please refer to the official documentation.
2.1.2.4 - How to: Run and use virtual actors in the .NET SDK
The Dapr actor package allows you to interact with Dapr virtual actors from a .NET application. In this guide, you learn how to:
- Create an Actor (
MyActor
). - Invoke its methods on the client application.
MyActor
├── MyActor.Interfaces
├── MyActorService
└── MyActorClient
The interface project (\MyActor\MyActor.Interfaces)
This project contains the interface definition for the actor. Actor interfaces can be defined in any project with any name. The interface defines the actor contract shared by:
- The actor implementation
- The clients calling the actor
Because client projects may depend on it, it’s better to define it in an assembly separate from the actor implementation.
The actor service project (\MyActor\MyActorService)
This project implements the ASP.NET Core web service that hosts the actor. It contains the implementation of the actor, MyActor.cs
. An actor implementation is a class that:
- Derives from the base type Actor
- Implements the interfaces defined in the
MyActor.Interfaces
project.
An actor class must also implement a constructor that accepts an ActorHost instance and passes it to the base Actor class.
The actor client project (\MyActor\MyActorClient)
This project contains the implementation of the actor client, which calls the MyActor methods defined in the interfaces project.
Prerequisites
- Dapr CLI installed.
- Initialized Dapr environment.
- .NET 8 or .NET 9 installed
Step 0: Prepare
Since we’ll be creating 3 projects, choose an empty directory to start from, and open it in your terminal of choice.
Step 1: Create actor interfaces
The actor interface defines the actor contract that is shared by the actor implementation and the clients calling the actor.
An actor interface is defined with the below requirements:
- The actor interface must inherit the Dapr.Actors.IActor interface
- The return type of an actor method must be Task or Task<T>
- An actor method can have at most one argument
Create interface project and add dependencies
# Create Actor Interfaces
dotnet new classlib -o MyActor.Interfaces
cd MyActor.Interfaces
# Add Dapr.Actors nuget package. Please use the latest package version from nuget.org
dotnet add package Dapr.Actors
cd ..
Implement IMyActor interface
Define IMyActor
interface and MyData
data object. Paste the following code into MyActor.cs
in the MyActor.Interfaces
project.
using Dapr.Actors;
using Dapr.Actors.Runtime;
using System.Threading.Tasks;
namespace MyActor.Interfaces
{
public interface IMyActor : IActor
{
Task<string> SetDataAsync(MyData data);
Task<MyData> GetDataAsync();
Task RegisterReminder();
Task UnregisterReminder();
Task<IActorReminder> GetReminder();
Task RegisterTimer();
Task UnregisterTimer();
}
public class MyData
{
public string PropertyA { get; set; }
public string PropertyB { get; set; }
public override string ToString()
{
var propAValue = this.PropertyA == null ? "null" : this.PropertyA;
var propBValue = this.PropertyB == null ? "null" : this.PropertyB;
return $"PropertyA: {propAValue}, PropertyB: {propBValue}";
}
}
}
Step 2: Create actor service
Dapr uses an ASP.NET Core web service to host the actor service. This section implements the IMyActor actor interface and registers the actor with the Dapr runtime.
Create actor service project and add dependencies
# Create ASP.Net Web service to host Dapr actor
dotnet new web -o MyActorService
cd MyActorService
# Add Dapr.Actors.AspNetCore nuget package. Please use the latest package version from nuget.org
dotnet add package Dapr.Actors.AspNetCore
# Add Actor Interface reference
dotnet add reference ../MyActor.Interfaces/MyActor.Interfaces.csproj
cd ..
Add actor implementation
Implement the IMyActor interface and derive from the Dapr.Actors.Actor class. The following example also shows how to use actor reminders. For an actor to use reminders, it must derive from IRemindable. If you don't intend to use the reminder feature, you can skip implementing IRemindable and the reminder-specific methods shown in the code below.
Paste the following code into MyActor.cs
in the MyActorService
project:
using Dapr.Actors;
using Dapr.Actors.Runtime;
using MyActor.Interfaces;
using System;
using System.Threading.Tasks;
namespace MyActorService
{
internal class MyActor : Actor, IMyActor, IRemindable
{
// The constructor must accept ActorHost as a parameter, and can also accept additional
// parameters that will be retrieved from the dependency injection container
//
/// <summary>
/// Initializes a new instance of MyActor
/// </summary>
/// <param name="host">The Dapr.Actors.Runtime.ActorHost that will host this actor instance.</param>
public MyActor(ActorHost host)
: base(host)
{
}
/// <summary>
/// This method is called whenever an actor is activated.
/// An actor is activated the first time any of its methods are invoked.
/// </summary>
protected override Task OnActivateAsync()
{
// Provides opportunity to perform some optional setup.
Console.WriteLine($"Activating actor id: {this.Id}");
return Task.CompletedTask;
}
/// <summary>
/// This method is called whenever an actor is deactivated after a period of inactivity.
/// </summary>
protected override Task OnDeactivateAsync()
{
// Provides opportunity to perform optional cleanup.
Console.WriteLine($"Deactivating actor id: {this.Id}");
return Task.CompletedTask;
}
/// <summary>
/// Set MyData into actor's private state store
/// </summary>
/// <param name="data">the user-defined MyData which will be stored into state store as "my_data" state</param>
public async Task<string> SetDataAsync(MyData data)
{
// Data is saved to configured state store implicitly after each method execution by Actor's runtime.
// Data can also be saved explicitly by calling this.StateManager.SaveStateAsync();
// State to be saved must be DataContract serializable.
await this.StateManager.SetStateAsync<MyData>(
"my_data", // state name
data); // data saved for the named state "my_data"
return "Success";
}
/// <summary>
/// Get MyData from actor's private state store
/// </summary>
/// <return>the user-defined MyData which is stored into state store as "my_data" state</return>
public Task<MyData> GetDataAsync()
{
// Gets state from the state store.
return this.StateManager.GetStateAsync<MyData>("my_data");
}
/// <summary>
/// Register MyReminder reminder with the actor
/// </summary>
public async Task RegisterReminder()
{
await this.RegisterReminderAsync(
"MyReminder", // The name of the reminder
null, // User state passed to IRemindable.ReceiveReminderAsync()
TimeSpan.FromSeconds(5), // Time to delay before invoking the reminder for the first time
TimeSpan.FromSeconds(5)); // Time interval between reminder invocations after the first invocation
}
/// <summary>
/// Get MyReminder reminder details with the actor
/// </summary>
public async Task<IActorReminder> GetReminder()
{
return await this.GetReminderAsync("MyReminder");
}
/// <summary>
/// Unregister MyReminder reminder with the actor
/// </summary>
public Task UnregisterReminder()
{
Console.WriteLine("Unregistering MyReminder...");
return this.UnregisterReminderAsync("MyReminder");
}
/// <summary>
/// Implement IRemindable.ReceiveReminderAsync(), which is the callback invoked when an actor reminder is triggered.
/// </summary>
public Task ReceiveReminderAsync(string reminderName, byte[] state, TimeSpan dueTime, TimeSpan period)
{
Console.WriteLine("ReceiveReminderAsync is called!");
return Task.CompletedTask;
}
/// <summary>
/// Register MyTimer timer with the actor
/// </summary>
public Task RegisterTimer()
{
return this.RegisterTimerAsync(
"MyTimer", // The name of the timer
nameof(this.OnTimerCallBack), // Timer callback
null, // User state passed to OnTimerCallback()
TimeSpan.FromSeconds(5), // Time to delay before the async callback is first invoked
TimeSpan.FromSeconds(5)); // Time interval between invocations of the async callback
}
/// <summary>
/// Unregister MyTimer timer with the actor
/// </summary>
public Task UnregisterTimer()
{
Console.WriteLine("Unregistering MyTimer...");
return this.UnregisterTimerAsync("MyTimer");
}
/// <summary>
/// Timer callback once timer is expired
/// </summary>
private Task OnTimerCallBack(byte[] data)
{
Console.WriteLine("OnTimerCallBack is called!");
return Task.CompletedTask;
}
}
}
Register actor runtime with ASP.NET Core startup
The Actor runtime is configured through ASP.NET Core Startup.cs
.
The runtime uses the ASP.NET Core dependency injection system to register actor types and essential services. This integration is provided through the AddActors(...)
method call in ConfigureServices(...)
. Use the delegate passed to AddActors(...)
to register actor types and configure actor runtime settings. You can register additional types for dependency injection inside ConfigureServices(...)
. These will be available to be injected into the constructors of your Actor types.
Actors are implemented via HTTP calls with the Dapr runtime. This functionality is part of the application’s HTTP processing pipeline and is registered inside UseEndpoints(...)
inside Configure(...)
.
Paste the following code into Startup.cs
in the MyActorService
project:
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
namespace MyActorService
{
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddActors(options =>
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
});
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseRouting();
app.UseEndpoints(endpoints =>
{
// Register actors handlers that interface with the Dapr runtime.
endpoints.MapActorsHandlers();
});
}
}
}
Step 3: Add a client
Create a simple console app to call the actor service. The Dapr SDK provides an actor proxy client to invoke actor methods defined in the actor interface.
Create actor client project and add dependencies
# Create Actor's Client
dotnet new console -o MyActorClient
cd MyActorClient
# Add Dapr.Actors nuget package. Please use the latest package version from nuget.org
dotnet add package Dapr.Actors
# Add Actor Interface reference
dotnet add reference ../MyActor.Interfaces/MyActor.Interfaces.csproj
cd ..
Invoke actor methods with strongly-typed client
You can use ActorProxy.Create<IMyActor>(..)
to create a strongly-typed client and invoke methods on the actor.
Paste the following code into Program.cs
in the MyActorClient
project:
using System;
using System.Threading.Tasks;
using Dapr.Actors;
using Dapr.Actors.Client;
using MyActor.Interfaces;
namespace MyActorClient
{
class Program
{
static async Task Main(string[] args)
{
Console.WriteLine("Startup up...");
// Registered Actor Type in Actor Service
var actorType = "MyActor";
// An ActorId uniquely identifies an actor instance
// If the actor matching this id does not exist, it will be created
var actorId = new ActorId("1");
// Create the local proxy by using the same interface that the service implements.
//
// You need to provide the type and id so the actor can be located.
var proxy = ActorProxy.Create<IMyActor>(actorId, actorType);
// Now you can use the actor interface to call the actor's methods.
Console.WriteLine($"Calling SetDataAsync on {actorType}:{actorId}...");
var response = await proxy.SetDataAsync(new MyData()
{
PropertyA = "ValueA",
PropertyB = "ValueB",
});
Console.WriteLine($"Got response: {response}");
Console.WriteLine($"Calling GetDataAsync on {actorType}:{actorId}...");
var savedData = await proxy.GetDataAsync();
Console.WriteLine($"Got response: {savedData}");
}
}
}
Running the code
You can now use the projects you've created to test the sample.
Run MyActorService
Since MyActorService is hosting actors, it needs to be run with the Dapr CLI:
cd MyActorService
dapr run --app-id myapp --app-port 5000 --dapr-http-port 3500 -- dotnet run
You will see commandline output from both daprd and MyActorService in this terminal. You should see something like the following, which indicates that the application started successfully:
...
ℹ️  Updating metadata for app command: dotnet run
✅  You're up and running! Both Dapr and your app logs will appear here.
== APP == info: Microsoft.Hosting.Lifetime[0]
== APP ==       Now listening on: https://localhost:5001
== APP == info: Microsoft.Hosting.Lifetime[0]
== APP ==       Now listening on: http://localhost:5000
== APP == info: Microsoft.Hosting.Lifetime[0]
== APP ==       Application started. Press Ctrl+C to shut down.
== APP == info: Microsoft.Hosting.Lifetime[0]
== APP ==       Hosting environment: Development
== APP == info: Microsoft.Hosting.Lifetime[0]
== APP ==       Content root path: /Users/ryan/actortest/MyActorService
Run MyActorClient
MyActorClient is acting as the client, and it can be run normally with dotnet run.
Open a new terminal and navigate to the MyActorClient directory. Then run the project with:
dotnet run
You should see commandline output like:
Starting up...
Calling SetDataAsync on MyActor:1...
Got response: Success
Calling GetDataAsync on MyActor:1...
Got response: PropertyA: ValueA, PropertyB: ValueB
💡 This sample relies on a few assumptions. The default listening port for an ASP.NET Core web project is 5000, which is being passed to dapr run as --app-port 5000. The default HTTP port for the Dapr sidecar is 3500. We're telling the sidecar for MyActorService to use 3500 so that MyActorClient can rely on the default value.
Now you have successfully created an actor service and client. See the related links section to learn more.
Related links
2.1.3 - Dapr Workflow .NET SDK
2.1.3.1 - DaprWorkflowClient usage
Lifetime management
A DaprWorkflowClient
holds access to networking resources in the form of TCP sockets used to communicate with the Dapr sidecar as well
as other types used in the management and operation of Workflows. DaprWorkflowClient
implements IAsyncDisposable
to support eager
cleanup of resources.
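As a minimal sketch (assuming workflowClient is a DaprWorkflowClient instance you created and own yourself; instances resolved from dependency injection are disposed by the container instead), eager cleanup looks like:

// workflowClient is assumed to be a DaprWorkflowClient you own.
// await using guarantees DisposeAsync runs when the scope exits, eagerly
// releasing the underlying connection to the Dapr sidecar.
await using (workflowClient)
{
    //...final operations against the workflow engine...
}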
Dependency Injection
The AddDaprWorkflow()
method will register the Dapr workflow services with ASP.NET Core dependency injection. This method
requires an options delegate that defines each of the workflows and activities you wish to register and use in your application.
Note
This method will attempt to register a DaprClient instance, but this will only work if it hasn’t already been registered with another lifetime. For example, an earlier call to AddDaprClient() with a singleton lifetime will always use a singleton regardless of the lifetime chosen for the workflow client. The DaprClient instance will be used to communicate with the Dapr sidecar and, if it’s not yet registered, the lifetime provided during the AddDaprWorkflow() registration will be used to register the DaprWorkflowClient as well as its own dependencies.
Singleton Registration
By default, the AddDaprWorkflow
method will register the DaprWorkflowClient
and associated services using a singleton lifetime. This means
that the services will be instantiated only a single time.
The following is an example of how registration of the DaprWorkflowClient would appear in a typical Program.cs file:
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDaprWorkflow(options => {
    options.RegisterWorkflow<YourWorkflow>();
    options.RegisterActivity<YourActivity>();
});
var app = builder.Build();
await app.RunAsync();
Scoped Registration
While this may generally be acceptable in your use case, you may instead wish to override the lifetime specified. This is done by passing a ServiceLifetime
argument in AddDaprWorkflow
. For example, you may wish to inject another scoped service into your ASP.NET Core processing pipeline
that needs context used by the DaprClient
that wouldn’t be available if the former service were registered as a singleton.
This is demonstrated in the following example:
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDaprWorkflow(options => {
    options.RegisterWorkflow<YourWorkflow>();
    options.RegisterActivity<YourActivity>();
}, ServiceLifetime.Scoped);
var app = builder.Build();
await app.RunAsync();
Transient Registration
Finally, Dapr services can also be registered using a transient lifetime meaning that they will be initialized every time they’re injected. This is demonstrated in the following example:
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDaprWorkflow(options => {
    options.RegisterWorkflow<YourWorkflow>();
    options.RegisterActivity<YourActivity>();
}, ServiceLifetime.Transient);
var app = builder.Build();
await app.RunAsync();
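Once registered, the DaprWorkflowClient can be injected and used to interact with workflows. The following is a hedged sketch, not a definitive implementation: the ScheduleNewWorkflowAsync and GetWorkflowStateAsync calls reflect the Dapr.Workflow package surface, and OrderService, OrderPayload, and OrderProcessingWorkflow are illustrative names (the latter two appear in the sample later in this document):

using Dapr.Workflow;

public sealed class OrderService(DaprWorkflowClient workflowClient)
{
    public async Task<WorkflowState> ProcessAsync(OrderPayload order)
    {
        //Schedule a new instance of the workflow and capture its instance ID
        var instanceId = await workflowClient.ScheduleNewWorkflowAsync(
            name: nameof(OrderProcessingWorkflow), input: order);

        //Query the current state of the scheduled instance
        return await workflowClient.GetWorkflowStateAsync(instanceId);
    }
}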
Injecting Services into Workflow Activities
Workflow activities support the same dependency injection that developers have come to expect of modern C# applications. Assuming a proper
registration at startup, any such type can be injected into the constructor of the workflow activity and is available for use during the execution of the workflow. This makes it simple to add logging via an injected ILogger
or access to other Dapr
building blocks by injecting DaprClient
or DaprJobsClient
, for example.
internal sealed class SquareNumberActivity : WorkflowActivity<int, int>
{
    private readonly ILogger _logger;

    public SquareNumberActivity(ILogger logger)
    {
        this._logger = logger;
    }

    public override Task<int> RunAsync(WorkflowActivityContext context, int input)
    {
        this._logger.LogInformation("Squaring the value {number}", input);
        var result = input * input;
        this._logger.LogInformation("Got a result of {squareResult}", result);
        return Task.FromResult(result);
    }
}
Using ILogger in Workflow
Because workflows must be deterministic, it is not possible to inject arbitrary services into them. For example,
if you were able to inject a standard ILogger
into a workflow and it needed to be replayed because of an error,
subsequent replay from the event source log would result in the log recording additional operations that didn’t actually
take place a second or third time because their results were sourced from the log. This has the potential to introduce
a significant amount of confusion. Rather, a replay-safe logger is made available for use within workflows. It will only log events the first time the workflow runs and will not log anything whenever the workflow is being replayed.
This logger can be retrieved from a method present on the WorkflowContext available on your workflow instance and is otherwise used precisely as you would use any other ILogger instance.
An end-to-end sample demonstrating this can be seen in the .NET SDK repository but a brief extraction of this sample is available below.
public class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
{
    public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
    {
        string orderId = context.InstanceId;
        var logger = context.CreateReplaySafeLogger<OrderProcessingWorkflow>(); //Use this method to access the logger instance
        logger.LogInformation("Received order {orderId} for {quantity} {name} at ${totalCost}", orderId, order.Quantity, order.Name, order.TotalCost);
        //...
    }
}
2.1.3.2 - How to: Author and manage Dapr Workflow in the .NET SDK
Let’s create a Dapr workflow and invoke it using the console. In the provided order processing workflow example, the console prompts provide directions on how to both purchase and restock items. In this guide, you will:
- Deploy a .NET console application (WorkflowConsoleApp).
- Utilize the .NET workflow SDK and API calls to start and query workflow instances.
In the .NET example project:
- The main Program.cs file contains the setup of the app, including the registration of the workflow and workflow activities.
- The workflow definition is found in the Workflows directory.
- The workflow activity definitions are found in the Activities directory.
Prerequisites
- Dapr CLI
- Initialized Dapr environment
- .NET 8 or .NET 9 installed
Set up the environment
Clone the .NET SDK repo.
git clone https://github.com/dapr/dotnet-sdk.git
From the .NET SDK root directory, navigate to the Dapr Workflow example.
cd examples/Workflow
Run the application locally
To run the Dapr application, you need to start the .NET program and a Dapr sidecar. Navigate to the WorkflowConsoleApp
directory.
cd WorkflowConsoleApp
Start the program.
dotnet run
In a new terminal, navigate again to the WorkflowConsoleApp
directory and run the Dapr sidecar alongside the program.
dapr run --app-id wfapp --dapr-grpc-port 4001 --dapr-http-port 3500
Dapr listens for HTTP requests at http://localhost:3500 and internal workflow gRPC requests at http://localhost:4001.
Start a workflow
To start a workflow, you have two options:
- Follow the directions from the console prompts.
- Use the workflow API and send a request to Dapr directly.
This guide focuses on the workflow API option.
Note
- You can find the commands below in the WorkflowConsoleApp/demo.http file.
- The body of the curl request is the purchase order information used as the input of the workflow.
- The “12345678” in the commands represents the unique identifier for the workflow and can be replaced with any identifier of your choosing.
Run the following command to start a workflow.
curl -i -X POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 \
  -H "Content-Type: application/json" \
  -d '{"Name": "Paperclips", "TotalCost": 99.95, "Quantity": 1}'

On PowerShell, the same request uses backticks for line continuation:

curl -i -X POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 `
  -H "Content-Type: application/json" `
  -d '{"Name": "Paperclips", "TotalCost": 99.95, "Quantity": 1}'
If successful, you should see a response like the following:
{"instanceID":"12345678"}
Send an HTTP request to get the status of the workflow that was started:
curl -i -X GET http://localhost:3500/v1.0/workflows/dapr/12345678
The workflow is designed to take several seconds to complete. If the workflow hasn’t completed when you issue the HTTP request, you’ll see the following JSON response (formatted for readability) with workflow status as RUNNING
:
{
"instanceID": "12345678",
"workflowName": "OrderProcessingWorkflow",
"createdAt": "2023-05-10T00:42:03.911444105Z",
"lastUpdatedAt": "2023-05-10T00:42:06.142214153Z",
"runtimeStatus": "RUNNING",
"properties": {
"dapr.workflow.custom_status": "",
"dapr.workflow.input": "{\"Name\": \"Paperclips\", \"TotalCost\": 99.95, \"Quantity\": 1}"
}
}
Once the workflow has completed running, you should see the following output, indicating that it has reached the COMPLETED
status:
{
"instanceID": "12345678",
"workflowName": "OrderProcessingWorkflow",
"createdAt": "2023-05-10T00:42:03.911444105Z",
"lastUpdatedAt": "2023-05-10T00:42:18.527704176Z",
"runtimeStatus": "COMPLETED",
"properties": {
"dapr.workflow.custom_status": "",
"dapr.workflow.input": "{\"Name\": \"Paperclips\", \"TotalCost\": 99.95, \"Quantity\": 1}",
"dapr.workflow.output": "{\"Processed\":true}"
}
}
When the workflow has completed, the stdout of the workflow app should look like:
info: WorkflowConsoleApp.Activities.NotifyActivity[0]
Received order 12345678 for Paperclips at $99.95
info: WorkflowConsoleApp.Activities.ReserveInventoryActivity[0]
Reserving inventory: 12345678, Paperclips, 1
info: WorkflowConsoleApp.Activities.ProcessPaymentActivity[0]
Processing payment: 12345678, 99.95, USD
info: WorkflowConsoleApp.Activities.NotifyActivity[0]
Order 12345678 processed successfully!
If you have Zipkin configured for Dapr locally on your machine, then you can view the workflow trace spans in the Zipkin web UI (typically at http://localhost:9411/zipkin/).
Demo
Watch this video demonstrating .NET Workflow:
Next steps
2.1.4 - Dapr AI .NET SDK
With the Dapr AI package, you can interact with the Dapr AI workloads from a .NET application.
Today, Dapr provides the Conversational API to engage with large language models. To get started with this workload, walk through the Dapr Conversational AI how-to guide.
2.1.4.1 - Dapr AI Client
The Dapr AI client package allows you to interact with the AI capabilities provided by the Dapr sidecar.
Lifetime management
A DaprConversationClient
is a version of the Dapr client that is dedicated to interacting with the Dapr Conversation
API. It can be registered alongside a DaprClient
and other Dapr clients without issue.
It maintains access to networking resources in the form of TCP sockets used to communicate with the Dapr sidecar.
For best performance, create a single long-lived instance of DaprConversationClient
and provide access to that shared
instance throughout your application. DaprConversationClient
instances are thread-safe and intended to be shared.
This can be aided by utilizing the dependency injection functionality. The registration method supports registration
as a singleton, a scoped instance or as transient (meaning it’s recreated every time it’s injected), but also enables
registration to utilize values from an IConfiguration
or other injected service in a way that’s impractical when
creating the client from scratch in each of your classes.
Avoid creating a DaprConversationClient
for each operation.
Configuring DaprConversationClient via DaprConversationClientBuilder
A DaprConversationClient
can be configured by invoking methods on the DaprConversationClientBuilder
class before
calling .Build()
to create the client itself. The settings for each DaprConversationClient
are separate
and cannot be changed after calling .Build()
.
var daprConversationClient = new DaprConversationClientBuilder()
.UseDaprApiToken("abc123") // Specify the API token used to authenticate to other Dapr sidecars
.Build();
The DaprConversationClientBuilder contains settings for:
- The HTTP endpoint of the Dapr sidecar
- The gRPC endpoint of the Dapr sidecar
- The JsonSerializerOptions object used to configure JSON serialization
- The GrpcChannelOptions object used to configure gRPC
- The API token used to authenticate requests to the sidecar
- The factory method used to create the HttpClient instance used by the SDK
- The timeout used for the HttpClient instance when making requests to the sidecar
The SDK will read the following environment variables to configure the default values:
- DAPR_HTTP_ENDPOINT: used to find the HTTP endpoint of the Dapr sidecar, example: https://dapr-api.mycompany.com
- DAPR_GRPC_ENDPOINT: used to find the gRPC endpoint of the Dapr sidecar, example: https://dapr-grpc-api.mycompany.com
- DAPR_HTTP_PORT: if DAPR_HTTP_ENDPOINT is not set, this is used to find the HTTP local endpoint of the Dapr sidecar
- DAPR_GRPC_PORT: if DAPR_GRPC_ENDPOINT is not set, this is used to find the gRPC local endpoint of the Dapr sidecar
- DAPR_API_TOKEN: used to set the API token
Configuring gRPC channel options
Dapr’s use of CancellationToken
for cancellation relies on the configuration of the gRPC channel options. If you need
to configure these options yourself, make sure to enable the ThrowOperationCanceledOnCancellation setting.
var daprConversationClient = new DaprConversationClientBuilder()
.UseGrpcChannelOptions(new GrpcChannelOptions { ... ThrowOperationCanceledOnCancellation = true })
.Build();
Using cancellation with DaprConversationClient
The APIs on DaprConversationClient
perform asynchronous operations and accept an optional CancellationToken
parameter. This
follows a standard .NET practice for cancellable operations. Note that when cancellation occurs, there is no guarantee that
the remote endpoint stops processing the request, only that the client has stopped waiting for completion.
When an operation is cancelled, it will throw an OperationCanceledException.
Configuring DaprConversationClient
via dependency injection
Using the built-in extension methods for registering the DaprConversationClient
in a dependency injection container can
provide the benefit of registering the long-lived service a single time, centralize complex configuration and improve
performance by ensuring similarly long-lived resources are re-purposed when possible (e.g. HttpClient
instances).
There are three overloads available to give the developer the greatest flexibility in configuring the client for their
scenario. Each of these will register the IHttpClientFactory
on your behalf if not already registered, and configure
the DaprConversationClientBuilder
to use it when creating the HttpClient
instance in order to re-use the same instance as
much as possible and avoid socket exhaustion and other issues.
In the first approach, there’s no configuration done by the developer and the DaprConversationClient
is configured with the
default settings.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprConversationClient(); //Registers the `DaprConversationClient` to be injected as needed
var app = builder.Build();
Sometimes the developer will need to configure the created client using the various configuration options detailed
above. This is done through an overload that passes in the DaprConversationClientBuilder
and exposes methods for configuring
the necessary options.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprConversationClient((_, daprConversationClientBuilder) => {
//Set the API token
daprConversationClientBuilder.UseDaprApiToken("abc123");
//Specify a non-standard HTTP endpoint
daprConversationClientBuilder.UseHttpEndpoint("http://dapr.my-company.com");
});
var app = builder.Build();
Finally, it’s possible that the developer may need to retrieve information from another service in order to populate
these configuration values. That value may be provided from a DaprClient
instance, a vendor-specific SDK or some
local service, but as long as it’s also registered in DI, it can be injected into this configuration operation via the
last overload:
var builder = WebApplication.CreateBuilder(args);
//Register a fictional service that retrieves secrets from somewhere
builder.Services.AddSingleton<SecretService>();
builder.Services.AddDaprConversationClient((serviceProvider, daprConversationClientBuilder) => {
//Retrieve an instance of the `SecretService` from the service provider
var secretService = serviceProvider.GetRequiredService<SecretService>();
var daprApiToken = secretService.GetSecret("DaprApiToken").Value;
//Configure the `DaprConversationClientBuilder`
daprConversationClientBuilder.UseDaprApiToken(daprApiToken);
});
var app = builder.Build();
2.1.4.2 - How to: Create and use Dapr AI Conversations in the .NET SDK
Prerequisites
- .NET 8, or .NET 9 installed
- Dapr CLI
- Initialized Dapr environment
Installation
To get started with the Dapr AI .NET SDK client, install the Dapr.AI package from NuGet:
dotnet add package Dapr.AI
A DaprConversationClient
maintains access to networking resources in the form of TCP sockets used to communicate with the Dapr sidecar.
Dependency Injection
The AddDaprAiConversation()
method will register the Dapr client with ASP.NET Core dependency injection and is the recommended approach
for using this package. This method accepts an optional options delegate for configuring the DaprConversationClient
and a
ServiceLifetime
argument, allowing you to specify a different lifetime for the registered services instead of the default Singleton
value.
The following example assumes all default values are acceptable and is sufficient to register the DaprConversationClient
:
services.AddDaprAiConversation();
The optional configuration delegate is used to configure the DaprConversationClient
by specifying options on the
DaprConversationClientBuilder
as in the following example:
services.AddSingleton<DefaultOptionsProvider>();
services.AddDaprAiConversation((serviceProvider, clientBuilder) => {
//Inject a service to source a value from
var optionsProvider = serviceProvider.GetRequiredService<DefaultOptionsProvider>();
var standardTimeout = optionsProvider.GetStandardTimeout();
//Configure the value on the client builder
clientBuilder.UseTimeout(standardTimeout);
});
Manual Instantiation
Rather than using dependency injection, a DaprConversationClient
can also be built using the static client builder.
For best performance, create a single long-lived instance of DaprConversationClient
and provide access to that shared instance throughout
your application. DaprConversationClient
instances are thread-safe and intended to be shared.
Avoid creating a DaprConversationClient
per-operation.
A DaprConversationClient
can be configured by invoking methods on the DaprConversationClientBuilder
class before calling .Build()
to create the client. The settings for each DaprConversationClient
are separate and cannot be changed after calling .Build()
.
var daprConversationClient = new DaprConversationClientBuilder()
.UseJsonSerializerSettings( ... ) //Configure JSON serializer
.Build();
See the .NET documentation here for more information about the options available when configuring the Dapr client via the builder.
Try it out
Put the Dapr AI .NET SDK to the test. Walk through the samples to see Dapr in action:
| SDK Samples | Description |
| --- | --- |
| SDK samples | Clone the SDK repo to try out some examples and get started. |
Building Blocks
This part of the .NET SDK allows you to interface with the Conversations API to send and receive messages from large language models.
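As a brief, hedged sketch of that interaction (the component name "conversation" and the exact shape of ConverseAsync, DaprConversationInput, and the response’s Outputs collection are assumptions about the package surface; adjust them to the version you have installed):

//Resolve the registered client and send a single input to the configured LLM component
var conversationClient = app.Services.GetRequiredService<DaprConversationClient>();

var response = await conversationClient.ConverseAsync("conversation",
    new List<DaprConversationInput>
    {
        new DaprConversationInput("What is Dapr?", DaprConversationRole.Generic)
    });

//Print each output returned by the model
foreach (var output in response.Outputs)
{
    Console.WriteLine(output.Result);
}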
2.1.5 - Dapr Jobs .NET SDK
With the Dapr Job package, you can interact with the Dapr Job APIs from a .NET application to trigger future operations to run according to a predefined schedule with an optional payload.
To get started, walk through the Dapr Jobs how-to guide and refer to best practices documentation for additional guidance.
2.1.5.1 - How to: Author and manage Dapr Jobs in the .NET SDK
Let’s create an endpoint that will be invoked by Dapr Jobs when it triggers, then schedule the job in the same app. We’ll use the simple example provided here for the following demonstration, and walk through it to explain how you can schedule one-time or recurring jobs yourself using either an interval or a Cron expression. In this guide, you will:
- Deploy a .NET Web API application (JobsSample)
- Utilize the Dapr .NET Jobs SDK to schedule a job invocation and set up the endpoint to be triggered
In the .NET example project:
- The main Program.cs file comprises the entirety of this demonstration.
Prerequisites
- Dapr CLI
- Initialized Dapr environment
- .NET 8 or .NET 9 installed
- Dapr.Jobs NuGet package installed to your project
Set up the environment
Clone the .NET SDK repo.
git clone https://github.com/dapr/dotnet-sdk.git
From the .NET SDK root directory, navigate to the Dapr Jobs example.
cd examples/Jobs
Run the application locally
To run the Dapr application, you need to start the .NET program and a Dapr sidecar. Navigate to the JobsSample
directory.
cd JobsSample
We’ll run a command that starts both the Dapr sidecar and the .NET program at the same time.
dapr run --app-id jobsapp --dapr-grpc-port 4001 --dapr-http-port 3500 -- dotnet run
Dapr listens for HTTP requests at http://localhost:3500 and internal Jobs gRPC requests at http://localhost:4001.
Register the Dapr Jobs client with dependency injection
The Dapr Jobs SDK provides an extension method to simplify the registration of the Dapr Jobs client. Before completing
the dependency injection registration in Program.cs
, add the following line:
var builder = WebApplication.CreateBuilder(args);
//Add anywhere between these two lines
builder.Services.AddDaprJobsClient();
var app = builder.Build();
Note that in today’s implementation of the Jobs API, the app that schedules the job will also be the app that receives the trigger notification. In other words, you cannot schedule a trigger to run in another application. As a result, while you don’t explicitly need the Dapr Jobs client to be registered in your application to schedule a trigger invocation endpoint, your endpoint will never be invoked without the same app also scheduling the job somehow (whether via this Dapr Jobs .NET SDK or an HTTP call to the sidecar).
It’s possible that you may want to provide some configuration options to the Dapr Jobs client that
should be present with each call to the sidecar such as a Dapr API token, or you want to use a non-standard
HTTP or gRPC endpoint. This is possible through use of an overload of the registration method that allows configuration of a
DaprJobsClientBuilder
instance:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprJobsClient((_, daprJobsClientBuilder) =>
{
daprJobsClientBuilder.UseDaprApiToken("abc123");
daprJobsClientBuilder.UseHttpEndpoint("http://localhost:8512"); //Non-standard sidecar HTTP endpoint
});
var app = builder.Build();
Still, it’s possible that whatever values you wish to inject need to be retrieved from some other source, itself registered as a dependency. There’s one more overload you can use to inject an IServiceProvider
into the configuration action method. In the following example, we register a fictional singleton that can retrieve secrets from somewhere and pass it into the configuration method for AddDaprJobsClient so we can retrieve our Dapr API token from somewhere else for registration here:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<SecretRetriever>();
builder.Services.AddDaprJobsClient((serviceProvider, daprJobsClientBuilder) =>
{
var secretRetriever = serviceProvider.GetRequiredService<SecretRetriever>();
var daprApiToken = secretRetriever.GetSecret("DaprApiToken").Value;
daprJobsClientBuilder.UseDaprApiToken(daprApiToken);
daprJobsClientBuilder.UseHttpEndpoint("http://localhost:8512");
});
var app = builder.Build();
Use the Dapr Jobs client using IConfiguration
It’s possible to configure the Dapr Jobs client using the values in your registered IConfiguration
as well without
explicitly specifying each of the value overrides using the DaprJobsClientBuilder
as demonstrated in the previous
section. Rather, by populating an IConfiguration
made available through dependency injection the AddDaprJobsClient()
registration will automatically use these values over their respective defaults.
Start by populating the values in your configuration. This can be done in several different ways as demonstrated below.
Configuration via ConfigurationBuilder
Application settings can be configured without using a configuration source and by instead populating the value in-memory
using a ConfigurationBuilder
instance:
var builder = WebApplication.CreateBuilder();
//Create the configuration
var configuration = new ConfigurationBuilder()
.AddInMemoryCollection(new Dictionary<string, string> {
{ "DAPR_HTTP_ENDPOINT", "http://localhost:54321" },
{ "DAPR_API_TOKEN", "abc123" }
})
.Build();
builder.Configuration.AddConfiguration(configuration);
builder.Services.AddDaprJobsClient(); //This will automatically populate the HTTP endpoint and API token values from the IConfiguration
Configuration via Environment Variables
Application settings can be accessed from environment variables available to your application.
The following environment variables will be used to populate both the HTTP endpoint and API token used to register the Dapr Jobs client.
| Key | Value |
| --- | --- |
| DAPR_HTTP_ENDPOINT | http://localhost:54321 |
| DAPR_API_TOKEN | abc123 |
var builder = WebApplication.CreateBuilder();
builder.Configuration.AddEnvironmentVariables();
builder.Services.AddDaprJobsClient();
The Dapr Jobs client will be configured to use both the HTTP endpoint http://localhost:54321
and populate all outbound
requests with the API token header abc123
.
Configuration via prefixed Environment Variables
However, in shared-host scenarios where there are multiple applications all running on the same machine without using containers or in development environments, it’s not uncommon to prefix environment variables. The following example assumes that both the HTTP endpoint and the API token will be pulled from environment variables prefixed with the value “myapp_”. The two environment variables used in this scenario are as follows:
| Key | Value |
| --- | --- |
| myapp_DAPR_HTTP_ENDPOINT | http://localhost:54321 |
| myapp_DAPR_API_TOKEN | abc123 |
These environment variables will be loaded into the registered configuration in the following example and made available without the prefix attached.
var builder = WebApplication.CreateBuilder();
builder.Configuration.AddEnvironmentVariables(prefix: "myapp_");
builder.Services.AddDaprJobsClient();
The Dapr Jobs client will be configured to use both the HTTP endpoint http://localhost:54321
and populate all outbound
requests with the API token header abc123
.
Use the Dapr Jobs client without relying on dependency injection
While the use of dependency injection simplifies the use of complex types in .NET and makes it easier to
deal with complicated configurations, you’re not required to register the DaprJobsClient
in this way. Rather, you can also elect to create an instance of it from a DaprJobsClientBuilder
instance as demonstrated below:
public class MySampleClass
{
    public void DoSomething()
    {
        var daprJobsClientBuilder = new DaprJobsClientBuilder();
        var daprJobsClient = daprJobsClientBuilder.Build();

        //Do something with the `daprJobsClient`
    }
}
Set up an endpoint to be invoked when the job is triggered
It’s easy to set up a jobs endpoint if you’re at all familiar with minimal APIs in ASP.NET Core, as the syntax is the same between the two.
Once dependency injection registration has been completed, configure the application the same way you would to handle mapping an HTTP request via the minimal API functionality in ASP.NET Core. Implemented as an extension method,
pass the name of the job it should be responsive to and a delegate. Services can be injected into the delegate’s arguments as you wish and the job payload can be accessed from the ReadOnlyMemory<byte>
originally provided to the
job registration.
There are two delegates you can use here. One provides an IServiceProvider
in case you need to inject other services into the handler:
//We have this from the example above
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprJobsClient();
var app = builder.Build();
//Add our endpoint registration
app.MapDaprScheduledJob("myJob", (IServiceProvider serviceProvider, string jobName, ReadOnlyMemory<byte> jobPayload) => {
var logger = serviceProvider.GetService<ILogger>();
logger?.LogInformation("Received trigger invocation for '{jobName}'", "myJob");
//Do something...
});
app.Run();
The other overload of the delegate doesn’t require an IServiceProvider
if not necessary:
//We have this from the example above
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprJobsClient();
var app = builder.Build();
//Add our endpoint registration
app.MapDaprScheduledJob("myJob", (string jobName, ReadOnlyMemory<byte> jobPayload) => {
//Do something...
});
app.Run();
Support cancellation tokens when processing mapped invocations
You may want to ensure that timeouts are handled on job invocations so that they don’t indefinitely hang and use system resources. When setting up the job mapping, there’s an optional TimeSpan
parameter that can be
provided as the last argument to specify a timeout for the request. Every time the job mapping invocation is triggered, a new CancellationTokenSource
will be created using this timeout parameter and a CancellationToken
will be created from it to put an upper bound on the processing of the request. If a timeout isn’t provided, this defaults to CancellationToken.None
and a timeout will not be automatically applied to the mapping.
//We have this from the example above
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprJobsClient();
var app = builder.Build();
//Add our endpoint registration
app.MapDaprScheduledJob("myJob", (string jobName, ReadOnlyMemory<byte> jobPayload) => {
//Do something...
}, TimeSpan.FromSeconds(15)); //Assigns a maximum timeout of 15 seconds for handling the invocation request
app.Run();
Register the job
Finally, we have to register the job we want scheduled. Note that from here, all SDK methods have cancellation token support and use a default token if not otherwise set.
There are three different ways to set up a job that vary based on how you want to configure the schedule. The following shows the different arguments available when scheduling a job:
| Argument Name | Type | Description | Required |
| --- | --- | --- | --- |
| jobName | string | The name of the job being scheduled. | Yes |
| schedule | DaprJobSchedule | The schedule defining when the job will be triggered. | Yes |
| payload | ReadOnlyMemory<byte> | Job data provided to the invocation endpoint when triggered. | No |
| startingFrom | DateTime | The point in time from which the job schedule should start. | No |
| repeats | int | The maximum number of times the job should be triggered. | No |
| ttl | | When the job should expire and no longer trigger. | No |
| overwrite | bool | A flag indicating whether an existing job should be overwritten when submitted, or false to require that an existing job with the same name be deleted first. | No |
| cancellationToken | CancellationToken | Used to cancel out of the operation early, e.g. because of an operation timeout. | No |
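For example, several of these arguments can be combined in a single call. The following is a hedged sketch using the parameter names from the table above; schedule is a DaprJobSchedule, covered next:

//Schedule "myJob" to trigger at most 5 times, overwriting any existing job with the same name
await daprJobsClient.ScheduleJobAsync("myJob", schedule, repeats: 5, overwrite: true, cancellationToken: cancellationToken);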
DaprJobSchedule
All jobs are scheduled via the SDK using the DaprJobSchedule
which creates an expression passed to the
runtime to schedule jobs. There are several static methods exposed on the DaprJobSchedule
used to faciliate
easy registration of each of the kinds of job schedules available as follows. This separates specifying
the job schedule itself from any additional options like repeating the operation or providing a cancellation token.
One-time job
A one-time job is exactly that; it will run at a single point in time and will not repeat.
This approach requires that you select a job name and specify a time it should be triggered.
DaprJobSchedule.FromDateTime(DateTimeOffset scheduledTime)
One-time jobs can be scheduled from the Dapr Jobs client as in the following example:
public class MyOperation(DaprJobsClient daprJobsClient)
{
    public async Task ScheduleOneTimeJobAsync(CancellationToken cancellationToken)
    {
        var today = DateTimeOffset.UtcNow;
        var threeDaysFromNow = today.AddDays(3);
        var schedule = DaprJobSchedule.FromDateTime(threeDaysFromNow);
        await daprJobsClient.ScheduleJobAsync("job", schedule, cancellationToken: cancellationToken);
    }
}
Interval-based job
An interval-based job is one that runs on a recurring loop configured as a fixed amount of time, not unlike how reminders work in the Actors building block today.
DaprJobSchedule.FromDuration(TimeSpan interval)
Interval-based jobs can be scheduled from the Dapr Jobs client as in the following example:
public class MyOperation(DaprJobsClient daprJobsClient)
{
    public async Task ScheduleIntervalJobAsync(CancellationToken cancellationToken)
    {
        var hourlyInterval = TimeSpan.FromHours(1);

        //Trigger the job hourly, but a maximum of 5 times
        var schedule = DaprJobSchedule.FromDuration(hourlyInterval);
        await daprJobsClient.ScheduleJobAsync("job", schedule, repeats: 5, cancellationToken: cancellationToken);
    }
}
Cron-based job
A Cron-based job is scheduled using a Cron expression. This gives more calendar-based control over when the job is triggered, as it can use calendar-based values in the expression.
DaprJobSchedule.FromExpression(string cronExpression)
There are two different approaches supported for scheduling a Cron-based job in the Dapr SDK.
Provide your own Cron expression
You can provide your own Cron expression as a string via DaprJobSchedule.FromExpression():
:
public class MyOperation(DaprJobsClient daprJobsClient)
{
    public async Task ScheduleCronJobAsync(CancellationToken cancellationToken)
    {
        //At the top of every other hour on the fifth day of the month
        const string cronSchedule = "0 */2 5 * *";
        var schedule = DaprJobSchedule.FromExpression(cronSchedule);

        //Don't start this until next month
        var now = DateTime.UtcNow;
        var oneMonthFromNow = now.AddMonths(1);
        var firstOfNextMonth = new DateTime(oneMonthFromNow.Year, oneMonthFromNow.Month, 1, 0, 0, 0);

        await daprJobsClient.ScheduleJobAsync("myJobName", schedule, startingFrom: firstOfNextMonth, cancellationToken: cancellationToken);
    }
}
Use the CronExpressionBuilder
Alternatively, you can use our fluent builder to produce a valid Cron expression:
public class MyOperation(DaprJobsClient daprJobsClient)
{
    public async Task ScheduleCronJobAsync(CancellationToken cancellationToken)
    {
        //At the top of every other hour on the fifth day of the month
        var cronExpression = new CronExpressionBuilder()
            .Every(EveryCronPeriod.Hour, 2)
            .On(OnCronPeriod.DayOfMonth, 5)
            .ToString();
        var schedule = DaprJobSchedule.FromExpression(cronExpression);

        //Don't start this until next month
        var now = DateTime.UtcNow;
        var oneMonthFromNow = now.AddMonths(1);
        var firstOfNextMonth = new DateTime(oneMonthFromNow.Year, oneMonthFromNow.Month, 1, 0, 0, 0);

        await daprJobsClient.ScheduleJobAsync("myJobName", schedule, startingFrom: firstOfNextMonth, cancellationToken: cancellationToken);
    }
}
Get details of already-scheduled job
If you know the name of an already-scheduled job, you can retrieve its metadata without waiting for it to
be triggered. The returned JobDetails
exposes a few helpful properties for consuming the information from the Dapr Jobs API:
- If the Schedule property contains a Cron expression, the IsCronExpression property will be true and the expression will also be available in the CronExpression property.
- If the Schedule property contains a duration value, the IsIntervalExpression property will instead be true and the value will be converted to a TimeSpan value accessible from the Interval property.
This can be done by using the following:
public class MyOperation(DaprJobsClient daprJobsClient)
{
    public async Task<JobDetails> GetJobDetailsAsync(string jobName, CancellationToken cancellationToken)
    {
        var jobDetails = await daprJobsClient.GetJobAsync(jobName, cancellationToken);
        return jobDetails;
    }
}
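From there, the properties described above can be inspected to determine how the job was scheduled. A short sketch using the JobDetails properties listed earlier:

var jobDetails = await daprJobsClient.GetJobAsync("myJob", cancellationToken);
if (jobDetails.IsCronExpression)
{
    //The job was scheduled with a Cron expression
    Console.WriteLine($"Cron expression: {jobDetails.CronExpression}");
}
else if (jobDetails.IsIntervalExpression)
{
    //The job was scheduled with a fixed duration
    Console.WriteLine($"Interval: {jobDetails.Interval}");
}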
Delete a scheduled job
To delete a scheduled job, you’ll need to know its name. From there, it’s as simple as calling the DeleteJobAsync
method on the Dapr Jobs client:
public class MyOperation(DaprJobsClient daprJobsClient)
{
    public async Task DeleteJobAsync(string jobName, CancellationToken cancellationToken)
    {
        await daprJobsClient.DeleteJobAsync(jobName, cancellationToken);
    }
}
2.1.5.2 - DaprJobsClient usage
Lifetime management
A DaprJobsClient
is a version of the Dapr client that is dedicated to interacting with the Dapr Jobs API. It can be
registered alongside a DaprClient
and other Dapr clients without issue.
It maintains access to networking resources in the form of TCP sockets used to communicate with the Dapr sidecar and
implements IDisposable
to support the eager cleanup of resources.
For best performance, create a single long-lived instance of DaprJobsClient
and provide access to that shared instance
throughout your application. DaprJobsClient
instances are thread-safe and intended to be shared.
This can be aided by utilizing the dependency injection functionality. The registration method supports registration as a singleton, a scoped instance or as transient (meaning it’s recreated every time it’s injected), but also enables
registration to utilize values from an IConfiguration
or other injected service in a way that’s impractical when
creating the client from scratch in each of your classes.
Avoid creating a DaprJobsClient
for each operation and disposing it when the operation is complete.
Configuring DaprJobsClient via the DaprJobsClientBuilder
A DaprJobsClient
can be configured by invoking methods on the DaprJobsClientBuilder
class before calling .Build()
to create the client itself. The settings for each DaprJobsClient
are separate
and cannot be changed after calling .Build()
.
var daprJobsClient = new DaprJobsClientBuilder()
.UseDaprApiToken("abc123") // Specify the API token used to authenticate to other Dapr sidecars
.Build();
The DaprJobsClientBuilder contains settings for:
- The HTTP endpoint of the Dapr sidecar
- The gRPC endpoint of the Dapr sidecar
- The JsonSerializerOptions object used to configure JSON serialization
- The GrpcChannelOptions object used to configure gRPC
- The API token used to authenticate requests to the sidecar
- The factory method used to create the HttpClient instance used by the SDK
- The timeout used for the HttpClient instance when making requests to the sidecar
The SDK will read the following environment variables to configure the default values:
- DAPR_HTTP_ENDPOINT: used to find the HTTP endpoint of the Dapr sidecar, example: https://dapr-api.mycompany.com
- DAPR_GRPC_ENDPOINT: used to find the gRPC endpoint of the Dapr sidecar, example: https://dapr-grpc-api.mycompany.com
- DAPR_HTTP_PORT: if DAPR_HTTP_ENDPOINT is not set, this is used to find the HTTP local endpoint of the Dapr sidecar
- DAPR_GRPC_PORT: if DAPR_GRPC_ENDPOINT is not set, this is used to find the gRPC local endpoint of the Dapr sidecar
- DAPR_API_TOKEN: used to set the API token
Configuring gRPC channel options
Dapr’s use of CancellationToken
for cancellation relies on the configuration of the gRPC channel options. If you need
to configure these options yourself, make sure to enable the ThrowOperationCanceledOnCancellation setting.
var daprJobsClient = new DaprJobsClientBuilder()
.UseGrpcChannelOptions(new GrpcChannelOptions { ... ThrowOperationCanceledOnCancellation = true })
.Build();
Using cancellation with DaprJobsClient
The APIs on DaprJobsClient
perform asynchronous operations and accept an optional CancellationToken
parameter. This
follows a standard .NET practice for cancellable operations. Note that when cancellation occurs, there is no guarantee that
the remote endpoint stops processing the request, only that the client has stopped waiting for completion.
When an operation is cancelled, it will throw an OperationCanceledException.
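As a minimal sketch, you can bound a Jobs API call with a timeout-backed token and handle the resulting exception (DeleteJobAsync is shown earlier in this document):

using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
try
{
    await daprJobsClient.DeleteJobAsync("myJob", cts.Token);
}
catch (OperationCanceledException)
{
    //The client stopped waiting; the sidecar may still process the request
}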
Configuring DaprJobsClient
via dependency injection
Using the built-in extension methods for registering the DaprJobsClient
in a dependency injection container can
provide the benefit of registering the long-lived service a single time, centralize complex configuration and improve
performance by ensuring similarly long-lived resources are re-purposed when possible (e.g. HttpClient
instances).
There are three overloads available to give the developer the greatest flexibility in configuring the client for their
scenario. Each of these will register the IHttpClientFactory
on your behalf if not already registered, and configure
the DaprJobsClientBuilder
to use it when creating the HttpClient
instance in order to re-use the same instance as
much as possible and avoid socket exhaustion and other issues.
In the first approach, there’s no configuration done by the developer and the DaprJobsClient
is configured with the
default settings.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprJobsClient(); //Registers the `DaprJobsClient` to be injected as needed
var app = builder.Build();
Sometimes the developer will need to configure the created client using the various configuration options detailed
above. This is done through an overload that passes in the DaprJobsClientBuilder
and exposes methods for configuring
the necessary options.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprJobsClient((_, daprJobsClientBuilder) => {
//Set the API token
daprJobsClientBuilder.UseDaprApiToken("abc123");
//Specify a non-standard HTTP endpoint
daprJobsClientBuilder.UseHttpEndpoint("http://dapr.my-company.com");
});
var app = builder.Build();
Finally, it’s possible that the developer may need to retrieve information from another service in order to populate
these configuration values. That value may be provided from a DaprClient
instance, a vendor-specific SDK or some
local service, but as long as it’s also registered in DI, it can be injected into this configuration operation via the
last overload:
var builder = WebApplication.CreateBuilder(args);
//Register a fictional service that retrieves secrets from somewhere
builder.Services.AddSingleton<SecretService>();
builder.Services.AddDaprJobsClient((serviceProvider, daprJobsClientBuilder) => {
//Retrieve an instance of the `SecretService` from the service provider
var secretService = serviceProvider.GetRequiredService<SecretService>();
var daprApiToken = secretService.GetSecret("DaprApiToken").Value;
//Configure the `DaprJobsClientBuilder`
daprJobsClientBuilder.UseDaprApiToken(daprApiToken);
});
var app = builder.Build();
Understanding payload serialization on DaprJobsClient
While there are many methods on the DaprClient
that automatically serialize and deserialize data using the
System.Text.Json
serializer, this SDK takes a different philosophy. Instead, the relevant methods accept an optional
payload of ReadOnlyMemory<byte>
meaning that serialization is an exercise left to the developer and is not
generally handled by the SDK.
That said, there are some helper extension methods available for each of the scheduling methods. If you know that you
want to use a type that’s JSON-serializable, you can use the Schedule*WithPayloadAsync
method for each scheduling
type that accepts an object
as a payload and an optional JsonSerializerOptions
to use when serializing the value.
This will convert the value to UTF-8 encoded bytes for you as a convenience. Here’s an example of what this might
look like when scheduling a Cron expression:
public sealed record Doodad (string Name, int Value);
//...
var doodad = new Doodad("Thing", 100);
await daprJobsClient.ScheduleCronJobWithPayloadAsync("myJob", "5 * * * *", doodad);
In the same vein, if you have a plain string value, you can use an overload of the same method to serialize a string-typed payload; the JSON serialization step will be skipped and the value will only be encoded to an array of UTF-8 encoded bytes. Here’s an example of what this might look like when scheduling a one-time job:
var now = DateTime.UtcNow;
var oneWeekFromNow = now.AddDays(7);
await daprJobsClient.ScheduleOneTimeJobWithPayloadAsync("myOtherJob", oneWeekFromNow, "This is a test!");
The delegate handling the job invocation expects at least two arguments to be present:
- A string that is populated with the jobName, providing the name of the invoked job
- A ReadOnlyMemory<byte> that is populated with the bytes originally provided during the job registration.
Because the payload is stored as a ReadOnlyMemory<byte>
, the developer has the freedom to serialize and deserialize
as they wish, but there are again two helper extensions included that can deserialize this to either a JSON-compatible
type or a string. Both methods assume that the developer encoded the originally scheduled job (perhaps using the
helper serialization methods) as these methods will not force the bytes to represent something they’re not.
To deserialize the bytes to a string, the following helper method can be used:
var payloadAsString = Encoding.UTF8.GetString(jobPayload.Span); //If successful, returns a string with the value
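If the payload was instead JSON-serialized (for example, via the Schedule*WithPayloadAsync helpers), it can be deserialized back into its original type. A short sketch using System.Text.Json and the Doodad record from the earlier example:

using System.Text.Json;

//Deserialize the payload bytes back into the type originally serialized into them
var doodad = JsonSerializer.Deserialize<Doodad>(jobPayload.Span);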
Error handling
Methods on DaprJobsClient
will throw a DaprJobsServiceException
if an issue is encountered between the SDK
and the Jobs API service running on the Dapr sidecar. If a failure is encountered because of a poorly formatted
request made to the Jobs API service through this SDK, a DaprMalformedJobException
will be thrown. In case of
illegal argument values, the appropriate standard exception will be thrown (e.g. ArgumentOutOfRangeException
or ArgumentNullException
) with the name of the offending argument. And for anything else, a DaprException
will be thrown.
The most common cases of failure will be related to:
- Incorrect argument formatting while engaging with the Jobs API
- Transient failures such as a networking problem
- Invalid data, such as a failure to deserialize a value into a type it wasn’t originally serialized from
In any of these cases, you can examine more exception details through the .InnerException
property.
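A short sketch of catching these exception types when scheduling a job (the exception names come from the list above; the catch ordering assumes most-specific-first):

try
{
    await daprJobsClient.ScheduleJobAsync("myJob", schedule, cancellationToken: cancellationToken);
}
catch (DaprMalformedJobException ex)
{
    //The request was rejected because it was poorly formatted
    Console.WriteLine($"Malformed job request: {ex.Message}");
}
catch (DaprJobsServiceException ex)
{
    //An issue between the SDK and the Jobs API service on the sidecar
    Console.WriteLine($"Jobs API failure: {ex.InnerException?.Message ?? ex.Message}");
}
catch (DaprException ex)
{
    //Anything else, e.g. transient networking problems
    Console.WriteLine($"Dapr failure: {ex.InnerException?.Message ?? ex.Message}");
}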
2.1.6 - Dapr Cryptography .NET SDK
With the Dapr Cryptography package, you can perform high-performance encryption and decryption operations with Dapr.
To get started with this functionality, walk through the [Dapr Cryptography](https://v1-16.docs.dapr.io/developing-applications/sdks/dotnet/dotnet-cryptography/dotnet-cryptography-howto/) how-to guide.
2.1.6.1 - Dapr Cryptography Client
The Dapr Cryptography package allows you to perform encryption and decryption operations provided by the Dapr sidecar.
Lifetime management
A DaprEncryptionClient
is a version of the Dapr client that is dedicated to interacting with the Dapr Cryptography API.
It can be registered alongside a DaprClient
and other Dapr clients without issue.
It maintains access to networking resources in the form of TCP sockets used to communicate with the Dapr sidecar.
For best performance, create a single long-lived instance of DaprEncryptionClient
and provide access to that shared
instance throughout your application. DaprEncryptionClient
instances are thread-safe and intended to be shared.
This can be aided by utilizing the dependency injection functionality. The registration method supports registration
as a singleton, a scoped instance, or as a transient (meaning it’s recreated every time it’s injected), but also enables
registration to utilize values from an IConfiguration
or other injected service in a way that’s impractical when creating
the client from scratch in each of your classes.
Avoid creating a DaprEncryptionClient
for each operation.
Configuring DaprEncryptionClient
via DaprEncryptionClientBuilder
A DaprEncryptionClient can be configured by invoking methods on the DaprEncryptionClientBuilder class before calling .Build() to create the client itself. The settings for each DaprEncryptionClient are separate and cannot be changed after calling .Build().
var daprEncryptionClient = new DaprEncryptionClientBuilder()
.UseDaprApiToken("abc123") //Specify the API token used to authenticate to the Dapr sidecar
.Build();
The DaprEncryptionClientBuilder contains settings for:
- The HTTP endpoint of the Dapr sidecar
- The gRPC endpoint of the Dapr sidecar
- The JsonSerializerOptions object used to configure JSON serialization
- The GrpcChannelOptions object used to configure gRPC
- The API token used to authenticate requests to the sidecar
- The factory method used to create the HttpClient instance used by the SDK
- The timeout used for the HttpClient instance when making requests to the sidecar
The SDK will read the following environment variables to configure the default values:
- DAPR_HTTP_ENDPOINT: used to find the HTTP endpoint of the Dapr sidecar, example: https://dapr-api.mycompany.com
- DAPR_GRPC_ENDPOINT: used to find the gRPC endpoint of the Dapr sidecar, example: https://dapr-grpc-api.mycompany.com
- DAPR_HTTP_PORT: if DAPR_HTTP_ENDPOINT is not set, this is used to find the HTTP local endpoint of the Dapr sidecar
- DAPR_GRPC_PORT: if DAPR_GRPC_ENDPOINT is not set, this is used to find the gRPC local endpoint of the Dapr sidecar
- DAPR_API_TOKEN: used to set the API token
Configuring gRPC channel options
Dapr’s use of CancellationToken
for cancellation relies on the configuration of the gRPC channel options. If you need
to configure these options yourself, make sure to enable the ThrowOperationCanceledOnCancellation setting.
var daprEncryptionClient = new DaprEncryptionClientBuilder()
.UseGrpcChannelOptions(new GrpcChannelOptions { ... ThrowOperationCanceledOnCancellation = true })
.Build();
Using cancellation with DaprEncryptionClient
The APIs on DaprEncryptionClient
perform asynchronous operations and accept an optional CancellationToken
parameter. This
follows a standard .NET practice for cancellable operations. Note that when cancellation occurs, there is no guarantee that
the remote endpoint stops processing the request, only that the client has stopped waiting for completion.
When an operation is cancelled, it will throw an OperationCanceledException.
Configuring DaprEncryptionClient
via dependency injection
Using the built-in extension methods for registering the DaprEncryptionClient
in a dependency injection container can
provide the benefit of registering the long-lived service a single time, centralize complex configuration and improve
performance by ensuring similarly long-lived resources are re-purposed when possible (e.g. HttpClient
instances).
There are three overloads available to give the developer the greatest flexibility in configuring the client for their
scenario. Each of these will register the IHttpClientFactory
on your behalf if not already registered, and configure
the DaprEncryptionClientBuilder
to use it when creating the HttpClient
instance in order to re-use the same instance as
much as possible and avoid socket exhaustion and other issues.
In the first approach, there’s no configuration done by the developer and the DaprEncryptionClient
is configured with the
default settings.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprEncryptionClient(); //Registers the `DaprEncryptionClient` to be injected as needed
var app = builder.Build();
Sometimes the developer will need to configure the created client using the various configuration options detailed
above. This is done through an overload that passes in the DaprEncryptionClientBuilder
and exposes methods for configuring
the necessary options.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprEncryptionClient((_, daprEncryptionClientBuilder) => {
//Set the API token
daprEncryptionClientBuilder.UseDaprApiToken("abc123");
//Specify a non-standard HTTP endpoint
daprEncryptionClientBuilder.UseHttpEndpoint("http://dapr.my-company.com");
});
var app = builder.Build();
Finally, it’s possible that the developer may need to retrieve information from another service in order to populate
these configuration values. That value may be provided from a DaprClient
instance, a vendor-specific SDK or some
local service, but as long as it’s also registered in DI, it can be injected into this configuration operation via the
last overload:
var builder = WebApplication.CreateBuilder(args);
//Register a fictional service that retrieves secrets from somewhere
builder.Services.AddSingleton<SecretService>();
builder.Services.AddDaprEncryptionClient((serviceProvider, daprEncryptionClientBuilder) => {
//Retrieve an instance of the `SecretService` from the service provider
var secretService = serviceProvider.GetRequiredService<SecretService>();
var daprApiToken = secretService.GetSecret("DaprApiToken").Value;
//Configure the `DaprEncryptionClientBuilder`
daprEncryptionClientBuilder.UseDaprApiToken(daprApiToken);
});
var app = builder.Build();
2.1.6.2 - How to: Create and use Dapr Cryptography in the .NET SDK
Prerequisites
- .NET 8, or .NET 9 installed
- Dapr CLI
- Initialized Dapr environment
Installation
To get started with the Dapr Cryptography client, install the Dapr.Cryptography package from NuGet:
dotnet add package Dapr.Cryptography
A DaprEncryptionClient
maintains access to networking resources in the form of TCP sockets used to communicate with
the Dapr sidecar.
Dependency Injection
The AddDaprEncryptionClient()
method will register the Dapr client with dependency injection and is the recommended approach
for using this package. This method accepts an optional options delegate for configuring the DaprEncryptionClient
and a
ServiceLifetime
argument, allowing you to specify a different lifetime for the registered services instead of the default Singleton
value.
The following example assumes all default values are acceptable and is sufficient to register the DaprEncryptionClient
:
services.AddDaprEncryptionClient();
The optional configuration delegate is used to configure the DaprEncryptionClient
by specifying options on the
DaprEncryptionClientBuilder
as in the following example:
services.AddSingleton<DefaultOptionsProvider>();
services.AddDaprEncryptionClient((serviceProvider, clientBuilder) => {
//Inject a service to source a value from
var optionsProvider = serviceProvider.GetRequiredService<DefaultOptionsProvider>();
var standardTimeout = optionsProvider.GetStandardTimeout();
//Configure the value on the client builder
clientBuilder.UseTimeout(standardTimeout);
});
Manual Instantiation
Rather than using dependency injection, a DaprEncryptionClient
can also be built using the static client builder.
For best performance, create a single long-lived instance of DaprEncryptionClient
and provide access to that shared instance throughout
your application. DaprEncryptionClient
instances are thread-safe and intended to be shared.
Avoid creating a DaprEncryptionClient
per-operation.
A DaprEncryptionClient
can be configured by invoking methods on the DaprEncryptionClientBuilder
class before calling .Build()
to create the client. The settings for each DaprEncryptionClient
are separate and cannot be changed after calling .Build()
.
var daprEncryptionClient = new DaprEncryptionClientBuilder()
.UseJsonSerializerSettings( ... ) //Configure JSON serializer
.Build();
See the .NET documentation here for more information about the options available when configuring the Dapr client via the builder.
Try it out
Put the Dapr Cryptography .NET SDK to the test. Walk through the samples to see Dapr in action:
| SDK Samples | Description |
| --- | --- |
| SDK samples | Clone the SDK repo to try out some examples and get started. |
2.1.7 - Dapr Messaging .NET SDK
With the Dapr Messaging package, you can interact with the Dapr messaging APIs from a .NET application. In the v1.15 release, this package only contains the functionality corresponding to the streaming PubSub capability.
Future Dapr .NET SDK releases will migrate existing messaging capabilities out from Dapr.Client to this Dapr.Messaging package. This will be documented in the release notes, documentation and obsolete attributes in advance.
To get started, walk through the Dapr Messaging how-to guide and refer to best practices documentation for additional guidance.
2.1.7.1 - How to: Author and manage Dapr streaming subscriptions in the .NET SDK
Let’s create a subscription to a pub/sub topic or queue using the streaming capability. We’ll use the simple example provided here for the following demonstration, and walk through it to explain how you can configure message handlers at runtime without requiring a pre-configured endpoint. In this guide, you will:
- Deploy a .NET Web API application (StreamingSubscriptionExample)
- Utilize the Dapr .NET Messaging SDK to subscribe dynamically to a pub/sub topic.
Prerequisites
- Dapr CLI
- Initialized Dapr environment
- .NET 8 or .NET 9 installed
- Dapr.Messaging NuGet package installed to your project
Set up the environment
Clone the .NET SDK repo.
git clone https://github.com/dapr/dotnet-sdk.git
From the .NET SDK root directory, navigate to the Dapr streaming PubSub example.
cd examples/Client/PublishSubscribe
Run the application locally
To run the Dapr application, you need to start the .NET program and a Dapr sidecar. Navigate to the StreamingSubscriptionExample
directory.
cd StreamingSubscriptionExample
We’ll run a command that starts both the Dapr sidecar and the .NET program at the same time.
dapr run --app-id pubsubapp --dapr-grpc-port 4001 --dapr-http-port 3500 -- dotnet run
Dapr listens for HTTP requests at http://localhost:3500 and gRPC requests at http://localhost:4001.
Register the Dapr PubSub client with dependency injection
The Dapr Messaging SDK provides an extension method to simplify the registration of the Dapr PubSub client. Before
completing the dependency injection registration in Program.cs
, add the following line:
var builder = WebApplication.CreateBuilder(args);
//Add anywhere between these two
builder.Services.AddDaprPubSubClient(); //That's it
var app = builder.Build();
You may want to provide some configuration options to the Dapr PubSub client that should be present with each call to the sidecar, such as a Dapr API token, or you may want to use a non-standard HTTP or gRPC endpoint. This is possible through an overload of the registration method that allows configuration
of a DaprPublishSubscribeClientBuilder
instance:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprPubSubClient((_, daprPubSubClientBuilder) => {
daprPubSubClientBuilder.UseDaprApiToken("abc123");
daprPubSubClientBuilder.UseHttpEndpoint("http://localhost:8512"); //Non-standard sidecar HTTP endpoint
});
var app = builder.Build();
Still, it’s possible that whatever values you wish to inject need to be retrieved from some other source, itself registered as a dependency. There’s one more overload you can use to inject an IServiceProvider into the configuration action method. In the following example, we register a fictional singleton that can retrieve secrets from somewhere and pass it into the configuration method for AddDaprPubSubClient so we can retrieve our Dapr API token for registration here:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<SecretRetriever>();
builder.Services.AddDaprPubSubClient((serviceProvider, daprPubSubClientBuilder) => {
var secretRetriever = serviceProvider.GetRequiredService<SecretRetriever>();
var daprApiToken = secretRetriever.GetSecret("DaprApiToken").Value;
daprPubSubClientBuilder.UseDaprApiToken(daprApiToken);
daprPubSubClientBuilder.UseHttpEndpoint("http://localhost:8512");
});
var app = builder.Build();
Use the Dapr PubSub client using IConfiguration
It’s possible to configure the Dapr PubSub client using the values in your registered IConfiguration without explicitly specifying each of the value overrides on the DaprPublishSubscribeClientBuilder as demonstrated in the previous section. Rather, by populating an IConfiguration made available through dependency injection, the AddDaprPubSubClient() registration will automatically use these values over their respective defaults.
Start by populating the values in your configuration. This can be done in several different ways as demonstrated below.
Configuration via ConfigurationBuilder
Application settings can be configured without using a configuration source and by instead populating the value in-memory
using a ConfigurationBuilder
instance:
var builder = WebApplication.CreateBuilder();
//Create the configuration
var configuration = new ConfigurationBuilder()
.AddInMemoryCollection(new Dictionary<string, string> {
{ "DAPR_HTTP_ENDPOINT", "http://localhost:54321" },
{ "DAPR_API_TOKEN", "abc123" }
})
.Build();
builder.Configuration.AddConfiguration(configuration);
builder.Services.AddDaprPubSubClient(); //This will automatically populate the HTTP endpoint and API token values from the IConfiguration
Configuration via Environment Variables
Application settings can be accessed from environment variables available to your application.
The following environment variables will be used to populate both the HTTP endpoint and API token used to register the Dapr PubSub client.
Key | Value |
---|---|
DAPR_HTTP_ENDPOINT | http://localhost:54321 |
DAPR_API_TOKEN | abc123 |
var builder = WebApplication.CreateBuilder();
builder.Configuration.AddEnvironmentVariables();
builder.Services.AddDaprPubSubClient();
The Dapr PubSub client will be configured to use both the HTTP endpoint http://localhost:54321
and populate all outbound
requests with the API token header abc123
.
Configuration via prefixed Environment Variables
However, in shared-host scenarios where there are multiple applications all running on the same machine without using containers or in development environments, it’s not uncommon to prefix environment variables. The following example assumes that both the HTTP endpoint and the API token will be pulled from environment variables prefixed with the value “myapp_”. The two environment variables used in this scenario are as follows:
Key | Value |
---|---|
myapp_DAPR_HTTP_ENDPOINT | http://localhost:54321 |
myapp_DAPR_API_TOKEN | abc123 |
These environment variables will be loaded into the registered configuration in the following example and made available without the prefix attached.
var builder = WebApplication.CreateBuilder();
builder.Configuration.AddEnvironmentVariables(prefix: "myapp_");
builder.Services.AddDaprPubSubClient();
The Dapr PubSub client will be configured to use both the HTTP endpoint http://localhost:54321
and populate all outbound
requests with the API token header abc123
.
Use the Dapr PubSub client without relying on dependency injection
While the use of dependency injection simplifies the use of complex types in .NET and makes it easier to
deal with complicated configurations, you’re not required to register the DaprPublishSubscribeClient
in this way.
Rather, you can also elect to create an instance of it from a DaprPublishSubscribeClientBuilder
instance as
demonstrated below:
public class MySampleClass
{
public void DoSomething()
{
var daprPubSubClientBuilder = new DaprPublishSubscribeClientBuilder();
var daprPubSubClient = daprPubSubClientBuilder.Build();
//Do something with the `daprPubSubClient`
}
}
Set up message handler
The streaming subscription implementation in Dapr gives you greater control over handling backpressure from events by leaving the messages in the Dapr runtime until your application is ready to accept them. The .NET SDK supports a high-performance queue for maintaining a local cache of these messages in your application while processing is pending. These messages will persist in the queue until processing either times out for each one or a response action is taken for each (typically after processing succeeds or fails). Until this response action is received by the Dapr runtime, the messages will be persisted by Dapr and made available in case of a service failure.
The various response actions available are as follows:
Response Action | Description |
---|---|
Retry | The event should be delivered again in the future. |
Drop | The event should be deleted (or forwarded to a dead letter queue, if configured) and not attempted again. |
Success | The event should be deleted as it was successfully processed. |
The handler will receive only one message at a time and if a cancellation token is provided to the subscription, this token will be provided during the handler invocation.
The handler must be configured to return a Task<TopicResponseAction>
indicating one of these operations, even if from
a try/catch block. If an exception is not caught by your handler, the subscription will use the response action configured
in the options during subscription registration.
The following demonstrates the sample message handler provided in the example:
Task<TopicResponseAction> HandleMessageAsync(TopicMessage message, CancellationToken cancellationToken = default)
{
try
{
//Do something with the message
Console.WriteLine(Encoding.UTF8.GetString(message.Data.Span));
return Task.FromResult(TopicResponseAction.Success);
}
catch
{
return Task.FromResult(TopicResponseAction.Retry);
}
}
Configure and subscribe to the PubSub topic
Configuration of the streaming subscription requires the name of the PubSub component registered with Dapr, the name
of the topic or queue being subscribed to, the DaprSubscriptionOptions
providing the configuration for the subscription,
the message handler and an optional cancellation token. The only required argument to the DaprSubscriptionOptions
is
the default MessageHandlingPolicy
which consists of a per-event timeout and the TopicResponseAction
to take when
that timeout occurs.
Other options are as follows:
Property Name | Description |
---|---|
Metadata | Additional subscription metadata |
DeadLetterTopic | The optional name of the dead-letter topic to send dropped messages to. |
MaximumQueuedMessages | By default, there is no maximum boundary enforced for the internal queue, but setting this property would impose an upper limit. |
MaximumCleanupTimeout | When the subscription is disposed of or the token flags a cancellation request, this specifies the maximum amount of time available to process the remaining messages in the internal queue. |
Subscription is then configured as in the following example:
var messagingClient = app.Services.GetRequiredService<DaprPublishSubscribeClient>();
var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(60)); //Override the default of 30 seconds
var options = new DaprSubscriptionOptions(new MessageHandlingPolicy(TimeSpan.FromSeconds(10), TopicResponseAction.Retry));
var subscription = await messagingClient.SubscribeAsync("pubsub", "mytopic", options, HandleMessageAsync, cancellationTokenSource.Token);
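The optional properties from the table above can be set on that same options instance. The following is a minimal sketch; the values are illustrative only and assume each property can be set via an object initializer:
var options = new DaprSubscriptionOptions(new MessageHandlingPolicy(TimeSpan.FromSeconds(10), TopicResponseAction.Retry))
{
    DeadLetterTopic = "mytopic-deadletter", //Illustrative dead-letter topic name
    MaximumQueuedMessages = 500, //Cap the internal queue instead of leaving it unbounded
    MaximumCleanupTimeout = TimeSpan.FromSeconds(15) //Time allowed to drain the queue on disposal
};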
Terminate and clean up subscription
When you’ve finished with your subscription and wish to stop receiving new events, simply await a call to
DisposeAsync()
on your subscription instance. This will cause the client to unregister from additional events and
proceed to finish processing all the events still leftover in the backpressure queue, if any, before disposing of any
internal resources. This cleanup will be limited to the timeout interval provided in the DaprSubscriptionOptions
when
the subscription was registered and by default, this is set to 30 seconds.
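Using the subscription instance created in the previous section, that cleanup is a single call:
//Unregisters from new events and drains the remaining queued messages, bounded by the configured cleanup timeout
await subscription.DisposeAsync();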
2.1.7.2 - DaprPublishSubscribeClient usage
Lifetime management
A DaprPublishSubscribeClient
is a version of the Dapr client that is dedicated to interacting with the Dapr Messaging API.
It can be registered alongside a DaprClient
and other Dapr clients without issue.
It maintains access to networking resources in the form of TCP sockets used to communicate with the Dapr sidecar and implements
IAsyncDisposable
to support the eager cleanup of resources.
For best performance, create a single long-lived instance of DaprPublishSubscribeClient
and provide access to that shared
instance throughout your application. DaprPublishSubscribeClient
instances are thread-safe and intended to be shared.
This can be aided by utilizing the dependency injection functionality. The registration method supports registration
as a singleton, a scoped instance or as transient (meaning it’s recreated every time it’s injected), but also enables
registration to utilize values from an IConfiguration
or other injected service in a way that’s impractical when
creating the client from scratch in each of your classes.
Avoid creating a DaprPublishSubscribeClient
for each operation and disposing it when the operation is complete. It’s
intended that the DaprPublishSubscribeClient
should only be disposed when you no longer wish to receive events on the
subscription as disposing it will cancel the ongoing receipt of new events.
Configuring DaprPublishSubscribeClient via the DaprPublishSubscribeClientBuilder
A DaprPublishSubscribeClient
can be configured by invoking methods on the DaprPublishSubscribeClientBuilder
class
before calling .Build()
to create the client itself. The settings for each DaprPublishSubscribeClient
are separate
and cannot be changed after calling .Build()
.
var daprPubsubClient = new DaprPublishSubscribeClientBuilder()
.UseDaprApiToken("abc123") // Specify the API token used to authenticate to other Dapr sidecars
.Build();
The DaprPublishSubscribeClientBuilder
contains settings for:
- The HTTP endpoint of the Dapr sidecar
- The gRPC endpoint of the Dapr sidecar
- The JsonSerializerOptions object used to configure JSON serialization
- The GrpcChannelOptions object used to configure gRPC
- The API token used to authenticate requests to the sidecar
- The factory method used to create the HttpClient instance used by the SDK
- The timeout used for the HttpClient instance when making requests to the sidecar
The SDK will read the following environment variables to configure the default values:
- DAPR_HTTP_ENDPOINT: used to find the HTTP endpoint of the Dapr sidecar, for example: https://dapr-api.mycompany.com
- DAPR_GRPC_ENDPOINT: used to find the gRPC endpoint of the Dapr sidecar, for example: https://dapr-grpc-api.mycompany.com
- DAPR_HTTP_PORT: if DAPR_HTTP_ENDPOINT is not set, this is used to find the HTTP local endpoint of the Dapr sidecar
- DAPR_GRPC_PORT: if DAPR_GRPC_ENDPOINT is not set, this is used to find the gRPC local endpoint of the Dapr sidecar
- DAPR_API_TOKEN: used to set the API token
Configuring gRPC channel options
Dapr’s use of CancellationToken
for cancellation relies on the configuration of the gRPC channel options. If you
need to configure these options yourself, make sure to enable the ThrowOperationCanceledOnCancellation setting.
var daprPubsubClient = new DaprPublishSubscribeClientBuilder()
.UseGrpcChannelOptions(new GrpcChannelOptions { ... ThrowOperationCanceledOnCancellation = true })
.Build();
Using cancellation with DaprPublishSubscribeClient
The APIs on DaprPublishSubscribeClient
perform asynchronous operations and accept an optional CancellationToken
parameter. This follows a standard .NET practice for cancellable operations. Note that when cancellation occurs, there is
no guarantee that the remote endpoint stops processing the request, only that the client has stopped waiting for completion.
When an operation is cancelled, it will throw an OperationCanceledException
.
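As a brief sketch of that pattern, assuming a messagingClient, options, and HandleMessageAsync like those shown in the how-to guide:
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
try
{
    var subscription = await messagingClient.SubscribeAsync("pubsub", "mytopic", options, HandleMessageAsync, cts.Token);
}
catch (OperationCanceledException)
{
    //The client stopped waiting for completion; the sidecar may still be processing the request
}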
Configuring DaprPublishSubscribeClient
via dependency injection
Using the built-in extension methods for registering the DaprPublishSubscribeClient
in a dependency injection container
can provide the benefit of registering the long-lived service a single time, centralize complex configuration and improve
performance by ensuring similarly long-lived resources are re-purposed when possible (e.g. HttpClient
instances).
There are three overloads available to give the developer the greatest flexibility in configuring the client for their
scenario. Each of these will register the IHttpClientFactory
on your behalf if not already registered, and configure
the DaprPublishSubscribeClientBuilder
to use it when creating the HttpClient
instance in order to re-use the same
instance as much as possible and avoid socket exhaustion and other issues.
In the first approach, there’s no configuration done by the developer and the DaprPublishSubscribeClient
is configured with
the default settings.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprPubSubClient(); //Registers the `DaprPublishSubscribeClient` to be injected as needed
var app = builder.Build();
Sometimes the developer will need to configure the created client using the various configuration options detailed above. This is done through an overload that passes in the DaprPublishSubscribeClientBuilder and exposes methods for configuring the necessary options.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprPubSubClient((_, daprPubSubClientBuilder) => {
//Set the API token
daprPubSubClientBuilder.UseDaprApiToken("abc123");
//Specify a non-standard HTTP endpoint
daprPubSubClientBuilder.UseHttpEndpoint("http://dapr.my-company.com");
});
var app = builder.Build();
Finally, it’s possible that the developer may need to retrieve information from another service in order to populate these configuration values. That value may be provided from a DaprClient
instance, a vendor-specific SDK or some local service, but as long as it’s also registered in DI, it can be injected into this configuration operation via the last overload:
var builder = WebApplication.CreateBuilder(args);
//Register a fictional service that retrieves secrets from somewhere
builder.Services.AddSingleton<SecretService>();
builder.Services.AddDaprPubSubClient((serviceProvider, daprPubSubClientBuilder) => {
//Retrieve an instance of the `SecretService` from the service provider
var secretService = serviceProvider.GetRequiredService<SecretService>();
var daprApiToken = secretService.GetSecret("DaprApiToken").Value;
//Configure the `DaprPublishSubscribeClientBuilder`
daprPubSubClientBuilder.UseDaprApiToken(daprApiToken);
});
var app = builder.Build();
2.1.8 - Best Practices for the Dapr .NET SDK
Building with confidence
The Dapr .NET SDK offers a rich set of capabilities for building distributed applications. This section provides practical guidance for using the SDK effectively in production scenarios, focusing on reliability, maintainability, and developer experience.
Topics covered include:
- Error handling strategies across Dapr building blocks
- Managing experimental features and suppressing related warnings
- Leveraging source analyzers and generators to reduce boilerplate and catch issues early
- General .NET development practices in Dapr-based applications
Error model guidance
Dapr operations can fail for many reasons: network issues, misconfigured components, or transient faults. The SDK provides structured error types to help you distinguish between retryable and fatal errors.
Learn how to use DaprException
and its derived types effectively here.
Experimental attributes
Some SDK features are marked as experimental and may change in future releases. These are annotated with
[Experimental]
and generate build-time errors by default. You can:
- Suppress warnings selectively using #pragma warning disable
- Use SuppressMessage attributes for finer control
- Track experimental usage across your codebase
Learn more about our use of the [Experimental]
attribute here.
Source tooling
The SDK includes Roslyn-based analyzers and source generators to help you write better code with less effort. These tools:
- Warn about common misuses of the SDK
- Generate boilerplate for actor registration and invocation
- Support IDE integration for faster feedback
Read more about how to install and use these analyzers here.
Additional guidance
This section is designed to support a wide range of development scenarios. As your applications grow in complexity, you’ll find increasingly relevant practices and patterns for working with Dapr in .NET, from actor lifecycle management to configuration strategies and performance tuning.
2.1.8.1 - Error Model in the Dapr .NET SDK
The Dapr .NET SDK supports the richer error model, implemented by the Dapr runtime. This model provides a way for applications to enrich their errors with added context, allowing consumers of the application to better understand the issue and resolve it faster. You can read more about the richer error model here, and you can find the Dapr proto file implementing these errors here.
The Dapr .NET SDK implements all the details supported by the Dapr runtime in the Dapr.Common.Exceptions namespace; they are accessible through the DaprException extension method TryGetExtendedErrorInfo. Currently, this detail extraction is only supported for RpcExceptions where the details are present.
// Example usage of ExtendedErrorInfo
try
{
    // Perform some action with the Dapr client that throws a DaprException.
}
catch (DaprException daprEx)
{
    if (daprEx.TryGetExtendedErrorInfo(out DaprExtendedErrorInfo errorInfo))
    {
        Console.WriteLine(errorInfo.Code);
        Console.WriteLine(errorInfo.Message);
        foreach (DaprExtendedErrorDetail detail in errorInfo.Details)
        {
            Console.WriteLine(detail.ErrorType);
            switch (detail.ErrorType)
            {
                case ExtendedErrorType.ErrorInfo:
                    Console.WriteLine(detail.Reason);
                    Console.WriteLine(detail.Domain);
                    break;
                default:
                    Console.WriteLine(detail.TypeUrl);
                    break;
            }
        }
    }
}
DaprExtendedErrorInfo
Contains Code
(the status code) and Message
(the error message) associated with the error, parsed from an inner RpcException
.
Also contains a collection of DaprExtendedErrorDetails
parsed from the details in the exception.
DaprExtendedErrorDetail
All details implement the abstract DaprExtendedErrorDetail
and have an associated DaprExtendedErrorType
.
RetryInfo
Information notifying the client how long to wait before they should retry. Provides a DaprRetryDelay
with the properties
Second
(offset in seconds) and Nano
(offset in nanoseconds).
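As a small illustration, the two offsets can be combined into a single delay before retrying; this helper is a sketch that uses only the Second and Nano properties described above:
//100 nanoseconds per tick; combines both offsets into one TimeSpan
static TimeSpan ToTimeSpan(DaprRetryDelay delay) =>
    TimeSpan.FromSeconds(delay.Second) + TimeSpan.FromTicks(delay.Nano / 100);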
DebugInfo
Debugging information offered by the server. Contains StackEntries
(a collection of strings containing the stack trace), and
Detail
(further debugging information).
QuotaFailure
Information relating to some quota that may have been reached, such as a daily usage limit on an API. It has one property Violations
,
a collection of DaprQuotaFailureViolation
, which each contain a Subject
(the subject of the request) and Description
(further information regarding the failure).
PreconditionFailure
Information informing the client that some required precondition was not met. Has one property Violations
, a collection of
DaprPreconditionFailureViolation
, which each has Subject
(subject where the precondition failure occurred, e.g. “Azure”),
Type
(representation of the precondition type, e.g. “TermsOfService”), and Description
(further description e.g. “ToS must be accepted.”).
RequestInfo
Information returned by the server that it can use to identify the client’s request. Contains
RequestId
and ServingData
properties, RequestId
being some string (such as a UID) the server can interpret,
and ServingData
being some arbitrary data that made up part of the request.
LocalizedMessage
Contains a localized message, along with the locale of the message. Contains Locale
(the locale e.g. “en-US”) and Message
(the localized message).
BadRequest
Describes a bad request field. Contains a collection of DaprBadRequestDetailFieldViolation
, which each has Field
(the offending field in request, e.g. ‘first_name’) and
Description
(further information detailing the reason, e.g. “first_name cannot contain special characters”).
ErrorInfo
Details the cause of an error. Contains three properties, Reason
(the reason for the error, which should take the form of UPPER_SNAKE_CASE, e.g. DAPR_INVALID_KEY),
Domain
(domain the error belongs to, e.g. ‘dapr.io’), and Metadata
, a key/value-based collection with further information.
Help
Provides resources for the client to perform further research into the issue. Contains a collection of DaprHelpDetailLink
,
which provides Url
(a url to help or documentation), and Description
(a description of what the link provides).
ResourceInfo
Provides information relating to an accessed resource. Provides four properties: ResourceType
(type of the resource being accessed, e.g. “Azure service bus”),
ResourceName
(the name of the resource e.g. “my-configured-service-bus”), Owner
(the owner of the resource e.g. “subscriptionowner@dapr.io”),
and Description
(further information on the resource relating to the error, e.g. “missing permissions to use this resource”).
Unknown
Returned when the detail type url cannot be mapped to the correct DaprExtendedErrorDetail
implementation.
Provides one property TypeUrl
(the type url that could not be parsed, e.g. “type.googleapis.com/Google.rpc.UnrecognizedType”).
2.1.8.2 - Experimental Attributes
Introduction to Experimental Attributes
With the release of .NET 8, C# 12 introduced the [Experimental]
attribute, which provides a standardized way to mark
APIs that are still in development or experimental. This attribute is defined in the System.Diagnostics.CodeAnalysis
namespace and requires a diagnostic ID parameter used to generate compiler warnings when the experimental API
is used.
In the Dapr .NET SDK, we now use the [Experimental]
attribute instead of [Obsolete]
to mark building blocks and
components that have not yet passed the stable lifecycle certification. This approach provides a clearer distinction
between:
Experimental APIs - Features that are available but still evolving and have not yet been certified as stable according to the Dapr Component Certification Lifecycle.
Obsolete APIs - Features that are truly deprecated and will be removed in a future release.
Usage in the Dapr .NET SDK
In the Dapr .NET SDK, we apply the [Experimental]
attribute at the class level for building blocks that are still in
the Alpha or Beta stages of the Component Certification Lifecycle.
The attribute includes:
- A diagnostic ID that identifies the experimental building block
- A URL that points to the relevant documentation for that block
For example:
using System.Diagnostics.CodeAnalysis;
namespace Dapr.Cryptography.Encryption
{
[Experimental("DAPR_CRYPTOGRAPHY", UrlFormat = "https://docs.dapr.io/developing-applications/building-blocks/cryptography/cryptography-overview/")]
public class DaprEncryptionClient
{
// Implementation
}
}
The diagnostic IDs follow a naming convention of DAPR_[BUILDING_BLOCK_NAME]
, such as:
- DAPR_CONVERSATION - For the Conversation building block
- DAPR_CRYPTOGRAPHY - For the Cryptography building block
- DAPR_JOBS - For the Jobs building block
- DAPR_DISTRIBUTEDLOCK - For the Distributed Lock building block
Suppressing Experimental Warnings
When you use APIs marked with the [Experimental]
attribute, the compiler will generate errors.
To build your solution without marking your own code as experimental, you will need to suppress these errors. Here are
several approaches to do this:
Option 1: Using #pragma directive
You can use the #pragma warning
directive to suppress the warning for specific sections of code:
// Disable experimental warning
#pragma warning disable DAPR_CRYPTOGRAPHY
// Your code using the experimental API
var client = new DaprEncryptionClient();
// Re-enable the warning
#pragma warning restore DAPR_CRYPTOGRAPHY
This approach is useful when you want to suppress warnings only for specific sections of your code.
Option 2: Project-level suppression
To suppress warnings for an entire project, add the following to your .csproj
file:
<PropertyGroup>
<NoWarn>$(NoWarn);DAPR_CRYPTOGRAPHY</NoWarn>
</PropertyGroup>
You can include multiple diagnostic IDs separated by semicolons:
<PropertyGroup>
<NoWarn>$(NoWarn);DAPR_CONVERSATION;DAPR_JOBS;DAPR_DISTRIBUTEDLOCK;DAPR_CRYPTOGRAPHY</NoWarn>
</PropertyGroup>
This approach is particularly useful for test projects that need to use experimental APIs.
Option 3: Directory-level suppression
For suppressing warnings across multiple projects in a directory, add a Directory.Build.props
file:
<PropertyGroup>
<NoWarn>$(NoWarn);DAPR_CONVERSATION;DAPR_JOBS;DAPR_DISTRIBUTEDLOCK;DAPR_CRYPTOGRAPHY</NoWarn>
</PropertyGroup>
This file should be placed in the root directory of your test projects. You can learn more about using
Directory.Build.props
files in the
MSBuild documentation.
Lifecycle of Experimental APIs
As building blocks move through the certification lifecycle and reach the “Stable” stage, the [Experimental]
attribute will be removed. No migration or code changes will be required from users when this happens, except for the removal of any warning suppressions if they were added.
Conversely, the [Obsolete]
attribute will now be reserved exclusively for APIs that are truly deprecated and scheduled for removal. When you see a method or class marked with [Obsolete]
, you should plan to migrate away from it according to the migration guidance provided in the attribute message.
Best Practices
In application code:
- Be cautious when using experimental APIs, as they may change in future releases
- Consider isolating usage of experimental APIs to make future updates easier (see the sketch after this list)
- Document your use of experimental APIs for team awareness
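A minimal sketch of that isolation follows; the IEncryptionService abstraction is an assumption here, something your application would define itself:
#pragma warning disable DAPR_CRYPTOGRAPHY
//Only this file touches the experimental client directly, keeping the suppression in one place
public sealed class DaprEncryptionService : IEncryptionService
{
    private readonly DaprEncryptionClient _client;

    public DaprEncryptionService(DaprEncryptionClient client) => _client = client;

    //Wrap the experimental operations your application needs behind your own methods here
}
#pragma warning restore DAPR_CRYPTOGRAPHY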
In test code:
- Use project-level suppression to avoid cluttering test code with warning suppressions
- Regularly review which experimental APIs you’re using and check if they’ve been stabilized
When contributing to the SDK:
- Use [Experimental] for new building blocks that haven’t completed certification
- Use [Obsolete] only for truly deprecated APIs
- Provide clear documentation links in the UrlFormat parameter
2.1.8.3 - Dapr source code analyzers and generators
Dapr supports a growing collection of optional Roslyn analyzers and code fix providers that inspect your code for code quality issues. Starting with the release of v1.16, developers have the opportunity to install additional projects from NuGet alongside each of the standard capability packages to enable these analyzers in their solutions.
Note
A future release of the Dapr .NET SDK will include these analyzers by default without requiring a separate package install.
Rule violations will typically be marked as Info or Warning so that if the analyzer identifies an issue, it won’t necessarily break builds. All code analysis violations appear with the prefix “DAPR” and are uniquely distinguished by a number following this prefix.
Note
At this time, the first two digits of the diagnostic identifier map one-to-one to distinct Dapr packages, but this is subject to change in the future as more analyzers are developed.
Install and configure analyzers
The following packages will be available via NuGet following the v1.16 Dapr release:
- Dapr.Actors.Analyzers
- Dapr.Jobs.Analyzers
- Dapr.Workflow.Analyzers
Install each NuGet package on every project where you want the analyzers to run. The package will be installed as a project dependency and analyzers will run as you write your code or as part of a CI/CD build. The analyzers will flag issues in your existing code and warn you about new issues as you build your project.
Many of our analyzers have associated code fixes that can be applied to automatically correct the problem. If your IDE supports this capability, any available code fixes will show up as an inline menu option in your code.
Further, most of our analyzers should also report a specific line and column number in your code of the syntax that’s been identified as a key aspect of the rule. If your IDE supports it, double clicking any of the analyzer warnings should jump directly to the part of your code responsible for violating the analyzer’s rule.
Suppress specific analyzers
If you wish to keep an analyzer from firing against some particular piece of your project, their outputs can be individually targeted for suppression through a number of ways. Read more about suppressing analyzers in projects or files in the associated .NET documentation.
Disable all analyzers
If you wish to disable all analyzers in your project without removing any packages providing them, set
the EnableNETAnalyzers
property to false
in your csproj file.
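For example:
<PropertyGroup>
<EnableNETAnalyzers>false</EnableNETAnalyzers>
</PropertyGroup>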
Available Analyzers
Diagnostic ID | Dapr Package | Category | Severity | Version Added | Description | Code Fix Available |
---|---|---|---|---|---|---|
DAPR1301 | Dapr.Workflow | Usage | Warning | 1.16 | The workflow type is not registered with the dependency injection provider | Yes |
DAPR1302 | Dapr.Workflow | Usage | Warning | 1.16 | The workflow activity type is not registered with the dependency injection provider | Yes |
DAPR1401 | Dapr.Actors | Usage | Warning | 1.16 | Actor timer method invocations require the named callback method to exist on type | No |
DAPR1402 | Dapr.Actors | Usage | Warning | 1.16 | The actor type is not registered with dependency injection | Yes |
DAPR1403 | Dapr.Actors | Interoperability | Info | 1.16 | Set options.UseJsonSerialization to true to support interoperability with non-.NET actors | Yes |
DAPR1404 | Dapr.Actors | Usage | Warning | 1.16 | Call app.MapActorsHandlers to map endpoints for Dapr actors | Yes |
DAPR1501 | Dapr.Jobs | Usage | Warning | 1.16 | Job invocations require the MapDaprScheduledJobHandler to be set and configured for each anticipated job on IEndpointRouteBuilder | No |
Analyzer Categories
The following are each of the eligible categories that an analyzer can be assigned to and are modeled after the standard categories used by the .NET analyzers:
- Design
- Documentation
- Globalization
- Interoperability
- Maintainability
- Naming
- Performance
- Reliability
- Security
- Usage
2.1.9 - Developing applications with the Dapr .NET SDK
Thinking more than one at a time
Using your favorite IDE or editor to launch an application typically assumes that you only need to run one thing: the application you’re debugging. However, developing microservices challenges you to think about your local development process for more than one at a time. A microservices application has multiple services that you might need running simultaneously, and dependencies (like state stores) to manage.
Adding Dapr to your development process means you need to manage the following concerns:
- Each service you want to run
- A Dapr sidecar for each service
- Dapr component and configuration manifests
- Additional dependencies such as state stores
- optional: the Dapr placement service for actors
This document assumes that you’re building a production application and want to create a repeatable and robust set of development practices. The guidance here is generalized, and applies to any .NET server application using Dapr (including actors).
Managing components
You have two primary methods of storing component definitions for local development with Dapr:
- Use the default location (~/.dapr/components)
- Use your own location
Creating a folder within your source code repository to store components and configuration will give you a way to version and share these definitions. The guidance provided here will assume you created a folder next to the application source code to store these files.
Development options
Choose one of these links to learn about tools you can use in local development scenarios. It’s suggested that you familiarize yourself with each of them to get a sense of the options provided by the .NET SDK.
2.1.9.1 - Dapr .NET SDK Development with Dapr CLI
Dapr CLI
Consider this to be a .NET companion to the Dapr Self-Hosted with Docker Guide.
The Dapr CLI provides you with a good base to work from by initializing a local redis container, zipkin container, the placement service, and component manifests for redis. This enables you to work with several building blocks on a fresh install with no additional setup.
You can run .NET services with dapr run
as your strategy for developing locally. Plan on running one of these commands per-service in order to launch your application.
- Pro: this is easy to set up since it's part of the default Dapr installation
- Con: this uses long-running docker containers on your machine, which might not be desirable
- Con: the scalability of this approach is poor since it requires running a separate command per-service
Using the Dapr CLI
For each service you need to choose:
- A unique app-id for addressing (app-id)
- A unique listening port for HTTP (port)
You also should have decided on where you are storing components (components-path
).
The following command can be run from multiple terminals to launch each service, with the respective values substituted.
dapr run --app-id <app-id> --app-port <port> --components-path <components-path> -- dotnet run -p <project> --urls http://localhost:<port>
Explanation: this command will use dapr run
to launch each service and its sidecar. The first half of the command (before --
) passes required configuration to the Dapr CLI. The second half of the command (after --
) passes required configuration to the dotnet run
command.
đĄ Ports
Since you need to configure a unique port for each service, you can use this command to pass that port value to both Dapr and the service. --urls http://localhost:<port> will configure ASP.NET Core to listen for traffic on the provided port. Using configuration at the command line is a more flexible approach than hardcoding a listening port elsewhere.
If any of your services do not accept HTTP traffic, then modify the command above by removing the --app-port and --urls arguments.
Next steps
If you need to debug, then use the attach feature of your debugger to attach to one of the running processes.
If you want to scale up this approach, then consider building a script which automates this process for your whole application.
2.1.9.2 - Dapr .NET SDK Development with Docker-Compose
Docker-Compose
Consider this to be a .NET companion to the Dapr Self-Hosted with Docker Guide.
docker-compose
is a CLI tool included with Docker Desktop that you can use to run multiple containers at a time. It is a way to automate the lifecycle of multiple containers together, and offers a development experience similar to a production environment for applications targeting Kubernetes.
- Pro: Since docker-compose manages containers for you, you can make dependencies part of the application definition and stop the long-running containers on your machine.
- Con: this requires the most investment; services need to be containerized to get started.
- Con: it can be difficult to debug and troubleshoot if you are unfamiliar with Docker.
Using docker-compose
From the .NET perspective, there is no specialized guidance needed for docker-compose
with Dapr. docker-compose
runs containers, and once your service is in a container, configuring it is similar to any other programming technology.
đĄ App Port
In a container, an ASP.NET Core app will listen on port 80 by default. Remember this for when you need to configure the --app-port later.
To summarize the approach:
- Create a Dockerfile for each service
- Create a docker-compose.yaml and check it in to the source code repository
To understand how to author the docker-compose.yaml, you should start with the Hello, docker-compose sample.
Similar to running locally with dapr run
for each service, you need to choose a unique app-id. Choosing the container name as the app-id will make this simple to remember.
The compose file will contain at a minimum (a sketch follows the list below):
- A network that the containers use to communicate
- Each service’s container
- A <service>-daprd sidecar container with the service’s port and app-id specified
- Additional dependencies that run in containers (redis for example)
- optional: Dapr placement container (for actors)
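A minimal sketch of such a compose file follows; the image tag, flag spellings, and names here are illustrative only, and the linked samples show complete, working definitions:
version: "3.8"
services:
  myservice:
    build: ./MyService
    networks:
      - hello-dapr
  myservice-daprd:
    image: "daprio/daprd:latest"
    command: ["./daprd", "--app-id", "myservice", "--app-port", "80"]
    network_mode: "service:myservice" # Share the service container's network namespace
    depends_on:
      - myservice
networks:
  hello-dapr: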
You can also view a larger example from the eShopOnContainers sample application.
2.1.9.3 - Dapr .NET SDK Development with .NET Aspire
.NET Aspire
.NET Aspire is a development tool designed to make it easier to include external software into .NET applications by providing a framework that allows third-party services to be readily integrated, observed and provisioned alongside your own software.
Aspire simplifies local development by providing rich integration with popular IDEs including Microsoft Visual Studio, Visual Studio Code, JetBrains Rider and others to launch your application with the debugger while automatically launching and provisioning access to other integrations as well, including Dapr.
While Aspire also assists with deployment of your application to various cloud hosts like Microsoft Azure and Amazon AWS, deployment is currently outside the scope of this guide. More information can be found in Aspire’s documentation here.
An end-to-end demonstration of service invocation between multiple Dapr-enabled services can be found here.
Prerequisites
- Both the Dapr .NET SDK and .NET Aspire are compatible with .NET 8 or .NET 9
- An OCI compliant container runtime such as Docker Desktop or Podman
- Install and initialize Dapr v1.16 or later
Using .NET Aspire via CLI
We’ll start by creating a brand new .NET application. Open your preferred CLI and navigate to the directory you wish to create your new .NET solution within. Start by using the following command to install a template that will create an empty Aspire application:
dotnet new install Aspire.ProjectTemplates
Once that’s installed, proceed to create an empty .NET Aspire application in your current directory. The -n
argument
allows you to specify the name of the output solution. If it’s excluded, the .NET CLI will instead use the name
of the output directory, e.g. C:\source\aspiredemo
will result in the solution being named aspiredemo
. The rest
of this tutorial will assume a solution named aspiredemo
.
dotnet new aspire -n aspiredemo
This will create two Aspire-specific directories and one file in your directory:
- aspiredemo.AppHost/ contains the Aspire orchestration project that is used to configure each of the integrations used in your application(s).
- aspiredemo.ServiceDefaults/ contains a collection of extensions meant to be shared across your solution to aid in resilience, service discovery and telemetry capabilities offered by Aspire (these are distinct from the capabilities offered in Dapr itself).
- aspiredemo.sln is the file that maintains the layout of your current solution.
We’ll next create two projects that’ll serve as our Dapr applications and demonstrate Dapr functionality. From the same
directory, use the following to create an empty ASP.NET Core project called FrontEndApp
and another called
‘BackEndApp’. Either one will be created relative to your current directory in
FrontEndApp\FrontEndApp.csproj
and BackEndApp\BackEndApp.csproj
, respectively.
dotnet new web --name FrontEndApp
dotnet new web --name BackEndApp
Next we’ll configure the AppHost project to add the necessary package to support local Dapr development. Navigate
into the AppHost directory with the following and install the CommunityToolkit.Aspire.Hosting.Dapr
package from NuGet into the project.
We’ll also add references to our FrontEndApp and BackEndApp
projects so we can reference them during the registration process.
Note: be sure to install the CommunityToolkit package and not the original Aspire.Hosting.Dapr, which has been marked as deprecated.
cd aspiredemo.AppHost
dotnet add package CommunityToolkit.Aspire.Hosting.Dapr
dotnet add reference ../FrontEndApp/
dotnet add reference ../BackEndApp/
Next, we need to configure Dapr as a resource to be loaded alongside your project. Open the Program.cs
file in that
project within your preferred IDE. It should look similar to the following:
var builder = DistributedApplication.CreateBuilder(args);
builder.Build().Run();
If you’re familiar with the dependency injection approach used in ASP.NET Core projects or others utilizing the
Microsoft.Extensions.DependencyInjection
functionality, you’ll find that this will be a familiar experience.
Because we’ve already added project references to our two apps, we need to start by adding references in this configuration
as well. Add the following before the builder.Build().Run()
line:
var backEndApp = builder
.AddProject<Projects.BackEndApp>("be")
.WithDaprSidecar();
var frontEndApp = builder
.AddProject<Projects.FrontEndApp>("fe")
.WithDaprSidecar();
Because the project reference has been added to this solution, your project shows up as a type within the Projects.
namespace for our purposes here. The name of the variable you assign the project to doesn’t matter much in this tutorial
but would be used if you wanted to create a reference between this project and another using Aspire’s service discovery
functionality.
Adding .WithDaprSidecar()
configures Dapr as a .NET Aspire resource so that when the project runs, the sidecar will be
deployed alongside your application. This accepts a number of different options and could optionally be configured as in
the following example:
DaprSidecarOptions sidecarOptions = new()
{
AppId = "how-dapr-identifies-your-app",
AppPort = 8080, //Note that this argument is required if you intend to configure pubsub, actors or workflows as of Aspire v9.0
DaprGrpcPort = 50001,
DaprHttpPort = 3500,
MetricsPort = 9090
};
builder
.AddProject<Projects.BackEndApp>("be")
.WithDaprSidecar(sidecarOptions);
Finally, let’s add an endpoint to the back-end app that we can invoke using Dapr’s service invocation to display on a page and demonstrate that Dapr is working as expected.
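A minimal sketch of what that endpoint might look like in the BackEndApp’s Program.cs follows; the route and payload are illustrative only:
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
//An endpoint the front end can reach through Dapr service invocation using the "be" app id
app.MapGet("/status", () => Results.Ok(new { Message = "Hello from the back end!" }));
app.Run();
The front end could then call this endpoint through its own Dapr sidecar, for example with an HttpClient created via DaprClient.CreateInvokeHttpClient("be") from the Dapr.Client package.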
When you open the solution in your IDE, ensure that the aspiredemo.AppHost
is configured as your startup project, but
when you launch it in a debug configuration, you’ll note that your integrated console should reflect your expected Dapr
logs and it will be available to your application.
2.1.10 - How to troubleshoot and debug with the Dapr .NET SDK
2.1.10.1 - Troubleshoot Pub/Sub with the .NET SDK
Troubleshooting Pub/Sub
The most common problem with pub/sub is that the pub/sub endpoint in your application is not being called.
There are a few layers to this problem with different solutions:
- The application is not receiving any traffic from Dapr
- The application is not registering pub/sub endpoints with Dapr
- The pub/sub endpoints are registered with Dapr, but the request is not reaching the desired endpoint
Step 1: Turn up the logs
This is important. Future steps will depend on your ability to see logging output. ASP.NET Core logs almost nothing with the default log settings, so you will need to change it.
Adjust the logging verbosity to include Information
logging for ASP.NET Core as described here. Set the Microsoft
key to Information
.
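For example, a Development settings file might look like the following (a minimal sketch; adjust the categories to your needs):
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Information"
    }
  }
}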
Step 2: Verify you can receive traffic from Dapr
- Start the application as you would normally (dapr run ...). Make sure that you’re including an --app-port argument on the command line. Dapr needs to know that your application is listening for traffic. By default an ASP.NET Core application will listen for HTTP on port 5000 in local development.
- Wait for Dapr to finish starting.
- Examine the logs.
You should see a log entry like:
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/.....
During initialization Dapr will make some requests to your application for configuration. If you can’t find these then it means that something has gone wrong. Please ask for help either via an issue or in Discord (include the logs). If you see requests made to your application, then continue to step 3.
Step 3: Verify endpoint registration
- Start the application as you would normally (dapr run ...).
- Use curl at the command line (or another HTTP testing tool) to access the /dapr/subscribe endpoint.
Here’s an example command assuming your application’s listening port is 5000:
curl http://localhost:5000/dapr/subscribe -v
For a correctly configured application the output should look like the following:
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 5000 (#0)
> GET /dapr/subscribe HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Fri, 15 Jan 2021 22:31:40 GMT
< Content-Type: application/json
< Server: Kestrel
< Transfer-Encoding: chunked
<
* Connection #0 to host localhost left intact
[{"topic":"deposit","route":"deposit","pubsubName":"pubsub"},{"topic":"withdraw","route":"withdraw","pubsubName":"pubsub"}]* Closing connection 0
Pay particular attention to the HTTP status code, and the JSON output.
< HTTP/1.1 200 OK
A 200 status code indicates success.
The JSON blob that’s included near the end is the output of /dapr/subscribe
that’s processed by the Dapr runtime. In this case it’s using the ControllerSample
in this repo - so this is an example of correct output.
[
{"topic":"deposit","route":"deposit","pubsubName":"pubsub"},
{"topic":"withdraw","route":"withdraw","pubsubName":"pubsub"}
]
With the output of this command in hand, you are ready to diagnose a problem or move on to the next step.
Option 0: The response was a 200 and included some pub/sub entries
If you have entries in the JSON output from this test then the problem lies elsewhere; move on to step 4.
Option 1: The response was not a 200, or didn’t contain JSON
If the response was not a 200 or did not contain JSON, then the MapSubscribeHandler()
endpoint was not reached.
Make sure you have some code like the following in Startup.cs
and repeat the test.
app.UseRouting();
app.UseCloudEvents();
app.UseEndpoints(endpoints =>
{
endpoints.MapSubscribeHandler(); // This is the Dapr subscribe handler
endpoints.MapControllers();
});
If adding the subscribe handler did not resolve the problem, please open an issue on this repo and include the contents of your Startup.cs
file.
Option 2: The response contained JSON but it was empty (like []
)
If the JSON output was an empty array (like []
) then the subscribe handler is registered, but no topic endpoints were registered.
If you’re using a controller for pub/sub you should have a method like:
[Topic("pubsub", "deposit")]
[HttpPost("deposit")]
public async Task<ActionResult> Deposit(...)
// Using Pub/Sub routing
[Topic("pubsub", "transactions", "event.type == \"withdraw.v2\"", 1)]
[HttpPost("withdraw")]
public async Task<ActionResult> Withdraw(...)
In this example the Topic
and HttpPost
attributes are required, but other details might be different.
If you’re using routing for pub/sub you should have an endpoint like:
endpoints.MapPost("deposit", ...).WithTopic("pubsub", "deposit");
In this example the call to WithTopic(...)
is required but other details might be different.
After correcting this code and re-testing if the JSON output is still the empty array (like []
) then please open an issue on this repository and include the contents of Startup.cs
and your pub/sub endpoint.
Step 4: Verify endpoint reachability
In this step we’ll verify that the entries registered with pub/sub are reachable. The last step should have left you with some JSON output like the following:
[
{
"pubsubName": "pubsub",
"topic": "deposit",
"route": "deposit"
},
{
"pubsubName": "pubsub",
"topic": "deposit",
"routes": {
"rules": [
{
"match": "event.type == \"withdraw.v2\"",
"path": "withdraw"
}
]
}
}
]
Keep this output, as we’ll use the route
information to test the application.
- Start the application as you would normally (dapr run ...).
- Use curl at the command line (or another HTTP testing tool) to access one of the routes registered with a pub/sub endpoint.
Here’s an example command assuming your application’s listening port is 5000, and one of your pub/sub routes is withdraw
:
curl http://localhost:5000/withdraw -H 'Content-Type: application/json' -d '{}' -v
Here’s the output from running the above command against the sample:
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 5000 (#0)
> POST /withdraw HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.64.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 2
>
* upload completely sent off: 2 out of 2 bytes
< HTTP/1.1 400 Bad Request
< Date: Fri, 15 Jan 2021 22:53:27 GMT
< Content-Type: application/problem+json; charset=utf-8
< Server: Kestrel
< Transfer-Encoding: chunked
<
* Connection #0 to host localhost left intact
{"type":"https://tools.ietf.org/html/rfc7231#section-6.5.1","title":"One or more validation errors occurred.","status":400,"traceId":"|5e9d7eee-4ea66b1e144ce9bb.","errors":{"Id":["The Id field is required."]}}* Closing connection 0
Based on the HTTP 400 and JSON payload, this response indicates that the endpoint was reached but the request was rejected due to a validation error.
You should also look at the console output of the running application. This is example output with the Dapr logging headers stripped away for clarity.
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 POST http://localhost:5000/withdraw application/json 2
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint 'ControllerSample.Controllers.SampleController.Withdraw (ControllerSample)'
info: Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker[3]
Route matched with {action = "Withdraw", controller = "Sample"}. Executing controller action with signature System.Threading.Tasks.Task`1[Microsoft.AspNetCore.Mvc.ActionResult`1[ControllerSample.Account]] Withdraw(ControllerSample.Transaction, Dapr.Client.DaprClient) on controller ControllerSample.Controllers.SampleController (ControllerSample).
info: Microsoft.AspNetCore.Mvc.Infrastructure.ObjectResultExecutor[1]
Executing ObjectResult, writing value of type 'Microsoft.AspNetCore.Mvc.ValidationProblemDetails'.
info: Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker[2]
Executed action ControllerSample.Controllers.SampleController.Withdraw (ControllerSample) in 52.1211ms
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
Executed endpoint 'ControllerSample.Controllers.SampleController.Withdraw (ControllerSample)'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 157.056ms 400 application/problem+json; charset=utf-8
The log entry of primary interest is the one coming from routing:
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint 'ControllerSample.Controllers.SampleController.Withdraw (ControllerSample)'
This entry shows that:
- Routing executed
- Routing chose the
ControllerSample.Controllers.SampleController.Withdraw (ControllerSample)
endpoint
Now you have the information needed to troubleshoot this step.
Option 0: Routing chose the correct endpoint
If the information in the routing log entry is correct, then it means that in isolation your application is behaving correctly.
Example:
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint 'ControllerSample.Controllers.SampleController.Withdraw (ControllerSample)'
You might want to try using the Dapr CLI to send a pub/sub message directly and compare the logging output.
Example command:
dapr publish --pubsub pubsub --topic withdraw --data '{}'
If after doing this you still don’t understand the problem please open an issue on this repo and include the contents of your Startup.cs
.
Option 1: Routing did not execute
If you don’t see an entry for Microsoft.AspNetCore.Routing.EndpointMiddleware
in the logs, then it means that the request was handled by something other than routing. Usually the problem in this case is a misbehaving middleware. Other logs from the request might give you a clue to what’s happening.
If you need help understanding the problem please open an issue on this repo and include the contents of your Startup.cs
.
Option 2: Routing chose the wrong endpoint
If you see an entry for Microsoft.AspNetCore.Routing.EndpointMiddleware
in the logs, but it contains the wrong endpoint then it means that you’ve got a routing conflict. The endpoint that was chosen will appear in the logs so that should give you an idea of what’s causing the conflict.
If you need help understanding the problem please open an issue on this repo and include the contents of your Startup.cs
.
2.2 - Dapr Go SDK
A client library to help build Dapr applications in Go. This client supports all public Dapr APIs while focusing on idiomatic Go experiences and developer productivity.
Client
Use the Go Client SDK for invoking public Dapr APIs. Learn more about the Go Client SDK: https://v1-16.docs.dapr.io/developing-applications/sdks/go/go-client/
Service
Use the Dapr Service (Callback) SDK for Go to create services that will be invoked by Dapr. Learn more about the Go Service (Callback) SDK: https://v1-16.docs.dapr.io/developing-applications/sdks/go/go-service/
2.2.1 - Getting started with the Dapr client Go SDK
The Dapr client package allows you to interact with other Dapr applications from a Go application.
Prerequisites
- Dapr CLI installed
- Initialized Dapr environment
- Go installed
Import the client package
import "github.com/dapr/go-sdk/client"
Error handling
Dapr errors are based on gRPC’s richer error model. The following code shows an example of how you can parse and handle the error details:
if err != nil {
st := status.Convert(err)
fmt.Printf("Code: %s\n", st.Code().String())
fmt.Printf("Message: %s\n", st.Message())
for _, detail := range st.Details() {
switch t := detail.(type) {
case *errdetails.ErrorInfo:
// Handle ErrorInfo details
fmt.Printf("ErrorInfo:\n- Domain: %s\n- Reason: %s\n- Metadata: %v\n", t.GetDomain(), t.GetReason(), t.GetMetadata())
case *errdetails.BadRequest:
// Handle BadRequest details
fmt.Println("BadRequest:")
for _, violation := range t.GetFieldViolations() {
fmt.Printf("- Key: %s\n", violation.GetField())
fmt.Printf("- The %q field was wrong: %s\n", violation.GetField(), violation.GetDescription())
}
case *errdetails.ResourceInfo:
// Handle ResourceInfo details
fmt.Printf("ResourceInfo:\n- Resource type: %s\n- Resource name: %s\n- Owner: %s\n- Description: %s\n",
t.GetResourceType(), t.GetResourceName(), t.GetOwner(), t.GetDescription())
case *errdetails.Help:
// Handle Help details
fmt.Println("HelpInfo:")
for _, link := range t.GetLinks() {
fmt.Printf("- Url: %s\n", link.Url)
fmt.Printf("- Description: %s\n", link.Description)
}
default:
// Add cases for other types of details you expect
fmt.Printf("Unhandled error detail type: %v\n", t)
}
}
}
Building blocks
The Go SDK allows you to interface with all of the Dapr building blocks.
Service Invocation
To invoke a specific method on another service running with a Dapr sidecar, the Dapr client Go SDK provides two options:
Invoke a service without data:
resp, err := client.InvokeMethod(ctx, "app-id", "method-name", "post")
Invoke a service with data:
content := &dapr.DataContent{
ContentType: "application/json",
Data: []byte(`{ "id": "a123", "value": "demo", "valid": true }`),
}
resp, err = client.InvokeMethodWithContent(ctx, "app-id", "method-name", "post", content)
For a full guide on service invocation, visit How-To: Invoke a service.
Workflows
Workflows and their activities can be authored and managed using the Dapr Go SDK like so:
import (
...
"github.com/dapr/go-sdk/workflow"
...
)
func ExampleWorkflow(ctx *workflow.WorkflowContext) (any, error) {
var output string
input := "world"
if err := ctx.CallActivity(ExampleActivity, workflow.ActivityInput(input)).Await(&output); err != nil {
return nil, err
}
// Print output - "hello world"
fmt.Println(output)
return nil, nil
}
func ExampleActivity(ctx workflow.ActivityContext) (any, error) {
var input string
if err := ctx.GetInput(&input); err != nil {
return "", err
}
return fmt.Sprintf("hello %s", input), nil
}
func main() {
// Create a workflow worker
w, err := workflow.NewWorker()
if err != nil {
log.Fatalf("error creating worker: %v", err)
}
// Register the workflow
w.RegisterWorkflow(ExampleWorkflow)
// Register the activity
w.RegisterActivity(ExampleActivity)
// Start workflow runner
if err := w.Start(); err != nil {
log.Fatal(err)
}
// Create a workflow client
wfClient, err := workflow.NewClient()
if err != nil {
log.Fatal(err)
}
// Start a new workflow
id, err := wfClient.ScheduleNewWorkflow(context.Background(), "ExampleWorkflow")
if err != nil {
log.Fatal(err)
}
// Wait for the workflow to complete
metadata, err := wfClient.WaitForWorkflowCompletion(context.Background(), id)
if err != nil {
log.Fatal(err)
}
// Print workflow status post-completion
fmt.Println(metadata.RuntimeStatus)
// Shutdown Worker
w.Shutdown()
}
- For a more comprehensive guide on workflows, visit the workflow How-To guides.
- Visit the Go SDK Examples to jump into complete examples.
State Management
For simple use-cases, the Dapr client provides easy-to-use Save, Get, and Delete methods:
ctx := context.Background()
data := []byte("hello")
store := "my-store" // defined in the component YAML
// save state with the key key1, default options: strong, last-write
if err := client.SaveState(ctx, store, "key1", data, nil); err != nil {
panic(err)
}
// get state for key key1
item, err := client.GetState(ctx, store, "key1", nil)
if err != nil {
panic(err)
}
fmt.Printf("data [key:%s etag:%s]: %s", item.Key, item.Etag, string(item.Value))
// delete state for key key1
if err := client.DeleteState(ctx, store, "key1", nil); err != nil {
panic(err)
}
For more granular control, the Dapr Go client exposes the SetStateItem type, which can be used to gain more control over the state operations and allows multiple items to be saved at once:
item1 := &dapr.SetStateItem{
Key: "key1",
Etag: &ETag{
Value: "1",
},
Metadata: map[string]string{
"created-on": time.Now().UTC().String(),
},
Value: []byte("hello"),
Options: &dapr.StateOptions{
Concurrency: dapr.StateConcurrencyLastWrite,
Consistency: dapr.StateConsistencyStrong,
},
}
item2 := &dapr.SetStateItem{
Key: "key2",
Metadata: map[string]string{
"created-on": time.Now().UTC().String(),
},
Value: []byte("hello again"),
}
item3 := &dapr.SetStateItem{
Key: "key3",
Etag: &dapr.ETag{
Value: "1",
},
Value: []byte("hello again"),
}
if err := client.SaveBulkState(ctx, store, item1, item2, item3); err != nil {
panic(err)
}
Similarly, the GetBulkState method provides a way to retrieve multiple state items in a single operation:
keys := []string{"key1", "key2", "key3"}
items, err := client.GetBulkState(ctx, store, keys, nil, 100)
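A brief continuation (assuming the same client and keys as above) showing how the returned items can be inspected:
if err != nil {
    panic(err)
}
for _, item := range items {
    fmt.Printf("item [key:%s etag:%s]: %s\n", item.Key, item.Etag, string(item.Value))
}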
And the ExecuteStateTransaction method executes multiple upsert or delete operations transactionally:
ops := make([]*dapr.StateOperation, 0)
op1 := &dapr.StateOperation{
Type: dapr.StateOperationTypeUpsert,
Item: &dapr.SetStateItem{
Key: "key1",
Value: []byte(data),
},
}
op2 := &dapr.StateOperation{
Type: dapr.StateOperationTypeDelete,
Item: &dapr.SetStateItem{
Key: "key2",
},
}
ops = append(ops, op1, op2)
meta := map[string]string{}
err := client.ExecuteStateTransaction(ctx, store, meta, ops)
Retrieve, filter, and sort key/value data stored in your state store using QueryState:
// Define the query string
query := `{
"filter": {
"EQ": { "value.Id": "1" }
},
"sort": [
{
"key": "value.Balance",
"order": "DESC"
}
]
}`
// Use the client to query the state
queryResponse, err := client.QueryState(ctx, "querystore", query)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Got %d\n", len(queryResponse))
for _, account := range queryResponse {
var data Account
err := account.Unmarshal(&data)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Account: %s has %f\n", data.ID, data.Balance)
}
Note: Query state API is currently in alpha
For a full guide on state management, visit How-To: Save & get state.
Publish Messages
To publish data onto a topic, the Dapr Go client provides a simple method:
data := []byte(`{ "id": "a123", "value": "abcdefg", "valid": true }`)
if err := client.PublishEvent(ctx, "component-name", "topic-name", data); err != nil {
panic(err)
}
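If you have a struct rather than raw bytes, recent versions of the Go SDK let PublishEvent accept any value and serialize it for you; a sketch, where Order is a hypothetical type:
type Order struct {
    ID    string `json:"id"`
    Value string `json:"value"`
}

// Non-byte payloads are marshaled to JSON before publishing.
if err := client.PublishEvent(ctx, "component-name", "topic-name", Order{ID: "a123", Value: "demo"}); err != nil {
    panic(err)
}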
To publish multiple messages at once, the PublishEvents method can be used:
events := []string{"event1", "event2", "event3"}
res := client.PublishEvents(ctx, "component-name", "topic-name", events)
if res.Error != nil {
panic(res.Error)
}
For a full guide on pub/sub, visit How-To: Publish & subscribe.
Workflow
You can create workflows using the Go SDK. For example, start with a simple workflow activity:
func TestActivity(ctx workflow.ActivityContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return "", err
}
// Do something here
return "result", nil
}
Write a simple workflow function:
func TestWorkflow(ctx *workflow.WorkflowContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return nil, err
}
var output string
if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil {
return nil, err
}
if err := ctx.WaitForExternalEvent("testEvent", time.Second*60).Await(&output); err != nil {
return nil, err
}
if err := ctx.CreateTimer(time.Second).Await(nil); err != nil {
return nil, err
}
return output, nil
}
Then compose your application that will use the workflow you’ve created. Refer to the How-To: Author workflows guide for a full walk-through.
Try out the Go SDK workflow example.
Jobs
The Dapr client Go SDK allows you to schedule, get, and delete jobs. Jobs enable you to schedule work to be executed at specific times or intervals.
Scheduling a Job
To schedule a new job, use the ScheduleJobAlpha1 method:
import (
"google.golang.org/protobuf/types/known/anypb"
)
// Create job data
data, err := anypb.New(&YourDataStruct{Message: "Hello, Job!"})
if err != nil {
panic(err)
}
// Create a simple job using the builder pattern
job := client.NewJob("my-scheduled-job",
client.WithJobData(data),
client.WithJobDueTime("10s"), // Execute in 10 seconds
)
// Schedule the job
err = client.ScheduleJobAlpha1(ctx, job)
if err != nil {
panic(err)
}
Job with Schedule and Repeats
You can create recurring jobs using the Schedule field with cron expressions:
job := client.NewJob("recurring-job",
client.WithJobData(data),
client.WithJobSchedule("0 9 * * *"), // Run at 9 AM every day
client.WithJobRepeats(10), // Repeat 10 times
client.WithJobTTL("1h"), // Job expires after 1 hour
)
err = client.ScheduleJobAlpha1(ctx, job)
Job with Failure Policy
Configure how jobs should handle failures using failure policies:
// Constant retry policy with max retries and interval
job := client.NewJob("resilient-job",
client.WithJobData(data),
client.WithJobDueTime("2024-01-01T10:00:00Z"),
client.WithJobConstantFailurePolicy(),
client.WithJobConstantFailurePolicyMaxRetries(3),
client.WithJobConstantFailurePolicyInterval(30*time.Second),
)
err = client.ScheduleJobAlpha1(ctx, job)
For jobs that should not be retried on failure, use the drop policy:
job := client.NewJob("one-shot-job",
client.WithJobData(data),
client.WithJobDueTime("2024-01-01T10:00:00Z"),
client.WithJobDropFailurePolicy(),
)
err = client.ScheduleJobAlpha1(ctx, job)
Getting a Job
To get information about a scheduled job:
job, err := client.GetJobAlpha1(ctx, "my-scheduled-job")
if err != nil {
panic(err)
}
fmt.Printf("Job: %s, Schedule: %s, Repeats: %d\n",
job.Name, job.Schedule, job.Repeats)
Deleting a Job
To cancel a scheduled job:
err = client.DeleteJobAlpha1(ctx, "my-scheduled-job")
if err != nil {
panic(err)
}
For a full guide on jobs, visit How-To: Schedule and manage jobs.
Output Bindings
The Dapr Go client SDK provides two methods to invoke an operation on a Dapr-defined binding. Dapr supports input, output, and bidirectional bindings.
For simple, output-only binding:
in := &dapr.InvokeBindingRequest{ Name: "binding-name", Operation: "operation-name" }
err = client.InvokeOutputBinding(ctx, in)
To invoke a method with content and metadata:
in := &dapr.InvokeBindingRequest{
Name: "binding-name",
Operation: "operation-name",
Data: []byte("hello"),
Metadata: map[string]string{"k1": "v1", "k2": "v2"},
}
out, err := client.InvokeBinding(ctx, in)
For a full guide on output bindings, visit How-To: Use bindings.
Actors
Use the Dapr Go client SDK to write actors.
// MyActor represents an example actor type.
type MyActor struct {
actors.Actor
}
// MyActorMethod is a method that can be invoked on MyActor.
func (a *MyActor) MyActorMethod(ctx context.Context, req *actors.Message) (string, error) {
log.Printf("Received message: %s", req.Data)
return "Hello from MyActor!", nil
}
func main() {
// Create a Dapr client
daprClient, err := client.NewClient()
if err != nil {
log.Fatal("Error creating Dapr client: ", err)
}
// Register the actor type with Dapr
actors.RegisterActor(&MyActor{})
// Create an actor client
actorClient := actors.NewClient(daprClient)
// Create an actor ID
actorID := actors.NewActorID("myactor")
// Get or create the actor
err = actorClient.SaveActorState(context.Background(), "myactorstore", actorID, map[string]interface{}{"data": "initial state"})
if err != nil {
log.Fatal("Error saving actor state: ", err)
}
// Invoke a method on the actor
resp, err := actorClient.InvokeActorMethod(context.Background(), "myactorstore", actorID, "MyActorMethod", &actors.Message{Data: []byte("Hello from client!")})
if err != nil {
log.Fatal("Error invoking actor method: ", err)
}
log.Printf("Response from actor: %s", resp.Data)
// Wait for a few seconds before terminating
time.Sleep(5 * time.Second)
// Delete the actor
err = actorClient.DeleteActor(context.Background(), "myactorstore", actorID)
if err != nil {
log.Fatal("Error deleting actor: ", err)
}
// Close the Dapr client
daprClient.Close()
}
For a full guide on actors, visit the Actors building block documentation.
Secret Management
The Dapr client also provides access to the runtime secrets that can be backed by any number of secret stores (e.g. Kubernetes Secrets, HashiCorp Vault, or Azure Key Vault):
opt := map[string]string{
"version": "2",
}
secret, err := client.GetSecret(ctx, "store-name", "secret-name", opt)
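GetSecret returns the secret as a map of key/value pairs; a short continuation of the example above:
if err != nil {
    panic(err)
}
for k, v := range secret {
    fmt.Printf("%s = %s\n", k, v)
}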
Authentication
By default, Dapr relies on the network boundary to limit access to its API. If however the target Dapr API is configured with token-based authentication, users can configure the Go Dapr client with that token in two ways:
Environment Variable
If the DAPR_API_TOKEN environment variable is defined, Dapr will automatically use it to augment its Dapr API invocations to ensure authentication.
Explicit Method
In addition, users can also set the API token explicitly on any Dapr client instance. This approach is helpful in cases when the user code needs to create multiple clients for different Dapr API endpoints.
func main() {
client, err := dapr.NewClient()
if err != nil {
panic(err)
}
defer client.Close()
client.WithAuthToken("your-Dapr-API-token-here")
}
For a full guide on secrets, visit How-To: Retrieve secrets.
Distributed Lock
The Dapr client provides mutually exclusive access to a resource using a lock. With a lock, you can:
- Provide access to a database row, table, or an entire database
- Lock reading messages from a queue in a sequential manner
package main
import (
"context"
"fmt"
dapr "github.com/dapr/go-sdk/client"
)
func main() {
client, err := dapr.NewClient()
if err != nil {
panic(err)
}
defer client.Close()
ctx := context.Background()
resp, err := client.TryLockAlpha1(ctx, "lockstore", &dapr.LockRequest{
LockOwner: "random_id_abc123",
ResourceID: "my_file_name",
ExpiryInSeconds: 60,
})
if err != nil {
panic(err)
}
fmt.Println(resp.Success)
}
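To release the lock, the client exposes a corresponding UnlockAlpha1 method; a minimal sketch, reusing the same owner and resource IDs from above:
unlockResp, err := client.UnlockAlpha1(ctx, "lockstore", &dapr.UnlockRequest{
    LockOwner:  "random_id_abc123",
    ResourceID: "my_file_name",
})
if err != nil {
    panic(err)
}
fmt.Println(unlockResp.Status)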
For a full guide on distributed lock, visit How-To: Use a lock.
Configuration
With the Dapr client Go SDK, you can consume configuration items that are returned as read-only key/value pairs, and subscribe to configuration item changes.
Config Get
items, err := client.GetConfigurationItem(ctx, "example-config", "mykey")
if err != nil {
panic(err)
}
fmt.Printf("get config = %s\n", (*items).Value)
Config Subscribe
go func() {
if err := client.SubscribeConfigurationItems(ctx, "example-config", []string{"mySubscribeKey1", "mySubscribeKey2", "mySubscribeKey3"}, func(id string, items map[string]*dapr.ConfigurationItem) {
for k, v := range items {
fmt.Printf("get updated config key = %s, value = %s \n", k, v.Value)
}
subscribeID = id
}); err != nil {
panic(err)
}
}()
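When the subscription is no longer needed, it can be stopped using the subscription ID captured in the handler; a sketch using UnsubscribeConfigurationItems:
if err := client.UnsubscribeConfigurationItems(ctx, "example-config", subscribeID); err != nil {
    panic(err)
}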
For a full guide on configuration, visit How-To: Manage configuration from a store.
Cryptography
With the Dapr client Go SDK, you can use the high-level Encrypt and Decrypt cryptography APIs to encrypt and decrypt files while working on a stream of data.
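In the snippets below, rf can be any io.Reader that supplies the input stream; for example, a local file (plaintext.txt is a hypothetical name):
rf, err := os.Open("plaintext.txt")
if err != nil {
    panic(err)
}
defer rf.Close()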
To encrypt:
// Encrypt the data using Dapr
out, err := client.Encrypt(context.Background(), rf, dapr.EncryptOptions{
// These are the 3 required parameters
ComponentName: "mycryptocomponent",
KeyName: "mykey",
Algorithm: "RSA",
})
if err != nil {
panic(err)
}
To decrypt:
// Decrypt the data using Dapr
out, err := client.Decrypt(context.Background(), rf, dapr.DecryptOptions{
// Only required option is the component name
ComponentName: "mycryptocomponent",
})
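Both methods return an io.Reader; a short sketch that materializes the decrypted stream into memory:
plaintext, err := io.ReadAll(out)
if err != nil {
    panic(err)
}
fmt.Printf("decrypted %d bytes\n", len(plaintext))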
For a full guide on cryptography, visit How-To: Use the cryptography APIs.
Related links
2.2.2 - Getting started with the Dapr Service (Callback) SDK for Go
In addition to this Dapr API client, the Dapr Go SDK also provides a service package to bootstrap your Dapr callback services. These services can be developed in either gRPC or HTTP.
2.2.2.1 - Getting started with the Dapr HTTP Service SDK for Go
Prerequisite
Start by importing the Dapr Go service/http package:
daprd "github.com/dapr/go-sdk/service/http"
Creating and Starting Service
To create an HTTP Dapr service, first, create a Dapr callback instance with a specific address:
s := daprd.NewService(":8080")
Or with an address and an existing http.ServeMux, in case you want to combine existing server implementations:
mux := http.NewServeMux()
mux.HandleFunc("/", myOtherHandler)
s := daprd.NewServiceWithMux(":8080", mux)
Once you create a service instance, you can “attach” to that service any number of event, binding, and service invocation logic handlers as shown below. Once the logic is defined, you are ready to start the service:
if err := s.Start(); err != nil && err != http.ErrServerClosed {
log.Fatalf("error: %v", err)
}
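If you need to shut the service down from your own code (for example, on SIGTERM), the service interface also exposes Stop and GracefulStop methods; a minimal sketch, assuming the s instance from above:
// Elsewhere, e.g. in a signal handler:
if err := s.GracefulStop(); err != nil {
    log.Fatalf("error stopping service: %v", err)
}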
Event Handling
To handle events from a specific topic, you need to add at least one topic event handler before starting the service:
sub := &common.Subscription{
PubsubName: "messages",
Topic: "topic1",
Route: "/events",
}
err := s.AddTopicEventHandler(sub, eventHandler)
if err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
The handler method itself can be any method with the expected signature:
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
log.Printf("event - PubsubName:%s, Topic:%s, ID:%s, Data: %v", e.PubsubName, e.Topic, e.ID, e.Data)
// do something with the event
return true, nil
}
Optionally, you can use routing rules to send messages to different handlers based on the contents of the CloudEvent.
sub := &common.Subscription{
PubsubName: "messages",
Topic: "topic1",
Route: "/important",
Match: `event.type == "important"`,
Priority: 1,
}
err := s.AddTopicEventHandler(sub, importantHandler)
if err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
You can also create a custom type that implements the TopicEventSubscriber interface to handle your events:
type EventHandler struct {
// any data or references that your event handler needs.
}
func (h *EventHandler) Handle(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
log.Printf("event - PubsubName:%s, Topic:%s, ID:%s, Data: %v", e.PubsubName, e.Topic, e.ID, e.Data)
// do something with the event
return true, nil
}
The EventHandler can then be added using the AddTopicEventSubscriber method:
sub := &common.Subscription{
PubsubName: "messages",
Topic: "topic1",
}
eventHandler := &EventHandler{
// initialize any fields
}
if err := s.AddTopicEventSubscriber(sub, eventHandler); err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
Service Invocation Handler
To handle service invocations, you will need to add at least one service invocation handler before starting the service:
if err := s.AddServiceInvocationHandler("/echo", echoHandler); err != nil {
log.Fatalf("error adding invocation handler: %v", err)
}
The handler method itself can be any method with the expected signature:
func echoHandler(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
log.Printf("echo - ContentType:%s, Verb:%s, QueryString:%s, %+v", in.ContentType, in.Verb, in.QueryString, string(in.Data))
// do something with the invocation here
out = &common.Content{
Data: in.Data,
ContentType: in.ContentType,
DataTypeURL: in.DataTypeURL,
}
return
}
Binding Invocation Handler
To handle binding invocations, you will need to add at least one binding invocation handler before starting the service:
if err := s.AddBindingInvocationHandler("/run", runHandler); err != nil {
log.Fatalf("error adding binding handler: %v", err)
}
The handler method itself can be any method with the expected signature:
func runHandler(ctx context.Context, in *common.BindingEvent) (out []byte, err error) {
log.Printf("binding - Data:%v, Meta:%v", in.Data, in.Metadata)
// do something with the invocation here
return nil, nil
}
Related links
2.2.2.2 - Getting started with the Dapr gRPC Service SDK for Go
Prerequisite
Start by importing the Dapr Go service/grpc package:
daprd "github.com/dapr/go-sdk/service/grpc"
Creating and Starting Service
To create a gRPC Dapr service, first, create a Dapr callback instance with a specific address:
s, err := daprd.NewService(":50001")
if err != nil {
log.Fatalf("failed to start the server: %v", err)
}
Or with an address and an existing net.Listener, in case you want to combine an existing server listener:
list, err := net.Listen("tcp", "localhost:0")
if err != nil {
log.Fatalf("gRPC listener creation failed: %s", err)
}
s := daprd.NewServiceWithListener(list)
Once you create a service instance, you can “attach” to that service any number of event, binding, and service invocation logic handlers as shown below. Once the logic is defined, you are ready to start the service:
if err := s.Start(); err != nil {
log.Fatalf("server error: %v", err)
}
Event Handling
To handle events from a specific topic, you need to add at least one topic event handler before starting the service:
sub := &common.Subscription{
PubsubName: "messages",
Topic: "topic1",
}
if err := s.AddTopicEventHandler(sub, eventHandler); err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
The handler method itself can be any method with the expected signature:
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
log.Printf("event - PubsubName:%s, Topic:%s, ID:%s, Data: %v", e.PubsubName, e.Topic, e.ID, e.Data)
// do something with the event
return true, nil
}
Optionally, you can use routing rules to send messages to different handlers based on the contents of the CloudEvent.
sub := &common.Subscription{
PubsubName: "messages",
Topic: "topic1",
Route: "/important",
Match: `event.type == "important"`,
Priority: 1,
}
err := s.AddTopicEventHandler(sub, importantHandler)
if err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
You can also create a custom type that implements the TopicEventSubscriber interface to handle your events:
type EventHandler struct {
// any data or references that your event handler needs.
}
func (h *EventHandler) Handle(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
log.Printf("event - PubsubName:%s, Topic:%s, ID:%s, Data: %v", e.PubsubName, e.Topic, e.ID, e.Data)
// do something with the event
return true, nil
}
The EventHandler can then be added using the AddTopicEventSubscriber method:
sub := &common.Subscription{
PubsubName: "messages",
Topic: "topic1",
}
eventHandler := &EventHandler{
// initialize any fields
}
if err := s.AddTopicEventSubscriber(sub, eventHandler); err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
Service Invocation Handler
To handle service invocations, you will need to add at least one service invocation handler before starting the service:
if err := s.AddServiceInvocationHandler("echo", echoHandler); err != nil {
log.Fatalf("error adding invocation handler: %v", err)
}
The handler method itself can be any method with the expected signature:
func echoHandler(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
log.Printf("echo - ContentType:%s, Verb:%s, QueryString:%s, %+v", in.ContentType, in.Verb, in.QueryString, string(in.Data))
// do something with the invocation here
out = &common.Content{
Data: in.Data,
ContentType: in.ContentType,
DataTypeURL: in.DataTypeURL,
}
return
}
Binding Invocation Handler
To handle binding invocations, you will need to add at least one binding invocation handler before starting the service:
if err := s.AddBindingInvocationHandler("run", runHandler); err != nil {
log.Fatalf("error adding binding handler: %v", err)
}
The handler method itself can be any method with the expected signature:
func runHandler(ctx context.Context, in *common.BindingEvent) (out []byte, err error) {
log.Printf("binding - Data:%v, Meta:%v", in.Data, in.Metadata)
// do something with the invocation here
return nil, nil
}
Related links
2.3 - Dapr Java SDK
Dapr offers a variety of packages to help with the development of Java applications. Using them you can create Java clients, servers, and virtual actors with Dapr.
Prerequisites
- Dapr CLI installed
- Initialized Dapr environment
- JDK 11 or above - the published jars are compatible with Java 8.
- Install one of the following build tools for Java: Maven or Gradle.
Import Dapr’s Java SDK
Next, import the Java SDK packages to get started. Select your preferred build tool to learn how to import.
For a Maven project, add the following to your pom.xml file:
<project>
...
<dependencies>
...
<!-- Dapr's core SDK with all features, except Actors. -->
<dependency>
<groupId>io.dapr</groupId>
<artifactId>dapr-sdk</artifactId>
<version>1.15.0</version>
</dependency>
<!-- Dapr's SDK for Actors (optional). -->
<dependency>
<groupId>io.dapr</groupId>
<artifactId>dapr-sdk-actors</artifactId>
<version>1.15.0</version>
</dependency>
<!-- Dapr's SDK integration with SpringBoot (optional). -->
<dependency>
<groupId>io.dapr</groupId>
<artifactId>dapr-sdk-springboot</artifactId>
<version>1.15.0</version>
</dependency>
...
</dependencies>
...
</project>
For a Gradle project, add the following to your build.gradle file:
dependencies {
...
// Dapr's core SDK with all features, except Actors.
compile('io.dapr:dapr-sdk:1.15.0')
// Dapr's SDK for Actors (optional).
compile('io.dapr:dapr-sdk-actors:1.15.0')
// Dapr's SDK integration with SpringBoot (optional).
compile('io.dapr:dapr-sdk-springboot:1.15.0')
}
If you are also using Spring Boot, you may run into a common issue where the OkHttp version that the Dapr SDK uses conflicts with the one specified in the Spring Boot Bill of Materials. You can fix this by specifying a compatible OkHttp version in your project to match the version that the Dapr SDK uses:
<dependency>
<groupId>com.squareup.okhttp3</groupId>
<artifactId>okhttp</artifactId>
<!-- Use the OkHttp version that your Dapr SDK release depends on (for example, 4.12.0), not the Dapr SDK version. -->
<version>4.12.0</version>
</dependency>
Try it out
Put the Dapr Java SDK to the test. Walk through the Java quickstarts and tutorials to see Dapr in action:
SDK samples | Description |
---|---|
Quickstarts | Experience Dapr’s API building blocks in just a few minutes using the Java SDK. |
SDK samples | Clone the SDK repo to try out some examples and get started. |
Available packages
2.3.1 - AI
2.3.1.1 - How to: Author and manage Dapr Conversation AI in the Java SDK
As part of this demonstration, we will look at how to use the Conversation API to converse with a Large Language Model (LLM). The API returns the LLM’s response to the given prompt. With the provided conversation AI example, you will:
- Provide a prompt using the Conversation AI example
- Filter out Personally Identifiable Information (PII)
This example uses the default configuration from dapr init in self-hosted mode.
Prerequisites
- Dapr CLI and initialized environment.
- Java JDK 11 (or greater):
- Oracle JDK, or
- OpenJDK
- Apache Maven, version 3.x.
- Docker Desktop
Set up the environment
Clone the Java SDK repo and navigate into it.
git clone https://github.com/dapr/java-sdk.git
cd java-sdk
Run the following command to install the requirements for running the Conversation AI example with the Dapr Java SDK.
mvn clean install -DskipTests
From the Java SDK root directory, navigate to the examples directory.
cd examples
Run the Dapr sidecar.
dapr run --app-id conversationapp --dapr-grpc-port 51439 --dapr-http-port 3500 --app-port 8080
Now, Dapr is listening for HTTP requests at http://localhost:3500 and gRPC requests at http://localhost:51439.
Send a prompt with Personally identifiable information (PII) to the Conversation AI API
In the DemoConversationAI example, there are steps to send a prompt using the converse method of the DaprPreviewClient.
public class DemoConversationAI {
/**
* The main method to start the client.
*
* @param args Input arguments (unused).
*/
public static void main(String[] args) {
try (DaprPreviewClient client = new DaprClientBuilder().buildPreviewClient()) {
System.out.println("Sending the following input to LLM: Hello How are you? This is the my number 672-123-4567");
ConversationInput daprConversationInput = new ConversationInput("Hello How are you? "
+ "This is the my number 672-123-4567");
// Component name is the name provided in the metadata block of the conversation.yaml file.
Mono<ConversationResponse> responseMono = client.converse(new ConversationRequest("echo",
List.of(daprConversationInput))
.setContextId("contextId")
.setScrubPii(true).setTemperature(1.1d));
ConversationResponse response = responseMono.block();
System.out.printf("Conversation output: %s", response.getConversationOutputs().get(0).getResult());
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
Run the DemoConversationAI with the following command:
java -jar target/dapr-java-sdk-examples-exec.jar io.dapr.examples.conversation.DemoConversationAI
Sample output
== APP == Conversation output: Hello How are you? This is the my number <ISBN>
As shown in the output, the number sent to the API is obfuscated and returned in the form of <ISBN>.
Next steps
2.3.2 - Getting started with the Dapr client Java SDK
The Dapr client package allows you to interact with other Dapr applications from a Java application.
Note
If you haven’t already, try out one of the quickstarts for a quick walk-through on how to use the Dapr Java SDK with an API building block.
Prerequisites
Complete initial setup and import the Java SDK into your project
Initializing the client
You can initialize a Dapr client as so:
DaprClient client = new DaprClientBuilder().build();
This will connect to the default Dapr gRPC endpoint localhost:50001. For information about configuring the client using environment variables and system properties, see Properties.
Error Handling
Initially, errors in Dapr followed the standard gRPC error model. However, to provide more detailed and informative error messages, version 1.13 introduced an enhanced error model aligned with the gRPC richer error model. In response, the Java SDK extended DaprException to include the error details returned from Dapr.
Example of handling the DaprException and consuming the error details when using the Dapr Java SDK:
...
try {
client.publishEvent("unknown_pubsub", "mytopic", "mydata").block();
} catch (DaprException exception) {
System.out.println("Dapr exception's error code: " + exception.getErrorCode());
System.out.println("Dapr exception's message: " + exception.getMessage());
// DaprException now contains `getStatusDetails()` to include more details about the error from Dapr runtime.
System.out.println("Dapr exception's reason: " + exception.getStatusDetails().get(
DaprErrorDetails.ErrorDetailType.ERROR_INFO,
"reason",
TypeRef.STRING));
}
...
Building blocks
The Java SDK allows you to interface with all of the Dapr building blocks.
Invoke a service
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
try (DaprClient client = (new DaprClientBuilder()).build()) {
// invoke a 'GET' method (HTTP) skipping serialization: /say with a Mono<byte[]> return type
// for gRPC set HttpExtension.NONE parameters below
response = client.invokeMethod(SERVICE_TO_INVOKE, METHOD_TO_INVOKE, "{\"name\":\"World!\"}", HttpExtension.GET, byte[].class).block();
// invoke a 'POST' method (HTTP) skipping serialization: /say with a Mono<byte[]> return type
response = client.invokeMethod(SERVICE_TO_INVOKE, METHOD_TO_INVOKE, "{\"id\":\"100\", \"FirstName\":\"Value\", \"LastName\":\"Value\"}", HttpExtension.POST, byte[].class).block();
System.out.println(new String(response));
// invoke a 'POST' method (HTTP) with serialization: /employees with a Mono<Employee> return type
Employee newEmployee = new Employee("Nigel", "Guitarist");
Employee employeeResponse = client.invokeMethod(SERVICE_TO_INVOKE, "employees", newEmployee, HttpExtension.POST, Employee.class).block();
}
- For a full guide on service invocation visit How-To: Invoke a service.
- Visit Java SDK examples for code samples and instructions to try out service invocation
Save & get application state
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.State;
import reactor.core.publisher.Mono;
try (DaprClient client = (new DaprClientBuilder()).build()) {
// Save state
client.saveState(STATE_STORE_NAME, FIRST_KEY_NAME, myClass).block();
// Get state
State<MyClass> retrievedMessage = client.getState(STATE_STORE_NAME, FIRST_KEY_NAME, MyClass.class).block();
// Delete state
client.deleteState(STATE_STORE_NAME, FIRST_KEY_NAME).block();
}
- For a full list of state operations visit How-To: Get & save state.
- Visit Java SDK examples for code samples and instructions to try out state management
Publish & subscribe to messages
Publish messages
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.Metadata;
import static java.util.Collections.singletonMap;
try (DaprClient client = (new DaprClientBuilder()).build()) {
client.publishEvent(PUBSUB_NAME, TOPIC_NAME, message, singletonMap(Metadata.TTL_IN_SECONDS, MESSAGE_TTL_IN_SECONDS)).block();
}
Subscribe to messages
import com.fasterxml.jackson.databind.ObjectMapper;
import io.dapr.Topic;
import io.dapr.client.domain.BulkSubscribeAppResponse;
import io.dapr.client.domain.BulkSubscribeAppResponseEntry;
import io.dapr.client.domain.BulkSubscribeAppResponseStatus;
import io.dapr.client.domain.BulkSubscribeMessage;
import io.dapr.client.domain.BulkSubscribeMessageEntry;
import io.dapr.client.domain.CloudEvent;
import io.dapr.springboot.annotations.BulkSubscribe;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;
@RestController
public class SubscriberController {
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
@Topic(name = "testingtopic", pubsubName = "${myAppProperty:messagebus}")
@PostMapping(path = "/testingtopic")
public Mono<Void> handleMessage(@RequestBody(required = false) CloudEvent<?> cloudEvent) {
return Mono.fromRunnable(() -> {
try {
System.out.println("Subscriber got: " + cloudEvent.getData());
System.out.println("Subscriber got: " + OBJECT_MAPPER.writeValueAsString(cloudEvent));
} catch (Exception e) {
throw new RuntimeException(e);
}
});
}
@Topic(name = "testingtopic", pubsubName = "${myAppProperty:messagebus}",
rule = @Rule(match = "event.type == 'myevent.v2'", priority = 1))
@PostMapping(path = "/testingtopicV2")
public Mono<Void> handleMessageV2(@RequestBody(required = false) CloudEvent cloudEvent) {
return Mono.fromRunnable(() -> {
try {
System.out.println("Subscriber got: " + cloudEvent.getData());
System.out.println("Subscriber got: " + OBJECT_MAPPER.writeValueAsString(cloudEvent));
} catch (Exception e) {
throw new RuntimeException(e);
}
});
}
@BulkSubscribe()
@Topic(name = "testingtopicbulk", pubsubName = "${myAppProperty:messagebus}")
@PostMapping(path = "/testingtopicbulk")
public Mono<BulkSubscribeAppResponse> handleBulkMessage(
@RequestBody(required = false) BulkSubscribeMessage<CloudEvent<String>> bulkMessage) {
return Mono.fromCallable(() -> {
if (bulkMessage.getEntries().size() == 0) {
return new BulkSubscribeAppResponse(new ArrayList<BulkSubscribeAppResponseEntry>());
}
System.out.println("Bulk Subscriber received " + bulkMessage.getEntries().size() + " messages.");
List<BulkSubscribeAppResponseEntry> entries = new ArrayList<BulkSubscribeAppResponseEntry>();
for (BulkSubscribeMessageEntry<?> entry : bulkMessage.getEntries()) {
try {
System.out.printf("Bulk Subscriber message has entry ID: %s\n", entry.getEntryId());
CloudEvent<?> cloudEvent = (CloudEvent<?>) entry.getEvent();
System.out.printf("Bulk Subscriber got: %s\n", cloudEvent.getData());
entries.add(new BulkSubscribeAppResponseEntry(entry.getEntryId(), BulkSubscribeAppResponseStatus.SUCCESS));
} catch (Exception e) {
e.printStackTrace();
entries.add(new BulkSubscribeAppResponseEntry(entry.getEntryId(), BulkSubscribeAppResponseStatus.RETRY));
}
}
return new BulkSubscribeAppResponse(entries);
});
}
}
Bulk Publish Messages
Note: API is in Alpha stage
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprPreviewClient;
import io.dapr.client.domain.BulkPublishResponse;
import io.dapr.client.domain.BulkPublishResponseFailedEntry;
import java.util.ArrayList;
import java.util.List;
class Solution {
public void publishMessages() {
try (DaprPreviewClient client = (new DaprClientBuilder()).buildPreviewClient()) {
// Create a list of messages to publish
List<String> messages = new ArrayList<>();
for (int i = 0; i < NUM_MESSAGES; i++) {
String message = String.format("This is message #%d", i);
messages.add(message);
System.out.println("Going to publish message : " + message);
}
// Publish list of messages using the bulk publish API
BulkPublishResponse<String> res = client.publishEvents(PUBSUB_NAME, TOPIC_NAME, "text/plain", messages).block();
}
}
}
- For a full guide on publishing messages and subscribing to a topic, visit How-To: Publish & subscribe.
- Visit Java SDK examples for code samples and instructions to try out pub/sub
Interact with output bindings
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
try (DaprClient client = (new DaprClientBuilder()).build()) {
// sending a class with message; BINDING_OPERATION="create"
client.invokeBinding(BINDING_NAME, BINDING_OPERATION, myClass).block();
// sending a plain string
client.invokeBinding(BINDING_NAME, BINDING_OPERATION, message).block();
}
- For a full guide on output bindings visit How-To: Output bindings.
- Visit Java SDK examples for code samples and instructions to try out output bindings.
Interact with input bindings
import org.springframework.web.bind.annotation.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;
@RestController
@RequestMapping("/")
public class myClass {
private static final Logger log = LoggerFactory.getLogger(myClass.class);
@PostMapping(path = "/checkout")
public Mono<String> getCheckout(@RequestBody(required = false) byte[] body) {
return Mono.fromRunnable(() ->
log.info("Received Message: " + new String(body)));
}
}
- For a full guide on input bindings, visit How-To: Input bindings.
- Visit Java SDK examples for code samples and instructions to try out input bindings.
Retrieve secrets
import com.fasterxml.jackson.databind.ObjectMapper;
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import java.util.Map;
try (DaprClient client = (new DaprClientBuilder()).build()) {
Map<String, String> secret = client.getSecret(SECRET_STORE_NAME, secretKey).block();
System.out.println(JSON_SERIALIZER.writeValueAsString(secret));
}
- For a full guide on secrets visit How-To: Retrieve secrets.
- Visit Java SDK examples for code samples and instructions to try out retrieving secrets
Actors
An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern, which provides a single-threaded programming model and where actors are garbage collected when not in use. With Dapr’s implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability that the underlying platform provides.
import io.dapr.actors.ActorMethod;
import io.dapr.actors.ActorType;
import reactor.core.publisher.Mono;
@ActorType(name = "DemoActor")
public interface DemoActor {
void registerReminder();
@ActorMethod(name = "echo_message")
String say(String something);
void clock(String message);
@ActorMethod(returns = Integer.class)
Mono<Integer> incrementAndGet(int delta);
}
- For a full guide on actors visit How-To: Use virtual actors in Dapr.
- Visit Java SDK examples for code samples and instructions to try actors
Get & Subscribe to application configurations
Note this is a preview API and thus will only be accessible via the DaprPreviewClient interface and not the normal DaprClient interface
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprPreviewClient;
import io.dapr.client.domain.ConfigurationItem;
import io.dapr.client.domain.GetConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationRequest;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
try (DaprPreviewClient client = (new DaprClientBuilder()).buildPreviewClient()) {
// Get configuration for a single key
Mono<ConfigurationItem> item = client.getConfiguration(CONFIG_STORE_NAME, CONFIG_KEY);
// Get configurations for multiple keys
Mono<Map<String, ConfigurationItem>> items =
client.getConfiguration(CONFIG_STORE_NAME, CONFIG_KEY_1, CONFIG_KEY_2);
// Subscribe to configuration changes
Flux<SubscribeConfigurationResponse> outFlux = client.subscribeConfiguration(CONFIG_STORE_NAME, CONFIG_KEY_1, CONFIG_KEY_2);
outFlux.subscribe(configItems -> configItems.forEach(...));
// Unsubscribe from configuration changes
Mono<UnsubscribeConfigurationResponse> unsubscribe = client.unsubscribeConfiguration(SUBSCRIPTION_ID, CONFIG_STORE_NAME);
}
- For a full list of configuration operations visit How-To: Manage configuration from a store.
- Visit Java SDK examples for code samples and instructions to try out different configuration operations.
Query saved state
Note this is a preview API and thus will only be accessible via the DaprPreviewClient interface and not the normal DaprClient interface
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprPreviewClient;
import io.dapr.client.domain.QueryStateItem;
import io.dapr.client.domain.QueryStateRequest;
import io.dapr.client.domain.QueryStateResponse;
import io.dapr.client.domain.query.Query;
import io.dapr.client.domain.query.Sorting;
import io.dapr.client.domain.query.filters.EqFilter;
try (DaprClient client = builder.build(); DaprPreviewClient previewClient = builder.buildPreviewClient()) {
String searchVal = args.length == 0 ? "searchValue" : args[0];
// Create JSON data
Listing first = new Listing();
first.setPropertyType("apartment");
first.setId("1000");
...
Listing second = new Listing();
second.setPropertyType("row-house");
second.setId("1002");
...
Listing third = new Listing();
third.setPropertyType("apartment");
third.setId("1003");
...
Listing fourth = new Listing();
fourth.setPropertyType("apartment");
fourth.setId("1001");
...
Map<String, String> meta = new HashMap<>();
meta.put("contentType", "application/json");
// Save state
SaveStateRequest request = new SaveStateRequest(STATE_STORE_NAME).setStates(
new State<>("1", first, null, meta, null),
new State<>("2", second, null, meta, null),
new State<>("3", third, null, meta, null),
new State<>("4", fourth, null, meta, null)
);
client.saveBulkState(request).block();
// Create query and query state request
Query query = new Query()
.setFilter(new EqFilter<>("propertyType", "apartment"))
.setSort(Arrays.asList(new Sorting("id", Sorting.Order.DESC)));
QueryStateRequest queryRequest = new QueryStateRequest(STATE_STORE_NAME)
.setQuery(query);
// Use preview client to call query state API
QueryStateResponse<Listing> result = previewClient.queryState(queryRequest, Listing.class).block();
// View Query state response
System.out.println("Found " + result.getResults().size() + " items.");
for (QueryStateItem<Listing> item : result.getResults()) {
System.out.println("Key: " + item.getKey());
System.out.println("Data: " + item.getValue());
}
}
- For a full how-to on query state, visit How-To: Query state.
- Visit Java SDK examples for complete code sample.
Distributed lock
package io.dapr.examples.lock.grpc;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprPreviewClient;
import io.dapr.client.domain.LockRequest;
import io.dapr.client.domain.UnlockRequest;
import io.dapr.client.domain.UnlockResponseStatus;
import reactor.core.publisher.Mono;
public class DistributedLockGrpcClient {
private static final String LOCK_STORE_NAME = "lockstore";
/**
* Executes various methods to check the different APIs.
*
* @param args arguments
* @throws Exception throws Exception
*/
public static void main(String[] args) throws Exception {
try (DaprPreviewClient client = (new DaprClientBuilder()).buildPreviewClient()) {
System.out.println("Using preview client...");
tryLock(client);
unlock(client);
}
}
/**
* Trying to get lock.
*
* @param client DaprPreviewClient object
*/
public static void tryLock(DaprPreviewClient client) {
System.out.println("*******trying to get a free distributed lock********");
try {
LockRequest lockRequest = new LockRequest(LOCK_STORE_NAME, "resource1", "owner1", 5);
Mono<Boolean> result = client.tryLock(lockRequest);
System.out.println("Lock result -> " + (Boolean.TRUE.equals(result.block()) ? "SUCCESS" : "FAIL"));
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
/**
* Unlock a lock.
*
* @param client DaprPreviewClient object
*/
public static void unlock(DaprPreviewClient client) {
System.out.println("*******unlock a distributed lock********");
try {
UnlockRequest unlockRequest = new UnlockRequest(LOCK_STORE_NAME, "resource1", "owner1");
Mono<UnlockResponseStatus> result = client.unlock(unlockRequest);
System.out.println("Unlock result ->" + result.block().name());
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
}
- For a full how-to on distributed lock, visit How-To: Use a Lock
- Visit Java SDK examples for complete code sample.
Workflow
package io.dapr.examples.workflows;
import io.dapr.workflows.client.DaprWorkflowClient;
import io.dapr.workflows.client.WorkflowInstanceStatus;
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
/**
* For setup instructions, see the README.
*/
public class DemoWorkflowClient {
/**
* The main method.
*
* @param args Input arguments (unused).
* @throws InterruptedException If program has been interrupted.
*/
public static void main(String[] args) throws InterruptedException {
DaprWorkflowClient client = new DaprWorkflowClient();
try (client) {
String separatorStr = "*******";
System.out.println(separatorStr);
String instanceId = client.scheduleNewWorkflow(DemoWorkflow.class, "input data");
System.out.printf("Started new workflow instance with random ID: %s%n", instanceId);
System.out.println(separatorStr);
System.out.println("**GetInstanceMetadata:Running Workflow**");
WorkflowInstanceStatus workflowMetadata = client.getInstanceState(instanceId, true);
System.out.printf("Result: %s%n", workflowMetadata);
System.out.println(separatorStr);
System.out.println("**WaitForInstanceStart**");
try {
WorkflowInstanceStatus waitForInstanceStartResult =
client.waitForInstanceStart(instanceId, Duration.ofSeconds(60), true);
System.out.printf("Result: %s%n", waitForInstanceStartResult);
} catch (TimeoutException ex) {
System.out.printf("waitForInstanceStart has an exception:%s%n", ex);
}
System.out.println(separatorStr);
System.out.println("**SendExternalMessage**");
client.raiseEvent(instanceId, "TestEvent", "TestEventPayload");
System.out.println(separatorStr);
System.out.println("** Registering parallel Events to be captured by allOf(t1,t2,t3) **");
client.raiseEvent(instanceId, "event1", "TestEvent 1 Payload");
client.raiseEvent(instanceId, "event2", "TestEvent 2 Payload");
client.raiseEvent(instanceId, "event3", "TestEvent 3 Payload");
System.out.printf("Events raised for workflow with instanceId: %s\n", instanceId);
System.out.println(separatorStr);
System.out.println("** Registering Event to be captured by anyOf(t1,t2,t3) **");
client.raiseEvent(instanceId, "e2", "event 2 Payload");
System.out.printf("Event raised for workflow with instanceId: %s\n", instanceId);
System.out.println(separatorStr);
System.out.println("**WaitForInstanceCompletion**");
try {
WorkflowInstanceStatus waitForInstanceCompletionResult =
client.waitForInstanceCompletion(instanceId, Duration.ofSeconds(60), true);
System.out.printf("Result: %s%n", waitForInstanceCompletionResult);
} catch (TimeoutException ex) {
System.out.printf("waitForInstanceCompletion has an exception:%s%n", ex);
}
System.out.println(separatorStr);
System.out.println("**purgeInstance**");
boolean purgeResult = client.purgeInstance(instanceId);
System.out.printf("purgeResult: %s%n", purgeResult);
System.out.println(separatorStr);
System.out.println("**raiseEvent**");
String eventInstanceId = client.scheduleNewWorkflow(DemoWorkflow.class);
System.out.printf("Started new workflow instance with random ID: %s%n", eventInstanceId);
client.raiseEvent(eventInstanceId, "TestException", null);
System.out.printf("Event raised for workflow with instanceId: %s\n", eventInstanceId);
System.out.println(separatorStr);
String instanceToTerminateId = "terminateMe";
client.scheduleNewWorkflow(DemoWorkflow.class, null, instanceToTerminateId);
System.out.printf("Started new workflow instance with specified ID: %s%n", instanceToTerminateId);
TimeUnit.SECONDS.sleep(5);
System.out.println("Terminate this workflow instance manually before the timeout is reached");
client.terminateWorkflow(instanceToTerminateId, null);
System.out.println(separatorStr);
String restartingInstanceId = "restarting";
client.scheduleNewWorkflow(DemoWorkflow.class, null, restartingInstanceId);
System.out.printf("Started new workflow instance with ID: %s%n", restartingInstanceId);
System.out.println("Sleeping 30 seconds to restart the workflow");
TimeUnit.SECONDS.sleep(30);
System.out.println("**SendExternalMessage: RestartEvent**");
client.raiseEvent(restartingInstanceId, "RestartEvent", "RestartEventPayload");
System.out.println("Sleeping 30 seconds to terminate the eternal workflow");
TimeUnit.SECONDS.sleep(30);
client.terminateWorkflow(restartingInstanceId, null);
}
System.out.println("Exiting DemoWorkflowClient.");
System.exit(0);
}
}
- For a full guide on workflows, visit the workflow How-To guides.
- Learn more about how to use workflows with the Java SDK.
Sidecar APIs
Wait for sidecar
The DaprClient also provides a helper method to wait for the sidecar to become healthy (components only). When using this method, be sure to specify a timeout in milliseconds and call block() to wait for the result of the reactive operation.
// Wait for the Dapr sidecar to report healthy before attempting to use Dapr components.
try (DaprClient client = new DaprClientBuilder().build()) {
System.out.println("Waiting for Dapr sidecar ...");
client.waitForSidecar(10000).block(); // Specify the timeout in milliseconds
System.out.println("Dapr sidecar is ready.");
...
}
// Perform Dapr component operations here i.e. fetching secrets or saving state.
Shutdown the sidecar
try (DaprClient client = new DaprClientBuilder().build()) {
logger.info("Sending shutdown request.");
client.shutdown().block();
logger.info("Ensuring dapr has stopped.");
...
}
Learn more about the Dapr Java SDK packages available to add to your Java applications.
Related links
For a full list of SDK properties and how to configure them, visit Properties.
2.3.2.1 - Properties
Properties
The Dapr Java SDK provides a set of global properties that control the behavior of the SDK. These properties can be configured using environment variables or system properties. System properties can be set using the -D flag when running your Java application.
These properties affect the entire SDK, including clients and runtime. They control aspects such as:
- Sidecar connectivity (endpoints, ports)
- Security settings (TLS, API tokens)
- Performance tuning (timeouts, connection pools)
- Protocol settings (gRPC, HTTP)
- String encoding
Environment Variables
The following environment variables are available for configuring the Dapr Java SDK:
Sidecar Endpoints
When these variables are set, the client will automatically use them to connect to the Dapr sidecar.
Environment Variable | Description | Default |
---|---|---|
DAPR_GRPC_ENDPOINT | The gRPC endpoint for the Dapr sidecar | localhost:50001 |
DAPR_HTTP_ENDPOINT | The HTTP endpoint for the Dapr sidecar | localhost:3500 |
DAPR_GRPC_PORT | The gRPC port for the Dapr sidecar (legacy, DAPR_GRPC_ENDPOINT takes precedence) | 50001 |
DAPR_HTTP_PORT | The HTTP port for the Dapr sidecar (legacy, DAPR_HTTP_ENDPOINT takes precedence) | 3500 |
API Token
Environment Variable | Description | Default |
---|---|---|
DAPR_API_TOKEN | API token for authentication between app and Dapr sidecar. This is the same token used by the Dapr runtime for API authentication. For more details, see Dapr API token authentication and Environment variables reference. | null |
gRPC Configuration
TLS Settings
For secure gRPC communication, you can configure TLS settings using the following environment variables:
Environment Variable | Description | Default |
---|---|---|
DAPR_GRPC_TLS_INSECURE | When set to “true”, enables insecure TLS mode which still uses TLS but doesn’t verify certificates. This uses InsecureTrustManagerFactory to trust all certificates. This should only be used for testing or in secure environments. | false |
DAPR_GRPC_TLS_CA_PATH | Path to the CA certificate file. This is used for TLS connections to servers with self-signed certificates. | null |
DAPR_GRPC_TLS_CERT_PATH | Path to the TLS certificate file for client authentication. | null |
DAPR_GRPC_TLS_KEY_PATH | Path to the TLS private key file for client authentication. | null |
Keepalive Settings
Configure gRPC keepalive behavior using these environment variables:
Environment Variable | Description | Default |
---|---|---|
DAPR_GRPC_ENABLE_KEEP_ALIVE | Whether to enable gRPC keepalive | false |
DAPR_GRPC_KEEP_ALIVE_TIME_SECONDS | gRPC keepalive time in seconds | 10 |
DAPR_GRPC_KEEP_ALIVE_TIMEOUT_SECONDS | gRPC keepalive timeout in seconds | 5 |
DAPR_GRPC_KEEP_ALIVE_WITHOUT_CALLS | Whether to keep gRPC connection alive without calls | true |
Inbound Message Settings
Configure gRPC inbound message settings using these environment variables:
Environment Variable | Description | Default |
---|---|---|
DAPR_GRPC_MAX_INBOUND_MESSAGE_SIZE_BYTES | Dapr’s maximum inbound message size for gRPC in bytes. This value sets the maximum size of a gRPC message that can be received by the application | 4194304 |
DAPR_GRPC_MAX_INBOUND_METADATA_SIZE_BYTES | Dapr’s maximum inbound metadata size for gRPC in bytes | 8192 |
HTTP Client Configuration
These properties control the behavior of the HTTP client used for communication with the Dapr sidecar:
Environment Variable | Description | Default |
---|---|---|
DAPR_HTTP_CLIENT_READ_TIMEOUT_SECONDS | Timeout in seconds for HTTP client read operations. This is the maximum time to wait for a response from the Dapr sidecar. | 60 |
DAPR_HTTP_CLIENT_MAX_REQUESTS | Maximum number of concurrent HTTP requests that can be executed. Above this limit, requests will queue in memory waiting for running calls to complete. | 1024 |
DAPR_HTTP_CLIENT_MAX_IDLE_CONNECTIONS | Maximum number of idle connections in the HTTP connection pool. This is the maximum number of connections that can remain idle in the pool. | 128 |
API Configuration
These properties control the behavior of API calls made through the SDK:
Environment Variable | Description | Default |
---|---|---|
DAPR_API_MAX_RETRIES | Maximum number of retries for retriable exceptions when making API calls to the Dapr sidecar | 0 |
DAPR_API_TIMEOUT_MILLISECONDS | Timeout in milliseconds for API calls to the Dapr sidecar. A value of 0 means no timeout. | 0 |
String Encoding
Environment Variable | Description | Default |
---|---|---|
DAPR_STRING_CHARSET | Character set used for string encoding/decoding in the SDK. Must be a valid Java charset name. | UTF-8 |
System Properties
All environment variables can be set as system properties using the -D flag. Here is the complete list of available system properties:
System Property | Description | Default |
---|---|---|
dapr.sidecar.ip | IP address for the Dapr sidecar | localhost |
dapr.http.port | HTTP port for the Dapr sidecar | 3500 |
dapr.grpc.port | gRPC port for the Dapr sidecar | 50001 |
dapr.grpc.tls.cert.path | Path to the gRPC TLS certificate | null |
dapr.grpc.tls.key.path | Path to the gRPC TLS key | null |
dapr.grpc.tls.ca.path | Path to the gRPC TLS CA certificate | null |
dapr.grpc.tls.insecure | Whether to use insecure TLS mode | false |
dapr.grpc.endpoint | gRPC endpoint for remote sidecar | null |
dapr.grpc.enable.keep.alive | Whether to enable gRPC keepalive | false |
dapr.grpc.keep.alive.time.seconds | gRPC keepalive time in seconds | 10 |
dapr.grpc.keep.alive.timeout.seconds | gRPC keepalive timeout in seconds | 5 |
dapr.grpc.keep.alive.without.calls | Whether to keep gRPC connection alive without calls | true |
dapr.http.endpoint | HTTP endpoint for remote sidecar | null |
dapr.api.maxRetries | Maximum number of retries for API calls | 0 |
dapr.api.timeoutMilliseconds | Timeout for API calls in milliseconds | 0 |
dapr.api.token | API token for authentication | null |
dapr.string.charset | String encoding used in the SDK | UTF-8 |
dapr.http.client.readTimeoutSeconds | Timeout in seconds for HTTP client reads | 60 |
dapr.http.client.maxRequests | Maximum number of concurrent HTTP requests | 1024 |
dapr.http.client.maxIdleConnections | Maximum number of idle HTTP connections | 128 |
Property Resolution Order
Properties are resolved in the following order:
- Override values (if provided when creating a Properties instance)
- System properties (set via -D)
- Environment variables
- Default values
The SDK checks each source in order. If a value is invalid for the property type (e.g., non-numeric for a numeric property), the SDK will log a warning and try the next source. For example:
# Invalid boolean value - will be ignored
java -Ddapr.grpc.enable.keep.alive=not-a-boolean -jar myapp.jar
# Valid boolean value - will be used
export DAPR_GRPC_ENABLE_KEEP_ALIVE=false
In this case, the environment variable is used because the system property value is invalid. However, if both values are valid, the system property takes precedence:
# Valid boolean value - will be used
java -Ddapr.grpc.enable.keep.alive=true -jar myapp.jar
# Valid boolean value - will be ignored
export DAPR_GRPC_ENABLE_KEEP_ALIVE=false
Override values can be set using the DaprClientBuilder
in two ways:
- Using individual property overrides (recommended for most cases):
import io.dapr.config.Properties;
// Set a single property override
DaprClient client = new DaprClientBuilder()
.withPropertyOverride(Properties.GRPC_ENABLE_KEEP_ALIVE, "true")
.build();
// Or set multiple property overrides
DaprClient client = new DaprClientBuilder()
.withPropertyOverride(Properties.GRPC_ENABLE_KEEP_ALIVE, "true")
.withPropertyOverride(Properties.HTTP_CLIENT_READ_TIMEOUT_SECONDS, "120")
.build();
- Using a Properties instance (useful when you have many properties to set at once):
// Create a map of property overrides
Map<String, String> overrides = new HashMap<>();
overrides.put("dapr.grpc.enable.keep.alive", "true");
overrides.put("dapr.http.client.readTimeoutSeconds", "120");
// Create a Properties instance with overrides
Properties properties = new Properties(overrides);
// Use these properties when creating a client
DaprClient client = new DaprClientBuilder()
.withProperties(properties)
.build();
For most use cases, you’ll use system properties or environment variables. Override values are primarily used when you need different property values for different instances of the SDK in the same application.
Proxy Configuration
You can configure proxy settings for your Java application using system properties. These are standard Java system properties that are part of Java’s networking layer (java.net
package), not specific to Dapr. They are used by Java’s networking stack, including the HTTP client that Dapr’s SDK uses.
For detailed information about Java’s proxy configuration, including all available properties and their usage, see the Java Networking Properties documentation.
For example, here’s how to configure a proxy:
# Configure HTTP proxy - replace with your actual proxy server details
java -Dhttp.proxyHost=your-proxy-server.com -Dhttp.proxyPort=8080 -jar myapp.jar
# Configure HTTPS proxy - replace with your actual proxy server details
java -Dhttps.proxyHost=your-proxy-server.com -Dhttps.proxyPort=8443 -jar myapp.jar
Replace your-proxy-server.com
with your actual proxy server hostname or IP address, and adjust the port numbers to match your proxy server configuration.
These proxy settings will affect all HTTP/HTTPS connections made by your Java application, including connections to the Dapr sidecar.
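If you do not want traffic to the local sidecar to be proxied, Java's standard http.nonProxyHosts networking property can exclude it; a minimal sketch (note that many JDKs already exclude localhost by default):
# Exclude the local sidecar from proxying (http.nonProxyHosts is a standard Java property)
java -Dhttp.proxyHost=your-proxy-server.com -Dhttp.proxyPort=8080 \
     -Dhttp.nonProxyHosts="localhost|127.0.0.1" -jar myapp.jar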
2.3.3 - Jobs
2.3.3.1 - How to: Author and manage Dapr Jobs in the Java SDK
As part of this demonstration we will schedule a Dapr Job. The scheduled job will trigger an endpoint registered in the same app. With the provided jobs example, you will:
- Schedule a job (see the job scheduling example below)
- Register an endpoint for the Dapr sidecar to invoke at trigger time (see the endpoint registration section below)
This example uses the default configuration from dapr init
in self-hosted mode.
Prerequisites
- Dapr CLI and initialized environment.
- Java JDK 11 (or greater):
- Oracle JDK, or
- OpenJDK
- Apache Maven, version 3.x.
- Docker Desktop
Set up the environment
Clone the Java SDK repo and navigate into it.
git clone https://github.com/dapr/java-sdk.git
cd java-sdk
Run the following command to install the requirements for running the jobs example with the Dapr Java SDK.
mvn clean install -DskipTests
From the Java SDK root directory, navigate to the examples directory.
cd examples
Run the Dapr sidecar.
dapr run --app-id jobsapp --dapr-grpc-port 51439 --dapr-http-port 3500 --app-port 8080
Now, Dapr is listening for HTTP requests at http://localhost:3500 and internal Jobs gRPC requests at http://localhost:51439.
Schedule and Get a job
In the DemoJobsClient there are steps to schedule a job. Calling scheduleJob using the DaprPreviewClient will schedule a job with the Dapr runtime.
public class DemoJobsClient {
/**
* The main method of this app to schedule and get jobs.
*/
  public static void main(String[] args) throws Exception {
    // Property overrides matching the sidecar ports used above. The overrides map
    // is not shown in this excerpt of the example, so the exact values here are assumed.
    Map<Property<?>, String> overrides = Map.of(
        Properties.HTTP_PORT, "3500",
        Properties.GRPC_PORT, "51439"
    );

    try (DaprPreviewClient client = new DaprClientBuilder().withPropertyOverrides(overrides).buildPreviewClient()) {
// Schedule a job.
System.out.println("**** Scheduling a Job with name dapr-jobs-1 *****");
ScheduleJobRequest scheduleJobRequest = new ScheduleJobRequest("dapr-job-1",
JobSchedule.fromString("* * * * * *")).setData("Hello World!".getBytes());
client.scheduleJob(scheduleJobRequest).block();
System.out.println("**** Scheduling job dapr-jobs-1 completed *****");
}
}
}
Call getJob
to retrieve the job details that were previously created and scheduled.
client.getJob(new GetJobRequest("dapr-job-1")).block()
Run the DemoJobsClient
with the following command.
java -jar target/dapr-java-sdk-examples-exec.jar io.dapr.examples.jobs.DemoJobsClient
Sample output
**** Scheduling a Job with name dapr-jobs-1 *****
**** Scheduling job dapr-jobs-1 completed *****
**** Retrieving a Job with name dapr-jobs-1 *****
Set up an endpoint to be invoked when the job is triggered
The DemoJobsSpringApplication class starts a Spring Boot application that registers the endpoints specified in the JobsController class. This endpoint acts as a callback for the scheduled job requests.
@RestController
public class JobsController {
/**
* Handles jobs callback from Dapr.
*
* @param jobName name of the job.
* @param payload data from the job if payload exists.
* @return Empty Mono.
*/
@PostMapping("/job/{jobName}")
public Mono<Void> handleJob(@PathVariable("jobName") String jobName,
@RequestBody(required = false) byte[] payload) {
System.out.println("Job Name: " + jobName);
System.out.println("Job Payload: " + new String(payload));
return Mono.empty();
}
}
Parameters:
- jobName: The name of the triggered job.
- payload: Optional payload data associated with the job (as a byte array).
Run the Spring Boot application with the following command.
java -jar target/dapr-java-sdk-examples-exec.jar io.dapr.examples.jobs.DemoJobsSpringApplication
Sample output
Job Name: dapr-job-1
Job Payload: Hello World!
Delete a scheduled job
public class DemoJobsClient {
/**
* The main method of this app deletes a job that was previously scheduled.
*/
public static void main(String[] args) throws Exception {
try (DaprPreviewClient client = new DaprClientBuilder().buildPreviewClient()) {
// Delete a job.
System.out.println("**** Delete a Job with name dapr-jobs-1 *****");
client.deleteJob(new DeleteJobRequest("dapr-job-1")).block();
}
}
}
Next steps
2.3.4 - Workflow
2.3.4.1 - How to: Author and manage Dapr Workflow in the Java SDK
Let's create a Dapr workflow and invoke it using the console. With the provided workflow example, you will:
- Execute the workflow instance using the Java workflow worker
- Utilize the Java workflow client and API calls to start and terminate workflow instances
This example uses the default configuration from dapr init
in self-hosted mode.
Prerequisites
- Dapr CLI and initialized environment.
- Java JDK 11 (or greater):
- Oracle JDK, or
- OpenJDK
- Apache Maven, version 3.x.
- Verify you’re using the latest proto bindings
Set up the environment
Clone the Java SDK repo and navigate into it.
git clone https://github.com/dapr/java-sdk.git
cd java-sdk
Run the following command to install the requirements for running this workflow sample with the Dapr Java SDK.
mvn clean install
From the Java SDK root directory, navigate to the Dapr Workflow example.
cd examples
Run the DemoWorkflowWorker
The DemoWorkflowWorker
class registers an implementation of DemoWorkflow
in Dapr’s workflow runtime engine. In the DemoWorkflowWorker.java
file, you can find the DemoWorkflowWorker
class and the main
method:
public class DemoWorkflowWorker {
public static void main(String[] args) throws Exception {
// Register the Workflow with the runtime.
WorkflowRuntime.getInstance().registerWorkflow(DemoWorkflow.class);
System.out.println("Start workflow runtime");
WorkflowRuntime.getInstance().startAndBlock();
System.exit(0);
}
}
In the code above:
- WorkflowRuntime.getInstance().registerWorkflow() registers DemoWorkflow as a workflow in the Dapr Workflow runtime.
- WorkflowRuntime.getInstance().startAndBlock() builds and starts the engine within the Dapr Workflow runtime.
In the terminal, execute the following command to kick off the DemoWorkflowWorker
:
dapr run --app-id demoworkflowworker --resources-path ./components/workflows --dapr-grpc-port 50001 -- java -jar target/dapr-java-sdk-examples-exec.jar io.dapr.examples.workflows.DemoWorkflowWorker
Expected output
You're up and running! Both Dapr and your app logs will appear here.
...
== APP == Start workflow runtime
== APP == Sep 13, 2023 9:02:03 AM com.microsoft.durabletask.DurableTaskGrpcWorker startAndBlock
== APP == INFO: Durable Task worker is connecting to sidecar at 127.0.0.1:50001.
Run the DemoWorkflowClient
The DemoWorkflowClient
starts instances of workflows that have been registered with Dapr.
public class DemoWorkflowClient {
// ...
public static void main(String[] args) throws InterruptedException {
DaprWorkflowClient client = new DaprWorkflowClient();
try (client) {
String separatorStr = "*******";
System.out.println(separatorStr);
String instanceId = client.scheduleNewWorkflow(DemoWorkflow.class, "input data");
System.out.printf("Started new workflow instance with random ID: %s%n", instanceId);
System.out.println(separatorStr);
System.out.println("**GetInstanceMetadata:Running Workflow**");
WorkflowInstanceStatus workflowMetadata = client.getInstanceState(instanceId, true);
System.out.printf("Result: %s%n", workflowMetadata);
System.out.println(separatorStr);
System.out.println("**WaitForInstanceStart**");
try {
WorkflowInstanceStatus waitForInstanceStartResult =
client.waitForInstanceStart(instanceId, Duration.ofSeconds(60), true);
System.out.printf("Result: %s%n", waitForInstanceStartResult);
} catch (TimeoutException ex) {
System.out.printf("waitForInstanceStart has an exception:%s%n", ex);
}
System.out.println(separatorStr);
System.out.println("**SendExternalMessage**");
client.raiseEvent(instanceId, "TestEvent", "TestEventPayload");
System.out.println(separatorStr);
System.out.println("** Registering parallel Events to be captured by allOf(t1,t2,t3) **");
client.raiseEvent(instanceId, "event1", "TestEvent 1 Payload");
client.raiseEvent(instanceId, "event2", "TestEvent 2 Payload");
client.raiseEvent(instanceId, "event3", "TestEvent 3 Payload");
System.out.printf("Events raised for workflow with instanceId: %s\n", instanceId);
System.out.println(separatorStr);
System.out.println("** Registering Event to be captured by anyOf(t1,t2,t3) **");
client.raiseEvent(instanceId, "e2", "event 2 Payload");
System.out.printf("Event raised for workflow with instanceId: %s\n", instanceId);
System.out.println(separatorStr);
System.out.println("**WaitForInstanceCompletion**");
try {
WorkflowInstanceStatus waitForInstanceCompletionResult =
client.waitForInstanceCompletion(instanceId, Duration.ofSeconds(60), true);
System.out.printf("Result: %s%n", waitForInstanceCompletionResult);
} catch (TimeoutException ex) {
System.out.printf("waitForInstanceCompletion has an exception:%s%n", ex);
}
System.out.println(separatorStr);
System.out.println("**purgeInstance**");
boolean purgeResult = client.purgeInstance(instanceId);
System.out.printf("purgeResult: %s%n", purgeResult);
System.out.println(separatorStr);
System.out.println("**raiseEvent**");
String eventInstanceId = client.scheduleNewWorkflow(DemoWorkflow.class);
System.out.printf("Started new workflow instance with random ID: %s%n", eventInstanceId);
client.raiseEvent(eventInstanceId, "TestException", null);
System.out.printf("Event raised for workflow with instanceId: %s\n", eventInstanceId);
System.out.println(separatorStr);
String instanceToTerminateId = "terminateMe";
client.scheduleNewWorkflow(DemoWorkflow.class, null, instanceToTerminateId);
System.out.printf("Started new workflow instance with specified ID: %s%n", instanceToTerminateId);
TimeUnit.SECONDS.sleep(5);
System.out.println("Terminate this workflow instance manually before the timeout is reached");
client.terminateWorkflow(instanceToTerminateId, null);
System.out.println(separatorStr);
String restartingInstanceId = "restarting";
client.scheduleNewWorkflow(DemoWorkflow.class, null, restartingInstanceId);
System.out.printf("Started new workflow instance with ID: %s%n", restartingInstanceId);
System.out.println("Sleeping 30 seconds to restart the workflow");
TimeUnit.SECONDS.sleep(30);
System.out.println("**SendExternalMessage: RestartEvent**");
client.raiseEvent(restartingInstanceId, "RestartEvent", "RestartEventPayload");
System.out.println("Sleeping 30 seconds to terminate the eternal workflow");
TimeUnit.SECONDS.sleep(30);
client.terminateWorkflow(restartingInstanceId, null);
}
System.out.println("Exiting DemoWorkflowClient.");
System.exit(0);
}
}
In a second terminal window, start the workflow by running the following command:
java -jar target/dapr-java-sdk-examples-exec.jar io.dapr.examples.workflows.DemoWorkflowClient
Expected output
*******
Started new workflow instance with random ID: 0b4cc0d5-413a-4c1c-816a-a71fa24740d4
*******
**GetInstanceMetadata:Running Workflow**
Result: [Name: 'io.dapr.examples.workflows.DemoWorkflow', ID: '0b4cc0d5-413a-4c1c-816a-a71fa24740d4', RuntimeStatus: RUNNING, CreatedAt: 2023-09-13T13:02:30.547Z, LastUpdatedAt: 2023-09-13T13:02:30.699Z, Input: '"input data"', Output: '']
*******
**WaitForInstanceStart**
Result: [Name: 'io.dapr.examples.workflows.DemoWorkflow', ID: '0b4cc0d5-413a-4c1c-816a-a71fa24740d4', RuntimeStatus: RUNNING, CreatedAt: 2023-09-13T13:02:30.547Z, LastUpdatedAt: 2023-09-13T13:02:30.699Z, Input: '"input data"', Output: '']
*******
**SendExternalMessage**
*******
** Registering parallel Events to be captured by allOf(t1,t2,t3) **
Events raised for workflow with instanceId: 0b4cc0d5-413a-4c1c-816a-a71fa24740d4
*******
** Registering Event to be captured by anyOf(t1,t2,t3) **
Event raised for workflow with instanceId: 0b4cc0d5-413a-4c1c-816a-a71fa24740d4
*******
**WaitForInstanceCompletion**
Result: [Name: 'io.dapr.examples.workflows.DemoWorkflow', ID: '0b4cc0d5-413a-4c1c-816a-a71fa24740d4', RuntimeStatus: FAILED, CreatedAt: 2023-09-13T13:02:30.547Z, LastUpdatedAt: 2023-09-13T13:02:55.054Z, Input: '"input data"', Output: '']
*******
**purgeInstance**
purgeResult: true
*******
**raiseEvent**
Started new workflow instance with random ID: 7707d141-ebd0-4e54-816e-703cb7a52747
Event raised for workflow with instanceId: 7707d141-ebd0-4e54-816e-703cb7a52747
*******
Started new workflow instance with specified ID: terminateMe
Terminate this workflow instance manually before the timeout is reached
*******
Started new workflow instance with ID: restarting
Sleeping 30 seconds to restart the workflow
**SendExternalMessage: RestartEvent**
Sleeping 30 seconds to terminate the eternal workflow
Exiting DemoWorkflowClient.
What happened?
- When you ran dapr run, the workflow worker registered the workflow (DemoWorkflow) and its activities with the Dapr Workflow engine.
- When you ran java, the workflow client started the workflow instance with the following activities. You can follow along with the output in the terminal where you ran dapr run.
  - The workflow is started, raises three parallel tasks, and waits for them to complete.
  - The workflow client calls the activity and sends the "Hello Activity" message to the console.
  - The workflow times out and is purged.
  - The workflow client starts a new workflow instance with a random ID, uses another workflow instance called terminateMe to terminate it, and restarts it with the workflow called restarting.
  - The workflow client then exits.
Next steps
2.3.5 - Getting started with Dapr and Spring Boot
By combining Dapr and Spring Boot, we can create infrastructure-independent Java applications that can be deployed across different environments, supporting a wide range of on-premises and cloud provider services.
First, we will start with a simple integration covering the DaprClient and the Testcontainers integration, and then use the Spring and Spring Boot programming model to leverage the Dapr APIs under the hood. This helps teams remove dependencies such as the clients and drivers required to connect to environment-specific infrastructure (databases, key-value stores, message brokers, configuration/secret stores, etc.).
Note
The Spring Boot integration requires Spring Boot 3.x+ to work. It will not work with Spring Boot 2.x. The Spring Boot integration remains in alpha; we need your help and feedback to graduate it. Please join the #java-sdk Discord channel discussion or open issues in the dapr/java-sdk repository.
Adding the Dapr and Spring Boot integration to your project
If you already have a Spring Boot application, you can directly add the following dependencies to your project:
<dependency>
<groupId>io.dapr.spring</groupId>
<artifactId>dapr-spring-boot-starter</artifactId>
<version>0.15.0</version>
</dependency>
<dependency>
<groupId>io.dapr.spring</groupId>
<artifactId>dapr-spring-boot-starter-test</artifactId>
<version>0.15.0</version>
<scope>test</scope>
</dependency>
You can find the latest released version here.
By adding these dependencies, you can:
- Autowire a
DaprClient
to use inside your applications - Use the Spring Data and Messaging abstractions and programming model that use the Dapr APIs under the hood
- Improve your inner-development loop by relying on Testcontainers to bootstrap Dapr Control plane services and default components
Once these dependencies are in your application, you can rely on Spring Boot autoconfiguration to autowire a DaprClient
instance:
@Autowired
private DaprClient daprClient;
This will connect to the default Dapr gRPC endpoint localhost:50001
, requiring you to start Dapr outside of your application.
Note
By default, the following properties are preconfigured for DaprClient
and DaprWorkflowClient
:
dapr.client.httpEndpoint=http://localhost
dapr.client.httpPort=3500
dapr.client.grpcEndpoint=localhost
dapr.client.grpcPort=50001
dapr.client.apiToken=<your remote api token>
These values are used by default, but you can override them in your application.properties
file to suit your environment. Please note that both kebab case and camel case are supported.
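For example, a minimal application.properties sketch overriding the gRPC settings for a sidecar running on a non-default port (values illustrative; kebab case shown, camel case also works):
# application.properties (values illustrative)
dapr.client.grpc-endpoint=localhost
dapr.client.grpc-port=51439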
You can use the DaprClient
to interact with the Dapr APIs anywhere in your application, for example from inside a REST endpoint:
@RestController
public class DemoRestController {
@Autowired
private DaprClient daprClient;
@PostMapping("/store")
public void storeOrder(@RequestBody Order order){
daprClient.saveState("kvstore", order.orderId(), order).block();
}
}
record Order(String orderId, Integer amount){}
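Once the application and its sidecar are running, the endpoint can be exercised like any other Spring REST endpoint; for instance (assuming the application listens on port 8080):
curl -X POST http://localhost:8080/store \
  -H "Content-Type: application/json" \
  -d '{"orderId": "abc-123", "amount": 2}'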
If you want to avoid managing Dapr outside of your Spring Boot application, you can rely on Testcontainers to bootstrap Dapr beside your application for development purposes.
To do this we can create a test configuration that uses Testcontainers
to bootstrap all we need to develop our applications using the Dapr APIs.
Using Testcontainers and Dapr integrations, we let the @TestConfiguration
bootstrap Dapr for our applications.
Notice that for this example, we are configuring Dapr with a Statestore component called kvstore
that connects to an instance of PostgreSQL
also bootstrapped by Testcontainers.
@TestConfiguration(proxyBeanMethods = false)
public class DaprTestContainersConfig {
@Bean
@ServiceConnection
public DaprContainer daprContainer(Network daprNetwork, PostgreSQLContainer<?> postgreSQLContainer){
return new DaprContainer("daprio/daprd:1.16.0-rc.3")
.withAppName("producer-app")
.withNetwork(daprNetwork)
.withComponent(new Component("kvstore", "state.postgresql", "v1", STATE_STORE_PROPERTIES))
.withComponent(new Component("kvbinding", "bindings.postgresql", "v1", BINDING_PROPERTIES))
.dependsOn(postgreSQLContainer);
}
}
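The daprNetwork and postgreSQLContainer beans injected above are not shown in this excerpt; a minimal sketch of what they could look like (image tag and network alias are illustrative):
@Bean
public Network daprNetwork() {
  // Shared Docker network so the Dapr container can reach PostgreSQL
  return Network.newNetwork();
}

@Bean
public PostgreSQLContainer<?> postgreSQLContainer(Network daprNetwork) {
  return new PostgreSQLContainer<>("postgres:16-alpine")
      .withNetwork(daprNetwork)
      .withNetworkAliases("postgres");
}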
Inside the test classpath you can add a new Spring Boot Application that uses this configuration for tests:
@SpringBootApplication
public class TestProducerApplication {
public static void main(String[] args) {
SpringApplication
.from(ProducerApplication::main)
.with(DaprTestContainersConfig.class)
.run(args);
}
}
Now you can start your application with:
mvn spring-boot:test-run
Running this command will start the application, using the provided test configuration that includes the Testcontainers and Dapr integration. In the logs you should be able to see that the daprd
and the placement
service containers were started for your application.
Besides the previous configuration (DaprTestContainersConfig), your tests shouldn't test Dapr itself, only the REST endpoints that your application exposes.
Leveraging Spring & Spring Boot programming model with Dapr
The Java SDK allows you to interface with all of the Dapr building blocks.
But if you want to leverage the Spring and Spring Boot programming model you can use the dapr-spring-boot-starter
integration.
This includes implementations of Spring Data (KeyValueTemplate
and CrudRepository
) as well as a DaprMessagingTemplate
for producing and consuming messages
(similar to Spring Kafka, Spring Pulsar and Spring AMQP for RabbitMQ) and Dapr workflows.
Using Spring Data CrudRepository
and KeyValueTemplate
You can use well known Spring Data constructs relying on a Dapr-based implementation. With Dapr, you don’t need to add any infrastructure-related driver or client, making your Spring application lighter and decoupled from the environment where it is running.
Under the hood these implementations use the Dapr Statestore and Binding APIs.
Configuration parameters
With Spring Data abstractions you can configure which statestore and bindings will be used by Dapr to connect to the available infrastructure. This can be done by setting the following properties:
dapr.statestore.name=kvstore
dapr.statestore.binding=kvbinding
Then you can @Autowire
a KeyValueTemplate
or a CrudRepository
like this:
@RestController
@EnableDaprRepositories
public class OrdersRestController {
@Autowired
private OrderRepository repository;
@PostMapping("/orders")
public void storeOrder(@RequestBody Order order){
repository.save(order);
}
@GetMapping("/orders")
public Iterable<Order> getAll(){
return repository.findAll();
}
}
Where OrderRepository
is defined in an interface that extends the Spring Data CrudRepository
interface:
public interface OrderRepository extends CrudRepository<Order, String> {}
Notice that the @EnableDaprRepositories annotation does all the magic of wiring the Dapr APIs under the CrudRepository interface.
Because Dapr allows users to interact with different state stores from the same application, you need to provide the following beans as a Spring Boot @Configuration:
@Configuration
@EnableConfigurationProperties({DaprStateStoreProperties.class})
public class ProducerAppConfiguration {
@Bean
public KeyValueAdapterResolver keyValueAdapterResolver(DaprClient daprClient, ObjectMapper mapper, DaprStateStoreProperties daprStatestoreProperties) {
String storeName = daprStatestoreProperties.getName();
String bindingName = daprStatestoreProperties.getBinding();
return new DaprKeyValueAdapterResolver(daprClient, mapper, storeName, bindingName);
}
@Bean
public DaprKeyValueTemplate daprKeyValueTemplate(KeyValueAdapterResolver keyValueAdapterResolver) {
return new DaprKeyValueTemplate(keyValueAdapterResolver);
}
}
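With those beans in place, the DaprKeyValueTemplate can be autowired and used directly; a minimal sketch using standard Spring Data KeyValue operations (method body and values illustrative):
@Autowired
private DaprKeyValueTemplate keyValueTemplate;

public void saveAndQueryOrders() {
  // Writes go through the configured Dapr state store ("kvstore" above)
  keyValueTemplate.insert(new Order("abc-123", 2));
  // Reads use the configured binding to query the backing store
  Iterable<Order> orders = keyValueTemplate.findAll(Order.class);
  orders.forEach(System.out::println);
}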
Using Spring Messaging for producing and consuming events
Similar to Spring Kafka, Spring Pulsar and Spring AMQP you can use the DaprMessagingTemplate
to publish messages to the configured infrastructure. To consume messages you can use the @Topic
annotation (soon to be renamed to @DaprListener
).
To publish events/messages, you can autowire the DaprMessagingTemplate in your Spring application. For this example, we will be publishing Order events, sending messages to the topic named topic.
@Autowired
private DaprMessagingTemplate<Order> messagingTemplate;
@PostMapping("/orders")
public void storeOrder(@RequestBody Order order){
repository.save(order);
messagingTemplate.send("topic", order);
}
As with the CrudRepository, we need to specify which pub/sub broker we want to use to publish and consume our messages.
dapr.pubsub.name=pubsub
Because with Dapr you can connect to multiple PubSub brokers you need to provide the following bean to let Dapr know which PubSub broker your DaprMessagingTemplate
will use:
@Bean
public DaprMessagingTemplate<Order> messagingTemplate(DaprClient daprClient,
DaprPubSubProperties daprPubSubProperties) {
return new DaprMessagingTemplate<>(daprClient, daprPubSubProperties.getName());
}
Finally, because Dapr pub/sub requires a bidirectional connection between your application and Dapr, you need to expand your Testcontainers configuration with a few parameters:
@Bean
@ServiceConnection
public DaprContainer daprContainer(Network daprNetwork, PostgreSQLContainer<?> postgreSQLContainer, RabbitMQContainer rabbitMQContainer){
return new DaprContainer("daprio/daprd:1.16.0-rc.3")
.withAppName("producer-app")
.withNetwork(daprNetwork)
.withComponent(new Component("kvstore", "state.postgresql", "v1", STATE_STORE_PROPERTIES))
.withComponent(new Component("kvbinding", "bindings.postgresql", "v1", BINDING_PROPERTIES))
.withComponent(new Component("pubsub", "pubsub.rabbitmq", "v1", rabbitMqProperties))
.withAppPort(8080)
.withAppChannelAddress("host.testcontainers.internal")
.dependsOn(rabbitMQContainer)
.dependsOn(postgreSQLContainer);
}
Now, in the Dapr configuration we have included a pubsub
component that will connect to an instance of RabbitMQ started by Testcontainers.
We have also set two important parameters, .withAppPort(8080) and .withAppChannelAddress("host.testcontainers.internal"), which allow Dapr to call back into the application when a message is published to the broker.
To listen to events/messages, you need to expose an endpoint in the application that will be responsible for receiving the messages. If you expose a REST endpoint, you can use the @Topic annotation to let Dapr know where it needs to forward the events/messages to:
@PostMapping("subscribe")
@Topic(pubsubName = "pubsub", name = "topic")
public void subscribe(@RequestBody CloudEvent<Order> cloudEvent){
events.add(cloudEvent);
}
Upon bootstrapping your application, Dapr registers the subscription, so that messages are forwarded to the subscribe endpoint exposed by your application.
If you are writing tests for these subscribers you need to ensure that Testcontainers knows that your application will be running on port 8080, so containers started with Testcontainers know where your application is:
@BeforeAll
public static void setup(){
org.testcontainers.Testcontainers.exposeHostPorts(8080);
}
You can check and run the full example source code here.
Using Dapr Workflows with Spring Boot
Following the same approach that we used for Spring Data and Spring Messaging, the dapr-spring-boot-starter
brings Dapr Workflow integration for Spring Boot users.
To work with Dapr Workflows, you need to define and implement your workflows using code. The Dapr Spring Boot Starter makes your life easier by managing Workflow and WorkflowActivity implementations as Spring beans.
In order to enable the automatic bean discovery you can annotate your @SpringBootApplication
with the @EnableDaprWorkflows
annotation:
@SpringBootApplication
@EnableDaprWorkflows
public class MySpringBootApplication {}
By adding this annotation, all WorkflowActivity implementations will be automatically managed by Spring and registered with the workflow engine.
Because all WorkflowActivity implementations are managed beans, we can use Spring's @Autowired mechanism to inject any bean that our workflow activity might need, for example a RestTemplate:
public class MyWorkflowActivity implements WorkflowActivity {
  @Autowired
  private RestTemplate restTemplate;
  // ... the activity's run method can now use restTemplate ...
}
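A workflow definition can likewise be registered as a bean and discovered automatically; a minimal sketch, assuming the io.dapr.workflows Workflow interface (class and activity names are illustrative):
import io.dapr.workflows.Workflow;
import io.dapr.workflows.WorkflowStub;
import org.springframework.stereotype.Component;

@Component
public class MyWorkflow implements Workflow {
  @Override
  public WorkflowStub create() {
    return ctx -> {
      // Call the activity shown above and wait for its result
      String result = ctx.callActivity(MyWorkflowActivity.class.getName(), "input", String.class).await();
      ctx.complete(result);
    };
  }
}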
You can also autowire the DaprWorkflowClient to create new instances of your workflows.
@Autowired
private DaprWorkflowClient daprWorkflowClient;
This enables applications to schedule new workflow instances and raise events.
String instanceId = daprWorkflowClient.scheduleNewWorkflow(MyWorkflow.class, payload);
and
daprWorkflowClient.raiseEvent(instanceId, "MyEvent", event);
Check the Dapr Workflow documentation for more information about how to work with Dapr Workflows.
Next steps
Learn more about the Dapr Java SDK packages available to add to your Java applications.
Related links
2.4 - JavaScript SDK
A client library for building Dapr apps in JavaScript and TypeScript. This client abstracts the public Dapr APIs like service to service invocation, state management, pub/sub, secrets, and much more, and provides a simple, intuitive API for building applications.
Installation
To get started with the JavaScript SDK, install the Dapr JavaScript SDK package from NPM:
npm install --save @dapr/dapr
Structure
The Dapr JavaScript SDK contains two major components:
- DaprServer: to manage all Dapr sidecar to application communication.
- DaprClient: to manage all application to Dapr sidecar communication.
The above communication can be configured to use either of the gRPC or HTTP protocols.
Getting Started
To help you get started, check out the resources below:
2.4.1 - JavaScript Client SDK
Introduction
The Dapr Client allows you to communicate with the Dapr Sidecar and get access to its client facing features such as Publishing Events, Invoking Output Bindings, State Management, Secret Management, and much more.
Pre-requisites
- Dapr CLI installed
- Initialized Dapr environment
- Latest LTS version of Node.js or greater
Installing and importing Dapr’s JS SDK
- Install the SDK with
npm
:
npm i @dapr/dapr --save
- Import the libraries:
import { DaprClient, DaprServer, HttpMethod, CommunicationProtocolEnum } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server
// HTTP Example
const client = new DaprClient({ daprHost, daprPort });
// GRPC Example
const grpcClient = new DaprClient({ daprHost, daprPort, communicationProtocol: CommunicationProtocolEnum.GRPC });
Running
To run the examples, you can use two different protocols to interact with the Dapr sidecar: HTTP (default) or gRPC.
Using HTTP (default)
import { DaprClient } from "@dapr/dapr";
const client = new DaprClient({ daprHost, daprPort });
# Using dapr run
dapr run --app-id example-sdk --app-protocol http -- npm run start
# or, using npm script
npm run start:dapr-http
Using gRPC
Since HTTP is the default, you will have to adapt the communication protocol to use gRPC. You can do this by passing an extra argument to the client or server constructor.
import { DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
const client = new DaprClient({ daprHost, daprPort, communicationProtocol: CommunicationProtocolEnum.GRPC });
# Using dapr run
dapr run --app-id example-sdk --app-protocol grpc -- npm run start
# or, using npm script
npm run start:dapr-grpc
Environment Variables
Dapr Sidecar Endpoints
You can use the DAPR_HTTP_ENDPOINT
and DAPR_GRPC_ENDPOINT
environment variables to set the Dapr
Sidecar’s HTTP and gRPC endpoints respectively. When these variables are set, the daprHost
and daprPort
don’t have to be set in the options argument of the constructor, the client will parse them automatically
out of the provided endpoints.
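For instance, a minimal shell sketch (endpoint values illustrative):
# Point the SDK at the sidecar without passing constructor options
export DAPR_HTTP_ENDPOINT=http://localhost:3500
export DAPR_GRPC_ENDPOINT=localhost:50001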
import { DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
// Using HTTP, when DAPR_HTTP_ENDPOINT is set
const client = new DaprClient();
// Using gRPC, when DAPR_GRPC_ENDPOINT is set
const grpcClient = new DaprClient({ communicationProtocol: CommunicationProtocolEnum.GRPC });
If the environment variables are set, but daprHost
and daprPort
values are passed to the
constructor, the latter will take precedence over the environment variables.
Dapr API Token
You can use the DAPR_API_TOKEN
environment variable to set the Dapr API token. When this variable
is set, the daprApiToken
doesn’t have to be set in the options argument of the constructor,
the client will get it automatically.
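For example, a minimal sketch (the token value is illustrative; daprApiToken is the constructor option described above):
import { DaprClient } from "@dapr/dapr";

// Passed explicitly here; alternatively, export DAPR_API_TOKEN and omit the option
const client = new DaprClient({ daprHost, daprPort, daprApiToken: "my-api-token" });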
General
Increasing Body Size
You can increase the body size that is used by the application to communicate with the sidecar by using a DaprClient option.
import { DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
// Allow a body size of 10Mb to be used
// The default is 4Mb
const client = new DaprClient({
daprHost,
daprPort,
communicationProtocol: CommunicationProtocolEnum.HTTP,
maxBodySizeMb: 10,
});
Proxying Requests
By proxying requests, we can utilize the unique capabilities that Dapr brings with its sidecar architecture such as service discovery, logging, etc., enabling us to instantly “upgrade” our gRPC services. This feature of gRPC proxying was demonstrated in community call 41.
Creating a Proxy
To perform gRPC proxying, simply create a proxy by calling the client.proxy.create()
method:
// As always, create a client to our dapr sidecar
// this client takes care of making sure the sidecar is started, that we can communicate, ...
const clientSidecar = new DaprClient({ daprHost, daprPort, communicationProtocol: CommunicationProtocolEnum.GRPC });
// Create a Proxy that allows us to use our gRPC code
const clientProxy = await clientSidecar.proxy.create<GreeterClient>(GreeterClient);
We can now call the methods defined in our GreeterClient interface (which in this case comes from the gRPC Hello World example).
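For example, a call through the proxy might look like the following (illustrative; sayHello and its message shape come from the gRPC Hello World example, not from the Dapr SDK):
// The proxied stub is invoked exactly like a local gRPC client stub
clientProxy.sayHello({ name: "World" }, (err, response) => {
  if (err) throw err;
  console.log(response.message);
});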
Behind the Scenes (Technical Working)
- The gRPC service gets started in Dapr. We tell Dapr which port this gRPC server is running on through --app-port and give it a unique Dapr app ID with --app-id <APP_ID_HERE>
- We can now call the Dapr sidecar through a client that connects to the sidecar
- While calling the Dapr sidecar, we provide a metadata key named dapr-app-id with the value of our gRPC server booted in Dapr (e.g., server in our example)
- Dapr forwards the call to the configured gRPC server
Building blocks
The JavaScript Client SDK allows you to interface with all of the Dapr building blocks focusing on Client to Sidecar features.
Invocation API
Invoke a Service
import { DaprClient, HttpMethod } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({ daprHost, daprPort });
const serviceAppId = "my-app-id";
const serviceMethod = "say-hello";
// POST Request
const postResponse = await client.invoker.invoke(serviceAppId, serviceMethod, HttpMethod.POST, { hello: "world" });
// POST Request with headers
const postWithHeadersResponse = await client.invoker.invoke(
serviceAppId,
serviceMethod,
HttpMethod.POST,
{ hello: "world" },
{ headers: { "X-User-ID": "123" } },
);
// GET Request
const getResponse = await client.invoker.invoke(serviceAppId, serviceMethod, HttpMethod.GET);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full guide on service invocation visit How-To: Invoke a service.
State Management API
Save, Get and Delete application state
import { DaprClient } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({ daprHost, daprPort });
const serviceStoreName = "my-state-store-name";
// Save State
const saveResponse = await client.state.save(
serviceStoreName,
[
{
key: "first-key-name",
value: "hello",
metadata: {
foo: "bar",
},
},
{
key: "second-key-name",
value: "world",
},
],
{
metadata: {
ttlInSeconds: "3", // this should override the ttl in the state item
},
},
);
// Get State
const getResponse = await client.state.get(serviceStoreName, "first-key-name");
// Get Bulk State
const getBulkResponse = await client.state.getBulk(serviceStoreName, ["first-key-name", "second-key-name"]);
// State Transactions
await client.state.transaction(serviceStoreName, [
{
operation: "upsert",
request: {
key: "first-key-name",
value: "new-data",
},
},
{
operation: "delete",
request: {
key: "second-key-name",
},
},
]);
// Delete State
const deleteResponse = await client.state.delete(serviceStoreName, "first-key-name");
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full list of state operations visit How-To: Get & save state.
Query State API
import { DaprClient } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
  const client = new DaprClient({ daprHost, daprPort });
const res = await client.state.query("state-mongodb", {
filter: {
OR: [
{
EQ: { "person.org": "Dev Ops" },
},
{
AND: [
{
EQ: { "person.org": "Finance" },
},
{
IN: { state: ["CA", "WA"] },
},
],
},
],
},
sort: [
{
key: "state",
order: "DESC",
},
],
page: {
limit: 10,
},
});
console.log(res);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
PubSub API
Publish messages
import { DaprClient } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({ daprHost, daprPort });
const pubSubName = "my-pubsub-name";
const topic = "topic-a";
// Publish message to topic as text/plain
// Note, the content type is inferred from the message type unless specified explicitly
const response = await client.pubsub.publish(pubSubName, topic, "hello, world!");
// If publish fails, response contains the error
console.log(response);
// Publish message to topic as application/json
await client.pubsub.publish(pubSubName, topic, { hello: "world" });
// Publish a JSON message as plain text
const options = { contentType: "text/plain" };
await client.pubsub.publish(pubSubName, topic, { hello: "world" }, options);
// Publish message to topic as application/cloudevents+json
// You can also use the cloudevent SDK to create cloud events https://github.com/cloudevents/sdk-javascript
const cloudEvent = {
specversion: "1.0",
source: "/some/source",
type: "example",
id: "1234",
};
await client.pubsub.publish(pubSubName, topic, cloudEvent);
// Publish a cloudevent as raw payload
const rawOptions = { metadata: { rawPayload: true } };
await client.pubsub.publish(pubSubName, topic, "hello, world!", rawOptions);
// Publish multiple messages to a topic as text/plain
await client.pubsub.publishBulk(pubSubName, topic, ["message 1", "message 2", "message 3"]);
// Publish multiple messages to a topic as application/json
await client.pubsub.publishBulk(pubSubName, topic, [
{ hello: "message 1" },
{ hello: "message 2" },
{ hello: "message 3" },
]);
// Publish multiple messages with explicit bulk publish messages
const bulkPublishMessages = [
{
entryID: "entry-1",
contentType: "application/json",
event: { hello: "foo message 1" },
},
{
entryID: "entry-2",
contentType: "application/cloudevents+json",
event: { ...cloudEvent, data: "foo message 2", datacontenttype: "text/plain" },
},
{
entryID: "entry-3",
contentType: "text/plain",
event: "foo message 3",
},
];
await client.pubsub.publishBulk(pubSubName, topic, bulkPublishMessages);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
Bindings API
Invoke Output Binding
Output Bindings
import { DaprClient } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({ daprHost, daprPort });
const bindingName = "my-binding-name";
const bindingOperation = "create";
const message = { hello: "world" };
const response = await client.binding.send(bindingName, bindingOperation, message);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full guide on output bindings visit How-To: Use bindings.
Secret API
Retrieve secrets
import { DaprClient } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({ daprHost, daprPort });
const secretStoreName = "my-secret-store";
const secretKey = "secret-key";
// Retrieve a single secret from secret store
const secret = await client.secret.get(secretStoreName, secretKey);
// Retrieve all secrets from secret store
const bulkSecrets = await client.secret.getBulk(secretStoreName);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full guide on secrets visit How-To: Retrieve secrets.
Configuration API
Get Configuration Keys
import { DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
const daprHost = "127.0.0.1";
async function start() {
const client = new DaprClient({
daprHost,
daprPort: process.env.DAPR_GRPC_PORT,
communicationProtocol: CommunicationProtocolEnum.GRPC,
});
const config = await client.configuration.get("config-store", ["key1", "key2"]);
console.log(config);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
Sample output:
{
items: {
key1: { key: 'key1', value: 'foo', version: '', metadata: {} },
key2: { key: 'key2', value: 'bar2', version: '', metadata: {} }
}
}
Subscribe to Configuration Updates
import { DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
const daprHost = "127.0.0.1";
async function start() {
const client = new DaprClient({
daprHost,
daprPort: process.env.DAPR_GRPC_PORT,
communicationProtocol: CommunicationProtocolEnum.GRPC,
});
// Subscribes to config store changes for keys "key1" and "key2"
const stream = await client.configuration.subscribeWithKeys("config-store", ["key1", "key2"], async (data) => {
console.log("Subscribe received updates from config store: ", data);
});
// Wait for 60 seconds and unsubscribe.
await new Promise((resolve) => setTimeout(resolve, 60000));
stream.stop();
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
Sample output:
Subscribe received updates from config store: {
items: { key2: { key: 'key2', value: 'bar', version: '', metadata: {} } }
}
Subscribe received updates from config store: {
items: { key1: { key: 'key1', value: 'foobar', version: '', metadata: {} } }
}
Cryptography API
Support for the cryptography API is only available on the gRPC client in the JavaScript SDK.
import { createReadStream, createWriteStream } from "node:fs";
import { readFile, writeFile } from "node:fs/promises";
import { pipeline } from "node:stream/promises";
import { DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "50050"; // Dapr Sidecar Port of this example server
async function start() {
const client = new DaprClient({
daprHost,
daprPort,
communicationProtocol: CommunicationProtocolEnum.GRPC,
});
// Encrypt and decrypt a message using streams
await encryptDecryptStream(client);
// Encrypt and decrypt a message from a buffer
await encryptDecryptBuffer(client);
}
async function encryptDecryptStream(client: DaprClient) {
// First, encrypt the message
console.log("== Encrypting message using streams");
console.log("Encrypting plaintext.txt to ciphertext.out");
await pipeline(
createReadStream("plaintext.txt"),
await client.crypto.encrypt({
componentName: "crypto-local",
keyName: "symmetric256",
keyWrapAlgorithm: "A256KW",
}),
createWriteStream("ciphertext.out"),
);
// Decrypt the message
console.log("== Decrypting message using streams");
console.log("Encrypting ciphertext.out to plaintext.out");
await pipeline(
createReadStream("ciphertext.out"),
await client.crypto.decrypt({
componentName: "crypto-local",
}),
createWriteStream("plaintext.out"),
);
}
async function encryptDecryptBuffer(client: DaprClient) {
// Read "plaintext.txt" so we have some content
const plaintext = await readFile("plaintext.txt");
// First, encrypt the message
console.log("== Encrypting message using buffers");
const ciphertext = await client.crypto.encrypt(plaintext, {
componentName: "crypto-local",
keyName: "my-rsa-key",
keyWrapAlgorithm: "RSA",
});
await writeFile("test.out", ciphertext);
// Decrypt the message
console.log("== Decrypting message using buffers");
const decrypted = await client.crypto.decrypt(ciphertext, {
componentName: "crypto-local",
});
// The contents should be equal
if (plaintext.compare(decrypted) !== 0) {
throw new Error("Decrypted message does not match original message");
}
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full guide on cryptography visit How-To: Cryptography.
Distributed Lock API
Try Lock and Unlock APIs
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
import { LockStatus } from "@dapr/dapr/types/lock/UnlockResponse";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({ daprHost, daprPort });
const storeName = "redislock";
const resourceId = "resourceId";
const lockOwner = "owner1";
let expiryInSeconds = 1000;
console.log(`Acquiring lock on ${storeName}, ${resourceId} as owner: ${lockOwner}`);
const lockResponse = await client.lock.lock(storeName, resourceId, lockOwner, expiryInSeconds);
console.log(lockResponse);
console.log(`Unlocking on ${storeName}, ${resourceId} as owner: ${lockOwner}`);
const unlockResponse = await client.lock.unlock(storeName, resourceId, lockOwner);
console.log("Unlock API response: " + getResponseStatus(unlockResponse.status));
}
function getResponseStatus(status: LockStatus) {
switch (status) {
case LockStatus.Success:
return "Success";
case LockStatus.LockDoesNotExist:
return "LockDoesNotExist";
case LockStatus.LockBelongsToOthers:
return "LockBelongsToOthers";
default:
return "InternalError";
}
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full guide on distributed locks visit How-To: Use Distributed Locks.
Workflow API
Workflow management
import { DaprClient } from "@dapr/dapr";
async function start() {
const client = new DaprClient();
// Start a new workflow instance
const instanceId = await client.workflow.start("OrderProcessingWorkflow", {
Name: "Paperclips",
TotalCost: 99.95,
Quantity: 4,
});
console.log(`Started workflow instance ${instanceId}`);
// Get a workflow instance
const workflow = await client.workflow.get(instanceId);
console.log(
`Workflow ${workflow.workflowName}, created at ${workflow.createdAt.toUTCString()}, has status ${
workflow.runtimeStatus
}`,
);
console.log(`Additional properties: ${JSON.stringify(workflow.properties)}`);
// Pause a workflow instance
await client.workflow.pause(instanceId);
console.log(`Paused workflow instance ${instanceId}`);
// Resume a workflow instance
await client.workflow.resume(instanceId);
console.log(`Resumed workflow instance ${instanceId}`);
// Terminate a workflow instance
await client.workflow.terminate(instanceId);
console.log(`Terminated workflow instance ${instanceId}`);
// Purge a workflow instance
await client.workflow.purge(instanceId);
console.log(`Purged workflow instance ${instanceId}`);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
Related links
2.4.2 - JavaScript Server SDK
Introduction
The Dapr Server will allow you to receive communication from the Dapr Sidecar and get access to its server facing features such as: Subscribing to Events, Receiving Input Bindings, and much more.
Pre-requisites
- Dapr CLI installed
- Initialized Dapr environment
- Latest LTS version of Node or greater
Installing and importing Dapr’s JS SDK
- Install the SDK with
npm
:
npm i @dapr/dapr --save
- Import the libraries:
import { DaprServer, CommunicationProtocolEnum } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server
// HTTP Example
const server = new DaprServer({
serverHost,
serverPort,
communicationProtocol: CommunicationProtocolEnum.HTTP, // the DaprClient uses the same communication protocol as the DaprServer unless set explicitly in clientOptions
clientOptions: {
daprHost,
daprPort,
},
});
// GRPC Example
const grpcServer = new DaprServer({
serverHost,
serverPort,
communicationProtocol: CommunicationProtocolEnum.GRPC,
clientOptions: {
daprHost,
daprPort,
},
});
Running
To run the examples, you can use two different protocols to interact with the Dapr sidecar: HTTP (default) or gRPC.
Using HTTP (built-in express webserver)
import { DaprServer } from "@dapr/dapr";
const server = new DaprServer({
serverHost: appHost,
serverPort: appPort,
clientOptions: {
daprHost,
daprPort,
},
});
// initialize subscriptions, etc. before starting the server;
// the Dapr sidecar relies on these
await server.start();
# Using dapr run
dapr run --app-id example-sdk --app-port 50051 --app-protocol http -- npm run start
# or, using npm script
npm run start:dapr-http
ℹ️ Note: The app-port is required here, as this is where our server will need to bind to. Dapr will check for the application to bind to this port before finishing start-up.
Using HTTP (bring your own express webserver)
Instead of using the built-in web server for Dapr sidecar to application communication, you can also bring your own instance. This is helpful in scenarios like when you are building a REST API back-end and want to integrate Dapr directly in it.
Note, this is currently available for express
only.
💡 Note: when using a custom web server, the SDK will configure server properties like max body size and add new routes to it. The routes are chosen to avoid collisions with your application, but uniqueness is not guaranteed.
import { DaprServer, CommunicationProtocolEnum } from "@dapr/dapr";
import express from "express";
const myApp = express();
myApp.get("/my-custom-endpoint", (req, res) => {
res.send({ msg: "My own express app!" });
});
const daprServer = new DaprServer({
serverHost: "127.0.0.1", // App Host
serverPort: "50002", // App Port
serverHttp: myApp,
clientOptions: {
daprHost,
daprPort,
}
});
// Initialize subscriptions before the server starts, the Dapr sidecar uses it.
// This will also initialize the app server itself (removing the need for `app.listen` to be called).
await daprServer.start();
After configuring the above, you can call your custom endpoint as you normally would:
const res = await fetch(`http://127.0.0.1:50002/my-custom-endpoint`);
const json = await res.json();
Using gRPC
Since HTTP is the default, you will have to adapt the communication protocol to use gRPC. You can do this by passing an extra argument to the client or server constructor.
import { DaprServer, CommunicationProtocolEnum } from "@dapr/dapr";
const server = new DaprServer({
serverHost: appHost,
serverPort: appPort,
communicationProtocol: CommunicationProtocolEnum.GRPC,
clientOptions: {
daprHost,
daprPort,
},
});
// initialize subscriptions, etc. before starting the server;
// the Dapr sidecar relies on these
await server.start();
# Using dapr run
dapr run --app-id example-sdk --app-port 50051 --app-protocol grpc -- npm run start
# or, using npm script
npm run start:dapr-grpc
ℹ️ Note: The app-port is required here, as this is where our server will need to bind to. Dapr will check for the application to bind to this port before finishing start-up.
Building blocks
The JavaScript Server SDK allows you to interface with all of the Dapr building blocks focusing on Sidecar to App features.
Invocation API
Listen to an Invocation
import { DaprServer, DaprInvokerCallbackContent, HttpMethod } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server "
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
const callbackFunction = (data: DaprInvokerCallbackContent) => {
console.log("Received body: ", data.body);
console.log("Received metadata: ", data.metadata);
console.log("Received query: ", data.query);
console.log("Received headers: ", data.headers); // only available in HTTP
};
await server.invoker.listen("hello-world", callbackFunction, { method: HttpMethod.GET });
// You can now invoke the service with your app id and method "hello-world"
await server.start();
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full guide on service invocation visit How-To: Invoke a service.
PubSub API
Subscribe to messages
Subscribing to messages can be done in several ways to offer flexibility of receiving messages on your topics:
- Direct subscription through the subscribe method
- Direct subscription with options through the subscribeWithOptions method
- Subscription afterwards through the subscribeToRoute method
Each time an event arrives, we pass its body as data and the headers as headers, which can contain properties of the event publisher (e.g., a device ID from IoT Hub).
Dapr requires subscriptions to be set up on startup, but the JS SDK also allows event handlers to be added afterwards, giving you more programming flexibility.
An example is provided below.
import { DaprServer } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server "
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
const pubSubName = "my-pubsub-name";
const topic = "topic-a";
// Configure Subscriber for a Topic
// Method 1: Direct subscription through the `subscribe` method
await server.pubsub.subscribe(pubSubName, topic, async (data: any, headers: object) =>
console.log(`Received Data: ${JSON.stringify(data)} with headers: ${JSON.stringify(headers)}`),
);
// Method 2: Direct subscription with options through the `subscribeWithOptions` method
await server.pubsub.subscribeWithOptions(pubSubName, topic, {
callback: async (data: any, headers: object) =>
console.log(`Received Data: ${JSON.stringify(data)} with headers: ${JSON.stringify(headers)}`),
});
// Method 3: Subscription afterwards through the `subscribeToRoute` method
// Note: we use "default" as the route name, since when no route is passed (empty options) the SDK uses "default"
await server.pubsub.subscribeWithOptions("pubsub-redis", "topic-options-1", {});
server.pubsub.subscribeToRoute("pubsub-redis", "topic-options-1", "default", async (data: any, headers: object) => {
console.log(`Received Data: ${JSON.stringify(data)} with headers: ${JSON.stringify(headers)}`);
});
// Start the server
await server.start();
}
For a full list of state operations visit How-To: Publish & subscribe.
Subscribe with SUCCESS/RETRY/DROP status
Dapr supports status codes for retry logic to specify what should happen after a message gets processed.
⚠️ The JS SDK allows multiple callbacks on the same topic; status priority is handled as RETRY > DROP > SUCCESS, defaulting to SUCCESS.
⚠️ Make sure to configure resiliency in your application to handle RETRY messages.
In the JS SDK, we support these messages through the DaprPubSubStatusEnum enum. To ensure Dapr retries, we also configure a resiliency policy.
components/resiliency.yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
name: myresiliency
spec:
policies:
retries:
# Global Retry Policy for Inbound Component operations
DefaultComponentInboundRetryPolicy:
policy: constant
duration: 500ms
maxRetries: 10
targets:
components:
messagebus:
inbound:
retry: DefaultComponentInboundRetryPolicy
src/index.ts
import { DaprServer, DaprPubSubStatusEnum } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server "
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
const pubSubName = "my-pubsub-name";
const topic = "topic-a";
// Process a message successfully
await server.pubsub.subscribe(pubSubName, topic, async (data: any, headers: object) => {
return DaprPubSubStatusEnum.SUCCESS;
});
// Retry a message
// Note: this example will keep on retrying to deliver the message
// Note 2: each component can have their own retry configuration
// e.g., https://docs.dapr.io/reference/components-reference/supported-pubsub/setup-redis-pubsub/
await server.pubsub.subscribe(pubSubName, topic, async (data: any, headers: object) => {
return DaprPubSubStatusEnum.RETRY;
});
// Drop a message
await server.pubsub.subscribe(pubSubName, topic, async (data: any, headers: object) => {
return DaprPubSubStatusEnum.DROP;
});
// Start the server
await server.start();
}
Subscribe to messages based on rules
Dapr supports routing messages to different handlers (routes) based on rules. For example, if you are writing an application that needs to handle messages depending on their "type", you can route them to the handlers handlerType1 and handlerType2, with the default route being handlerDefault.
import { DaprServer } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server "
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
const pubSubName = "my-pubsub-name";
const topic = "topic-a";
// Configure Subscriber for a Topic with rule set
// Note: the default route and match patterns are optional
await server.pubsub.subscribe("pubsub-redis", "topic-1", {
default: "/default",
rules: [
{
match: `event.type == "my-type-1"`,
path: "/type-1",
},
{
match: `event.type == "my-type-2"`,
path: "/type-2",
},
],
});
// Add handlers for each route
server.pubsub.subscribeToRoute("pubsub-redis", "topic-1", "default", async (data) => {
console.log(`Handling Default`);
});
server.pubsub.subscribeToRoute("pubsub-redis", "topic-1", "type-1", async (data) => {
console.log(`Handling Type 1`);
});
server.pubsub.subscribeToRoute("pubsub-redis", "topic-1", "type-2", async (data) => {
console.log(`Handling Type 2`);
});
// Start the server
await server.start();
}
Subscribe with Wildcards
The popular wildcards * and + are supported (make sure to validate that the pub/sub component supports them) and can be subscribed to as follows:
import { DaprServer } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server "
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
const pubSubName = "my-pubsub-name";
// * Wildcard
await server.pubsub.subscribe(pubSubName, "/events/*", async (data: any, headers: object) =>
console.log(`Received Data: ${JSON.stringify(data)}`),
);
// + Wildcard
await server.pubsub.subscribe(pubSubName, "/events/+/temperature", async (data: any, headers: object) =>
console.log(`Received Data: ${JSON.stringify(data)}`),
);
// Start the server
await server.start();
}
Bulk Subscribe to messages
Bulk subscription is supported and is available through the following API:
- Bulk subscription through the subscribeBulk method: maxMessagesCount and maxAwaitDurationMs are optional; if not provided, default values for the component are used.
While listening for messages, the application receives messages from Dapr in bulk. However, like a regular subscription, the callback function receives a single message at a time, and the user can choose to return a DaprPubSubStatusEnum value to acknowledge successfully, retry, or drop the message. The default behavior is to return a success response.
Refer to this document for more details.
import { DaprServer, DaprPubSubStatusEnum } from "@dapr/dapr";
const pubSubName = "orderPubSub";
const topic = "topicbulk";
const daprHost = process.env.DAPR_HOST || "127.0.0.1";
const daprHttpPort = process.env.DAPR_HTTP_PORT || "3502";
const serverHost = process.env.SERVER_HOST || "127.0.0.1";
const serverPort = process.env.APP_PORT || "5001";
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort: daprHttpPort,
},
});
// Subscribe to messages in bulk with the default configuration.
await server.pubsub.subscribeBulk(pubSubName, topic, (data) =>
console.log("Subscriber received: " + JSON.stringify(data)),
);
// Subscribe to messages in bulk with a specific maxMessagesCount and maxAwaitDurationMs.
await server.pubsub.subscribeBulk(
pubSubName,
topic,
(data) => {
console.log("Subscriber received: " + JSON.stringify(data));
return DaprPubSubStatusEnum.SUCCESS; // If App doesn't return anything, the default is SUCCESS. App can also return RETRY or DROP based on the incoming message.
},
{
maxMessagesCount: 100,
maxAwaitDurationMs: 40,
},
);
// Start the server
await server.start();
}
Dead Letter Topics
Dapr supports dead letter topics. This means that when a message fails to be processed, it gets sent to a dead letter queue. E.g., when a message fails to be handled on /my-queue, it will be sent to /my-queue-failed.
You can use the following options with the subscribeWithOptions method:
- deadletterTopic: specify a dead letter topic name (note: if none is provided, one named deadletter is created)
- deadLetterCallback: the method to trigger as the handler for the dead letter
Implementing dead letter support in the JS SDK can be done by either:
- passing the deadLetterCallback as an option, or
- subscribing to a route manually with subscribeToRoute
An example is provided below:
import { DaprServer } from "@dapr/dapr";
const daprHost = "127.0.0.1"; // Dapr Sidecar Host
const daprPort = "3500"; // Dapr Sidecar Port of this Example Server
const serverHost = "127.0.0.1"; // App Host of this Example Server
const serverPort = "50051"; // App Port of this Example Server "
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
const pubSubName = "my-pubsub-name";
// Method 1 (direct subscribing through subscribeWithOptions)
await server.pubsub.subscribeWithOptions("pubsub-redis", "topic-options-5", {
callback: async (data: any) => {
throw new Error("Triggering Deadletter");
},
deadLetterCallback: async (data: any) => {
console.log("Handling Deadletter message");
},
});
// Method 2 (subscribe afterwards)
await server.pubsub.subscribeWithOptions("pubsub-redis", "topic-options-1", {
deadletterTopic: "my-deadletter-topic",
});
server.pubsub.subscribeToRoute("pubsub-redis", "topic-options-1", "default", async () => {
throw new Error("Triggering Deadletter");
});
server.pubsub.subscribeToRoute("pubsub-redis", "topic-options-1", "my-deadletter-topic", async () => {
console.log("Handling Deadletter message");
});
// Start server
await server.start();
}
Bindings API
Receive an Input Binding
import { DaprServer } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
const serverHost = "127.0.0.1";
const serverPort = "5051";
async function start() {
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
const bindingName = "my-binding-name";
await server.binding.receive(bindingName, async (data: any) =>
console.log(`Got Data: ${JSON.stringify(data)}`),
);
await server.start();
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
For a full guide on output bindings visit How-To: Use bindings.
Configuration API
💡 The configuration API is currently only available through gRPC
Getting a configuration value
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({
daprHost,
daprPort,
communicationProtocol: CommunicationProtocolEnum.GRPC,
});
const config = await client.configuration.get("config-redis", ["myconfigkey1", "myconfigkey2"]);
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
Subscribing to Key Changes
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
const daprHost = "127.0.0.1";
const daprPort = "3500";
async function start() {
const client = new DaprClient({
daprHost,
daprPort,
communicationProtocol: CommunicationProtocolEnum.GRPC,
});
const stream = await client.configuration.subscribeWithKeys("config-redis", ["myconfigkey1", "myconfigkey2"], () => {
// Received a key update
});
// When you are ready to stop listening, call the following
await stream.close();
}
start().catch((e) => {
console.error(e);
process.exit(1);
});
Related links
2.4.3 - JavaScript SDK for Actors
The Dapr actors package allows you to interact with Dapr virtual actors from a JavaScript application. The examples below demonstrate how to use the JavaScript SDK for interacting with virtual actors.
For a more in-depth overview of Dapr actors, visit the actors overview page.
Pre-requisites
- Dapr CLI installed
- Initialized Dapr environment
- Latest LTS version of Node or greater
- JavaScript NPM package installed
Scenario
The below code examples loosely describe the scenario of a Parking Garage Spot Monitoring System, which can be seen in this video by Mark Russinovich.
A parking garage consists of hundreds of parking spaces, where each parking space includes a sensor that provides updates to a centralized monitoring system. The parking space sensors (our actors) detect if a parking space is occupied or available.
To jump in and run this example yourself, clone the source code, which can be found in the JavaScript SDK examples directory.
Actor Interface
The actor interface defines the contract that is shared between the actor implementation and the clients calling the actor. In the example below, we create an interface for a parking garage sensor. Each sensor has two methods, carEnter and carLeave, which define the state of the parking space:
export default interface ParkingSensorInterface {
carEnter(): Promise<void>;
carLeave(): Promise<void>;
}
Actor Implementation
An actor implementation defines a class by extending the base type AbstractActor
and implementing the actor interface (ParkingSensorInterface
in this case).
The following code describes an actor implementation along with a few helper methods.
import { AbstractActor } from "@dapr/dapr";
import ParkingSensorInterface from "./ParkingSensorInterface";
export default class ParkingSensorImpl extends AbstractActor implements ParkingSensorInterface {
async carEnter(): Promise<void> {
// Implementation that updates state that this parking space is occupied.
}
async carLeave(): Promise<void> {
// Implementation that updates state that this parking space is available.
}
private async getInfo(): Promise<object> {
// Implementation of requesting an update from the parking space sensor.
}
/**
* @override
*/
async onActivate(): Promise<void> {
// Initialization logic called by AbstractActor.
}
}
Configuring Actor Runtime
To configure the actor runtime, use the DaprClientOptions. The various parameters and their default values are documented at How-to: Use virtual actors in Dapr.
Note: the timeouts and intervals should be formatted as time.ParseDuration strings.
import { CommunicationProtocolEnum, DaprClient, DaprServer } from "@dapr/dapr";
// Configure the actor runtime with the DaprClientOptions.
const clientOptions = {
daprHost: daprHost,
daprPort: daprPort,
communicationProtocol: CommunicationProtocolEnum.HTTP,
actor: {
actorIdleTimeout: "1h",
actorScanInterval: "30s",
drainOngoingCallTimeout: "1m",
drainRebalancedActors: true,
reentrancy: {
enabled: true,
maxStackDepth: 32,
},
remindersStoragePartitions: 0,
},
};
// Use the options when creating DaprServer and DaprClient.
// Note, DaprServer creates a DaprClient internally, which needs to be configured with clientOptions.
const server = new DaprServer({ serverHost, serverPort, clientOptions });
const client = new DaprClient(clientOptions);
Registering Actors
Initialize and register your actors by using the DaprServer
package:
import { DaprServer } from "@dapr/dapr";
import ParkingSensorImpl from "./ParkingSensorImpl";
const daprHost = "127.0.0.1";
const daprPort = "50000";
const serverHost = "127.0.0.1";
const serverPort = "50001";
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
},
});
await server.actor.init(); // Let the server know we need actors
server.actor.registerActor(ParkingSensorImpl); // Register the actor
await server.start(); // Start the server
// To get the registered actors, you can invoke `getRegisteredActors`:
const resRegisteredActors = await server.actor.getRegisteredActors();
console.log(`Registered Actors: ${JSON.stringify(resRegisteredActors)}`);
Invoking Actor Methods
After Actors are registered, create a Proxy object that implements ParkingSensorInterface
using the ActorProxyBuilder
. You can invoke the actor methods by directly calling methods on the Proxy object. Internally, it translates to making a network call to the Actor API and fetches the result back.
import { ActorId, DaprClient } from "@dapr/dapr";
import ParkingSensorImpl from "./ParkingSensorImpl";
import ParkingSensorInterface from "./ParkingSensorInterface";
const daprHost = "127.0.0.1";
const daprPort = "50000";
const client = new DaprClient({ daprHost, daprPort });
// Create a new actor builder. It can be used to create multiple actors of a type.
const builder = new ActorProxyBuilder<ParkingSensorInterface>(ParkingSensorImpl, client);
// Create a new actor instance.
const actor = builder.build(new ActorId("my-actor"));
// Or alternatively, use a random ID
// const actor = builder.build(ActorId.createRandomId());
// Invoke the method.
await actor.carEnter();
Using states with Actor
import { AbstractActor } from "@dapr/dapr";
import ActorStateInterface from "./ActorStateInterface";
export default class ActorStateExample extends AbstractActor implements ActorStateInterface {
async setState(key: string, value: any): Promise<void> {
await this.getStateManager().setState(key, value);
await this.getStateManager().saveState();
}
async removeState(key: string): Promise<void> {
await this.getStateManager().removeState(key);
await this.getStateManager().saveState();
}
// getState with a specific type
async getState<T>(key: string): Promise<T | null> {
return await this.getStateManager<T>().getState(key);
}
// getState without type as `any`
async getState(key: string): Promise<any> {
return await this.getStateManager().getState(key);
}
}
Actor Timers and Reminders
The JS SDK supports actors that can schedule periodic work on themselves by registering either timers or reminders. The main difference between timers and reminders is that the Dapr actor runtime does not retain any information about timers after deactivation, but persists reminders information using the Dapr actor state provider.
This distinction allows users to trade off between light-weight but stateless timers versus more resource-demanding but stateful reminders.
The scheduling interface of timers and reminders is identical. For a more in-depth look at the scheduling configurations, see the actors timers and reminders docs.
Actor Timers
// ...
const actor = builder.build(new ActorId("my-actor"));
// Register a timer
await actor.registerActorTimer(
"timer-id", // Unique name of the timer.
"cb-method", // Callback method to execute when timer is fired.
Temporal.Duration.from({ seconds: 2 }), // DueTime
Temporal.Duration.from({ seconds: 1 }), // Period
Temporal.Duration.from({ seconds: 1 }), // TTL
50, // State to be sent to timer callback.
);
// Delete the timer
await actor.unregisterActorTimer("timer-id");
Actor Reminders
// ...
const actor = builder.build(new ActorId("my-actor"));
// Register a reminder, it has a default callback: `receiveReminder`
await actor.registerActorReminder(
"reminder-id", // Unique name of the reminder.
Temporal.Duration.from({ seconds: 2 }), // DueTime
Temporal.Duration.from({ seconds: 1 }), // Period
Temporal.Duration.from({ seconds: 1 }), // TTL
100, // State to be sent to reminder callback.
);
// Delete the reminder
await actor.unregisterActorReminder("reminder-id");
To handle the callback, you need to override the default receiveReminder
implementation in your actor. For example, from our original actor implementation:
export default class ParkingSensorImpl extends AbstractActor implements ParkingSensorInterface {
// ...
/**
* @override
*/
async receiveReminder(state: any): Promise<void> {
// handle stuff here
}
// ...
}
For a full guide on actors, visit How-To: Use virtual actors in Dapr.
2.4.4 - Logging in JavaScript SDK
Introduction
The JavaScript SDK comes with an out-of-the-box Console-based logger. The SDK emits various internal logs to help users understand the chain of events and troubleshoot problems. A consumer of this SDK can customize the verbosity of the log, as well as provide their own implementation for the logger.
Configure log level
There are five levels of logging in descending order of importance: error, warn, info, verbose, and debug. Setting the log to a level means that the logger emits all logs that are at least as important as that level. For example, setting the level to verbose means that the SDK will not emit debug logs. The default log level is info.
Dapr Client
import { CommunicationProtocolEnum, DaprClient, LogLevel } from "@dapr/dapr";
// create a client instance with log level set to verbose.
const client = new DaprClient({
daprHost,
daprPort,
communicationProtocol: CommunicationProtocolEnum.HTTP,
logger: { level: LogLevel.Verbose },
});
For more details on how to use the Client, see JavaScript Client.
DaprServer
import { CommunicationProtocolEnum, DaprServer, LogLevel } from "@dapr/dapr";
// create a server instance with log level set to error.
const server = new DaprServer({
serverHost,
serverPort,
clientOptions: {
daprHost,
daprPort,
logger: { level: LogLevel.Error },
},
});
For more details on how to use the Server, see JavaScript Server.
Custom LoggerService
The JavaScript SDK uses the built-in Console for logging. To use a custom logger like Winston or Pino, you can implement the LoggerService interface.
Winston based logging:
Create a new implementation of LoggerService
.
import { LoggerService } from "@dapr/dapr";
import * as winston from "winston";
export class WinstonLoggerService implements LoggerService {
private logger;
constructor() {
this.logger = winston.createLogger({
transports: [new winston.transports.Console(), new winston.transports.File({ filename: "combined.log" })],
});
}
error(message: any, ...optionalParams: any[]): void {
this.logger.error(message, ...optionalParams);
}
warn(message: any, ...optionalParams: any[]): void {
this.logger.warn(message, ...optionalParams);
}
info(message: any, ...optionalParams: any[]): void {
this.logger.info(message, ...optionalParams);
}
verbose(message: any, ...optionalParams: any[]): void {
this.logger.verbose(message, ...optionalParams);
}
debug(message: any, ...optionalParams: any[]): void {
this.logger.debug(message, ...optionalParams);
}
}
Pass the new implementation to the SDK.
import { CommunicationProtocolEnum, DaprClient, LogLevel } from "@dapr/dapr";
import { WinstonLoggerService } from "./WinstonLoggerService";
const winstonLoggerService = new WinstonLoggerService();
// create a client instance with log level set to verbose and logger service as winston.
const client = new DaprClient({
daprHost,
daprPort,
communicationProtocol: CommunicationProtocolEnum.HTTP,
logger: { level: LogLevel.Verbose, service: winstonLoggerService },
});
2.4.5 - JavaScript Examples
Quickstarts
- State Management: Learn the concept of state management with Dapr
- Pub Sub: Create your own Publish / Subscribe system
- Secrets Management
- Service Invocation
Articles
Want your article added? Let us know! so we can add it below
xaviergeerinck.com - Create an Azure IoT Hub Stream Processor with Dapr
xaviergeerinck.com - Integrate Dapr with Nest.JS and the Dapr JS SDK
xaviergeerinck.com - Parking Garage Sensor implementation using Dapr Actors
xaviergeerinck.com - Tutorial - Creating an Email Microservice with Typescript and Dapr
xaviergeerinck.com - Dapr - Creating a User Login/Register Microservice
2.4.6 - How to: Author and manage Dapr Workflow in the JavaScript SDK
Let’s create a Dapr workflow and invoke it using the console. With the provided workflow example, you will:
- Execute the workflow instance using the JavaScript workflow worker
- Utilize the JavaScript workflow client and API calls to start and terminate workflow instances
This example uses the default configuration from dapr init
in self-hosted mode.
Prerequisites
- Verify you’re using the latest proto bindings
Set up the environment
Clone the JavaScript SDK repo and navigate into it.
git clone https://github.com/dapr/js-sdk
cd js-sdk
From the JavaScript SDK root directory, navigate to the Dapr Workflow example.
cd examples/workflow/authoring
Run the following command to install the requirements for running this workflow sample with the Dapr JavaScript SDK.
npm install
Run the activity-sequence.ts
The activity-sequence
file registers a workflow and an activity with the Dapr Workflow runtime. The workflow is a sequence of activities that are executed in order. We use DaprWorkflowClient to schedule a new workflow instance and wait for it to complete.
const daprHost = "localhost";
const daprPort = "50001";
const workflowClient = new DaprWorkflowClient({
daprHost,
daprPort,
});
const workflowRuntime = new WorkflowRuntime({
daprHost,
daprPort,
});
const hello = async (_: WorkflowActivityContext, name: string) => {
return `Hello ${name}!`;
};
const sequence: TWorkflow = async function* (ctx: WorkflowContext): any {
const cities: string[] = [];
const result1 = yield ctx.callActivity(hello, "Tokyo");
cities.push(result1);
const result2 = yield ctx.callActivity(hello, "Seattle");
cities.push(result2);
const result3 = yield ctx.callActivity(hello, "London");
cities.push(result3);
return cities;
};
workflowRuntime.registerWorkflow(sequence).registerActivity(hello);
// Wrap the worker startup in a try-catch block to handle any errors during startup
try {
await workflowRuntime.start();
console.log("Workflow runtime started successfully");
} catch (error) {
console.error("Error starting workflow runtime:", error);
}
// Schedule a new orchestration
try {
const id = await workflowClient.scheduleNewWorkflow(sequence);
console.log(`Orchestration scheduled with ID: ${id}`);
// Wait for orchestration completion
const state = await workflowClient.waitForWorkflowCompletion(id, undefined, 30);
console.log(`Orchestration completed! Result: ${state?.serializedOutput}`);
} catch (error) {
console.error("Error scheduling or waiting for orchestration:", error);
}
In the code above:
- workflowRuntime.registerWorkflow(sequence) registers sequence as a workflow in the Dapr Workflow runtime.
- await workflowRuntime.start() builds and starts the engine within the Dapr Workflow runtime.
- await workflowClient.scheduleNewWorkflow(sequence) schedules a new workflow instance with the Dapr Workflow runtime.
- await workflowClient.waitForWorkflowCompletion(id, undefined, 30) waits for the workflow instance to complete.
In the terminal, execute the following command to kick off the activity-sequence.ts
:
npm run start:dapr:activity-sequence
Expected output
You're up and running! Both Dapr and your app logs will appear here.
...
== APP == Orchestration scheduled with ID: dc040bea-6436-4051-9166-c9294f9d2201
== APP == Waiting 30 seconds for instance dc040bea-6436-4051-9166-c9294f9d2201 to complete...
== APP == Received "Orchestrator Request" work item with instance id 'dc040bea-6436-4051-9166-c9294f9d2201'
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Rebuilding local state with 0 history event...
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, EXECUTIONSTARTED=1]
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Waiting for 1 task(s) and 0 event(s) to complete...
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Returning 1 action(s)
== APP == Received "Activity Request" work item
== APP == Activity hello completed with output "Hello Tokyo!" (14 chars)
== APP == Received "Orchestrator Request" work item with instance id 'dc040bea-6436-4051-9166-c9294f9d2201'
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Rebuilding local state with 3 history event...
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Waiting for 1 task(s) and 0 event(s) to complete...
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Returning 1 action(s)
== APP == Received "Activity Request" work item
== APP == Activity hello completed with output "Hello Seattle!" (16 chars)
== APP == Received "Orchestrator Request" work item with instance id 'dc040bea-6436-4051-9166-c9294f9d2201'
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Rebuilding local state with 6 history event...
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Waiting for 1 task(s) and 0 event(s) to complete...
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Returning 1 action(s)
== APP == Received "Activity Request" work item
== APP == Activity hello completed with output "Hello London!" (15 chars)
== APP == Received "Orchestrator Request" work item with instance id 'dc040bea-6436-4051-9166-c9294f9d2201'
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Rebuilding local state with 9 history event...
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Orchestration completed with status COMPLETED
== APP == dc040bea-6436-4051-9166-c9294f9d2201: Returning 1 action(s)
INFO[0006] dc040bea-6436-4051-9166-c9294f9d2201: 'sequence' completed with a COMPLETED status. app_id=activity-sequence-workflow instance=kaibocai-devbox scope=wfengine.backend type=log ver=1.12.3
== APP == Instance dc040bea-6436-4051-9166-c9294f9d2201 completed
== APP == Orchestration completed! Result: ["Hello Tokyo!","Hello Seattle!","Hello London!"]
Next steps
2.5 - Dapr PHP SDK
Dapr offers an SDK to help with the development of PHP applications. Using it, you can create PHP clients, servers, and virtual actors with Dapr.
Setting up
Prerequisites
Optional Prerequisites
Initialize your project
In a directory where you want to create your service, run composer init
and answer the questions.
Install with composer require dapr/php-sdk
and any other dependencies you may wish to use.
Configure your service
Create a config.php, copying the contents below:
<?php
use Dapr\Actors\Generators\ProxyFactory;
use Dapr\Middleware\Defaults\{Response\ApplicationJson,Tracing};
use Psr\Log\LogLevel;
use function DI\{env,get};
return [
// set the log level
'dapr.log.level' => LogLevel::WARNING,
// Generate a new proxy on each request - recommended for development
'dapr.actors.proxy.generation' => ProxyFactory::GENERATED,
// put any subscriptions here
'dapr.subscriptions' => [],
// if this service will be hosting any actors, add them here
'dapr.actors' => [],
// if this service will be hosting any actors, configure how long until dapr should consider an actor idle
'dapr.actors.idle_timeout' => null,
// if this service will be hosting any actors, configure how often dapr will check for idle actors
'dapr.actors.scan_interval' => null,
// if this service will be hosting any actors, configure how long dapr will wait for an actor to finish during drains
'dapr.actors.drain_timeout' => null,
// if this service will be hosting any actors, configure if dapr should wait for an actor to finish
'dapr.actors.drain_enabled' => null,
// you shouldn't have to change this, but the setting is here if you need to
'dapr.port' => env('DAPR_HTTP_PORT', '3500'),
// add any custom serialization routines here
'dapr.serializers.custom' => [],
// add any custom deserialization routines here
'dapr.deserializers.custom' => [],
// the following has no effect, as it is the default middlewares and processed in order specified
'dapr.http.middleware.request' => [get(Tracing::class)],
'dapr.http.middleware.response' => [get(ApplicationJson::class), get(Tracing::class)],
];
Create your service
Create index.php
and put the following contents:
<?php
require_once __DIR__.'/vendor/autoload.php';
use Dapr\App;
$app = App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(__DIR__ . '/config.php'));
$app->get('/hello/{name}', function(string $name) {
return ['hello' => $name];
});
$app->start();
Try it out
Initialize dapr with dapr init
and then start the project with dapr run -a dev -p 3000 -- php -S 0.0.0.0:3000
.
You can now open a web browser and point it to http://localhost:3000/hello/world
replacing world
with your name, a pet’s name, or whatever you want.
Congratulations, you’ve created your first Dapr service! I’m excited to see what you’ll do with it!
More Information
2.5.1 - Virtual Actors
If you’re new to the actor pattern, the best place to learn about the actor pattern is in the Actor Overview.
In the PHP SDK, there are two sides to an actor: the client and the actor (aka the runtime). As a client of an actor, you’ll interact with a remote actor via the ActorProxy class. This class generates a proxy class on-the-fly using one of several configured strategies.
When writing an actor, state can be managed for you. You can hook into the actor lifecycle, and define reminders and timers. This gives you considerable power for handling all types of problems that the actor pattern is suited for.
The Actor Proxy
Whenever you want to communicate with an actor, you’ll need to get a proxy object to do so. The proxy is responsible for serializing your request, deserializing the response, and returning it to you, all while obeying the contract defined by the specified interface.
In order to create the proxy, you’ll first need an interface to define how and what you send and receive from an actor. For example, if you want to communicate with a counting actor that solely keeps track of counts, you might define the interface as follows:
<?php
#[\Dapr\Actors\Attributes\DaprType('Counter')]
interface ICount {
function increment(int $amount = 1): void;
function get_count(): int;
}
It’s a good idea to put this interface in a shared library that the actor and clients can both access (if both are written in PHP). The DaprType attribute tells the DaprClient the name of the actor to send to. It should match the implementation’s DaprType, though you can override the type if needed.
<?php
$app->run(function(\Dapr\Actors\ActorProxy $actorProxy) {
$actor = $actorProxy->get(ICount::class, 'actor-id');
$actor->increment(10);
});
Writing Actors
To create an actor, you need to implement the interface you defined earlier and also add the DaprType attribute. All actors must implement IActor; however, there’s an Actor base class that implements the boilerplate, making your implementation much simpler.
Here’s the counter actor:
<?php
#[\Dapr\Actors\Attributes\DaprType('Counter')]
class Counter extends \Dapr\Actors\Actor implements ICount {
function __construct(string $id, private CountState $state) {
parent::__construct($id);
}
function increment(int $amount = 1): void {
$this->state->count += $amount;
}
function get_count(): int {
return $this->state->count;
}
}
The most important bit is the constructor. It takes at least one argument, named id, which is the id of the actor. Any additional arguments are injected by the DI container, including any ActorState you want to use.
Actor Lifecycle
An actor is instantiated via the constructor on every request targeting that actor type. You can use it to calculate ephemeral state or handle any kind of request-specific startup you require, such as setting up other clients or connections.
After the actor is instantiated, the on_activation()
method may be called. The on_activation()
method is called any
time the actor “wakes up” or when it is created for the first time. It is not called on every request.
Next, the actor method is called. This may be from a timer, reminder, or from a client. You may perform any work that needs to be done and/or throw an exception.
Finally, the result of the work is returned to the caller. After some time (depending on how you’ve configured the
service), the actor will be deactivated and on_deactivation()
will be called. This may not be called if the host dies,
daprd crashes, or some other error occurs which prevents it from being called successfully.
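As a sketch of hooking into these lifecycle methods (this assumes the on_activation() and on_deactivation() hooks named above take no arguments and return nothing; verify against your SDK version), the counter actor from earlier could look like this:
<?php
#[\Dapr\Actors\Attributes\DaprType('Counter')]
class Counter extends \Dapr\Actors\Actor implements ICount {
    // ... constructor and methods from the earlier example ...
    public function on_activation(): void {
        // Runs when the actor "wakes up" or is created for the first time,
        // not on every request -- e.g., warm caches or open connections here.
    }
    public function on_deactivation(): void {
        // Best-effort cleanup: this may never run if the host dies or daprd crashes.
    }
}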
Actor State
Actor state is a “Plain Old PHP Object” (POPO) that extends ActorState
. The ActorState
base class provides a couple
of useful methods. Here’s an example implementation:
<?php
class CountState extends \Dapr\Actors\ActorState {
public int $count = 0;
}
Registering an Actor
Dapr expects to know what actors a service may host at startup, so you need to add them to the configuration. If you want to take advantage of pre-compiled dependency injection, use a factory:
<?php
// in config.php
return [
'dapr.actors' => fn() => [Counter::class],
];
All that is required to start the app:
<?php
require_once __DIR__ . '/vendor/autoload.php';
$app = \Dapr\App::create(
configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions('config.php')->enableCompilation(__DIR__)
);
$app->start();
Without pre-compiled dependency injection, you can list the actor classes directly:
<?php
// in config.php
return [
'dapr.actors' => [Counter::class]
];
All that is required to start the app:
<?php
require_once __DIR__ . '/vendor/autoload.php';
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions('config.php'));
$app->start();
2.5.1.1 - Production Reference: Actors
Proxy modes
There are four different modes in which actor proxies are handled. Each mode presents different trade-offs that you’ll need to weigh during development and in production.
<?php
\Dapr\Actors\Generators\ProxyFactory::GENERATED;
\Dapr\Actors\Generators\ProxyFactory::GENERATED_CACHED;
\Dapr\Actors\Generators\ProxyFactory::ONLY_EXISTING;
\Dapr\Actors\Generators\ProxyFactory::DYNAMIC;
The mode can be set with the dapr.actors.proxy.generation configuration key.
ProxyFactory::GENERATED: This is the default mode. In this mode, a class is generated and eval’d on every request. It’s mostly for development and shouldn’t be used in production.
ProxyFactory::GENERATED_CACHED: This is the same as GENERATED, except the class is stored in a tmp file so it doesn’t need to be regenerated on every request. It doesn’t know when to update the cached class, so using it in development is discouraged, but it is offered for when manually generating the files isn’t possible.
ProxyFactory::ONLY_EXISTING: In this mode, an exception is thrown if the proxy class doesn’t exist. This is useful when you don’t want to generate code in production. You’ll have to make sure the class is generated and pre-/autoloaded.
Generating proxies
You can create a composer script to generate proxies on demand to take advantage of the ONLY_EXISTING
mode.
Create a ProxyCompiler.php
<?php
class ProxyCompiler {
private const PROXIES = [
MyActorInterface::class,
MyOtherActorInterface::class,
];
private const PROXY_LOCATION = __DIR__.'/proxies/';
public static function compile() {
try {
$app = \Dapr\App::create();
foreach(self::PROXIES as $interface) {
$output = $app->run(function(\DI\FactoryInterface $factory) use ($interface) {
return \Dapr\Actors\Generators\FileGenerator::generate($interface, $factory);
});
$reflection = new ReflectionClass($interface);
$dapr_type = $reflection->getAttributes(\Dapr\Actors\Attributes\DaprType::class)[0]->newInstance()->type;
$filename = 'dapr_proxy_'.$dapr_type.'.php';
file_put_contents(self::PROXY_LOCATION.$filename, $output);
echo "Compiled: $interface";
}
} catch (Exception $ex) {
echo "Failed to generate proxy for $interface\n{$ex->getMessage()} on line {$ex->getLine()} in {$ex->getFile()}\n";
}
}
}
Then add a psr-4 autoloader for the generated proxies and a script in composer.json
:
{
"autoload": {
"psr-4": {
"Dapr\\Proxies\\": "path/to/proxies"
}
},
"scripts": {
"compile-proxies": "ProxyCompiler::compile"
}
}
And finally, configure dapr to only use the generated proxies:
<?php
// in config.php
return [
'dapr.actors.proxy.generation' => ProxyFactory::ONLY_EXISTING,
];
ProxyFactory::DYNAMIC: In this mode, the proxy satisfies the interface contract; however, it does not actually implement the interface itself (meaning instanceof will be false). This mode takes advantage of a few quirks in PHP to work and exists for cases where code cannot be eval’d or generated.
Requests
Creating an actor proxy is very inexpensive for any mode. There are no requests made when creating an actor proxy object.
When you call a method on a proxy object, only methods that you implemented are serviced by your actor implementation. get_id() is handled locally, while get_reminder(), delete_reminder(), etc. are handled by daprd.
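For example, with the ICount proxy from earlier (the actor id is illustrative):
<?php
$app->run(function(\Dapr\Actors\ActorProxy $actorProxy) {
    // no request is made when creating the proxy
    $actor = $actorProxy->get(ICount::class, 'actor-id');
    // handled locally, no request to the sidecar
    $id = $actor->get_id();
    // serviced by your actor implementation via the sidecar
    $actor->increment(1);
});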
Actor implementation
Every actor implementation in PHP must implement \Dapr\Actors\IActor
and use the \Dapr\Actors\ActorTrait
trait. This
allows for fast reflection and some shortcuts. Using the \Dapr\Actors\Actor
abstract base class does this for you, but
if you need to override the default behavior, you can do so by implementing the interface and using the trait.
Activation and deactivation
When an actor activates, a token file is written to a temporary directory (by default '/tmp/dapr_' + sha256(concat(Dapr type, id)) on Linux and '%temp%/dapr_' + sha256(concat(Dapr type, id)) on Windows).
This is persisted until the actor deactivates, or the host shuts down. This allows for on_activation
to be called once
and only once when Dapr activates the actor on the host.
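As an illustration only (this is not an SDK API), the token file location described above can be computed like this:
<?php
function actor_token_path(string $dapr_type, string $id): string {
    // sys_get_temp_dir() resolves to /tmp on Linux and %temp% on Windows
    return sys_get_temp_dir() . '/dapr_' . hash('sha256', $dapr_type . $id);
}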
Performance
Actor method invocation is very fast on a production setup with php-fpm
and nginx
, or IIS on Windows. Even though
the actor is constructed on every request, actor state keys are only loaded on-demand and not during each request.
However, there is some overhead in loading each key individually. This can be mitigated by storing an array of data in state, trading some usability for speed. Doing this from the start is not recommended; treat it as an optimization to apply when needed.
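A sketch of that trade-off (the class and property names here are illustrative):
<?php
class BulkySensorState extends \Dapr\Actors\ActorState {
    // A single state key holding an array: one load/store instead of one per key,
    // at the cost of reading and writing the whole array every time.
    public array $all_readings = [];
}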
Versioning state
The names of the variables in the ActorState
object directly correspond to key names in the store. This means that if
you change the type or name of a variable, you may run into errors. To get around this, you may need to version your state
object. In order to do this, you’ll need to override how state is loaded and stored. There are many ways to approach this,
one such solution might be something like this:
<?php
class VersionedState extends \Dapr\Actors\ActorState {
/**
* @var int The current version of the state in the store. We give a default value of the current version.
* However, it may be in the store with a different value.
*/
public int $state_version = self::VERSION;
/**
* @var int The current version of the data
*/
private const VERSION = 3;
/**
* Call when your actor activates.
*/
public function upgrade() {
if($this->state_version < self::VERSION) {
$value = parent::__get($this->get_versioned_key('key', $this->state_version));
// update the value after updating the data structure
parent::__set($this->get_versioned_key('key', self::VERSION), $value);
$this->state_version = self::VERSION;
$this->save_state();
}
}
// if you upgrade all keys as needed in the method above, you don't need to walk the previous
// keys when loading/saving and you can just get the current version of the key.
private function get_previous_version(int $version): int {
return $this->has_previous_version($version) ? $version - 1 : $version;
}
private function has_previous_version(int $version): bool {
return $version >= 0;
}
private function walk_versions(int $version, callable $callback, callable $predicate): mixed {
$value = $callback($version);
if($predicate($value) || !$this->has_previous_version($version)) {
return $value;
}
return $this->walk_versions($this->get_previous_version($version), $callback, $predicate);
}
private function get_versioned_key(string $key, int $version) {
return $this->has_previous_version($version) ? $version.$key : $key;
}
public function __get(string $key): mixed {
return $this->walk_versions(
self::VERSION,
fn($version) => parent::__get($this->get_versioned_key($key, $version)),
fn($value) => isset($value)
);
}
public function __isset(string $key): bool {
return $this->walk_versions(
self::VERSION,
fn($version) => parent::__isset($this->get_versioned_key($key, $version)),
fn($isset) => $isset
);
}
public function __set(string $key,mixed $value): void {
// optional: you can unset previous versions of the key
parent::__set($this->get_versioned_key($key, self::VERSION), $value);
}
public function __unset(string $key) : void {
// unset this version and all previous versions
$this->walk_versions(
self::VERSION,
fn($version) => parent::__unset($this->get_versioned_key($key, $version)),
fn() => false
);
}
}
There’s a lot to be optimized, and it wouldn’t be a good idea to use this verbatim in production, but you can get the gist of how it would work. A lot of it will depend on your use case which is why there’s not something like this in the SDK. For instance, in this example implementation, the previous value is kept for where there may be a bug during an upgrade; keeping the previous value allows for running the upgrade again, but you may wish to delete the previous value.
2.5.2 - The App
In PHP, there is no default router. Thus, the \Dapr\App
class is provided. It uses
Nikic’s FastRoute under the hood. However, you are free to use any router or
framework that you’d like. Just check out the add_dapr_routes()
method in the App
class to see how actors and
subscriptions are implemented.
Every app should start with App::create()
which takes two parameters, the first is an existing DI container, if you
have one, and the second is a callback to hook into the ContainerBuilder
and add your own configuration.
From there, you should define your routes and then call $app->start()
to execute the route on the current request.
<?php
// app.php
require_once __DIR__ . '/vendor/autoload.php';
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions('config.php'));
// add a controller for GET /test/{id} that returns the id
$app->get('/test/{id}', fn(string $id) => $id);
$app->start();
Returning from a controller
You can return anything from a controller, and it will be serialized into a json object. You can also request the Psr Response object and return that instead, allowing you to customize headers, and have control over the entire response:
<?php
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions('config.php'));
// add a controller for GET /test/{id} that returns the id
$app->get('/test/{id}',
fn(
string $id,
\Psr\Http\Message\ResponseInterface $response,
\Nyholm\Psr7\Factory\Psr17Factory $factory) => $response->withBody($factory->createStream($id)));
$app->start();
Using the app as a client
When you just want to use Dapr as a client, such as in existing code, you can call $app->run()
. In these cases, there’s
usually no need for a custom configuration, however, you may want to use a compiled DI container, especially in production:
<?php
// app.php
require_once __DIR__ . '/vendor/autoload.php';
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->enableCompilation(__DIR__));
$result = $app->run(fn(\Dapr\DaprClient $client) => $client->get('/invoke/other-app/method/my-method'));
Using in other frameworks
A DaprClient object is provided; in fact, all the sugar used by the App object is built on the DaprClient.
<?php
require_once __DIR__ . '/vendor/autoload.php';
$clientBuilder = \Dapr\Client\DaprClient::clientBuilder();
// you can customize (de)serialization or comment out to use the default JSON serializers.
$clientBuilder = $clientBuilder->withSerializationConfig($yourSerializer)->withDeserializationConfig($yourDeserializer);
// you can also pass it a logger
$clientBuilder = $clientBuilder->withLogger($myLogger);
// and change the url of the sidecar, for example, using https
$clientBuilder = $clientBuilder->useHttpClient('https://localhost:3800');
There are several functions you can call to configure the client before finally building it with $clientBuilder->build().
2.5.2.1 - Unit Testing
Unit and integration tests are first-class citizens with the PHP SDK. Using the DI container, mocks, stubs,
and the provided \Dapr\Mocks\TestClient
allows you to have very fine-grained tests.
Testing Actors
With actors, there are two things we’re interested in while the actor is under test:
- The returned result based on an initial state
- The resulting state based on the initial state
Here’s an example test of a very simple actor that updates its state and returns a specific value:
<?php
// TestState.php
class TestState extends \Dapr\Actors\ActorState
{
public int $number;
}
// TestActor.php
#[\Dapr\Actors\Attributes\DaprType('TestActor')]
class TestActor extends \Dapr\Actors\Actor
{
public function __construct(string $id, private TestState $state)
{
parent::__construct($id);
}
public function oddIncrement(): bool
{
if ($this->state->number % 2 === 0) {
return false;
}
$this->state->number += 1;
return true;
}
}
// TheTest.php
class TheTest extends \PHPUnit\Framework\TestCase
{
private \DI\Container $container;
public function setUp(): void
{
parent::setUp();
// create a default app and extract the DI container from it
$app = \Dapr\App::create(
configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(
['dapr.actors' => [TestActor::class]],
[\Dapr\DaprClient::class => \DI\autowire(\Dapr\Mocks\TestClient::class)]
));
$app->run(fn(\DI\Container $container) => $this->container = $container);
}
public function testIncrementsWhenOdd()
{
$id = uniqid();
$runtime = $this->container->get(\Dapr\Actors\ActorRuntime::class);
$client = $this->getClient();
// return the current state, per https://docs.dapr.io/reference/api/actors_api/
$client->register_get("/actors/TestActor/$id/state/number", code: 200, data: 3);
// ensure it increments, per https://docs.dapr.io/reference/api/actors_api/
$client->register_post(
"/actors/TestActor/$id/state",
code: 204,
response_data: null,
expected_request: [
[
'operation' => 'upsert',
'request' => [
'key' => 'number',
'value' => 4,
],
],
]
);
$result = $runtime->resolve_actor(
'TestActor',
$id,
fn($actor) => $runtime->do_method($actor, 'oddIncrement', null)
);
$this->assertTrue($result);
}
private function getClient(): \Dapr\Mocks\TestClient
{
return $this->container->get(\Dapr\DaprClient::class);
}
}
Alternatively, you can test the actor directly, without the runtime or a running sidecar:
<?php
// TestState.php
class TestState extends \Dapr\Actors\ActorState
{
public int $number;
}
// TestActor.php
#[\Dapr\Actors\Attributes\DaprType('TestActor')]
class TestActor extends \Dapr\Actors\Actor
{
public function __construct(string $id, private TestState $state)
{
parent::__construct($id);
}
public function oddIncrement(): bool
{
if ($this->state->number % 2 === 0) {
return false;
}
$this->state->number += 1;
return true;
}
}
// TheTest.php
class TheTest extends \PHPUnit\Framework\TestCase
{
public function testNotIncrementsWhenEven() {
$container = new \DI\Container();
$state = new TestState($container, $container);
$state->number = 4;
$id = uniqid();
$actor = new TestActor($id, $state);
$this->assertFalse($actor->oddIncrement());
$this->assertSame(4, $state->number);
}
}
Testing Transactions
When building on transactions, you’ll likely want to test how a failed transaction is handled. In order to do that, you need to inject failures and ensure the transaction matches what you expect.
<?php
// MyState.php
#[\Dapr\State\Attributes\StateStore('statestore', \Dapr\consistency\EventualFirstWrite::class)]
class MyState extends \Dapr\State\TransactionalState {
public string $value = '';
}
// SomeService.php
class SomeService {
public function __construct(private MyState $state) {}
public function doWork() {
$this->state->begin();
$this->state->value = "hello world";
$this->state->commit();
}
}
// TheTest.php
class TheTest extends \PHPUnit\Framework\TestCase {
private \DI\Container $container;
public function setUp(): void
{
parent::setUp();
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder)
=> $builder->addDefinitions([\Dapr\DaprClient::class => \DI\autowire(\Dapr\Mocks\TestClient::class)]));
$this->container = $app->run(fn(\DI\Container $container) => $container);
}
private function getClient(): \Dapr\Mocks\TestClient {
return $this->container->get(\Dapr\DaprClient::class);
}
public function testTransactionFailure() {
$client = $this->getClient();
// create a response from https://v1-16.docs.dapr.io/reference/api/state_api/
$client->register_post('/state/statestore/bulk', code: 200, response_data: [
[
'key' => 'value',
// no previous value
],
], expected_request: [
'keys' => ['value'],
'parallelism' => 10
]);
$client->register_post('/state/statestore/transaction',
code: 200,
response_data: null,
expected_request: [
'operations' => [
[
'operation' => 'upsert',
'request' => [
'key' => 'value',
'value' => 'hello world'
]
]
]
]
);
$state = new MyState($this->container, $this->container);
$service = new SomeService($state);
$service->doWork();
$this->assertSame('hello world', $state->value);
}
}
Alternatively, as a pure unit test with a stubbed state object:
<?php
// MyState.php
#[\Dapr\State\Attributes\StateStore('statestore', \Dapr\consistency\EventualFirstWrite::class)]
class MyState extends \Dapr\State\TransactionalState {
public string $value = '';
}
// SomeService.php
class SomeService {
public function __construct(private MyState $state) {}
public function doWork() {
$this->state->begin();
$this->state->value = "hello world";
$this->state->commit();
}
}
// TheTest.php
class TheTest extends \PHPUnit\Framework\TestCase {
public function testTransactionFailure() {
$state = $this->createStub(MyState::class);
$service = new SomeService($state);
$service->doWork();
$this->assertSame('hello world', $state->value);
}
}
2.5.3 - Custom Serialization
Dapr uses JSON serialization and thus (complex) type information is lost when sending/receiving data.
Serialization
When returning an object from a controller, passing an object to the DaprClient
, or storing an object in a state store,
only public properties are scanned and serialized. You can customize this behavior by implementing \Dapr\Serialization\ISerialize
.
For example, if you wanted to create an ID type that serialized to a string, you may implement it like so:
<?php
class MyId implements \Dapr\Serialization\Serializers\ISerialize
{
public string $id;
public function serialize(mixed $value,\Dapr\Serialization\ISerializer $serializer): mixed
{
// $value === $this
return $this->id;
}
}
This works for any type that we have full ownership over; however, it doesn’t work for classes from libraries or PHP itself. For that, you need to register a custom serializer with the DI container:
<?php
// in config.php
class SerializeSomeClass implements \Dapr\Serialization\Serializers\ISerialize
{
public function serialize(mixed $value,\Dapr\Serialization\ISerializer $serializer) : mixed
{
// serialize $value and return the result
}
}
return [
'dapr.serializers.custom' => [SomeClass::class => new SerializeSomeClass()],
];
Deserialization
Deserialization works exactly the same way, except the interface is \Dapr\Deserialization\Deserializers\IDeserialize
.
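For example, a sketch of a matching deserializer for the MyId type from the serialization example (the deserialize() signature is assumed to mirror serialize(); verify it against your SDK version):
<?php
// in config.php
class DeserializeMyId implements \Dapr\Deserialization\Deserializers\IDeserialize
{
    public function deserialize(mixed $value, \Dapr\Deserialization\IDeserializer $deserializer): mixed
    {
        // assumed signature mirroring ISerialize: rebuild MyId from its string form
        $id = new MyId();
        $id->id = $value;
        return $id;
    }
}
return [
    'dapr.deserializers.custom' => [MyId::class => new DeserializeMyId()],
];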
2.5.4 - Publish and Subscribe with PHP
With Dapr, you can publish anything, including cloud events. The SDK contains a simple cloud event implementation, but you can also just pass an array that conforms to the cloud event spec or use another library.
<?php
$app->post('/publish', function(\Dapr\Client\DaprClient $daprClient) {
$daprClient->publishEvent(pubsubName: 'pubsub', topicName: 'my-topic', data: ['something' => 'happened']);
});
For more information about publish/subscribe, check out the howto.
Data content type
The PHP SDK allows setting the data content type either when constructing a custom cloud event, or when publishing raw data.
<?php
$event = new \Dapr\PubSub\CloudEvent();
$event->data = $xml;
$event->data_content_type = 'application/xml';
<?php
/**
* @var \Dapr\Client\DaprClient $daprClient
*/
$daprClient->publishEvent(pubsubName: 'pubsub', topicName: 'my-topic', data: $raw_data, contentType: 'application/octet-stream');
Binary data
Only application/octet-stream is supported for binary data.
Receiving cloud events
In your subscription handler, you can have the DI container inject either a Dapr\PubSub\CloudEvent or an array into your controller. The former does some validation to ensure you have a proper event. If you need direct access to the data, or the events do not conform to the spec, use an array.
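For example, a sketch of a handler using the App router from earlier (the route is illustrative, and returning a SUCCESS status follows the pub/sub API):
<?php
$app->post('/my-topic-handler', function(\Dapr\PubSub\CloudEvent $event) {
    // the DI container validates the cloud event before injecting it
    error_log('Received: ' . json_encode($event->data));
    return ['status' => 'SUCCESS'];
});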
2.5.5 - State Management with PHP
Dapr offers a great modular approach to using state in your application. The best way to learn the basics is to visit the howto.
Metadata
Many state components allow you to pass metadata to the component to control specific aspects of the component’s behavior. The PHP SDK allows you to pass that metadata through:
<?php
// using the state manager
$app->run(
fn(\Dapr\State\StateManager $stateManager) =>
$stateManager->save_state('statestore', new \Dapr\State\StateItem('key', 'value', metadata: ['port' => '112'])));
// using the DaprClient
$app->run(fn(\Dapr\Client\DaprClient $daprClient) => $daprClient->saveState(storeName: 'statestore', key: 'key', value: 'value', metadata: ['port' => '112']));
This is an example of how you might pass the port metadata to Cassandra.
Every state operation allows passing metadata.
Consistency/concurrency
In the PHP SDK, there are four classes that represent the four different types of consistency and concurrency in Dapr:
<?php
[
\Dapr\consistency\StrongLastWrite::class,
\Dapr\consistency\StrongFirstWrite::class,
\Dapr\consistency\EventualLastWrite::class,
\Dapr\consistency\EventualFirstWrite::class,
]
Passing one of them to a StateManager
method or using the StateStore()
attribute allows you to define how the state
store should handle conflicts.
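For example, using the StateStore() attribute, as in the transactional state examples elsewhere in this document:
<?php
#[\Dapr\State\Attributes\StateStore('statestore', \Dapr\consistency\StrongFirstWrite::class)]
class MyState extends \Dapr\State\TransactionalState {
    public string $value = '';
}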
Parallelism
When doing a bulk read or beginning a transaction, you can specify the amount of parallelism. If the underlying store has to read one key at a time, Dapr will read at most that many keys concurrently. This can be helpful to control the load on the state store at the expense of performance. The default is 10.
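A sketch of specifying parallelism when beginning a transaction (this assumes begin() accepts a parallelism parameter; verify against your SDK version):
<?php
class LargeState extends \Dapr\State\TransactionalState {
    public string $key = '';
}
$app->run(function (LargeState $state) {
    // assumed parameter: load at most 12 keys at a time from the store
    $state->begin(parallelism: 12);
    $state->key = 'value';
    $state->commit();
});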
Prefix
Hardcoded key names are useful, but why not make state objects more reusable? When committing a transaction or saving an object to state, you can pass a prefix that is applied to every key in the object.
<?php
class TransactionObject extends \Dapr\State\TransactionalState {
public string $key;
}
$app->run(function (TransactionObject $object ) {
$object->begin(prefix: 'my-prefix-');
$object->key = 'value';
// commit to key `my-prefix-key`
$object->commit();
});
<?php
class StateObject {
public string $key;
}
$app->run(function(\Dapr\State\StateManager $stateManager) {
$stateManager->load_object($obj = new StateObject(), prefix: 'my-prefix-');
// original value is from `my-prefix-key`
$obj->key = 'value';
// save to `my-prefix-key`
$stateManager->save_object($obj, prefix: 'my-prefix-');
});
2.6 - Dapr Python SDK
Dapr offers a variety of subpackages to help with the development of Python applications. Using them you can create Python clients, servers, and virtual actors with Dapr.
Prerequisites
- Dapr CLI installed
- Initialized Dapr environment
- Python 3.9+ installed
Installation
To get started with the Python SDK, install the main Dapr Python SDK package.
pip install dapr
Note: The development package will contain features and behavior that will be compatible with the pre-release version of the Dapr runtime. Make sure to uninstall any stable versions of the Python SDK before installing the dapr-dev package.
pip install dapr-dev
Available subpackages
SDK imports
Python SDK imports are subpackages included with the main SDK install, but need to be imported when used. The most common imports provided by the Dapr Python SDK are:
Learn more about all of the available Dapr Python SDK imports.
SDK extensions
SDK extensions mainly work as utilities for receiving pub/sub events, programmatically creating pub/sub subscriptions, and handling input binding events. While you can achieve all of these tasks without an extension, using a Python SDK extension proves convenient.
Learn more about the Dapr Python SDK extensions.
Try it out
Clone the Python SDK repo.
git clone https://github.com/dapr/python-sdk.git
Walk through the Python quickstarts, tutorials, and examples to see Dapr in action:
SDK samples | Description |
---|---|
Quickstarts | Experience Dapr’s API building blocks in just a few minutes using the Python SDK. |
SDK samples | Clone the SDK repo to try out some examples and get started. |
Bindings tutorial | See how Dapr Python SDK works alongside other Dapr SDKs to enable bindings. |
Distributed Calculator tutorial | Use the Dapr Python SDK to handle method invocation and state persistent capabilities. |
Hello World tutorial | Learn how to get Dapr up and running locally on your machine with the Python SDK. |
Hello Kubernetes tutorial | Get up and running with the Dapr Python SDK in a Kubernetes cluster. |
Observability tutorial | Explore Dapr’s metric collection, tracing, logging and health check capabilities using the Python SDK. |
Pub/sub tutorial | See how Dapr Python SDK works alongside other Dapr SDKs to enable pub/sub applications. |
More information
2.6.1 - Getting started with the Dapr client Python SDK
The Dapr client package allows you to interact with other Dapr applications from a Python application.
Note
If you haven’t already, try out one of the quickstarts for a quick walk-through on how to use the Dapr Python SDK with an API building block.
Prerequisites
Install the Dapr Python package before getting started.
Import the client package
The dapr package contains the DaprClient, which is used to create and use a client.
from dapr.clients import DaprClient
Initialising the client
You can initialise a Dapr client in multiple ways:
Default values:
When you initialise the client without any parameters, it will use the default values for a Dapr sidecar instance (127.0.0.1:50001).
from dapr.clients import DaprClient
with DaprClient() as d:
    # use the client
Specifying an endpoint on initialisation:
When passed as an argument in the constructor, the gRPC endpoint takes precedence over any configuration or environment variable.
from dapr.clients import DaprClient
with DaprClient("mydomain:50051?tls=true") as d:
# use the client
Configuration options:
Dapr Sidecar Endpoints
You can use the standardised DAPR_GRPC_ENDPOINT environment variable to specify the gRPC endpoint. When this variable is set, the client can be initialised without any arguments:
export DAPR_GRPC_ENDPOINT="mydomain:50051?tls=true"
from dapr.clients import DaprClient
with DaprClient() as d:
    # the client will use the endpoint specified in the environment variables
The legacy environment variables DAPR_RUNTIME_HOST, DAPR_HTTP_PORT and DAPR_GRPC_PORT are also supported, but DAPR_GRPC_ENDPOINT takes precedence.
Dapr API Token
If your Dapr instance is configured to require the DAPR_API_TOKEN environment variable, you can set it in the environment and the client will use it automatically.
You can read more about Dapr API token authentication here.
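For example, you can export the token before starting your application (the value below is a placeholder):
export DAPR_API_TOKEN="your-dapr-api-token"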
Health timeout
On client initialisation, a health check is performed against the Dapr sidecar (/healthz/outbound). The client will wait for the sidecar to be up and running before proceeding. The default healthcheck timeout is 60 seconds, but it can be overridden by setting the DAPR_HEALTH_TIMEOUT environment variable.
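For example, to wait for up to 120 seconds for the sidecar to become available:
export DAPR_HEALTH_TIMEOUT=120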
Retries and timeout
The Dapr client can retry a request if a specific error code is received from the sidecar. This is configurable through the DAPR_API_MAX_RETRIES environment variable and is picked up automatically, not requiring any code changes. The default value for DAPR_API_MAX_RETRIES is 0, which means no retries will be made.
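For example, to allow up to five retries:
export DAPR_API_MAX_RETRIES=5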
You can fine-tune more retry parameters by creating a dapr.clients.retry.RetryPolicy object and passing it to the DaprClient constructor:
from grpc import StatusCode
from dapr.clients import DaprClient
from dapr.clients.retry import RetryPolicy

retry = RetryPolicy(
    max_attempts=5,
    initial_backoff=1,
    max_backoff=20,
    backoff_multiplier=1.5,
    retryable_http_status_codes=[408, 429, 500, 502, 503, 504],
    retryable_grpc_status_codes=[StatusCode.UNAVAILABLE, StatusCode.DEADLINE_EXCEEDED],
)

with DaprClient(retry_policy=retry) as d:
    ...
or for actors:
from dapr.actor import ActorProxy, ActorProxyFactory, ActorId

factory = ActorProxyFactory(retry_policy=RetryPolicy(max_attempts=3))
proxy = ActorProxy.create('DemoActor', ActorId('1'), DemoActorInterface, factory)
Timeout can be set for all calls through the environment variable DAPR_API_TIMEOUT_SECONDS. The default value is 60 seconds.
Note: You can control timeouts on service invocation separately, by passing a timeout parameter to the invoke_method method.
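For example, a minimal sketch of a per-call timeout (the app ID and method name below are placeholders):
from dapr.clients import DaprClient

with DaprClient() as d:
    # Fail this call if no response is received within 5 seconds
    resp = d.invoke_method('service-to-invoke', 'method-to-invoke', data='', timeout=5)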
Error handling
Initially, errors in Dapr followed the Standard gRPC error model. However, to provide more detailed and informative error messages, in version 1.13 an enhanced error model was introduced which aligns with the gRPC Richer error model. In response, the Python SDK implemented DaprGrpcError, a custom exception class designed to improve the developer experience.
It’s important to note that the transition to using DaprGrpcError for all gRPC status exceptions is a work in progress. As of now, not every API call in the SDK has been updated to leverage this custom exception. We are actively working on this enhancement and welcome contributions from the community.
Example of handling DaprGrpcError exceptions when using the Dapr Python SDK:
from dapr.clients import DaprClient
from dapr.clients.exceptions import DaprGrpcError

# Assumes an open DaprClient `d` and storeName/key/value set by the caller
try:
    d.save_state(store_name=storeName, key=key, value=value)
except DaprGrpcError as err:
    print(f'Status code: {err.code()}')
    print(f'Message: {err.message()}')
    print(f'Error code: {err.error_code()}')
    print(f'Error info (reason): {err.error_info.reason}')
    print(f'Resource info (resource type): {err.resource_info.resource_type}')
    print(f'Resource info (resource name): {err.resource_info.resource_name}')
    print(f'Bad request (field): {err.bad_request.field_violations[0].field}')
    print(f'Bad request (description): {err.bad_request.field_violations[0].description}')
Building blocks
The Python SDK allows you to interface with all of the Dapr building blocks.
Invoke a service
The Dapr Python SDK provides a simple API for invoking services via either HTTP or gRPC (deprecated). The protocol can be selected by setting the DAPR_API_METHOD_INVOCATION_PROTOCOL environment variable, defaulting to HTTP when unset. gRPC service invocation in Dapr is deprecated and gRPC proxying is recommended as an alternative.
from dapr.clients import DaprClient
with DaprClient() as d:
    # invoke a method (gRPC or HTTP GET)
    resp = d.invoke_method('service-to-invoke', 'method-to-invoke', data='{"message":"Hello World"}')

    # for other HTTP verbs the verb must be specified
    # invoke a 'POST' method (HTTP only)
    resp = d.invoke_method('service-to-invoke', 'method-to-invoke', data='{"id":"100", "FirstName":"Value", "LastName":"Value"}', http_verb='post')
The base endpoint for HTTP API calls is specified in the DAPR_HTTP_ENDPOINT environment variable. If this variable is not set, the endpoint value is derived from the DAPR_RUNTIME_HOST and DAPR_HTTP_PORT variables, whose default values are 127.0.0.1 and 3500 respectively.
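For example:
export DAPR_HTTP_ENDPOINT="http://localhost:3500"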
The base endpoint for gRPC calls is the one used for the client initialisation (explained above).
- For a full guide on service invocation visit How-To: Invoke a service.
- Visit Python SDK examples for code samples and instructions to try out service invocation.
Save & get application state
from dapr.clients import DaprClient
with DaprClient() as d:
    # Save state
    d.save_state(store_name="statestore", key="key1", value="value1")

    # Get state
    data = d.get_state(store_name="statestore", key="key1").data

    # Delete state
    d.delete_state(store_name="statestore", key="key1")
- For a full list of state operations visit How-To: Get & save state.
- Visit Python SDK examples for code samples and instructions to try out state management.
Query application state (Alpha)
from dapr.clients import DaprClient

query = '''
{
    "filter": {
        "EQ": { "state": "CA" }
    },
    "sort": [
        {
            "key": "person.id",
            "order": "DESC"
        }
    ]
}
'''

with DaprClient() as d:
    resp = d.query_state(
        store_name='state_store',
        query=query,
        states_metadata={"metakey": "metavalue"},  # optional
    )
- For a full list of state store query options visit How-To: Query state.
- Visit Python SDK examples for code samples and instructions to try out state store querying.
Publish & subscribe
Publish messages
from dapr.clients import DaprClient
with DaprClient() as d:
    resp = d.publish_event(pubsub_name='pubsub', topic_name='TOPIC_A', data='{"message":"Hello World"}')
Send CloudEvents messages with a JSON payload:
from dapr.clients import DaprClient
import json
with DaprClient() as d:
    cloud_event = {
        'specversion': '1.0',
        'type': 'com.example.event',
        'source': 'my-service',
        'id': 'myid',
        'data': {'id': 1, 'message': 'hello world'},
        'datacontenttype': 'application/json',
    }

    # Set the data content type to 'application/cloudevents+json'
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic_name='TOPIC_CE',
        data=json.dumps(cloud_event),
        data_content_type='application/cloudevents+json',
    )
Publish CloudEvents messages with plain text payload:
from dapr.clients import DaprClient
import json
with DaprClient() as d:
    cloud_event = {
        'specversion': '1.0',
        'type': 'com.example.event',
        'source': 'my-service',
        'id': "myid",
        'data': 'hello world',
        'datacontenttype': 'text/plain',
    }

    # Set the data content type to 'application/cloudevents+json'
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic_name='TOPIC_CE',
        data=json.dumps(cloud_event),
        data_content_type='application/cloudevents+json',
    )
Subscribe to messages
from cloudevents.sdk.event import v1
from dapr.ext.grpc import App, Rule
import json

app = App()

# Default subscription for a topic
@app.subscribe(pubsub_name='pubsub', topic='TOPIC_A')
def mytopic(event: v1.Event) -> None:
    data = json.loads(event.Data())
    print(f'Received: id={data["id"]}, message="{data["message"]}" '
          f'content_type="{event.content_type}"', flush=True)

# Specific handler using Pub/Sub routing
@app.subscribe(pubsub_name='pubsub', topic='TOPIC_A',
               rule=Rule("event.type == \"important\"", 1))
def mytopic_important(event: v1.Event) -> None:
    data = json.loads(event.Data())
    print(f'Received: id={data["id"]}, message="{data["message"]}" '
          f'content_type="{event.content_type}"', flush=True)
- For more information about pub/sub, visit How-To: Publish & subscribe.
- Visit Python SDK examples for code samples and instructions to try out pub/sub.
Streaming message subscription
You can create a streaming subscription to a PubSub topic using either the subscribe or subscribe_with_handler methods.
The subscribe method returns an iterable Subscription object, which allows you to pull messages from the stream by using a for loop (ex. for message in subscription) or by calling the next_message method. This will block on the main thread while waiting for messages. When done, you should call the close method to terminate the subscription and stop receiving messages.
The subscribe_with_handler method accepts a callback function that is executed for each message received from the stream. It runs in a separate thread, so it doesn’t block the main thread. The callback should return a TopicEventResponse (ex. TopicEventResponse('success')), indicating whether the message was processed successfully, should be retried, or should be discarded. The method will automatically manage message acknowledgements based on the returned status. The call to the subscribe_with_handler method returns a close function, which should be called to terminate the subscription when you’re done.
Here’s an example of using the subscribe method:
import time

from dapr.clients import DaprClient
from dapr.clients.grpc.subscription import StreamInactiveError, StreamCancelledError

counter = 0

def process_message(message):
    global counter
    counter += 1
    # Process the message here
    print(f'Processing message: {message.data()} from {message.topic()}...')
    return 'success'

def main():
    with DaprClient() as client:
        global counter

        subscription = client.subscribe(
            pubsub_name='pubsub', topic='TOPIC_A', dead_letter_topic='TOPIC_A_DEAD'
        )

        try:
            for message in subscription:
                if message is None:
                    print('No message received. The stream might have been cancelled.')
                    continue

                try:
                    response_status = process_message(message)

                    if response_status == 'success':
                        subscription.respond_success(message)
                    elif response_status == 'retry':
                        subscription.respond_retry(message)
                    elif response_status == 'drop':
                        subscription.respond_drop(message)

                    if counter >= 5:
                        break
                except StreamInactiveError:
                    print('Stream is inactive. Retrying...')
                    time.sleep(1)
                    continue
                except StreamCancelledError:
                    print('Stream was cancelled')
                    break
                except Exception as e:
                    print(f'Error occurred during message processing: {e}')
        finally:
            print('Closing subscription...')
            subscription.close()

if __name__ == '__main__':
    main()
And here’s an example of using the subscribe_with_handler method:
import time

from dapr.clients import DaprClient
from dapr.clients.grpc._response import TopicEventResponse

counter = 0

def process_message(message):
    # Process the message here
    global counter
    counter += 1
    print(f'Processing message: {message.data()} from {message.topic()}...')
    return TopicEventResponse('success')

def main():
    with DaprClient() as client:
        # This will start a new thread that will listen for messages
        # and process them in the `process_message` function
        close_fn = client.subscribe_with_handler(
            pubsub_name='pubsub', topic='TOPIC_A', handler_fn=process_message,
            dead_letter_topic='TOPIC_A_DEAD'
        )

        while counter < 5:
            time.sleep(1)

        print("Closing subscription...")
        close_fn()

if __name__ == '__main__':
    main()
- For more information about pub/sub, visit How-To: Publish & subscribe.
- Visit Python SDK examples for code samples and instructions to try out streaming pub/sub.
Conversation (Alpha)
Note
The Dapr Conversation API is currently in alpha.
Since version 1.15, Dapr offers developers the capability to securely and reliably interact with Large Language Models (LLM) through the Conversation API.
from dapr.clients import DaprClient
from dapr.clients.grpc._request import ConversationInput
with DaprClient() as d:
    inputs = [
        ConversationInput(content="What's Dapr?", role='user', scrub_pii=True),
        ConversationInput(content='Give a brief overview.', role='user', scrub_pii=True),
    ]

    metadata = {
        'model': 'foo',
        'key': 'authKey',
        'cacheTTL': '10m',
    }

    response = d.converse_alpha1(
        name='echo', inputs=inputs, temperature=0.7, context_id='chat-123', metadata=metadata
    )

    for output in response.outputs:
        print(f'Result: {output.result}')
Interact with output bindings
from dapr.clients import DaprClient
with DaprClient() as d:
    resp = d.invoke_binding(binding_name='kafkaBinding', operation='create', data='{"message":"Hello World"}')
- For a full guide on output bindings visit How-To: Use bindings.
- Visit Python SDK examples for code samples and instructions to try out output bindings.
Retrieve secrets
from dapr.clients import DaprClient
with DaprClient() as d:
    resp = d.get_secret(store_name='localsecretstore', key='secretKey')
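You can also retrieve all the secrets in a store at once with get_bulk_secret, for example:
from dapr.clients import DaprClient

with DaprClient() as d:
    # Retrieves every secret the store exposes to this application
    resp = d.get_bulk_secret(store_name='localsecretstore')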
- For a full guide on secrets visit How-To: Retrieve secrets.
- Visit Python SDK examples for code samples and instructions to try out retrieving secrets.
Configuration
Get configuration
from dapr.clients import DaprClient
with DaprClient() as d:
    # Get Configuration
    configuration = d.get_configuration(store_name='configurationstore', keys=['orderId'], config_metadata={})
Subscribe to configuration
import asyncio
from time import sleep
from dapr.clients import DaprClient

async def executeConfiguration():
    with DaprClient() as d:
        storeName = 'configurationstore'
        key = 'orderId'

        # Wait for sidecar to be up within 20 seconds.
        d.wait(20)

        # Subscribe to configuration by key.
        configuration = await d.subscribe_configuration(store_name=storeName, keys=[key], config_metadata={})
        while True:
            if configuration is not None:
                items = configuration.get_items()
                for key, item in items:
                    print(f"Subscribe key={key} value={item.value} version={item.version}", flush=True)
            else:
                print("Nothing yet")
            sleep(5)

asyncio.run(executeConfiguration())
- Learn more about managing configurations via the How-To: Manage configuration guide.
- Visit Python SDK examples for code samples and instructions to try out configuration.
Distributed Lock
from dapr.clients import DaprClient

def main():
    # Lock parameters
    store_name = 'lockstore'  # as defined in components/lockstore.yaml
    resource_id = 'example-lock-resource'
    client_id = 'example-client-id'
    expiry_in_seconds = 60

    with DaprClient() as dapr:
        print('Will try to acquire a lock from lock store named [%s]' % store_name)
        print('The lock is for a resource named [%s]' % resource_id)
        print('The client identifier is [%s]' % client_id)
        print('The lock will expire in %s seconds.' % expiry_in_seconds)

        with dapr.try_lock(store_name, resource_id, client_id, expiry_in_seconds) as lock_result:
            assert lock_result.success, 'Failed to acquire the lock. Aborting.'
            print('Lock acquired successfully!!!')

        # At this point the lock was released - by magic of the `with` clause ;)
        unlock_result = dapr.unlock(store_name, resource_id, client_id)
        print('We already released the lock so unlocking will not work.')
        print('We tried to unlock it anyway and got back [%s]' % unlock_result.status)
- Learn more about using a distributed lock: How-To: Use a lock.
- Visit Python SDK examples for code samples and instructions to try out distributed lock.
Cryptography
from dapr.clients import DaprClient
from dapr.clients.grpc._crypto import EncryptOptions, DecryptOptions

message = 'The secret is "passw0rd"'

def main():
    with DaprClient() as d:
        resp = d.encrypt(
            data=message.encode(),
            options=EncryptOptions(
                component_name='crypto-localstorage',
                key_name='rsa-private-key.pem',
                key_wrap_algorithm='RSA',
            ),
        )
        encrypt_bytes = resp.read()

        resp = d.decrypt(
            data=encrypt_bytes,
            options=DecryptOptions(
                component_name='crypto-localstorage',
                key_name='rsa-private-key.pem',
            ),
        )
        decrypt_bytes = resp.read()

        print(decrypt_bytes.decode())  # The secret is "passw0rd"
- For a full list of state operations visit How-To: Use the cryptography APIs.
- Visit Python SDK examples for code samples and instructions to try out cryptography.
Related links
2.6.2 - Getting started with the Dapr actor Python SDK
The Dapr actor package allows you to interact with Dapr virtual actors from a Python application.
Prerequisites
- Dapr CLI installed
- Initialized Dapr environment
- Python 3.9+ installed
- Dapr Python package installed
Actor interface
The interface defines the actor contract that is shared between the actor implementation and the clients calling the actor. Because a client may depend on it, it typically makes sense to define it in a module that is separate from the actor implementation.
from dapr.actor import ActorInterface, actormethod
class DemoActorInterface(ActorInterface):
    @actormethod(name="GetMyData")
    async def get_my_data(self) -> object:
        ...
Actor services
An actor service hosts the virtual actor. It is implemented as a class that derives from the base type Actor and implements the interfaces defined in the actor interface.
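As a minimal sketch (reusing the DemoActorInterface defined above), an actor implementation might look like this:
from dapr.actor import Actor
from demo_actor_interface import DemoActorInterface

class DemoActor(Actor, DemoActorInterface):
    async def get_my_data(self) -> object:
        # A real actor would typically read this from actor state
        return {'message': 'myData'}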
Actors can be created using one of the Dapr actor extensions, such as the FastAPI and Flask extensions covered later in this document.
Actor client
An actor client contains the implementation of the actor client which calls the actor methods defined in the actor interface.
import asyncio

from dapr.actor import ActorProxy, ActorId
from demo_actor_interface import DemoActorInterface

async def main():
    # Create proxy client
    proxy = ActorProxy.create('DemoActor', ActorId('1'), DemoActorInterface)

    # Call method on client
    resp = await proxy.GetMyData()
Sample
Visit this page for a runnable actor sample.
Mock Actor Testing
The Dapr Python SDK provides the ability to create mock actors to unit test your actor methods and see how they interact with the actor state.
Sample Usage
from dapr.actor import Actor
from dapr.actor.runtime.mock_actor import create_mock_actor

# MyActorInterface is the actor interface this actor implements (defined elsewhere)
class MyActor(Actor, MyActorInterface):
    async def save_state(self, data) -> None:
        await self._state_manager.set_state('mystate', data)
        await self._state_manager.save_state()

mock_actor = create_mock_actor(MyActor, "id")

await mock_actor.save_state(5)
assert mock_actor._state_manager._mock_state['mystate'] == 5  # True
Mock actors are created by passing your actor class and an actor ID (a string) to the create_mock_actor function. This function returns an instance of the actor with many internal methods overridden. Instead of interacting with Dapr for tasks like saving state or managing timers, the mock actor uses in-memory state to simulate these behaviors.
This state can be accessed through the following variables:
IMPORTANT NOTE: Due to the type hinting issues discussed further down, these variables will not be visible to type checkers/linters, which will flag them as invalid. You will need to use them with # type: ignore in order to satisfy any such systems.
- _state_manager._mock_state: a [str, object] dict where all the actor state is stored. Any variable saved via _state_manager.save_state(key, value), or any other state manager method, is stored in the dict as that key/value pair. Any value loaded via try_get_state, or any other state manager method, is taken from this dict.
- _state_manager._mock_timers: a [str, ActorTimerData] dict which holds the active actor timers. Any actor method which would add or remove a timer adds or pops the appropriate ActorTimerData object from this dict.
- _state_manager._mock_reminders: a [str, ActorReminderData] dict which holds the active actor reminders. Any actor method which would add or remove a reminder adds or pops the appropriate ActorReminderData object from this dict.
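For example, an assertion against the mock state might look like this (illustrative):
assert mock_actor._state_manager._mock_state['mystate'] == 5  # type: ignore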
Note: The timers and reminders will never actually trigger. The dictionaries exist only so methods that should add or remove timers/reminders can be tested. If you need to test the callbacks they should activate, you should call them directly with the appropriate values:
result = await mock_actor.receive_reminder(name, state, due_time, period, _ttl)
# Test the result directly or test for side effects (like changing state) by querying `_state_manager._mock_state`
Usage and Limitations
To allow for more fine-grained control, the _on_activate method will not be called automatically the way it is when Dapr initializes a new Actor instance. You should call it manually as needed as part of your tests.
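For example (illustrative):
# Run the actor's activation logic manually before exercising methods that depend on it
await mock_actor._on_activate()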
A current limitation of the mock actor system is that it does not call the _on_pre_actor_method and _on_post_actor_method methods. You can always call these methods manually as part of a test.
The __init__, register_timer, unregister_timer, register_reminder, and unregister_reminder methods are all overwritten by the MockActor class that gets applied as a mixin via create_mock_actor. If your actor itself overwrites these methods, those modifications will themselves be overwritten and the actor will likely not behave as you expect.
Note: __init__ is a special case where you are expected to define it as:
def __init__(self, ctx, actor_id):
    super().__init__(ctx, actor_id)
Mock actors work fine with this, but if you have added any extra logic into __init__, it will be overwritten. It is worth noting that the correct way to apply logic on initialization is via _on_activate (which can also be safely used with mock actors) instead of __init__.
If you have an actor which does override default Dapr actor methods, you can create a custom subclass of the MockActor class (from MockActor.py) which implements whatever custom logic you have along with interacting with _mock_state, _mock_timers, and _mock_reminders as normal, and then apply that custom class as a mixin via a create_mock_actor function you define yourself.
The actor _runtime_ctx variable is set to None. All the normal actor methods have been overwritten so as not to call it, but if your code itself interacts directly with _runtime_ctx, tests may fail.
The actor _state_manager is overwritten with an instance of MockStateManager. This has all the same methods and functionality of the base ActorStateManager, except for using the various _mock variables for storing data instead of the _runtime_ctx. If your code implements its own custom state manager, it will be overwritten and tests will likely fail.
Type Hinting
Because of Python’s lack of a unified method for type hinting type intersections (see: python/typing #213), type hinting unfortunately doesn’t work with Mock Actors. The return type is type hinted as “instance of Actor subclass T” when it should really be type hinted as “instance of MockActor subclass T” or “instance of type intersection [Actor subclass T, MockActor]” (where, it is worth noting, MockActor is itself a subclass of Actor).
This means that, for example, if you hover over mockactor._state_manager in a code editor, it will come up as an instance of ActorStateManager (instead of MockStateManager), and various IDE helper functions (like VSCode’s Go to Definition, which will bring you to the definition of ActorStateManager instead of MockStateManager) won’t work properly.
For now, this issue is unfixable, so it’s merely something to be noted because of the confusion it might cause. If in the future it becomes possible to accurately type hint cases like this feel free to open an issue about implementing it.
2.6.3 - Dapr Python SDK extensions
2.6.3.1 - Getting started with the Dapr Python gRPC service extension
The Dapr Python SDK provides a built-in gRPC server extension, dapr.ext.grpc, for creating Dapr services.
Installation
You can download and install the Dapr gRPC server extension with:
pip install dapr-ext-grpc
Note
The development package will contain features and behavior that will be compatible with the pre-release version of the Dapr runtime. Make sure to uninstall any stable versions of the Python SDK extension before installing the development package:
pip3 install dapr-ext-grpc-dev
Examples
The App object can be used to create a server.
Listen for service invocation requests
The InvokeMethodRequest and InvokeMethodResponse objects can be used to handle incoming requests.
A simple service that will listen and respond to requests will look like:
from dapr.ext.grpc import App, InvokeMethodRequest, InvokeMethodResponse
app = App()

@app.method(name='my-method')
def mymethod(request: InvokeMethodRequest) -> InvokeMethodResponse:
    print(request.metadata, flush=True)
    print(request.text(), flush=True)

    return InvokeMethodResponse(b'INVOKE_RECEIVED', "text/plain; charset=UTF-8")

app.run(50051)
A full sample can be found here.
Subscribe to a topic
When subscribing to a topic, you can instruct Dapr whether the delivered event should be accepted, dropped, or retried later.
from typing import Optional

from cloudevents.sdk.event import v1
from dapr.ext.grpc import App, Rule
from dapr.clients.grpc._response import TopicEventResponse

app = App()

# Default subscription for a topic
@app.subscribe(pubsub_name='pubsub', topic='TOPIC_A')
def mytopic(event: v1.Event) -> Optional[TopicEventResponse]:
    print(event.Data(), flush=True)
    # Returning None (or not doing a return explicitly) is equivalent
    # to returning a TopicEventResponse("success").
    # You can also return TopicEventResponse("retry") for dapr to log
    # the message and retry delivery later, or TopicEventResponse("drop")
    # for it to drop the message
    return TopicEventResponse("success")

# Specific handler using Pub/Sub routing
@app.subscribe(pubsub_name='pubsub', topic='TOPIC_A',
               rule=Rule("event.type == \"important\"", 1))
def mytopic_important(event: v1.Event) -> None:
    print(event.Data(), flush=True)

# Handler with disabled topic validation
@app.subscribe(pubsub_name='pubsub-mqtt', topic='topic/#', disable_topic_validation=True)
def mytopic_wildcard(event: v1.Event) -> None:
    print(event.Data(), flush=True)

app.run(50051)
A full sample can be found here.
Setup input binding trigger
from dapr.ext.grpc import App, BindingRequest
app = App()

@app.binding('kafkaBinding')
def binding(request: BindingRequest):
    print(request.text(), flush=True)

app.run(50051)
A full sample can be found here.
Related links
2.6.3.2 - Dapr Python SDK integration with FastAPI
The Dapr Python SDK provides integration with FastAPI using the dapr-ext-fastapi extension.
Installation
You can download and install the Dapr FastAPI extension with:
pip install dapr-ext-fastapi
Note
The development package will contain features and behavior that will be compatible with the pre-release version of the Dapr runtime. Make sure to uninstall any stable versions of the Python SDK extension before installing the development package:
pip install dapr-ext-fastapi-dev
Example
Subscribing to events of different types
import uvicorn
from fastapi import Body, FastAPI
from dapr.ext.fastapi import DaprApp
from pydantic import BaseModel

class RawEventModel(BaseModel):
    body: str

class User(BaseModel):
    id: int
    name: str

class CloudEventModel(BaseModel):
    data: User
    datacontenttype: str
    id: str
    pubsubname: str
    source: str
    specversion: str
    topic: str
    traceid: str
    traceparent: str
    tracestate: str
    type: str

app = FastAPI()
dapr_app = DaprApp(app)

# Allow handling event with any structure (Easiest, but least robust)
# dapr publish --publish-app-id sample --topic any_topic --pubsub pubsub --data '{"id":"7", "desc": "good", "size":"small"}'
@dapr_app.subscribe(pubsub='pubsub', topic='any_topic')
def any_event_handler(event_data = Body()):
    print(event_data)

# For robustness choose one of the below based on if publisher is using CloudEvents

# Handle events sent with CloudEvents
# dapr publish --publish-app-id sample --topic cloud_topic --pubsub pubsub --data '{"id":"7", "name":"Bob Jones"}'
@dapr_app.subscribe(pubsub='pubsub', topic='cloud_topic')
def cloud_event_handler(event_data: CloudEventModel):
    print(event_data)

# Handle raw events sent without CloudEvents
# curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/raw_topic?metadata.rawPayload=true -H "Content-Type: application/json" -d '{"body": "345"}'
@dapr_app.subscribe(pubsub='pubsub', topic='raw_topic')
def raw_event_handler(event_data: RawEventModel):
    print(event_data)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=30212)
Creating an actor
from fastapi import FastAPI
from dapr.ext.fastapi import DaprActor
from demo_actor import DemoActor

app = FastAPI(title=f'{DemoActor.__name__}Service')

# Add Dapr Actor Extension
actor = DaprActor(app)

@app.on_event("startup")
async def startup_event():
    # Register DemoActor
    await actor.register_actor(DemoActor)

@app.get("/GetMyData")
def get_my_data():
    return "{'message': 'myData'}"
2.6.3.3 - Dapr Python SDK integration with Flask
The Dapr Python SDK provides integration with Flask using the flask-dapr extension.
Installation
You can download and install the Dapr Flask extension with:
pip install flask-dapr
Note
The development package will contain features and behavior that will be compatible with the pre-release version of the Dapr runtime. Make sure to uninstall any stable versions of the Python SDK extension before installing the development package:
pip install flask-dapr-dev
Example
from flask import Flask
from flask_dapr.actor import DaprActor
from dapr.conf import settings
from demo_actor import DemoActor

app = Flask(f'{DemoActor.__name__}Service')

# Enable DaprActor Flask extension
actor = DaprActor(app)

# Register DemoActor
actor.register_actor(DemoActor)

# Setup method route
@app.route('/GetMyData', methods=['GET'])
def get_my_data():
    return {'message': 'myData'}, 200

# Run application
if __name__ == '__main__':
    app.run(port=settings.HTTP_APP_PORT)
2.6.3.4 - Dapr Python SDK integration with Dapr Workflow extension
The Dapr Python SDK provides a built-in Dapr Workflow extension, dapr.ext.workflow, for creating Dapr services.
Installation
You can download and install the Dapr Workflow extension with:
pip install dapr-ext-workflow
Note
The development package will contain features and behavior that will be compatible with the pre-release version of the Dapr runtime. Make sure to uninstall any stable versions of the Python SDK extension before installing the development package:
pip install dapr-ext-workflow-dev
Example
from time import sleep

import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.workflow(name='random_workflow')
def task_chain_workflow(ctx: wf.DaprWorkflowContext, wf_input: int):
    try:
        result1 = yield ctx.call_activity(step1, input=wf_input)
        result2 = yield ctx.call_activity(step2, input=result1)
    except Exception as e:
        yield ctx.call_activity(error_handler, input=str(e))
        raise
    return [result1, result2]

@wfr.activity(name='step1')
def step1(ctx, activity_input):
    print(f'Step 1: Received input: {activity_input}.')
    # Do some work
    return activity_input + 1

@wfr.activity
def step2(ctx, activity_input):
    print(f'Step 2: Received input: {activity_input}.')
    # Do some work
    return activity_input * 2

@wfr.activity
def error_handler(ctx, error):
    print(f'Executing error handler: {error}.')
    # Do some compensating work

if __name__ == '__main__':
    wfr.start()
    sleep(10)  # wait for workflow runtime to start

    wf_client = wf.DaprWorkflowClient()
    instance_id = wf_client.schedule_new_workflow(workflow=task_chain_workflow, input=42)
    print(f'Workflow started. Instance ID: {instance_id}')

    state = wf_client.wait_for_workflow_completion(instance_id)
    print(f'Workflow completed! Status: {state.runtime_status}')

    wfr.shutdown()
- Learn more about authoring and managing workflows:
- Visit Python SDK examples for code samples and instructions to try out Dapr Workflow:
Next steps
Getting started with the Dapr Workflow Python SDK
2.6.3.4.1 - Getting started with the Dapr Workflow Python SDK
Let’s create a Dapr workflow and invoke it using the console. With the provided workflow example, you will:
- Run a Python console application that demonstrates workflow orchestration with activities, child workflows, and external events
- Learn how to handle retries, timeouts, and workflow state management
- Use the Python workflow SDK to start, pause, resume, and purge workflow instances
This example uses the default configuration from dapr init in self-hosted mode.
In the Python example project, the simple.py file contains the setup of the app, including:
- The workflow definition
- The workflow activity definitions
- The registration of the workflow and workflow activities
Prerequisites
- Dapr CLI installed
- Initialized Dapr environment
- Python 3.9+ installed
- Dapr Python package and the workflow extension installed
- Verify you’re using the latest proto bindings
Set up the environment
Start by cloning the Python SDK repo.
git clone https://github.com/dapr/python-sdk.git
From the Python SDK root directory, navigate to the Dapr Workflow example.
cd examples/workflow
Run the following command to install the requirements for running this workflow sample with the Dapr Python SDK.
pip3 install -r workflow/requirements.txt
Run the application locally
To run the Dapr application, you need to start the Python program and a Dapr sidecar. In the terminal, run:
dapr run --app-id wf-simple-example --dapr-grpc-port 50001 --resources-path components -- python3 simple.py
Note: Since Python3.exe is not defined in Windows, you may need to use python simple.py instead of python3 simple.py.
Expected output
- "== APP == Hi Counter!"
- "== APP == New counter value is: 1!"
- "== APP == New counter value is: 11!"
- "== APP == Retry count value is: 0!"
- "== APP == Retry count value is: 1! This print statement verifies retry"
- "== APP == Appending 1 to child_orchestrator_string!"
- "== APP == Appending a to child_orchestrator_string!"
- "== APP == Appending a to child_orchestrator_string!"
- "== APP == Appending 2 to child_orchestrator_string!"
- "== APP == Appending b to child_orchestrator_string!"
- "== APP == Appending b to child_orchestrator_string!"
- "== APP == Appending 3 to child_orchestrator_string!"
- "== APP == Appending c to child_orchestrator_string!"
- "== APP == Appending c to child_orchestrator_string!"
- "== APP == Get response from hello_world_wf after pause call: Suspended"
- "== APP == Get response from hello_world_wf after resume call: Running"
- "== APP == New counter value is: 111!"
- "== APP == New counter value is: 1111!"
- "== APP == Workflow completed! Result: "Completed"
What happened?
When you run the application, several key workflow features are shown:
Workflow and Activity Registration: The application uses Python decorators to automatically register workflows and activities with the runtime. This decorator-based approach provides a clean, declarative way to define your workflow components:
@wfr.workflow(name='hello_world_wf')
def hello_world_wf(ctx: DaprWorkflowContext, wf_input):
    # Workflow definition...

@wfr.activity(name='hello_act')
def hello_act(ctx: WorkflowActivityContext, wf_input):
    # Activity definition...
Runtime Setup: The application initializes the workflow runtime and client:
wfr = WorkflowRuntime()
wfr.start()
wf_client = DaprWorkflowClient()
Activity Execution: The workflow executes a series of activities that increment a counter:
@wfr.workflow(name='hello_world_wf')
def hello_world_wf(ctx: DaprWorkflowContext, wf_input):
    yield ctx.call_activity(hello_act, input=1)
    yield ctx.call_activity(hello_act, input=10)
Retry Logic: The workflow demonstrates error handling with a retry policy:
retry_policy = RetryPolicy(
    first_retry_interval=timedelta(seconds=1),
    max_number_of_attempts=3,
    backoff_coefficient=2,
    max_retry_interval=timedelta(seconds=10),
    retry_timeout=timedelta(seconds=100),
)
yield ctx.call_activity(hello_retryable_act, retry_policy=retry_policy)
Child Workflow: A child workflow is executed with its own retry policy:
yield ctx.call_child_workflow(child_retryable_wf, retry_policy=retry_policy)
External Event Handling: The workflow waits for an external event with a timeout:
event = ctx.wait_for_external_event(event_name)
timeout = ctx.create_timer(timedelta(seconds=30))
winner = yield when_any([event, timeout])
Workflow Lifecycle Management: The example demonstrates how to pause and resume the workflow:
wf_client.pause_workflow(instance_id=instance_id)
metadata = wf_client.get_workflow_state(instance_id=instance_id)
# ... check status ...
wf_client.resume_workflow(instance_id=instance_id)
Event Raising: After resuming, the workflow raises an event:
wf_client.raise_workflow_event(
    instance_id=instance_id,
    event_name=event_name,
    data=event_data
)
Completion and Cleanup: Finally, the workflow waits for completion and cleans up:
state = wf_client.wait_for_workflow_completion(
    instance_id,
    timeout_in_seconds=30
)
wf_client.purge_workflow(instance_id=instance_id)
Next steps
2.7 - Dapr Rust SDK
Note
The Dapr Rust SDK is currently in Alpha. Work is underway to bring it to a stable release, which will likely involve breaking changes.
A client library to help build Dapr applications using Rust. This client is targeting support for all public Dapr APIs while focusing on idiomatic Rust experiences and developer productivity.
Client
Use the Rust Client SDK for invoking public Dapr APIs. Learn more about the Rust Client SDK: https://v1-16.docs.dapr.io/developing-applications/sdks/rust/rust-client/
2.7.1 - Getting started with the Dapr client Rust SDK
The Dapr client package allows you to interact with other Dapr applications from a Rust application.
Note
The Dapr Rust SDK is currently in Alpha. Work is underway to bring it to a stable release, which will likely involve breaking changes.
Prerequisites
- Dapr CLI installed
- Initialized Dapr environment
- Rust installed
Import the client package
Add Dapr to your Cargo.toml:
[dependencies]
# Other dependencies
dapr = "0.16.0"
You can either reference dapr::Client or bind the full path to a new name as follows:
use dapr::Client as DaprClient;
Instantiating the Dapr client
let addr = "https://127.0.0.1".to_string();
let mut client = dapr::Client::<dapr::client::TonicClient>::connect(addr,
port).await?;
Alternatively if you would like to specify a custom port, this can be done by using this connect method:
let mut client = dapr::Client::<dapr::client::TonicClient>::connect_with_port(addr, "3500".to_string()).await?;
Building blocks
The Rust SDK allows you to interface with the Dapr building blocks.
Service Invocation (gRPC)
To invoke a specific method on another service running with Dapr sidecar, the Dapr client provides two options:
Invoke a (gRPC) service
let response = client
    .invoke_service("service-to-invoke", "method-to-invoke", Some(data))
    .await
    .unwrap();
For a full guide on service invocation, visit How-To: Invoke a service.
State Management
The Dapr Client provides access to these state management methods: save_state, get_state, and delete_state, which can be used like so:
let store_name = String::from("statestore");
let key = String::from("hello");
let val = String::from("world").into_bytes();

// save key-value pair in the state store
client
    .save_state(store_name, key, val, None, None, None)
    .await?;

let get_response = client
    .get_state("statestore", "hello", None)
    .await?;

// delete a value from the state store
client
    .delete_state("statestore", "hello", None)
    .await?;
Multiple states can be sent with the save_bulk_states method.
For a full guide on state management, visit How-To: Save & get state.
Publish Messages
To publish data onto a topic, the Dapr client provides a simple method:
let pubsub_name = "pubsub-name".to_string();
let pubsub_topic = "topic-name".to_string();
let pubsub_content_type = "text/plain".to_string();

let data = "content".to_string().into_bytes();

client
    .publish_event(pubsub_name, pubsub_topic, pubsub_content_type, data, None)
    .await?;
For a full guide on pub/sub, visit How-To: Publish & subscribe.
Related links
3 - Dapr Agents
What is Dapr Agents?
Dapr Agents is a framework for building LLM-powered autonomous agentic applications using Dapr’s distributed systems capabilities. It provides tools for creating AI agents that can execute tasks, make decisions, and collaborate through workflows, while leveraging Dapr’s state management, messaging, and observability features for reliable execution at scale.
3.1 - Introduction
Dapr Agents is a developer framework for building production-grade, resilient AI agent systems powered by Large Language Models (LLMs). Built on the battle-tested Dapr project, it enables developers to create autonomous systems that reason through problems, make dynamic decisions, and collaborate seamlessly. It includes built-in observability and stateful workflow execution to ensure agentic workflows complete successfully, regardless of complexity. Whether you’re developing single-agent applications or complex multi-agent workflows, Dapr Agents provides the infrastructure for intelligent, adaptive systems that scale across environments.
Core Capabilities
- Scale and Efficiency: Run thousands of agents efficiently on a single core. Dapr distributes single and multi-agent apps transparently across fleets of machines and handles their lifecycle.
- Workflow Resilience: Automatically retries agentic workflows and ensures task completion.
- Data-Driven Agents: Directly integrate with databases, documents, and unstructured data by connecting to dozens of different data sources.
- Multi-Agent Systems: Secure and observable by default, enabling collaboration between agents.
- Kubernetes-Native: Easily deploy and manage agents in Kubernetes environments.
- Platform-Ready: Access scopes and declarative resources enable platform teams to integrate Dapr agents into their systems.
- Vendor-Neutral & Open Source: Avoid vendor lock-in and gain flexibility across cloud and on-premises deployments.
Key Features
Dapr Agents provides specialized modules designed for creating intelligent, autonomous systems. Each module is designed to work independently, allowing you to use any combination that fits your application needs.
Building Block | Description |
---|---|
LLM Integration | Uses Dapr Conversation API to abstract LLM inference APIs for chat completion, or provides native clients for other LLM integrations such as embeddings, audio, etc. |
Structured Outputs | Leverage capabilities like OpenAI’s Function Calling to generate predictable, reliable results following JSON Schema and OpenAPI standards for tool integration. |
Tool Selection | Dynamic tool selection based on requirements, best action, and execution through Function Calling capabilities. |
MCP Support | Built-in support for Model Context Protocol enabling agents to dynamically discover and invoke external tools through standardized interfaces. |
Memory Management | Retain context across interactions with options from simple in-memory lists to vector databases, integrating with Dapr state stores for scalable, persistent memory. |
Durable Agents | Workflow-backed agents that provide fault-tolerant execution with persistent state management and automatic retry mechanisms for long-running processes. |
Headless Agents | Expose agents over REST for long-running tasks, enabling programmatic access and integration without requiring user interfaces or human intervention. |
Event-Driven Communication | Enable agent collaboration through Pub/Sub messaging for event-driven communication, task distribution, and real-time coordination in distributed systems. |
Agent Orchestration | Deterministic agent orchestration using Dapr Workflows with higher-level tasks that interact with LLMs for complex multi-step processes. |
Agentic Patterns
Dapr Agents enables a comprehensive set of patterns that represent different approaches to building intelligent systems.

These patterns exist along a spectrum of autonomy, from predictable workflow-based approaches to fully autonomous agents that can dynamically plan and execute their own strategies. Each pattern addresses specific use cases and offers different trade-offs between deterministic outcomes and autonomy:
Pattern | Description |
---|---|
Augmented LLM | Enhances a language model with external capabilities like memory and tools, providing a foundation for AI-driven applications. |
Prompt Chaining | Decomposes complex tasks into a sequence of steps where each LLM call processes the output of the previous one. |
Routing | Classifies inputs and directs them to specialized follow-up tasks, enabling separation of concerns and expert specialization. |
Parallelization | Processes multiple dimensions of a problem simultaneously with outputs aggregated programmatically for improved efficiency. |
Orchestrator-Workers | Features a central orchestrator LLM that dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes results. |
Evaluator-Optimizer | Implements a dual-LLM process where one model generates responses while another provides evaluation and feedback in an iterative loop. |
Durable Agent | Extends the Augmented LLM by adding durability and persistence to agent interactions using Dapr’s state stores. |
Developer Experience
Dapr Agents is a Python framework built on top of the Python Dapr SDK, providing a comprehensive development experience for building agentic systems.
Getting Started
Get started with Dapr Agents by following the instructions on the Getting Started page.
Framework Integrations
Dapr Agents integrates with popular Python frameworks and tools. For detailed integration guides and examples, see the integrations page.
Operational Support
Dapr Agents inherits Dapr’s enterprise-grade operational capabilities, providing comprehensive support for production deployments of agentic systems.
Built-in Operational Features
- Observability - Distributed tracing, metrics collection, and logging for agent interactions and workflow execution
- Security - mTLS encryption, access control, and secrets management for secure agent communication
- Resiliency - Automatic retries, circuit breakers, and timeout policies for fault-tolerant agent operations
- Infrastructure Abstraction - Dapr components abstract LLM providers, memory stores, storage and messaging backends, enabling seamless transitions between development and production environments
These capabilities enable teams to monitor agent performance, secure multi-agent communications, and ensure reliable execution of complex agentic workflows in production environments.
Contributing
Whether you’re interested in enhancing the framework, adding new integrations, or improving documentation, we welcome contributions from the community.
For development setup and guidelines, see our Contributor Guide.
3.2 - Getting Started
Dapr Agents Concepts
If you are looking for an introductory overview of Dapr Agents and want to learn more about basic Dapr Agents terminology, we recommend starting with the introduction and concepts sections.
Install Dapr CLI
While simple examples in Dapr Agents can be used without the sidecar, the recommended mode is with the Dapr sidecar. To benefit from the full power of Dapr Agents, install the Dapr CLI for running Dapr locally or on Kubernetes for development purposes. For a complete step-by-step guide, follow the Dapr CLI installation page.
Verify the CLI is installed by restarting your terminal/command prompt and running the following:
dapr -h
Initialize Dapr in Local Mode
Note
Make sure you have Docker already installed.
Initialize Dapr locally to set up a self-hosted environment for development. This process fetches and installs the Dapr sidecar binaries, runs essential services as Docker containers, and prepares a default components folder for your application. For detailed steps, see the official guide on initializing Dapr locally.
To initialize the Dapr control plane containers and create a default configuration file, run:
dapr init
Verify you have container instances with daprio/dapr, openzipkin/zipkin, and redis images running:
docker ps
Install Python
Note
Make sure you have Python >= 3.10 already installed. For installation instructions, visit the official Python installation guide.
Create Your First Dapr Agent
Let’s create a weather assistant agent that demonstrates tool calling with Dapr state management used for conversation memory.
1. Create the environment file
Create a .env file with your OpenAI API key:
OPENAI_API_KEY=your_api_key_here
This API key is essential for agents to communicate with the LLM, as the default LLM client in the agent uses OpenAI’s services. If you don’t have an API key, you can create one here.
2. Create the Dapr component
Create a components directory and add historystore.yaml:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: historystore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
This component will be used to store the conversation history, as LLMs are stateless and every chat interaction needs to send all the previous conversations to maintain context.
3. Create the agent with weather tool
Create weather_agent.py:
import asyncio
from dapr_agents import tool, Agent
from dapr_agents.memory import ConversationDaprStateMemory
from dotenv import load_dotenv

load_dotenv()

@tool
def get_weather() -> str:
    """Get current weather."""
    return "It's 72°F and sunny"

async def main():
    agent = Agent(
        name="WeatherAgent",
        role="Weather Assistant",
        instructions=["Help users with weather information"],
        memory=ConversationDaprStateMemory(store_name="historystore", session_id="hello-world"),
        tools=[get_weather],
    )

    # First interaction
    response1 = await agent.run("Hi! My name is John. What's the weather?")
    print(f"Agent: {response1}")

    # Second interaction - agent should remember the name
    response2 = await agent.run("What's my name?")
    print(f"Agent: {response2}")

if __name__ == "__main__":
    asyncio.run(main())
This code creates an agent with a single weather tool and uses Dapr for memory persistence.
4. Set up a virtual environment and install dapr-agents
For the latest version of Dapr Agents, check the PyPI page.
Create a requirements.txt file with the necessary dependencies:
dapr-agents
python-dotenv
Create and activate a virtual environment, then install the dependencies:
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
5. Run with Dapr
dapr run --app-id weatheragent --resources-path ./components -- python weather_agent.py
This command starts a Dapr sidecar with the conversation component and launches the agent that communicates with the sidecar for state persistence. Notice how in the agent’s responses, it remembers the user’s name from the first chat interaction, demonstrating the conversation memory in action.
6. Enable Redis Insights (Optional)
Dapr uses Redis by default for state management and pub/sub messaging, which are fundamental to Dapr Agents’s agentic workflows. To inspect the Redis instance, a great tool to use is Redis Insight, and you can use it to inspect the agent memory populated earlier. To run Redis Insights, run:
docker run --rm -d --name redisinsight -p 5540:5540 redis/redisinsight:latest
Once running, access the Redis Insight interface at http://localhost:5540/
Inside Redis Insight, you can connect to a Redis instance, so let’s connect to the one used by the agent:
- Port: 6379
- Host (Linux): 172.17.0.1
- Host (Windows/Mac): host.docker.internal (for example, host.docker.internal:6379)
Redis Insight makes it easy to visualize and manage the data powering your agentic workflows, ensuring efficient debugging, monitoring, and optimization.
Here you can browse the state store used in the agent and explore its data.
Next Steps
Now that you have Dapr Agents installed and running, explore more advanced examples and patterns in the quickstarts section to learn about multi-agent workflows, durable agents, and integration with Dapr’s powerful distributed capabilities.
3.3 - Why Dapr Agents
Dapr Agents is an open-source framework for building and orchestrating LLM-based autonomous agents that leverages Dapr’s proven distributed systems foundation. Unlike other agentic frameworks that require developers to build infrastructure from scratch, Dapr Agents enables teams to focus on agent intelligence by providing enterprise-grade scalability, state management, and messaging capabilities out of the box. This approach eliminates the complexity of recreating distributed system fundamentals while delivering production-ready agentic workflows.
Challenges with Existing Frameworks
Many agentic frameworks today attempt to redefine how microservices are built and orchestrated by developing their own platforms for core distributed system capabilities. While these efforts showcase innovation, they often lead to steep learning curves, fragmented systems, and unnecessary complexity when scaling or adapting to new environments.
These frameworks require developers to adopt entirely new paradigms or recreate foundational infrastructure, rather than building on existing solutions that are proven to handle these challenges at scale. This added complexity diverts focus from the primary goal: designing and implementing intelligent, effective agents.
How Dapr Agents Solves It
Dapr Agents takes a different approach by building on Dapr, leveraging its proven APIs and patterns including workflows, pub/sub messaging, state management, and service communication. This integration eliminates the need to recreate foundational components from scratch.
By integrating with Dapr’s runtime and modular components, Dapr Agents empowers developers to build and deploy agents that work as collaborative services within larger systems. Whether experimenting with a single agent or orchestrating workflows involving multiple agents, Dapr Agents allows teams to concentrate on the intelligence and behavior of LLM-powered agents while leveraging a proven framework for scalability and reliability.
Principles
Agent-Centric Design
Dapr Agents is designed to place agents, powered by LLMs, at the core of task execution and workflow orchestration. This principle emphasizes:
- LLM-Powered Agents: Dapr Agents enables the creation of agents that leverage LLMs for reasoning, dynamic decision-making, and natural language interactions.
- Adaptive Task Handling: Agents in Dapr Agents are equipped with flexible patterns like tool calling and reasoning loops (e.g., ReAct), allowing them to autonomously tackle complex and evolving tasks.
- Multi-agent Systems: Dapr Agents’ framework allows agents to act as modular, reusable building blocks that integrate seamlessly into workflows, whether they operate independently or collaboratively.
While Dapr Agents centers around agents, it also recognizes the versatility of using LLMs directly in deterministic workflows or simpler task sequences. In scenarios where the agent’s built-in task-handling patterns, like tool calling or ReAct loops, are unnecessary, LLMs can act as core components for reasoning and decision-making. This flexibility ensures users can adapt Dapr Agents to suit diverse needs without being confined to a single approach.
Note
Agents can be used standalone and create workflows behind the scenes, or act as autonomous steps in deterministic workflows.
Backed by Durable Workflows
Dapr Agents places durability at the core of its architecture, leveraging Dapr Workflows as the foundation for durable agent execution and deterministic multi-agent orchestration.
- Durable Agent Execution: DurableAgents are fundamentally workflow-backed, ensuring all LLM calls and tool executions remain durable, auditable, and resumable. Workflow checkpointing guarantees agents can recover from any point of failure while maintaining state consistency.
- Deterministic Multi-Agent Orchestration: Workflows provide centralized control over task dependencies and coordination between multiple agents. Dapr’s code-first workflow engine enables reliable orchestration of complex business processes while preserving agent autonomy where appropriate.
By integrating workflows as the foundational layer, Dapr Agents enables systems that combine the reliability of deterministic execution with the intelligence of LLM-powered agents, ensuring production-grade reliability and scalability.
Note
Workflows in Dapr Agents provide the foundation for building production-ready agentic systems that combine reliable execution with LLM-powered intelligence.
Modular Component Model
Dapr Agents utilizes Dapr’s pluggable component framework and building blocks to simplify development and enhance flexibility:
- Building Blocks for Core Functionality: Dapr provides API building blocks, such as Pub/Sub messaging, state management, service invocation, and more, to address common microservice challenges and promote best practices.
- Interchangeable Components: Each building block operates on swappable components (e.g., Redis, Kafka, Azure CosmosDB), allowing you to replace implementations without changing application code.
- Seamless Transitions: Develop locally with default configurations and deploy effortlessly to cloud environments by simply updating component definitions.
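For instance, swapping a state backend is only a change to a Dapr component definition. The sketch below is a standard Redis state store component; the name historystore and the Redis address are illustrative values:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: historystore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
Replacing Redis with another provider means changing spec.type and its metadata; agent code that references the component by name stays untouched.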
Note
Developers can easily switch between different components (e.g., Redis to DynamoDB, or OpenAI to Anthropic) based on their deployment environment, ensuring portability and adaptability.
Message-Driven Communication
Dapr Agents emphasizes the use of Pub/Sub messaging for event-driven communication between agents. This principle ensures:
- Decoupled Architecture: Asynchronous communication for scalability and modularity.
- Real-Time Adaptability: Agents react dynamically to events for faster, more flexible task execution.
- Event-Driven Workflows: By combining Pub/Sub messaging with workflow capabilities, agents can collaborate through event streams while participating in larger orchestrated workflows, enabling both autonomous coordination and structured task execution.
Note
Pub/Sub messaging serves as the backbone for Dapr Agents' event-driven workflows, enabling agents to communicate and collaborate in real time while maintaining loose coupling.
Decoupled Infrastructure Design
Dapr Agents ensures a clean separation between agents and the underlying infrastructure, emphasizing simplicity, scalability, and adaptability:
- Agent Simplicity: Agents focus purely on reasoning and task execution, while Pub/Sub messaging, routing, and validation are managed externally by modular infrastructure components.
- Scalable and Adaptable Systems: By offloading non-agent-specific responsibilities, Dapr Agents allows agents to scale independently and adapt seamlessly to new use cases or integrations.
Note
Decoupling infrastructure keeps agents focused on tasks while enabling seamless scalability and integration across systems.
Dapr Agents Benefits
Scalable Workflows as First-Class Citizens
Dapr Agents uses a durable-execution workflow engine that guarantees each agent task executes to completion despite network interruptions, node crashes, and other disruptive failures. Developers do not need to understand the underlying workflow engine concepts: simply write an agent that performs any number of tasks, and these are automatically distributed across the cluster. If any task fails, it is retried and recovers its state from where it left off.
Cost-Effective AI Adoption
Dapr Agents builds on Dapr’s Workflow API, which represents each agent as an actor, a single unit of compute and state that is thread-safe and natively distributed. This design enables a scale-to-zero architecture that minimizes infrastructure costs, making AI adoption accessible to organizations of all sizes. The underlying virtual actor model allows thousands of agents to run on demand on a single machine with low latency when scaling from zero. When unused, agents are reclaimed by the system but retain their state until needed again. This design eliminates the trade-off between performance and resource efficiency.
Data-Centric AI Agents
With built-in connectivity to over 50 enterprise data sources, Dapr Agents efficiently handles structured and unstructured data. From basic PDF extraction to large-scale database interactions, it enables data-driven AI workflows with minimal code changes. Dapr’s bindings and state stores, along with MCP support, provide access to numerous data sources for agent data ingestion.
Accelerated Development
Dapr Agents provides AI features that give developers a complete API surface to tackle common problems, including:
- Flexible prompting
- Structured outputs
- Multiple LLM providers
- Contextual memory
- Intelligent tool selection
- MCP integration
- Multi-agent communications
Integrated Security and Reliability
By building on Dapr, platform and infrastructure teams can apply Dapr’s resiliency policies to the database and message broker components used by Dapr Agents. These policies include timeouts, retry/backoff strategies, and circuit breakers. For security, Dapr provides options to scope access to specific databases or message brokers to one or more agentic app deployments. Additionally, Dapr Agents uses mTLS to encrypt communication between its underlying components.
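As an illustration, the sketch below applies a retry policy to the state store component used by an agent; the policy and component names (agentRetry, workflowstatestore) are assumptions for this example:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: agent-resiliency
spec:
  policies:
    retries:
      agentRetry:
        policy: exponential
        maxInterval: 15s
        maxRetries: 10
  targets:
    components:
      workflowstatestore:
        outbound:
          retry: agentRetry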
Built-in Messaging and State Infrastructure
- Service-to-Service Invocation: Enables direct communication between agents with built-in service discovery, error handling, and distributed tracing. Agents can use this for synchronous messaging in multi-agent workflows.
- Publish and Subscribe: Supports loosely coupled collaboration between agents through a shared message bus. This enables real-time, event-driven interactions for task distribution and coordination.
- Durable Workflow: Defines long-running, persistent workflows that combine deterministic processes with LLM-based decision-making. Dapr Agents uses this to orchestrate complex multi-step agentic workflows.
- State Management: Provides a flexible key-value store for agents to retain context across interactions, ensuring continuity and adaptability during workflows.
- LLM Integration: Uses Dapr Conversation API to abstract LLM inference APIs for chat completion, and provides native clients for other LLM integrations such as embeddings and audio processing.
Vendor-Neutral and Open Source
As part of the CNCF, Dapr Agents is vendor-neutral, eliminating concerns about lock-in, intellectual property risks, or proprietary restrictions. Organizations gain full flexibility and control over their AI applications using open-source software they can audit and contribute to.
3.4 - Core Concepts
Dapr Agents provides a structured way to build and orchestrate applications that use LLMs without getting bogged down in infrastructure details. The primary goal is to simplify AI development by abstracting away the complexities of working with LLMs, tools, memory management, and distributed systems, allowing developers to focus on the business logic of their AI applications. Agents in this framework are the fundamental building blocks.
Agents
Agents are autonomous units powered by Large Language Models (LLMs), designed to execute tasks, reason through problems, and collaborate within workflows. Acting as intelligent building blocks, agents combine reasoning with tool integration, memory, and collaboration features to get to the desired outcome.
Dapr Agents provides two agent types, each designed for different use cases:
Agent
The standard Agent class is a conversational agent that manages tool calls and conversations using a language model. It provides synchronous execution with built-in conversation memory.
from dapr_agents import Agent, tool
# Memory import path assumed; it may differ between versions
from dapr_agents.memory import ConversationDaprStateMemory

@tool
def my_weather_func() -> str:
    """Get current weather."""
    return "It's 72°F and sunny"

async def main():
    weather_agent = Agent(
        name="WeatherAgent",
        role="Weather Assistant",
        instructions=["Help users with weather information"],
        tools=[my_weather_func],
        memory=ConversationDaprStateMemory(store_name="historystore", session_id="some-id"),
    )
    response1 = await weather_agent.run("What's the weather?")
    response2 = await weather_agent.run("How about now?")
This example shows how to create a simple agent with tool integration. The agent processes queries synchronously and maintains conversation context across multiple interactions using the Dapr State Store API.
Durable Agent
The DurableAgent class is a workflow-based agent that extends the standard Agent with Dapr Workflows for long-running, fault-tolerant, and durable execution. It provides persistent state management, automatic retry mechanisms, and deterministic execution across failures.
travel_planner = DurableAgent(
    name="TravelBuddy",
    role="Travel Planner",
    instructions=["Help users find flights and remember preferences"],
    tools=[search_flights],
    memory=ConversationDaprStateMemory(
        store_name="conversationstore", session_id="my-unique-id"
    ),
    # DurableAgent configuration
    message_bus_name="messagepubsub",
    state_store_name="workflowstatestore",
    state_key="workflow_state",
    agents_registry_store_name="registrystatestore",
    agents_registry_key="agents_registry",
)

travel_planner.as_service(port=8001)
await travel_planner.start()
This example demonstrates creating a workflow-backed agent that runs autonomously in the background. The agent can be triggered once and continues execution even across system restarts.
Key Characteristics:
- Workflow-based execution using Dapr Workflows
- Persistent workflow state management across sessions and failures
- Automatic retry and recovery mechanisms
- Deterministic execution with checkpointing
- Built-in message routing and agent communication
- Supports complex orchestration patterns and multi-agent collaboration
When to use:
- Multi-step workflows that span time or systems
- Tasks requiring guaranteed progress tracking and state persistence
- Scenarios where operations may pause, fail, or need recovery without data loss
- Complex agent orchestration and multi-agent collaboration
- Production systems requiring fault tolerance and scalability
In Summary:
Agent Type | Memory Type | Execution | Interaction Mode |
---|---|---|---|
Agent | In-memory or Persistent | Ephemeral | Synchronous / Conversational |
Durable Agent | In-memory or Persistent | Durable (Workflow-backed) | Asynchronous / Headless |
- Regular Agent: Interaction is synchronous; you send conversational prompts and receive responses immediately. The conversation can be stored in memory or persisted, but the execution is ephemeral and does not survive restarts.
- DurableAgent (workflow-backed): Interaction is asynchronous; you trigger the agent once, and it runs autonomously in the background until completion. The conversation state can also be in memory or persisted, but the execution is durable and can resume across failures or restarts.
Core Agent Features
An agentic system is a distributed system that requires a variety of behaviors and supporting infrastructure.
LLM Integration
Dapr Agents provides a unified interface to connect with LLM inference APIs. This abstraction allows developers to seamlessly integrate their agents with cutting-edge language models for reasoning and decision-making. The framework includes multiple LLM clients for different providers and modalities:
- OpenAIChatClient: Full spectrum support for OpenAI models including chat, embeddings, and audio
- HFHubChatClient: For Hugging Face models, supporting both chat and embeddings
- NVIDIAChatClient: For NVIDIA AI Foundation models, supporting local inference and chat
- ElevenLabs: Support for speech and voice capabilities
- DaprChatClient: Unified API for LLM interactions via Dapr's Conversation API with built-in security (scopes, secrets, PII obfuscation), resiliency (timeouts, retries, circuit breakers), and observability via OpenTelemetry & Prometheus
Prompt Flexibility
Dapr Agents supports flexible prompt templates to shape agent behavior and reasoning. Users can define placeholders within prompts, enabling dynamic input of context for inference calls. By leveraging prompt formatting with Jinja templates, users can include loops, conditions, and variables, providing precise control over the structure and content of prompts. This flexibility ensures that LLM responses are tailored to the task at hand, offering modularity and adaptability for diverse use cases.
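To illustrate the kind of templating involved, the standalone snippet below uses the jinja2 library directly; the template text and variables are invented for this example and are not part of the Dapr Agents API:
from jinja2 import Template

# Render a prompt with a variable, a loop, and a condition
prompt = Template(
    "You are a {{ role }}.\n"
    "{% for item in preferences %}- The user likes {{ item }}.\n{% endfor %}"
    "{% if urgent %}Answer briefly.{% endif %}\n"
    "Question: {{ question }}"
)
print(prompt.render(
    role="Travel Planner",
    preferences=["window seats", "budget airlines"],
    urgent=True,
    question="Find flights to Paris",
))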
Structured Outputs
Agents in Dapr Agents leverage structured output capabilities, such as OpenAI's Function Calling, to generate predictable and reliable results. These outputs follow JSON Schema Draft 2020-12 and OpenAPI Specification v3.1.0 standards, enabling easy interoperability and tool integration.
import json

from pydantic import BaseModel

# Client and message types assumed to be importable from these paths
from dapr_agents import OpenAIChatClient
from dapr_agents.types import UserMessage

# Define our data model
class Dog(BaseModel):
    name: str
    breed: str
    reason: str

# Initialize the chat client
llm = OpenAIChatClient()

# Get structured response
response = llm.generate(
    messages=[UserMessage("One famous dog in history.")], response_format=Dog
)
print(json.dumps(response.model_dump(), indent=2))
This demonstrates how LLMs generate structured data according to a schema. The Pydantic model (Dog) specifies the exact structure and data types expected, while the response_format parameter instructs the LLM to return data matching the model, ensuring consistent and predictable outputs for downstream processing.
Tool Calling
Tool Calling is an essential pattern in autonomous agent design, allowing AI agents to interact dynamically with external tools based on user input. Agents dynamically select the appropriate tool for a given task, using LLMs to analyze requirements and choose the best action.
from pydantic import BaseModel
from dapr_agents import tool

# Schema assumed for this example; not defined in the original snippet
class GetWeatherSchema(BaseModel):
    location: str

@tool(args_model=GetWeatherSchema)
def get_weather(location: str) -> str:
    """Get weather information based on location."""
    import random
    temperature = random.randint(60, 80)
    return f"{location}: {temperature}F."
Each tool has a descriptive docstring that helps the LLM understand when to use it. The @tool decorator marks a function as a tool, while the Pydantic model (GetWeatherSchema) defines input parameters for structured validation.
- The user submits a query specifying a task and the available tools.
- The LLM analyzes the query and selects the right tool for the task.
- The LLM provides a structured JSON output containing the tool’s unique ID, name, and arguments.
- The AI agent parses the JSON, executes the tool with the provided arguments, and sends the results back as a tool message.
- The LLM then summarizes the tool’s execution results within the user’s context to deliver a comprehensive final response.
This is supported directly through LLM parametric knowledge and enhanced by Function Calling, ensuring tools are invoked efficiently and accurately.
MCP Support
Dapr Agents includes built-in support for the Model Context Protocol (MCP), enabling agents to dynamically discover and invoke external tools through a standardized interface. Using the provided MCPClient, agents can connect to MCP servers via two transport options: stdio for local development and sse for remote or distributed environments.
# MCPClient import path assumed; it may differ between versions
from dapr_agents.tool.mcp import MCPClient

client = MCPClient()
await client.connect_sse("local", url="http://localhost:8000/sse")

# Convert MCP tools to AgentTool list
tools = client.get_all_tools()
Once connected, the MCP client fetches all available tools from the server and prepares them for immediate use within the agent's toolset. This allows agents to incorporate capabilities exposed by external processes, such as local Python scripts or remote services, without hardcoding or preloading them. Agents can invoke these tools at runtime, expanding their behavior based on what's offered by the active MCP server.
Memory
Agents retain context across interactions, enhancing their ability to provide coherent and adaptive responses. Memory options range from simple in-memory lists for managing chat history to vector databases for semantic search, and also include Dapr state stores, which provide scalable and persistent memory for advanced use cases across 28 different state store providers.
from dapr_agents import Agent
# Memory import paths assumed; they may differ between versions
from dapr_agents.memory import (
    ConversationListMemory,
    ConversationVectorMemory,
    ConversationDaprStateMemory,
)

# 1. ConversationListMemory (simple in-memory, the default)
memory_list = ConversationListMemory()

# 2. ConversationVectorMemory (vector store)
memory_vector = ConversationVectorMemory(
    vector_store=your_vector_store_instance,
    distance_metric="cosine"
)

# 3. ConversationDaprStateMemory (Dapr state store)
memory_dapr = ConversationDaprStateMemory(
    store_name="historystore",  # Maps to Dapr component name
    session_id="some-id"
)

# Using with an agent
agent = Agent(
    name="MyAgent",
    role="Assistant",
    memory=memory_dapr  # Pass any memory implementation
)
ConversationListMemory is the default memory implementation when none is specified. It provides fast, temporary storage in Python lists for development and testing. The memory implementations are interchangeable, allowing you to switch between them without modifying your agent logic.
Memory Implementation | Type | Persistence | Search | Use Case |
---|---|---|---|---|
ConversationListMemory (Default) | In-Memory | No | Linear | Development |
ConversationVectorMemory | Vector Store | Yes | Semantic | RAG/AI Apps |
ConversationDaprStateMemory | Dapr State Store | Yes | Query | Production |
Agent Services
DurableAgents are exposed as independent services using FastAPI and Dapr applications. This modular approach separates the agent's logic from its service layer, enabling seamless reuse, deployment, and integration into multi-agent systems.
travel_planner.as_service(port=8001)
await travel_planner.start()
This exposes the agent as a REST service, allowing other systems to interact with it through standard HTTP requests such as this one:
curl -i -X POST http://localhost:8001/start-workflow \
-H "Content-Type: application/json" \
-d '{"task": "I want to find flights to Paris"}'
Unlike conversational agents that provide immediate synchronous responses, durable agents operate as headless services that are triggered asynchronously. You trigger it, receive a workflow instance ID, and can track progress over time. This enables long-running, fault-tolerant operations that can span multiple systems and survive restarts, making them ideal for complex multi-step processes in production environments.
Multi-agent Systems (MAS)
While it’s tempting to build a fully autonomous agent capable of handling many tasks, in practice, it’s more effective to break this down into specialized agents equipped with appropriate tools and instructions, then coordinate interactions between multiple agents.
Multi-agent systems (MAS) distribute workflow execution across multiple coordinated agents to efficiently achieve shared objectives. This approach, called agent orchestration, enables better specialization, scalability, and maintainability compared to monolithic agent designs.
Dapr Agents supports two primary orchestration approaches via Dapr Workflows and Dapr PubSub:
- Deterministic Workflow-based Orchestration - Provides clear, repeatable processes with predefined sequences and decision points
- Event-driven Orchestration - Enables dynamic, adaptive collaboration through message-based coordination among agents
Both approaches utilize a central orchestrator that coordinates multiple specialized agents, each handling specific tasks or domains, ensuring efficient task distribution and seamless collaboration across the system.
Deterministic Workflows
Workflows are structured processes where LLM agents and tools collaborate in predefined sequences to accomplish complex tasks. Unlike fully autonomous agents that make all decisions independently, workflows provide a balance of structure and predictability from the workflow definition, intelligence and flexibility from LLM agents, and reliability and durability from Dapr’s workflow engine.
This approach is particularly suitable for business-critical applications where you need both the intelligence of LLMs and the reliability of traditional software systems.
# Define workflow logic
@workflow(name="task_chain_workflow")
def task_chain_workflow(ctx: DaprWorkflowContext):
    result1 = yield ctx.call_activity(get_character)
    result2 = yield ctx.call_activity(get_line, input={"character": result1})
    return result2

@task(description="Pick a random character from The Lord of the Rings and respond with the character's name only")
def get_character() -> str:
    pass

@task(description="What is a famous line by {character}")
def get_line(character: str) -> str:
    pass
This workflow demonstrates sequential task execution where the output of one task becomes the input for the next, enabling complex multi-step processes with clear dependencies and data flow.
Dapr Agents supports coordination of LLM interactions at different levels of granularity:
Prompt Tasks
Tasks created from prompts that leverage LLM reasoning capabilities for specific, well-defined operations.
@task(description="Pick a random character from The Lord of the Rings and respond with the character's name only")
def get_character() -> str:
    pass
While technically not full agents (as they lack tools and memory), prompt tasks serve as lightweight agentic building blocks that perform focused LLM interactions within the broader workflow context.
Agent Tasks
Tasks based on agents with tools, providing greater flexibility and capability for complex operations requiring external integrations.
@task(agent=custom_agent, description="Retrieve stock data for {ticker}")
def get_stock_data(ticker: str) -> dict:
    pass
Agent tasks enable workflows to leverage specialized agents with their own tools, memory, and reasoning capabilities while maintaining the structured coordination benefits of workflow orchestration.
Note: Agent tasks must use regular Agent instances, not DurableAgent instances, as workflows manage the execution context and durability through the Dapr workflow engine.
Workflow Patterns
Workflows enable the implementation of various agentic patterns through structured orchestration, including Prompt Chaining, Routing, Parallelization, Orchestrator-Workers, Evaluator-Optimizer, Human-in-the-loop, and others. For detailed implementations and examples of these patterns, see the Patterns documentation.
Workflows vs. Durable Agents
Both DurableAgent and workflow-based agent orchestration use Dapr workflows behind the scenes for durability and reliability, but they differ in how control flow is determined.
Aspect | Workflows | Durable Agents |
---|---|---|
Control | Developer-defined process flow | Agent determines next steps |
Predictability | Higher | Lower |
Flexibility | Fixed overall structure, flexible within steps | Completely flexible |
Reliability | Very high (workflow engine guarantees) | Depends on agent implementation |
Complexity | Simpler to reason about | Harder to debug and understand |
Use Cases | Business processes, regulated domains | Open-ended research, creative tasks |
The key difference lies in control flow determination: with DurableAgent, the workflow is created dynamically by the LLM’s planning decisions, executing entirely within a single agent context. In contrast, with deterministic workflows, the developer explicitly defines the coordination between one or more LLM interactions, providing structured orchestration across multiple tasks or agents.
Event-driven Orchestration
Event-driven agent orchestration enables multiple specialized agents to collaborate through asynchronous Pub/Sub messaging. This approach provides powerful collaborative problem-solving, parallel processing, and division of responsibilities among specialized agents through independent scaling, resilience via service isolation, and clear separation of responsibilities.
Core Participants
The core participants in this multi-agent coordination system are the following.
Durable Agents
Each agent runs as an independent service with its own lifecycle, configured as a standard DurableAgent with pub/sub enabled:
hobbit_service = DurableAgent(
    name="Frodo",
    instructions=["Speak like Frodo, with humility and determination."],
    message_bus_name="messagepubsub",
    state_store_name="workflowstatestore",
    state_key="workflow_state",
    agents_registry_store_name="agentstatestore",
    agents_registry_key="agents_registry",
)
Orchestrator
The orchestrator coordinates interactions between agents and manages conversation flow by selecting appropriate agents, managing interaction sequences, and tracking progress. Dapr Agents offers three orchestration strategies: Random, RoundRobin, and LLM-based orchestration.
llm_orchestrator = LLMOrchestrator(
    name="LLMOrchestrator",
    message_bus_name="messagepubsub",
    state_store_name="agenticworkflowstate",
    state_key="workflow_state",
    agents_registry_store_name="agentstatestore",
    agents_registry_key="agents_registry",
    max_iterations=3
)
The LLM-based orchestrator uses intelligent agent selection for context-aware decision making, while Random and RoundRobin provide alternative coordination strategies for simpler use cases.
Communication Flow
Agents communicate through an event-driven pub/sub system that enables asynchronous communication, decoupled architecture, scalable interactions, and reliable message delivery. The typical collaboration flow involves client query submission, orchestrator-driven agent selection, agent response processing, and iterative coordination until task completion.
This approach is particularly effective for complex problem solving requiring multiple expertise areas, creative collaboration from diverse perspectives, role-playing scenarios, and distributed processing of large tasks.
How Messaging Works
Messaging connects agents in workflows, enabling real-time communication and coordination. It acts as the backbone of event-driven interactions, ensuring that agents work together effectively without requiring direct connections.
Through messaging, agents can:
- Collaborate Across Tasks: Agents exchange messages to share updates, broadcast events, or deliver task results.
- Orchestrate Workflows: Tasks are triggered and coordinated through published messages, enabling workflows to adjust dynamically.
- Respond to Events: Agents adapt to real-time changes by subscribing to relevant topics and processing events as they occur.
By using messaging, workflows remain modular and scalable, with agents focusing on their specific roles while seamlessly participating in the broader system.
Message Bus and Topics
The message bus serves as the central system that manages topics and message delivery. Agents interact with the message bus to send and receive messages:
- Publishing Messages: Agents publish messages to a specific topic, making the information available to all subscribed agents.
- Subscribing to Topics: Agents subscribe to topics relevant to their roles, ensuring they only receive the messages they need.
- Broadcasting Updates: Multiple agents can subscribe to the same topic, allowing them to act on shared events or updates.
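Under the hood, this maps onto Dapr pub/sub. As a minimal sketch using the Dapr Python SDK directly (the component name messagepubsub matches the examples above; the topic name and payload are placeholders):
from dapr.clients import DaprClient

with DaprClient() as client:
    client.publish_event(
        pubsub_name="messagepubsub",      # Dapr pub/sub component
        topic_name="agent-task-updates",  # topic that other agents subscribe to
        data='{"agent": "Frodo", "status": "task completed"}',
        data_content_type="application/json",
    )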
Why Pub/Sub Messaging for Agentic Workflows?
Pub/Sub messaging is essential for event-driven agentic workflows because it:
- Decouples Components: Agents publish messages without needing to know which agents will receive them, promoting modular and scalable designs.
- Enables Real-Time Communication: Messages are delivered as events occur, allowing agents to react instantly.
- Fosters Collaboration: Multiple agents can subscribe to the same topic, making it easy to share updates or divide responsibilities.
- Enables Scalability: The message bus ensures that communication scales effortlessly, whether you are adding new agents, expanding workflows, or adapting to changing requirements. Agents remain loosely coupled, allowing workflows to evolve without disruptions.
This messaging framework ensures that agents operate efficiently, workflows remain flexible, and systems can scale dynamically.
3.5 - Agentic Patterns
Dapr Agents simplifies the implementation of agentic systems, from simple augmented LLMs to fully autonomous agents in enterprise environments. The following sections describe several application patterns that can benefit from Dapr Agents.
Overview
Agentic systems use design patterns such as reflection, tool use, planning, and multi-agent collaboration to achieve better results than simple single-prompt interactions. Rather than thinking of “agent” as a binary classification, it’s more useful to think of systems as being agentic to different degrees.
This ranges from simple workflows that prompt a model once, to sophisticated systems that can carry out multiple iterative steps with greater autonomy. There are two fundamental architectural approaches:
- Workflows: Systems where LLMs and tools are orchestrated through predefined code paths (more prescriptive)
- Agents: Systems where LLMs dynamically direct their own processes and tool usage (more autonomous)
On one end, we have predictable workflows with well-defined decision paths and deterministic outcomes. On the other end, we have AI agents that can dynamically direct their own strategies. While fully autonomous agents might seem appealing, workflows often provide better predictability and consistency for well-defined tasks. This aligns with enterprise requirements where reliability and maintainability are crucial.
The patterns in this documentation start with the Augmented LLM, then progress through workflow-based approaches that offer predictability and control, before moving toward more autonomous patterns. Each addresses specific use cases and offers different trade-offs between deterministic outcomes and autonomy.
Augmented LLM
The Augmented LLM pattern is the foundational building block for any kind of agentic system. It enhances a language model with external capabilities like memory and tools, providing a basic but powerful foundation for AI-driven applications.
This pattern is ideal for scenarios where you need an LLM with enhanced capabilities but don’t require complex orchestration or autonomous decision-making. The augmented LLM can access external tools, maintain conversation history, and provide consistent responses across interactions.
Use Cases:
- Personal assistants that remember user preferences
- Customer support agents that access product information
- Research tools that retrieve and analyze information
Implementation with Dapr Agents:
from typing import List

from pydantic import BaseModel

from dapr_agents import Agent, tool

# Minimal flight model assumed for this example
class FlightOption(BaseModel):
    airline: str
    price: float

@tool
def search_flights(destination: str) -> List[FlightOption]:
    """Search for flights to the specified destination."""
    # Mock flight data (would be an external API call in a real app)
    return [
        FlightOption(airline="SkyHighAir", price=450.00),
        FlightOption(airline="GlobalWings", price=375.50)
    ]

# Create agent with memory and tools
travel_planner = Agent(
    name="TravelBuddy",
    role="Travel Planner Assistant",
    instructions=["Remember destinations and help find flights"],
    tools=[search_flights],
)
Dapr Agents automatically handles:
- Agent configuration - Simple configuration with role and instructions guides the LLM behavior
- Memory persistence - The agent manages conversation memory
- Tool integration - The @tool decorator handles input validation, type conversion, and output formatting
The foundational building block of any agentic system is the Augmented LLM - a language model enhanced with external capabilities like memory, tools, and retrieval. In Dapr Agents, this is represented by the Agent class. However, while this provides essential capabilities, it alone is often not sufficient for complex enterprise scenarios. This is why it's typically combined with workflow orchestration that provides structure, reliability, and coordination for multi-step processes.
Prompt Chaining
The Prompt Chaining pattern addresses complex requirements by decomposing tasks into a sequence of steps, where each LLM call processes the output of the previous one. This pattern allows for better control of the overall process, validation between steps, and specialization of each step.
Use Cases:
- Content generation (creating outlines first, then expanding, then reviewing)
- Multi-stage analysis (performing complex analysis into sequential steps)
- Quality assurance workflows (adding validation between processing steps)
Implementation with Dapr Agents:
from dapr_agents import DaprWorkflowContext, workflow

@workflow(name='travel_planning_workflow')
def travel_planning_workflow(ctx: DaprWorkflowContext, user_input: str):
    # Step 1: Extract destination using a simple prompt (no agent)
    destination_text = yield ctx.call_activity(extract_destination, input=user_input)

    # Gate: Check if destination is valid
    if "paris" not in destination_text.lower():
        return "Unable to create itinerary: Destination not recognized or supported."

    # Step 2: Generate outline with planning agent (has tools)
    travel_outline = yield ctx.call_activity(create_travel_outline, input=destination_text)

    # Step 3: Expand into detailed plan with itinerary agent (no tools)
    detailed_itinerary = yield ctx.call_activity(expand_itinerary, input=travel_outline)

    return detailed_itinerary
The implementation showcases three different approaches:
- Basic prompt-based task (no agent)
- Agent-based task without tools
- Agent-based task with tools
Dapr Agents’ workflow orchestration provides:
- Workflow as Code - Tasks are defined in developer-friendly ways
- Workflow Persistence - Long-running chained tasks survive process restarts
- Hybrid Execution - Easily mix prompts, agent calls, and tool-equipped agents
Routing
The Routing pattern addresses diverse request types by classifying inputs and directing them to specialized follow-up tasks. This allows for separation of concerns and creates specialized experts for different types of queries.
Use Cases:
- Resource optimization (sending simple queries to smaller models)
- Multi-lingual support (routing queries to language-specific handlers)
- Customer support (directing different query types to specialized handlers)
- Content creation (routing writing tasks to topic specialists)
- Hybrid LLM systems (using different models for different tasks)
Implementation with Dapr Agents:
@workflow(name="travel_assistant_workflow")
def travel_assistant_workflow(ctx: DaprWorkflowContext, input_params: dict):
user_query = input_params.get("query")
# Classify the query type using an LLM
query_type = yield ctx.call_activity(classify_query, input={"query": user_query})
# Route to the appropriate specialized handler
if query_type == QueryType.ATTRACTIONS:
response = yield ctx.call_activity(
handle_attractions_query,
input={"query": user_query}
)
elif query_type == QueryType.ACCOMMODATIONS:
response = yield ctx.call_activity(
handle_accommodations_query,
input={"query": user_query}
)
elif query_type == QueryType.TRANSPORTATION:
response = yield ctx.call_activity(
handle_transportation_query,
input={"query": user_query}
)
else:
response = "I'm not sure how to help with that specific travel question."
return response
The advantages of Dapr’s approach include:
- Familiar Control Flow - Uses standard programming if-else constructs for routing
- Extensibility - The control flow can be extended for future requirements easily
- LLM-Powered Classification - Uses an LLM to categorize queries dynamically
Parallelization
The Parallelization pattern enables processing multiple dimensions of a problem simultaneously, with outputs aggregated programmatically. This pattern improves efficiency for complex tasks with independent subtasks that can be processed concurrently.
Use Cases:
- Complex research (processing different aspects of a topic in parallel)
- Multi-faceted planning (creating various elements of a plan concurrently)
- Product analysis (analyzing different aspects of a product in parallel)
- Content creation (generating multiple sections of a document simultaneously)
Implementation with Dapr Agents:
@workflow(name="travel_planning_workflow")
def travel_planning_workflow(ctx: DaprWorkflowContext, input_params: dict):
destination = input_params.get("destination")
preferences = input_params.get("preferences")
days = input_params.get("days")
# Process three aspects of the travel plan in parallel
parallel_tasks = [
ctx.call_activity(research_attractions, input={
"destination": destination,
"preferences": preferences,
"days": days
}),
ctx.call_activity(recommend_accommodations, input={
"destination": destination,
"preferences": preferences,
"days": days
}),
ctx.call_activity(suggest_transportation, input={
"destination": destination,
"preferences": preferences,
"days": days
})
]
# Wait for all parallel tasks to complete
results = yield wfapp.when_all(parallel_tasks)
# Aggregate results into final plan
final_plan = yield ctx.call_activity(create_final_plan, input={"results": results})
return final_plan
The benefits of using Dapr for parallelization include:
- Simplified Concurrency - Handles the complex orchestration of parallel tasks
- Automatic Synchronization - Waits for all parallel tasks to complete
- Workflow Durability - The entire parallel process is durable and recoverable
Orchestrator-Workers
For highly complex tasks where the number and nature of subtasks can’t be known in advance, the Orchestrator-Workers pattern offers a powerful solution. This pattern features a central orchestrator LLM that dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.
Unlike previous patterns where workflows are predefined, the orchestrator determines the workflow dynamically based on the specific input.
Use Cases:
- Software development tasks spanning multiple files
- Research gathering information from multiple sources
- Business analysis evaluating different facets of a complex problem
- Content creation combining specialized content from various domains
Implementation with Dapr Agents:
@workflow(name="orchestrator_travel_planner")
def orchestrator_travel_planner(ctx: DaprWorkflowContext, input_params: dict):
travel_request = input_params.get("request")
# Step 1: Orchestrator analyzes request and determines required tasks
plan_result = yield ctx.call_activity(
create_travel_plan,
input={"request": travel_request}
)
tasks = plan_result.get("tasks", [])
# Step 2: Execute each task with a worker LLM
worker_results = []
for task in tasks:
task_result = yield ctx.call_activity(
execute_travel_task,
input={"task": task}
)
worker_results.append({
"task_id": task["task_id"],
"result": task_result
})
# Step 3: Synthesize the results into a cohesive travel plan
final_plan = yield ctx.call_activity(
synthesize_travel_plan,
input={
"request": travel_request,
"results": worker_results
}
)
return final_plan
The advantages of Dapr for the Orchestrator-Workers pattern include:
- Dynamic Planning - The orchestrator can dynamically create subtasks based on input
- Worker Isolation - Each worker focuses on solving one specific aspect of the problem
- Simplified Synthesis - The final synthesis step combines results into a coherent output
Evaluator-Optimizer
Quality is often achieved through iteration and refinement. The Evaluator-Optimizer pattern implements a dual-LLM process where one model generates responses while another provides evaluation and feedback in an iterative loop.
Use Cases:
- Content creation requiring adherence to specific style guidelines
- Translation needing nuanced understanding and expression
- Code generation meeting specific requirements and handling edge cases
- Complex search requiring multiple rounds of information gathering and refinement
Implementation with Dapr Agents:
@workflow(name="evaluator_optimizer_travel_planner")
def evaluator_optimizer_travel_planner(ctx: DaprWorkflowContext, input_params: dict):
travel_request = input_params.get("request")
max_iterations = input_params.get("max_iterations", 3)
# Generate initial travel plan
current_plan = yield ctx.call_activity(
generate_travel_plan,
input={"request": travel_request, "feedback": None}
)
# Evaluation loop
iteration = 1
meets_criteria = False
while iteration <= max_iterations and not meets_criteria:
# Evaluate the current plan
evaluation = yield ctx.call_activity(
evaluate_travel_plan,
input={"request": travel_request, "plan": current_plan}
)
score = evaluation.get("score", 0)
feedback = evaluation.get("feedback", [])
meets_criteria = evaluation.get("meets_criteria", False)
# Stop if we meet criteria or reached max iterations
if meets_criteria or iteration >= max_iterations:
break
# Optimize the plan based on feedback
current_plan = yield ctx.call_activity(
generate_travel_plan,
input={"request": travel_request, "feedback": feedback}
)
iteration += 1
return {
"final_plan": current_plan,
"iterations": iteration,
"final_score": score
}
The benefits of using Dapr for this pattern include:
- Iterative Improvement Loop - Manages the feedback cycle between generation and evaluation
- Quality Criteria - Enables clear definition of what constitutes acceptable output
- Maximum Iteration Control - Prevents infinite loops by enforcing iteration limits
Durable Agent
Moving to the far end of the agentic spectrum, the Durable Agent pattern represents a shift from workflow-based approaches. Instead of predefined steps, we have an autonomous agent that can plan its own steps and execute them based on its understanding of the goal.
Enterprise applications often need durable execution and reliability that go beyond in-memory capabilities. Dapr's DurableAgent class helps you implement autonomous agents with the reliability of workflows, as these agents are backed by Dapr workflows behind the scenes. The DurableAgent extends the basic Agent class by adding durability to agent execution.
This pattern doesn't just persist message history; it dynamically creates workflows with durable activities for each interaction, where LLM calls and tool executions are stored reliably in Dapr's state stores. This makes it ideal for production environments where reliability is critical.
Use Cases:
- Long-running tasks that may take minutes or days to complete
- Distributed systems running across multiple services
- Customer support handling complex multi-session tickets
- Business processes with LLM intelligence at each step
- Personal assistants handling scheduling and information lookup
- Autonomous background processes triggered by external systems
Implementation with Dapr Agents:
from dapr_agents import DurableAgent

travel_planner = DurableAgent(
    name="TravelBuddy",
    role="Travel Planner",
    goal="Help users find flights and remember preferences",
    instructions=[
        "Find flights to destinations",
        "Remember user preferences",
        "Provide clear flight info"
    ],
    tools=[search_flights],
    message_bus_name="messagepubsub",
    state_store_name="workflowstatestore",
    state_key="workflow_state",
    agents_registry_store_name="workflowstatestore",
    agents_registry_key="agents_registry",
)
The implementation follows Dapr’s sidecar architecture model, where all infrastructure concerns are handled by the Dapr runtime:
- Persistent Memory - Agent state is stored in Dapr’s state store, surviving process crashes
- Workflow Orchestration - All agent interactions managed through Dapr’s workflow system
- Service Exposure - REST endpoints for workflow management come out of the box
- Pub/Sub Input/Output - Event-driven messaging through Dapr’s pub/sub system for seamless integration
The Durable Agent enables the concept of “headless agents” - autonomous systems that operate without direct user interaction. Dapr’s Durable Agent exposes both REST and Pub/Sub APIs, making it ideal for long-running operations that are triggered by other applications or external events. This allows agents to run in the background, processing requests asynchronously and integrating seamlessly into larger distributed systems.
Choosing the Right Pattern
The journey from simple agentic workflows to fully autonomous agents represents a spectrum of approaches for integrating LLMs into your applications. Different use cases call for different levels of agency and control:
- Start with simpler patterns like Augmented LLM and Prompt Chaining for well-defined tasks where predictability is crucial
- Progress to more dynamic patterns like Parallelization and Orchestrator-Workers as your needs grow more complex
- Consider fully autonomous agents only for open-ended tasks where the benefits of flexibility outweigh the need for strict control
3.6 - Integrations
Out-of-the-box Tools
Text Splitter
The Text Splitter module is a foundational integration in Dapr Agents, designed to preprocess documents for use in Retrieval-Augmented Generation (RAG) workflows and other in-context learning applications. Its primary purpose is to break large documents into smaller, meaningful chunks that can be embedded, indexed, and efficiently retrieved based on user queries.
By focusing on manageable chunk sizes and preserving contextual integrity through overlaps, the Text Splitter ensures documents are processed in a way that supports downstream tasks like question answering, summarization, and document retrieval.
Why Use a Text Splitter?
When building RAG pipelines, splitting text into smaller chunks serves these key purposes:
- Enabling Effective Indexing: Chunks are embedded and stored in a vector database, making them retrievable based on similarity to user queries.
- Maintaining Semantic Coherence: Overlapping chunks help retain context across splits, ensuring the system can connect related pieces of information.
- Handling Model Limitations: Many models have input size limits. Splitting ensures text fits within these constraints while remaining meaningful.
This step is crucial for preparing knowledge to be embedded into a searchable format, forming the backbone of retrieval-based workflows.
Strategies for Text Splitting
The Text Splitter supports multiple strategies to handle different types of documents effectively. These strategies balance the size of each chunk with the need to maintain context.
1. Character-Based Length
- How It Works: Counts the number of characters in each chunk.
- Use Case: Simple and effective for text splitting without dependency on external tokenization tools.
Example:
from dapr_agents.document.splitter.text import TextSplitter
# Character-based splitter (default)
splitter = TextSplitter(chunk_size=1024, chunk_overlap=200)
2. Token-Based Length
- How It Works: Counts tokens, which are the semantic units used by language models (e.g., words or subwords).
- Use Case: Ensures compatibility with models like GPT, where token limits are critical.
Example:
import tiktoken
from dapr_agents.document.splitter.text import TextSplitter

enc = tiktoken.get_encoding("cl100k_base")

def length_function(text: str) -> int:
    return len(enc.encode(text))

splitter = TextSplitter(
    chunk_size=1024,
    chunk_overlap=200,
    chunk_size_function=length_function
)
The flexibility to define the chunk size function makes the Text Splitter adaptable to various scenarios.
Chunk Overlap
To preserve context, the Text Splitter includes a chunk overlap feature. This ensures that parts of one chunk carry over into the next, helping maintain continuity when chunks are processed sequentially.
Example:
- With chunk_size=1024 and chunk_overlap=200, the last 200 tokens or characters of one chunk appear at the start of the next.
- This design helps in tasks like text generation, where maintaining context across chunks is essential.
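The sliding-window idea behind overlap can be seen in a few lines of plain Python. This is an illustration only, not the TextSplitter's actual algorithm (which also splits hierarchically by separators):
text = "abcdefghij"
chunk_size, overlap = 4, 2

# Each chunk starts (chunk_size - overlap) positions after the previous one
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size - overlap)]
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']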
How to Use the Text Splitter
Here’s a practical example of using the Text Splitter to process a PDF document:
Step 1: Load a PDF
import requests
from pathlib import Path

# Download PDF
pdf_url = "https://arxiv.org/pdf/2412.05265.pdf"
local_pdf_path = Path("arxiv_paper.pdf")

if not local_pdf_path.exists():
    response = requests.get(pdf_url)
    response.raise_for_status()
    with open(local_pdf_path, "wb") as pdf_file:
        pdf_file.write(response.content)
Step 2: Read the Document
For this example, we use Dapr Agents' PyPDFReader.
Note
The PyPDF Reader relies on the pypdf Python library, which is not included in the Dapr Agents core module. This design choice helps maintain modularity and avoids adding unnecessary dependencies for users who may not require this functionality. To use the PyPDF Reader, install the library separately:
pip install pypdf
Then, initialize the reader to load the PDF file.
from dapr_agents.document.reader.pdf.pypdf import PyPDFReader
reader = PyPDFReader()
documents = reader.load(local_pdf_path)
Step 3: Split the Document
splitter = TextSplitter(
chunk_size=1024,
chunk_overlap=200,
chunk_size_function=length_function
)
chunked_documents = splitter.split_documents(documents)
Step 4: Analyze Results
print(f"Original document pages: {len(documents)}")
print(f"Total chunks: {len(chunked_documents)}")
print(f"First chunk: {chunked_documents[0]}")
Key Features
- Hierarchical Splitting: Splits text by separators (e.g., paragraphs), then refines chunks further if needed.
- Customizable Chunk Size: Supports character-based and token-based length functions.
- Overlap for Context: Retains portions of one chunk in the next to maintain continuity.
- Metadata Preservation: Each chunk retains metadata like page numbers and start/end indices for easier mapping.
By understanding and leveraging the Text Splitter, you can preprocess large documents effectively, ensuring they are ready for embedding, indexing, and retrieval in advanced workflows like RAG pipelines.
Arxiv Fetcher
The Arxiv Fetcher module in Dapr Agents provides a powerful interface to interact with the arXiv API. It is designed to help users programmatically search for, retrieve, and download scientific papers from arXiv. With advanced querying capabilities, metadata extraction, and support for downloading PDF files, the Arxiv Fetcher is ideal for researchers, developers, and teams working with academic literature.
Why Use the Arxiv Fetcher?
The Arxiv Fetcher simplifies the process of accessing research papers, offering features like:
- Automated Literature Search: Query arXiv for specific topics, keywords, or authors.
- Metadata Retrieval: Extract structured metadata, such as titles, abstracts, authors, categories, and submission dates.
- Precise Filtering: Limit search results by date ranges (e.g., retrieve the latest research in a field).
- PDF Downloading: Fetch full-text PDFs of papers for offline use.
How to Use the Arxiv Fetcher
Step 1: Install Required Modules
Note
The Arxiv Fetcher relies on a lightweight Python wrapper for the arXiv API, which is not included in the Dapr Agents core module. This design choice helps maintain modularity and avoids adding unnecessary dependencies for users who may not require this functionality. To use the Arxiv Fetcher, install the library separately:
pip install arxiv
Step 2: Initialize the Fetcher
Set up the ArxivFetcher to begin interacting with the arXiv API.
from dapr_agents.document import ArxivFetcher
# Initialize the fetcher
fetcher = ArxivFetcher()
Step 3: Perform Searches
Basic Search by Query String
Search for papers using simple keywords. The results are returned as Document objects, each containing:
- text: The abstract of the paper.
- metadata: Structured metadata such as title, authors, categories, and submission dates.
# Search for papers related to "machine learning"
results = fetcher.search(query="machine learning", max_results=5)

# Display metadata and summaries
for doc in results:
    print(f"Title: {doc.metadata['title']}")
    print(f"Authors: {', '.join(doc.metadata['authors'])}")
    print(f"Summary: {doc.text}\n")
Advanced Querying
Refine searches using logical operators like AND, OR, and NOT or perform field-specific searches, such as by author.
Examples:
Search for papers on “agents” and “cybersecurity”:
results = fetcher.search(query="all:(agents AND cybersecurity)", max_results=10)
Exclude specific terms (e.g., “quantum” but not “computing”):
results = fetcher.search(query="all:(quantum NOT computing)", max_results=10)
Search for papers by a specific author:
results = fetcher.search(query='au:"John Doe"', max_results=10)
Filter Papers by Date
Limit search results to a specific time range, such as papers submitted in the last 24 hours.
from datetime import datetime, timedelta
# Calculate the date range
last_24_hours = (datetime.now() - timedelta(days=1)).strftime("%Y%m%d")
today = datetime.now().strftime("%Y%m%d")
# Search for recent papers
recent_results = fetcher.search(
query="all:(agents AND cybersecurity)",
from_date=last_24_hours,
to_date=today,
max_results=5
)
# Display metadata
for doc in recent_results:
    print(f"Title: {doc.metadata['title']}")
    print(f"Authors: {', '.join(doc.metadata['authors'])}")
    print(f"Published: {doc.metadata['published']}")
    print(f"Summary: {doc.text}\n")
Step 4: Download PDFs
Fetch the full-text PDFs of papers for offline use. Metadata is preserved alongside the downloaded files.
import os
from pathlib import Path
# Create a directory for downloads
os.makedirs("arxiv_papers", exist_ok=True)
# Download PDFs
download_results = fetcher.search(
query="all:(agents AND cybersecurity)",
max_results=5,
download=True,
dirpath=Path("arxiv_papers")
)
for paper in download_results:
    print(f"Downloaded Paper: {paper['title']}")
    print(f"File Path: {paper['file_path']}\n")
Step 5: Extract and Process PDF Content
Use PyPDFReader from Dapr Agents to extract content from downloaded PDFs. Each page is treated as a separate Document object with metadata.
from pathlib import Path
from dapr_agents.document import PyPDFReader
reader = PyPDFReader()
docs_read = []
for paper in download_results:
    local_pdf_path = Path(paper["file_path"])
    documents = reader.load(local_pdf_path, additional_metadata=paper)
    docs_read.extend(documents)
# Verify results
print(f"Extracted {len(docs_read)} documents.")
print(f"First document text: {docs_read[0].text}")
print(f"Metadata: {docs_read[0].metadata}")
Practical Applications
The Arxiv Fetcher enables various use cases for researchers and developers:
- Literature Reviews: Quickly retrieve and organize relevant papers on a given topic or by a specific author.
- Trend Analysis: Identify the latest research in a domain by filtering for recent submissions.
- Offline Research Workflows: Download and process PDFs for local analysis and archiving.
Next Steps
While the Arxiv Fetcher provides robust functionality for retrieving and processing research papers, its output can be integrated into advanced workflows:
- Building a Searchable Knowledge Base: Combine fetched papers with integrations like text splitting and vector embeddings for advanced search capabilities.
- Retrieval-Augmented Generation (RAG): Use processed papers as inputs for RAG pipelines to power question-answering systems.
- Automated Literature Surveys: Generate summaries or insights based on the fetched and processed research.
3.7 - Quickstarts
Dapr Agents Quickstarts demonstrate how to use Dapr Agents to build applications with LLM-powered autonomous agents and event-driven workflows. Each quickstart builds upon the previous one, introducing new concepts incrementally.
Before you begin
Quickstarts
Scenario | What You’ll Learn |
---|---|
Hello World A rapid introduction that demonstrates core Dapr Agents concepts through simple, practical examples. | - Basic LLM Usage: Simple text generation with OpenAI models - Creating Agents: Building agents with custom tools in under 20 lines of code - Simple Workflows: Setting up multi-step LLM processes |
LLM Call with Dapr Chat Client Explore interaction with Language Models through Dapr Agents’ DaprChatClient , featuring basic text generation with plain text prompts and templates. | - Text Completion: Generating responses to prompts - Swapping LLM providers: Switching LLM backends without application code change - Resilience: Setting timeout, retry and circuit-breaking - PII Obfuscation: Automatically detect and mask sensitive user information |
LLM Call with OpenAI Client Leverage native LLM client libraries with Dapr Agents using the OpenAI Client for chat completion, audio processing, and embeddings. | - Text Completion: Generating responses to prompts - Structured Outputs: Converting LLM responses to Pydantic objects Note: Other quickstarts for specific clients are available for Elevenlabs, Hugging Face, and Nvidia. |
Agent Tool Call Build your first AI agent with custom tools by creating a practical weather assistant that fetches information and performs actions. | - Tool Definition: Creating reusable tools with the @tool decorator- Agent Configuration: Setting up agents with roles, goals, and tools - Function Calling: Enabling LLMs to execute Python functions |
Agentic Workflow Dive into stateful workflows with Dapr Agents by orchestrating sequential and parallel tasks through powerful workflow capabilities. | - LLM-powered Tasks: Using language models in workflows - Task Chaining: Creating resilient multi-step processes executing in sequence - Fan-out/Fan-in: Executing activities in parallel; then synchronizing these activities until all preceding activities have completed |
Multi-Agent Workflows Explore advanced event-driven workflows featuring a Lord of the Rings themed multi-agent system where autonomous agents collaborate to solve problems. | - Multi-agent Systems: Creating a network of specialized agents - Event-driven Architecture: Implementing pub/sub messaging between agents - Workflow Orchestration: Coordinating agents through different selection strategies |
Multi-Agent Workflow on Kubernetes Run multi-agent workflows in Kubernetes, demonstrating deployment and orchestration of event-driven agent systems in a containerized environment. | - Kubernetes Deployment: Running agents on Kubernetes - Container Orchestration: Managing agent lifecycles with K8s - Service Communication: Inter-agent communication in K8s |
Document Agent with Chainlit Create a conversational agent with an operational UI that can upload and learn from unstructured documents while retaining long-term memory. | - Conversational Document Agent: Upload and converse over unstructured documents - Cloud Agnostic Storage: Upload files to multiple storage providers - Conversation Memory Storage: Persist conversation history using external storage |
Data Agent with MCP and Chainlit Build a conversational agent over a Postgres database using the Model Context Protocol (MCP) with a ChatGPT-like interface. | - Database Querying: Natural language queries to relational databases - MCP Integration: Connecting to databases without DB-specific code - Data Analysis: Complex data analysis through conversation |
4 - Error codes
4.1 - Errors overview
An error code is a numeric or alphanumeric code that indicates the nature of an error and, when possible, why it occurred.
Dapr error codes are standardized strings for more than 80 common errors across HTTP and gRPC requests when using the Dapr APIs. These codes are both:
- Returned in the JSON response body of the request.
- When enabled, logged in debug-level logs in the runtime.
- If you’re running in Kubernetes, error codes are logged in the sidecar.
- If you’re running in self-hosted, you can enable and run debug logs.
Error format
Dapr error codes consist of a prefix, a category, and a shorthand for the error itself. For example:
Prefix | Category | Error shorthand |
---|---|---|
ERR_ | PUBSUB_ | NOT_FOUND |
Some of the most common errors returned include:
- ERR_ACTOR_TIMER_CREATE
- ERR_PURGE_WORKFLOW
- ERR_STATE_STORE_NOT_FOUND
- ERR_HEALTH_NOT_READY
An error returned for a state store not found might look like the following:
{
"error": "Bad Request",
"error_msg": "{\"errorCode\":\"ERR_STATE_STORE_NOT_FOUND\",\"message\":\"state store <name> is not found\",\"details\":[{\"@type\":\"type.googleapis.com/google.rpc.ErrorInfo\",\"domain\":\"dapr.io\",\"metadata\":{\"appID\":\"nodeapp\"},\"reason\":\"DAPR_STATE_NOT_FOUND\"}]}",
"status": 400
}
The returned error includes:
- The error code: ERR_STATE_STORE_NOT_FOUND
- The error message describing the issue: state store <name> is not found
- The app ID in which the error is occurring: nodeapp
- The reason for the error: DAPR_STATE_NOT_FOUND
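Note that error_msg is itself a JSON string embedded in the response body, so a client has to parse it a second time to reach the error code and details. A minimal Python sketch using the response above:
import json

# The HTTP response body from the example above
body = {
    "error": "Bad Request",
    "error_msg": "{\"errorCode\":\"ERR_STATE_STORE_NOT_FOUND\",\"message\":\"state store <name> is not found\",\"details\":[{\"@type\":\"type.googleapis.com/google.rpc.ErrorInfo\",\"domain\":\"dapr.io\",\"metadata\":{\"appID\":\"nodeapp\"},\"reason\":\"DAPR_STATE_NOT_FOUND\"}]}",
    "status": 400,
}

# error_msg is a JSON-encoded string, so it needs a second parse
inner = json.loads(body["error_msg"])
print(inner["errorCode"])             # ERR_STATE_STORE_NOT_FOUND
print(inner["details"][0]["reason"])  # DAPR_STATE_NOT_FOUND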
Dapr error code metrics
Metrics help you see exactly when errors are occurring from within the runtime. Error code metrics are collected using the error_code_total metric. This metric is disabled by default. You can enable it using the recordErrorCodes field in your configuration file.
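For example, a minimal Configuration sketch that enables error code metrics (the resource name appconfig is illustrative):
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  metrics:
    enabled: true
    recordErrorCodes: true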
Demo
Watch a demo presented during Diagrid’s Dapr v1.15 celebration to see how to enable error code metrics and handle error codes returned in the runtime.
Next step
See a list of all Dapr error codes
4.2 - Error codes reference guide
The following tables list the error codes returned by the Dapr runtime.
The error codes are returned in the response body of an HTTP request or in the ErrorInfo
section of a gRPC status response, if one is present.
An effort is underway to enrich all gRPC error responses according to the Richer Error Model. Error codes without a corresponding gRPC code indicate those errors have not yet been updated to this model.
Actors API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_ACTOR_INSTANCE_MISSING | | Missing actor instance |
ERR_ACTOR_INVOKE_METHOD | | Error invoking actor method |
ERR_ACTOR_RUNTIME_NOT_FOUND | | Actor runtime not found |
ERR_ACTOR_STATE_GET | | Error getting actor state |
ERR_ACTOR_STATE_TRANSACTION_SAVE | | Error saving actor transaction |
ERR_ACTOR_REMINDER_CREATE | | Error creating actor reminder |
ERR_ACTOR_REMINDER_DELETE | | Error deleting actor reminder |
ERR_ACTOR_REMINDER_GET | | Error getting actor reminder |
ERR_ACTOR_REMINDER_NON_HOSTED | | Reminder operation on non-hosted actor type |
ERR_ACTOR_TIMER_CREATE | | Error creating actor timer |
ERR_ACTOR_NO_APP_CHANNEL | | App channel not initialized |
ERR_ACTOR_STACK_DEPTH | | Maximum actor call stack depth exceeded |
ERR_ACTOR_NO_PLACEMENT | | Placement service not configured |
ERR_ACTOR_RUNTIME_CLOSED | | Actor runtime is closed |
ERR_ACTOR_NAMESPACE_REQUIRED | | Actors must have a namespace configured when running in Kubernetes mode |
ERR_ACTOR_NO_ADDRESS | | No address found for actor |
Workflows API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_GET_WORKFLOW | | Error getting workflow |
ERR_START_WORKFLOW | | Error starting workflow |
ERR_PAUSE_WORKFLOW | | Error pausing workflow |
ERR_RESUME_WORKFLOW | | Error resuming workflow |
ERR_TERMINATE_WORKFLOW | | Error terminating workflow |
ERR_PURGE_WORKFLOW | | Error purging workflow |
ERR_RAISE_EVENT_WORKFLOW | | Error raising event in workflow |
ERR_WORKFLOW_COMPONENT_MISSING | | Missing workflow component |
ERR_WORKFLOW_COMPONENT_NOT_FOUND | | Workflow component not found |
ERR_WORKFLOW_EVENT_NAME_MISSING | | Missing workflow event name |
ERR_WORKFLOW_NAME_MISSING | | Workflow name not configured |
ERR_INSTANCE_ID_INVALID | | Invalid workflow instance ID. (Only alphanumeric and underscore characters are allowed) |
ERR_INSTANCE_ID_NOT_FOUND | | Workflow instance ID not found |
ERR_INSTANCE_ID_PROVIDED_MISSING | | Missing workflow instance ID |
ERR_INSTANCE_ID_TOO_LONG | | Workflow instance ID too long |
State management API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_STATE_TRANSACTION | | Error in state transaction |
ERR_STATE_SAVE | | Error saving state |
ERR_STATE_GET | | Error getting state |
ERR_STATE_DELETE | | Error deleting state |
ERR_STATE_BULK_DELETE | | Error deleting state in bulk |
ERR_STATE_BULK_GET | | Error getting state in bulk |
ERR_NOT_SUPPORTED_STATE_OPERATION | | Operation not supported in transaction |
ERR_STATE_QUERY | DAPR_STATE_QUERY_FAILED | Error querying state |
ERR_STATE_STORE_NOT_FOUND | DAPR_STATE_NOT_FOUND | State store not found |
ERR_STATE_STORE_NOT_CONFIGURED | DAPR_STATE_NOT_CONFIGURED | State store not configured |
ERR_STATE_STORE_NOT_SUPPORTED | DAPR_STATE_TRANSACTIONS_NOT_SUPPORTED | State store does not support transactions |
ERR_STATE_STORE_NOT_SUPPORTED | DAPR_STATE_QUERYING_NOT_SUPPORTED | State store does not support querying |
ERR_STATE_STORE_TOO_MANY_TRANSACTIONS | DAPR_STATE_TOO_MANY_TRANSACTIONS | Too many operations per transaction |
ERR_MALFORMED_REQUEST | DAPR_STATE_ILLEGAL_KEY | Invalid key |
Configuration API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_CONFIGURATION_GET | | Error getting configuration |
ERR_CONFIGURATION_STORE_NOT_CONFIGURED | | Configuration store not configured |
ERR_CONFIGURATION_STORE_NOT_FOUND | | Configuration store not found |
ERR_CONFIGURATION_SUBSCRIBE | | Error subscribing to configuration |
ERR_CONFIGURATION_UNSUBSCRIBE | | Error unsubscribing from configuration |
Crypto API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_CRYPTO | | Error in crypto operation |
ERR_CRYPTO_KEY | | Error retrieving crypto key |
ERR_CRYPTO_PROVIDER_NOT_FOUND | | Crypto provider not found |
ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED | | Crypto providers not configured |
Secrets API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_SECRET_GET | | Error getting secret |
ERR_SECRET_STORE_NOT_FOUND | | Secret store not found |
ERR_SECRET_STORES_NOT_CONFIGURED | | Secret store not configured |
ERR_PERMISSION_DENIED | | Permission denied by policy |
Pub/Sub and messaging errors
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_PUBSUB_EMPTY | DAPR_PUBSUB_NAME_EMPTY | Pubsub name is empty |
ERR_PUBSUB_NOT_FOUND | DAPR_PUBSUB_NOT_FOUND | Pubsub not found |
ERR_PUBSUB_NOT_FOUND | DAPR_PUBSUB_TEST_NOT_FOUND | Pubsub not found |
ERR_PUBSUB_NOT_CONFIGURED | DAPR_PUBSUB_NOT_CONFIGURED | Pubsub not configured |
ERR_TOPIC_NAME_EMPTY | DAPR_PUBSUB_TOPIC_NAME_EMPTY | Topic name is empty |
ERR_PUBSUB_FORBIDDEN | DAPR_PUBSUB_FORBIDDEN | Access to topic forbidden for APP ID |
ERR_PUBSUB_PUBLISH_MESSAGE | DAPR_PUBSUB_PUBLISH_MESSAGE | Error publishing message |
ERR_PUBSUB_REQUEST_METADATA | DAPR_PUBSUB_METADATA_DESERIALIZATION | Error deserializing metadata |
ERR_PUBSUB_CLOUD_EVENTS_SER | DAPR_PUBSUB_CLOUD_EVENT_CREATION | Error creating CloudEvent |
ERR_PUBSUB_EVENTS_SER | DAPR_PUBSUB_MARSHAL_ENVELOPE | Error marshalling Cloud Event envelope |
ERR_PUBSUB_EVENTS_SER | DAPR_PUBSUB_MARSHAL_EVENTS | Error marshalling events to bytes |
ERR_PUBSUB_EVENTS_SER | DAPR_PUBSUB_UNMARSHAL_EVENTS | Error unmarshalling events |
ERR_PUBLISH_OUTBOX | | Error publishing message to outbox |
Conversation API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_CONVERSATION_INVALID_PARMS | | Invalid parameters for conversation component |
ERR_CONVERSATION_INVOKE | | Error invoking conversation |
ERR_CONVERSATION_MISSING_INPUTS | | Missing inputs for conversation |
ERR_CONVERSATION_NOT_FOUND | | Conversation not found |
Service Invocation / Direct Messaging API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_DIRECT_INVOKE | | Error invoking service |
Bindings API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_INVOKE_OUTPUT_BINDING | | Error invoking output binding |
Distributed Lock API
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_LOCK_STORE_NOT_CONFIGURED | | Lock store not configured |
ERR_LOCK_STORE_NOT_FOUND | | Lock store not found |
ERR_TRY_LOCK | | Error acquiring lock |
ERR_UNLOCK | | Error releasing lock |
Healthz
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_HEALTH_NOT_READY | | Dapr not ready |
ERR_HEALTH_APPID_NOT_MATCH | | Dapr App ID does not match |
ERR_OUTBOUND_HEALTH_NOT_READY | | Dapr outbound not ready |
Common
HTTP Code | gRPC Code | Description |
---|---|---|
ERR_API_UNIMPLEMENTED | | API not implemented |
ERR_APP_CHANNEL_NIL | | App channel is nil |
ERR_BAD_REQUEST | | Bad request |
ERR_BODY_READ | | Error reading request body |
ERR_INTERNAL | | Internal error |
ERR_MALFORMED_REQUEST | | Malformed request |
ERR_MALFORMED_REQUEST_DATA | | Malformed request data |
ERR_MALFORMED_RESPONSE | | Malformed response |
Scheduler/Jobs API
HTTP Code | gRPC Code | Description |
---|---|---|
DAPR_SCHEDULER_SCHEDULE_JOB | DAPR_SCHEDULER_SCHEDULE_JOB | Error scheduling job |
DAPR_SCHEDULER_JOB_NAME | DAPR_SCHEDULER_JOB_NAME | Job name should only be set in the url |
DAPR_SCHEDULER_JOB_NAME_EMPTY | DAPR_SCHEDULER_JOB_NAME_EMPTY | Job name is empty |
DAPR_SCHEDULER_GET_JOB | DAPR_SCHEDULER_GET_JOB | Error getting job |
DAPR_SCHEDULER_LIST_JOBS | DAPR_SCHEDULER_LIST_JOBS | Error listing jobs |
DAPR_SCHEDULER_DELETE_JOB | DAPR_SCHEDULER_DELETE_JOB | Error deleting job |
DAPR_SCHEDULER_EMPTY | DAPR_SCHEDULER_EMPTY | Required argument is empty |
DAPR_SCHEDULER_SCHEDULE_EMPTY | DAPR_SCHEDULER_SCHEDULE_EMPTY | No schedule provided for job |
Generic
HTTP Code | gRPC Code | Description |
---|---|---|
ERROR | ERROR | Generic error |
Next steps
4.3 - Handling HTTP error codes
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the response body. The JSON contains an error code and a descriptive error message.
{
"errorCode": "ERR_STATE_GET",
"message": "Requested state key does not exist in state store."
}
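Because the error code is a stable identifier while the message is free text, clients should branch on errorCode. A minimal Python sketch, assuming the requests library, a sidecar on the default HTTP port 3500, and illustrative store/key names:
import requests

# Attempt a state read via the Dapr HTTP API (names are illustrative)
resp = requests.get("http://localhost:3500/v1.0/state/statestore/mykey")
if resp.status_code >= 400:
    body = resp.json()
    # Branch on the stable error code, not the human-readable message
    if body.get("errorCode") == "ERR_STATE_GET":
        print("State get failed:", body.get("message"))
    else:
        print("Dapr error:", body.get("errorCode"), body.get("message"))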
Related
4.4 - Handling gRPC error codes
Initially, errors followed the Standard gRPC error model. However, to provide more detailed and informative error messages, an enhanced error model has been defined which aligns with the gRPC Richer error model.
Note
Not all Dapr errors have been converted to the richer gRPC error model.
Standard gRPC Error Model
The Standard gRPC error model is an approach to error reporting in gRPC. Each error response includes an error code and an error message. The error codes are standardized and reflect common error conditions.
Example of a Standard gRPC Error Response:
ERROR:
Code: InvalidArgument
Message: input key/keyPrefix 'bad||keyname' can't contain '||'
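With the standard model, a gRPC client only has the status code and message to inspect. A minimal Python sketch, where stub and request are placeholders for whichever generated Dapr client call you are making:
import grpc

try:
    stub.GetState(request)  # placeholder for any Dapr gRPC call
except grpc.RpcError as err:
    # The standard model exposes only a status code and a message
    print(err.code())     # e.g. StatusCode.INVALID_ARGUMENT
    print(err.details())  # e.g. "input key/keyPrefix 'bad||keyname' can't contain '||'"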
Richer gRPC Error Model
The Richer gRPC error model extends the standard error model by providing additional context and details about the error. This model includes the standard error code and message, along with a details section that can contain various types of information, such as ErrorInfo, ResourceInfo, and BadRequest details.
Example of a Richer gRPC Error Response:
ERROR:
Code: InvalidArgument
Message: input key/keyPrefix 'bad||keyname' can't contain '||'
Details:
1) {
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"domain": "dapr.io",
"reason": "DAPR_STATE_ILLEGAL_KEY"
}
2) {
"@type": "type.googleapis.com/google.rpc.ResourceInfo",
"resourceName": "statestore",
"resourceType": "state"
}
3) {
"@type": "type.googleapis.com/google.rpc.BadRequest",
"fieldViolations": [
{
"field": "bad||keyname",
"description": "input key/keyPrefix 'bad||keyname' can't contain '||'"
}
]
}
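Clients can unpack these details programmatically. A minimal Python sketch using the grpcio-status helper package (stub and request are again placeholders):
import grpc
from google.rpc import error_details_pb2
from grpc_status import rpc_status

try:
    stub.GetState(request)  # placeholder for any Dapr gRPC call
except grpc.RpcError as err:
    status = rpc_status.from_call(err)  # None if no richer status is attached
    if status is not None:
        for detail in status.details:
            if detail.Is(error_details_pb2.ErrorInfo.DESCRIPTOR):
                info = error_details_pb2.ErrorInfo()
                detail.Unpack(info)
                print("reason:", info.reason, "domain:", info.domain)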
For HTTP clients, Dapr translates the gRPC error model to a similar structure in JSON format. The response includes an errorCode, a message, and a details array that mirrors the structure found in the richer gRPC model.
Example of an HTTP error response:
{
"errorCode": "ERR_MALFORMED_REQUEST",
"message": "api error: code = InvalidArgument desc = input key/keyPrefix 'bad||keyname' can't contain '||'",
"details": [
{
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"domain": "dapr.io",
"metadata": null,
"reason": "DAPR_STATE_ILLEGAL_KEY"
},
{
"@type": "type.googleapis.com/google.rpc.ResourceInfo",
"description": "",
"owner": "",
"resource_name": "statestore",
"resource_type": "state"
},
{
"@type": "type.googleapis.com/google.rpc.BadRequest",
"field_violations": [
{
"field": "bad||keyname",
"description": "api error: code = InvalidArgument desc = input key/keyPrefix 'bad||keyname' can't contain '||'"
}
]
}
]
}
You can find the specification of all the possible status details here.
Related Links
5 - Local development
5.1 - IDE support
5.1.1 - Visual Studio Code integration with Dapr
5.1.1.1 - Dapr Visual Studio Code extension overview
Dapr offers a preview Visual Studio Code extension for local development that provides a variety of features for managing and debugging your Dapr applications across all supported Dapr languages: .NET, Go, PHP, Python, and Java.
Features
Scaffold Dapr debugging tasks
The Dapr extension helps you debug your applications with Dapr using Visual Studio Code’s built-in debugging capability.
Using the Dapr: Scaffold Dapr Tasks Command Palette operation, you can update your existing tasks.json and launch.json files to launch and configure the Dapr sidecar when you begin debugging.
- Make sure you have a launch configuration set for your app. (Learn more)
- Open the Command Palette with Ctrl+Shift+P
- Select Dapr: Scaffold Dapr Tasks
- Run your app and the Dapr sidecar with F5 or via the Run view.
Scaffold Dapr components
When adding Dapr to your application, you may want to have a dedicated components directory, separate from the default components initialized as part of dapr init.
To create a dedicated components folder with the default statestore, pubsub, and zipkin components, use the Dapr: Scaffold Dapr Components Command Palette operation.
- Open your application directory in Visual Studio Code
- Open the Command Palette with Ctrl+Shift+P
- Select Dapr: Scaffold Dapr Components
- Run your application with dapr run --resources-path ./components -- ...
View running Dapr applications
The Applications view shows Dapr applications running locally on your machine.
Invoke Dapr applications
Within the Applications view, users can right-click and invoke Dapr apps via GET or POST methods, optionally specifying a payload.
Publish events to Dapr applications
Within the Applications view, users can right-click and publish messages to a running Dapr application, specifying the topic and payload.
Users can also publish messages to all running applications.
Additional resources
Debugging multiple Dapr applications at the same time
Using the VS Code extension, you can debug multiple Dapr applications at the same time with Multi-target debugging.
Community call demo
Watch this video on how to use the Dapr VS Code extension:
5.1.1.2 - How-To: Debug Dapr applications with Visual Studio Code
Manual debugging
When developing Dapr applications, you typically use the Dapr CLI to start your daprized service similar to this:
dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 app.js
One approach to attaching the debugger to your service is to first run daprd with the correct arguments from the command line and then launch your code and attach the debugger. While this is a perfectly acceptable solution, it does require a few extra steps and some instruction to developers who might want to clone your repo and hit the “play” button to begin debugging.
If your application is a collection of microservices, each with a Dapr sidecar, it is useful to debug them together in Visual Studio Code. This page uses the hello world quickstart to showcase how to configure VS Code to debug multiple Dapr applications using VS Code debugging.
Prerequisites
- Install the Dapr extension. You will be using the tasks it offers later on.
- Optionally clone the hello world quickstart
Step 1: Configure launch.json
The file .vscode/launch.json contains launch configurations for a VS Code debug run. This file defines what will launch and how it is configured when the user begins debugging. Configurations are available for each programming language in the Visual Studio Code marketplace.
Scaffold debugging configuration
The Dapr VSCode extension offers built-in scaffolding to generate launch.json and tasks.json for you.
In the case of the hello world quickstart, two applications are launched, each with its own Dapr sidecar. One is written in Node.js, and the other in Python. You’ll notice each configuration contains a daprd run preLaunchTask and a daprd stop postDebugTask.
{
"version": "0.2.0",
"configurations": [
{
"type": "pwa-node",
"request": "launch",
"name": "Nodeapp with Dapr",
"skipFiles": [
"<node_internals>/**"
],
"program": "${workspaceFolder}/node/app.js",
"preLaunchTask": "daprd-debug-node",
"postDebugTask": "daprd-down-node"
},
{
"type": "python",
"request": "launch",
"name": "Pythonapp with Dapr",
"program": "${workspaceFolder}/python/app.py",
"console": "integratedTerminal",
"preLaunchTask": "daprd-debug-python",
"postDebugTask": "daprd-down-python"
}
]
}
If you’re using ports other than the default ports baked into the code, set the DAPR_HTTP_PORT and DAPR_GRPC_PORT environment variables in the launch.json debug configuration. Match these with the httpPort and grpcPort in the daprd tasks.json. For example, launch.json:
{
// Set the non-default HTTP and gRPC ports
"env": {
"DAPR_HTTP_PORT": "3502",
"DAPR_GRPC_PORT": "50002"
},
}
tasks.json
:
{
// Match with ports set in launch.json
"httpPort": 3502,
"grpcPort": 50002
}
Each configuration requires a request, type and name. These parameters help VSCode identify the task configurations in the .vscode/tasks.json files.
- type defines the language used. Depending on the language, it might require an extension found in the marketplace, such as the Python Extension.
- name is a unique name for the configuration. This is used for compound configurations when calling multiple configurations in your project.
- ${workspaceFolder} is a VS Code variable reference. This is the path to the workspace opened in VS Code.
- The preLaunchTask and postDebugTask parameters refer to the program configurations run before and after launching the application. See step 2 on how to configure these.
For more information on VSCode debugging parameters see VS Code launch attributes.
Step 2: Configure tasks.json
For each task defined in .vscode/launch.json, a corresponding task definition must exist in .vscode/tasks.json.
For the quickstart, each service needs a task to launch a Dapr sidecar with the daprd type, and a task to stop the sidecar with daprd-down. The parameters appId, httpPort, metricsPort, label and type are required. Additional optional parameters are available, see the reference table here.
{
"version": "2.0.0",
"tasks": [
{
"label": "daprd-debug-node",
"type": "daprd",
"appId": "nodeapp",
"appPort": 3000,
"httpPort": 3500,
"metricsPort": 9090
},
{
"label": "daprd-down-node",
"type": "daprd-down",
"appId": "nodeapp"
},
{
"label": "daprd-debug-python",
"type": "daprd",
"appId": "pythonapp",
"httpPort": 53109,
"grpcPort": 53317,
"metricsPort": 9091
},
{
"label": "daprd-down-python",
"type": "daprd-down",
"appId": "pythonapp"
}
]
}
Step 3: Configure a compound launch in launch.json
A compound launch configuration can be defined in .vscode/launch.json
and is a set of two or more launch configurations that are launched in parallel. Optionally, a preLaunchTask
can be specified and run before the individual debug sessions are started.
For this example the compound configuration is:
{
"version": "2.0.0",
"configurations": [...],
"compounds": [
{
"name": "Node/Python Dapr",
"configurations": ["Nodeapp with Dapr","Pythonapp with Dapr"]
}
]
}
Step 4: Launch your debugging session
You can now run the applications in debug mode by finding the compound command name you have defined in the previous step in the VS Code debugger:

You are now debugging multiple applications with Dapr!
Daprd parameter table
Below are the supported parameters for VS Code tasks. These parameters are equivalent to daprd
arguments as detailed in this reference:
Parameter | Description | Required | Example |
---|---|---|---|
allowedOrigins | Allowed HTTP origins (default “*”) | No | "allowedOrigins": "*" |
appId | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID | Yes | "appId": "divideapp" |
appMaxConcurrency | Limit the concurrency of your application. A valid value is any number larger than 0 | No | "appMaxConcurrency": -1 |
appPort | This parameter tells Dapr which port your application is listening on | Yes | "appPort": 4000 |
appProtocol | Tells Dapr which protocol your application is using. Valid options are http , grpc , https , grpcs , h2c . Default is http . | No | "appProtocol": "http" |
args | Sets a list of arguments to pass on to the Dapr app | No | “args”: [] |
componentsPath | Path for components directory. If empty, components will not be loaded. | No | "componentsPath": "./components" |
config | Tells Dapr which Configuration resource to use | No | "config": "./config" |
controlPlaneAddress | Address for a Dapr control plane | No | "controlPlaneAddress": "http://localhost:1366/" |
enableProfiling | Enable profiling | No | "enableProfiling": false |
enableMtls | Enables automatic mTLS for daprd to daprd communication channels | No | "enableMtls": false |
grpcPort | gRPC port for the Dapr API to listen on (default "50001") | Yes, if multiple apps | "grpcPort": 50004 |
httpPort | The HTTP port for the Dapr API | Yes | "httpPort": 3502 |
internalGrpcPort | gRPC port for the Dapr Internal API to listen on | No | "internalGrpcPort": 50001 |
logAsJson | Setting this parameter to true outputs logs in JSON format. Default is false | No | "logAsJson": false |
logLevel | Sets the log level for the Dapr sidecar. Allowed values are debug, info, warn, error. Default is info | No | "logLevel": "debug" |
metricsPort | Sets the port for the sidecar metrics server. Default is 9090 | Yes, if multiple apps | "metricsPort": 9093 |
mode | Runtime mode for Dapr (default "standalone") | No | "mode": "standalone" |
placementHostAddress | Addresses for Dapr Actor Placement servers | No | "placementHostAddress": "http://localhost:1313/" |
profilePort | The port for the profile server (default "7777") | No | "profilePort": 7777 |
sentryAddress | Address for the Sentry CA service | No | "sentryAddress": "http://localhost:1345/" |
type | Tells VS Code it will be a daprd task type | Yes | "type": "daprd" |
Related Links
5.1.1.3 - Developing Dapr applications with Dev Containers
The Visual Studio Code Dev Containers extension lets you use a self-contained Docker container as a complete development environment, without installing any additional packages, libraries, or utilities in your local filesystem.
Dapr has pre-built Dev Containers for C# and JavaScript/TypeScript; pick the one of your choice for a ready-made environment. Note that these pre-built containers automatically update to the latest Dapr release.
We also publish a Dev Container feature that installs the Dapr CLI inside any Dev Container.
Setup the development environment
Prerequisites
Add the Dapr CLI using a Dev Container feature
You can install the Dapr CLI inside any Dev Container using Dev Container features.
To do that, edit your devcontainer.json
and add two objects in the "features"
section:
"features": {
// Install the Dapr CLI
"ghcr.io/dapr/cli/dapr-cli:0": {},
// Enable Docker (via Docker-in-Docker)
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
// Alternatively, use Docker-outside-of-Docker (uses Docker in the host)
//"ghcr.io/devcontainers/features/docker-outside-of-docker:1": {},
}
After saving the JSON file and (re-)building the container that hosts your development environment, you will have the Dapr CLI (and Docker) available, and can install Dapr by running this command in the container:
dapr init
Example: create a Java Dev Container for Dapr
This is an example of creating a Dev Container for creating Java apps that use Dapr, based on the official Java 17 Dev Container image.
Place this in a file called .devcontainer/devcontainer.json
in your project:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/java
{
"name": "Java",
// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
"image": "mcr.microsoft.com/devcontainers/java:0-17",
"features": {
"ghcr.io/devcontainers/features/java:1": {
"version": "none",
"installMaven": "false",
"installGradle": "false"
},
// Install the Dapr CLI
"ghcr.io/dapr/cli/dapr-cli:0": {},
// Enable Docker (via Docker-in-Docker)
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
// Alternatively, use Docker-outside-of-Docker (uses Docker in the host)
//"ghcr.io/devcontainers/features/docker-outside-of-docker:1": {},
}
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "java -version",
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}
Then, using the VS Code command palette (CTRL + SHIFT + P or CMD + SHIFT + P on Mac), select Dev Containers: Rebuild and Reopen in Container.
Use a pre-built Dev Container (C# and JavaScript/TypeScript)
- Open your application workspace in VS Code
- In the command palette (CTRL + SHIFT + P or CMD + SHIFT + P on Mac) type and select Dev Containers: Add Development Container Configuration Files...
- Type dapr to filter the list to available Dapr remote containers and choose the language container that matches your application. Note you may need to select Show All Definitions...
- Follow the prompts to reopen your workspace in the container.
Example
Watch this video on how to use the Dapr Dev Containers with your application.
5.1.2 - IntelliJ
When developing Dapr applications, you typically use the Dapr CLI to start your ‘Daprized’ service similar to this:
dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 app.js
This uses the default components yaml files (created on dapr init) so that your service can interact with the local Redis container. This is great when you are just getting started, but what if you want to attach a debugger to your service and step through the code? This is where you can use the dapr CLI without invoking an app.
One approach to attaching the debugger to your service is to first run dapr run --
from the command line and then launch your code and attach the debugger. While this is a perfectly acceptable solution, it does require a few extra steps (like switching between terminal and IDE) and some instruction to developers who might want to clone your repo and hit the “play” button to begin debugging.
This document explains how to use dapr directly from IntelliJ. As a pre-requisite, make sure you have initialized Dapr’s dev environment via dapr init.
Let’s get started!
Add Dapr as an ‘External Tool’
First, quit IntelliJ before modifying the configurations file directly.
IntelliJ configuration file location
For versions 2020.1 and above, the configuration files for tools should be located in:
- Windows: %USERPROFILE%\AppData\Roaming\JetBrains\IntelliJIdea2020.1\tools\
- Linux: $HOME/.config/JetBrains/IntelliJIdea2020.1/tools/
- macOS: ~/Library/Application\ Support/JetBrains/IntelliJIdea2020.1/tools/
The configuration file location is different for version 2019.3 or prior. See here for more details.
Change the version of IntelliJ in the path if needed.
Create or edit the file in <CONFIG PATH>/tools/External\ Tools.xml
(change IntelliJ version in path if needed). The <CONFIG PATH>
is OS dependent as seen above.
Add a new <tool></tool>
entry:
<toolSet name="External Tools">
...
<!-- 1. Each tool has its own app-id, so create one per application to be debugged -->
<tool name="dapr for DemoService in examples" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
<exec>
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
<option name="COMMAND" value="C:\dapr\dapr.exe" />
<!-- 3. Choose app, http and grpc ports that do not conflict with other daprd command entries (placement address should not change). -->
<option name="PARAMETERS" value="run -app-id demoservice -app-port 3000 -dapr-http-port 3005 -dapr-grpc-port 52000" />
<!-- 4. Use the folder where the `components` folder is located -->
<option name="WORKING_DIRECTORY" value="C:/Code/dapr/java-sdk/examples" />
</exec>
</tool>
...
</toolSet>
Optionally, you may also create a new entry for a sidecar tool that can be reused across many projects:
<toolSet name="External Tools">
...
<!-- 1. Reusable entry for apps with app port. -->
<tool name="dapr with app-port" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
<exec>
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
<option name="COMMAND" value="c:\dapr\dapr.exe" />
<!-- 3. Prompts user 4 times (in order): app id, app port, Dapr's http port, Dapr's grpc port. -->
<option name="PARAMETERS" value="run --app-id $Prompt$ --app-port $Prompt$ --dapr-http-port $Prompt$ --dapr-grpc-port $Prompt$" />
<!-- 4. Use the folder where the `components` folder is located -->
<option name="WORKING_DIRECTORY" value="$ProjectFileDir$" />
</exec>
</tool>
<!-- 1. Reusable entry for apps without app port. -->
<tool name="dapr without app-port" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
<exec>
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
<option name="COMMAND" value="c:\dapr\dapr.exe" />
<!-- 3. Prompts user 3 times (in order): app id, Dapr's http port, Dapr's grpc port. -->
<option name="PARAMETERS" value="run --app-id $Prompt$ --dapr-http-port $Prompt$ --dapr-grpc-port $Prompt$" />
<!-- 4. Use the folder where the `components` folder is located -->
<option name="WORKING_DIRECTORY" value="$ProjectFileDir$" />
</exec>
</tool>
...
</toolSet>
Create or edit run configuration
Now, create or edit the run configuration for the application to be debugged. It can be found in the menu next to the main()
function.
Now, add the program arguments and environment variables. These need to match the ports defined in the entry in ‘External Tool’ above.
- Command line arguments for this example: -p 3000
- Environment variables for this example: DAPR_HTTP_PORT=3005;DAPR_GRPC_PORT=52000
Start debugging
Once the one-time config above is done, there are two steps required to debug a Java application with Dapr in IntelliJ:
- Start dapr via Tools -> External Tool in IntelliJ.
- Start your application in debug mode.
Wrapping up
After debugging, make sure you stop both dapr
and your app in IntelliJ.
Note: Since you launched the service(s) using the dapr run CLI command, the dapr list command will show runs from IntelliJ in the list of apps that are currently running with Dapr.
Happy debugging!
Related links
- Change in IntelliJ configuration directory location
5.2 - Multi-App Run
5.2.1 - Multi-App Run overview
Note
Multi-App Run for Kubernetes is currently a preview feature.
Let’s say you want to run several applications locally to test them together, similar to a production scenario. Multi-App Run allows you to start and stop a set of applications simultaneously, either:
- Locally/self-hosted with processes, or
- By building container images and deploying to a Kubernetes cluster
- You can use a local Kubernetes cluster (KinD) or one deployed to a cloud (AKS, EKS, or GKE).
The Multi-App Run template file describes how to start multiple applications as if you had run many separate CLI run commands. By default, this template file is called dapr.yaml.
Multi-App Run template file
When you execute dapr run -f ., it uses the multi-app template file (named dapr.yaml) in the current directory to run all the applications.
You can give the template file a name other than the default. For example: dapr run -f ./<your-preferred-file-name>.yaml.
The following example includes some of the template properties you can customize for your applications. In the example, you can simultaneously launch 2 applications with app IDs of processor and emit-metrics.
version: 1
apps:
- appID: processor
appDirPath: ../apps/processor/
appPort: 9081
daprHTTPPort: 3510
command: ["go","run", "app.go"]
- appID: emit-metrics
appDirPath: ../apps/emit-metrics/
daprHTTPPort: 3511
env:
DAPR_HOST_ADD: localhost
command: ["go","run", "app.go"]
For a more in-depth example and explanation of the template properties, see Multi-app template.
Locations for resources and configuration files
You have options on where to place your applications’ resources and configuration files when using Multi-App Run.
Point to one file location (with convention)
You can set all of your applications’ resources and configurations at the ~/.dapr root. This is helpful when all applications share the same resources path, like when testing on a local machine.
Separate file locations for each application (with convention)
When using Multi-App Run, each application directory can have a .dapr folder, which contains a config.yaml file and a resources directory. If the .dapr directory is not present within the app directory, the default ~/.dapr/resources/ and ~/.dapr/config.yaml locations are used.
If you decide to add a .dapr directory in each application directory, with a /resources directory and config.yaml file, you can specify different resources paths for each application. This approach remains within convention by using the default ~/.dapr.
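For example, a per-app layout following this convention might look like the following (directory names are illustrative):
apps/
  webapp/
    .dapr/
      config.yaml
      resources/
  backend/
    .dapr/
      config.yaml
      resources/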
Point to separate locations (custom)
You can also name each app directory’s .dapr directory something other than .dapr, such as webapp or backend. This helps if you’d like to be explicit about resource or application directory paths.
Logs
The run template provides two log destination fields for each application and its associated daprd process:
- appLogDestination: This field configures the log destination for the application. The possible values are console, file and fileAndConsole. The default value is fileAndConsole, where application logs are written to both console and to a file by default.
- daprdLogDestination: This field configures the log destination for the daprd process. The possible values are console, file and fileAndConsole. The default value is file, where the daprd logs are written to a file by default.
Log file format
Logs for the application and daprd are captured in separate files. These log files are created automatically under the .dapr/logs directory in each application directory (appDirPath in the template). The log file names follow the pattern seen below:
- <appID>_app_<timestamp>.log (file name format for app log)
- <appID>_daprd_<timestamp>.log (file name format for daprd log)
Even if you’ve decided to rename your resources folder to something other than .dapr, the log files are written only to the .dapr/logs folder (created in the application directory).
Watch the demo
Multi-App Run template file
When you execute dapr run -k -f . or dapr run -k -f dapr.yaml, the applications defined in the dapr.yaml Multi-App Run template file start in the default Kubernetes namespace.
Note: Currently, the Multi-App Run template can only start applications in the default Kubernetes namespace.
The necessary default service and deployment definitions for Kubernetes are generated within the .dapr/deploy folder for each app in the dapr.yaml template.
If the createService field is set to true in the dapr.yaml template for an app, then the service.yaml file is generated in the .dapr/deploy folder of the app. Otherwise, only the deployment.yaml file is generated for each app that has the containerImage field set.
The files service.yaml and deployment.yaml are used to deploy the applications in the default namespace in Kubernetes. This feature is specifically targeted only at running multiple apps in a dev/test environment in Kubernetes.
You can name the template file with any preferred name other than the default. For example:
dapr run -k -f ./<your-preferred-file-name>.yaml
The following example includes some of the template properties you can customize for your applications. In the example, you can simultaneously launch 2 applications with app IDs of nodeapp and pythonapp.
version: 1
common:
apps:
- appID: nodeapp
appDirPath: ./nodeapp/
appPort: 3000
containerImage: ghcr.io/dapr/samples/hello-k8s-node:latest
containerImagePullPolicy: Always
createService: true
env:
APP_PORT: 3000
- appID: pythonapp
appDirPath: ./pythonapp/
containerImage: ghcr.io/dapr/samples/hello-k8s-python:latest
Note:
- If the containerImage field is not specified, dapr run -k -f produces an error.
- The containerImagePullPolicy indicates that a new container image is always downloaded for this app.
- The createService field defines a basic service in Kubernetes (ClusterIP or LoadBalancer) that targets the --app-port specified in the template. If createService isn’t specified, the application is not accessible from outside the cluster.
For a more in-depth example and explanation of the template properties, see Multi-app template.
Logs
The run template provides two log destination fields for each application and its associated daprd process:
- appLogDestination: This field configures the log destination for the application. The possible values are console, file and fileAndConsole. The default value is fileAndConsole, where application logs are written to both console and to a file by default.
- daprdLogDestination: This field configures the log destination for the daprd process. The possible values are console, file and fileAndConsole. The default value is file, where the daprd logs are written to a file by default.
Log file format
Logs for the application and daprd are captured in separate files. These log files are created automatically under the .dapr/logs directory in each application directory (appDirPath in the template). The log file names follow the pattern seen below:
- <appID>_app_<timestamp>.log (file name format for app log)
- <appID>_daprd_<timestamp>.log (file name format for daprd log)
Even if you’ve decided to rename your resources folder to something other than .dapr, the log files are written only to the .dapr/logs folder (created in the application directory).
Watch the demo
Watch this video for an overview on Multi-App Run in Kubernetes:
Next steps
5.2.2 - How to: Use the Multi-App Run template file
Note
Multi-App Run for Kubernetes is currently a preview feature.
The Multi-App Run template file is a YAML file that you can use to run multiple applications at once. In this guide, you’ll learn how to:
- Use the multi-app template
- View started applications
- Stop the multi-app template
- Structure the multi-app template file
Use the multi-app template
You can use the multi-app template file in one of the following two ways:
Execute by providing a directory path
When you provide a directory path, the CLI will try to locate the Multi-App Run template file (named dapr.yaml by default) in that directory. If the file is not found, the CLI will return an error.
Execute the following CLI command to read the Multi-App Run template file, named dapr.yaml by default:
# the template file needs to be called `dapr.yaml` by default if a directory path is given
dapr run -f <dir_path>
dapr run -f <dir_path> -k
Execute by providing a file path
If the Multi-App Run template file is named something other than dapr.yaml
, then you can provide the relative or absolute file path to the command:
dapr run -f ./path/to/<your-preferred-file-name>.yaml
dapr run -f ./path/to/<your-preferred-file-name>.yaml -k
View the started applications
Once the multi-app template is running, you can view the started applications with the following command:
dapr list
dapr list -k
Stop the multi-app template
Stop the multi-app run template anytime with either of the following commands:
# the template file needs to be called `dapr.yaml` by default if a directory path is given
dapr stop -f <dir_path>
or:
dapr stop -f ./path/to/<your-preferred-file-name>.yaml
# the template file needs to be called `dapr.yaml` by default if a directory path is given
dapr stop -f <dir_path> -k
or:
dapr stop -f ./path/to/<your-preferred-file-name>.yaml -k
Template file structure
The Multi-App Run template file can include the following properties. Below is an example template showing two applications that are configured with some of the properties.
version: 1
common: # optional section for variables shared across apps
resourcesPath: ./app/components # any dapr resources to be shared across apps
env: # any environment variable shared across apps
DEBUG: true
apps:
- appID: webapp # optional
appDirPath: .dapr/webapp/ # REQUIRED
resourcesPath: .dapr/resources # deprecated
resourcesPaths: .dapr/resources # comma separated resources paths. (optional) can be left to default value by convention.
appChannelAddress: 127.0.0.1 # network address where the app listens on. (optional) can be left to default value by convention.
configFilePath: .dapr/config.yaml # (optional) can be default by convention too, ignore if file is not found.
appProtocol: http
appPort: 8080
appHealthCheckPath: "/healthz"
command: ["python3", "app.py"]
appLogDestination: file # (optional), can be file, console or fileAndConsole. default is fileAndConsole.
daprdLogDestination: file # (optional), can be file, console or fileAndConsole. default is file.
- appID: backend # optional
appDirPath: .dapr/backend/ # REQUIRED
appProtocol: grpc
appPort: 3000
unixDomainSocket: "/tmp/test-socket"
env:
DEBUG: false
command: ["./backend"]
The following rules apply for all the paths present in the template file:
- If the path is absolute, it is used as is.
- All relative paths under the common section should be provided relative to the template file path.
- appDirPath under the apps section should be provided relative to the template file path.
- All other relative paths under the apps section should be provided relative to the appDirPath.
version: 1
common: # optional section for variables shared across apps
env: # any environment variable shared across apps
DEBUG: true
apps:
- appID: webapp # optional
appDirPath: .dapr/webapp/ # REQUIRED
appChannelAddress: 127.0.0.1 # network address where the app listens on. (optional) can be left to default value by convention.
appProtocol: http
appPort: 8080
appHealthCheckPath: "/healthz"
appLogDestination: file # (optional), can be file, console or fileAndConsole. default is fileAndConsole.
daprdLogDestination: file # (optional), can be file, console or fileAndConsole. default is file.
containerImage: ghcr.io/dapr/samples/hello-k8s-node:latest # (optional) URI of the container image to be used when deploying to Kubernetes dev/test environment.
containerImagePullPolicy: IfNotPresent # (optional), the container image is downloaded if one is not present locally, otherwise the local one is used.
createService: true # (optional) Create a Kubernetes service for the application when deploying to dev/test environment.
- appID: backend # optional
appDirPath: .dapr/backend/ # REQUIRED
appProtocol: grpc
appPort: 3000
unixDomainSocket: "/tmp/test-socket"
env:
DEBUG: false
The following rules apply for all the paths present in the template file:
- If the path is absolute, it is used as is.
- appDirPath under the apps section should be provided relative to the template file path.
- All relative paths under the app section should be provided relative to the appDirPath.
Template properties
The properties for the Multi-App Run template align with the dapr run
CLI flags, listed in the CLI reference documentation.
Properties | Required | Details | Example |
---|---|---|---|
appDirPath | Y | Path to your application code | ./webapp/ , ./backend/ |
appID | N | Application’s app ID. If not provided, will be derived from appDirPath | webapp , backend |
resourcesPath | N | Deprecated. Path to your Dapr resources. Can be default value by convention | ./app/components , ./webapp/components |
resourcesPaths | N | Comma separated paths to your Dapr resources. Can be default value by convention | ./app/components , ./webapp/components |
appChannelAddress | N | The network address the application listens on. Can be left to the default value by convention. | 127.0.0.1 |
configFilePath | N | Path to your application’s configuration file | ./webapp/config.yaml |
appProtocol | N | The protocol Dapr uses to talk to the application. | http , grpc |
appPort | N | The port your application is listening on | 8080 , 3000 |
daprHTTPPort | N | Dapr HTTP port | |
daprGRPCPort | N | Dapr GRPC port | |
daprInternalGRPCPort | N | gRPC port for the Dapr Internal API to listen on; used when parsing the value from a local DNS component | |
metricsPort | N | The port that Dapr sends its metrics information to | |
unixDomainSocket | N | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. | /tmp/test-socket |
profilePort | N | The port for the profile server to listen on | |
enableProfiling | N | Enable profiling via an HTTP endpoint | |
apiListenAddresses | N | Dapr API listen addresses | |
logLevel | N | The log verbosity. | |
appMaxConcurrency | N | The concurrency level of the application; default is unlimited | |
placementHostAddress | N | Comma separated list of addresses for Dapr placement servers | 127.0.0.1:50057,127.0.0.1:50058 |
schedulerHostAddress | N | Dapr Scheduler Service host address | localhost:50006 |
appSSL | N | Enable https when Dapr invokes the application | |
maxBodySize | N | Max size of the request body in MB. Set the value using size units (e.g., 16Mi for 16MB). The default is 4Mi | |
readBufferSize | N | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. Set the value using size units, for example 32Ki will support headers up to 32KB . Default is 4Ki for 4KB | |
enableAppHealthCheck | N | Enable the app health check on the application | true , false |
appHealthCheckPath | N | Path to the health check file | /healthz |
appHealthProbeInterval | N | Interval to probe for the health of the app in seconds | |
appHealthProbeTimeout | N | Timeout for app health probes in milliseconds | |
appHealthThreshold | N | Number of consecutive failures for the app to be considered unhealthy | |
enableApiLogging | N | Enable the logging of all API calls from application to Dapr | |
runtimePath | N | Dapr runtime install path | |
env | N | Map to environment variable; environment variables applied per application will overwrite environment variables shared across applications | DEBUG , DAPR_HOST_ADD |
appLogDestination | N | Log destination for outputting app logs; Its value can be file, console or fileAndConsole. Default is fileAndConsole | file , console , fileAndConsole |
daprdLogDestination | N | Log destination for outputting daprd logs; Its value can be file, console or fileAndConsole. Default is file | file , console , fileAndConsole |
Next steps
The properties for the Multi-App Run template align with the dapr run -k
CLI flags, listed in the CLI reference documentation.
Properties | Required | Details | Example |
---|---|---|---|
appDirPath | Y | Path to your application code | ./webapp/ , ./backend/ |
appID | N | Application’s app ID. If not provided, will be derived from appDirPath | webapp , backend |
appChannelAddress | N | The network address the application listens on. Can be left to the default value by convention. | 127.0.0.1 , localhost |
appProtocol | N | The protocol Dapr uses to talk to the application. | http , grpc |
appPort | N | The port your application is listening on | 8080 , 3000 |
daprHTTPPort | N | Dapr HTTP port | |
daprGRPCPort | N | Dapr GRPC port | |
daprInternalGRPCPort | N | gRPC port for the Dapr Internal API to listen on; used when parsing the value from a local DNS component | |
metricsPort | N | The port that Dapr sends its metrics information to | |
unixDomainSocket | N | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. | /tmp/test-socket |
profilePort | N | The port for the profile server to listen on | |
enableProfiling | N | Enable profiling via an HTTP endpoint | |
apiListenAddresses | N | Dapr API listen addresses | |
logLevel | N | The log verbosity. | |
appMaxConcurrency | N | The concurrency level of the application; default is unlimited | |
placementHostAddress | N | Comma separated list of addresses for Dapr placement servers | 127.0.0.1:50057,127.0.0.1:50058 |
schedulerHostAddress | N | Dapr Scheduler Service host address | 127.0.0.1:50006 |
appSSL | N | Enable HTTPS when Dapr invokes the application | |
maxBodySize | N | Max size of the request body in MB. Set the value using size units (e.g., 16Mi for 16MB). The default is 4Mi | 16Mi |
readBufferSize | N | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. Set the value using size units, for example 32Ki will support headers up to 32KB . Default is 4Ki for 4KB | 32Ki |
enableAppHealthCheck | N | Enable the app health check on the application | true , false |
appHealthCheckPath | N | Path to the health check file | /healthz |
appHealthProbeInterval | N | Interval to probe for the health of the app in seconds | |
appHealthProbeTimeout | N | Timeout for app health probes in milliseconds | |
appHealthThreshold | N | Number of consecutive failures for the app to be considered unhealthy | |
enableApiLogging | N | Enable the logging of all API calls from application to Dapr | |
env | N | Map to environment variable; environment variables applied per application will overwrite environment variables shared across applications | DEBUG , DAPR_HOST_ADD |
appLogDestination | N | Log destination for outputting app logs; Its value can be file, console or fileAndConsole. Default is fileAndConsole | file , console , fileAndConsole |
daprdLogDestination | N | Log destination for outputting daprd logs; Its value can be file, console or fileAndConsole. Default is file | file , console , fileAndConsole |
containerImage | N | URI of the container image to be used when deploying to Kubernetes dev/test environment. | ghcr.io/dapr/samples/hello-k8s-python:latest |
containerImagePullPolicy | N | The container image pull policy (default to Always ). | Always , IfNotPresent , Never |
createService | N | Create a Kubernetes service for the application when deploying to dev/test environment. | true , false |
Next steps
Watch this video for an overview on Multi-App Run in Kubernetes:
5.3 - How to: Use the gRPC interface in your Dapr application
Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high-performance scenarios and has language integration using the proto clients.
Find a list of auto-generated clients in the Dapr SDK documentation.
The Dapr runtime implements a proto service that apps can communicate with via gRPC.
In addition to calling Dapr via gRPC, Dapr supports service-to-service calls with gRPC by acting as a proxy. Learn more in the gRPC service invocation how-to guide.
This guide demonstrates configuring and invoking Dapr with gRPC using a Go SDK application.
Configure Dapr to communicate with an app via gRPC
When running in self-hosted mode, use the --app-protocol
flag to tell Dapr to use gRPC to talk to the app.
dapr run --app-protocol grpc --app-port 5005 node app.js
This tells Dapr to communicate with your app via gRPC over port 5005.
On Kubernetes, set the following annotations in your deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: default
labels:
app: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "myapp"
dapr.io/app-protocol: "grpc"
dapr.io/app-port: "5005"
...
Invoke Dapr with gRPC
The following steps show how to create a Dapr client and call the SaveState operation on it.
Import the package:
package main

import (
	"context"
	"log"

	dapr "github.com/dapr/go-sdk/client"
)
Create the client:
// just for this demo
ctx := context.Background()
data := []byte("ping")

// create the client
client, err := dapr.NewClient()
if err != nil {
	log.Panic(err)
}
defer client.Close()
Invoke the SaveState method:
// save state with the key key1
err = client.SaveState(ctx, "statestore", "key1", data)
if err != nil {
	log.Panic(err)
}
log.Println("data saved")
Now you can explore all the different methods on the Dapr client.
Create a gRPC app with Dapr
The following steps show how to create an app that exposes a server with which Dapr can communicate.
Import the package:
package main

import (
	"context"
	"fmt"
	"log"
	"net"

	"github.com/golang/protobuf/ptypes/any"
	"github.com/golang/protobuf/ptypes/empty"

	commonv1pb "github.com/dapr/dapr/pkg/proto/common/v1"
	pb "github.com/dapr/dapr/pkg/proto/runtime/v1"

	"google.golang.org/grpc"
)
Implement the interface:
// server is our user app
type server struct {
	pb.UnimplementedAppCallbackServer
}

// EchoMethod is a simple demo method to invoke
func (s *server) EchoMethod() string {
	return "pong"
}

// This method gets invoked when a remote service has called the app through Dapr.
// The payload carries a Method to identify the method, a set of metadata properties and an optional payload.
func (s *server) OnInvoke(ctx context.Context, in *commonv1pb.InvokeRequest) (*commonv1pb.InvokeResponse, error) {
	var response string
	switch in.Method {
	case "EchoMethod":
		response = s.EchoMethod()
	}
	return &commonv1pb.InvokeResponse{
		ContentType: "text/plain; charset=UTF-8",
		Data:        &any.Any{Value: []byte(response)},
	}, nil
}

// Dapr will call this method to get the list of topics the app wants to subscribe to.
// In this example, we are telling Dapr to subscribe to a topic named TopicA.
func (s *server) ListTopicSubscriptions(ctx context.Context, in *empty.Empty) (*pb.ListTopicSubscriptionsResponse, error) {
	return &pb.ListTopicSubscriptionsResponse{
		Subscriptions: []*pb.TopicSubscription{
			{Topic: "TopicA"},
		},
	}, nil
}

// Dapr will call this method to get the list of bindings the app will get invoked by.
// In this example, we are telling Dapr to invoke our app with a binding named storage.
func (s *server) ListInputBindings(ctx context.Context, in *empty.Empty) (*pb.ListInputBindingsResponse, error) {
	return &pb.ListInputBindingsResponse{
		Bindings: []string{"storage"},
	}, nil
}

// This method gets invoked every time a new event is fired from a registered binding.
// The message carries the binding name, a payload and optional metadata.
func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventRequest) (*pb.BindingEventResponse, error) {
	fmt.Println("Invoked from binding")
	return &pb.BindingEventResponse{}, nil
}

// This method is fired whenever a message has been published to a topic that has been subscribed.
// Dapr sends published messages in a CloudEvents 0.3 envelope.
func (s *server) OnTopicEvent(ctx context.Context, in *pb.TopicEventRequest) (*pb.TopicEventResponse, error) {
	fmt.Println("Topic message arrived")
	return &pb.TopicEventResponse{}, nil
}
Create the server:
func main() {
    // create listener
    lis, err := net.Listen("tcp", ":50001")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    // create grpc server
    s := grpc.NewServer()
    pb.RegisterAppCallbackServer(s, &server{})

    fmt.Println("Client starting...")

    // and start...
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
This creates a gRPC server for your app on port 50001.
Run the application
To run locally, use the Dapr CLI:
dapr run --app-id goapp --app-port 50001 --app-protocol grpc go run main.go
On Kubernetes, set the required dapr.io/app-protocol: "grpc"
and dapr.io/app-port: "50001"
annotations in your pod spec template, as mentioned above.
Other languages
You can use Dapr with any language supported by Protobuf, and not just with the currently available generated SDKs.
Using the protoc tool, you can generate the Dapr clients for other languages like Ruby, C++, Rust, and others.
Related Topics
5.4 - Serialization in Dapr's SDKs
Dapr SDKs provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both of these cases, a default serialization method is provided in each language SDK.
Language SDK | Default Serializer |
---|---|
.NET | DataContracts for remoted actors, System.Text.Json otherwise. Read more about .NET serialization here |
Java | DefaultObjectSerializer for JSON serialization |
JavaScript | JSON |
Service invocation
using var client = (new DaprClientBuilder()).Build();
await client.InvokeMethodAsync("myappid", "saySomething", "My Message");
DaprClient client = (new DaprClientBuilder()).build();
client.invokeMethod("myappid", "saySomething", "My Message", HttpExtension.POST).block();
In the example above, the app myappid
receives a POST
request for the saySomething
method with the request payload as "My Message"
- quoted, since the serializer will serialize the input String to JSON.
POST /saySomething HTTP/1.1
Host: localhost
Content-Type: text/plain
Content-Length: 12
"My Message"
State management
using var client = (new DaprClientBuilder()).Build();
var state = new Dictionary<string, string>
{
    { "key", "MyKey" },
    { "value", "My Message" }
};
await client.SaveStateAsync("MyStateStore", "MyKey", state);
DaprClient client = (new DaprClientBuilder()).build();
client.saveState("MyStateStore", "MyKey", "My Message").block();
In this example, My Message
is saved. It is not quoted because Dapr’s API internally parses the JSON request
object before saving it.
[
{
"key": "MyKey",
"value": "My Message"
}
]
PubSub
using var client = (new DaprClientBuilder()).Build();
await client.PublishEventAsync("MyPubSubName", "TopicName", "My Message");
The event is published and the content is serialized to byte[]
and sent to Dapr sidecar. The subscriber receives it as a CloudEvent. Cloud event defines data
as String. The Dapr SDK also provides a built-in deserializer for CloudEvent
objects.
public async Task<IActionResult> HandleMessage(string message)
{
//ASP.NET Core automatically deserializes the UTF-8 encoded bytes to a string
return Ok();
}
or
app.MapPost("/TopicName", [Topic("MyPubSubName", "TopicName")] (string message) => {
return Results.Ok();
});
DaprClient client = (new DaprClientBuilder()).build();
client.publishEvent("MyPubSubName", "TopicName", "My Message").block();
The event is published and the content is serialized to byte[]
and sent to Dapr sidecar. The subscriber receives it as a CloudEvent. Cloud event defines data
as String. The Dapr SDK also provides a built-in deserializer for CloudEvent
objects.
@PostMapping(path = "/TopicName")
public void handleMessage(@RequestBody(required = false) byte[] body) {
// Dapr's event is compliant to CloudEvent.
CloudEvent event = CloudEvent.deserialize(body);
}
Bindings
For output bindings the object is serialized to byte[]
whereas the input binding receives the raw byte[]
as-is and deserializes it to the expected object type.
- Output binding:
using var client = (new DaprClientBuilder()).Build();
await client.InvokeBindingAsync("sample", "My Message");
- Input binding (controllers):
[ApiController]
public class SampleController : ControllerBase
{
[HttpPost("propagate")]
public ActionResult<string> GetValue([FromBody] int itemId)
{
Console.WriteLine($"Received message: {itemId}");
return $"itemID:{itemId}";
}
}
- Input binding (minimal API):
app.MapPost("value", ([FromBody] int itemId) =>
{
Console.WriteLine($"Received message: {itemId}");
return $"itemID:{itemId}";
});
- Output binding:
DaprClient client = (new DaprClientBuilder()).build();
client.invokeBinding("sample", "My Message").block();
- Input binding:
@PostMapping(path = "/sample")
public void handleInputBinding(@RequestBody(required = false) byte[] body) {
String message = (new DefaultObjectSerializer()).deserialize(body, String.class);
System.out.println(message);
}
It should print:
My Message
Actor Method invocation
Object serialization and deserialization for Actor method invocation is the same as for service method invocation; the only difference is that the application does not need to deserialize the request or serialize the response, since it is all done transparently by the SDK.
For Actor methods, the SDK only supports methods with zero or one parameter.
The .NET SDK supports two different serialization types based on whether you're using a strongly-typed (DataContracts) or weakly-typed (DataContracts or System.Text.JSON) actor client. This document can provide more information about the differences between each and additional considerations to keep in mind.
- Invoking an Actor’s method using the weakly-typed client and System.Text.JSON:
var proxy = this.ProxyFactory.Create(ActorId.CreateRandom(), "DemoActor");
await proxy.SayAsync("My message");
- Implementing an Actor’s method:
public Task SayAsync(string message)
{
Console.WriteLine(message);
return Task.CompletedTask;
}
- Invoking an Actor’s method:
public static void main() {
    ActorProxyBuilder builder = new ActorProxyBuilder("DemoActor");
    // build a proxy for the target actor instance
    ActorProxy actor = builder.build(new ActorId("100"));
    String result = actor.invokeActorMethod("say", "My Message", String.class).block();
}
- Implementing an Actor’s method:
public String say(String something) {
System.out.println(something);
return "OK";
}
It should print:
My Message
Actor’s state management
Actors can also have state. In this case, the state manager will serialize and deserialize the objects using the state serializer and handle it transparently to the application.
public async Task<string> SayAsync(string message)
{
// Reads state from a key
var previousMessage = await this.StateManager.GetStateAsync<string>("lastmessage");
// Sets the new state for the key after serializing it
await this.StateManager.SetStateAsync("lastmessage", message);
return previousMessage;
}
public String actorMethod(String message) {
// Reads a state from key and deserializes it to String.
String previousMessage = super.getActorStateManager().get("lastmessage", String.class).block();
// Sets the new state for the key after serializing it.
super.getActorStateManager().set("lastmessage", message).block();
return previousMessage;
}
Default serializer
The default serializer for Dapr is a JSON serializer with the following expectations:
- Use of basic JSON data types for cross-language and cross-platform compatibility: string, number, array, boolean, null and another JSON object. Every complex property type in the application’s serializable objects (DateTime, for example) should be represented as one of JSON’s basic types.
- Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. The example below shows how a string and a JSON object would look like in a Redis store.
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message"
"This is a message to be saved and retrieved."
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata"
{"value":"My data value."}
- Custom serializers must serialize objects to byte[].
- Custom serializers must deserialize byte[] to objects.
- When the user provides a custom serializer, it should be transferred or persisted as byte[]. When persisting, also encode as a Base64 string. This is done natively by most JSON libraries.
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message"
"VGhpcyBpcyBhIG1lc3NhZ2UgdG8gYmUgc2F2ZWQgYW5kIHJldHJpZXZlZC4="
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata"
"eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0="
6 - Debugging Dapr applications and the Dapr control plane
6.1 - Debug Dapr in Kubernetes mode
6.1.1 - Debug Dapr control plane on Kubernetes
Overview
Sometimes it is necessary to understand what’s going on in Dapr control plane (aka, Kubernetes services), including dapr-sidecar-injector
, dapr-operator
, dapr-placement
, and dapr-sentry
, especially when you diagnose your Dapr application and wonder if there’s something wrong in Dapr itself. Additionally, you may be developing a new feature for Dapr on Kubernetes and want to debug your code.
This guide will cover how to use Dapr debugging binaries to debug the Dapr services on your Kubernetes cluster.
Debugging Dapr Kubernetes services
Pre-requisites
- Familiarize yourself with this guide to learn how to deploy Dapr to your Kubernetes cluster.
- Setup your dev environment
- Helm
1. Build Dapr debugging binaries
In order to debug Dapr Kubernetes services, it’s required to rebuild all Dapr binaries and Docker images to disable compiler optimization. To do this, execute the following commands:
git clone https://github.com/dapr/dapr.git
cd dapr
make release GOOS=linux GOARCH=amd64 DEBUG=1
On Windows, download MinGW and use mingw32-make.exe
instead of make
.
In the above command, DEBUG is set to 1 to disable compiler optimization. GOOS=linux and GOARCH=amd64 are also necessary, since the binaries will be packaged into a Linux-based Docker image in the next step.
The binaries can be found in the dist/linux_amd64/debug subdirectory under the dapr directory.
2. Build Dapr debugging Docker images
Use the following commands to package the debugging binaries into Docker images. Before this, you need to log in to your docker.io account; if you don’t have one yet, register at https://hub.docker.com/.
export DAPR_TAG=dev
export DAPR_REGISTRY=<your docker.io id>
docker login
make docker-push DEBUG=1
Once the Dapr Docker images are built and pushed to Docker Hub, you are ready to re-install Dapr in your Kubernetes cluster.
3. Install Dapr debugging binaries
If Dapr has already been installed in your Kubernetes cluster, uninstall it first:
dapr uninstall -k
We will use ‘helm’ to install Dapr debugging binaries. In the following sections, we will use Dapr operator as an example to demonstrate how to configure, install, and debug Dapr services in a Kubernetes environment.
First configure a values file with these options:
global:
registry: docker.io/<your docker.io id>
tag: "dev-linux-amd64"
dapr_operator:
debug:
enabled: true
initialDelaySeconds: 3000
Notice
If you need to debug the startup time of Dapr services, configure initialDelaySeconds
to a very long value, e.g. 3000 seconds. Otherwise, configure it to a short value, e.g. 3 seconds.
Then step into the dapr directory cloned from GitHub at the beginning of this guide, if you haven’t already, and execute the following command:
helm install dapr charts/dapr --namespace dapr-system --values values.yml --wait
4. Forward debugging port
To debug the target Dapr service (Dapr operator in this case), its pre-configured debug port needs to be visible to your IDE. In order to achieve this, we need to find the target Dapr service’s pod first:
$ kubectl get pods -n dapr-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dapr-dashboard-64b46f98b6-dl2n9 1/1 Running 0 61s 172.17.0.9 minikube <none> <none>
dapr-operator-7878f94fcd-6bfx9 1/1 Running 1 61s 172.17.0.7 minikube <none> <none>
dapr-placement-server-0 1/1 Running 1 61s 172.17.0.8 minikube <none> <none>
dapr-sentry-68c7d4c7df-sc47x 1/1 Running 0 61s 172.17.0.6 minikube <none> <none>
dapr-sidecar-injector-56c8f489bb-t2st9 1/1 Running 0 61s 172.17.0.10 minikube <none> <none>
Then use kubectl’s port-forward
command to expose the internal debug port to the external IDE:
$ kubectl port-forward dapr-operator-7878f94fcd-6bfx9 40000:40000 -n dapr-system
Forwarding from 127.0.0.1:40000 -> 40000
Forwarding from [::1]:40000 -> 40000
All done. Now you can point to port 40000 and start a remote debug session from your favorite IDE.
Related links
6.1.2 - Debug daprd on Kubernetes
Overview
Sometimes it is necessary to understand what’s going on in the Dapr sidecar (daprd), which runs as a sidecar next to your application, especially when you diagnose your Dapr application and wonder if there’s something wrong in Dapr itself. Additionally, you may be developing a new feature for Dapr on Kubernetes and want to debug your code.
This guide covers how to use built-in Dapr debugging to debug the Dapr sidecar in your Kubernetes pods. To learn how to view logs and troubleshoot Dapr in Kubernetes, see the Configure and view Dapr logs guide.
Pre-requisites
- Refer to this guide to learn how to deploy Dapr to your Kubernetes cluster.
- Follow this guide to build the Dapr debugging binaries you will be deploying in the next step.
Initialize Dapr in debug mode
If Dapr has already been installed in your Kubernetes cluster, uninstall it first:
dapr uninstall -k
We will use ‘helm’ to install Dapr debugging binaries. For more information refer to Install with Helm.
First configure a values file named values.yml
with these options:
global:
registry: docker.io/<your docker.io id>
tag: "dev-linux-amd64"
Then step into the dapr directory of your cloned dapr/dapr repository and execute the following command:
helm install dapr charts/dapr --namespace dapr-system --values values.yml --wait
To enable debug mode for daprd, you need to add an extra annotation dapr.io/enable-debug
in your application’s deployment file. Let’s use quickstarts/hello-kubernetes as an example. Modify deploy/node.yaml as shown below:
diff --git a/hello-kubernetes/deploy/node.yaml b/hello-kubernetes/deploy/node.yaml
index 23185a6..6cdb0ae 100644
--- a/hello-kubernetes/deploy/node.yaml
+++ b/hello-kubernetes/deploy/node.yaml
@@ -33,6 +33,7 @@ spec:
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
+ dapr.io/enable-debug: "true"
spec:
containers:
- name: node
The annotation dapr.io/enable-debug
hints the Dapr injector to inject the Dapr sidecar in debug mode. You can also specify the debug port with the annotation dapr.io/debug-port
; otherwise, the default port is 40000.
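For reference, the pod template annotations from the example above, with the optional debug port set explicitly, would look like the following sketch (40000 is already the default, so the last line is illustrative):
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/app-port: "3000"
  dapr.io/enable-debug: "true"
  dapr.io/debug-port: "40000"  # optional; 40000 is the default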
Deploy the application with the following command. For the complete guide refer to the Dapr Kubernetes Quickstart:
kubectl apply -f ./deploy/node.yaml
Figure out the target application’s pod name with the following command:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nodeapp-78866448f5-pqdtr 1/2 Running 0 14s
Then use kubectl’s port-forward
command to expose the internal debug port to the external IDE:
$ kubectl port-forward nodeapp-78866448f5-pqdtr 40000:40000
Forwarding from 127.0.0.1:40000 -> 40000
Forwarding from [::1]:40000 -> 40000
All done. Now you can point to port 40000 and start a remote debug session to daprd from your favorite IDE.
Commonly used kubectl
commands
Use the following common kubectl
commands when debugging daprd and applications running on Kubernetes.
Get all pods, events, and services:
kubectl get all
kubectl get all -n <namespace>
kubectl get all --all-namespaces
Get each specifically:
kubectl get pods
kubectl get events -n <namespace>
kubectl get events --sort-by=.metadata.creationTimestamp -n <namespace>
kubectl get services
Check logs:
kubectl logs <podId> daprd
kubectl logs <podId> <myAppContainerName>
kubectl logs <deploymentId> daprd
kubectl logs <deploymentId> <myAppContainerName>
kubectl describe pod <podId>
kubectl describe deploy <deployId>
kubectl describe replicaset <replicasetId>
Restart a pod by running the following command:
kubectl delete pod <podId>
This causes the replicaset
controller to restart the pod after the delete.
Watch the demo
See the presentation on troubleshooting Dapr on Kubernetes in the Dapr Community Call #36.
Related links
6.2 - Debugging Dapr Apps running in Docker Compose
The goal of this article is to demonstrate a way to debug one or more Daprized applications (via your IDE, locally) while remaining integrated with the other applications deployed in the Docker Compose environment.
Let’s take the minimal example of a Docker Compose file which contains just two services:
- nodeapp - your app
- nodeapp-dapr - the Dapr sidecar process for your nodeapp service
compose.yml
services:
nodeapp:
build: ./node
ports:
- "50001:50001"
networks:
- hello-dapr
nodeapp-dapr:
image: "daprio/daprd:edge"
command: [
"./daprd",
"--app-id", "nodeapp",
"--app-port", "3000",
"--resources-path", "./components"
]
volumes:
- "./components/:/components"
depends_on:
- nodeapp
network_mode: "service:nodeapp"
networks:
hello-dapr:
When you run this compose file with docker compose -f compose.yml up
it deploys to Docker and runs as normal.
But how do we debug the nodeapp
while still integrated to the running dapr sidecar process, and anything else that you may have deployed via the Docker compose file?
Let’s start by introducing a second Docker Compose file called compose.debug.yml
. This second compose file augments the first compose file when the up
command is run.
compose.debug.yml
services:
nodeapp: # Isolate the nodeapp by removing its ports and taking it off the network
ports: !reset []
networks: !reset
- ""
nodeapp-dapr:
command: ["./daprd",
"--app-id", "nodeapp",
"--app-port", "8080", # This must match the port that your app is exposed on when debugging in the IDE
"--resources-path", "./components",
"--app-channel-address", "host.docker.internal"] # Make the sidecar look on the host for the App Channel
network_mode: !reset "" # Reset the network_mode...
networks: # ... so that the sidecar can go into the normal network
- hello-dapr
ports:
- "3500:3500" # Expose the HTTP port to the host
- "50001:50001" # Expose the GRPC port to the host (Dapr Worfklows depends upon the GRPC channel)
Next, ensure that your nodeapp
is running/debugging in your IDE of choice, and is exposed on the same port that you specified above in the compose.debug.yml
- in the example above, this is set to port 8080
.
Next, stop any existing compose sessions you may have started, and run the following command to run both Docker Compose files together:
docker compose -f compose.yml -f compose.debug.yml up
You should now find that the dapr sidecar and your debugging app will have bi-directional communication with each other as if they were running together as normal in the Docker compose environment.
Note: It’s important to highlight that the nodeapp
service in the Docker Compose environment is actually still running; however, it has been removed from the Docker network, so it is effectively orphaned, as nothing can communicate with it.
Demo: Watch this video on how to debug local Dapr apps with Docker Compose
7 - Integrations
7.1 - Integrations with AWS
7.1.1 - Authenticating to AWS
Dapr components leveraging AWS services (for example, DynamoDB, SQS, S3) utilize standardized configuration attributes via the AWS SDK. Learn more about how the AWS SDK handles credentials.
You can configure authentication using the AWS SDK’s default provider chain or one of the predefined AWS authentication profiles outlined below. Verify your component configuration by testing and inspecting Dapr runtime logs to confirm proper initialization.
Terminology
- ARN (Amazon Resource Name): A unique identifier used to specify AWS resources. Format:
arn:partition:service:region:account-id:resource
. Example:arn:aws:iam::123456789012:role/example-role
. - IAM (Identity and Access Management): AWS’s service for managing access to AWS resources securely.
Authentication Profiles
Access Key ID and Secret Access Key
Use static Access Key and Secret Key credentials, either through component metadata fields or via default AWS configuration.
Important
Prefer loading credentials via the default AWS configuration in scenarios such as:
- Running the Dapr sidecar (
daprd
) with your application on EKS (AWS Kubernetes). - Using nodes or pods attached to IAM policies that define AWS resource access.
Attribute | Required | Description | Example |
---|---|---|---|
region | Y | AWS region to connect to. | “us-east-1” |
accessKey | N | AWS Access key id. Will be required in Dapr v1.17. | “AKIAIOSFODNN7EXAMPLE” |
secretKey | N | AWS Secret access key, used alongside accessKey . Will be required in Dapr v1.17. | “wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY” |
sessionToken | N | AWS Session token, used with accessKey and secretKey . Often unnecessary for IAM user keys. | |
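For illustration, here is a sketch of how these fields appear in a component manifest. The component name and type below are hypothetical placeholders; only the region, accessKey, and secretKey entries come from the table above:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mystatestore        # hypothetical component name
spec:
  type: state.aws.dynamodb  # an AWS-backed component type, shown as an example
  version: v1
  metadata:
  # ... component-specific fields go here ...
  - name: region
    value: "us-east-1"
  - name: accessKey
    value: "AKIAIOSFODNN7EXAMPLE"
  - name: secretKey
    value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"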
Assume IAM Role
This profile allows Dapr to assume a specific IAM Role. Typically used when the Dapr sidecar runs on EKS or nodes/pods linked to IAM policies. Currently supported by Kafka and PostgreSQL components.
Attribute | Required | Description | Example |
---|---|---|---|
region | Y | AWS region to connect to. | “us-east-1” |
assumeRoleArn | N | ARN of the IAM role with AWS resource access. Will be required in Dapr v1.17. | “arn:aws:iam::123456789:role/mskRole” |
sessionName | N | Session name for role assumption. Default is "DaprDefaultSession" . | “MyAppSession” |
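As a sketch, an assume-role configuration for a Kafka pub/sub component might carry these metadata fields (values taken from the table above; the component name is a hypothetical placeholder):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub   # hypothetical component name
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  # ... broker and other component-specific fields go here ...
  - name: region
    value: "us-east-1"
  - name: assumeRoleArn
    value: "arn:aws:iam::123456789:role/mskRole"
  - name: sessionName
    value: "MyAppSession"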
Credentials from Environment Variables
Authenticate using environment variables. This is especially useful for Dapr in self-hosted mode where sidecar injectors don’t configure environment variables.
There are no metadata fields required for this authentication profile.
IAM Roles Anywhere
IAM Roles Anywhere extends IAM role-based authentication to external workloads. It eliminates the need for long-term credentials by using cryptographically signed certificates, anchored in a trust relationship using Dapr PKI. Dapr SPIFFE identity X.509 certificates are used to authenticate to AWS services, and Dapr handles credential rotation at half the session lifespan.
To configure this authentication profile:
- Create a Trust Anchor in the trusting AWS account using the Dapr certificate bundle as an
External certificate bundle
. - Create an IAM role with the resource permissions policy necessary, as well as a trust entity for the Roles Anywhere AWS service. Here, you specify SPIFFE identities allowed.
- Create an IAM Profile under the Roles Anywhere service, linking the IAM Role.
Attribute | Required | Description | Example |
---|---|---|---|
trustAnchorArn | Y | ARN of the Trust Anchor in the AWS account granting trust to the Dapr Certificate Authority. | arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901 |
trustProfileArn | Y | ARN of the AWS IAM Profile in the trusting AWS account. | arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901 |
assumeRoleArn | Y | ARN of the AWS IAM role to assume in the trusting AWS account. | arn:aws:iam:012345678910:role/exampleIAMRoleName |
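Putting the three required fields together, the metadata section of a component using IAM Roles Anywhere might look like the following sketch (ARNs copied from the table above):
- name: trustAnchorArn
  value: "arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901"
- name: trustProfileArn
  value: "arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901"
- name: assumeRoleArn
  value: "arn:aws:iam:012345678910:role/exampleIAMRoleName"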
Additional Fields
Some AWS components include additional optional fields:
Attribute | Required | Description | Example |
---|---|---|---|
endpoint | N | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against DynamoDB Local. | |
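For example, when developing against DynamoDB Local you might point a component at the local endpoint. The URL below is DynamoDB Local’s conventional default and is shown for illustration only:
- name: endpoint
  value: "http://localhost:8000"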
Furthermore, non-native AWS components such as Kafka and PostgreSQL that support AWS authentication profiles have metadata fields to trigger the AWS authentication logic. Be sure to check specific component documentation.
Alternatives to explicitly specifying credentials in component manifest files
In production scenarios, it is recommended to use a solution such as Kiam or Kube2IAM.
If running on AWS EKS, you can link an IAM role to a Kubernetes service account, which your pod can use.
All of these solutions solve the same problem: They allow the Dapr runtime process (or sidecar) to retrieve credentials dynamically, so that explicit credentials aren’t needed. This provides several benefits, such as automated key rotation, and avoiding having to manage secrets.
Both Kiam and Kube2IAM work by intercepting calls to the instance metadata service.
Setting Up Dapr with AWS EKS Pod Identity
EKS Pod Identities provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance’s role, you associate an IAM role with a Kubernetes service account and configure your Pods to use the service account.
To see a comprehensive example on how to authorize pod access to AWS Secrets Manager from EKS using AWS EKS Pod Identity, follow the sample in this repository.
Use an instance profile when running in stand-alone mode on AWS EC2
If running Dapr directly on an AWS EC2 instance in stand-alone mode, you can use instance profiles.
- Configure an IAM role.
- Attach it to the instance profile for the ec2 instance.
Dapr then authenticates to AWS without specifying credentials in the Dapr component manifest.
Authenticate to AWS when running dapr locally in stand-alone mode
When running Dapr (or the Dapr runtime directly) in stand-alone mode, you can inject environment variables into the process, like the following example:
FOO=bar daprd --app-id myapp
If you have configured named AWS profiles locally, you can tell Dapr (or the Dapr runtime) which profile to use by specifying the “AWS_PROFILE” environment variable:
AWS_PROFILE=myprofile dapr run...
or
AWS_PROFILE=myprofile daprd...
You can use any of the supported environment variables to configure Dapr in this manner.
On Windows, the environment variable needs to be set before starting the dapr
or daprd
command; setting it inline (as in Linux/macOS) is not supported.
Authenticate to AWS if using AWS SSO based profiles
If you authenticate to AWS using AWS SSO, the AWS SDK for Go (both v1 and v2) provides native support for AWS SSO credential providers. This means you can use AWS SSO profiles directly without additional utilities.
For more information about AWS SSO support in the AWS SDK for Go, see the AWS blog post.
Next steps
Refer to AWS component specs >>
Related links
7.2 - Integrations with Azure
7.2.1 - Authenticate to Azure
7.2.1.1 - Authenticating to Azure
Most Azure components for Dapr support authenticating with Microsoft Entra ID. Thanks to this:
- Administrators can leverage all the benefits of fine-tuned permissions with Azure Role-Based Access Control (RBAC).
- Applications running on Azure services such as Azure Container Apps, Azure Kubernetes Service, Azure VMs, or any other Azure platform services can leverage Managed Identities (MI) and Workload Identity. These offer the ability to authenticate your applications without having to manage sensitive credentials.
About authentication with Microsoft Entra ID
Microsoft Entra ID is Azure’s identity and access management (IAM) solution, which is used to authenticate and authorize users and services.
Microsoft Entra ID is built on top of open standards such as OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Service Bus, Azure Key Vault, Azure Cosmos DB, Azure Database for Postgres, Azure SQL, etc.
In Azure terminology, an application is also called a “Service Principal”.
Some Azure components offer alternative authentication methods, such as systems based on “shared keys” or “access tokens”. Although these are valid and supported by Dapr, you should authenticate your Dapr components using Microsoft Entra ID whenever possible to take advantage of many benefits, including:
- Managed Identities and Workload Identity
- Role-Based Access Control
- Auditing
- (Optional) Authentication using certificates
Managed Identities and Workload Identity
With Managed Identities (MI), your application can authenticate with Microsoft Entra ID and obtain an access token to make requests to Azure services. When your application is running on a supported Azure service (such as Azure VMs, Azure Container Apps, Azure Web Apps, etc), an identity for your application can be assigned at the infrastructure level.
Once using MI, your code doesn’t have to deal with credentials, which:
- Removes the challenge of managing credentials safely
- Allows greater separation of concerns between development and operations teams
- Reduces the number of people with access to credentials
- Simplifies operational aspects, especially when multiple environments are used
Applications running on Azure Kubernetes Service can similarly leverage Workload Identity to automatically provide an identity to individual pods.
Role-Based Access Control
When using Azure Role-Based Access Control (RBAC) with supported services, permissions given to an application can be fine-tuned. For example, you can restrict access to a subset of data or make the access read-only.
Auditing
Using Microsoft Entra ID provides an improved auditing experience for access. Tenant administrators can consult audit logs to track authentication requests.
(Optional) Authentication using certificates
While Microsoft Entra ID allows you to use MI, you still have the option to authenticate using certificates.
Support for other Azure environments
By default, Dapr components are configured to interact with Azure resources in the “public cloud”. If your application is deployed to another cloud, such as Azure China or Azure Government (“sovereign clouds”), you can enable that for supported components by setting the azureEnvironment
metadata property to one of the supported values:
- Azure public cloud (default):
"AzurePublicCloud"
- Azure China:
"AzureChinaCloud"
- Azure Government:
"AzureUSGovernmentCloud"
Support for sovereign clouds is experimental.
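For example, a supported component targeting Azure China would add the following entry to its metadata section (a sketch; the rest of the component definition is unchanged):
- name: azureEnvironment
  value: "AzureChinaCloud"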
Credentials metadata fields
To authenticate with Microsoft Entra ID, you will need to add the following credentials as values in the metadata for your Dapr component.
Metadata options
Depending on how you’ve passed credentials to your Dapr services, you have multiple metadata options.
- Using client credentials
- Using a certificate
- Using Managed Identities (MI)
- Using Workload Identity on AKS
- Using Azure CLI credentials (development-only)
Authenticating using client credentials
Field | Required | Details | Example |
---|---|---|---|
azureTenantId | Y | ID of the Microsoft Entra ID tenant | "cd4b2887-304c-47e1-b4d5-65447fdd542b" |
azureClientId | Y | Client ID (application ID) | "c7dd251f-811f-4ba2-a905-acd4d3f8f08b" |
azureClientSecret | Y | Client secret (application password) | "Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E" |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
Authenticating using a certificate
Field | Required | Details | Example |
---|---|---|---|
azureTenantId | Y | ID of the Microsoft Entra ID tenant | "cd4b2887-304c-47e1-b4d5-65447fdd542b" |
azureClientId | Y | Client ID (application ID) | "c7dd251f-811f-4ba2-a905-acd4d3f8f08b" |
azureCertificate | One of azureCertificate and azureCertificateFile | Certificate and private key (in PFX/PKCS#12 format) | "-----BEGIN PRIVATE KEY-----\n MIIEvgI... \n -----END PRIVATE KEY----- \n -----BEGIN CERTIFICATE----- \n MIICoTC... \n -----END CERTIFICATE-----" |
azureCertificateFile | One of azureCertificate and azureCertificateFile | Path to the PFX/PKCS#12 file containing the certificate and private key | "/path/to/file.pem" |
azureCertificatePassword | N | Password for the certificate if encrypted | "password" |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
Authenticating with Managed Identities (MI)
Field | Required | Details | Example |
---|---|---|---|
azureClientId | N | Client ID (application ID) | "c7dd251f-811f-4ba2-a905-acd4d3f8f08b" |
Using Managed Identities, the azureClientId
field is generally recommended. The field is optional when using a system-assigned identity, but may be required when using user-assigned identities.
Authenticating with Workload Identity on AKS
When running on Azure Kubernetes Service (AKS), you can authenticate components using Workload Identity. Refer to the Azure AKS documentation on enabling Workload Identity for your Kubernetes resources.
Authenticating using Azure CLI credentials (development-only)
Important: This authentication method is recommended for development only.
This authentication method can be useful while developing on a local machine. You will need:
- The Azure CLI installed
- Have successfully authenticated using the
az login
command
When Dapr is running on a host where there are credentials available for the Azure CLI, components can use those to authenticate automatically if no other authentication method is configured.
Using this authentication method does not require setting any metadata option.
Example usage in a Dapr component
In this example, you will set up an Azure Key Vault secret store component that uses Microsoft Entra ID to authenticate.
To use a client secret, create a file called azurekeyvault.yaml
in the components directory, filling in with the details from the above setup process:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
value: "[your_client_secret]"
If you want to use a certificate saved on the local disk, instead, use:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificateFile
value: "[pfx_certificate_file_fully_qualified_local_path]"
In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file.
To use a client secret:
Create a Kubernetes secret using the following command:
kubectl create secret generic [your_k8s_secret_name] --from-literal=[your_k8s_secret_key]=[your_client_secret]
- [your_client_secret] is the application’s client secret as generated above
- [your_k8s_secret_name] is the secret name in the Kubernetes secret store
- [your_k8s_secret_key] is the secret key in the Kubernetes secret store
Create an azurekeyvault.yaml
component file. The component yaml refers to the Kubernetes secret store using the auth
property, and secretKeyRef
refers to the client secret stored in the Kubernetes secret store.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureClientSecret
    secretKeyRef:
      name: "[your_k8s_secret_name]"
      key: "[your_k8s_secret_key]"
auth:
  secretStore: kubernetes
Apply the azurekeyvault.yaml
component:
kubectl apply -f azurekeyvault.yaml
To use a certificate:
Create a Kubernetes secret using the following command:
kubectl create secret generic [your_k8s_secret_name] --from-file=[your_k8s_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
- [pfx_certificate_file_fully_qualified_local_path] is the path to the PFX file you obtained earlier
- [your_k8s_secret_name] is the secret name in the Kubernetes secret store
- [your_k8s_secret_key] is the secret key in the Kubernetes secret store
Create an azurekeyvault.yaml
component file. The component yaml refers to the Kubernetes secret store using the auth
property, and secretKeyRef
refers to the certificate stored in the Kubernetes secret store.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureCertificate
    secretKeyRef:
      name: "[your_k8s_secret_name]"
      key: "[your_k8s_secret_key]"
auth:
  secretStore: kubernetes
Apply the azurekeyvault.yaml
component:
kubectl apply -f azurekeyvault.yaml
Next steps
Generate a new Microsoft Entra ID application and Service Principal >>
References
7.2.1.2 - How to: Generate a new Microsoft Entra ID application and Service Principal
Prerequisites
- An Azure subscription
- Azure CLI
- jq
- OpenSSL (included by default on all Linux and macOS systems, as well as on WSL)
- Make sure you’re using a bash or zsh shell
Log into Azure using the Azure CLI
In a new terminal, run the following command:
az login
az account set -s [your subscription id]
Create a Microsoft Entra ID application
Create the Microsoft Entra ID application with:
# Friendly name for the application / Service Principal
APP_NAME="dapr-application"
# Create the app
APP_ID=$(az ad app create --display-name "${APP_NAME}" | jq -r .appId)
Select how you’d prefer to pass credentials.
To create a client secret, run the following command.
az ad app credential reset \
--id "${APP_ID}" \
--years 2
This generates a random, 40-character password based on the base64
charset. This password will be valid for 2 years before you need to rotate it.
Save the output values returned; you’ll need them for Dapr to authenticate with Azure. The expected output:
{
"appId": "<your-app-id>",
"password": "<your-password>",
"tenant": "<your-azure-tenant>"
}
When adding the returned values to your Dapr component’s metadata:
appId
is the value forazureClientId
password
is the value forazureClientSecret
(this was randomly-generated)tenant
is the value forazureTenantId
For a PFX (PKCS#12) certificate, run the following command to create a self-signed certificate:
az ad app credential reset \
--id "${APP_ID}" \
--create-cert
Note: Self-signed certificates are recommended for development only. For production, you should use certificates signed by a CA and imported with the
--cert
flag.
Save the output values returned; you’ll need them for Dapr to authenticate with Azure. The output of the command above should look like:
{
"appId": "<your-app-id>",
"fileWithCertAndPrivateKey": "<file-path>",
"password": null,
"tenant": "<your-azure-tenant>"
}
When adding the returned values to your Dapr component’s metadata:
appId
is the value forazureClientId
tenant
is the value forazureTenantId
fileWithCertAndPrivateKey
indicates the location of the self-signed PFX certificate and private key. Use the contents of that file asazureCertificate
(or write it to a file on the server and useazureCertificateFile
)
Note: While the generated file has the
.pem
extension, it contains a certificate and private key encoded as PFX (PKCS#12).
Create a Service Principal
Once you have created a Microsoft Entra ID application, create a Service Principal for that application. With this Service Principal, you can grant it access to Azure resources.
To create the Service Principal, run the following command:
SERVICE_PRINCIPAL_ID=$(az ad sp create \
--id "${APP_ID}" \
| jq -r .id)
echo "Service Principal ID: ${SERVICE_PRINCIPAL_ID}"
Expected output:
Service Principal ID: 1d0ccf05-5427-4b5e-8eb4-005ac5f9f163
The returned value above is the Service Principal ID, which is different from the Microsoft Entra ID application ID (client ID). The Service Principal ID is defined within an Azure tenant and used to grant access to Azure resources to an application.
You’ll use the Service Principal ID to grant permissions to an application to access Azure resources.
Meanwhile, the client ID is used by your application to authenticate. You’ll use the client ID in Dapr manifests to configure authentication with Azure services.
Keep in mind that the Service Principal that was just created does not have access to any Azure resource by default. Access will need to be granted to each resource as needed, as documented in the docs for the components.
Next steps
Use Managed Identities >>
7.2.1.3 - How to: Use managed identities
Using managed identities, authentication happens automatically by virtue of your application running on top of an Azure service that has either a system-managed or a user-assigned identity.
To get started, you need to enable a managed identity as a service option/functionality in various Azure services, independent of Dapr. Enabling this creates an identity (or application) under the hood for Microsoft Entra ID (previously Azure Active Directory) purposes.
Your Dapr services can then leverage that identity to authenticate with Microsoft Entra ID, transparently and without you having to specify any credentials.
In this guide, you learn how to:
- Grant your identity to the Azure service you’re using via official Azure documentation
- Set up either a system-managed or user-assigned identity in your component
That’s about all there is to it.
Note
In your component YAML, you only need the azureClientId
property if using user-assigned identity. Otherwise, you can omit this property for the system-managed identity to be used by default.
Grant access to the service
Set the requisite Microsoft Entra ID role assignments or custom permissions to your system-managed or user-assigned identity for a particular Azure resource (as identified by the resource scope).
You can set up a managed identity to a new or existing Azure resource. The instructions depend on the service in use. Check the following official documentation for the most appropriate instructions:
- Azure Kubernetes Service (AKS)
- Azure Container Apps (ACA)
- Azure App Service (including Azure Web Apps and Azure Functions)
- Azure Virtual Machines (VM)
- Azure Virtual Machines Scale Sets (VMSS)
- Azure Container Instance (ACI)
After assigning a system-managed identity to your Azure resource, you’ll have credentials like the following:
{
"principalId": "<object-id>",
"tenantId": "<tenant-id>",
"type": "SystemAssigned",
"userAssignedIdentities": null
}
From the returned values, take note of the principalId
value, which is the Service Principal ID created for your identity. Use that to grant access permissions for your Azure resources component to access the identity.
Managed identities in Azure Container Apps
Every container app has a completely different system-managed identity, making it very hard to manage the required role assignments across multiple apps.
Instead, it’s strongly recommended to use a user-assigned identity and attach this to all the apps that should load the component. Then, you should scope the component to those same apps.
Set up identities in your component
By default, Dapr Azure components look up the system-managed identity of the environment they run in and authenticate as that. Generally, for a given component, there are no required properties to use system-managed identity other than the service name, storage account name, and any other properties required by the Azure service (listed in the documentation).
For user-assigned identities, in addition to the basic properties required by the service you’re using, you need to specify the azureClientId
(user-assigned identity ID) in the component. Make sure the user-assigned identity is attached to the Azure service Dapr is running on, or else you won’t be able to use that identity.
Note
If the sidecar loads a component which does not specify azureClientId
, it only tries the system-assigned identity. If the component specifies the azureClientId
property, it only tries the particular user-assigned identity with that ID.
The following examples demonstrate setting up either a system-managed or user-assigned identity in an Azure KeyVault secrets component.
If you set up system-managed identity using an Azure KeyVault component, the YAML would look like the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: mykeyvault
In this example, the system-managed identity looks up the service identity and communicates with the mykeyvault
vault. Next, grant your system-managed identity access to the desired service.
If you set up user-assigned identity using an Azure KeyVault component, the YAML would look like the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: mykeyvault
- name: azureClientId
value: someAzureIdentityClientIDHere
Once you’ve set up the component YAML with the azureClientId
property, you can grant your user-assigned identity access to your service.
For component configuration in Kubernetes or AKS, refer to the Workload Identity guidance.
Troubleshooting
If you receive an error or your managed identity doesn’t work as expected, check if the following items are true:
The system-managed identity or user-assigned identity doesn’t have the required permissions on the target resource.
The user-assigned identity isn’t attached to the Azure service (container app or pod) from which you’re loading the component. This can especially happen if:
- You have an unscoped component (a component loaded by all container apps in an environment, or all deployments in your AKS cluster).
- You attached the user-assigned identity to only one container app or one deployment in AKS (using Azure Workload Identity).
In this scenario, since the identity isn’t attached to every other container app or deployment in AKS, the component referencing the user-assigned identity via
azureClientId
fails.
Best practice: When using user-assigned identities, make sure to scope your components to specific apps!
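For example, the user-assigned identity component shown above can be scoped to just the app IDs that have the identity attached by adding a scopes section (the app IDs below are hypothetical):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: mykeyvault
  - name: azureClientId
    value: someAzureIdentityClientIDHere
scopes:
- app1  # hypothetical app IDs with the user-assigned identity attached
- app2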
Next steps
Refer to Azure component specs >>
7.2.2 - Dapr integration policies for Azure API Management
Azure API Management is a way to create consistent and modern API gateways for back-end services, including those built with Dapr. You can enable Dapr support in self-hosted API Management gateways to allow them to:
- Forward requests to Dapr services
- Send messages to Dapr Pub/Sub topics
- Trigger Dapr output bindings
Try out the Dapr & Azure API Management Integration sample.
Learn more about Dapr integration policies
7.2.3 - Dapr extension for Azure Functions runtime
Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr.
- Azure Functions provides an event-driven programming model.
- Dapr provides cloud-native building blocks.
The extension combines the two for serverless and event-driven apps.
Try out the Dapr extension for Azure Functions
7.2.4 - Dapr extension for Azure Kubernetes Service (AKS)
The recommended approach for installing Dapr on AKS is to use the AKS Dapr extension. The extension offers:
- Support for all native Dapr configuration capabilities through command-line arguments via the Azure CLI
- The option of opting into automatic minor version upgrades of the Dapr runtime
Note
If you install Dapr through the AKS extension, best practice is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
Prerequisites for using the Dapr extension for AKS:
- An Azure subscription
- The latest version of the Azure CLI
- An existing AKS cluster
- The Azure Kubernetes Service RBAC Admin role
7.3 - Integrations with Diagrid
7.3.1 - Conductor: Enterprise Dapr for Kubernetes
Diagrid Conductor quickly and securely connects to all your Kubernetes clusters running Dapr and Daprized applications, delivering operational excellence, security & reliability and insights & collaboration.
Automated Dapr management
One-click installation, upgrade and patching of Dapr with selective application update and automated rollback means you’re always up to date.
Advisor: Discover and automate best practices
Be informed and apply production best practices, with continuous checking to prevent misconfigurations, increasing security, reliability and performance.
Resource usage reporting and tracking
By studying past resource behavior, Conductor recommends application resource optimizations, leading to significant cost savings on CPU and memory.
Application visualizer
The application graph facilitates collaboration between dev and ops by providing a dynamic overview of your services and infrastructure components.
Learn more about Diagrid Conductor
7.4 - How to: Autoscale a Dapr app with KEDA
Dapr, with its building-block API approach, along with the many pub/sub components, makes it easy to write message processing applications. Since Dapr can run in many environments (for example VMs, bare-metal, Cloud or Edge Kubernetes) the autoscaling of Dapr applications is managed by the hosting layer.
For Kubernetes, Dapr integrates with KEDA, an event driven autoscaler for Kubernetes. Many of Dapr’s pub/sub components overlap with the scalers provided by KEDA, so it’s easy to configure your Dapr deployment on Kubernetes to autoscale based on the back pressure using KEDA.
In this guide, you configure a scalable Dapr application, along with the back pressure on Kafka topic. However, you can apply this approach to any pub/sub components offered by Dapr.
Note
If you’re working with Azure Container Apps, refer to the official Azure documentation for scaling Dapr applications using KEDA scalers.
Install KEDA
To install KEDA, follow the Deploying KEDA instructions on the KEDA website.
Install and deploy Kafka
If you don’t have access to a Kafka service, you can install it into your Kubernetes cluster for this example by using Helm:
helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
helm repo update
kubectl create ns kafka
helm install kafka confluentinc/cp-helm-charts -n kafka \
--set cp-schema-registry.enabled=false \
--set cp-kafka-rest.enabled=false \
--set cp-kafka-connect.enabled=false
To check on the status of the Kafka deployment:
kubectl rollout status deployment.apps/kafka-cp-control-center -n kafka
kubectl rollout status deployment.apps/kafka-cp-ksql-server -n kafka
kubectl rollout status statefulset.apps/kafka-cp-kafka -n kafka
kubectl rollout status statefulset.apps/kafka-cp-zookeeper -n kafka
Once installed, deploy the Kafka client and wait until it’s ready:
kubectl apply -n kafka -f deployment/kafka-client.yaml
kubectl wait -n kafka --for=condition=ready pod kafka-client --timeout=120s
Create the Kafka topic
Create the topic used in this example (demo-topic
):
kubectl -n kafka exec -it kafka-client -- kafka-topics \
--zookeeper kafka-cp-zookeeper-headless:2181 \
--topic demo-topic \
--create \
--partitions 10 \
--replication-factor 3 \
--if-not-exists
The number of topic
partitions
is related to the maximum number of replicas KEDA creates for your deployments.
Deploy a Dapr pub/sub component
Deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named kafka-pubsub.yaml
:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: autoscaling-pubsub
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers
value: kafka-cp-kafka.kafka.svc.cluster.local:9092
- name: authRequired
value: "false"
- name: consumerID
value: autoscaling-subscriber
The above YAML defines the pub/sub component that your application subscribes to; it uses the topic you created earlier (demo-topic
).
If you used the Kafka Helm install instructions, you can leave the brokers
value as-is. Otherwise, change this value to the connection string to your Kafka brokers.
Notice the autoscaling-subscriber
value set for consumerID
. This value is used later to ensure that KEDA and your deployment use the same Kafka partition offset.
Now, deploy the component to the cluster:
kubectl apply -f kafka-pubsub.yaml
Deploy KEDA autoscaler for Kafka
Deploy the KEDA scaling object that:
- Monitors the lag on the specified Kafka topic
- Configures the Kubernetes Horizontal Pod Autoscaler (HPA) to scale your Dapr deployment in and out
Paste the following into a file named kafka_scaler.yaml
, and configure your Dapr deployment in the required place:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: subscriber-scaler
spec:
scaleTargetRef:
name: <REPLACE-WITH-DAPR-DEPLOYMENT-NAME>
pollingInterval: 15
minReplicaCount: 0
maxReplicaCount: 10
triggers:
- type: kafka
metadata:
topic: demo-topic
bootstrapServers: kafka-cp-kafka.kafka.svc.cluster.local:9092
consumerGroup: autoscaling-subscriber
lagThreshold: "5"
Let’s review a few metadata values in the file above:
Values | Description |
---|---|
scaleTargetRef /name | The Dapr ID of your app defined in the Deployment (the value of the dapr.io/app-id annotation). |
pollingInterval | The frequency in seconds with which KEDA checks Kafka for current topic partition offset. |
minReplicaCount | The minimum number of replicas KEDA creates for your deployment. If your application takes a long time to start, it may be better to set this to 1 to ensure at least one replica of your deployment is always running. Otherwise, set to 0 and KEDA creates the first replica for you. |
maxReplicaCount | The maximum number of replicas for your deployment. Given how Kafka partition offset works, you shouldn’t set that value higher than the total number of topic partitions. |
triggers /metadata /topic | Should be set to the same topic to which your Dapr deployment subscribed (in this example, demo-topic ). |
triggers /metadata /bootstrapServers | Should be set to the same broker connection string used in the kafka-pubsub.yaml file. |
triggers /metadata /consumerGroup | Should be set to the same value as the consumerID in the kafka-pubsub.yaml file. |
Important
Setting the connection string, topic, and consumer group to the same values for both the Dapr service subscription and the KEDA scaler configuration is critical to ensure the autoscaling works correctly.
Deploy the KEDA scaler to Kubernetes:
kubectl apply -f kafka_scaler.yaml
All done!
See the KEDA scaler work
Now that the ScaledObject
KEDA object is configured, your deployment will scale based on the lag of the Kafka topic. Learn more about configuring KEDA for Kafka topics.
As defined in the KEDA scaler manifest, you can now start publishing messages to your Kafka topic demo-topic
and watch the pods autoscale when the topic lag exceeds the threshold of 5
messages. Publish messages to the Kafka Dapr component by using the Dapr Publish CLI command.
Next steps
Learn about scaling your Dapr pub/sub or binding application with KEDA in Azure Container Apps
7.5 - How to: Use the Dapr CLI in a GitHub Actions workflow
Dapr can be integrated with GitHub Actions via the Dapr tool installer available in the GitHub Marketplace. This installer adds the Dapr CLI to your workflow, allowing you to deploy, manage, and upgrade Dapr across your environments.
Install the Dapr CLI via the Dapr tool installer
Copy and paste the following installer snippet into your application’s YAML file:
- name: Dapr tool installer
uses: dapr/setup-dapr@v1
The dapr/setup-dapr
action will install the specified version of the Dapr CLI on macOS, Linux, and Windows runners. Once installed, you can run any Dapr CLI command to manage your Dapr environments.
Refer to the action.yml
metadata file for details about all the inputs.
Example
For example, for an application using the Dapr extension for Azure Kubernetes Service (AKS), your application YAML will look like the following:
- name: Install Dapr
uses: dapr/setup-dapr@v1
with:
version: '1.15.5'
- name: Initialize Dapr
shell: bash
run: |
# Get the credentials to K8s to use with dapr init
az aks get-credentials --resource-group ${{ env.RG_NAME }} --name "${{ steps.azure-deployment.outputs.aksName }}"
# Initialize Dapr
# Group the Dapr init logs so these lines can be collapsed.
echo "::group::Initialize Dapr"
dapr init --kubernetes --wait --runtime-version ${{ env.DAPR_VERSION }}
echo "::endgroup::"
dapr status --kubernetes
working-directory: ./demos/demo3
Next steps
- Learn more about GitHub Actions.
7.6 - How to: Use the Dapr Kubernetes Operator
You can use the Dapr Kubernetes Operator to manage the Dapr control plane. Use the operator to automate the tasks required to manage the lifecycle of Dapr control plane in Kubernetes mode.
Install and use the Dapr Kubernetes Operator
7.7 - How to: Integrate with Kratix
As part of the Kratix Marketplace, Dapr can be used to build custom platforms tailored to your needs.
Note
The Dapr Helm chart generates static public and private key pairs that are published in the repository. This promise should only be used locally for demo purposes. If you wish to use this promise for more than demo purposes, it’s recommended to manually update all the secrets in the promise with your own keys and credentials.
Get started by simply installing the Dapr Promise, which installs Dapr on all matching clusters.
Install the Dapr Promise
7.8 - How to: Integrate with Argo CD
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It enables you to manage your Kubernetes deployments by tracking the desired application state in Git repositories and automatically syncing it to your clusters.
Integration with Dapr
You can use Argo CD to manage the deployment of Dapr control plane components and Dapr-enabled applications. By adopting a GitOps approach, you ensure that Dapr’s configurations and applications are consistently deployed, versioned, and auditable across your environments. Argo CD can be easily configured to deploy Helm charts, manifests, and Dapr components stored in Git repositories.
Sample code
A sample project demonstrating Dapr deployment with Argo CD is available at https://github.com/dapr/samples/tree/master/dapr-argocd.
8 - Components
8.1 - Pluggable components
8.1.1 - Pluggable components overview
Pluggable components are components that are not included as part of the runtime, as opposed to the built-in components included with dapr init. You can configure Dapr to use pluggable components that leverage the building block APIs, but are registered differently from the built-in Dapr components.

Pluggable components vs. built-in components
Dapr provides two approaches for registering and creating components:
- The built-in components included in the runtime and found in the components-contrib repository.
- Pluggable components which are deployed and registered independently.
While both registration options leverage Dapr’s building block APIs, each has a different implementation process.
Component details | Built-in Component | Pluggable Components |
---|---|---|
Language | Can only be written in Go | Can be written in any gRPC-supported language |
Where it runs | As part of the Dapr runtime executable | As a distinct process or container in a pod. Runs separately from Dapr itself. |
Registers with Dapr | Included into the Dapr codebase | Registers with Dapr via Unix Domain Sockets (using gRPC) |
Distribution | Distributed with Dapr release. New features added to components are aligned with Dapr releases. | Distributed independently from Dapr itself. New features can be added when needed and follow their own release cycle. |
How component is activated | Dapr starts and runs the component (automatic) | User starts component (manual) |
Why create a pluggable component?
Pluggable components prove useful in scenarios where:
- You require a private component.
- You want to keep your component separate from the Dapr release process.
- You are not as familiar with Go, or implementing your component in Go is not ideal.
Features
Implement a pluggable component
In order to implement a pluggable component, you need to implement a gRPC service in the component. Implementing the gRPC service requires three steps:
- Find the proto definition file
- Create service scaffolding
- Define the service
Learn more about how to develop and implement a pluggable component
Leverage multiple building blocks for a component
In addition to implementing multiple gRPC services from the same component (for example StateStore, QueriableStateStore, TransactionalStateStore, etc.), a pluggable component can also expose implementations for other component interfaces. This means that a single pluggable component can simultaneously function as a state store, pub/sub, and input or output binding. In other words, you can implement multiple component interfaces into a pluggable component and expose them as gRPC services.
While exposing multiple component interfaces on the same pluggable component lowers the operational burden of deploying multiple components, it makes implementing and debugging your component harder. If in doubt, stick to a “separation of concerns” and merge multiple component interfaces into the same pluggable component only when necessary.
Operationalize a pluggable component
Built-in components and pluggable components share one thing in common: both need a component specification. Built-in components do not require any extra steps to be used: Dapr is ready to use them automatically.
In contrast, pluggable components require additional steps before they can communicate with Dapr. You need to first run the component and facilitate Dapr-component communication to kick off the registration process.
Next steps
8.1.2 - How to: Implement pluggable components
In this guide, you’ll learn why and how to implement a pluggable component. To learn how to configure and register a pluggable component, refer to How to: Register a pluggable component.
Implement a pluggable component
In order to implement a pluggable component, you need to implement a gRPC service in the component. Implementing the gRPC service requires three steps:
Find the proto definition file
Proto definitions are provided for each supported service interface (state store, pub/sub, bindings, secret stores).
Currently, the following component APIs are supported:
- State stores
- Pub/sub
- Bindings
- Secret stores
Component | Type | gRPC definition | Built-in Reference Implementation | Docs |
---|---|---|---|---|
State Store | state | state.proto | Redis | concept, howto, api spec |
Pub/sub | pubsub | pubsub.proto | Redis | concept, howto, api spec |
Bindings | bindings | bindings.proto | Kafka | concept, input howto, output howto, api spec |
Secret Store | secretstores | secretstore.proto | Hashicorp/Vault | concept, howto-secrets, api spec |
Below is a snippet of the gRPC service definition for pluggable component state stores (state.proto):
// StateStore service provides a gRPC interface for state store components.
service StateStore {
// Initializes the state store component with the given metadata.
rpc Init(InitRequest) returns (InitResponse) {}
// Returns a list of implemented state store features.
rpc Features(FeaturesRequest) returns (FeaturesResponse) {}
// Ping the state store. Used for liveness purposes.
rpc Ping(PingRequest) returns (PingResponse) {}
// Deletes the specified key from the state store.
rpc Delete(DeleteRequest) returns (DeleteResponse) {}
// Get data from the given key.
rpc Get(GetRequest) returns (GetResponse) {}
// Sets the value of the specified key.
rpc Set(SetRequest) returns (SetResponse) {}
// Deletes many keys at once.
rpc BulkDelete(BulkDeleteRequest) returns (BulkDeleteResponse) {}
// Retrieves many keys at once.
rpc BulkGet(BulkGetRequest) returns (BulkGetResponse) {}
// Set the value of many keys at once.
rpc BulkSet(BulkSetRequest) returns (BulkSetResponse) {}
}
The interface for the StateStore service exposes a total of 9 methods:
- 2 methods for initialization and component capability advertisement (Init and Features)
- 1 method for health or liveness checks (Ping)
- 3 methods for CRUD operations (Get, Set, Delete)
- 3 methods for bulk CRUD operations (BulkGet, BulkSet, BulkDelete)
Create service scaffolding
Use protocol buffers and gRPC tools to create the necessary scaffolding for the service. Learn more about these tools via the gRPC concepts documentation.
These tools generate code targeting any gRPC-supported language. This code serves as the base for your server and it provides:
- Functionality to handle client calls
- Infrastructure to:
- Decode incoming requests
- Execute service methods
- Encode service responses
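For example, a sketch of invoking the protocol buffer compiler to generate Go scaffolding from the state store definition (assuming protoc and its Go plugins, protoc-gen-go and protoc-gen-go-grpc, are installed):
protoc --go_out=. --go-grpc_out=. state.proto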
The generated code is incomplete. It is missing:
- A concrete implementation for the methods your target service defines (the core of your pluggable component).
- Code to handle Unix Domain Socket integration, which is Dapr-specific.
- Code handling integration with your downstream services.
Learn more about filling these gaps in the next step.
Define the service
Provide a concrete implementation of the desired service. Each component has a gRPC service definition for its core functionality, which is the same as the core component interface. For example:
State stores
A pluggable state store must provide an implementation of the StateStore service interface.
In addition to this core functionality, some components might also expose functionality under other optional services. For example, you can add extra functionality by defining the implementation for a QueriableStateStore service and a TransactionalStateStore service.
Pub/sub
Pluggable pub/sub components only have a single core service interface, defined in pubsub.proto. They have no optional service interfaces.
Bindings
Pluggable input and output bindings have a single core service definition in bindings.proto. They have no optional service interfaces.
Secret Store
Pluggable secret stores have a single core service definition in secretstore.proto. They have no optional service interfaces.
After generating the above state store example’s service scaffolding code using gRPC and protocol buffers tools, you can define concrete implementations for the 9 methods defined under service StateStore, along with code to initialize and communicate with your dependencies.
This concrete implementation and auxiliary code are the core of your pluggable component. They define how your component behaves when handling gRPC requests from Dapr.
Returning semantic errors
Returning semantic errors is also part of the pluggable component protocol. The component must return specific gRPC codes that have semantic meaning for the user application. These errors are used in a variety of situations, from concurrency requirements to informational-only messages.
Error | gRPC error code | Source component | Description |
---|---|---|---|
ETag Mismatch | codes.FailedPrecondition | State store | Error mapping to meet concurrency requirements |
ETag Invalid | codes.InvalidArgument | State store | |
Bulk Delete Row Mismatch | codes.Internal | State store | |
Learn more about concurrency requirements in the State Management overview.
The following examples demonstrate how to return an error in your own pluggable component, changing the messages to suit your needs.
Important: In order to use .NET for error mapping, first install the Google.Api.CommonProtos NuGet package.
ETag Mismatch
var badRequest = new BadRequest();
var des = "The ETag field provided does not match the one in the store";
badRequest.FieldViolations.Add(
    new Google.Rpc.BadRequest.Types.FieldViolation
    {
        Field = "etag",
        Description = des
    });

var baseStatusCode = Grpc.Core.StatusCode.FailedPrecondition;
var status = new Google.Rpc.Status
{
    Code = (int)baseStatusCode
};

status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(badRequest));

var metadata = new Metadata();
metadata.Add("grpc-status-details-bin", status.ToByteArray());
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
ETag Invalid
var badRequest = new BadRequest();
var des = "The ETag field must only contain alphanumeric characters";
badRequest.FieldViolations.Add(
new Google.Rpc.BadRequest.Types.FieldViolation
{
Field = "etag",
Description = des
});
var baseStatusCode = Grpc.Core.StatusCode.InvalidArgument;
var status = new Google.Rpc.Status
{
Code = (int)baseStatusCode
};
status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(badRequest));
var metadata = new Metadata();
metadata.Add("grpc-status-details-bin", status.ToByteArray());
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
Bulk Delete Row Mismatch
var errorInfo = new Google.Rpc.ErrorInfo();
errorInfo.Metadata.Add("expected", "100");
errorInfo.Metadata.Add("affected", "99");

var baseStatusCode = Grpc.Core.StatusCode.Internal;
var status = new Google.Rpc.Status
{
    Code = (int)baseStatusCode
};

status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(errorInfo));

var metadata = new Metadata();
metadata.Add("grpc-status-details-bin", status.ToByteArray());
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
Just like the Dapr Java SDK, the Java Pluggable Components SDK uses Project Reactor, which provides an asynchronous API for Java.
Errors can be returned directly by:
- Calling the .error() method in the Mono or Flux that your method returns
- Providing the appropriate exception as a parameter

You can also raise an exception, as long as it is captured and fed back to your resulting Mono or Flux.
ETag Mismatch
final Status status = Status.newBuilder()
.setCode(io.grpc.Status.Code.FAILED_PRECONDITION.value())
.setMessage("fake-err-msg-for-etag-mismatch")
.addDetails(Any.pack(BadRequest.FieldViolation.newBuilder()
.setField("etag")
.setDescription("The ETag field provided does not match the one in the store")
.build()))
.build();
return Mono.error(StatusProto.toStatusException(status));
ETag Invalid
final Status status = Status.newBuilder()
.setCode(io.grpc.Status.Code.INVALID_ARGUMENT.value())
.setMessage("fake-err-msg-for-invalid-etag")
.addDetails(Any.pack(BadRequest.FieldViolation.newBuilder()
.setField("etag")
.setDescription("The ETag field must only contain alphanumeric characters")
.build()))
.build();
return Mono.error(StatusProto.toStatusException(status));
Bulk Delete Row Mismatch
final Status status = Status.newBuilder()
.setCode(io.grpc.Status.Code.INTERNAL.value())
.setMessage("fake-err-msg-for-bulk-delete-row-mismatch")
.addDetails(Any.pack(ErrorInfo.newBuilder()
.putAllMetadata(Map.ofEntries(
Map.entry("affected", "99"),
Map.entry("expected", "100")
))
.build()))
.build();
return Mono.error(StatusProto.toStatusException(status));
ETag Mismatch
st := status.New(codes.FailedPrecondition, "fake-err-msg")
desc := "The ETag field provided does not match the one in the store"
v := &errdetails.BadRequest_FieldViolation{
	Field:       "etag",
	Description: desc,
}
br := &errdetails.BadRequest{}
br.FieldViolations = append(br.FieldViolations, v)
st, err := st.WithDetails(br)
ETag Invalid
st := status.New(codes.InvalidArgument, "fake-err-msg")
desc := "The ETag field must only contain alphanumeric characters"
v := &errdetails.BadRequest_FieldViolation{
	Field:       "etag",
	Description: desc,
}
br := &errdetails.BadRequest{}
br.FieldViolations = append(br.FieldViolations, v)
st, err := st.WithDetails(br)
Bulk Delete Row Mismatch
st := status.New(codes.Internal, "fake-err-msg")
br := &errdetails.ErrorInfo{}
br.Metadata = map[string]string{
	"affected": "99",
	"expected": "100",
}
st, err := st.WithDetails(br)
Next steps
- Get started with developing a .NET pluggable component using this sample code
- Review the pluggable components overview
- Learn how to register your pluggable component
8.1.3 - Pluggable components SDKs
The Dapr SDKs are the easiest way for you to create pluggable components. Choose your favorite language and start creating components in minutes.
Pluggable components SDKs
Language | Status |
---|---|
Go | In development |
.NET | In development |
8.1.3.1 - Getting started with the Dapr pluggable components .NET SDK
Dapr offers NuGet packages to help with the development of .NET pluggable components.
Prerequisites
- .NET 6 SDK or later
- Dapr 1.9 CLI or later
- Initialized Dapr environment
- Linux, Mac, or Windows (with WSL)
Note
Development of Dapr pluggable components on Windows requires WSL as some development platforms do not fully support Unix Domain Sockets on “native” Windows.
Project creation
Creating a pluggable component starts with an empty ASP.NET project.
dotnet new web --name <project name>
Add NuGet packages
Add the Dapr .NET pluggable components NuGet package.
dotnet add package Dapr.PluggableComponents.AspNetCore
Create application and service
Creating a Dapr pluggable component application is similar to creating an ASP.NET application. In Program.cs, replace the WebApplication related code with the Dapr DaprPluggableComponentsApplication equivalent.
using Dapr.PluggableComponents;
var app = DaprPluggableComponentsApplication.Create();
app.RegisterService(
"<socket name>",
serviceBuilder =>
{
// Register one or more components with this service.
});
app.Run();
This creates an application with a single service. Each service:
- Corresponds to a single Unix Domain Socket
- Can host one or more component types
Note
Only a single component of each type can be registered with an individual service. However, multiple components of the same type can be spread across multiple services.
Implement and register components
- Implementing an input/output binding component
- Implementing a pub-sub component
- Implementing a state store component
Test components locally
Pluggable components can be tested by starting the application on the command line and configuring a Dapr sidecar to use it.
To start the component, in the application directory:
dotnet run
To configure Dapr to use the component, in the resources path directory:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <component name>
spec:
  type: state.<socket name>
  version: v1
  metadata:
  - name: key1
    value: value1
  - name: key2
    value: value2
Any metadata properties will be passed to the component via its IPluggableComponent.InitAsync() method when the component is instantiated.
To start Dapr (and, optionally, the application that uses the component):
dapr run --app-id <app id> --resources-path <resources path> ...
At this point, the Dapr sidecar will have started and connected via Unix Domain Socket to the component. You can then interact with the component either:
- Through the service using the component (if started), or
- By using the Dapr HTTP or gRPC API directly
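For example, assuming the component was configured as the state store shown above and the Dapr sidecar is listening on the default HTTP port 3500, you can exercise it directly through the Dapr state API:
curl -X POST http://localhost:3500/v1.0/state/<component name> \
  -H "Content-Type: application/json" \
  -d '[{ "key": "key1", "value": "value1" }]'
curl http://localhost:3500/v1.0/state/<component name>/key1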
Create Container
There are several ways to create a container for your component for eventual deployment.
Use .NET SDK
The .NET 7 and later SDKs enable you to create a .NET-based container for your application without a Dockerfile
, even for those targeting earlier versions of the .NET SDK. This is probably the simplest way of generating a container for your component today.
Note
Currently, the .NET 7 SDK requires Docker Desktop on the local machine and a special NuGet package to build containers. Future versions of the .NET SDK plan to eliminate those requirements.
Multiple versions of the .NET SDK can be installed on the local machine at the same time.
Add the Microsoft.NET.Build.Containers
NuGet package to the component project.
dotnet add package Microsoft.NET.Build.Containers
Publish the application as a container:
dotnet publish --os linux --arch x64 /t:PublishContainer -c Release
Note
Ensure the architecture argument --arch x64 matches that of the component’s ultimate deployment target. By default, the architecture of the generated container matches that of the local machine. For example, if the local machine is ARM64-based (for example, an M1 or M2 Mac) and the argument is omitted, an ARM64 container will be generated, which may not be compatible with deployment targets expecting an AMD64 container.
For more configuration options, such as controlling the container name, tag, and base image, see the .NET publish as container guide.
Use a Dockerfile
While there are tools that can generate a Dockerfile for a .NET application, the .NET SDK itself does not. A typical Dockerfile might look like:
FROM mcr.microsoft.com/dotnet/aspnet:<runtime> AS base
WORKDIR /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
FROM mcr.microsoft.com/dotnet/sdk:<runtime> AS build
WORKDIR /src
COPY ["<application>.csproj", "<application folder>/"]
RUN dotnet restore "<application folder>/<application>.csproj"
COPY . .
WORKDIR "/src/<application folder>"
RUN dotnet build "<application>.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "<application>.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "<application>.dll"]
Build the image:
docker build -f Dockerfile -t <image name>:<tag> .
Note
Paths for COPY operations in the Dockerfile are relative to the Docker context passed when building the image, while the Docker context itself will vary depending on the needs of the project being built (for example, if it has referenced projects). In the example above, the assumption is that the Docker context is the component project directory.
Demo
Watch this video for a demo on building pluggable components with .NET:
Next steps
- Learn advanced steps for the Pluggable Component .NET SDK
- Learn more about using the Pluggable Component .NET SDK for:
8.1.3.1.1 - Implementing a .NET input/output binding component
Creating a binding component requires just a few basic steps.
Add bindings namespaces
Add using statements for the bindings related namespaces.
using Dapr.PluggableComponents.Components;
using Dapr.PluggableComponents.Components.Bindings;
Input bindings: Implement IInputBinding
Create a class that implements the IInputBinding interface.
internal sealed class MyBinding : IInputBinding
{
public Task InitAsync(MetadataRequest request, CancellationToken cancellationToken = default)
{
// Called to initialize the component with its configured metadata...
}
public async Task ReadAsync(MessageDeliveryHandler<InputBindingReadRequest, InputBindingReadResponse> deliveryHandler, CancellationToken cancellationToken = default)
{
// Until canceled, check the underlying store for messages and deliver them to the Dapr runtime...
}
}
Calls to the ReadAsync() method are “long-lived”, in that the method is not expected to return until canceled (for example, via the cancellationToken). As messages are read from the underlying store of the component, they are delivered to the Dapr runtime via the deliveryHandler callback. Delivery allows the component to receive notification if/when the application (served by the Dapr runtime) acknowledges processing of the message.
public async Task ReadAsync(MessageDeliveryHandler<InputBindingReadRequest, InputBindingReadResponse> deliveryHandler, CancellationToken cancellationToken = default)
{
TimeSpan pollInterval = // Polling interval (e.g. from initialization metadata)...
// Poll the underlying store until canceled...
while (!cancellationToken.IsCancellationRequested)
{
var messages = // Poll underlying store for messages...
foreach (var message in messages)
{
// Deliver the message to the Dapr runtime...
await deliveryHandler(
new InputBindingReadResponse
{
// Set the message content...
},
// Callback invoked when application acknowledges the message...
async request =>
{
// Process response data or error message...
});
}
// Wait for the next poll (or cancellation)...
await Task.Delay(pollInterval, cancellationToken);
}
}
Output bindings: Implement IOutputBinding
Create a class that implements the IOutputBinding interface.
internal sealed class MyBinding : IOutputBinding
{
public Task InitAsync(MetadataRequest request, CancellationToken cancellationToken = default)
{
// Called to initialize the component with its configured metadata...
}
public Task<OutputBindingInvokeResponse> InvokeAsync(OutputBindingInvokeRequest request, CancellationToken cancellationToken = default)
{
// Called to invoke a specific operation...
}
public Task<string[]> ListOperationsAsync(CancellationToken cancellationToken = default)
{
// Called to list the operations that can be invoked.
}
}
Input and output binding components
A component can be both an input and output binding, simply by implementing both interfaces.
internal sealed class MyBinding : IInputBinding, IOutputBinding
{
// IInputBinding Implementation...
// IOutputBinding Implementation...
}
Register binding component
In the main program file (for example, Program.cs), register the binding component in an application service.
using Dapr.PluggableComponents;
var app = DaprPluggableComponentsApplication.Create();
app.RegisterService(
"<socket name>",
serviceBuilder =>
{
serviceBuilder.RegisterBinding<MyBinding>();
});
app.Run();
Note
A component that implements both IInputBinding and IOutputBinding will be registered as both an input and output binding.
Next steps
- Learn advanced steps for the Pluggable Component .NET SDK
- Learn more about using the Pluggable Component .NET SDK for:
8.1.3.1.2 - Implementing a .NET pub/sub component
Creating a pub/sub component requires just a few basic steps.
Add pub/sub namespaces
Add using statements for the pub/sub related namespaces.
using Dapr.PluggableComponents.Components;
using Dapr.PluggableComponents.Components.PubSub;
Implement IPubSub
Create a class that implements the IPubSub interface.
internal sealed class MyPubSub : IPubSub
{
public Task InitAsync(MetadataRequest request, CancellationToken cancellationToken = default)
{
// Called to initialize the component with its configured metadata...
}
public Task PublishAsync(PubSubPublishRequest request, CancellationToken cancellationToken = default)
{
// Send the message to the "topic"...
}
public Task PullMessagesAsync(PubSubPullMessagesTopic topic, MessageDeliveryHandler<string?, PubSubPullMessagesResponse> deliveryHandler, CancellationToken cancellationToken = default)
{
// Until canceled, check the topic for messages and deliver them to the Dapr runtime...
}
}
Calls to the PullMessagesAsync() method are “long-lived”, in that the method is not expected to return until canceled (for example, via the cancellationToken). The “topic” from which messages should be pulled is passed via the topic argument, while the delivery to the Dapr runtime is performed via the deliveryHandler callback. Delivery allows the component to receive notification if/when the application (served by the Dapr runtime) acknowledges processing of the message.
public async Task PullMessagesAsync(PubSubPullMessagesTopic topic, MessageDeliveryHandler<string?, PubSubPullMessagesResponse> deliveryHandler, CancellationToken cancellationToken = default)
{
TimeSpan pollInterval = // Polling interval (e.g. from initialization metadata)...
// Poll the topic until canceled...
while (!cancellationToken.IsCancellationRequested)
{
var messages = // Poll topic for messages...
foreach (var message in messages)
{
// Deliver the message to the Dapr runtime...
await deliveryHandler(
new PubSubPullMessagesResponse(topic.Name) // The topic name comes from the topic argument.
{
// Set the message content...
},
// Callback invoked when application acknowledges the message...
async errorMessage =>
{
// An empty message indicates the application successfully processed the message...
if (String.IsNullOrEmpty(errorMessage))
{
// Delete the message from the topic...
}
});
}
// Wait for the next poll (or cancellation)...
await Task.Delay(pollInterval, cancellationToken);
}
}
Register pub/sub component
In the main program file (for example, Program.cs), register the pub/sub component with an application service.
using Dapr.PluggableComponents;
var app = DaprPluggableComponentsApplication.Create();
app.RegisterService(
"<socket name>",
serviceBuilder =>
{
serviceBuilder.RegisterPubSub<MyPubSub>();
});
app.Run();
Next steps
- Learn advanced steps for the Pluggable Component .NET SDK
- Learn more about using the Pluggable Component .NET SDK for:
8.1.3.1.3 - Implementing a .NET state store component
Creating a state store component requires just a few basic steps.
Add state store namespaces
Add using statements for the state store related namespaces.
using Dapr.PluggableComponents.Components;
using Dapr.PluggableComponents.Components.StateStore;
Implement IStateStore
Create a class that implements the IStateStore interface.
internal sealed class MyStateStore : IStateStore
{
public Task DeleteAsync(StateStoreDeleteRequest request, CancellationToken cancellationToken = default)
{
// Delete the requested key from the state store...
}
public Task<StateStoreGetResponse?> GetAsync(StateStoreGetRequest request, CancellationToken cancellationToken = default)
{
// Get the requested key value from the state store, else return null...
}
public Task InitAsync(MetadataRequest request, CancellationToken cancellationToken = default)
{
// Called to initialize the component with its configured metadata...
}
public Task SetAsync(StateStoreSetRequest request, CancellationToken cancellationToken = default)
{
// Set the requested key to the specified value in the state store...
}
}
Register state store component
In the main program file (for example, Program.cs), register the state store with an application service.
using Dapr.PluggableComponents;
var app = DaprPluggableComponentsApplication.Create();
app.RegisterService(
"<socket name>",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore<MyStateStore>();
});
app.Run();
Bulk state stores
State stores that intend to support bulk operations should implement the optional IBulkStateStore interface. Its methods mirror those of the base IStateStore interface, but include multiple requested values.
Note
The Dapr runtime will emulate bulk state store operations for state stores that do not implement IBulkStateStore by calling its operations individually.
internal sealed class MyStateStore : IStateStore, IBulkStateStore
{
// ...
public Task BulkDeleteAsync(StateStoreDeleteRequest[] requests, CancellationToken cancellationToken = default)
{
// Delete all of the requested values from the state store...
}
public Task<StateStoreBulkStateItem[]> BulkGetAsync(StateStoreGetRequest[] requests, CancellationToken cancellationToken = default)
{
// Return the values of all of the requested values from the state store...
}
public Task BulkSetAsync(StateStoreSetRequest[] requests, CancellationToken cancellationToken = default)
{
// Set all of the values of the requested keys in the state store...
}
}
Transactional state stores
State stores that intend to support transactions should implement the optional ITransactionalStateStore interface. Its TransactAsync() method is passed a request with a sequence of delete and/or set operations to be performed within a transaction. The state store should iterate over the sequence and call each operation’s Visit() method, passing callbacks that represent the action to take for each type of operation.
internal sealed class MyStateStore : IStateStore, ITransactionalStateStore
{
// ...
public async Task TransactAsync(StateStoreTransactRequest request, CancellationToken cancellationToken = default)
{
// Start transaction...
try
{
foreach (var operation in request.Operations)
{
await operation.Visit(
async deleteRequest =>
{
// Process delete request...
},
async setRequest =>
{
// Process set request...
});
}
}
catch
{
// Rollback transaction...
throw;
}
// Commit transaction...
}
}
Queryable state stores
State stores that intend to support queries should implement the optional IQueryableStateStore interface. Its QueryAsync() method is passed details about the query, such as the filter(s), result limits and pagination, and sort order(s) of the results. The state store should use those details to generate a set of values to return as part of its response.
internal sealed class MyStateStore : IStateStore, IQueryableStateStore
{
// ...
public Task<StateStoreQueryResponse> QueryAsync(StateStoreQueryRequest request, CancellationToken cancellationToken = default)
{
// Generate and return results...
}
}
ETag and other semantic error handling
The Dapr runtime has additional handling of certain error conditions resulting from some state store operations. State stores can indicate such conditions by throwing specific exceptions from their operation logic:
Exception | Applicable Operations | Description |
---|---|---|
ETagInvalidException | Delete, Set, Bulk Delete, Bulk Set | When an ETag is invalid |
ETagMismatchException | Delete, Set, Bulk Delete, Bulk Set | When an ETag does not match an expected value |
BulkDeleteRowMismatchException | Bulk Delete | When the number of affected rows does not match the expected rows |
Next steps
- Learn advanced steps for the Pluggable Component .NET SDK
- Learn more about using the Pluggable Component .NET SDK for:
8.1.3.1.4 - Advanced uses of the Dapr pluggable components .NET SDK
While not typically needed by most, these guides show advanced ways you can configure your .NET pluggable components.
8.1.3.1.4.1 - Application Environment of a .NET Dapr pluggable component
A .NET Dapr pluggable component application can be configured for dependency injection, logging, and configuration values similarly to ASP.NET applications. The DaprPluggableComponentsApplication exposes a similar set of configuration properties to that exposed by WebApplicationBuilder.
Dependency injection
Components registered with services can participate in dependency injection. Arguments in the component’s constructor will be injected during creation, assuming those types have been registered with the application. You can register them through the IServiceCollection exposed by DaprPluggableComponentsApplication.
var app = DaprPluggableComponentsApplication.Create();
// Register MyService as the singleton implementation of IService.
app.Services.AddSingleton<IService, MyService>();
app.RegisterService(
"<service name>",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore<MyStateStore>();
});
app.Run();
interface IService
{
// ...
}
class MyService : IService
{
// ...
}
class MyStateStore : IStateStore
{
// Inject IService on creation of the state store.
public MyStateStore(IService service)
{
// ...
}
// ...
}
Warning
Use of IServiceCollection.AddScoped() is not recommended. Such instances’ lifetimes are bound to a single gRPC method call, which does not match the lifetime of an individual component instance.
Logging
.NET Dapr pluggable components can use the standard .NET logging mechanisms. The DaprPluggableComponentsApplication exposes an ILoggingBuilder, through which it can be configured.
Note
Like with ASP.NET, logger services (for example, ILogger<T>) are pre-registered.
var app = DaprPluggableComponentsApplication.Create();
// Reset the default loggers and setup new ones.
app.Logging.ClearProviders();
app.Logging.AddConsole();
app.RegisterService(
"<service name>",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore<MyStateStore>();
});
app.Run();
class MyStateStore : IStateStore
{
// Inject a logger on creation of the state store.
public MyStateStore(ILogger<MyStateStore> logger)
{
// ...
}
// ...
}
Configuration Values
Since .NET pluggable components are built on ASP.NET, they can use its standard configuration mechanisms and default to the same set of pre-registered providers. The DaprPluggableComponentsApplication exposes an IConfigurationManager through which it can be configured.
var app = DaprPluggableComponentsApplication.Create();
// Reset the default configuration providers and add new ones.
((IConfigurationBuilder)app.Configuration).Sources.Clear();
app.Configuration.AddEnvironmentVariables();
// Get configuration value on startup.
var value = app.Configuration["<name>"];
app.RegisterService(
"<service name>",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore<MyStateStore>();
});
app.Run();
class MyStateStore : IStateStore
{
// Inject the configuration on creation of the state store.
public MyStateStore(IConfiguration configuration)
{
// ...
}
// ...
}
Next steps
- Learn more about the component lifetime
- Learn more about multiple services
- Learn more about using the Pluggable Component .NET SDK for:
8.1.3.1.4.2 - Lifetimes of .NET Dapr pluggable components
There are two ways to register a component:
- The component operates as a singleton, with lifetime managed by the SDK
- A component’s lifetime is determined by the pluggable component and can be multi-instance or a singleton, as needed
Singleton components
Components registered by type are singletons: one instance will serve all configured components of that type associated with that socket. This approach is best when only a single component of that type exists and is shared amongst Dapr applications.
var app = DaprPluggableComponentsApplication.Create();
app.RegisterService(
"service-a",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore<SingletonStateStore>();
});
app.Run();
class SingletonStateStore : IStateStore
{
// ...
}
Multi-instance components
Components can be registered by passing a “factory method”. This method will be called for each configured component of that type associated with that socket. The method returns the instance to associate with that component (whether shared or not). This approach is best when multiple components of the same type may be configured with different sets of metadata, when component operations need to be isolated from one another, etc.
The factory method will be passed context, such as the ID of the configured Dapr component, that can be used to differentiate component instances.
var app = DaprPluggableComponentsApplication.Create();
app.RegisterService(
"service-a",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore(
context =>
{
return new MultiStateStore(context.InstanceId);
});
});
app.Run();
class MultiStateStore : IStateStore
{
private readonly string instanceId;
public MultiStateStore(string instanceId)
{
this.instanceId = instanceId;
}
// ...
}
Next steps
- Learn more about the application environment
- Learn more about multiple services
- Learn more about using the Pluggable Component .NET SDK for:
8.1.3.1.4.3 - Multiple services in a .NET Dapr pluggable component
A pluggable component can host multiple components of varying types. You might do this:
- To minimize the number of sidecars running in a cluster
- To group related components that are likely to share libraries and implementation, such as:
- A database exposed both as a general state store, and
- Output bindings that allow more specific operations.
Each Unix Domain Socket can manage calls to one component of each type. To host multiple components of the same type, you can spread those types across multiple sockets. The SDK binds each socket to a “service”, with each service composed of one or more component types.
Registering multiple services
Each call to RegisterService() binds a socket to a set of registered components, where one of each type of component can be registered per service.
var app = DaprPluggableComponentsApplication.Create();
app.RegisterService(
"service-a",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore<MyDatabaseStateStore>();
serviceBuilder.RegisterBinding<MyDatabaseOutputBinding>();
});
app.RegisterService(
"service-b",
serviceBuilder =>
{
serviceBuilder.RegisterStateStore<AnotherStateStore>();
});
app.Run();
class MyDatabaseStateStore : IStateStore
{
// ...
}
class MyDatabaseOutputBinding : IOutputBinding
{
// ...
}
class AnotherStateStore : IStateStore
{
// ...
}
Configuring Multiple Components
Configuring Dapr to use the hosted components is the same as for any single component - the component YAML refers to the associated socket.
#
# This component uses the state store associated with socket `service-a`
#
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: state-store-a
spec:
  type: state.service-a
  version: v1
  metadata: []
#
# This component uses the state store associated with socket `service-b`
#
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: state-store-b
spec:
  type: state.service-b
  version: v1
  metadata: []
Next steps
- Learn more about the component lifetime
- Learn more about the application environment
- Learn more about using the Pluggable Component .NET SDK for:
8.1.3.2 - Getting started with the Dapr pluggable components Go SDK
Dapr offers packages to help with the development of Go pluggable components.
Prerequisites
- Go 1.20 or later
- Dapr 1.9 CLI or later
- Initialized Dapr environment
- Linux, Mac, or Windows (with WSL)
Note
Development of Dapr pluggable components on Windows requires WSL. Not all languages and SDKs expose Unix Domain Sockets on “native” Windows.
Application creation
Creating a pluggable component starts with an empty Go application.
mkdir example
cd example
go mod init example
Import Dapr packages
Import the Dapr pluggable components SDK package.
go get github.com/dapr-sandbox/components-go-sdk@v0.1.0
Create main package
In main.go, import the Dapr pluggable components package and run the application.
package main
import (
dapr "github.com/dapr-sandbox/components-go-sdk"
)
func main() {
dapr.MustRun()
}
This creates an application with no components. You will need to implement and register one or more components.
Implement and register components
- Implementing an input/output binding component
- Implementing a pub/sub component
- Implementing a state store component
Note
Only a single component of each type can be registered with an individual service. However, multiple components of the same type can be spread across multiple services.
Test components locally
Create the Dapr components socket directory
Dapr communicates with pluggable components via Unix Domain Socket files in a common directory. By default, both Dapr and pluggable components use the /tmp/dapr-components-sockets directory. You should create this directory if it does not already exist.
mkdir /tmp/dapr-components-sockets
Start the pluggable component
Pluggable components can be tested by starting the application on the command line.
To start the component, in the application directory:
go run main.go
Configure Dapr to use the pluggable component
To configure Dapr to use the component, create a component YAML file in the resources directory. For example, for a state store component:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <component name>
spec:
  type: state.<socket name>
  version: v1
  metadata:
  - name: key1
    value: value1
  - name: key2
    value: value2
Any metadata properties will be passed to the component via its Store.Init(metadata state.Metadata) method when the component is instantiated.
Start Dapr
To start Dapr (and, optionally, the application that uses the component):
dapr run --app-id <app id> --resources-path <resources path> ...
At this point, the Dapr sidecar will have started and connected via Unix Domain Socket to the component. You can then interact with the component either:
- Through the service using the component (if started), or
- By using the Dapr HTTP or gRPC API directly
Create container
Pluggable components are deployed as containers that run as sidecars to the application (like Dapr itself). A typical Dockerfile for creating a Docker image for a Go application might look like:
FROM golang:1.20-alpine AS builder
WORKDIR /usr/src/app
# Download dependencies
COPY go.mod go.sum ./
RUN go mod download && go mod verify
# Build the application
COPY . .
RUN go build -v -o /usr/src/bin/app .
FROM alpine:latest
# Setup non-root user and permissions
RUN addgroup -S app && adduser -S app -G app
RUN mkdir /tmp/dapr-components-sockets && chown app /tmp/dapr-components-sockets
# Copy application to runtime image
COPY --from=builder --chown=app /usr/src/bin/app /app
USER app
CMD ["/app"]
Build the image:
docker build -f Dockerfile -t <image name>:<tag> .
Note
Paths for COPY operations in the Dockerfile are relative to the Docker context passed when building the image, while the Docker context itself will vary depending on the needs of the application being built. In the example above, the assumption is that the Docker context is the component application directory.
Next steps
- Advanced techniques with the pluggable components Go SDK
- Learn more about implementing:
8.1.3.2.1 - Implementing a Go input/output binding component
Creating a binding component requires just a few basic steps.
Import bindings packages
Create the file components/inputbinding.go and add import statements for the bindings related packages.
package components
import (
"context"
"github.com/dapr/components-contrib/bindings"
)
Input bindings: Implement the InputBinding interface
Create a type that implements the InputBinding interface.
type MyInputBindingComponent struct {
}
func (component *MyInputBindingComponent) Init(meta bindings.Metadata) error {
// Called to initialize the component with its configured metadata...
}
func (component *MyInputBindingComponent) Read(ctx context.Context, handler bindings.Handler) error {
// Until canceled, check the underlying store for messages and deliver them to the Dapr runtime...
}
Calls to the Read() method are expected to set up a long-lived mechanism for retrieving messages but immediately return nil (or an error, if that mechanism could not be set up). The mechanism should end when canceled (for example, when ctx.Done() is closed or ctx.Err() returns non-nil). As messages are read from the underlying store of the component, they are delivered to the Dapr runtime via the handler callback, which does not return until the application (served by the Dapr runtime) acknowledges processing of the message.
func (b *MyInputBindingComponent) Read(ctx context.Context, handler bindings.Handler) error {
go func() {
for {
err := ctx.Err()
if err != nil {
return
}
messages := // Poll for messages...
for _, message := range messages {
handler(ctx, &bindings.ReadResponse{
// Set the message content...
})
}
select {
case <-ctx.Done():
case <-time.After(5 * time.Second):
}
}
}()
return nil
}
Output bindings: Implement the OutputBinding interface
Create a type that implements the OutputBinding interface.
type MyOutputBindingComponent struct {
}
func (component *MyOutputBindingComponent) Init(meta bindings.Metadata) error {
// Called to initialize the component with its configured metadata...
}
func (component *MyOutputBindingComponent) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
// Called to invoke a specific operation...
}
func (component *MyOutputBindingComponent) Operations() []bindings.OperationKind {
// Called to list the operations that can be invoked.
}
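For example, a minimal sketch of these two methods for an output binding that supports only the create operation (the echo behavior is an illustrative assumption, and the fmt package is assumed to be imported):
func (component *MyOutputBindingComponent) Operations() []bindings.OperationKind {
	// Advertise the operations this binding supports.
	return []bindings.OperationKind{bindings.CreateOperation}
}

func (component *MyOutputBindingComponent) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
	switch req.Operation {
	case bindings.CreateOperation:
		// Write req.Data to the underlying resource; this sketch simply echoes it back.
		return &bindings.InvokeResponse{Data: req.Data}, nil
	default:
		return nil, fmt.Errorf("unsupported operation %q", req.Operation)
	}
}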
Input and output binding components
A component can be both an input and output binding. Simply implement both interfaces and register the component as both binding types.
Register binding component
In the main application file (for example, main.go
), register the binding component with the application.
package main
import (
"example/components"
dapr "github.com/dapr-sandbox/components-go-sdk"
"github.com/dapr-sandbox/components-go-sdk/bindings/v1"
)
func main() {
// Register an import binding...
dapr.Register("my-inputbinding", dapr.WithInputBinding(func() bindings.InputBinding {
return &components.MyInputBindingComponent{}
}))
// Register an output binding...
dapr.Register("my-outputbinding", dapr.WithOutputBinding(func() bindings.OutputBinding {
return &components.MyOutputBindingComponent{}
}))
dapr.MustRun()
}
Next steps
- Advanced techniques with the pluggable components Go SDK
- Learn more about implementing:
8.1.3.2.2 - Implementing a Go pub/sub component
Creating a pub/sub component requires just a few basic steps.
Import pub/sub packages
Create the file components/pubsub.go and add import statements for the pub/sub related packages.
package components
import (
"context"
"github.com/dapr/components-contrib/pubsub"
)
Implement the PubSub interface
Create a type that implements the PubSub interface.
type MyPubSubComponent struct {
}
func (component *MyPubSubComponent) Init(metadata pubsub.Metadata) error {
// Called to initialize the component with its configured metadata...
}
func (component *MyPubSubComponent) Close() error {
// Not used with pluggable components...
return nil
}
func (component *MyPubSubComponent) Features() []pubsub.Feature {
// Return a list of features supported by the component...
}
func (component *MyPubSubComponent) Publish(req *pubsub.PublishRequest) error {
// Send the message to the "topic"...
}
func (component *MyPubSubComponent) Subscribe(ctx context.Context, req pubsub.SubscribeRequest, handler pubsub.Handler) error {
// Until canceled, check the topic for messages and deliver them to the Dapr runtime...
}
Calls to the Subscribe() method are expected to set up a long-lived mechanism for retrieving messages but immediately return nil (or an error, if that mechanism could not be set up). The mechanism should end when canceled (for example, when ctx.Done() is closed or ctx.Err() returns non-nil). The “topic” from which messages should be pulled is passed via the req argument, while the delivery to the Dapr runtime is performed via the handler callback. The callback doesn’t return until the application (served by the Dapr runtime) acknowledges processing of the message.
func (component *MyPubSubComponent) Subscribe(ctx context.Context, req pubsub.SubscribeRequest, handler pubsub.Handler) error {
go func() {
for {
err := ctx.Err()
if err != nil {
return
}
messages := // Poll for messages...
for _, message := range messages {
handler(ctx, &pubsub.NewMessage{
// Set the message content...
})
}
select {
case <-ctx.Done():
case <-time.After(5 * time.Second):
}
}
}()
return nil
}
Register pub/sub component
In the main application file (for example, main.go
), register the pub/sub component with the application.
package main
import (
"example/components"
dapr "github.com/dapr-sandbox/components-go-sdk"
"github.com/dapr-sandbox/components-go-sdk/pubsub/v1"
)
func main() {
dapr.Register("<socket name>", dapr.WithPubSub(func() pubsub.PubSub {
return &components.MyPubSubComponent{}
}))
dapr.MustRun()
}
Next steps
- Advanced techniques with the pluggable components Go SDK
- Learn more about implementing:
8.1.3.2.3 - Implementing a Go state store component
Creating a state store component requires just a few basic steps.
Import state store packages
Create the file components/statestore.go and add import statements for the state store related packages.
package components
import (
"context"
"github.com/dapr/components-contrib/state"
)
Implement the Store interface
Create a type that implements the Store interface.
type MyStateStore struct {
}
func (store *MyStateStore) Init(metadata state.Metadata) error {
// Called to initialize the component with its configured metadata...
}
func (store *MyStateStore) GetComponentMetadata() map[string]string {
// Not used with pluggable components...
return map[string]string{}
}
func (store *MyStateStore) Features() []state.Feature {
// Return a list of features supported by the state store...
}
func (store *MyStateStore) Delete(ctx context.Context, req *state.DeleteRequest) error {
// Delete the requested key from the state store...
}
func (store *MyStateStore) Get(ctx context.Context, req *state.GetRequest) (*state.GetResponse, error) {
// Get the requested key value from the state store, else return an empty response...
}
func (store *MyStateStore) Set(ctx context.Context, req *state.SetRequest) error {
// Set the requested key to the specified value in the state store...
}
func (store *MyStateStore) BulkGet(ctx context.Context, req []state.GetRequest) (bool, []state.BulkGetResponse, error) {
// Get the requested key values from the state store...
}
func (store *MyStateStore) BulkDelete(ctx context.Context, req []state.DeleteRequest) error {
// Delete the requested keys from the state store...
}
func (store *MyStateStore) BulkSet(ctx context.Context, req []state.SetRequest) error {
// Set the requested keys to their specified values in the state store...
}
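To make this concrete, here is a minimal in-memory sketch of Init, Set, and Get expanding the skeleton above (the map-backed storage and mutex are assumptions for illustration, and the context, encoding/json, and sync packages are assumed to be imported; a real component would talk to its backing store):
type MyStateStore struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func (store *MyStateStore) Init(metadata state.Metadata) error {
	// Allocate the in-memory storage; a real component would use
	// metadata.Properties to connect to its backing store.
	store.data = make(map[string][]byte)
	return nil
}

func (store *MyStateStore) Set(ctx context.Context, req *state.SetRequest) error {
	store.mu.Lock()
	defer store.mu.Unlock()
	// Values arrive as arbitrary data; persist them as bytes.
	b, err := json.Marshal(req.Value)
	if err != nil {
		return err
	}
	store.data[req.Key] = b
	return nil
}

func (store *MyStateStore) Get(ctx context.Context, req *state.GetRequest) (*state.GetResponse, error) {
	store.mu.RLock()
	defer store.mu.RUnlock()
	// A missing key yields an empty response rather than an error.
	return &state.GetResponse{Data: store.data[req.Key]}, nil
}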
Register state store component
In the main application file (for example, main.go), register the state store with an application service.
package main
import (
"example/components"
dapr "github.com/dapr-sandbox/components-go-sdk"
"github.com/dapr-sandbox/components-go-sdk/state/v1"
)
func main() {
dapr.Register("<socket name>", dapr.WithStateStore(func() state.Store {
return &components.MyStateStore{}
}))
dapr.MustRun()
}
Bulk state stores
While state stores are required to support the bulk operations, their implementations can simply delegate to the individual operation methods, calling them sequentially.
Transactional state stores
State stores that intend to support transactions should implement the optional TransactionalStore interface. Its Multi() method receives a request with a sequence of delete and/or set operations to be performed within a transaction. The state store should iterate over the sequence and apply each operation.
func (store *MyStateStoreComponent) Multi(ctx context.Context, request *state.TransactionalStateRequest) error {
// Start transaction...
for _, operation := range request.Operations {
switch operation.Operation {
case state.Delete:
deleteRequest := operation.Request.(state.DeleteRequest)
// Process delete request...
case state.Upsert:
setRequest := operation.Request.(state.SetRequest)
// Process set request...
}
}
// End (or rollback) transaction...
return nil
}
Queryable state stores
State stores that intend to support queries should implement the optional Querier interface. Its Query() method is passed details about the query, such as the filter(s), result limits, pagination, and sort order(s) of the results. The state store uses those details to generate a set of values to return as part of its response.
func (store *MyStateStoreComponent) Query(ctx context.Context, req *state.QueryRequest) (*state.QueryResponse, error) {
// Generate and return results...
}
ETag and other semantic error handling
The Dapr runtime has additional handling of certain error conditions resulting from some state store operations. State stores can indicate such conditions by returning specific errors from their operation logic:
Error | Applicable Operations | Description |
---|---|---|
NewETagError(state.ETagInvalid, ...) | Delete, Set, Bulk Delete, Bulk Set | When an ETag is invalid |
NewETagError(state.ETagMismatch, ...) | Delete, Set, Bulk Delete, Bulk Set | When an ETag does not match an expected value |
NewBulkDeleteRowMismatchError(...) | Bulk Delete | When the number of affected rows does not match the expected rows |
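For example, a Set operation that detects a stale ETag might surface the condition like this (a sketch: currentETag is a hypothetical helper, and the errors package is assumed to be imported):
func (store *MyStateStore) Set(ctx context.Context, req *state.SetRequest) error {
	// Compare the caller-supplied ETag against the stored one.
	if req.ETag != nil && *req.ETag != store.currentETag(req.Key) {
		return state.NewETagError(state.ETagMismatch, errors.New("etag does not match"))
	}
	// Perform the write...
	return nil
}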
Next steps
- Advanced techniques with the pluggable components Go SDK
- Learn more about implementing:
8.1.3.2.4 - Advanced uses of the Dapr pluggable components Go SDK
While not typically needed by most, these guides show advanced ways you can configure your Go pluggable components.
Component lifetime
Pluggable components are registered by passing a “factory method” that is called for each configured Dapr component of that type associated with that socket. The method returns the instance associated with that Dapr component (whether shared or not). This allows multiple Dapr components of the same type to be configured with different sets of metadata, to have their operations isolated from one another, and so on.
Registering multiple services
Each call to Register() binds a socket to a registered pluggable component. One of each component type (input/output binding, pub/sub, and state store) can be registered per socket.
func main() {
dapr.Register("service-a", dapr.WithStateStore(func() state.Store {
return &components.MyDatabaseStoreComponent{}
}))
dapr.Register("service-a", dapr.WithOutputBinding(func() bindings.OutputBinding {
return &components.MyDatabaseOutputBindingComponent{}
}))
dapr.Register("service-b", dapr.WithStateStore(func() state.Store {
return &components.MyDatabaseStoreComponent{}
}))
dapr.MustRun()
}
In the example above, a state store and an output binding are registered with the socket service-a, while another state store is registered with the socket service-b.
Configuring Multiple Components
Configuring Dapr to use the hosted components is the same as for any single component - the component YAML refers to the associated socket. For example, to configure Dapr state stores for the two components registered above (to sockets service-a and service-b), you create two configuration files, each referencing their respective socket.
#
# This component uses the state store associated with socket `service-a`
#
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: state-store-a
spec:
  type: state.service-a
  version: v1
  metadata: []
#
# This component uses the state store associated with socket `service-b`
#
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: state-store-b
spec:
  type: state.service-b
  version: v1
  metadata: []
Next steps
8.2 - How to: Author middleware components
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. In this guide, you’ll learn how to create a middleware component. To learn how to configure an existing middleware component, see Configure middleware components.
Writing a custom HTTP middleware
HTTP middlewares in Dapr wrap standard Go net/http handler functions.
Your middleware needs to implement a middleware interface, which defines a GetHandler method that returns an http.Handler callback and an error:
type Middleware interface {
GetHandler(metadata middleware.Metadata) (func(next http.Handler) http.Handler, error)
}
The handler receives a next callback that should be invoked to continue processing the request. Your handler implementation can include inbound logic, outbound logic, or both:
func (m *customMiddleware) GetHandler(metadata middleware.Metadata) (func(next http.Handler) http.Handler, error) {
var err error
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Inbound logic
// ...
// Call the next handler
next.ServeHTTP(w, r)
// Outbound logic
// ...
})
}, err
}
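As a concrete illustration, here is a minimal sketch of a middleware that adds a response header around every request (the package name, type name, and header are assumptions for the example):
package custommw

import (
	"net/http"

	"github.com/dapr/components-contrib/middleware"
)

// headerMiddleware injects a fixed response header around every request.
type headerMiddleware struct{}

func (m *headerMiddleware) GetHandler(metadata middleware.Metadata) (func(next http.Handler) http.Handler, error) {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Inbound logic: set a header before the request continues.
			w.Header().Set("X-Example-Middleware", "true")

			// Call the next handler in the pipeline.
			next.ServeHTTP(w, r)

			// Outbound logic: inspect or record the response here.
		})
	}, nil
}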