Detailed information on the AWS DynamoDB state store component
Component format
To set up a DynamoDB state store, create a component of type state.aws.dynamodb. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: "Contracts"
  - name: accessKey
    value: "AKIAIOSFODNN7EXAMPLE" # Optional
  - name: secretKey
    value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" # Optional
  - name: endpoint
    value: "http://localhost:8080" # Optional
  - name: region
    value: "eu-west-1" # Optional
  - name: sessionToken
    value: "myTOKEN" # Optional
  - name: ttlAttributeName
    value: "expiresAt" # Optional
  - name: partitionKey
    value: "ContractID" # Optional
  # Uncomment this if you wish to use AWS DynamoDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Primary Key
In order to use DynamoDB as a Dapr state store, the table must have a primary key named key. See the section Partition Keys for an option to change this behavior.
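For example, a matching table could be created with the AWS CLI (a sketch; the billing mode is an assumption, and the table name matches the component spec above):

# Create a table whose partition key is the "key" attribute expected by Dapr
aws dynamodb create-table \
  --table-name Contracts \
  --attribute-definitions AttributeName=key,AttributeType=S \
  --key-schema AttributeName=key,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST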
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| table | Y | Name of the DynamoDB table to use | "Contracts" |
| accessKey | N | ID of the AWS account with appropriate permissions to DynamoDB. Can be secretKeyRef to use a secret reference | "AKIAIOSFODNN7EXAMPLE" |
| secretKey | N | Secret for the AWS user. Can be secretKeyRef to use a secret reference | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
| endpoint | N | AWS endpoint for the component to use. Only used for local development. The endpoint is unnecessary when running against production AWS | "http://localhost:4566" |
| region | N | The AWS region to use | "eu-west-1" |
| sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials | "TOKEN" |
| ttlAttributeName | N | The table attribute name which should be used for TTL | "expiresAt" |
| partitionKey | N | The table primary key or partition key attribute name. This field is used to replace the default primary key attribute name "key". See the section Partition Keys | "ContractID" |
| actorStateStore | N | Consider this state store for actors. Defaults to "false" | "true", "false" |
Important
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you must not provide the AWS access key, secret key, and tokens in the component spec definition.
In order to use the DynamoDB TTL feature, you must enable TTL on your table and define the attribute name.
The attribute name must be defined in the ttlAttributeName field.
See official AWS docs.
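For example, TTL could be enabled with the AWS CLI (a sketch; the table and attribute names match the component spec above):

# Enable TTL on the table, using the "expiresAt" attribute
aws dynamodb update-time-to-live \
  --table-name Contracts \
  --time-to-live-specification "Enabled=true, AttributeName=expiresAt"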
Partition Keys
By default, the DynamoDB state store component uses the table attribute name key as primary/partition key in the DynamoDB table.
This can be overridden by specifying a metadata field in the component configuration with a key of partitionKey and a value of the desired attribute name.
The following operation passes "A12345" as the value for key. Based on the component specification above, the Dapr runtime replaces the key attribute name with ContractID as the partition/primary key sent to DynamoDB:
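# A sketch of the request; the store name and value are illustrative
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{"key": "A12345", "value": "300.00"}]'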
Detailed information on the Azure Blob Store state store component
Component format
To set up the Azure Blob Storage state store, create a component of type state.azure.blobstorage. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.blobstorage
  # Supports v1 and v2. Users should always use v2 by default. There is no
  # migration path from v1 to v2, see `versioning` below.
  version: v2
  metadata:
  - name: accountName
    value: "[your_account_name]"
  - name: accountKey
    value: "[your_account_key]"
  - name: containerName
    value: "[your_container_name]"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Versioning
Dapr has 2 versions of the Azure Blob Storage state store component: v1 and v2. It is recommended to use v2 for all new applications. v1 is considered legacy and is preserved for compatibility with existing applications only.
In v1, a longstanding implementation issue was identified, where the key prefix was incorrectly stripped by the component, essentially behaving as if keyPrefix was always set to none.
The updated v2 of the component fixes the incorrect behavior and makes the state store correctly respect the keyPrefix property.
While v1 and v2 have the same metadata fields, they are otherwise incompatible, and there is no automatic data migration path from v1 to v2.
If you are using v1 of this component, you should continue to use v1 until you create a new state store.
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| accountName | Y | The storage account name | "mystorageaccount" |
| accountKey | Y (unless using Microsoft Entra ID) | Primary or secondary storage key | "key" |
| containerName | Y | The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist | "container" |
| azureEnvironment | N | Optional name for the Azure environment if using a different Azure cloud | "AzurePublicCloud" |
| endpoint | N | Optional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port | "http://127.0.0.1:10000" |
| ContentType | N | The blob's content type | "text/plain" |
| ContentMD5 | N | The blob's MD5 hash | "vZGKbMRDAnMs4BIwlXaRvQ==" |
| ContentEncoding | N | The blob's content encoding | "UTF-8" |
| ContentLanguage | N | The blob's content language | "en-us" |
| ContentDisposition | N | The blob's content disposition. Conveys additional information about how to process the response payload | "attachment" |
| CacheControl | N | The blob's cache control | "no-cache" |
Setup Azure Blob Storage
Follow the instructions from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a container for Dapr to use, you can do so beforehand. However, the Blob Storage state provider will create one for you automatically if it doesn’t exist.
In order to set up Azure Blob Storage as a state store, you will need the following properties:
accountName: The storage account name. For example: mystorageaccount.
accountKey: Primary or secondary storage account key.
containerName: The name of the container to be used for Dapr state. The container will be created for you if it doesn’t exist.
Authenticating with Microsoft Entra ID
This component supports authentication with Microsoft Entra ID as an alternative to using account keys. Whenever possible, it is recommended that you use Microsoft Entra ID for authentication in production systems, to take advantage of better security, fine-tuned access control, and the ability to use managed identities for apps running on Azure.
The following scripts are optimized for a bash or zsh shell and require the Azure CLI and jq to be installed.
You must also be authenticated with Azure in your Azure CLI.
To get started with using Microsoft Entra ID for authenticating the Blob Storage state store component, make sure you’ve created an Microsoft Entra ID application and a Service Principal as explained in the Authenticating to Azure document.
Once done, set a variable with the ID of the Service Principal that you created:
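For example (a sketch, assuming the application's display name is stored in ${APP_NAME}):

# Look up the Service Principal's object ID by the app's display name
SERVICE_PRINCIPAL_ID=$(az ad sp list \
  --display-name "${APP_NAME}" \
  --query "[0].id" \
  --output tsv)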
Using RBAC, assign a role to your Service Principal so it can access data inside the Storage Account.
In this case, you are assigning the "Storage Blob Data Contributor" role, which has broad access; other, more restrictive roles can be used as well, depending on your application.
RG_ID=$(az group show --resource-group ${RG_NAME} | jq -r ".id")
az role assignment create \
  --assignee "${SERVICE_PRINCIPAL_ID}" \
  --role "Storage Blob Data Contributor" \
  --scope "${RG_ID}/providers/Microsoft.Storage/storageAccounts/${STORAGE_ACCOUNT_NAME}"
When authenticating your component using Microsoft Entra ID, the accountKey field is not required. Instead, please specify the required credentials in the component’s metadata (if any) according to the Authenticating to Azure document.
Detailed information on the Azure Cosmos DB (SQL API) state store component
Component format
To set up an Azure Cosmos DB state store, create a component of type state.azure.cosmosdb. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: <REPLACE-WITH-URL>
  - name: masterKey
    value: <REPLACE-WITH-MASTER-KEY>
  - name: database
    value: <REPLACE-WITH-DATABASE>
  - name: collection
    value: <REPLACE-WITH-COLLECTION>
  # Uncomment this if you wish to use Azure Cosmos DB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
If you wish to use Cosmos DB as an actor store, append the following to the yaml.
- name: actorStateStore
  value: "true"
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| url | Y | The Cosmos DB url | "https://******.documents.azure.com:443/" |
| masterKey | Y* | The key to authenticate to the Cosmos DB account. Only required when not using Microsoft Entra ID authentication | "key" |
| database | Y | The name of the database | "db" |
| collection | Y | The name of the collection (container) | "collection" |
| actorStateStore | N | Consider this state store for actors. Defaults to "false" | "true", "false" |
Microsoft Entra ID authentication
The Azure Cosmos DB state store component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
You can read additional information for setting up Cosmos DB with Microsoft Entra ID authentication in the section below.
Setup Azure Cosmos DB
Follow the instructions from the Azure documentation on how to create an Azure Cosmos DB account. The database and collection must be created in Cosmos DB before Dapr can use it.
Important: The partition key for the collection must be named /partitionKey (note: this is case-sensitive).
In order to setup Cosmos DB as a state store, you need the following properties:
URL: The Cosmos DB URL. For example: https://******.documents.azure.com:443/
Master Key: The key to authenticate to the Cosmos DB account. Skip this if using Microsoft Entra ID authentication.
Database: The name of the database
Collection: The name of the collection (or container)
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to override the default TTL on the Cosmos DB container, indicating when the data should be considered "expired". Note that this value only takes effect if the container's DefaultTimeToLive field has a non-NULL value. See the Cosmos DB documentation for more information.
Best Practices for Production Use
Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the Cosmos DB documentation)
Therefore, several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:
Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by scoping your components to specific applications.
Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
Increase the initTimeout value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is 5s and should be increased. When using Kubernetes, increasing this value may also require an update to your Readiness and Liveness probes.
To use the Cosmos DB state store, your data must be sent to Dapr in JSON-serialized format. Merely being JSON-serializable is not enough.
If you are using the Dapr SDKs (for example the .NET SDK), the SDK automatically serializes your data to JSON.
If you want to invoke Dapr’s HTTP endpoint directly, take a look at the examples (using curl) in the Partition keys section below.
Partition keys
For non-actor state operations, the Azure Cosmos DB state store will use the key property provided in the requests to the Dapr API to determine the Cosmos DB partition key. This can be overridden by specifying a metadata field in the request with a key of partitionKey and a value of the desired partition.
The following operation uses nihilus as the partition key value sent to Cosmos DB:
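# A sketch of the request; the store name and value are illustrative
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{"key": "nihilus", "value": "darth"}]'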
For non-actor state operations, if you want to control the Cosmos DB partition, you can specify it in the request metadata. Reusing the example above, here's how to put it under the mypartition partition:
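# A sketch of the request; per-item metadata carries the desired partition key
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{"key": "nihilus", "value": "darth", "metadata": {"partitionKey": "mypartition"}}]'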
For actor state operations, the partition key is generated by Dapr using the appId, the actor type, and the actor id, such that data for the same actor always ends up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in Cosmos DB the items in a transaction must be on the same partition.
Setting up Cosmos DB for authenticating with Microsoft Entra ID
When using the Dapr Cosmos DB state store and authenticating with Microsoft Entra ID, you need to perform a few additional steps to set up your environment.
Prerequisites:
You need a Service Principal created as per the instructions in the authenticating to Azure page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for azureClientId in the metadata).
The scripts below are optimized for a bash or zsh shell
Granting your Microsoft Entra ID application access to Cosmos DB
You can find more information in the official documentation, including instructions to assign more granular permissions.
In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you’re going to use a built-in role, “Cosmos DB Built-in Data Contributor”, which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"

az cosmosdb sql role assignment create \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --scope "/" \
  --principal-id "$PRINCIPAL_ID" \
  --role-definition-id "$ROLE_ID"
Optimizations
Optimizing Cosmos DB for bulk operation write performance
If you are building a system that only ever reads data from Cosmos DB via key (id), which is the default Dapr behavior when using the state management API or actors, there are ways you can optimize Cosmos DB for improved write speeds. This is done by excluding all paths from indexing. By default, Cosmos DB indexes all fields inside of a document. On systems that are write-heavy and run little-to-no queries on values within a document, this indexing policy slows down the time it takes to write or update a document in Cosmos DB. This is exacerbated in high-volume systems.
For example, the default Terraform definition for a Cosmos DB SQL container's indexing policy reads as follows:
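# A sketch of the default policy (reconstructed); it indexes all paths.
# Consult your own Terraform definition for the authoritative version.
indexing_policy {
  indexing_mode = "consistent"

  included_path {
    path = "/*"
  }
}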
It is possible to force Cosmos DB to only index the id and partitionKey fields by excluding all other fields from indexing. This can be done by updating the above to read as follows:
indexing_policy {
  # This could also be set to "none" if you are using the container purely as a
  # key-value store. This may be applicable if your container is only going to
  # be used as a distributed cache.
  indexing_mode = "consistent"

  # Note that included_path has been replaced with excluded_path
  excluded_path {
    path = "/*"
  }
}
Note
This optimization comes at the cost of queries against fields inside documents in the state store, and would likely impact any stored procedures or SQL queries you have defined and execute. Apply this optimization only if you are using the Dapr State Management API or Dapr Actors to interact with Cosmos DB.
Optimizing Cosmos DB for cost savings
If you intend to use Cosmos DB only as a key-value pair, it may be in your interest to consider converting your state object to JSON and compressing it before persisting it to state, and subsequently decompressing it when reading it out of state. This is because Cosmos DB bills your usage based on the maximum number of RU/s used in a given time period (typically each hour). Furthermore, RU usage is calculated as 1 RU per 1 KB of data you read or write. Compression helps by reducing the size of the data stored in Cosmos DB and subsequently reducing RU usage.
This saving is particularly significant for Dapr actors. While the Dapr State Management API does a base64 encoding of your object before saving, Dapr actor state is saved as raw, formatted JSON. This means multiple lines with indentations for formatting. Compressing can significantly reduce the size of actor state objects. For example, if you have an actor state object that is 75KB in size when the actor is hydrated, you will use 75 RU/s to read that object out of state. If you then modify the state object and it grows to 100KB, you will use 100 RU/s to write that object to Cosmos DB, totalling 175 RU/s for the I/O operation. If your actors are concurrently handling 1,000 requests per second, you will need at least 175,000 RU/s to meet that load. With effective compression, the size reduction can be in the region of 90%, which means you will only need in the region of 17,500 RU/s to meet the load.
Note
This particular optimization only makes sense if you are saving large objects to state. The performance and memory tradeoff for performing the compression and decompression on either end need to make sense for your use case. Furthermore, once the data is saved to state, it is not human readable, nor is it queryable. You should only adopt this optimization if you are saving large state objects as key-value pairs.
Detailed information on the Azure Table Storage state store component which can be used to connect to Cosmos DB Table API and Azure Tables
Component format
To set up the Azure Table Storage state store, create a component of type state.azure.tablestorage. See this guide on how to create and apply a state store configuration.
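A minimal component definition could look like the following (a sketch based on the metadata fields described below):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.tablestorage
  version: v1
  metadata:
  - name: accountName
    value: <REPLACE-WITH-ACCOUNT-NAME>
  - name: accountKey
    value: <REPLACE-WITH-ACCOUNT-KEY>
  - name: tableName
    value: <REPLACE-WITH-TABLE-NAME>
  # Uncomment this to connect to Cosmos DB Table API instead of Azure Tables
  #- name: cosmosDbMode
  #  value: "true"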
Spec metadata fields

| Field | Required | Details | Example |
|-------|----------|---------|---------|
| accountName | Y | The storage account name | "mystorageaccount" |
| accountKey | Y | Primary or secondary storage key. Skip this if using Microsoft Entra ID authentication | "key" |
| tableName | Y | The name of the table to be used for Dapr state. The table will be created for you if it doesn't exist, unless the skipCreateTable option is enabled | "table" |
| cosmosDbMode | N | Set to true to connect to Azure Cosmos DB Table API instead of Azure Tables (storage accounts). Defaults to false | "false" |
| skipCreateTable | N | Skips the check for, and if necessary the creation of, the specified storage table. This is useful when using active directory authentication with minimal privileges. Defaults to false | "true" |
Microsoft Entra ID authentication
The Azure Table Storage state store component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.
You can read additional information for setting up this component with Microsoft Entra ID authentication in the sections below.
Option 1: Setup Azure Table Storage
Follow the instructions from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a table for Dapr to use, you can do so beforehand. However, the Table Storage state provider will create one for you automatically if it doesn't exist, unless the skipCreateTable option is enabled.
In order to set up Azure Table Storage as a state store, you will need the following properties:
AccountName: The storage account name. For example: mystorageaccount.
AccountKey: Primary or secondary storage key. Skip this if using Microsoft Entra ID authentication.
TableName: The name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist, unless the skipCreateTable option is enabled.
cosmosDbMode: Set this to false to connect to Azure Tables.
Option 2: Setup Azure Cosmos DB Table API
Follow the instructions from the Azure documentation on creating a Cosmos DB account with Table API.
If you wish to create a table for Dapr to use, you can do so beforehand. However, the Table Storage state provider will create one for you automatically if it doesn't exist, unless the skipCreateTable option is enabled.
In order to set up Azure Cosmos DB Table API as a state store, you will need the following properties:
AccountName: The Cosmos DB account name. For example: mycosmosaccount.
AccountKey: The Cosmos DB master key. Skip this if using Microsoft Entra ID authentication.
TableName: The name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist, unless the skipCreateTable option is enabled.
cosmosDbMode: Set this to true to connect to Azure Cosmos DB Table API.
Partitioning
The Azure Table Storage state store uses the key property provided in requests to the Dapr API to determine the row key, while the service name is used as the partition key. This provides the best performance, as each service type stores state in its own table partition.
This state store creates a column called Value in the table storage and puts raw state inside it.
For example, the following operation, coming from a service called myservice:
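# A sketch of the request; the key and value are illustrative
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{"key": "nihilus", "value": "darth"}]'

With the partitioning scheme above, this would be stored with partition key myservice and row key nihilus.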
Detailed information on the Cloudflare Workers KV state store component
Create a Dapr component
To set up a Cloudflare Workers KV state store, create a component of type state.cloudflare.workerskv. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.cloudflare.workerskv
  version: v1
  # Increase the initTimeout if Dapr is managing the Worker for you
  initTimeout: "120s"
  metadata:
  # ID of the Workers KV namespace (required)
  - name: kvNamespaceID
    value: ""
  # Name of the Worker (required)
  - name: workerName
    value: ""
  # PEM-encoded private Ed25519 key (required)
  - name: key
    value: |
      -----BEGIN PRIVATE KEY-----
      MC4CAQ...
      -----END PRIVATE KEY-----
  # Cloudflare account ID (required to have Dapr manage the Worker)
  - name: cfAccountID
    value: ""
  # API token for Cloudflare (required to have Dapr manage the Worker)
  - name: cfAPIToken
    value: ""
  # URL of the Worker (required if the Worker has been pre-created outside of Dapr)
  - name: workerUrl
    value: ""
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| kvNamespaceID | Y | ID of the pre-created Workers KV namespace | "123456789abcdef8b5588f3d134f74ac" |
| workerName | Y | Name of the Worker to connect to | "mydaprkv" |
| key | Y | Ed25519 private key, PEM-encoded | See example above |
| cfAccountID | Y/N | Cloudflare account ID. Required to have Dapr manage the Worker | "456789abcdef8b5588f3d134f74acdef" |
| cfAPIToken | Y/N | API token for Cloudflare. Required to have Dapr manage the Worker | "secret-key" |
| workerUrl | Y/N | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr | "https://mydaprkv.mydomain.workers.dev" |
When you configure Dapr to create your Worker for you, you may need to set a longer value for the initTimeout property of the component, to allow enough time for the Worker script to be deployed. For example: initTimeout: "120s"
Create a Workers KV namespace
To use this component, you must have a Workers KV namespace created in your Cloudflare account.
You can create a new Workers KV namespace in one of two ways:
Using the Cloudflare dashboard
Make note of the "ID" of the Workers KV namespace shown in the dashboard. This is a hex string (for example, 123456789abcdef8b5588f3d134f74ac), not the name you used when you created it!
# Authenticate if needed with `npx wrangler login` first
wrangler kv:namespace create <NAME>
The output contains the ID of the namespace, for example:
{ binding = "<NAME>", id = "123456789abcdef8b5588f3d134f74ac" }
Configuring the Worker
Because Cloudflare Workers KV namespaces can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Workers KV storage.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on workerd.
Important
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Workers KV state store components, and do not use the same Worker script for different Cloudflare components in Dapr (e.g. the Workers KV state store and the Queues binding).
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
workerName: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the “workers.dev” domain configured for your Cloudflare account is mydomain.workers.dev and you set workerName to mydaprkv, the Worker that Dapr deploys will be available at https://mydaprkv.mydomain.workers.dev.
cfAccountID: ID of your Cloudflare account. You can find this in your browser’s URL bar after logging into the Cloudflare dashboard, with the ID being the hex string right after dash.cloudflare.com. For example, if the URL is https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef, the value for cfAccountID is 456789abcdef8b5588f3d134f74acdef.
cfAPIToken: API token with permission to create and edit Workers and Workers KV namespaces. You can create it from the “API Tokens” page in the “My Profile” section in the Cloudflare dashboard:
Click on “Create token”.
Select the “Edit Cloudflare Workers” template.
Follow the on-screen instructions to generate a new API token.
When Dapr is configured to manage the Worker for you, it checks at startup that the Worker exists and is up to date. If the Worker doesn't exist, or if it's using an outdated version, Dapr creates or upgrades it for you automatically.
If you’d rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
Create a new folder where you’ll place the source code of the Worker, for example: daprworker.
If you haven’t already, authenticate with Wrangler (the Cloudflare Workers CLI) using: npx wrangler login.
Inside the newly-created folder, create a new wrangler.toml file with the contents below, filling in the missing information as appropriate:
# Name of your Worker, for example "mydaprkv"
name = ""

# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"

[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprkv".
TOKEN_AUDIENCE = ""

[[kv_namespaces]]
# Set the next two values to the ID (not name) of your KV namespace, for example "123456789abcdef8b5588f3d134f74ac".
# Note that they will both be set to the same value.
binding = ""
id = ""
Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the public part of the key when deploying a Worker!
Copy the (pre-compiled and minified) code of the Worker into the worker.js file. You can do that with this command:
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-1.15"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
Deploy the Worker using Wrangler:
npx wrangler publish
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
workerName: Name of the Worker script. This is the value you set in the name property in the wrangler.toml file.
workerUrl: URL of the deployed Worker. The npx wrangler command will show the full URL to you, for example https://mydaprkv.mydomain.workers.dev.
Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Worker KV namespace). These include industry-standard measures such as:
All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you’re using an older version of OpenSSL.
Note for Mac users: on macOS, the "openssl" binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn't support Ed25519 keys. If you're using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using brew install openssl@3 and then replace openssl in the commands below with $(brew --prefix)/opt/openssl@3/bin/openssl.
You can generate a new Ed25519 key pair with OpenSSL using:
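# Generate the private key
openssl genpkey -algorithm ed25519 -out private.pem
# Extract the public key from the private key
openssl pkey -in private.pem -pubout -out public.pem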
Regardless of how you generated your key pair, with the instructions above you’ll have two files:
private.pem contains the private part of the key; use the contents of this file for the key property of the component’s metadata.
public.pem contains the public part of the key, which you'll need only if you're deploying a Worker manually (as per the instructions in the previous section).
Warning
Protect the private part of your key and treat it as a secret value!
Additional notes
Note that Cloudflare Workers KV doesn’t guarantee strong data consistency. Although changes are visible immediately (usually) for requests made to the same Cloudflare datacenter, it can take a certain amount of time (usually up to one minute) for changes to be replicated across all Cloudflare regions.
This state store supports TTLs with Dapr, but the minimum value for the TTL is 1 minute.
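For example, to save an item with a 2-minute TTL (a sketch; the store name, key, and value are illustrative):

curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[{"key": "mykey", "value": "myvalue", "metadata": {"ttlInSeconds": "120"}}]'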
Detailed information on the CockroachDB state store component
Create a Dapr component
Create a file called cockroachdb.yaml, paste the following, and replace the <CONNECTION STRING> value with your connection string. The connection string for CockroachDB follows the same standard as PostgreSQL connection strings. For example, "host=localhost user=root port=26257 connect_timeout=10 database=dapr_test". See the CockroachDB documentation on database connections for information on how to define a connection string.
If you want to also configure CockroachDB to store actors, add the actorStateStore option as in the example below.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.cockroachdb
  version: v1
  metadata:
  # Connection string
  - name: connectionString
    value: "<CONNECTION STRING>"
  # Timeout for database operations, in seconds (optional)
  #- name: timeoutInSeconds
  #  value: 20
  # Name of the table where to store the state (optional)
  #- name: tableName
  #  value: "state"
  # Name of the table where to store metadata used by Dapr (optional)
  #- name: metadataTableName
  #  value: "dapr_metadata"
  # Cleanup interval in seconds, to remove expired rows (optional)
  #- name: cleanupIntervalInSeconds
  #  value: 3600
  # Max idle time for connections before they're closed (optional)
  #- name: connectionMaxIdleTime
  #  value: 0
  # Uncomment this if you wish to use CockroachDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Details | Example |
|-------|----------|---------|---------|
| connectionString | Y | The connection string for the database | "host=localhost user=root port=26257 connect_timeout=10 database=dapr_test" |
| timeoutInSeconds | N | Timeout, in seconds, for all database operations. Defaults to 20 | 30 |
| tableName | N | Name of the table where the data is stored. Defaults to state. Can optionally have the schema name as prefix, such as public.state | "state", "public.state" |
| metadataTableName | N | Name of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata. Can optionally have the schema name as prefix, such as public.dapr_metadata | "dapr_metadata", "public.dapr_metadata" |
| cleanupIntervalInSeconds | N | Interval, in seconds, to clean up rows with an expired TTL. Default: 3600 (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup | 1800, -1 |
| connectionMaxIdleTime | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose | "5m" |
| actorStateStore | N | Consider this state store for actors. Defaults to "false" | "true", "false" |
Setup CockroachDB
Run an instance of CockroachDB. You can run a local instance of CockroachDB in Docker CE with the following command:
This example does not describe a production configuration; it creates a single-node cluster and is only recommended for local environments.
docker run --name roach1 -p 26257:26257 cockroachdb/cockroach:v21.2.3 start-single-node --insecure
Create a database for state data.
To create a new database in CockroachDB, run the following SQL command inside the container:
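# A sketch, using the container name from the Docker command above and the
# database name from the example connection string
docker exec -it roach1 ./cockroach sql --insecure -e 'CREATE DATABASE dapr_test;'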
The easiest way to install CockroachDB on Kubernetes is by using the CockroachDB Operator:
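# A sketch; check the CockroachDB Operator project for the current manifest URLs
kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/master/install/crds.yaml
kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach-operator/master/install/operator.yaml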
Advanced
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.
Because CockroachDB doesn't have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered "expired". "Expired" records are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
You can set the interval for the deletion of expired records with the cleanupIntervalInSeconds metadata property, which defaults to 3600 seconds (that is, 1 hour).
Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupIntervalInSeconds to a smaller value - for example, 300 (300 seconds, or 5 minutes).
If you do not plan to use TTLs with Dapr and the CockroachDB state store, you should consider setting cleanupIntervalInSeconds to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| serverAddress | Y | Comma-delimited endpoints | "my-cluster-grpc:1408" |
| tlsEnabled | N | Indicates if TLS should be enabled. Defaults to false | "true" |
| tlsClientCertPath | N | Client certificate path for Coherence. Defaults to "". Can be secretKeyRef to use a secret reference | "-----BEGIN CERTIFICATE-----\nMIIC9TCCA..." |
| tlsClientKey | N | Client key for Coherence. Defaults to "". Can be secretKeyRef to use a secret reference | "-----BEGIN CERTIFICATE-----\nMIIC9TCCA..." |
| tlsCertsPath | N | Additional certificates for Coherence. Defaults to "". Can be secretKeyRef to use a secret reference | "-----BEGIN CERTIFICATE-----\nMIIC9TCCA..." |
| ignoreInvalidCerts | N | Indicates whether to ignore self-signed certificates; for testing only, not to be used in production. Defaults to false | "false" |
| scopeName | N | A scope name to use for the internal cache. Defaults to "" | "my-scope" |
| requestTimeout | N | Timeout for calls to the cluster. Defaults to "30s" | "15s" |
| nearCacheTTL | N | If non-zero, a near cache is used and this value is the TTL of the near cache. Defaults to 0s | "60s" |
| nearCacheUnits | N | If non-zero, a near cache is used and this value is the maximum size of the near cache in units. Defaults to 0 | "1000" |
| nearCacheMemory | N | If non-zero, a near cache is used and this value is the maximum size of the near cache in bytes. Defaults to 0 | "4096" |
About Using Near Cache TTL
The Coherence state store allows you to specify a near cache to cache frequently accessed data when using the Dapr client.
When you access data using Get(ctx context.Context, req *GetRequest), returned entries are stored in the near cache, and subsequent access to keys in the near cache is almost instant; without a near cache, each Get() operation results in a network call.
When using the near cache option, Coherence automatically adds a MapListener to the internal cache which listens on all cache events and updates or invalidates entries in the near cache that have been changed or removed on the server.
To manage the amount of memory used by the near cache, the following options are supported when creating one:
nearCacheTTL – objects expired after time in near cache, for example 5 minutes
nearCacheUnits – maximum number of cache entries in the near cache
nearCacheMemory – maximum amount of memory used by cache entries
You can specify either units (nearCacheUnits) or memory (nearCacheMemory), and in either case optionally a TTL.
The minimum expiry time for a near cache entry is 1/4 second, to ensure that expiry of entries is as efficient as possible. You will receive an error if you try to set the TTL to a lower value.
Setup Coherence
Run Coherence locally using Docker:
docker run -d -p 1408:1408 -p 30000:30000 ghcr.io/oracle/coherence-ce:25.03.1
You can then interact with the server using localhost:1408.
The easiest way to install Coherence on Kubernetes is by using the Coherence Operator:
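# A sketch; check the Coherence Operator documentation for the current release manifest
kubectl apply -f https://github.com/oracle/coherence-operator/releases/latest/download/coherence-operator.yaml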
Detailed information on the Etcd state store component
Component format
To set up an Etcd state store, create a component of type state.etcd. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.etcd
  # Supports v1 and v2. Users should always use v2 by default. There is no
  # migration path from v1 to v2, see `versioning` below.
  version: v2
  metadata:
  - name: endpoints
    value: <CONNECTION STRING> # Required. Example: 192.168.0.1:2379,192.168.0.2:2379,192.168.0.3:2379
  - name: keyPrefixPath
    value: <KEY PREFIX STRING> # Optional. default: "". Example: "dapr"
  - name: tlsEnable
    value: <ENABLE TLS> # Optional. Example: "false"
  - name: ca
    value: <CA> # Optional. Required if tlsEnable is `true`.
  - name: cert
    value: <CERT> # Optional. Required if tlsEnable is `true`.
  - name: key
    value: <KEY> # Optional. Required if tlsEnable is `true`.
  # Uncomment this if you wish to use Etcd as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Versioning
Dapr has 2 versions of the Etcd state store component: v1 and v2. It is recommended to use v2, as v1 is deprecated.
While v1 and v2 have the same metadata fields, v1 causes data inconsistencies in apps when using Actor TTLs from Dapr v1.12.
v1 and v2 are incompatible, and there is no data migration path from v1 to v2 on an existing active Etcd cluster and keyPrefixPath.
If you are using v1, you should continue to use v1 until you create a new Etcd cluster or use a different keyPrefixPath.
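Setup Etcd

You can run Etcd locally using Docker Compose. Create a new file named docker-compose.yml; a minimal sketch (using the Bitnami image with authentication disabled, suitable for local development only) could look like this:

version: '3'
services:
  etcd:
    image: bitnami/etcd:latest
    environment:
      # Disables authentication; for local development only
      - ALLOW_NONE_AUTHENTICATION=yes
    ports:
      - "2379:2379"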
Save the docker-compose.yml file and run the following command to start the Etcd server:
docker-compose up -d
This starts the Etcd server in the background and exposes the default Etcd port of 2379. You can then interact with the server using the etcdctl command-line client on localhost:2379. For example:
etcdctl --endpoints=localhost:2379 put mykey myvalue
Use Helm to quickly create an Etcd instance in your Kubernetes cluster. This approach requires Installing Helm.
Follow the Bitnami instructions to get started with setting up Etcd in Kubernetes.
Detailed information on the GCP Firestore state store component
Component format
To set up a GCP Firestore state store, create a component of type state.gcp.firestore. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.gcp.firestore
  version: v1
  metadata:
  - name: project_id
    value: <REPLACE-WITH-PROJECT-ID> # Required.
  - name: type
    value: <REPLACE-WITH-CREDENTIALS-TYPE> # Required.
  - name: endpoint # Optional.
    value: "http://localhost:8432"
  - name: private_key_id
    value: <REPLACE-WITH-PRIVATE-KEY-ID> # Optional.
  - name: private_key
    value: <REPLACE-WITH-PRIVATE-KEY> # Optional, but required if `private_key_id` is specified.
  - name: client_email
    value: <REPLACE-WITH-CLIENT-EMAIL> # Optional, but required if `private_key_id` is specified.
  - name: client_id
    value: <REPLACE-WITH-CLIENT-ID> # Optional, but required if `private_key_id` is specified.
  - name: auth_uri
    value: <REPLACE-WITH-AUTH-URI> # Optional.
  - name: token_uri
    value: <REPLACE-WITH-TOKEN-URI> # Optional.
  - name: auth_provider_x509_cert_url
    value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Optional.
  - name: client_x509_cert_url
    value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Optional.
  - name: entity_kind
    value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
  - name: noindex
    value: <REPLACE-WITH-BOOLEAN> # Optional. default: "false"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| project_id | Y | The ID of the GCP project to use | "project-id" |
| type | Y | The credentials type | "service_account" |
| endpoint | N | GCP endpoint for the component to use. Only used for local development with (for example) the GCP Datastore Emulator. The endpoint is unnecessary when running against the GCP production API | "localhost:8432" |
| private_key_id | N | The ID of the private key to use | "private-key-id" |
| private_key | N | If using explicit credentials, this field should contain the private_key field from the service account JSON | |
Detailed documentation on the in-memory state component
The in-memory state store component maintains state in the Dapr sidecar’s memory. This is primarily meant for development purposes. State is not replicated across multiple sidecars and is lost when the Dapr sidecar is restarted.
Component format
To set up the in-memory state store, create a component of type state.in-memory. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.in-memory
  version: v1
  metadata:
  # Uncomment this if you wish to use in-memory as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Note: While in-memory does not require any specific metadata for the component to work, spec.metadata is a required field.
Detailed information on the JetStream KV state store component
Component format
To set up a JetStream KV state store, create a component of type state.jetstream. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.jetstream
  version: v1
  metadata:
  - name: natsURL
    value: "nats://localhost:4222"
  - name: jwt
    value: "eyJhbGciOiJ...6yJV_adQssw5c" # Optional. Used for decentralized JWT authentication
  - name: seedKey
    value: "SUACS34K232O...5Z3POU7BNIL4Y" # Optional. Used for decentralized JWT authentication
  - name: bucket
    value: "<bucketName>"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields

| Field | Required | Details | Example |
|-------|----------|---------|---------|
| natsURL | Y | NATS server address URL | "nats://localhost:4222" |
| jwt | N | NATS decentralized authentication JWT | "eyJhbGciOiJ...6yJV_adQssw5c" |
| seedKey | N | NATS decentralized authentication seed key | "SUACS34K232O...5Z3POU7BNIL4Y" |
| bucket | Y | JetStream KV bucket name | "<bucketName>" |
Create a NATS server
You can run a NATS Server with JetStream enabled locally using Docker:
docker run -d -p 4222:4222 nats:latest -js
You can then interact with the server using the client port: localhost:4222.
Install NATS JetStream on Kubernetes by using Helm:
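# A sketch; the release name is illustrative and the JetStream flag may vary by chart version
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install --set nats.jetstream.enabled=true my-nats nats/nats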
To set up this state store, create a component of type state.sqlserver. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.sqlserver
  version: v1
  metadata:
  # Authenticate using SQL Server credentials
  - name: connectionString
    value: |
      Server=myServerName\myInstanceName;Database=myDataBase;User Id=myUsername;Password=myPassword;

  # Authenticate with Microsoft Entra ID (Azure SQL only)
  # "useAzureAD" must be set to "true"
  - name: useAzureAD
    value: true
  # Connection string or URL of the Azure SQL database, optionally containing the database
  - name: connectionString
    value: |
      sqlserver://myServerName.database.windows.net:1433?database=myDataBase

  # Other optional fields (listing default values)
  - name: tableName
    value: "state"
  - name: metadataTableName
    value: "dapr_metadata"
  - name: schema
    value: "dbo"
  - name: keyType
    value: "string"
  - name: keyLength
    value: "200"
  - name: indexedProperties
    value: ""
  - name: cleanupIntervalInSeconds
    value: "3600"
  # Uncomment this if you wish to use Microsoft SQL Server as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
If you wish to use SQL Server as an actor state store, append the following to the metadata:
- name: actorStateStore
  value: "true"
Spec metadata fields
Authenticate using SQL Server credentials
The following metadata options are required to authenticate using SQL Server credentials. This is supported on both SQL Server and Azure SQL.
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| connectionString | Y | The connection string used to connect. If the connection string contains the database, it must already exist. Otherwise, if the database is omitted, a default database named "Dapr" is created | "Server=myServerName\myInstanceName;Database=myDataBase;User Id=myUsername;Password=myPassword;" |

Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure SQL only. All authentication methods supported by Dapr can be used, including client credentials ("service principal") and Managed Identity.
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| useAzureAD | Y | Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID | "true" |
| connectionString | Y | The connection string or URL of the Azure SQL database, without credentials. If the connection string contains the database, it must already exist. Otherwise, if the database is omitted, a default database named "Dapr" is created | "sqlserver://myServerName.database.windows.net:1433?database=myDataBase" |

Other metadata options

| Field | Required | Details | Example |
|-------|----------|---------|---------|
| actorStateStore | N | Indicates that Dapr should configure this component for the actor state store (more information) | "true" |
| cleanupIntervalInSeconds | N | Interval, in seconds, to clean up rows with an expired TTL. Default: "3600" (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup | "1800", "-1" |
Create a Microsoft SQL Server/Azure SQL instance
Follow the instructions from the Azure documentation on how to create a SQL database. The database must be created before Dapr consumes it.
In order to set up SQL Server as a state store, you need the following properties:
Connection String: The SQL Server connection string. For example: server=localhost;user id=sa;password=your-password;port=1433;database=mydatabase;
Schema: The database schema to use (default=dbo). Will be created if it does not exist
Table Name: The database table name. Will be created if it does not exist
Indexed Properties: Optional properties from JSON data which will be indexed and persisted as individual columns
Create a dedicated user
When connecting with a dedicated user (not sa), these authorizations are required for the user - even when the user is owner of the desired database schema:
CREATE TABLE
CREATE TYPE
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.
Because SQL Server doesn’t have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered “expired”. “Expired” records are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
You can set the interval for the deletion of expired records with the cleanupIntervalInSeconds metadata property, which defaults to 3600 seconds (that is, 1 hour).
Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupIntervalInSeconds to a smaller value - for example, 300 (300 seconds, or 5 minutes).
If you do not plan to use TTLs with Dapr and the SQL Server state store, you should consider setting cleanupIntervalInSeconds to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database.
The state store does not have an index on the ExpireDate column, which means that each clean up operation must perform a full table scan. If you intend to write to the table with a large number of records that use TTLs, you should consider creating an index on the ExpireDate column. An index makes queries faster, but uses more storage space and slightly slows down writes.
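A sketch of such an index, assuming the default dbo.state table (adjust the schema and table name to your configuration):

-- Index the expiration column to speed up the periodic cleanup scans
CREATE NONCLUSTERED INDEX IX_state_ExpireDate
    ON [dbo].[state] (ExpireDate ASC);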
Detailed information on the MongoDB state store component
Component format
To set up a MongoDB state store, create a component of type state.mongodb. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.mongodb
  version: v1
  metadata:
  - name: server
    value: <REPLACE-WITH-SERVER> # Required unless "host" field is set. Example: "server.example.com"
  - name: host
    value: <REPLACE-WITH-HOST> # Required unless "server" field is set. Example: "mongo-mongodb.default.svc.cluster.local:27017"
  - name: username
    value: <REPLACE-WITH-USERNAME> # Optional. Example: "admin"
  - name: password
    value: <REPLACE-WITH-PASSWORD> # Optional.
  - name: databaseName
    value: <REPLACE-WITH-DATABASE-NAME> # Optional. default: "daprStore"
  - name: collectionName
    value: <REPLACE-WITH-COLLECTION-NAME> # Optional. default: "daprCollection"
  - name: writeConcern
    value: <REPLACE-WITH-WRITE-CONCERN> # Optional.
  - name: readConcern
    value: <REPLACE-WITH-READ-CONCERN> # Optional.
  - name: operationTimeout
    value: <REPLACE-WITH-OPERATION-TIMEOUT> # Optional. default: "5s"
  - name: params
    value: <REPLACE-WITH-ADDITIONAL-PARAMETERS> # Optional. Example: "?authSource=daprStore&ssl=true"
  # Uncomment this if you wish to use MongoDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Actor state store and transactions support
When using as an actor state store or to leverage transactions, MongoDB must be running in a Replica Set.
If you wish to use MongoDB as an actor store, add this metadata option to your Component YAML:
- name: actorStateStore
  value: "true"
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| server | Y [1] | The server to connect to, when using DNS SRV records | "server.example.com" |
| host | Y [1] | The host to connect to | "mongo-mongodb.default.svc.cluster.local:27017" |
| username | N | The username of the user to connect with (applicable in conjunction with host) | "admin" |
| password | N | The password of the user (applicable in conjunction with host) | "password" |
| databaseName | N | The name of the database to use. Defaults to "daprStore" | "daprStore" |
| collectionName | N | The name of the collection to use. Defaults to "daprCollection" | "daprCollection" |
| params | N [2] | Additional parameters to append to the connection string | "?authSource=daprStore&ssl=true" |
| actorStateStore | N | Consider this state store for actors. Defaults to "false" | "true", "false" |
[1] The server and host fields are mutually exclusive. If neither or both are set, Dapr returns an error.
[2] The params field accepts a query string that specifies connection-specific options as <name>=<value> pairs, separated by & and prefixed with ?. For example, to use the "daprStore" database as the authentication database and enable SSL/TLS for the connection, specify params as ?authSource=daprStore&ssl=true. See the MongoDB manual for the list of available options and their use cases.
Setup MongoDB
You can run a single MongoDB instance locally using Docker:
docker run --name some-mongo -d -p 27017:27017 mongo
You can then interact with the server at localhost:27017. If you do not specify a databaseName value in your component definition, make sure to create a database named daprStore.
In order to use the MongoDB state store for transactions and as an actor state store, you need to run MongoDB as a Replica Set. Refer to the official documentation for how to create a 3-node Replica Set using Docker.
You can conveniently install MongoDB on Kubernetes using the Helm chart packaged by Bitnami. Refer to the documentation for the Helm chart for deploying MongoDB, both as a standalone server, and with a Replica Set (required for using transactions and actors).
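For example (a sketch; the release name mongo matches the service name referenced below):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mongo bitnami/mongodb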
This installs MongoDB into the default namespace.
To interact with MongoDB, find the service with: kubectl get svc mongo-mongodb.
For example, if installing using the Helm defaults above, the MongoDB host address would be:
mongo-mongodb.default.svc.cluster.local:27017
Follow the on-screen instructions to get the root password for MongoDB.
The username is typically admin by default.
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate when the data should be considered “expired”.
Detailed information on the MySQL state store component
Component format
The MySQL state store component allows connecting to both MySQL and MariaDB databases. In this document, we refer to "MySQL" to indicate both databases.
To set up a MySQL state store, create a component of type state.mysql. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.mysql
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  - name: schemaName
    value: "<SCHEMA NAME>"
  - name: tableName
    value: "<TABLE NAME>"
  - name: timeoutInSeconds
    value: "30"
  - name: pemPath # Required if pemContents not provided. Path to pem file.
    value: "<PEM PATH>"
  - name: pemContents # Required if pemPath not provided. Pem value.
    value: "<PEM CONTENTS>"
  # Uncomment this if you wish to use MySQL & MariaDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
If you wish to use MySQL as an actor store, append the following to the yaml.
- name: actorStateStore
  value: "true"
Spec metadata fields
| Field | Required | Details | Example |
|-------|----------|---------|---------|
| connectionString | Y | The connection string to connect to MySQL. Do not add the schema to the connection string | Non-SSL connection: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true", enforced SSL connection: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom" |
| schemaName | N | The schema name to use. Will be created if the schema does not exist. Defaults to "dapr_state_store" | "custom_schema", "dapr_schema" |
| tableName | N | The table name to use. Will be created if the table does not exist. Defaults to "state" | "table_name", "dapr_state" |
| timeoutInSeconds | N | Timeout for all database operations. Defaults to 20 | 30 |
| pemPath | N | Full path to the PEM file to use for enforced SSL connections. Required if pemContents is not provided. Cannot be used in a Kubernetes environment | "/path/to/file.pem", "C:\path\to\file.pem" |
| pemContents | N | Contents of the PEM file to use for enforced SSL connections. Required if pemPath is not provided. Can be used in a Kubernetes environment | "pem value" |
| cleanupIntervalInSeconds | N | Interval, in seconds, to clean up rows with an expired TTL. Default: 3600 (that is, 1 hour). Setting this to values <=0 disables the periodic cleanup | 1800, -1 |
| actorStateStore | N | Consider this state store for actors. Defaults to "false" | "true", "false" |
Setup MySQL
Dapr can use any MySQL instance - containerized, running on your local dev machine, or a managed cloud service.
Run an instance of MySQL. You can run a local instance of MySQL in Docker CE with the following command:
This example does not describe a production configuration because it sets the password in plain text and the user name is left as the MySQL default of “root”.
docker run --name dapr-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest
You can use Helm to quickly create a MySQL instance in your Kubernetes cluster. This approach requires installing Helm.
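For example (a sketch, assuming the Bitnami chart and a release named dapr-mysql, which matches the secret name used in the password commands below):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install dapr-mysql bitnami/mysql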
Run kubectl get pods to see the MySQL containers now running in your cluster.
Next, we’ll get our password, which is slightly different depending on the OS we’re using:
Windows: Run [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($(kubectl get secret --namespace default dapr-mysql -o jsonpath="{.data.mysql-root-password}"))) and copy the outputted password.
Linux/MacOS: Run kubectl get secret --namespace default dapr-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode and copy the outputted password.
With the password you can construct your connection string.
Replace the <CONNECTION STRING> value with your connection string. The connection string is a standard MySQL connection string. For example, "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true".
Enforced SSL connection
If your server requires SSL, your connection string must end with &tls=custom, for example: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom". You must replace <PEM PATH> with the full path to the PEM file. The connection to MySQL requires a minimum TLS version of 1.2.
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate when the data should be considered “expired”.
Because MySQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
The interval at which the deletion of expired records happens is set with the cleanupIntervalInSeconds metadata property, which defaults to 3600 seconds (that is, 1 hour).
Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupIntervalInSeconds to a smaller value, for example 300 (300 seconds, or 5 minutes).
If you do not plan to use TTLs with Dapr and the MySQL state store, you should consider setting cleanupIntervalInSeconds to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database.
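For example, to disable the periodic cleanup entirely, add this to the component metadata:
- name: cleanupIntervalInSeconds
  value: "-1"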
Detailed information on the OCI Object Storage state store component
Component format
To setup OCI Object Storage state store create a component of type state.oci.objectstorage. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.oci.objectstorage
  version: v1
  metadata:
  - name: instancePrincipalAuthentication
    value: <"true" or "false"> # Optional. default: "false"
  - name: configFileAuthentication
    value: <"true" or "false"> # Optional. default: "false". Not used when instancePrincipalAuthentication == "true"
  - name: configFilePath
    value: <REPLACE-WITH-FULL-QUALIFIED-PATH-OF-CONFIG-FILE> # Optional. No default. Only used when configFileAuthentication == "true"
  - name: configFileProfile
    value: <REPLACE-WITH-NAME-OF-PROFILE-IN-CONFIG-FILE> # Optional. default: "DEFAULT". Only used when configFileAuthentication == "true"
  - name: tenancyOCID
    value: <REPLACE-WITH-TENANCY-OCID> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
  - name: userOCID
    value: <REPLACE-WITH-USER-OCID> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
  - name: fingerPrint
    value: <REPLACE-WITH-FINGERPRINT> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
  - name: privateKey # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
    value: |
      -----BEGIN RSA PRIVATE KEY-----
      REPLACE-WITH-PRIVATE-KEY-AS-IN-PEM-FILE
      -----END RSA PRIVATE KEY-----
  - name: region
    value: <REPLACE-WITH-OCI-REGION> # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true"
  - name: bucketName
    value: <REPLACE-WITH-BUCKET-NAME>
  - name: compartmentOCID
    value: <REPLACE-WITH-COMPARTMENT-OCID>
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field
Required
Details
Example
instancePrincipalAuthentication
N
Boolean to indicate whether instance principal based authentication is used. Default: "false"
"true" or "false"
configFileAuthentication
N
Boolean to indicate whether identity credential details are provided through a configuration file. Default: "false". Not required nor used when instancePrincipalAuthentication is true.
"true" or "false"
configFilePath
N
Full path name to the OCI configuration file. No default value exists. Not used when instancePrincipalAuthentication is true. Note: the ~/ prefix is not supported.
"/home/apps/configuration-files/myOCIConfig.txt"
configFileProfile
N
Name of the profile in the configuration file to use. Default: "DEFAULT". Not used when instancePrincipalAuthentication is true.
"DEFAULT" or "PRODUCTION"
tenancyOCID
Y
The OCI tenancy identifier. Not required nor used when instancePrincipalAuthentication is true.
userOCID
Y
The OCID for an OCI account (this account requires permissions to access OCI Object Storage). Not required nor used when instancePrincipalAuthentication is true.
"ocid1.user.oc1..aaaaaaaaby4oyyyuqwy7623yuwe76"
fingerPrint
Y
Fingerprint of the public key. Not required nor used when instancePrincipalAuthentication is true.
"02:91:6c:49:e2:94:21:15:a7:6b:0e:a7:34:e1:3d:1b"
privateKey
Y
Private key of the RSA key pair. Not required nor used when instancePrincipalAuthentication is true.
"MIIEoyuweHAFGFG2727as+7BTwQRAIW4V"
region
Y
OCI Region. Not required nor used when instancePrincipalAuthentication is true.
"us-ashburn-1"
bucketName
Y
Name of the bucket written to and read from (and if necessary created)
"application-state-store-bucket"
compartmentOCID
Y
The OCID for the compartment that contains the bucket
The OCI Object Storage state store needs to interact with Oracle Cloud Infrastructure. The state store supports two different approaches to authentication. One is based on an identity (a user or service account) and the other is instance principal authentication leveraging the permissions granted to the compute instance running the application workload. Note: Resource Principal Authentication - used for resources that are not instances such as serverless functions - is not currently supported.
Dapr applications running on Oracle Cloud Infrastructure - in a compute instance or as a container on Kubernetes - can leverage instance principal authentication. See the OCI documentation on calling OCI Services from instances for more background. In short: the instance needs to be a member of a Dynamic Group, and this Dynamic Group needs to be granted permissions for interacting with the Object Storage service through IAM policies. For such instance principal authentication, set the property instancePrincipalAuthentication to "true". You do not need to configure the properties tenancyOCID, userOCID, region, fingerPrint and privateKey - these are ignored if you define values for them.
Identity based authentication interacts with OCI through an OCI account that has permissions to create, read and delete objects through OCI Object Storage in the indicated bucket and that is allowed to create a bucket in the specified compartment if the bucket is not created beforehand. The OCI documentation describes how to create an OCI Account. The interaction by the state store is performed using the public key’s fingerprint and a private key from an RSA Key Pair generated for the OCI account. The instructions for generating the key pair and getting hold of the required information are available in the OCI documentation.
Details for the identity and the identity's credentials to be used for interaction with OCI can be provided directly in the Dapr component properties file - using the properties tenancyOCID, userOCID, fingerPrint, privateKey and region - or can be provided from a configuration file, as is common for many OCI related tools (such as CLI and Terraform) and SDKs. In the latter case, the exact file name and full path must be provided through the property configFilePath. Note: the ~/ prefix is not supported in the path. A configuration file can contain multiple profiles; the desired profile can be specified through the property configFileProfile. If no value is provided, DEFAULT is used as the name of the profile. Note: if the indicated profile is not found, then the DEFAULT profile (if it exists) is used instead. The OCI SDK documentation gives details about the definition of the configuration file.
If you wish to create the bucket for Dapr to use, you can do so beforehand. However, the Object Storage state provider will create it automatically - in the specified compartment - if it doesn't exist.
In order to setup OCI Object Storage as a state store, you need the following properties:
instancePrincipalAuthentication: The flag that indicates if instance principal based authentication should be used.
configFileAuthentication: The flag that indicates if the OCI identity credential details are provided through a configuration file. Not used when instancePrincipalAuthentication is true.
configFilePath: Full path name to the OCI configuration file. Not used when instancePrincipalAuthentication is true or configFileAuthentication is not true.
configFileProfile: Name of the profile in the configuration file to use. Default: "DEFAULT". Not required nor used when instancePrincipalAuthentication is true or configFileAuthentication is not true. When the specified profile is not found in the configuration file, the DEFAULT profile is used if it exists.
tenancyOCID: The identifier for the OCI cloud tenancy in which the state is to be stored. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
userOCID: The identifier for the account used by the state store component to connect to OCI; this must be an account with appropriate permissions on the OCI Object Storage service in the specified compartment and bucket. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
fingerPrint: The fingerprint for the public key in the RSA key pair generated for the account indicated by userOCID. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
privateKey: The private key in the RSA key pair generated for the account indicated by userOCID. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
region: The OCI region - for example us-ashburn-1, eu-amsterdam-1, ap-mumbai-1. Not used when instancePrincipalAuthentication is true.
bucketName: The name of the bucket on OCI Object Storage in which state will be created. This bucket can exist already when the state store is initialized, or it will be created during initialization of the state store. Note that bucket names are unique within a namespace.
compartmentOCID: The identifier of the compartment within the tenancy in which the bucket exists or will be created.
What Happens at Runtime?
Every state entry is represented by an object in OCI Object Storage. The OCI Object Storage state store uses the key property provided in requests to the Dapr API to determine the name of the object. The value is stored as the (literal) content of the object. Each object is assigned a unique ETag value whenever it is created or updated (that is, overwritten); this is native behavior of OCI Object Storage. The state store assigns a metadata tag to every object it writes: the tag is category and its value is dapr-state-store. This allows the objects created as state for Daprized applications to be identified.
You will be able to inspect all state stored through the OCI Object Storage state store by inspecting the contents of the bucket through the console, the APIs, CLI or SDKs. By going directly to the bucket, you can prepare state that will be available as state to your application at runtime.
Time To Live and State Expiration
The OCI Object Storage state store supports Dapr's Time To Live logic that ensures that state cannot be retrieved after it has expired. See this How To on Setting State Time To Live for details.
OCI Object Storage does not have native support for a Time To Live setting. The implementation in this component uses a metadata tag put on each object for which a TTL has been specified. The tag is called expiry-time-from-ttl and it contains a string in ISO date-time format with the UTC-based expiry time. When state is retrieved through a call to Get, this component checks whether expiry-time-from-ttl is set and, if so, whether it is in the past. In that case, no state is returned.
For example, the following operation stores state with a TTL of 120 seconds (notice the composite key used as the object name):
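A sketch, assuming a state store component named statestore, an application with app id myApp, and the default Dapr HTTP port; the key and value are illustrative:
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{"key": "nihilus", "value": "darth", "metadata": {"ttlInSeconds": "120"}}]'
The resulting object is named with the composite key myApp||nihilus and carries the expiry-time-from-ttl tag.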
The exact value of the expiry-time-from-ttl depends of course on the time at which the state was created and will be 120 seconds later than that moment.
Note that expired state is not removed from the state store by this component. An application operator may decide to run a periodic job that does a form of garbage collection in order to explicitly remove all state that has an expiry-time-from-ttl label with a timestamp in the past.
Concurrency
OCI Object Storage state concurrency is achieved by using ETags. Each object in OCI Object Storage is assigned a unique ETag when it is created or updated (that is, replaced). When the Set and Delete requests for this state store specify the FirstWrite concurrency policy, the request needs to provide the actual ETag value of the state to be written or removed for the request to be successful.
Consistency
OCI Object Storage state does not support Transactions.
Query
OCI Object Storage state does not support the Query API.
Detailed information on the Oracle Database state store component
Component format
Create a component properties yaml file, for example called oracle.yaml (but it could be named anything), paste the following, and replace the <CONNECTION STRING> value with your connection string. The connection string is a standard Oracle Database connection string, composed as: "oracle://user/password@host:port/servicename", for example "oracle://demo:demo@localhost:1521/xe".
In case you connect to the database using an Oracle Wallet, you should specify a value for the oracleWalletLocation property, for example: "/home/app/state/Wallet_daprDB/"; this should refer to the local file system directory that contains the file cwallet.sso that is extracted from the Oracle Wallet archive file.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.oracledatabase
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  - name: oracleWalletLocation
    value: "<FULL PATH TO DIRECTORY WITH ORACLE WALLET CONTENTS>" # Optional, no default
  - name: tableName
    value: "<NAME OF DATABASE TABLE TO STORE STATE IN>" # Optional, defaults to STATE
  # Uncomment this if you wish to use Oracle Database as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Field
Required
Details
Example
connectionString
Y
The connection string for Oracle Database
"oracle://user/password@host:port/servicename" for example "oracle://demo:demo@localhost:1521/xe" or for Autonomous Database "oracle://states_schema:State12345pw@adb.us-ashburn-1.oraclecloud.com:1522/k8j2agsqjsw_daprdb_low.adb.oraclecloud.com"
oracleWalletLocation
N
Location of the contents of an Oracle Wallet file (required to connect to Autonomous Database on OCI)
"/home/app/state/Wallet_daprDB/"
tableName
N
Name of the database table in which this instance of the state store records the data. Defaults to "STATE"
"MY_APP_STATE_STORE"
actorStateStore
N
Consider this state store for actors. Defaults to "false"
"true", "false"
What Happens at Runtime?
When the state store component initializes, it connects to the Oracle Database and checks if a table with the name specified with tableName exists. If it does not, it creates this table (with columns Key, Value, Binary_YN, ETag, Creation_Time, Update_Time, Expiration_time).
Every state entry is represented by a record in the database table. The key property provided in the request is used to determine the name of the object stored literally in the KEY column. The value is stored as the content of the object. Binary content is stored as Base64 encoded text. Each object is assigned a unique ETag value whenever it is created or updated.
Dapr uses a fixed key scheme with composite keys to partition state across applications. For general states, the key format is:
App-ID||state key. The Oracle Database state store maps this key in its entirety to the KEY column.
You can easily inspect all state stored with SQL queries against the tableName table, for example the STATE table.
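For example, assuming the default table name STATE:
SELECT * FROM STATE;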
Time To Live and State Expiration
The Oracle Database state store component supports Dapr’s Time To Live logic that ensures that state cannot be retrieved after it has expired. See this How To on Setting State Time To Live for details.
The Oracle Database does not have native support for a Time-To-Live setting. The implementation in this component uses a column called EXPIRATION_TIME to hold the time after which the record is considered expired. The value in this column is set only when a TTL was specified in a Set request. It is calculated as the current UTC timestamp with the TTL period added to it. When state is retrieved through a call to Get, this component checks if it has the EXPIRATION_TIME set and if so, it checks whether it is in the past. In that case, no state is returned.
When a TTL is specified in a Set request - for example, 120 seconds - the record is written with the EXPIRATION_TIME set to a timestamp 2 minutes (120 seconds) later than the CREATION_TIME.
Note that expired state is not removed from the state store by this component. An application operator may decide to run a periodic job that does a form of garbage collection in order to explicitly remove all state records with an EXPIRATION_TIME in the past. The SQL statement for collecting the expired garbage records:
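A sketch of such a statement, assuming the default STATE table name and the EXPIRATION_TIME column described above:
DELETE FROM STATE WHERE expiration_time < SYSTIMESTAMP;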
Concurrency in the Oracle Database state store is achieved by using ETags. Each piece of state recorded in the Oracle Database state store is assigned a unique ETag - a generated, unique string stored in the column ETag - when it is created or updated. Note: the column UPDATE_TIME is also updated whenever a Set operation is performed on an existing record.
Only when the Set and Delete requests for this state store specify the FirstWrite concurrency policy does the request need to provide the actual ETag value for the state to be written or removed for the request to be successful. If a different concurrency policy is specified, or none at all, then no check is performed on the ETag value.
Consistency
The Oracle Database state store supports Transactions. Multiple Set and Delete commands can be combined in a request that is processed as a single, atomic transaction.
Note: simple Set and Delete operations are a transaction on their own; when a Set or Delete request returns an HTTP-20X result, the database transaction has been committed successfully.
Query
Oracle Database state store does not currently support the Query API.
Create an Oracle Database and User Schema
Run an instance of Oracle Database. You can run a local instance of Oracle Database in Docker CE with the following command - or of course use an existing Oracle Database:
docker run -d -p 1521:1521 -e ORACLE_PASSWORD=TheSuperSecret1509! gvenzl/oracle-xe
This example does not describe a production configuration because it sets the password for users SYS and SYSTEM in plain text.
When the output from the command indicates that the container is running, learn the container id using the docker ps command. Then start a shell session using:
docker exec -it <container id> /bin/bash
and subsequently run the SQL*Plus client, connecting to the database as the SYS user:
sqlplus sys/TheSuperSecret1509! as sysdba
Create a database schema for state data. Create a new user schema - for example called dapr - for storing state data. Grant this user (schema) privileges for creating a table and storing data in the associated tablespace.
To create a new user schema in Oracle Database, run the following SQL command:
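A sketch, with an illustrative schema name and password - replace both with your own values:
CREATE USER dapr IDENTIFIED BY ExamplePassword1234;
GRANT CREATE SESSION, CREATE TABLE TO dapr;
ALTER USER dapr QUOTA UNLIMITED ON USERS;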
(optional) Create table for storing state records.
The Oracle Database state store component checks if the table for storing state already exists in the database user schema it connects to and if it does not, it creates that table. However, instead of having the Oracle Database state store component create the table for storing state records at run time, you can also create the table in advance. That gives you - or the DBA for the database - more control over the physical configuration of the table. This also means you do not have to grant the create table privilege to the user schema.
Run the following DDL statement to create the table for storing the state in the dapr database user schema:
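A sketch based on the columns listed earlier - adjust data types and sizes to your requirements:
CREATE TABLE dapr.STATE (
  key VARCHAR2(2000) NOT NULL PRIMARY KEY,
  value CLOB,
  binary_yn VARCHAR2(1),
  etag VARCHAR2(50),
  creation_time TIMESTAMP WITH TIME ZONE DEFAULT SYSTIMESTAMP,
  update_time TIMESTAMP WITH TIME ZONE,
  expiration_time TIMESTAMP WITH TIME ZONE
);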
When using an Oracle Autonomous Database, you need to provide the password for user ADMIN. You use this account (initially at least) for database administration activities. You can work in the web-based SQL Developer tool, its desktop counterpart, or any of a plethora of database development tools.
Create a schema for state data. As before, create a new user schema for storing state data - for example using the ADMIN account - grant it privileges for creating a table and storing data in the associated tablespace, and optionally create the state table in advance, using the same SQL command and DDL statement shown above.
Detailed information on the PostgreSQL state store component
Note
This is the v2 of the PostgreSQL state store component, which contains some improvements to performance and reliability. New applications are encouraged to use v2.
The PostgreSQL v2 state store component is not compatible with the v1 component, and data cannot be migrated between the two components. The v2 component does not offer support for state store query APIs.
There are no plans to deprecate the v1 component.
This component allows using PostgreSQL (Postgres) as state store for Dapr, using the “v2” component. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.postgresql
  # Note: setting "version" to "v2" is required to use the v2 of the component
  version: v2
  metadata:
  # Connection string
  - name: connectionString
    value: "<CONNECTION STRING>"
  # Individual connection parameters - can be used instead to override connectionString parameters
  #- name: host
  #  value: "localhost"
  #- name: hostaddr
  #  value: "127.0.0.1"
  #- name: port
  #  value: "5432"
  #- name: database
  #  value: "my_db"
  #- name: user
  #  value: "postgres"
  #- name: password
  #  value: "example"
  #- name: sslRootCert
  #  value: "/path/to/ca.crt"
  # Timeout for database operations, as a Go duration or number of seconds (optional)
  #- name: timeout
  #  value: 20
  # Prefix for the table where the data is stored (optional)
  #- name: tablePrefix
  #  value: ""
  # Name of the table where to store metadata used by Dapr (optional)
  #- name: metadataTableName
  #  value: "dapr_metadata"
  # Cleanup interval in seconds, to remove expired rows (optional)
  #- name: cleanupInterval
  #  value: "1h"
  # Maximum number of connections pooled by this component (optional)
  #- name: maxConns
  #  value: 0
  # Max idle time for connections before they're closed (optional)
  #- name: connectionMaxIdleTime
  #  value: 0
  # Controls the default mode for executing queries. (optional)
  #- name: queryExecMode
  #  value: ""
  # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Authenticate using a connection string
The following metadata options are required to authenticate using a PostgreSQL connection string.
Field
Required
Details
Example
connectionString
Y
The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string.
"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"
Authenticate using individual connection parameters
In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.
Field
Required
Details
Example
host
Y
The host name or IP address of the PostgreSQL server
"localhost"
hostaddr
N
The IP address of the PostgreSQL server (alternative to host)
"127.0.0.1"
port
Y
The port number of the PostgreSQL server
"5432"
database
Y
The name of the database to connect to
"my_db"
user
Y
The PostgreSQL user to connect as
"postgres"
password
Y
The password for the PostgreSQL user
"example"
sslRootCert
N
Path to the SSL root certificate file
"/path/to/ca.crt"
Note
When using individual connection parameters, these will override the ones present in the connectionString.
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field
Required
Details
Example
useAzureAD
Y
Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID.
"true"
connectionString
Y
The connection string for the PostgreSQL database. This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity. This is often the name of the corresponding principal (for example, the name of the Microsoft Entra ID application). This connection string should not contain any password.
Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
Field
Required
Details
Example
useAWSIAM
Y
Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases.
"true"
connectionString
Y
The connection string for the PostgreSQL database. This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
awsRegion
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to.
"us-east-1"
awsAccessKey
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account
"AKIAIOSFODNN7EXAMPLE"
awsSecretKey
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key
"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionToken
N
This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials.
"TOKEN"
Other metadata options
Field
Required
Details
Example
tablePrefix
N
Prefix for the table where the data is stored. Can optionally have the schema name as prefix, such as public.prefix_
"prefix_", "public.prefix_"
metadataTableName
N
Name of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata. Can optionally have the schema name as prefix, such as public.dapr_metadata
"dapr_metadata", "public.dapr_metadata"
timeout
N
Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s
"30s", 30
cleanupInterval
N
Interval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: 1h (1 hour). Setting this to values <=0 disables the periodic cleanup.
"30m", 1800, -1
maxConns
N
Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs.
"4"
connectionMaxIdleTime
N
Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose.
"5m"
queryExecMode
N
Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol.
"simple_protocol"
actorStateStore
N
Consider this state store for actors. Defaults to "false"
"true", "false"
Setup PostgreSQL
Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker with the following command:
docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of “postgres”.
Create a database for state data.
Either the default “postgres” database can be used, or create a new database for storing state data.
To create a new database in PostgreSQL, run the following SQL command:
CREATE DATABASE my_dapr;
Advanced
Differences between v1 and v2
The PostgreSQL state store v2 was introduced in Dapr 1.13. The pre-existing v1 remains available and is not deprecated.
In the v2 component, the table schema has been changed significantly, with the goal of increasing performance and reliability. Most notably, the value stored by Dapr is now of type BYTEA, which allows faster queries and, in some cases, is more space-efficient than the previously-used JSONB column.
However, due to this change, the v2 component does not support the Dapr state store query APIs.
Also, in the v2 component, ETags are now random UUIDs, which ensures better compatibility with other PostgreSQL-compatible databases, such as CockroachDB.
Because of these changes, v1 and v2 components are not able to read or write data from the same table. At this stage, it’s also impossible to migrate data between the two versions of the component.
Displaying the data in human-readable format
The PostgreSQL v2 component stores the state’s value in the value column, which is of type BYTEA. Most PostgreSQL tools, including pgAdmin, consider the value as binary and do not display it in human-readable form by default.
If you want to inspect the value in the state store, and you know it’s not binary (for example, JSON data), you can have the value displayed in human-readable form using a query like the following:
-- Replace "state" with the name of the state table in your environment
SELECT *, convert_from(value, 'utf-8') FROM state;
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.
Because PostgreSQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
You can set the deletion interval of expired records with the cleanupInterval metadata property, which defaults to 3600 seconds (that is, 1 hour).
Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupInterval to a smaller value; for example, 5m (5 minutes).
If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting cleanupInterval to a value <= 0 (for example, 0 or -1) to disable the periodic cleanup and reduce the load on the database.
Detailed information on the PostgreSQL v1 state store component
Note
Starting with Dapr 1.13, you can leverage the PostgreSQL v2 state store component, which contains some improvements to performance and reliability.
The v2 component is not compatible with v1, and data cannot be migrated between the two components. The v2 component does not offer support for state store query APIs.
There are no plans to deprecate the v1 component.
This component allows using PostgreSQL (Postgres) as state store for Dapr, using the “v1” component. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.postgresql
  version: v1
  metadata:
  # Connection string
  - name: connectionString
    value: "<CONNECTION STRING>"
  # Individual connection parameters - can be used instead to override connectionString parameters
  #- name: host
  #  value: "localhost"
  #- name: hostaddr
  #  value: "127.0.0.1"
  #- name: port
  #  value: "5432"
  #- name: database
  #  value: "my_db"
  #- name: user
  #  value: "postgres"
  #- name: password
  #  value: "example"
  #- name: sslRootCert
  #  value: "/path/to/ca.crt"
  # Timeout for database operations, as a Go duration or number of seconds (optional)
  #- name: timeout
  #  value: 20
  # Name of the table where to store the state (optional)
  #- name: tableName
  #  value: "state"
  # Name of the table where to store metadata used by Dapr (optional)
  #- name: metadataTableName
  #  value: "dapr_metadata"
  # Cleanup interval in seconds, to remove expired rows (optional)
  #- name: cleanupInterval
  #  value: "1h"
  # Maximum number of connections pooled by this component (optional)
  #- name: maxConns
  #  value: 0
  # Max idle time for connections before they're closed (optional)
  #- name: connectionMaxIdleTime
  #  value: 0
  # Controls the default mode for executing queries. (optional)
  #- name: queryExecMode
  #  value: ""
  # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
  #- name: actorStateStore
  #  value: "true"
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Authenticate using a connection string
The following metadata options are required to authenticate using a PostgreSQL connection string.
Field
Required
Details
Example
connectionString
Y
The connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string.
"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"
Authenticate using individual connection parameters
In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.
Field
Required
Details
Example
host
Y
The host name or IP address of the PostgreSQL server
"localhost"
hostaddr
N
The IP address of the PostgreSQL server (alternative to host)
"127.0.0.1"
port
Y
The port number of the PostgreSQL server
"5432"
database
Y
The name of the database to connect to
"my_db"
user
Y
The PostgreSQL user to connect as
"postgres"
password
Y
The password for the PostgreSQL user
"example"
sslRootCert
N
Path to the SSL root certificate file
"/path/to/ca.crt"
Note
When using individual connection parameters, these will override the ones present in the connectionString.
Authenticate using Microsoft Entra ID
Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.
Field
Required
Details
Example
useAzureAD
Y
Must be set to true to enable the component to retrieve access tokens from Microsoft Entra ID.
"true"
connectionString
Y
The connection string for the PostgreSQL database. This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password.
Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
Field
Required
Details
Example
useAWSIAM
Y
Must be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases.
"true"
connectionString
Y
The connection string for the PostgreSQL database. This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
awsRegion
N
The AWS Region where the AWS Relational Database Service is deployed to.
"us-east-1"
awsAccessKey
N
AWS access key associated with an IAM account
"AKIAIOSFODNN7EXAMPLE"
awsSecretKey
N
The secret key associated with the access key
"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionToken
N
AWS session token to use. A session token is only required if you are using temporary security credentials.
"TOKEN"
Other metadata options
Field
Required
Details
Example
tableName
N
Name of the table where the data is stored. Defaults to state. Can optionally have the schema name as prefix, such as public.state
"state", "public.state"
metadataTableName
N
Name of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata. Can optionally have the schema name as prefix, such as public.dapr_metadata
"dapr_metadata", "public.dapr_metadata"
timeout
N
Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s
"30s", 30
cleanupInterval
N
Interval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: 1h (1 hour). Setting this to values <=0 disables the periodic cleanup.
"30m", 1800, -1
maxConns
N
Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs.
"4"
connectionMaxIdleTime
N
Max idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose.
"5m"
queryExecMode
N
Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol.
"simple_protocol"
actorStateStore
N
Consider this state store for actors. Defaults to "false"
"true", "false"
Setup PostgreSQL
Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker CE with the following command:
docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of “postgres”.
Create a database for state data.
Either the default “postgres” database can be used, or create a new database for storing state data.
To create a new database in PostgreSQL, run the following SQL command:
CREATE DATABASE my_dapr;
Advanced
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.
Because PostgreSQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
You can set the deletion interval of expired records with the cleanupInterval metadata property, which defaults to 3600 seconds (that is, 1 hour).
Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupInterval to a smaller value; for example, 5m (5 minutes).
If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting cleanupInterval to a value <= 0 (for example, 0 or -1) to disable the periodic cleanup and reduce the load on the database.
The expiredate column in the state table, where the expiration date for records is stored, does not have an index by default, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is state (the default), you can use this query:
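A sketch, assuming the default table and column names:
CREATE INDEX ON state (expiredate);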
enableTLS
N
If the Redis instance supports TLS with public certificates, it can be configured to be enabled or disabled. Defaults to "false"
"true", "false"
clientCert
N
The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here
"----BEGIN CERTIFICATE-----\nMIIC..."
clientKey
N
The content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here
"----BEGIN PRIVATE KEY-----\nMIIE..."
maxRetries
N
Maximum number of retries before giving up. Defaults to 3
5, 10
maxRetryBackoff
N
Maximum backoff between each retry. Defaults to 2 seconds; "-1" disables backoff.
3000000000
failover
N
Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See Redis Sentinel Documentation. Defaults to "false"
"true", "false"
redeliverInterval
N
The interval between checking for pending messages for redelivery. Defaults to "60s". "0" disables redelivery.
"30s"
processingTimeout
N
The amount of time a message must be pending before attempting to redeliver it. Defaults to "15s". "0" disables redelivery.
"30s"
redisType
N
The type of Redis. There are two valid values: "node" for single-node mode and "cluster" for Redis Cluster mode. Defaults to "node".
"cluster"
redisDB
N
Database selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0".
"0"
redisMaxRetries
N
Alias for maxRetries. If both values are set maxRetries is ignored.
"5"
redisMinRetryInterval
N
Minimum backoff for redis commands between each retry. Default is "8ms"; "-1" disables backoff.
"8ms"
redisMaxRetryInterval
N
Alias for maxRetryBackoff. If both values are set maxRetryBackoff is ignored.
"5s"
dialTimeout
N
Dial timeout for establishing new connections. Defaults to "5s".
"5s"
readTimeout
N
Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout.
"3s"
writeTimeout
N
Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout.
"3s"
poolSize
N
Maximum number of socket connections. Default is 10 connections per CPU, as reported by runtime.NumCPU.
"20"
poolTimeout
N
Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second.
"5s"
maxConnAge
N
Connection age at which the client retires (closes) the connection. Default is to not close aged connections.
"30m"
minIdleConns
N
Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0".
"2"
idleCheckFrequency
N
Frequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper.
"-1"
idleTimeout
N
Amount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check.
"10m"
ttlInSeconds
N
Allows specifying a default Time-to-live (TTL) in seconds that will be applied to every state store request unless TTL is explicitly defined via the request metadata.
actorStateStore
N
Consider this state store for actors. Defaults to "false"
"true", "false"
Setup Redis
Dapr can use any Redis instance: containerized, running on your local dev machine, or a managed cloud service.
A Redis instance is automatically created as a Docker container when you run dapr init
You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.
Install Redis into your cluster, as shown below. Note that we're explicitly setting an image tag to get a version greater than 5, which is what Dapr's pub/sub functionality requires. If you're intending on using Redis as just a state store (and not for pub/sub), you do not have to set the image version.
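For example (a sketch, assuming the Bitnami chart and a release named redis, which matches the service and secret names used below; the image tag is illustrative of "greater than 5"):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis --set image.tag=6.2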
Run kubectl get pods to see the Redis containers now running in your cluster.
Add redis-master:6379 as the redisHost in your redis.yaml file. For example:
metadata:
- name: redisHost
  value: redis-master:6379
Next, get the Redis password, which is slightly different depending on the OS we’re using:
Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which creates a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.
Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the outputted password.
Add this password as the redisPassword value in your redis.yaml file. For example:
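For example (the placeholder stands for the password you just copied):
metadata:
- name: redisPassword
  value: "<PASSWORD>"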
If you're using Azure Cache for Redis, once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.
For the Host name:
Navigate to the resource’s Overview page.
Copy the Host name value.
For your access key:
Navigate to Settings > Access Keys.
Copy and save your key.
Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.
If you’re running a sample, add the host and key to the provided redis.yaml.
If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.
Note: In a production-grade application, follow secret management instructions to securely manage your secrets.
Enable EntraID support:
Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
Set enableTLS to "true" to support TLS.
Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity have the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
In addition to supporting storing and querying state data as key/value pairs, the Redis state store optionally supports querying of JSON objects to meet more complex querying or filtering requirements. To enable this feature, the following steps are required:
The Redis store must support Redis modules, specifically both RediSearch and RedisJSON. If you are deploying and running Redis yourself, load the redisearch and redisjson modules when deploying the Redis service.
Specify a queryIndexes entry in the metadata of the component config. The value of queryIndexes is a JSON array of the following format:
[{"name":"<indexing name>","indexes":[{"key":"<JSONPath-like syntax for selected element inside documents>","type":"<value type (supported types: TEXT, NUMERIC)>",},...]},...]
When calling the state management API, add the following metadata to the API calls:
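As shown in the curl commands below, this is passed as a query string parameter:
metadata.contentType=application/json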
If you are using a self-hosted deployment of Dapr, a Redis instance without the JSON module is automatically created as a Docker container when you run dapr init.
Alternatively, you can create an instance of Redis by running the following command:
docker run -p 6379:6379 --name redis --rm redis
The Redis container that gets created on dapr init or via the above command cannot be used with the state store query API. You can run the redislabs/rejson Docker image on a different port (than the one the already installed Redis is using) to work with the query API.
Note: redislabs/rejson has support only for amd64 architecture.
Use the following command to create an instance of Redis compatible with the query API.
docker run -p 9445:9445 --name rejson --rm redislabs/rejson:2.0.6
Next, start a Dapr application. Refer to this component configuration file, which contains query indexing schemas. Make sure to modify the redisHost to reflect the local forwarding port that redislabs/rejson uses.
dapr run --app-id demo --dapr-http-port 3500 --resources-path query-api-examples/components/redis
Now populate the state store with the employee dataset, so you can then query it later.
curl -X POST -H "Content-Type: application/json" -d @query-api-examples/dataset.json \
http://localhost:3500/v1.0/state/querystatestore?metadata.contentType=application/json
To make sure the data has been properly stored, you can retrieve a specific object:
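A sketch, assuming the dataset contains an entry stored under key 1:
curl -H "Content-Type: application/json" \
  "http://localhost:3500/v1.0/state/querystatestore/1?metadata.contentType=application/json"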
Detailed information on the RethinkDB state store component
Component format
To setup RethinkDB state store, create a component of type state.rethinkdb. See the how-to guide to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.rethinkdb
  version: v1
  metadata:
  - name: address
    value: <REPLACE-RETHINKDB-ADDRESS> # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015
  - name: database
    value: <REPLACE-RETHINKDB-DB-NAME> # Required, e.g. dapr (alpha-numerics only)
  - name: table
    value: # Optional
  - name: username
    value: <USERNAME> # Optional
  - name: password
    value: <PASSWORD> # Optional
  - name: archive
    value: bool # Optional (whether or not store should keep archive table of all the state changes)
Warning
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described here.
If the optional archive metadata is set to true, then on each state change the RethinkDB state store will also log the state change with a timestamp in the daprstate_archive table. This allows for time-series analyses of the state managed by Dapr.
Detailed information on the SQLite state store component
This component allows using SQLite 3 as state store for Dapr.
The component is currently compiled with SQLite version 3.41.2.
Create a Dapr component
Create a file called sqlite.yaml, paste the following, and replace the <CONNECTION STRING> value with your connection string, which is the path to a file on disk.
If you want to also configure SQLite to store actors, add the actorStateStore option as in the example below.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.sqlite
  version: v1
  metadata:
  # Connection string
  - name: connectionString
    value: "data.db"
  # Timeout for database operations, in seconds (optional)
  #- name: timeoutInSeconds
  #  value: 20
  # Name of the table where to store the state (optional)
  #- name: tableName
  #  value: "state"
  # Cleanup interval in seconds, to remove expired rows (optional)
  #- name: cleanupInterval
  #  value: "1h"
  # Set busy timeout for database operations
  #- name: busyTimeout
  #  value: "2s"
  # Uncomment this if you wish to use SQLite as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
Spec metadata fields
Field
Required
Details
Example
connectionString
Y
The connection string for the SQLite database. See below for more details.
"path/to/data.db", "file::memory:?cache=shared"
timeout
N
Timeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s
"30s", 30
tableName
N
Name of the table where the data is stored. Defaults to state.
"state"
metadataTableName
N
Name of the table used by Dapr to store metadata for the component. Defaults to metadata.
"metadata"
cleanupInterval
N
Interval, as a Go duration, to clean up rows with an expired TTL. Setting this to values <=0 disables the periodic cleanup. Default: 0 (i.e. disabled)
"2h", "30m", -1
busyTimeout
N
Interval, as a Go duration, to wait in case the SQLite database is currently busy serving another request, before returning a “database busy” error. Default: 2s
"100ms", "5s"
disableWAL
N
If set to true, disables Write-Ahead Logging for journaling of the SQLite database. You should set this to false if the database is stored on a network file system (for example, a folder mounted as a SMB or NFS share). This option is ignored for read-only or in-memory databases.
"true", "false"
actorStateStore
N
Consider this state store for actors. Defaults to "false"
"true", "false"
The connectionString parameter configures how to open the SQLite database.
Normally, this is the path to a file on disk, relative to the current working directory, or absolute. For example: "data.db" (relative to the working directory) or "/mnt/data/mydata.db".
The path is interpreted by the SQLite library, so it’s possible to pass additional options to the SQLite driver using “URI options” if the path begins with file:. For example: "file:path/to/data.db?mode=ro" opens the database at path path/to/data.db in read-only mode. Refer to the SQLite documentation for all supported URI options.
The special case ":memory:" launches the component backed by an in-memory SQLite database. This database is not persisted on disk, not shared across multiple Dapr instances, and all data is lost when the Dapr sidecar is stopped. When using an in-memory database, Dapr automatically sets the cache=shared URI option.
Advanced
TTLs and cleanups
This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate when the data should be considered “expired”.
Because SQLite doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.
The cleanupInterval metadata property sets the expired records deletion interval, which is disabled by default.
Longer intervals require less frequent scans for expired rows, but can cause the database to store expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupInterval to a smaller value, for example 5m.
If you do not plan to use TTLs with Dapr and the SQLite state store, you should consider setting cleanupInterval to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database. This is the default behavior.
The expiration_time column in the state table, where the expiration date for records is stored, does not have an index by default, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is state (the default), you can use this query:
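A sketch, assuming the default table and column names:
CREATE INDEX idx_expiration_time ON state (expiration_time);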
Dapr does not automatically vacuum SQLite databases.
Sharing a SQLite database and using networked filesystems
Although you can have multiple Dapr instances accessing the same SQLite database (for example, because your application is scaled horizontally or because you have multiple apps accessing the same state store), there are some caveats you should keep in mind.
SQLite works best when all clients access a database file on the same, locally-mounted disk. Using virtual disks that are mounted from a SAN (Storage Area Network), as is common practice in virtualized or cloud environments, is fine.
However, storing your SQLite database in a networked filesystem (for example via NFS or SMB, but these examples are not an exhaustive list) should be done with care. The official SQLite documentation has a page dedicated to recommendations and caveats for running SQLite over a network.
Given the risk of data corruption that running SQLite over a networked filesystem (such as via NFS or SMB) entails, we do not recommend doing so with Dapr in a production environment. However, if you do want to do that, you should configure your SQLite Dapr component with disableWAL set to true.