Resiliency Quickstarts
1 - Quickstart: Service-to-component resiliency
Observe Dapr resiliency capabilities by simulating a system failure. In this Quickstart, you will:
- Execute a microservice application that continuously persists and retrieves state via Dapr’s state management API.
- Trigger resiliency policies by simulating a system failure.
- Resolve the failure and the microservice application will resume.

Select your preferred language-specific Dapr SDK before proceeding with the Quickstart.
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
In a terminal window, navigate to the order-processor directory.
cd ../state_management/python/sdk/order-processor
Install dependencies
pip3 install -r requirements.txt
Step 2: Run the application
Run the order-processor service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - order-processor
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        duration: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
dapr run --app-id order-processor --resources-path ../../../resources/ -- python3 app.py
Once the application has started, the order-processor service writes and reads orderId key/value pairs to the statestore Redis instance defined in the statestore.yaml component.
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
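For reference, the loop behind this output can be sketched with the Dapr Python SDK as shown below. This is an illustrative sketch only, assuming a DaprClient and the statestore component from statestore.yaml; the actual app.py in the Quickstarts repo may be structured differently.
import time
from dapr.clients import DaprClient

# Illustrative save/get loop; not the exact Quickstart source.
with DaprClient() as client:
    for i in range(1, 10):
        order = {"orderId": str(i)}
        # Both calls go through the Dapr sidecar, so the retry and
        # circuit breaker policies in resiliency.yaml apply to them.
        client.save_state(store_name="statestore", key=str(i), value=str(order))
        print(f"Saving Order: {order}", flush=True)
        result = client.get_state(store_name="statestore", key=str(i))
        print(f"Getting Order: {result.data.decode()}", flush=True)
        time.sleep(1)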
Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing dapr init on your development machine. Once the instance is stopped, write and read operations from the order-processor service begin to fail.
Since the resiliency.yaml spec defines statestore as a component target, all failed requests will apply retry and circuit breaker policies:
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
In a new terminal window, run the following command to stop Redis:
docker stop dapr_redis
Once Redis is stopped, the requests begin to fail and the retry policy titled retryForever is applied. The output below shows the logs from the order-processor service:
INFO[0006] Error processing operation component[statestore] output. Retrying...
As per the retryForever policy, retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  duration: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
This half-open/open behavior will continue for as long as the Redis container is stopped.
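If the state transitions above are unfamiliar, the following standalone Python sketch mimics a breaker configured with trip: consecutiveFailures >= 5 and timeout: 5s. It illustrates the circuit breaker pattern only; it is not Dapr's implementation.
import time

class ToyCircuitBreaker:
    """Illustration of the circuit breaker pattern; not Dapr's implementation."""
    def __init__(self, trip_after=5, open_timeout=5.0):
        self.trip_after = trip_after      # mirrors trip: consecutiveFailures >= 5
        self.open_timeout = open_timeout  # mirrors timeout: 5s
        self.consecutive_failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def allow_request(self) -> bool:
        if self.state == "open":
            # After the timeout, allow a single probe request (half-open).
            if time.monotonic() - self.opened_at >= self.open_timeout:
                self.state = "half-open"
                return True
            return False
        return True  # closed and half-open both let a request through

    def record_result(self, success: bool):
        if success:
            self.consecutive_failures = 0
            self.state = "closed"
        else:
            self.consecutive_failures += 1
            if self.state == "half-open" or self.consecutive_failures >= self.trip_after:
                self.state = "open"
                self.opened_at = time.monotonic()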
Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
docker start dapr_redis
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
In a terminal window, navigate to the order-processor directory.
cd ../state_management/javascript/sdk/order-processor
Install dependencies
npm install
Step 2: Run the application
Run the order-processor service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - order-processor
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
dapr run --app-id order-processor --resources-path ../../../resources/ -- npm start
Once the application has started, the order-processor service writes and reads orderId key/value pairs to the statestore Redis instance defined in the statestore.yaml component.
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing dapr init on your development machine. Once the instance is stopped, write and read operations from the order-processor service begin to fail.
Since the resiliency.yaml spec defines statestore as a component target, all failed requests will apply retry and circuit breaker policies:
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
In a new terminal window, run the following command to stop Redis:
docker stop dapr_redis
Once Redis is stopped, the requests begin to fail and the retry policy titled retryForever is applied. The output below shows the logs from the order-processor service:
INFO[0006] Error processing operation component[statestore] output. Retrying...
As per the retryForever policy, retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
This half-open/open behavior will continue for as long as the Redis container is stopped.
Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
docker start dapr_redis
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
In a terminal window, navigate to the order-processor directory.
cd ../state_management/csharp/sdk/order-processor
Install dependencies
dotnet restore
dotnet build
Step 2: Run the application
Run the order-processor service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - order-processor
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
dapr run --app-id order-processor --resources-path ../../../resources/ -- dotnet run
Once the application has started, the order-processor service writes and reads orderId key/value pairs to the statestore Redis instance defined in the statestore.yaml component.
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing dapr init on your development machine. Once the instance is stopped, write and read operations from the order-processor service begin to fail.
Since the resiliency.yaml spec defines statestore as a component target, all failed requests will apply retry and circuit breaker policies:
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
In a new terminal window, run the following command to stop Redis:
docker stop dapr_redis
Once Redis is stopped, the requests begin to fail and the retry policy titled retryForever is applied. The output below shows the logs from the order-processor service:
INFO[0006] Error processing operation component[statestore] output. Retrying...
As per the retryForever policy, retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
This half-open/open behavior will continue for as long as the Redis container is stopped.
Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
docker start dapr_redis
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
Pre-requisites
For this example, you will need:
- Dapr CLI and initialized environment.
- Java JDK 17 (or greater):
  - Oracle JDK, or
  - OpenJDK
- Apache Maven, version 3.x.
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
In a terminal window, navigate to the order-processor directory.
cd ../state_management/java/sdk/order-processor
Install dependencies
mvn clean install
Step 2: Run the application
Run the order-processor service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - order-processor
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
dapr run --app-id order-processor --resources-path ../../../resources/ -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
Once the application has started, the order-processor service writes and reads orderId key/value pairs to the statestore Redis instance defined in the statestore.yaml component.
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing dapr init on your development machine. Once the instance is stopped, write and read operations from the order-processor service begin to fail.
Since the resiliency.yaml spec defines statestore as a component target, all failed requests will apply retry and circuit breaker policies:
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
In a new terminal window, run the following command to stop Redis:
docker stop dapr_redis
Once Redis is stopped, the requests begin to fail and the retry policy titled retryForever is applied. The output below shows the logs from the order-processor service:
INFO[0006] Error processing operation component[statestore] output. Retrying...
As per the retryForever policy, retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
This half-open/open behavior will continue for as long as the Redis container is stopped.
Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
docker start dapr_redis
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
In a terminal window, navigate to the order-processor directory.
cd ../state_management/go/sdk/order-processor
Install dependencies
go build .
Step 2: Run the application
Run the order-processor service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - order-processor
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
dapr run --app-id order-processor --resources-path ../../../resources -- go run .
Once the application has started, the order-processor service writes and reads orderId key/value pairs to the statestore Redis instance defined in the statestore.yaml component.
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing dapr init on your development machine. Once the instance is stopped, write and read operations from the order-processor service begin to fail.
Since the resiliency.yaml spec defines statestore as a component target, all failed requests will apply retry and circuit breaker policies:
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
In a new terminal window, run the following command to stop Redis:
docker stop dapr_redis
Once Redis is stopped, the requests begin to fail and the retry policy titled retryForever is applied. The output below shows the logs from the order-processor service:
INFO[0006] Error processing operation component[statestore] output. Retrying...
As per the retryForever policy, retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
This half-open/open behavior will continue for as long as the Redis container is stopped.
Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
docker start dapr_redis
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
Tell us what you think!
We’re continuously working to improve our Quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our discord channel.
Next steps
Learn more about the resiliency feature and how it works with Dapr’s building block APIs.
Explore Dapr tutorials >>
2 - Quickstart: Service-to-service resiliency
Observe Dapr resiliency capabilities by simulating a system failure. In this Quickstart, you will:
- Run two microservice applications: checkout and order-processor. checkout will continuously make Dapr service invocation requests to order-processor.
- Trigger the resiliency spec by simulating a system failure.
- Remove the failure to allow the microservice application to recover.

Select your preferred language-specific Dapr SDK before proceeding with the Quickstart.
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
Step 2: Run order-processor service
In a terminal window, from the root of the Quickstart directory, navigate to order-processor directory.
cd service_invocation/python/http/order-processor
Install dependencies:
pip3 install -r requirements.txt
Run the order-processor service alongside a Dapr sidecar.
dapr run --app-port 8001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- python3 app.py
Step 3: Run the checkout service application
In a new terminal window, from the root of the Quickstart directory, navigate to the checkout directory.
cd service_invocation/python/http/checkout
Install dependencies:
pip3 install -r requirements.txt
Run the checkout service alongside a Dapr sidecar.
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- python3 app.py
The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    apps:
      order-processor:
        retry: retryForever
        circuitBreaker: simpleCB
Step 4: View the Service Invocation outputs
When both services and sidecars are running, notice how orders are passed from the checkout service to the order-processor service using Dapr service invoke.
checkout service output:
== APP == Order passed: {"orderId": 1}
== APP == Order passed: {"orderId": 2}
== APP == Order passed: {"orderId": 3}
== APP == Order passed: {"orderId": 4}
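The checkout loop producing this output can be sketched as follows. The sketch assumes plain HTTP calls with the requests library and Dapr's dapr-app-id request header, with the checkout sidecar listening on port 3500 as set by the dapr run command above; the actual app.py may differ.
import json
import time
import requests

base_url = "http://localhost:3500"  # checkout's Dapr sidecar (--dapr-http-port 3500)
headers = {"dapr-app-id": "order-processor", "content-type": "application/json"}

for i in range(1, 11):
    order = {"orderId": i}
    # The sidecar reads the dapr-app-id header and forwards the request to
    # the order-processor app, applying the resiliency policies on the way.
    requests.post(f"{base_url}/orders", data=json.dumps(order), headers=headers)
    print(f"Order passed: {json.dumps(order)}", flush=True)
    time.sleep(1)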
order-processor service output:
== APP == Order received: {"orderId": 1}
== APP == Order received: {"orderId": 2}
== APP == Order received: {"orderId": 3}
== APP == Order received: {"orderId": 4}
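On the receiving side, order-processor only needs to expose an orders endpoint on the port given to --app-port. A minimal Flask sketch (assumed here for illustration; the repo's app.py may be organized differently) looks like this:
import json
from flask import Flask, request

app = Flask(__name__)

# The Dapr sidecar delivers invoked requests to this route.
@app.route("/orders", methods=["POST"])
def orders():
    order = request.json
    print(f"Order received: {json.dumps(order)}", flush=True)
    return json.dumps({"success": True}), 200, {"Content-Type": "application/json"}

# Must match the --app-port value passed to dapr run (8001 in this Quickstart).
app.run(port=8001)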
Step 5: Introduce a fault
Simulate a fault by stopping the order-processor service. Once the instance is stopped, service invoke operations from the checkout service begin to fail.
Since the resiliency.yaml spec defines the order-processor service as a resiliency target, all failed requests will apply retry and circuit breaker policies:
targets:
  apps:
    order-processor:
      retry: retryForever
      circuitBreaker: simpleCB
In the order-processor window, stop the service:
CMD + C
CTRL + C
Once the first request fails, the retry policy titled retryForever is applied:
INFO[0005] Error processing operation endpoint[order-processor, order-processor:orders]. Retrying...
Retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0025] Circuit breaker "order-processor:orders" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
This half-open/open behavior will continue for as long as the order-processor service is stopped.
Step 6: Remove the fault
Once you restart the order-processor service, the application will recover seamlessly, picking up where it left off and accepting order requests again.
In the order-processor service terminal, restart the application:
dapr run --app-port 8001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- python3 app.py
checkout service output:
== APP == Order passed: {"orderId": 5}
== APP == Order passed: {"orderId": 6}
== APP == Order passed: {"orderId": 7}
== APP == Order passed: {"orderId": 8}
== APP == Order passed: {"orderId": 9}
== APP == Order passed: {"orderId": 10}
order-processor service output:
== APP == Order received: {"orderId": 5}
== APP == Order received: {"orderId": 6}
== APP == Order received: {"orderId": 7}
== APP == Order received: {"orderId": 8}
== APP == Order received: {"orderId": 9}
== APP == Order received: {"orderId": 10}
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
Step 2: Run the order-processor service
In a terminal window, from the root of the Quickstart directory, navigate to order-processor directory.
cd service_invocation/javascript/http/order-processor
Install dependencies:
npm install
Run the order-processor service alongside a Dapr sidecar.
dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start
Step 3: Run the checkout service application
In a new terminal window, from the root of the Quickstart directory, navigate to the checkout directory.
cd service_invocation/javascript/http/checkout
Install dependencies:
npm install
Run the checkout service alongside a Dapr sidecar.
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- npm start
The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    apps:
      order-processor:
        retry: retryForever
        circuitBreaker: simpleCB
Step 4: View the Service Invocation outputs
When both services and sidecars are running, notice how orders are passed from the checkout service to the order-processor service using Dapr service invoke.
checkout service output:
== APP == Order passed: {"orderId": 1}
== APP == Order passed: {"orderId": 2}
== APP == Order passed: {"orderId": 3}
== APP == Order passed: {"orderId": 4}
order-processor service output:
== APP == Order received: {"orderId": 1}
== APP == Order received: {"orderId": 2}
== APP == Order received: {"orderId": 3}
== APP == Order received: {"orderId": 4}
Step 5: Introduce a fault
Simulate a fault by stopping the order-processor service. Once the instance is stopped, service invoke operations from the checkout service begin to fail.
Since the resiliency.yaml spec defines the order-processor service as a resiliency target, all failed requests will apply retry and circuit breaker policies:
targets:
  apps:
    order-processor:
      retry: retryForever
      circuitBreaker: simpleCB
In the order-processor window, stop the service:
CMD + C
CTRL + C
Once the first request fails, the retry policy titled retryForever is applied:
INFO[0005] Error processing operation endpoint[order-processor, order-processor:orders]. Retrying...
Retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0025] Circuit breaker "order-processor:orders" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
This half-open/open behavior will continue for as long as the order-processor service is stopped.
Step 6: Remove the fault
Once you restart the order-processor service, the application will recover seamlessly, picking up where it left off.
In the order-processor service terminal, restart the application:
dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start
checkout service output:
== APP == Order passed: {"orderId": 5}
== APP == Order passed: {"orderId": 6}
== APP == Order passed: {"orderId": 7}
== APP == Order passed: {"orderId": 8}
== APP == Order passed: {"orderId": 9}
== APP == Order passed: {"orderId": 10}
order-processor service output:
== APP == Order received: {"orderId": 5}
== APP == Order received: {"orderId": 6}
== APP == Order received: {"orderId": 7}
== APP == Order received: {"orderId": 8}
== APP == Order received: {"orderId": 9}
== APP == Order received: {"orderId": 10}
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
Step 2: Run the order-processor service
In a terminal window, from the root of the Quickstart directory, navigate to order-processor directory.
cd service_invocation/csharp/http/order-processor
Install dependencies:
dotnet restore
dotnet build
Run the order-processor service alongside a Dapr sidecar.
dapr run --app-port 7001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- dotnet run
Step 3: Run the checkout service application
In a new terminal window, from the root of the Quickstart directory, navigate to the checkout directory.
cd service_invocation/csharp/http/checkout
Install dependencies:
dotnet restore
dotnet build
Run the checkout service alongside a Dapr sidecar.
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- dotnet run
The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    apps:
      order-processor:
        retry: retryForever
        circuitBreaker: simpleCB
Step 4: View the Service Invocation outputs
When both services and sidecars are running, notice how orders are passed from the checkout service to the order-processor service using Dapr service invoke.
checkout service output:
== APP == Order passed: {"orderId": 1}
== APP == Order passed: {"orderId": 2}
== APP == Order passed: {"orderId": 3}
== APP == Order passed: {"orderId": 4}
order-processor service output:
== APP == Order received: {"orderId": 1}
== APP == Order received: {"orderId": 2}
== APP == Order received: {"orderId": 3}
== APP == Order received: {"orderId": 4}
Step 5: Introduce a fault
Simulate a fault by stopping the order-processor service. Once the instance is stopped, service invoke operations from the checkout service begin to fail.
Since the resiliency.yaml spec defines the order-processor service as a resiliency target, all failed requests will apply retry and circuit breaker policies:
targets:
  apps:
    order-processor:
      retry: retryForever
      circuitBreaker: simpleCB
In the order-processor window, stop the service:
CMD + C
CTRL + C
Once the first request fails, the retry policy titled retryForever is applied:
INFO[0005] Error processing operation endpoint[order-processor, order-processor:orders]. Retrying...
Retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0025] Circuit breaker "order-processor:orders" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
This half-open/open behavior will continue for as long as the order-processor service is stopped.
Step 6: Remove the fault
Once you restart the order-processor service, the application will recover seamlessly, picking up where it left off.
In the order-processor service terminal, restart the application:
dapr run --app-port 7001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- dotnet run
checkout service output:
== APP == Order passed: {"orderId": 5}
== APP == Order passed: {"orderId": 6}
== APP == Order passed: {"orderId": 7}
== APP == Order passed: {"orderId": 8}
== APP == Order passed: {"orderId": 9}
== APP == Order passed: {"orderId": 10}
order-processor service output:
== APP == Order received: {"orderId": 5}
== APP == Order received: {"orderId": 6}
== APP == Order received: {"orderId": 7}
== APP == Order received: {"orderId": 8}
== APP == Order received: {"orderId": 9}
== APP == Order received: {"orderId": 10}
Pre-requisites
For this example, you will need:
- Dapr CLI and initialized environment.
- Java JDK 17 (or greater):
  - Oracle JDK, or
  - OpenJDK
- Apache Maven, version 3.x.
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
Step 2: Run the order-processor service
In a terminal window, from the root of the Quickstart directory, navigate to order-processor directory.
cd service_invocation/java/http/order-processor
Install dependencies:
mvn clean install
Run the order-processor service alongside a Dapr sidecar.
dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
Step 3: Run the checkout service application
In a new terminal window, from the root of the Quickstart directory, navigate to the checkout directory.
cd service_invocation/java/http/checkout
Install dependencies:
mvn clean install
Run the checkout service alongside a Dapr sidecar.
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    apps:
      order-processor:
        retry: retryForever
        circuitBreaker: simpleCB
Step 4: View the Service Invocation outputs
When both services and sidecars are running, notice how orders are passed from the checkout service to the order-processor service using Dapr service invoke.
checkout service output:
== APP == Order passed: {"orderId": 1}
== APP == Order passed: {"orderId": 2}
== APP == Order passed: {"orderId": 3}
== APP == Order passed: {"orderId": 4}
order-processor service output:
== APP == Order received: {"orderId": 1}
== APP == Order received: {"orderId": 2}
== APP == Order received: {"orderId": 3}
== APP == Order received: {"orderId": 4}
Step 5: Introduce a fault
Simulate a fault by stopping the order-processor service. Once the instance is stopped, service invoke operations from the checkout service begin to fail.
Since the resiliency.yaml spec defines the order-processor service as a resiliency target, all failed requests will apply retry and circuit breaker policies:
targets:
  apps:
    order-processor:
      retry: retryForever
      circuitBreaker: simpleCB
In the order-processor window, stop the service:
CMD + C
CTRL + C
Once the first request fails, the retry policy titled retryForever is applied:
INFO[0005] Error processing operation endpoint[order-processor, order-processor:orders]. Retrying...
Retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0025] Circuit breaker "order-processor:orders" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
This half-open/open behavior will continue for as long as the order-processor service is stopped.
Step 6: Remove the fault
Once you restart the order-processor service, the application will recover seamlessly, picking up where it left off.
In the order-processor service terminal, restart the application:
dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
checkout service output:
== APP == Order passed: {"orderId": 5}
== APP == Order passed: {"orderId": 6}
== APP == Order passed: {"orderId": 7}
== APP == Order passed: {"orderId": 8}
== APP == Order passed: {"orderId": 9}
== APP == Order passed: {"orderId": 10}
order-processor service output:
== APP == Order received: {"orderId": 5}
== APP == Order received: {"orderId": 6}
== APP == Order received: {"orderId": 7}
== APP == Order received: {"orderId": 8}
== APP == Order received: {"orderId": 9}
== APP == Order received: {"orderId": 10}
Pre-requisites
For this example, you will need:
Step 1: Set up the environment
Clone the sample provided in the Quickstarts repo.
git clone https://github.com/dapr/quickstarts.git
Step 2: Run the order-processor service
In a terminal window, from the root of the Quickstart directory, navigate to order-processor directory.
cd service_invocation/go/http/order-processor
Install dependencies:
go build .
Run the order-processor service alongside a Dapr sidecar.
dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run .
Step 3: Run the checkout service application
In a new terminal window, from the root of the Quickstart directory, navigate to the checkout directory.
cd service_invocation/go/http/checkout
Install dependencies:
go build .
Run the checkout service alongside a Dapr sidecar.
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- go run .
The Dapr sidecar then loads the resiliency spec located in the resources directory:
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout
spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5
  targets:
    apps:
      order-processor:
        retry: retryForever
        circuitBreaker: simpleCB
Step 4: View the Service Invocation outputs
When both services and sidecars are running, notice how orders are passed from the checkout service to the order-processor service using Dapr service invoke.
checkout service output:
== APP == Order passed: {"orderId": 1}
== APP == Order passed: {"orderId": 2}
== APP == Order passed: {"orderId": 3}
== APP == Order passed: {"orderId": 4}
order-processor service output:
== APP == Order received: {"orderId": 1}
== APP == Order received: {"orderId": 2}
== APP == Order received: {"orderId": 3}
== APP == Order received: {"orderId": 4}
Step 5: Introduce a fault
Simulate a fault by stopping the order-processor service. Once the instance is stopped, service invoke operations from the checkout service begin to fail.
Since the resiliency.yaml spec defines the order-processor service as a resiliency target, all failed requests will apply retry and circuit breaker policies:
targets:
  apps:
    order-processor:
      retry: retryForever
      circuitBreaker: simpleCB
In the order-processor window, stop the service:
CMD + C
CTRL + C
Once the first request fails, the retry policy titled retryForever is applied:
INFO[0005] Error processing operation endpoint[order-processor, order-processor:orders]. Retrying...
Retries will continue for each failed request indefinitely, in 5 second intervals.
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
Once 5 consecutive retries have failed, the circuit breaker policy, simpleCB, is tripped and the breaker opens, halting all requests:
INFO[0025] Circuit breaker "order-processor:orders" changed state from closed to open
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request fails, the circuit trips back to the open state.
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
INFO[0030] Circuit breaker "order-processor:orders" changed state from open to half-open
INFO[0030] Circuit breaker "order-processor:orders" changed state from half-open to open
This half-open/open behavior will continue for as long as the order-processor service is stopped.
Step 6: Remove the fault
Once you restart the order-processor service, the application will recover seamlessly, picking up where it left off.
In the order-processor service terminal, restart the application:
dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run .
checkout service output:
== APP == Order passed: {"orderId": 5}
== APP == Order passed: {"orderId": 6}
== APP == Order passed: {"orderId": 7}
== APP == Order passed: {"orderId": 8}
== APP == Order passed: {"orderId": 9}
== APP == Order passed: {"orderId": 10}
order-processor service output:
== APP == Order received: {"orderId": 5}
== APP == Order received: {"orderId": 6}
== APP == Order received: {"orderId": 7}
== APP == Order received: {"orderId": 8}
== APP == Order received: {"orderId": 9}
== APP == Order received: {"orderId": 10}
Tell us what you think!
We’re continuously working to improve our Quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our discord channel.
Next steps
For more information about Dapr resiliency, see the resiliency overview in the Dapr documentation.
Explore Dapr tutorials >>