Introduction to Event-Driven Architecture with KEDA and RabbitMQ
The need to build scalable and reactive systems has increasingly led teams to adopt the Event-Driven Architecture (EDA) model.
Instead of relying on direct integrations between services, communication happens through asynchronous events, which reduces coupling and lets each service scale and evolve independently.
In this practical guide — a hands-on KEDA and RabbitMQ tutorial — you will learn how to:
- Implement an event-driven audit microservice
- Integrate RabbitMQ as a message broker
- Configure KEDA to automatically scale consumption on Kubernetes
Table of Contents
- What is an Event-Driven Architecture
- Technologies and Tools Used
- Proposed Scenario: Event-Driven Audit Microservice
- Event-Driven Project Structure with RabbitMQ and KEDA
- Step-by-Step Implementation
- Testing Autoscaling on Kubernetes
- Best Practices and Observability
- Conclusion — Advantages of Event-Driven Architecture with KEDA and RabbitMQ
- References and Further Reading
1. What is an Event-Driven Architecture
An event-driven architecture is based on the idea that each relevant action within the system (such as creating an order or updating a record) generates an event.
These events are published to a queue and processed independently by interested services.
1.1. Benefits of Event-Driven Architecture
- Decoupling between services — each component only knows about events, not implementations.
- Automatic scalability — consumers can be scaled as the queue grows.
- Resilience — temporary consumer failures don’t bring the system down, since messages stay in the queue until they are processed.
- Flexibility — new consumers can be added without impacting the rest of the architecture.
1.2. Communication Flow
The communication flow in an event-driven architecture works as follows:
- Producers send events to the Message Broker asynchronously
- The Message Broker (RabbitMQ) manages queues and distributes messages
- Independent Consumers process events from their respective queues
- Each service can scale independently based on demand
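The decoupling described above can be modeled in miniature with plain Java: a producer thread publishes events to a shared queue while an independent consumer drains it, neither knowing about the other. This is only a conceptual sketch of the pattern using a `BlockingQueue` as a stand-in for the broker — not RabbitMQ code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FlowSketch {
    // Runs one producer and one consumer against a shared queue and
    // returns the events in the order the consumer processed them
    static List<String> runFlow(int eventCount) throws InterruptedException {
        BlockingQueue<String> broker = new LinkedBlockingQueue<>();
        List<String> processed = new ArrayList<>();

        // Producer: publishes events asynchronously, knows nothing about consumers
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= eventCount; i++) {
                broker.add("event-" + i);
            }
        });

        // Consumer: processes events independently, knows nothing about producers
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < eventCount; i++) {
                    processed.add(broker.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runFlow(3)); // [event-1, event-2, event-3]
    }
}
```

Because the queue sits between the two threads, either side could be replaced or scaled without touching the other — which is exactly the property the broker gives us at system scale.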
2. Technologies and Tools Used
| Technology | Function |
|---|---|
| RabbitMQ | Manages message routing and storage |
| KEDA | Autoscales based on RabbitMQ queue size |
| Spring Boot / Quarkus | Audit microservice implementation |
| Docker & Kubernetes | Application containerization and orchestration |
| Prometheus & Grafana | Monitoring and observability |
These tools together form the foundation for a modern and elastic cloud-native architecture.
3. Proposed Scenario: Event-Driven Audit Microservice
The Audit Service will be responsible for recording all critical system actions. Every time an event occurs — for example, a record creation or deletion — a message is sent to RabbitMQ, and the audit service processes this event asynchronously.
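For illustration, the message published for each action might carry a payload like the one below. The field names follow the audit event model used in this tutorial; the exact values and contract are up to your team.

```json
{
  "action": "DELETE",
  "entity": "Order",
  "user": "alice",
  "timestamp": "2024-05-10T14:32:11"
}
```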
3.1. Overall Solution Flow
Producer services publish audit events to RabbitMQ, which routes them to audit.queue. The Audit Service consumes the queue and persists each event, while KEDA watches the queue depth and scales the consumer pods up or down.
4. Event-Driven Project Structure with RabbitMQ and KEDA
```text
event-driven-audit/
├── audit-service/
│   ├── src/
│   │   └── main/java/com/example/audit/
│   │       ├── controller/
│   │       ├── service/
│   │       ├── consumer/
│   │       └── model/
│   └── resources/
│       └── application.yml
├── docker-compose.yml
└── k8s/
    ├── deployment.yml
    ├── keda-scaledobject.yml
    └── rabbitmq-config.yml
```
A minimal organization like the one above cleanly separates three concerns: the service code, the local development environment, and the Kubernetes execution manifests. This division keeps day-to-day work frictionless and prevents operational details from leaking into business logic.
- `audit-service/` — the microservice module itself. This is where events are turned into persisted audit records.
  - `src/main/java/com/example/audit/controller/` — HTTP entry points (audit queries, health/debug endpoints). Even in event-driven solutions, exposing reads and health checks via API is common.
  - `src/main/java/com/example/audit/service/` — business rules and persistence orchestration. Keep audit logic here, isolating messaging and transport concerns.
  - `src/main/java/com/example/audit/consumer/` — RabbitMQ integration. Consumers (e.g., `@RabbitListener`) translate queue messages into service calls, handling idempotency, conversion, and validation.
  - `src/main/java/com/example/audit/model/` — domain models and event DTOs. The audit event contract lives here (fields, types, semantics), easing evolution without breaking consumers.
  - `resources/application.yml` — service configuration. Centralizes connections (RabbitMQ, database), queue names, and runtime parameters. Prefer environment variables and Spring profiles to distinguish local, staging, and production.
- `docker-compose.yml` — development support. Brings up RabbitMQ (and, if needed, the database) for quick local tests, mirroring the queue names and credentials used by the cluster manifests.
- `k8s/deployment.yml` — defines the `audit-service` pod in Kubernetes (image, ports, env vars, requests/limits). It does not know about queues or triggers — only the service runtime.
- `k8s/keda-scaledobject.yml` — links queue load to autoscaling. Declares which queue to observe, replica bounds, and KEDA’s reaction policy.
- `k8s/rabbitmq-config.yml` — broker-related infrastructure resources (credentials, vhost, policies or bindings, depending on the team’s approach). Versioning these avoids manual tweaks and environment drift.
With this structure, every change has a natural home: contracts and logic in audit-service, local experience in docker-compose.yml, and execution/scale behavior in k8s/. The outcome is a predictable dev cycle and a deployment path without surprises.
5. Step-by-Step Implementation
1️⃣ Running RabbitMQ Locally
In the docker-compose.yml file:
```yaml
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: pass
```
Access the RabbitMQ Management UI at http://localhost:15672 and log in with the credentials configured above (user / pass).
2️⃣ Creating the Audit Microservice
application.yml
```yaml
spring:
  rabbitmq:
    host: rabbitmq
    username: user
    password: pass
  datasource:
    url: jdbc:postgresql://db/audit
    username: postgres
    password: postgres
```
AuditEvent.java Entity
```java
@Entity
public class AuditEvent {

    @Id
    @GeneratedValue
    private Long id;

    private String action;
    private String entity;

    // "user" is a reserved word in PostgreSQL, so map the column explicitly
    @Column(name = "username")
    private String user;

    private LocalDateTime timestamp;

    // getters and setters omitted for brevity
}
```
AuditConsumer.java Consumer
```java
@Component
public class AuditConsumer {

    private final AuditService auditService;

    public AuditConsumer(AuditService auditService) {
        this.auditService = auditService;
    }

    // Note: deserializing a JSON payload into AuditEvent requires a
    // Jackson2JsonMessageConverter bean in the Rabbit configuration
    @RabbitListener(queues = "audit.queue")
    public void consume(AuditEvent event) {
        auditService.save(event);
    }
}
```
This component consumes messages from the queue and saves audit events to the database.
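Since brokers generally deliver messages at least once, the consumer should also be idempotent: redelivering the same event must not create a duplicate audit record. A minimal stdlib sketch of the idea, using an in-memory set of processed event ids (in a real service this check would be a unique constraint or lookup in the audit database):

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentConsumerSketch {
    // Ids of events already persisted; stands in for a database check
    private final Set<Long> processedIds = new HashSet<>();
    private int savedCount = 0;

    // Saves the event only the first time its id is seen;
    // returns false for duplicate deliveries
    public boolean consume(long eventId) {
        if (!processedIds.add(eventId)) {
            return false; // duplicate delivery: skip the save
        }
        savedCount++;
        return true;
    }

    public int savedCount() {
        return savedCount;
    }

    public static void main(String[] args) {
        IdempotentConsumerSketch consumer = new IdempotentConsumerSketch();
        consumer.consume(1L);
        consumer.consume(1L); // redelivery of the same event
        consumer.consume(2L);
        System.out.println(consumer.savedCount()); // prints 2
    }
}
```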
3️⃣ Deploying the Service on Kubernetes
deployment.yml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: audit-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: audit-service
  template:
    metadata:
      labels:
        app: audit-service
    spec:
      containers:
        - name: audit-service
          image: yourrepo/audit-service:latest
          ports:
            - containerPort: 8080
```
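In practice the deployment also injects the broker connection through environment variables rather than baking credentials into the image, as suggested in the project-structure section. A hypothetical fragment under the container spec — the secret name `rabbitmq-credentials` is an assumption, not something created earlier in this tutorial:

```yaml
          env:
            - name: SPRING_RABBITMQ_HOST
              value: rabbitmq
            - name: SPRING_RABBITMQ_USERNAME
              valueFrom:
                secretKeyRef:
                  name: rabbitmq-credentials   # hypothetical secret
                  key: username
            - name: SPRING_RABBITMQ_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rabbitmq-credentials
                  key: password
```

Spring Boot's relaxed binding maps these variables onto the `spring.rabbitmq.*` properties from `application.yml`, so the same image runs locally and in the cluster.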
4️⃣ Configuring KEDA for Autoscaling with RabbitMQ
keda-scaledobject.yml
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: audit-service-scaler
spec:
  scaleTargetRef:
    name: audit-service
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: audit.queue
        # An AMQP connection string, e.g. amqp://user:pass@rabbitmq:5672
        # (in production, supply it via a TriggerAuthentication and a Secret)
        host: RabbitMQConnectionString
        queueLength: "10"
```
When audit.queue accumulates messages, KEDA increases the number of pods; when the volume drops, it scales back down automatically, optimizing cost and resource usage.
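Under the hood, KEDA feeds the queue depth to the Horizontal Pod Autoscaler, which aims for roughly `queueLength` messages per replica. Simplifying away the HPA's averaging and cooldown behavior, the resulting replica count can be sketched as:

```java
public class ScalingMath {
    // Desired replicas ~= ceil(messages / targetPerReplica),
    // clamped between minReplicaCount and maxReplicaCount
    static int desiredReplicas(int messages, int targetPerReplica, int min, int max) {
        int desired = (int) Math.ceil((double) messages / targetPerReplica);
        return Math.max(min, Math.min(max, desired));
    }

    public static void main(String[] args) {
        // With queueLength "10", minReplicaCount 1, maxReplicaCount 10:
        System.out.println(desiredReplicas(0, 10, 1, 10));   // 1 (never below min)
        System.out.println(desiredReplicas(45, 10, 1, 10));  // 5
        System.out.println(desiredReplicas(500, 10, 1, 10)); // 10 (capped at max)
    }
}
```

So a burst of 45 pending messages would drive the deployment to about 5 pods, and any backlog beyond 100 messages saturates at the 10-replica ceiling.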
6. Testing Autoscaling on Kubernetes
- Generate sample events to populate the queue.
- Observe RabbitMQ growing with pending messages.
- Verify KEDA dynamically adjusting audit-service replicas.
- Check logs confirming parallel processing.
7. Best Practices and Observability
- Use structured logs (JSON) to facilitate correlation and parsing in monitoring tools.
- Implement correlation IDs to trace events end-to-end.
- Expose metrics for Prometheus and create dashboards in Grafana.
- Configure Dead Letter Queues (DLQs) for problematic messages.
These practices increase reliability and make the system observable and sustainable.
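The dead-letter pattern mentioned above can be sketched with plain collections: a handler retries each message a bounded number of times, then parks persistent failures for later inspection instead of blocking the queue. This is a conceptual model only — in RabbitMQ the same behavior is configured with a dead-letter exchange on the queue.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

public class DlqSketch {
    // Drains the queue; messages that still fail after maxRetries
    // attempts end up in the returned dead-letter list
    static List<String> drain(Deque<String> queue, Predicate<String> handler,
                              int maxRetries, List<String> processed) {
        List<String> deadLetters = new ArrayList<>();
        while (!queue.isEmpty()) {
            String msg = queue.poll();
            boolean done = false;
            for (int attempt = 1; attempt <= maxRetries && !done; attempt++) {
                done = handler.test(msg); // bounded retries
            }
            if (done) {
                processed.add(msg);
            } else {
                deadLetters.add(msg); // parked: the main queue keeps flowing
            }
        }
        return deadLetters;
    }

    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>(List.of("ok-1", "poison", "ok-2"));
        List<String> processed = new ArrayList<>();
        // This handler always fails for the "poison" message
        List<String> dlq = drain(queue, msg -> !msg.equals("poison"), 3, processed);
        System.out.println(processed); // [ok-1, ok-2]
        System.out.println(dlq);       // [poison]
    }
}
```

The key property: one bad message does not stall the healthy ones behind it, and the parked copy is preserved for diagnosis and reprocessing.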
8. Conclusion — Advantages of Event-Driven Architecture with KEDA and RabbitMQ
Adopting an event-driven architecture with RabbitMQ and KEDA allows your system to:
- Process events asynchronously
- Scale automatically based on load
- Be resilient, modular, and easy to evolve
Our audit microservice demonstrated in practice how to combine messaging, autoscaling, and best practices in a modern application.
Suggested next steps:
- Add retries and fallback mechanisms
- Create DLQs for unprocessed messages
- Evolve the design to event streaming (Kafka, Pulsar)