Capstone Project · ~11 hrs

StreamCart Real-Time Analytics

Process clickstream events on the fly. Build a low-latency architecture to power live Black Friday sales dashboards.

4 Parts / 34 Steps / 13 Tools
fig 1 — real-time fraud detection monitor: live view of cluster PROD-01 processing ~12.5k msg/s with exactly-once semantics, backed by a RocksDB-optimized state store (replication factor 3). Sample transactions:

  TX_ID   USER     STATUS    LATENCY
  #101    u_882    CLEAN     0.8ms
  #102    u_431    FLAGGED   1.2ms
  #103    u_219    CLEAN     0.5ms
  #104    u_667    FLAGGED   1.1ms

  • LATENCY: <100ms (fraud detection)
  • SEMANTICS: EOS (exactly-once)
  • BROKERS: 3 (KRaft cluster)
  • PANELS: 15+ (Grafana dashboard)

Kafka Topology

The streaming pipeline from ingestion to production deployment.

streamguard / kafka-topology

  • INGEST: Schema Registry · Avro schemas · KRaft mode · 3 brokers
  • PROCESS: KStream DSL · windowing · KTable joins · state stores
  • ALERT: Prometheus · Grafana (15+ panels) · AlertManager · DLQ routing
  • DEPLOY: Strimzi on K8s · EOS v2 · Chaos Mesh · HPA scaling

What You'll Build

A complete fraud detection system — from local Kafka cluster to production Kubernetes deployment with full observability.

Multi-Broker Cluster

3-broker KRaft mode Kafka cluster with Schema Registry, Avro schemas, and Kafka UI for real-time monitoring

Real-Time Detection

Windowed aggregations, velocity checks, geographic anomaly detection, and dead letter queue routing
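A velocity check counts how many events a user produced inside a sliding time window and flags the user once a threshold is crossed. The sketch below shows that logic in plain Java, outside the Kafka Streams DSL; the class name, window size, and threshold are illustrative, not StreamGuard's actual values:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Illustrative velocity check: flag a user exceeding maxEvents within windowMs. */
public class VelocityCheck {
    private final long windowMs;
    private final int maxEvents;
    private final Map<String, Deque<Long>> timestampsByUser = new HashMap<>();

    public VelocityCheck(long windowMs, int maxEvents) {
        this.windowMs = windowMs;
        this.maxEvents = maxEvents;
    }

    /** Record one event; return true if the user is now over the velocity limit. */
    public boolean isFlagged(String userId, long eventTimeMs) {
        Deque<Long> ts = timestampsByUser.computeIfAbsent(userId, k -> new ArrayDeque<>());
        // Drop timestamps that have slid out of the window.
        while (!ts.isEmpty() && ts.peekFirst() <= eventTimeMs - windowMs) {
            ts.pollFirst();
        }
        ts.addLast(eventTimeMs);
        return ts.size() > maxEvents;
    }
}
```

In the real pipeline the same counting would live in a windowed aggregation over a state store rather than an in-memory map.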

Stream Enrichment

KStream-KTable joins for customer/merchant enrichment, Interactive Queries REST API, and Kafka Connect sinks
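A KStream-KTable join pairs each transaction event with the latest table row for its key. A minimal plain-Java sketch of that lookup-style enrichment, with the map standing in for the KTable (record shapes and names are illustrative, not StreamGuard's Avro types):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative stream-table enrichment: join each event against the latest table row. */
public class EnrichmentJoin {
    // Plays the role of the KTable: latest value per key, updated by upserts.
    private final Map<String, String> customerTable = new HashMap<>();

    public void upsertCustomer(String customerId, String profile) {
        customerTable.put(customerId, profile);
    }

    /** Left-join semantics: events for unknown customers still pass through. */
    public String enrich(String customerId, String txnPayload) {
        String profile = customerTable.getOrDefault(customerId, "unknown");
        return txnPayload + "|customer=" + profile;
    }
}
```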

K8s + Chaos Labs

Strimzi operator deployment, HPA auto-scaling, Prometheus/Grafana (15+ panels), and failure recovery scenarios

Progressive Build Path

4 parts, each building on the last. Local setup to production Kubernetes.

Infrastructure Standards

Production patterns you'll implement across the streaming platform.

RELIABILITY: EOS (exactly-once)

Exactly-Once Semantics with idempotent producers and transactional consumers
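The config keys behind those guarantees are standard Kafka settings; the snippet below collects them as plain properties so the names are easy to check (the transactional id value is illustrative):

```java
import java.util.Properties;

/** Standard Kafka config keys for exactly-once semantics. */
public class EosConfig {
    public static Properties streamsProps() {
        Properties p = new Properties();
        // Kafka Streams EOS v2: processing, state updates, and output commit atomically.
        p.put("processing.guarantee", "exactly_once_v2");
        return p;
    }

    public static Properties producerProps() {
        Properties p = new Properties();
        // Idempotent producer: broker de-duplicates retries by producer id + sequence number.
        p.put("enable.idempotence", "true");
        // A transactional.id enables atomic multi-partition writes across restarts.
        p.put("transactional.id", "streamguard-txn-1");
        return p;
    }

    public static Properties consumerProps() {
        Properties p = new Properties();
        // read_committed hides records from aborted transactions.
        p.put("isolation.level", "read_committed");
        return p;
    }
}
```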

SCALABILITY: HPA (K8s operator)

Horizontal scaling via Strimzi operator with auto-scaling pod replicas

RESILIENCE: chaos-tested

Broker failures, state corruption, and network partition recovery labs

STATE: <1ms (RocksDB)

Sub-millisecond state store lookups with RocksDB and changelog topics
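The changelog topic is what makes a fast local store safe: every write is also appended to a log, so a restarted instance rebuilds its state by replaying it. A toy sketch of that recovery path (a list stands in for the changelog topic; names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative changelog-backed store: every put also appends to a shared log,
 *  so a fresh instance can rebuild local state by replaying the log. */
public class ChangelogStore {
    private final Map<String, Long> local = new HashMap<>();
    private final List<String[]> changelog; // shared "topic": [key, value] entries

    public ChangelogStore(List<String[]> changelog) {
        this.changelog = changelog;
    }

    public void put(String key, long value) {
        local.put(key, value);
        changelog.add(new String[] {key, Long.toString(value)});
    }

    public Long get(String key) {
        return local.get(key);
    }

    /** Recovery path: replay the changelog into an empty local store. */
    public void restore() {
        local.clear();
        for (String[] entry : changelog) {
            local.put(entry[0], Long.parseLong(entry[1]));
        }
    }
}
```

In Kafka Streams the local map is RocksDB and the log is a compacted changelog topic, but the replay idea is the same.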

Environment Setup

Launch the Kafka cluster and register your first Avro schema.

streamguard
# Clone StreamGuard & launch Kafka cluster
$ git clone https://github.com/aide-hub/streamguard.git
$ cd streamguard

# Start 3-broker KRaft cluster + Schema Registry + Kafka UI
$ docker-compose -f docker-compose.kafka.yml up -d

# Register an Avro schema for transactions (an Avro record must declare
# its fields; the single field shown here is a minimal example)
$ curl -X POST http://localhost:8081/subjects/transactions-value/versions \
    -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    -d '{"schema": "{\"type\": \"record\", \"name\": \"Transaction\", \"fields\": [{\"name\": \"tx_id\", \"type\": \"string\"}]}"}'

Tech Stack

Kafka 3.6+ · Kafka Streams · Kafka Connect · Schema Registry · PostgreSQL · Elasticsearch · Kubernetes · Strimzi · Prometheus · Grafana · AlertManager · Docker

Prerequisites

  • Java fundamentals (classes, streams API, lambda expressions)
  • Docker basics (containers, docker-compose commands)
  • Kafka basics (topics, producers, consumers)
  • Kubernetes concepts (pods, services, deployments)

Related Learning Path

Master Kafka architecture, stream processing patterns, and production deployment strategies before tackling this capstone project.

Kafka Streams Learning Path

Ready to build production stream processing?

Start with Part 1: Ingestion & Schema Registry
