A complete production-grade Sleuth + Zipkin + Kafka + Elasticsearch project template (with Docker Compose and code samples)

Below is a complete, production-grade project template that implements:

Spring Cloud Sleuth + Zipkin + Kafka + Elasticsearch
✅ Microservice architecture (2 services + a Feign call)
✅ Automatic distributed-trace reporting (via Kafka)
✅ Trace data persisted to Elasticsearch
✅ One-command deployment with Docker Compose
✅ Trace ID correlation in logs (MDC)


🛠️ Project name: sleuth-zipkin-kafka-es-demo

📁 Project structure

sleuth-zipkin-kafka-es-demo/
├── docker-compose.yml                 # one-command startup for all middleware
├── service-a/                         # order service (entry point)
│   ├── src/main/java/com/example/...
│   │   ├── OrderController.java       # exposes the REST API
│   │   ├── OrderService.java
│   │   └── UserClient.java            # Feign client calling service-b
├── service-b/                         # user service (callee)
│   ├── src/main/java/com/example/...
│   │   └── UserController.java        # exposes the user-info endpoint
├── pom.xml                            # shared parent POM
└── README.md

Step 1: Deploy the middleware with Docker Compose

docker-compose.yml

version: '3.8'

services:
  # ZooKeeper (required by Kafka)
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - "2181:2181"

  # Kafka
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # two listeners: one for containers on the compose network (kafka:29092),
      # one for the Spring services running on the host (localhost:9092)
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    ports:
      - "9092:9092"

  # Elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.3
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data

  # Zipkin server (consumes spans from Kafka, stores them in ES)
  zipkin:
    image: openzipkin/zipkin:latest
    depends_on:
      - kafka
      - elasticsearch
    environment:
      # the official image expects STORAGE_TYPE (not ZIPKIN_STORAGE_TYPE)
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=http://elasticsearch:9200
      - KAFKA_BOOTSTRAP_SERVERS=kafka:29092
      - KAFKA_TOPIC=zipkin
      - KAFKA_STREAMS=1
    ports:
      - "9411:9411"
    restart: unless-stopped

volumes:
  esdata:

📌 Start everything:

docker-compose up -d

Once the containers are up, visit:

  • Zipkin UI: http://localhost:9411
  • Elasticsearch: http://localhost:9200
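
Before wiring up the services, a few quick probes confirm each component is healthy (assuming the default ports above; the kafka-topics CLI ships inside the Confluent image):

# Elasticsearch cluster health
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Zipkin liveness
curl -s http://localhost:9411/health

# list Kafka topics (the zipkin topic is created on first use)
docker-compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list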

Step 2: Maven parent pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>sleuth-zipkin-demo</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>

    <modules>
        <module>service-a</module>
        <module>service-b</module>
    </modules>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.12</version>
        <relativePath/>
    </parent>

    <properties>
        <java.version>11</java.version>
        <spring-cloud.version>2021.0.8</spring-cloud.version>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

</project>

✅ Spring Boot 2.7.x + Spring Cloud 2021.0.x is the last release train that ships Sleuth (2022.x replaced it with Micrometer Tracing), and it supports the Kafka sender


Step 3: Service A (order service) - service-a/pom.xml

<project>
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.example</groupId>
        <artifactId>sleuth-zipkin-demo</artifactId>
        <version>1.0.0</version>
    </parent>

    <artifactId>service-a</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-sleuth</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-zipkin</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

service-a/src/main/resources/application.yml

server:
  port: 8081

spring:
  application:
    name: service-a
  kafka:
    # the Kafka sender reuses Spring Boot's standard Kafka producer settings
    bootstrap-servers: localhost:9092
  zipkin:
    sender:
      type: kafka
    kafka:
      topic: zipkin
  sleuth:
    sampler:
      probability: 0.5  # raised sampling rate for testing; lower it in production

logging:
  pattern:
    console: "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - [%X{traceId}/%X{spanId}] %msg%n"

service-a/src/main/java/com/example/servicea/OrderController.java

package com.example.servicea;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/order")
public class OrderController {

    // use SLF4J instead of System.out so the MDC pattern above can print traceId/spanId
    private static final Logger log = LoggerFactory.getLogger(OrderController.class);

    @Autowired
    private UserClient userClient;

    @GetMapping("/{orderId}")
    public String getOrder(@PathVariable String orderId) {
        log.info("[Service-A] Processing order: {}", orderId);
        String userInfo = userClient.getUserInfo("1001");
        return "Order{" + "id='" + orderId + "', user=" + userInfo + "}";
    }
}
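
The project tree lists an OrderService.java that never appears in the post. A minimal sketch of what it could contain, using Sleuth's @NewSpan/@SpanTag annotations (the method and span names are my own, not from the original project):

package com.example.servicea;

import org.springframework.cloud.sleuth.annotation.NewSpan;
import org.springframework.cloud.sleuth.annotation.SpanTag;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    // @NewSpan gives this method its own span inside the trace;
    // @SpanTag records the order id as a searchable tag in Zipkin.
    // Only calls that go through the Spring proxy (from another bean) are traced.
    @NewSpan("load-order")
    public String loadOrder(@SpanTag("order.id") String orderId) {
        return "Order-" + orderId;
    }
}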

service-a/src/main/java/com/example/servicea/UserClient.java

package com.example.servicea;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@FeignClient(name = "service-b", url = "http://localhost:8082")
public interface UserClient {
    @GetMapping("/api/user/{userId}")
    String getUserInfo(@PathVariable("userId") String userId);
}
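
The post omits the boot classes. Feign clients are only scanned when @EnableFeignClients is present, so service-a needs an entry point along these lines (the class name is my own):

package com.example.servicea;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.openfeign.EnableFeignClients;

@SpringBootApplication
@EnableFeignClients // required so that UserClient is registered
public class ServiceAApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServiceAApplication.class, args);
    }
}

(service-b only needs a plain @SpringBootApplication class.)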

Step 4: Service B (user service) - service-b/pom.xml (same as service-a's; the spring-cloud-starter-openfeign dependency can be dropped since service-b makes no outbound calls)

service-b/src/main/resources/application.yml

server:
  port: 8082

spring:
  application:
    name: service-b
  kafka:
    # the Kafka sender reuses Spring Boot's standard Kafka producer settings
    bootstrap-servers: localhost:9092
  zipkin:
    sender:
      type: kafka
    kafka:
      topic: zipkin
  sleuth:
    sampler:
      probability: 0.5

logging:
  pattern:
    console: "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - [%X{traceId}/%X{spanId}] %msg%n"

service-b/src/main/java/com/example/serviceb/UserController.java

package com.example.serviceb;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/user")
public class UserController {

    private static final Logger log = LoggerFactory.getLogger(UserController.class);

    @GetMapping("/{userId}")
    public String getUserInfo(@PathVariable String userId) {
        log.info("[Service-B] Looking up user: {}", userId);
        try {
            Thread.sleep(100); // simulate downstream latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "User{id='" + userId + "', name='张三'}";
    }
}
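
If you want service-b's spans to be searchable by business keys in Zipkin, you can tag the span currently in scope. A sketch against the Sleuth 3.x Tracer abstraction (the helper class and tag name are my own):

package com.example.serviceb;

import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.Tracer;
import org.springframework.stereotype.Component;

@Component
public class UserSpanTagger {

    private final Tracer tracer;

    public UserSpanTagger(Tracer tracer) {
        this.tracer = tracer;
    }

    // tags the span of the request currently being handled,
    // e.g. call tagUser(userId) from UserController.getUserInfo
    public void tagUser(String userId) {
        Span span = tracer.currentSpan();
        if (span != null) { // currentSpan() can be null outside a traced request
            span.tag("user.id", userId);
        }
    }
}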

Step 5: Start and verify

1. Start the services

# build and start service A
cd service-a
mvn spring-boot:run

# start service B (in a second terminal)
cd ../service-b
mvn spring-boot:run

2. Send a request

curl http://localhost:8081/api/order/ORD12345

Output:

Order{id='ORD12345', user=User{id='1001', name='张三'}}

3. Check the logs (Trace ID attached automatically)

10:23:45.123 [http-nio-8081-exec-1] INFO  c.e.s.OrderController - [8a7bd13e12c54b8a/1a2b3c4d5e6f7a8b] [Service-A] Processing order: ORD12345
10:23:45.130 [http-nio-8082-exec-1] INFO  c.e.s.UserController - [8a7bd13e12c54b8a/9b8c7d6e5f4a3b2c] [Service-B] Looking up user: 1001

👉 Note: both services log the same traceId, each with its own spanId!
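
Under the hood, Sleuth's Feign instrumentation carries the trace context in B3 headers. The request service-b receives looks roughly like this (traceId/spanId values taken from the sample logs above; the parent span id is illustrative):

X-B3-TraceId: 8a7bd13e12c54b8a
X-B3-SpanId: 9b8c7d6e5f4a3b2c
X-B3-ParentSpanId: 1a2b3c4d5e6f7a8b
X-B3-Sampled: 1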

4. Open the Zipkin UI

Visit: http://localhost:9411

  • Pick a service: service-a or service-b
  • Click "Find Traces"
  • Inspect the call chain, which contains:
    • GET /api/order/{orderId}
    • GET /api/user/{userId}
  • Timing breakdown and span tags (e.g. http.url, http.status_code)
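
To confirm the spans actually landed in Elasticsearch rather than only in the Zipkin UI, list Zipkin's daily indices (index naming can vary slightly between Zipkin versions, so treat this as a sanity probe):

curl 'http://localhost:9200/_cat/indices/zipkin*?v'
curl 'http://localhost:9200/zipkin*/_search?size=1&pretty'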

✅ Feature verification checklist

✅ Cross-service Trace ID propagation
✅ B3 headers injected automatically by Feign (see the header dump above)
✅ Trace data reported through Kafka
✅ Persisted to Elasticsearch
✅ Visualized in the Zipkin UI
✅ Trace ID in log output (MDC)
✅ Configurable sampling rate

🚀 Production tuning recommendations

  1. Sampling rate: set it to 0.05 ~ 0.1 in production (see the snippet after this list)
  2. Kafka cluster: run multiple brokers for higher throughput
  3. Elasticsearch cluster: shards + replicas, plus an index lifecycle (ILM) policy
  4. Zipkin high availability: several instances behind a load balancer
  5. Security: authentication/encryption for Kafka and ES; keep Zipkin on the internal network
  6. Monitoring: Prometheus + Grafana for Kafka consumer lag and Zipkin latency
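
A sketch of the sampler settings from item 1 (spring.sleuth.sampler.rate is Sleuth's alternative rate-limiting sampler; treat the exact numbers as starting points, not recommendations):

spring:
  sleuth:
    sampler:
      probability: 0.1   # sample ~10% of requests
      # alternatively, drop 'probability' and cap trace throughput instead:
      # rate: 100        # at most 100 traces per second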

📦 Packaging and publishing

You can push this project to GitHub/Gitee with the following layout:

https://github.com/yourname/sleuth-zipkin-kafka-es-demo

Containing:

  • docker-compose.yml
  • pom.xml
  • service-a/
  • service-b/
  • README.md (with deployment notes)

🔮 Next steps

  • Short term: add tracing at the gateway layer
  • Mid term: wire in Prometheus + Grafana for metrics monitoring
  • Long term: migrate to OpenTelemetry + Grafana Tempo