The Best Ways to Integrate Kafka with Spring

This article briefly surveys the options available for producing and consuming Kafka messages in a Java application.

Available libraries
  • kafka-clients (the plain Java client)
  • spring-kafka (Spring for Apache Kafka)
  • spring-integration-kafka
  • spring-cloud-stream-binder-kafka

Integrating with Spring via the Java Kafka client

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.2.2.RELEASE</version>
</dependency>
Integration with Spring Boot

Before Spring Boot 1.5 you had to write the Java configuration yourself; from 1.5 onward an auto-configuration is provided. See the org.springframework.boot.autoconfigure.kafka package, whose main classes are:

  • KafkaAutoConfiguration spring-boot-autoconfigure-1.5.7.RELEASE-sources.jar!/org/springframework/boot/autoconfigure/kafka/KafkaAutoConfiguration.java

@Configuration
@ConditionalOnClass(KafkaTemplate.class)
@EnableConfigurationProperties(KafkaProperties.class)
@Import(KafkaAnnotationDrivenConfiguration.class)
public class KafkaAutoConfiguration {

private final KafkaProperties properties;

public KafkaAutoConfiguration(KafkaProperties properties) {
    this.properties = properties;
}

@Bean
@ConditionalOnMissingBean(KafkaTemplate.class)
public KafkaTemplate<?, ?> kafkaTemplate(
        ProducerFactory<Object, Object> kafkaProducerFactory,
        ProducerListener<Object, Object> kafkaProducerListener) {
    KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate<Object, Object>(
            kafkaProducerFactory);
    kafkaTemplate.setProducerListener(kafkaProducerListener);
    kafkaTemplate.setDefaultTopic(this.properties.getTemplate().getDefaultTopic());
    return kafkaTemplate;
}

@Bean
@ConditionalOnMissingBean(ProducerListener.class)
public ProducerListener<Object, Object> kafkaProducerListener() {
    return new LoggingProducerListener<Object, Object>();
}

@Bean
@ConditionalOnMissingBean(ConsumerFactory.class)
public ConsumerFactory<?, ?> kafkaConsumerFactory() {
    return new DefaultKafkaConsumerFactory<Object, Object>(
            this.properties.buildConsumerProperties());
}

@Bean
@ConditionalOnMissingBean(ProducerFactory.class)
public ProducerFactory<?, ?> kafkaProducerFactory() {
    return new DefaultKafkaProducerFactory<Object, Object>(
            this.properties.buildProducerProperties());
}

}

  • KafkaAnnotationDrivenConfiguration spring-boot-autoconfigure-1.5.7.RELEASE-sources.jar!/org/springframework/boot/autoconfigure/kafka/KafkaAnnotationDrivenConfiguration.java

@Configuration
@ConditionalOnClass(EnableKafka.class)
class KafkaAnnotationDrivenConfiguration {

private final KafkaProperties properties;

KafkaAnnotationDrivenConfiguration(KafkaProperties properties) {
    this.properties = properties;
}

@Bean
@ConditionalOnMissingBean
public ConcurrentKafkaListenerContainerFactoryConfigurer kafkaListenerContainerFactoryConfigurer() {
    ConcurrentKafkaListenerContainerFactoryConfigurer configurer = new ConcurrentKafkaListenerContainerFactoryConfigurer();
    configurer.setKafkaProperties(this.properties);
    return configurer;
}

@Bean
@ConditionalOnMissingBean(name = "kafkaListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    configurer.configure(factory, kafkaConsumerFactory);
    return factory;
}

@EnableKafka
@ConditionalOnMissingBean(name = KafkaListenerConfigUtils.KAFKA_LISTENER_ANNOTATION_PROCESSOR_BEAN_NAME)
protected static class EnableKafkaConfiguration {

}

}

  • ConcurrentKafkaListenerContainerFactoryConfigurer spring-boot-autoconfigure-1.5.7.RELEASE-sources.jar!/org/springframework/boot/autoconfigure/kafka/ConcurrentKafkaListenerContainerFactoryConfigurer.java

public class ConcurrentKafkaListenerContainerFactoryConfigurer {

private KafkaProperties properties;

/**
 * Set the {@link KafkaProperties} to use.
 * @param properties the properties
 */
void setKafkaProperties(KafkaProperties properties) {
    this.properties = properties;
}

/**
 * Configure the specified Kafka listener container factory. The factory can be
 * further tuned and default settings can be overridden.
 * @param listenerContainerFactory the {@link ConcurrentKafkaListenerContainerFactory}
 * instance to configure
 * @param consumerFactory the {@link ConsumerFactory} to use
 */
public void configure(
        ConcurrentKafkaListenerContainerFactory<Object, Object> listenerContainerFactory,
        ConsumerFactory<Object, Object> consumerFactory) {
    listenerContainerFactory.setConsumerFactory(consumerFactory);
    Listener container = this.properties.getListener();
    ContainerProperties containerProperties = listenerContainerFactory
            .getContainerProperties();
    if (container.getAckMode() != null) {
        containerProperties.setAckMode(container.getAckMode());
    }
    if (container.getAckCount() != null) {
        containerProperties.setAckCount(container.getAckCount());
    }
    if (container.getAckTime() != null) {
        containerProperties.setAckTime(container.getAckTime());
    }
    if (container.getPollTimeout() != null) {
        containerProperties.setPollTimeout(container.getPollTimeout());
    }
    if (container.getConcurrency() != null) {
        listenerContainerFactory.setConcurrency(container.getConcurrency());
    }
}

}
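The configurer above simply maps externalized properties onto the listener container. Under Spring Boot 1.5 these can be set in application.properties; a sketch with illustrative values (broker address, group id, and tuning numbers are assumptions, not recommendations):

```properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=demo-group
spring.kafka.consumer.auto-offset-reset=earliest
# mapped by ConcurrentKafkaListenerContainerFactoryConfigurer:
spring.kafka.listener.ack-mode=record
spring.kafka.listener.concurrency=3
spring.kafka.listener.poll-timeout=3000
```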

This creates multiple concurrent KafkaMessageListenerContainer instances, which is equivalent to one application instance running multiple consumers. With Spring Boot 1.5 and later, usage is straightforward: inject a KafkaTemplate to send messages, and a small amount of configuration is enough to consume them.
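A minimal sketch of that usage on Boot 1.5+ auto-configuration (the topic name, bean name, and String payload type here are illustrative assumptions):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class DemoMessaging {

    // Auto-configured by KafkaAutoConfiguration from spring.kafka.* properties
    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    // Sending: no manual ProducerFactory wiring needed
    public void send(String payload) {
        kafkaTemplate.send("demo-topic", payload);
    }

    // Consuming: the kafkaListenerContainerFactory bean from
    // KafkaAnnotationDrivenConfiguration backs this listener
    @KafkaListener(topics = "demo-topic")
    public void onMessage(String payload) {
        System.out.println("received: " + payload);
    }
}
```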

spring integration kafka

Spring Integration is Spring's implementation of the Enterprise Integration Patterns, and Spring Integration Kafka builds on Spring for Apache Kafka to provide inbound and outbound channel adapters. As the project states: "Starting from version 2.0 this project is a complete rewrite based on the new spring-kafka project which uses the pure java Producer and Consumer clients provided by Kafka 0.9.x.x and 0.10.x.x."

There is no auto-configuration for this, and it introduces the Spring Integration concepts (channels, adapters), so overall it is somewhat more complex.

Consumer configuration
@Bean
public KafkaMessageListenerContainer<String, String> container(
        ConsumerFactory<String, String> kafkaConsumerFactory) {
    return new KafkaMessageListenerContainer<>(kafkaConsumerFactory,
            new ContainerProperties(new TopicPartitionInitialOffset(topic, 0)));
}
@Bean
public ConsumerFactory<String, String> kafkaConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroup);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 100);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public KafkaMessageDrivenChannelAdapter<String, String> adapter(KafkaMessageListenerContainer<String, String> container) {
    KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter =
            new KafkaMessageDrivenChannelAdapter<>(container);
    kafkaMessageDrivenChannelAdapter.setOutputChannel(fromKafka());
    return kafkaMessageDrivenChannelAdapter;
}
@Bean
public PollableChannel fromKafka() {
    return new QueueChannel();
}
Producer configuration
@Bean
@ServiceActivator(inputChannel = "toKafka")
public MessageHandler handler() throws Exception {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setTopicExpression(new LiteralExpression(topic));
    handler.setMessageKeyExpression(new LiteralExpression(messageKey));
    return handler;
}
@Bean
public ProducerFactory<String, String> kafkaProducerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAddress);
    props.put(ProducerConfig.RETRIES_CONFIG, 0);
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(props);
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(kafkaProducerFactory());
}
Sending and receiving messages
@Autowired
@Qualifier("fromKafka")
private PollableChannel fromKafka;

@Autowired
@Qualifier("toKafka")
MessageChannel toKafka;

Message<?> msg = fromKafka.receive(10000L);
toKafka.send(new GenericMessage<Object>(UUID.randomUUID().toString()));

spring cloud stream

Built on Spring Integration and further packaged for the Spring Cloud environment, so it adds yet another layer of abstraction. For details, see the spring cloud stream kafka examples and the spring-cloud-stream-binder-kafka property reference.
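As a rough sketch of the Spring Cloud Stream style (the 1.x-era annotation model; the transformation logic is illustrative, and the bound topics come from configuration, not code):

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

// Binds the Processor interface's "input" and "output" channels;
// the Kafka binder maps each channel to a topic via
// spring.cloud.stream.bindings.<channel>.destination properties
@EnableBinding(Processor.class)
public class StreamDemo {

    // Consume from the input channel, transform, publish to the output channel
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String handle(String payload) {
        return payload.toUpperCase();
    }
}
```

Compared with the channel adapters above, the broker-specific wiring disappears entirely into the binder configuration.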

doc

  • spring-kafka
  • spring-integration
  • spring-integration-kafka
  • spring-integration-samples-kafka
  • spring-cloud-stream
  • Spring Boot integration with Kafka
  • Handling cases where Kafka consumer throughput is very low (a summary)