Prerequisites: 1) the Kafka package has been downloaded, uploaded to the server, and extracted; 2) a ZooKeeper cluster is already installed (see the ZooKeeper cluster setup post).
1. Edit server.properties
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2 ## the id must be unique within the Kafka cluster
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.1.133:9092 ## the IP of the host this broker runs on
....... (middle part omitted)
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka_2.11-1.1.0/log ## output directory; it must be created beforehand (see step 2)
....... (middle part omitted)
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.1.131:2181,192.168.1.132:2181,192.168.1.133:2181
## the ZooKeeper cluster installed separately earlier, not the one bundled with Kafka
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
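Since every broker needs its own broker.id and listener IP, the two per-node edits above can be scripted instead of made by hand. A minimal sketch (it runs against a throwaway sample file for illustration; on a real node, point CONF at /opt/kafka_2.11-1.1.0/config/server.properties, and NODE_ID / NODE_IP are placeholders for that host's values):

```shell
# Work on a throwaway sample here; on a real node use the actual config path
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
broker.id=0
#listeners=PLAINTEXT://your.host.name:9092
EOF

NODE_ID=2                  # must be unique within the cluster
NODE_IP=192.168.1.133      # this host's IP

# Stamp the per-node values into the config
sed -i "s/^broker\.id=.*/broker.id=${NODE_ID}/" "$CONF"
sed -i "s|^#\{0,1\}listeners=.*|listeners=PLAINTEXT://${NODE_IP}:9092|" "$CONF"

grep -E '^(broker\.id|listeners)=' "$CONF"
```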
2. Create the log directory
Despite the name, this directory holds Kafka's partition data (the message segment files) rather than application logs.
[root@localhost kafka_2.11-1.1.0]# mkdir log
3. Start and verify
[root@localhost kafka_2.11-1.1.0]# bin/kafka-server-start.sh config/server.properties
## the script runs in the foreground and prints a lot of output; once it goes quiet, press Enter to get the prompt back
[root@localhost kafka_2.11-1.1.0]# ps -ef | grep kafka
root 2130 1558 4 21:15 pts/0 00:02:49 /opt/jdk1.8.0_171/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true ... (GC, JMX, and classpath options omitted) ... kafka.Kafka config/server.properties
root 3280 1558 0 22:17 pts/0 00:00:00 grep --color=auto kafka
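Starting the broker as above ties it to the terminal session. The start script also accepts a -daemon flag to run it in the background instead; a sketch (it requires a live Kafka installation, so it is not runnable standalone):

```shell
# Start the broker detached from the terminal
bin/kafka-server-start.sh -daemon config/server.properties

# ...and stop it cleanly when needed
bin/kafka-server-stop.sh
```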
4. Usage
## create a topic with 3 partitions, each with 3 replicas; this can be run on any Kafka node
bin/kafka-topics.sh --create --zookeeper 192.168.1.131:2181,192.168.1.132:2181,192.168.1.133:2181 --replication-factor 3 --partitions 3 --topic test
## list topics; run this on another node to confirm the whole cluster sees it
[root@localhost kafka_2.11-1.1.0]# bin/kafka-topics.sh --list --zookeeper localhost:2181
test
## describe the topic
[root@localhost kafka_2.11-1.1.0]# bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test PartitionCount:3 ReplicationFactor:3 Configs:
Topic: test Partition: 0 Leader: 2 Replicas: 2,1,0 Isr: 2,1,0
Topic: test Partition: 1 Leader: 0 Replicas: 0,2,1 Isr: 0,2,1
Topic: test Partition: 2 Leader: 1 Replicas: 1,0,2 Isr: 1,0,2
## Leader is the broker id currently serving the partition, Replicas lists all brokers holding a copy, and Isr is the subset of replicas in sync with the leader
## produce messages; run on any node
bin/kafka-console-producer.sh --broker-list 192.168.1.131:9092,192.168.1.132:9092,192.168.1.133:9092 --topic test
> type a message, then press Enter to send it
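The console producer reads from stdin, so besides typing interactively you can pipe in a file, one message per line (messages.txt here is a hypothetical file, not one created earlier):

```shell
# Each line of the file becomes one message on the topic
bin/kafka-console-producer.sh \
  --broker-list 192.168.1.131:9092,192.168.1.132:9092,192.168.1.133:9092 \
  --topic test < messages.txt
```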
## consume messages; run on another node. Each line the producer sends appears here as soon as Enter is pressed
[root@localhost kafka_2.11-1.1.0]# bin/kafka-console-consumer.sh --zookeeper 192.168.1.131:2181,192.168.1.132:2181,192.168.1.133:2181 --from-beginning --topic test
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
test1
test4
test
test3
this is msg
another msg
test2
test5
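As the deprecation warning in the consumer output says, the ZooKeeper-based consumer is on its way out. The equivalent invocation with the new consumer bootstraps from the brokers directly (a sketch; it needs the running cluster from the steps above):

```shell
# New consumer: connect to the brokers instead of ZooKeeper
bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.1.131:9092,192.168.1.132:9092,192.168.1.133:9092 \
  --from-beginning --topic test
```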