
Kafka-2.11 Study Notes (II): An Introduction to the Shell Scripts

Published 2024-11-19 (last updated 2024-11-19) · 千家信息网 editors

Lu Chunli's work notes. Who says programmers can't have a literary flair?



Kafka's main shell scripts are:

    [hadoop@nnode kafka0.8.2.1]$ ll
    total 80
    -rwxr-xr-x 1 hadoop hadoop  943 2015-02-27 kafka-console-consumer.sh
    -rwxr-xr-x 1 hadoop hadoop  942 2015-02-27 kafka-console-producer.sh
    -rwxr-xr-x 1 hadoop hadoop  870 2015-02-27 kafka-consumer-offset-checker.sh
    -rwxr-xr-x 1 hadoop hadoop  946 2015-02-27 kafka-consumer-perf-test.sh
    -rwxr-xr-x 1 hadoop hadoop  860 2015-02-27 kafka-mirror-maker.sh
    -rwxr-xr-x 1 hadoop hadoop  884 2015-02-27 kafka-preferred-replica-election.sh
    -rwxr-xr-x 1 hadoop hadoop  946 2015-02-27 kafka-producer-perf-test.sh
    -rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-reassign-partitions.sh
    -rwxr-xr-x 1 hadoop hadoop  866 2015-02-27 kafka-replay-log-producer.sh
    -rwxr-xr-x 1 hadoop hadoop  872 2015-02-27 kafka-replica-verification.sh
    -rwxr-xr-x 1 hadoop hadoop 4185 2015-02-27 kafka-run-class.sh
    -rwxr-xr-x 1 hadoop hadoop 1333 2015-02-27 kafka-server-start.sh
    -rwxr-xr-x 1 hadoop hadoop  891 2015-02-27 kafka-server-stop.sh
    -rwxr-xr-x 1 hadoop hadoop  868 2015-02-27 kafka-simple-consumer-shell.sh
    -rwxr-xr-x 1 hadoop hadoop  861 2015-02-27 kafka-topics.sh
    drwxr-xr-x 2 hadoop hadoop 4096 2015-02-27 windows
    -rwxr-xr-x 1 hadoop hadoop 1370 2015-02-27 zookeeper-server-start.sh
    -rwxr-xr-x 1 hadoop hadoop  875 2015-02-27 zookeeper-server-stop.sh
    -rwxr-xr-x 1 hadoop hadoop  968 2015-02-27 zookeeper-shell.sh
    [hadoop@nnode kafka0.8.2.1]$

Note: Kafka also ships .bat scripts for running on Windows, under the bin/windows directory.


ZooKeeper Scripts

Every Kafka component depends on ZooKeeper, so a ZooKeeper environment must be available before Kafka can be used. You can either set up a ZooKeeper cluster, or use the ZooKeeper scripts bundled with Kafka to start a single standalone-mode ZooKeeper node.

    # Start the ZooKeeper server
    [hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-start.sh
    USAGE: bin/zookeeper-server-start.sh zookeeper.properties
    # The config file is config/zookeeper.properties; its main setting is
    # ZooKeeper's local storage path (dataDir).
    # Internally the script runs:
    exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain $@

    # Stop the ZooKeeper server
    [hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-stop.sh
    # Internally the script runs:
    ps ax | grep -i 'zookeeper' | grep -v grep | awk '{print $1}' | xargs kill -SIGINT

    # ZooKeeper client shell
    [hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh
    USAGE: bin/zookeeper-shell.sh zookeeper_host:port[/path] [args...]
    # Internally the script runs:
    exec $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMain -server "$@"

    # The ZooKeeper shell can be used to inspect ZooKeeper's znodes
    [hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh nnode:2181,dnode1:2181,dnode2:2181/
    Connecting to nnode:2181,dnode1:2181,dnode2:2181/
    Welcome to ZooKeeper!
    JLine support is disabled

    WATCHER::

    WatchedEvent state:SyncConnected type:None path:null
    ls /
    [hbase, hadoop-ha, admin, zookeeper, consumers, config, zk-book, brokers, controller_epoch]
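For a quick standalone setup, the bundled scripts can be chained together. The session below is a sketch, not taken from the original notes; the dataDir value is illustrative, and it assumes you run it from the Kafka installation directory:

```shell
# First point ZooKeeper's snapshot directory somewhere persistent
# (the shipped config/zookeeper.properties defaults to /tmp/zookeeper):
#     dataDir=/data/zookeeper

# Start one standalone ZooKeeper node in the background
bin/zookeeper-server-start.sh config/zookeeper.properties &

# Once it is up, confirm it answers by listing the root znode,
# then stop it again
bin/zookeeper-shell.sh localhost:2181 ls /
bin/zookeeper-server-stop.sh
```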

Note: in shell scripts, $@ expands to the full list of arguments, while $# is the number of arguments passed to the shell.
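A minimal standalone illustration of these two parameters (the show_args function is mine, not part of the Kafka scripts; note how quoting "$@" preserves an argument that contains a space):

```shell
#!/bin/sh
# show_args: print the argument count ($#), then each argument ($@) on its own line
show_args() {
    echo "count: $#"
    for arg in "$@"; do
        echo "arg: $arg"
    done
}

show_args foo "bar baz"
# prints:
#   count: 2
#   arg: foo
#   arg: bar baz
```

This is exactly why kafka-run-class.sh is invoked with $@: every argument given to the wrapper script is forwarded to the Java main class unchanged.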


Starting and Stopping Kafka

    # Start the Kafka server
    [hadoop@nnode kafka0.8.2.1]$ bin/kafka-server-start.sh
    USAGE: bin/kafka-server-start.sh [-daemon] server.properties
    # Internally the script runs:
    exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka $@

    # (output omitted)
    [hadoop@nnode kafka0.8.2.1]$ bin/kafka-run-class.sh

    # Stop the Kafka server
    [hadoop@nnode kafka0.8.2.1]$ bin/kafka-server-stop.sh
    # Internally the script runs:
    ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
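In practice the server is usually started detached from the terminal with the -daemon flag. A typical session might look like the sketch below (it assumes a Kafka installation like the one listed above and is illustrative, not from the original notes):

```shell
# Start the broker in the background; stdout/stderr go to the logs/ directory
bin/kafka-server-start.sh -daemon config/server.properties

# Verify the broker process is up, using the same pattern
# that kafka-server-stop.sh greps for
ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep

# Shut it down cleanly; SIGTERM triggers Kafka's controlled shutdown
bin/kafka-server-stop.sh
```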

Note: on startup, Kafka reads its configuration from config/server.properties. The three core settings for starting a Kafka server are:

    broker.id         : the broker's unique identifier, a non-negative integer (a common choice is the last octet of the host's IP address)
    port              : the port the server listens on for client connections (default 9092)
    zookeeper.connect : the ZooKeeper connection string, in the form hostname1:port1[,hostname2:port2,hostname3:port3]

    # Optional
    log.dirs          : where Kafka stores its data (default /tmp/kafka-logs); a comma-separated list of one or more directories. When a new partition is created, it is placed in whichever directory currently holds the fewest partitions.
    num.partitions    : the number of partitions per topic (default 1); can also be specified when a topic is created.
    # For the remaining settings, see http://kafka.apache.org/documentation.html#configuration
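Putting those settings together, a minimal server.properties for the three-node environment shown earlier might look like this. The broker.id and directory values are illustrative, not from the original notes:

```properties
# config/server.properties (sketch)
broker.id=101
port=9092
zookeeper.connect=nnode:2181,dnode1:2181,dnode2:2181

# optional: keep data off /tmp; two dirs lets Kafka balance partitions across them
log.dirs=/data/kafka-logs1,/data/kafka-logs2
num.partitions=3
```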


Kafka Messages

    # Message producer
    [hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-producer.sh
    Read data from standard input and publish it to Kafka.    # reads from the console
    Option                Description
    ------                -----------
    --broker-list         REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.
    --topic               REQUIRED: The topic id to produce messages to.
    # These two are required; run the command with no arguments to see the help
    # for the optional parameters.

    # Message consumer
    [hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-consumer.sh
    The console consumer is a tool that reads data from Kafka and outputs it to standard output.
    Option                Description
    ------                -----------
    --zookeeper           REQUIRED: The connection string for the zookeeper connection, in the
                            form host:port. (Multiple URLS can be given to allow fail-over.)
    --topic               The topic id to consume on.
    --from-beginning      If the consumer does not already have an established offset to consume
                            from, start with the earliest message present in the log rather than
                            the latest message.
    # Only --zookeeper is required; see the help output for the other options.

    # Inspect topic information
    [hadoop@nnode kafka0.8.2.1]$ bin/kafka-topics.sh
    Create, delete, describe, or change a topic.
    Option                Description
    ------                -----------
    --zookeeper           REQUIRED: The connection string for the zookeeper connection, in the
                            form host:port. (Multiple URLS can be given to allow fail-over.)
    --create              Create a new topic.
    --delete              Delete a topic
    --alter               Alter the configuration for the topic.
    --list                List all available topics.
    --describe            List details for the given topics.
    --topic               The topic to be create, alter or describe. Can also accept a regular
                            expression except for --create option.
    --help                Print usage information.
    # Only --zookeeper is required; see the help output for the other options.
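Tying the three tools together, a typical smoke test might run like the sketch below. The topic name mytopic is illustrative, and the commands assume a broker already running on the hosts listed earlier:

```shell
# 1. Create a topic with one partition and a single replica
bin/kafka-topics.sh --zookeeper nnode:2181,dnode1:2181,dnode2:2181 \
    --create --topic mytopic --partitions 1 --replication-factor 1

# 2. Publish a few messages: type them on stdin, one per line, Ctrl-C to quit
bin/kafka-console-producer.sh --broker-list nnode:9092 --topic mytopic

# 3. Read everything back, starting from the earliest message in the log
bin/kafka-console-consumer.sh --zookeeper nnode:2181,dnode1:2181,dnode2:2181 \
    --topic mytopic --from-beginning
```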


The remaining scripts are omitted for now.




