Failed Installing Kafka on Windows
May 11, 2017
I wanted to try the Kafka message broker on my laptop. Unfortunately, it's only an i3 with 8 GB of RAM running Windows 7, and the attempt failed with an out-of-memory error.
Here is the documented approach I followed.
Download
First, I installed a JDK for Java 8. I used jdk-8u131-windows-x64.
Next, I downloaded the binary Kafka distribution from https://kafka.apache.org/downloads - in my case this was kafka_2.12-0.10.2.1.tgz.
I unpacked this to my C: drive into C:\Kafka_2.12-0.10.2.1.
Running
To run Kafka, you need to start both ZooKeeper and the Kafka broker. First, start ZooKeeper:
C:\Kafka_2.12-0.10.2.1>bin\windows\zookeeper-server-start.bat config\zookeeper.properties
and then, in another DOS box, start the broker:
C:\Kafka_2.12-0.10.2.1>bin\windows\kafka-server-start.bat config\server.properties
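For reference, the defaults in the two properties files used above look roughly like this in the 0.10.x distribution (check your own copies - these are the stock values, not anything I changed):

```
# config\zookeeper.properties (excerpt)
dataDir=/tmp/zookeeper
clientPort=2181

# config\server.properties (excerpt)
broker.id=0
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
```

Note that on Windows a path like /tmp/kafka-logs resolves against the root of the current drive, so the broker's log segments end up somewhere like C:\tmp\kafka-logs.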
Testing
To test that everything is working, we can create a topic and then pass a few messages through it.
C:\Kafka_2.12-0.10.2.1>bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
C:\Kafka_2.12-0.10.2.1>bin\windows\kafka-topics.bat --list --zookeeper localhost:2181
test
Now that we have a topic, we can set up a consumer. Run this command in another DOS box.
C:\Kafka_2.12-0.10.2.1>bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
Finally, we can send some messages to it. Run this command, then type messages into the console, followed by return. This should output the messages into the consumer window.
C:\Kafka_2.12-0.10.2.1>bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test
Problems
I hit the following error in the Kafka window. No amount of fiddling with the memory configuration alleviated it, and in the end I gave up and provisioned Kafka on Azure instead.
java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
    at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:61)
    at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:52)
    at kafka.log.LogSegment.<init>(LogSegment.scala:72)
    at kafka.log.Log.$anonfun$loadSegments$4(Log.scala:210)
    at kafka.log.Log$$Lambda$268/32902201.apply(Unknown Source)
    at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
    at scala.collection.TraversableLike$WithFilter$$Lambda$209/1135548.apply(Unknown Source)
    at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
    at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:193)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
    at kafka.log.Log.loadSegments(Log.scala:188)
    at kafka.log.Log.<init>(Log.scala:116)
    at kafka.log.LogManager.$anonfun$createLog$1(LogManager.scala:365)
    at kafka.log.LogManager$$Lambda$606/9412500.apply(Unknown Source)
    at scala.Option.getOrElse(Option.scala:121)
    at kafka.log.LogManager.createLog(LogManager.scala:361)
    at kafka.cluster.Partition.$anonfun$getOrCreateReplica$1(Partition.scala:109)
    at kafka.cluster.Partition$$Lambda$605/32670253.apply(Unknown Source)
    at kafka.utils.Pool.getAndMaybePut(Pool.scala:70)
    at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:106)
    at kafka.cluster.Partition.$anonfun$makeLeader$3(Partition.scala:166)
    at kafka.cluster.Partition.$anonfun$makeLeader$3$adapted(Partition.scala:166)
    at kafka.cluster.Partition$$Lambda$604/24170421.apply(Unknown Source)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
    at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:166)
    at kafka.cluster.Partition$$Lambda$602/30335633.apply(Unknown Source)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
    at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
    at kafka.cluster.Partition.makeLeader(Partition.scala:160)
    at kafka.server.ReplicaManager.$anonfun$makeLeaders$5(ReplicaManager.scala:752)
    at kafka.server.ReplicaManager$$Lambda$601/4765530.apply(Unknown Source)
    at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:130)
    at scala.collection.mutable.HashMap$$Lambda$26/14153696.apply(Unknown Source)
    at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:233)
    at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
    at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:751)
    at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:696)
    at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:84)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:62)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method)
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
    ... 44 more
I tried to solve this by setting the environment variable KAFKA_HEAP_OPTS="-Xmx1G -Xms1G", but it made no difference - and in fact kafka-server-start.bat already sets this variable for exactly this reason. In hindsight that's not surprising: the underlying OutOfMemoryError is thrown from FileChannel.map, i.e. while memory-mapping a log index file, and memory-mapped files live outside the Java heap, so the heap settings don't directly control them.
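For anyone retrying this, two experiments that might be worth a go (I hadn't tried them before giving up, so I can't confirm they avoid the mmap failure). First, shrinking the broker's segments and index files so each mmap request is smaller - these are the standard broker settings, with smaller-than-default values I picked for illustration:

```
# config\server.properties - untested experiment: smaller segments
# and index files mean smaller memory-map requests per partition
log.segment.bytes=104857600
segment.index.bytes=1048576
```

Second, counter-intuitively, lowering the heap rather than raising it, since a smaller heap leaves more room for mapped files:

```
rem Set before running kafka-server-start.bat (untested)
set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
```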