2. System Requirements

2.1. Java 8 System Requirements

Windows

  • Windows 10 (8u51 and above)
  • Windows 8.x (Desktop)
  • Windows 7 SP1
  • Windows Vista SP2
  • Windows Server 2008 R2 SP1 (64-bit)
  • Windows Server 2012 and 2012 R2 (64-bit)
  • RAM: 128 MB
  • Disk space: 124 MB for JRE; 2 MB for Java Update
  • Processor: Minimum Pentium 2 266 MHz processor
  • Browsers: Internet Explorer 9 and above, Firefox

Mac OS X

  • Intel-based Mac running Mac OS X 10.8.3+, 10.9+
  • Administrator privileges for installation
  • A 64-bit browser (Safari, for example) is required to run Oracle Java on Mac

Linux

  • Oracle Linux 5.5+
  • Oracle Linux 6.x (32-bit), 6.x (64-bit)
  • Oracle Linux 7.x (64-bit) (8u20 and above)
  • Red Hat Enterprise Linux 5.5+, 6.x (32-bit), 6.x (64-bit)
  • Red Hat Enterprise Linux 7.x (64-bit) (8u20 and above)
  • Suse Linux Enterprise Server 10 SP2+, 11.x
  • Suse Linux Enterprise Server 12.x (64-bit) (8u31 and above)
  • Ubuntu Linux 12.04 LTS, 13.x
  • Ubuntu Linux 14.x (8u25 and above)
  • Ubuntu Linux 15.04 (8u45 and above)
  • Ubuntu Linux 15.10 (8u65 and above)
  • Browsers: Firefox

2.2. ZooKeeper Requirements

ZooKeeper runs in Java, release 1.6 or greater (JDK 6 or greater). It runs as an ensemble of ZooKeeper servers. Three ZooKeeper servers are the minimum recommended size for an ensemble, and we also recommend that they run on separate machines. At Yahoo!, ZooKeeper is usually deployed on dedicated RHEL boxes, with dual-core processors, 2GB of RAM, and 80GB IDE hard drives.
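
For illustration, a minimal zoo.cfg for a three-server ensemble might look like the sketch below. The hostnames zk1 through zk3 are placeholders; the ports shown are ZooKeeper's conventional defaults.

  # Basic time unit in milliseconds, used for heartbeats and timeouts
  tickTime=2000
  # Ticks a follower may take to connect and sync to the leader
  initLimit=10
  # Ticks a follower may lag behind the leader
  syncLimit=5
  # Where snapshots of the in-memory database are stored
  dataDir=/var/lib/zookeeper
  # Port on which clients connect
  clientPort=2181
  # The three ensemble members: host:peerPort:leaderElectionPort
  server.1=zk1.example.com:2888:3888
  server.2=zk2.example.com:2888:3888
  server.3=zk3.example.com:2888:3888

Each server additionally needs a myid file in its dataDir containing its own ID (1, 2, or 3) so it can identify itself in the server.N list.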

2.3. Recommendations for Kafka

Kafka brokers use both the JVM heap and the OS page cache. The JVM heap is used for the replication of partitions between brokers and for log compaction. Replication requires 1MB (the default replica.fetch.max.bytes) for each partition on the broker. In Apache Kafka 0.10.1 (Confluent Platform 3.1), we added a new configuration (replica.fetch.response.max.bytes) that limits the total RAM used for replication to 10MB, to avoid memory and garbage collection issues when the number of partitions on a broker is high. For log compaction, calculating the required memory is more complicated, and we recommend referring to the Kafka documentation if you are using this feature. For small to medium-sized deployments, a 4GB heap size is usually sufficient.

In addition, it is highly recommended that consumers always read from memory, i.e., from data that was written to Kafka and is still stored in the OS page cache. The amount of memory this requires depends on the rate at which the data is written and how far behind you expect consumers to get. If you write 20GB per hour per broker and you allow brokers to fall 3 hours behind under normal circumstances, you will want to reserve 60GB for the OS page cache. In cases where consumers are forced to read from disk, performance will drop significantly.
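
As a sketch of how these figures translate into configuration, assuming a stock Apache Kafka installation started with kafka-server-start.sh, the property values below simply restate the defaults discussed above:

  # server.properties
  # Per-partition fetch size used by replica fetchers (1MB default)
  replica.fetch.max.bytes=1048576
  # Cap on the size of a single replication fetch response (10MB default, 0.10.1+)
  replica.fetch.response.max.bytes=10485760

The heap itself is usually set through the KAFKA_HEAP_OPTS environment variable honored by Kafka's launch scripts, leaving the remainder of the machine's RAM to the OS page cache:

  export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
  bin/kafka-server-start.sh config/server.properties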

Kafka Connect itself does not use much memory, but some connectors buffer data internally for efficiency. If you run multiple connectors that use buffering, you will want to increase the JVM heap size to 1GB or higher.
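
For example, the same environment variable works for a distributed Connect worker; the 1GB figure mirrors the guidance above:

  # Run a Connect worker with a 1GB heap for buffering connectors
  KAFKA_HEAP_OPTS="-Xms1g -Xmx1g" bin/connect-distributed.sh config/connect-distributed.properties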

Consumers use at least 2MB of memory each, and up to 64MB when receiving large responses from brokers (typical for bursty traffic). Producers allocate a buffer of 64MB each. Start by allocating 1GB of RAM, then add 64MB for each planned producer and 16MB for each planned consumer.
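
As a worked example of that rule of thumb, a machine hosting, say, 5 producers and 10 consumers (the counts are arbitrary) would be sized as:

  1GB base + (5 producers x 64MB) + (10 consumers x 16MB)
    = 1024MB + 320MB + 160MB
    = 1504MB, i.e. roughly 1.5GB of RAM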