Kresten Krab describes how Humio uses Apache Kafka in its product:
Humio is a log analytics system built to run both on-prem and as a hosted offering. It is designed to be “on-prem first” because many logging use cases require the privacy and security of managing your own logging solution, and because volume limitations can often be a problem in hosted scenarios.
From a software provider’s point of view, fixing issues in an on-prem solution is inherently difficult, so we have strived to keep the solution simple. To that end, a Humio installation consists of just a single Humio process per node, with a dependency on a Kafka cluster running nearby (we recommend deploying one Humio node per physical CPU, so a dual-socket machine typically runs two Humio nodes).
We use Kafka for two things: buffering ingest and sequencing events among the nodes of a Humio cluster.
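As a rough illustration of those two roles, the sketch below uses the plain Java Kafka producer to append raw log events to an ingest topic (buffering) and to publish coordination events to a topic whose offsets define a shared order for the cluster (sequencing). The topic names, configuration, and payloads here are illustrative assumptions, not Humio's actual setup.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaRolesSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");  // wait for replication so ingested data is durable

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Role 1 - buffering ingest: append an incoming log event to an ingest
            // topic so it is safely persisted before the backend processes it.
            producer.send(new ProducerRecord<>("ingest-queue", "datasource-1",
                    "{\"ts\":1234567890,\"msg\":\"user logged in\"}"));

            // Role 2 - sequencing: publish a cluster coordination event to a
            // single-partition topic; the partition's offsets give every node
            // the same total order of events.
            producer.send(new ProducerRecord<>("cluster-events",
                    "{\"type\":\"segment-completed\",\"id\":\"abc\"}"));
        }
    }
}
```

Using a single partition for the coordination topic is what makes the "sequencer" role work in this sketch: Kafka guarantees a total order only within a partition, so all consumers of that partition observe the same sequence of events.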
Read on for more details and a few tips on using Kafka to its fullest.