When we talk about high-performance software, we probably think of server software (such as nginx) that processes millions of requests from thousands of clients in parallel. Surely, part of what makes such servers fast is high-end hardware: a powerful CPU, a large amount of memory, and a very fast network link. But even then, the software must use these resources with maximum efficiency, otherwise it will end up wasting most of the valuable CPU time on unnecessary kernel-user context switches or on waiting for slow I/O operations to complete.
Thankfully, operating systems offer a solution to this problem: the kernel event queue. Server software and the OS kernel use this mechanism together to achieve minimum latency and maximum scalability when serving a very large number of clients in parallel. In this article we are going to talk about kqueue on FreeBSD and macOS, epoll on Linux, and I/O Completion Ports on Windows. They all have their similarities and differences, which we're going to discuss here. The goal of this article is for you to understand the whole mechanism behind kernel queues and how to work with each API.
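To give a feel for the pattern the rest of the article builds on, here is a minimal sketch using Linux epoll as an illustration (kqueue and IOCP follow the same create/register/wait shape, with different calls). The `listen_fd` parameter is assumed to be a non-blocking socket that is already bound and listening; error handling is kept to a minimum.

```c
#include <sys/epoll.h>
#include <stdio.h>
#include <stdlib.h>

void event_loop(int listen_fd)
{
    /* Step 1: create the kernel event queue. */
    int epfd = epoll_create1(0);
    if (epfd == -1) { perror("epoll_create1"); exit(1); }

    /* Step 2: register interest in readability on the listening socket. */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) == -1) {
        perror("epoll_ctl");
        exit(1);
    }

    /* Step 3: block until the kernel reports ready descriptors --
     * no busy polling, no per-client threads. */
    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            /* events[i].data.fd is ready; accept/read without blocking. */
            printf("fd %d is ready\n", events[i].data.fd);
        }
    }
}
```

A single thread running this loop can watch thousands of descriptors at once, which is exactly the scalability property the following sections examine API by API.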