This is a simple static HTTP server written in Go, whose main purpose is to serve (public) static content as efficiently as possible. As such, it supports only ``GET`` requests and does not provide features like dynamic content, authentication, reverse proxying, etc.
However, it does provide something unique that no other HTTP server offers: the static content is served from a CDB_ database with almost zero latency.
CDB_ databases are binary files that provide efficient read-only key-value lookup tables, initially used in some DNS and SMTP servers, mainly for their low-overhead lookups, zero locking in multi-threaded / multi-process scenarios, and "atomic" multi-record updates. This also makes them suitable for low-latency static content serving over HTTP, which is what this project provides.
* under normal conditions (16 concurrent connections), you get around 72k requests / second, at about 0.4ms latency for 99% of the requests;
* under stress conditions (512 concurrent connections), you get around 74k requests / second, at about 15ms latency for 99% of the requests;
* **under extreme conditions (2048 concurrent connections), you get around 74k requests / second, at about 500ms latency for 99% of the requests (meanwhile the average is 50ms);**
* (the timeout errors are due to the fact that ``wrk`` is configured to time out after only 1 second of waiting;)
* (the read errors are due to the fact that the server closes a keep-alive connection after serving 256k requests;)
* **the raw performance is comparable with NGinx** (only 20% fewer requests / second for this "synthetic" benchmark); however for a "real" scenario (i.e. thousands of small files accessed in a random pattern) I think they are on par; (not to mention how simple it is to configure and deploy ``kawipiko`` as compared to NGinx;)
* (the NGinx configuration file can be found in the `examples folder <./examples>`_; the configuration was obtained after many experiments to squeeze out of NGinx as much performance as possible, given the targeted use-case, namely many small static files;)
* `darkhttpd`_ 512 connections / 1 server process / 4 wrk threads: ::
    Running 30s test @ http://127.0.0.1:8080/index.txt
    Requests/sec: 38191.65
    Transfer/sec: 8.74MB
* get the binaries (either `download <#download-binaries>`_ or `build <#build-from-sources>`_ them);
* get the ``hello-world.cdb`` (from the `examples <./examples>`__ folder inside the repository);
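* (optionally) instead of the example archive, you can pack your own static content; a minimal sketch, assuming the companion ``kawipiko-archiver`` tool with ``--sources`` and ``--archive`` options (the paths are hypothetical; adjust the flags to the archiver's actual options): ::

    # (hypothetical paths; flags assumed, adjust to the archiver's actual options)
    kawipiko-archiver \
        --sources ./your-site-folder \
        --archive ./your-site.cdb \
    #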
Single process / single threaded
................................
* this scenario will yield a "base-line performance" per core;
* execute the server (in-memory and indexed) (i.e. the "best case scenario"): ::
kawipiko-server \
--bind 127.0.0.1:8080 \
--archive ./hello-world.cdb \
--archive-inmem \
--index-all \
--processes 1 \
--threads 1 \
#
* execute the server (memory mapped) (i.e. the "recommended scenario"): ::
kawipiko-server \
--bind 127.0.0.1:8080 \
--archive ./hello-world.cdb \
--archive-mmap \
--processes 1 \
--threads 1 \
#
Single process / two threads
............................
* this scenario is the usual setup; configure ``--threads`` to equal the number of cores;
* execute the server (memory mapped): ::
kawipiko-server \
--bind 127.0.0.1:8080 \
--archive ./hello-world.cdb \
--archive-mmap \
--processes 1 \
--threads 2 \
#
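* (optionally) before starting the load generators, sanity-check that the server responds; a minimal check using ``curl``: ::

    curl -v http://127.0.0.1:8080/ \
    #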
Load generators
...............
* 512 concurrent connections (handled by 2 threads): ::
wrk \
--threads 2 \
--connections 512 \
--timeout 6s \
--duration 30s \
--latency \
http://127.0.0.1:8080/ \
#
* 4096 concurrent connections (handled by 4 threads): ::
wrk \
--threads 4 \
--connections 4096 \
--timeout 6s \
--duration 30s \
--latency \
http://127.0.0.1:8080/ \
#
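* (note) the 4096 connections run exceeds the default limit of 1024 "file descriptors" (see also the notes below); a minimal way to raise it for the current shell before launching ``wrk`` (raising it above the hard limit may require root): ::

    ulimit -n 16384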
Take into account
.................
* the number of threads for the server plus those for ``wrk`` shouldn't be larger than the number of available cores; (or use different machines for the server and the client;)
* also take into account that by default the number of "file descriptors" on most UNIX/Linux machines is limited to 1024, therefore if you want to try with more than ~1000 connections, you need to raise this limit; (see below;)
* additionally, you can try to pin the server and ``wrk`` to specific cores, and increase various priorities (scheduling, IO, etc.); (given that Intel processors feature HyperThreading, whose logical cores appear to the OS as individual cores, you should make sure that you pin each process onto cores belonging to the same physical processor / core;)
* pinning the server (cores ``0`` and ``1`` are mapped on physical core ``1``): ::
sudo -u root -n -E -P -- \
taskset -c 0,1 \
nice -n -19 -- \
ionice -c 2 -n 0 -- \
chrt -r 10 \
prlimit -n16384 -- \
sudo -u "${USER}" -n -E -P -- \
kawipiko-server \
... \
#
* pinning the client (cores ``2`` and ``3`` are mapped on physical core ``2``): ::
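    # (a sketch mirroring the server pinning above;
    #  the cores and the wrk arguments are placeholders to be adjusted)
    sudo -u root -n -E -P -- \
    taskset -c 2,3 \
    nice -n -19 -- \
    ionice -c 2 -n 0 -- \
    chrt -r 10 \
    prlimit -n16384 -- \
    sudo -u "${USER}" -n -E -P -- \
    wrk \
        ... \
    #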
* (optionally) in order to reduce the serving latency even further, one can preload the entire CDB database in memory, or alternatively map it in memory (mmap_); this trades memory for CPU;
* "atomic" site content changes; because the entire site content is held in a single CDB database file, and because the file replacement is achieved atomically via the ``rename`` syscall (or the ``mv`` tool), all the site's resources are "changed" at the same time; (see the sketch below;)
* ``_wildcard.*`` files (where ``.*`` stands for the regular extensions like ``.txt``, ``.html``, etc.), which will be used if an actual resource is not found under that folder; (these files respect the hierarchical tree structure, i.e. "deeper" ones override the ones closer to the "root";)
* support for custom HTTP response headers (for specific files, for specific folders, etc.); (currently only ``Content-Type``, ``Content-Length``, ``Content-Encoding`` and optionally ``ETag`` are included; additionally ``Cache-Control: public, immutable, max-age=3600`` and a few security related headers are also included;)
* support for mapping virtual hosts to key prefixes; (currently virtual hosts, i.e. the ``Host`` header, are ignored;)
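* (regarding the "atomic" site content changes above) a minimal sketch of the swap, assuming the new content was packed into a separate archive file (the paths are hypothetical): ::

    # (hypothetical paths; `rename` within the same file-system is atomic,
    #  therefore any process opening the archive sees either the old or the
    #  new version, never a partial one; a server that has already opened or
    #  mapped the old file keeps serving it until restarted)
    mv -f ./site-new.cdb ./site.cdb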
* (won't fix) the CDB database **maximum size is 2 GiB**; (however if you have a site this large, you are probably doing something extremely wrong, as large files should be offloaded to something like AWS S3 and served through a CDN like CloudFlare or AWS CloudFront;)
* (won't fix) the server **does not support per-request decompression / recompression**; this implies that if the site content was saved in the CDB database with compression (say ``gzip``), the server will serve all resources compressed (i.e. ``Content-Encoding: gzip``), regardless of what the browser accepts (i.e. ``Accept-Encoding: gzip``); the same applies for uncompressed content; (however always using ``gzip`` compression is safe enough, as it is implemented in virtually all browsers and HTTP clients out there;) (see the ``curl`` check below;)
* (won't fix) regarding the "atomic" site changes, there is a small time window in which a client that has fetched an "old" version of a resource (say an HTML page), but has not yet fetched the required resources (say the CSS or JS files), will, if the CDB database was swapped in between, fetch the "new" version of these required resources; however due to the low serving latency, this time window is extremely small; (**this is not a limitation of this HTTP server, but a limitation of the way the "web" is built;** always use fingerprints in your resource URLs, and perhaps include both the current and the previous version on each deploy;)
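* (regarding the fixed ``Content-Encoding`` above) a minimal check with ``curl``, assuming the archive was created with ``gzip`` compression; the response headers should still show ``Content-Encoding: gzip`` even though the request advertises only ``identity``: ::

    curl -sS -D - -o /dev/null \
        -H 'Accept-Encoding: identity' \
        http://127.0.0.1:8080/ \
    #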
* `ciprian@volution.ro <mailto:ciprian@volution.ro>`_ or `ciprian.craciun@gmail.com <mailto:ciprian.craciun@gmail.com>`_
* `<https://volution.ro/ciprian>`_
* `<https://github.com/cipriancraciun>`_
Notice (copyright and licensing)
================================
Notice -- short version
-----------------------
The code is licensed under AGPL 3 or later.
If you **change** the code within this repository **and use** it for **non-personal** purposes, you'll have to release it as per AGPL.
Notice -- long version
----------------------
For details about the copyright and licensing, please consult the `notice <./documentation/licensing/notice.txt>`__ file in the `documentation/licensing <./documentation/licensing>`_ folder.
If someone requires the sources and/or documentation to be released
under a different license, please send an email to the authors,
stating the licensing requirements, accompanied with the reasons
and other details; then, depending on the situation, the authors might
release the sources and/or documentation under a different license.