However "simple" doesn't imply "dumb" or "limited", instead it implies "efficiency" and removal of superfluous features, inline with UNIX's philosophy of `do one thing and do it well <https://en.wikipedia.org/wiki/Unix_philosophy#Do_One_Thing_and_Do_It_Well>`__.
As such ``kawipiko`` basically supports only ``GET`` (and ``HEAD``) requests and does not provide features like dynamic content, authentication, reverse proxying, etc.
However, ``kawipiko`` does provide something unique that no other HTTP server offers: the static website content is served from a CDB_ database with almost zero latency.
Moreover, the static website content can be compressed (with either ``gzip`` or ``brotli``) ahead of time, thus reducing not only CPU but also bandwidth and latency.
CDB_ databases are binary files that provide efficient read-only key-value lookup tables, initially used in some DNS and SMTP servers, mainly for their low overhead lookup operations, zero locking in multi-threaded / multi-process scenarios, and "atomic" multi-record updates.
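For a quick feel of how a CDB database behaves, here is a minimal sketch that builds and queries one; as an assumption, it uses the ``cdb`` utility from tinycdb (its own record format and flags, nothing ``kawipiko``-specific): ::

    # create a CDB database from two key-value records
    # (tinycdb's cdbmake format: +klen,dlen:key->data, ended by an empty line)
    printf '+3,5:one->first\n+3,6:two->second\n\n' > ./records.txt
    cdb -c ./example.cdb ./records.txt

    # lookup a key; prints `first`
    cdb -q ./example.cdb one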
With regard to performance, as described in the `benchmarks section <#benchmarks>`__, ``kawipiko`` is on par with NGinx, sustaining 72k requests / second with 0.4ms latency for 99% of the requests, even on my 6-year-old laptop.
However, the main advantage over NGinx is not raw performance, but deployment and configuration simplicity, plus efficient management and storage of large collections of many small files.
Unlike most (if not all) other servers out there, where you just point your web server to the folder holding the static website content root, ``kawipiko`` takes a radically different approach.
In order to serve the static website content, one has to first "compile" it into the CDB file through ``kawipiko-archiver``, and then one can "serve" it from the CDB file through ``kawipiko-server``.
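For example, a minimal workflow might look like the following (a sketch; it assumes ``--sources`` and ``--archive`` flags for the two tools, plus the ``--bind`` flag described later in this document): ::

    # "compile" the static website content into a CDB archive
    kawipiko-archiver \
        --sources ./content \
        --archive ./content.cdb \
    #

    # serve the resulting archive over HTTP
    kawipiko-server \
        --bind 127.0.0.1:8080 \
        --archive ./content.cdb \
    #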
This two-step approach also presents a few opportunities:
* one can decouple the "building", "testing", and "publishing" phases of a static website, by using a similar CI/CD pipeline as done for other software projects;
* one can instantaneously rollback to a previous version if the newly published one has issues;
.. note ::
As described in the `limitations section <#limitations>`__, at the moment, if one rebuilds the CDB file, the server has to be restarted.
Each individual file (and consequently the corresponding HTTP response body) is compressed with either ``gzip`` or Brotli_; by default (or if ``identity`` is chosen) no compression is used.
Even if compression is explicitly requested, if the compression ratio is below a certain threshold (depending on the uncompressed size), the file is stored without any compression.
(It's senseless to force the client to spend time decompressing the response body if that time is not recovered during network transmission.)
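For example, to archive with ahead-of-time ``gzip`` compression (a sketch; the ``--compress`` flag name is an assumption, while ``--sources`` and ``--archive`` are as above): ::

    kawipiko-archiver \
        --sources ./content \
        --archive ./content.cdb \
        --compress gzip \
    #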
``--exclude-index``
Disables using ``index.*`` files (where ``.*`` is one of ``.html``, ``.htm``, ``.xhtml``, ``.xht``, ``.txt``, ``.json``, and ``.xml``) to respond to a request whose URL ends in ``/`` (corresponding to the folder wherein the ``index.*`` file is located).
(This can be used to implement "slash" blog style URL's like ``/blog/whatever/`` which maps to ``/blog/whatever/index.html``.)
``--exclude-strip``
Disables using a file with the suffix ``.html``, ``.htm``, ``.xhtml``, ``.xht``, or ``.txt`` to respond to a request whose URL does not exactly match an existing file.
``--exclude-cache``
Disables adding a ``Cache-Control: public, immutable, max-age=3600`` header that forces the browser (and other intermediary proxies) to cache the response for an hour (the ``public`` and ``max-age=3600`` arguments), and furthermore not to request it even on reloads (the ``immutable`` argument).
``--include-etag``
Enables adding an ``ETag`` response header that contains the SHA256 of the response body.
By not including the ``ETag`` header (i.e. the default), and because identical header sets are stored only once, if one has many files of the same type (which, without ``ETag``, generate identical headers), this can lead to a significant reduction in stored headers, and consequently in RAM usage.
(At this moment it does not support HTTP conditional requests, i.e. ``If-None-Match``, ``If-Modified-Since``, and their counterparts; however this ``ETag`` header might be used in conjunction with ``HEAD`` requests to see if the resource has changed.)
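For example, one could detect changes by issuing ``HEAD`` requests and comparing the returned ``ETag`` values (a sketch using ``curl``; the URL is hypothetical): ::

    # fetch only the headers and extract the ETag value;
    # a changed ETag means the resource body has changed
    curl --silent --head http://127.0.0.1:8080/index.html \
        | grep -i '^etag:'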
``--exclude-file-listing``
Disables the creation of an internal list of files that can be used in conjunction with the ``--index-all`` flag of the ``kawipiko-server``.
``--include-folder-listing``
Enables the creation of an internal list of folders. (Currently not used by the ``kawipiko-server`` tool.)
* any file with the following prefixes: ``.``, ``#``;
* any file with the following suffixes: ``~``, ``#``, ``.log``, ``.tmp``, ``.temp``, ``.lock``;
* any file that contains the following: ``#``;
* any file that exactly matches the following: ``Thumbs.db``, ``.DS_Store``;
* (at the moment these rules are not configurable through flags;)
``_wildcard.*`` files
.....................
If one places a file whose name matches ``_wildcard.*`` (i.e. with the prefix ``_wildcard.`` and any other suffix), it will be used to respond to any request whose URL fails to find a "better" match.
These wildcard files respect the folder hierarchy, in that wildcard files in (direct or transitive) subfolders override the wildcard file in their parents (direct or transitive).
One can freely use symlinks (including ones pointing outside of the content root), and they will be crawled during archival, respecting the "logical" hierarchy they introduce.
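For example, given the following (hypothetical) content layout, a request for ``/missing`` would be answered with the root wildcard, while ``/blog/missing`` would be answered with the deeper one: ::

    mkdir -p ./content/blog

    # site-wide fallback, used for any otherwise unmatched URL
    echo 'not found (site)' > ./content/_wildcard.txt

    # overrides the above for anything under /blog/
    echo 'not found (blog)' > ./content/blog/_wildcard.txt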
* by using ``--index-data-content``, a RAM-based hash-map is created to eliminate a CDB lookup operation for this purpose;
* ``--index-all`` enables all these indices;
* (depending on the use-case) it is highly recommended to use ``--index-paths``; if ``--exclude-etag`` was used during archival, one can also use ``--index-data-meta``;
* it is highly recommended to use ``--archive-inmem`` or ``--archive-mmap`` or else (especially if data is indexed) the net effect is that of loading everything in RAM;
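For example, a server invocation following these recommendations might look like this (a sketch, combining the flags described in this section): ::

    kawipiko-server \
        --bind 127.0.0.1:8080 \
        --archive ./content.cdb \
        --archive-mmap \
        --index-all \
    #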
``--bind``
The IP and port to listen on for requests.
``--processes`` and ``--threads``
The number of processes and threads per each process to start.
It is highly recommended to use 1 process and as many threads as there are cores.
Depending on the use-case, one can use multiple processes each with a single thread; this would reduce goroutine contention if it causes problems.
(However note that if using ``--archive-inmem`` each process will allocate its own copy of the database in RAM; in such cases it is highly recommended to use ``--archive-mmap``.)
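For example, on a 4-core machine one might use (a sketch; ``nproc`` simply reports the number of available cores): ::

    kawipiko-server \
        --bind 127.0.0.1:8080 \
        --archive ./content.cdb \
        --archive-mmap \
        --processes 1 \
        --threads "$(nproc)" \
    #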
Enables adding the ``Strict-Transport-Security: max-age=31536000`` and ``Content-Security-Policy: upgrade-insecure-requests`` response headers.
(Although at the moment ``kawipiko`` does not support HTTPS, it can be used behind a TLS terminator, load-balancer, or proxy that does support HTTPS; therefore these headers instruct the browser to always use HTTPS for the served domain.)
``--security-headers-disable``
Disables adding a few security related headers: ::
Starts the server in "dummy" mode, ignoring all archive-related arguments and always responding with ``hello world!\n``, without any additional headers except the HTTP status line and ``Content-Length``.
This argument can be used to benchmark the raw performance of the underlying Go and ``fasthttp`` stack; this is the upper limit of the achievable performance given the underlying technologies.
(From my own benchmarks, ``kawipiko`` adds only about 15% overhead when actually serving the ``hello-world.cdb`` archive.)
* (optionally) in order to reduce the serving latency even further, one can preload the entire CDB database in memory, or alternatively map it into memory (mmap_); this trades memory for CPU;
* "atomic" static website content changes; because the entire content is held in a single CDB database file, and because the file replacement is atomically achieved via the ``rename`` syscall (or the ``mv`` tool), all resources are "changed" at the same time;
* ``_wildcard.*`` files (where ``.*`` stands for the usual extensions like ``.txt``, ``.html``, etc.), which will be used if an actual resource is not found under that folder; (these files respect the hierarchical tree structure, i.e. "deeper" ones override the ones closer to the "root";)
* support for custom HTTP response headers (for specific files, for specific folders, etc.); (currently only ``Content-Type``, ``Content-Length``, ``Content-Encoding``, and optionally ``ETag`` are included; additionally ``Cache-Control: public, immutable, max-age=3600`` and a few security related headers are also included;)
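As hinted above, an "atomic" content change could be sketched as follows (rebuilding into a temporary file, then atomically replacing the live archive via ``mv``; note that, as mentioned in the limitations, the server currently has to be restarted afterwards): ::

    # rebuild the archive next to the live one, on the same filesystem
    kawipiko-archiver \
        --sources ./content \
        --archive ./content.cdb.new \
    #

    # atomically swap the live archive (`mv` uses the `rename` syscall)
    mv -f ./content.cdb.new ./content.cdb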
* (won't fix) the CDB database **maximum size is 4 GiB**; (however if you have a static website this large, you are probably doing something extremely wrong, as large files should be offloaded to something like AWS S3 and served through a CDN like CloudFlare or AWS CloudFront;)
* (won't fix) the server **does not support per-request decompression / recompression**; this implies that if the content was saved in the CDB database with compression (say ``gzip``), the server will serve all resources compressed (i.e. ``Content-Encoding: gzip``), regardless of what the browser accepts (i.e. ``Accept-Encoding: gzip``); the same applies for uncompressed content; (however always using ``gzip`` compression is safe enough, as it is implemented in virtually all browsers and HTTP clients out there; see also the sketch after this list;)
* (won't fix) regarding the "atomic" static website changes, there is a small time window in which a client that has fetched an "old" version of a resource (say an HTML page), but has not yet fetched the required resources (say the CSS or JS files), will, if the CDB database was swapped in between, fetch the "new" version of these required resources; however due to the low serving latency, this time window is extremely small; (**this is not a limitation of this HTTP server, but a limitation of the way the "web" is built;** always use fingerprints in your resource URLs, and perhaps always include the current and the previous version on each deploy;)
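The compression limitation above can be observed directly; for example (a sketch using ``curl``, assuming a ``gzip``-compressed archive and a hypothetical URL): ::

    # even though the client asks for an uncompressed body,
    # the response is served exactly as stored in the archive
    curl --silent --head --header 'Accept-Encoding: identity' \
            http://127.0.0.1:8080/index.html \
        | grep -i '^content-encoding:'

    # expected output:  Content-Encoding: gzip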
* under stress conditions (512 concurrent connections), you get around 74k requests / second, at about 15ms latency for 99% of the requests;
* **under extreme conditions (2048 concurrent connections), you get around 74k requests / second, at about 500ms latency for 99% of the requests (meanwhile the average is 50ms);**
* **the raw performance is comparable with NGinx_** (only 20% fewer requests / second for this "synthetic" benchmark); however for a "real" scenario (i.e. thousands of small files accessed in a random pattern) I think they are on par; (not to mention how simple it is to configure and deploy ``kawipiko`` as compared to NGinx;)
* (the NGinx configuration file can be found in the `examples folder <./examples>`__; the configuration was obtained after many experiments to squeeze out of NGinx as much performance as possible, given the targeted use-case, namely many small files;)
* 512 concurrent connections (handled by 2 threads): ::
wrk \
--threads 2 \
--connections 512 \
--timeout 6s \
--duration 30s \
--latency \
http://127.0.0.1:8080/ \
#
* 4096 concurrent connections (handled by 4 threads): ::
wrk \
--threads 4 \
--connections 4096 \
--timeout 6s \
--duration 30s \
--latency \
http://127.0.0.1:8080/ \
#
Methodology notes
.................
* the number of threads for the server plus for ``wrk`` shouldn't be larger than the number of available cores; (or use different machines for the server and the client;)
* also take into account that by default the number of "file descriptors" on most UNIX/Linux machines is 1024, therefore if you want to try with more than ~1000 connections, you need to raise this limit; (see below;)
* additionally, you can try to pin the server and ``wrk`` to specific cores, and increase various priorities (scheduling, IO, etc.); (given that Intel processors have HyperThreading, which appears to the OS as individual cores, you should make sure to pin each process to logical cores belonging to the same physical processor / core;)
* pinning the server (cores ``0`` and ``1`` are mapped on physical core ``1``): ::
sudo -u root -n -E -P -- \
\
taskset -c 0,1 \
nice -n -19 -- \
ionice -c 2 -n 0 -- \
chrt -r 10 \
prlimit -n16384 -- \
\
sudo -u "${USER}" -n -E -P -- \
\
kawipiko-server \
... \
#
* pinning the client (cores ``2`` and ``3`` are mapped on physical core ``2``): ::
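    # (mirrors the server pinning above, with the client cores and `wrk` swapped in)
    sudo -u root -n -E -P -- \
    \
    taskset -c 2,3 \
    nice -n -19 -- \
    ionice -c 2 -n 0 -- \
    chrt -r 10 \
    prlimit -n16384 -- \
    \
    sudo -u "${USER}" -n -E -P -- \
    \
    wrk \
        ... \
    #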
Until I expand upon why I have chosen to use CDB for serving static website content, you can read about `sparkey <https://github.com/spotify/sparkey>`__ from Spotify.
For details about the copyright and licensing, please consult the `notice <./documentation/licensing/notice.txt>`__ file in the `documentation/licensing <./documentation/licensing>`__ folder.
* `Benchmarking LevelDB vs. RocksDB vs. HyperLevelDB vs. LMDB Performance for InfluxDB <https://www.influxdata.com/blog/benchmarking-leveldb-vs-rocksdb-vs-hyperleveldb-vs-lmdb-performance-for-influxdb/>`__ (article);
* `Badger vs LMDB vs BoltDB: Benchmarking key-value databases in Go <https://blog.dgraph.io/post/badger-lmdb-boltdb/>`__ (article);
* `Benchmarking BDB, CDB and Tokyo Cabinet on large datasets <https://www.dmo.ca/blog/benchmarking-hash-databases-on-large-data/>`__ (article);