``kawipiko`` is a **super-lightweight static HTTP (and HTTPS, H2, H3) server** written in Go, whose main purpose is to serve static content as fast as possible, and with the lowest possible resource consumption (CPU and RAM).
However "simple" doesn't imply "dumb" or "limited", instead it implies "efficient" through the removal of superfluous features, thus being inline with UNIX's old philosophy of `"do one thing and do it well" <https://en.wikipedia.org/wiki/Unix_philosophy#Do_One_Thing_and_Do_It_Well>`__.
As such ``kawipiko`` basically supports only ``GET`` requests, and does not provide features like dynamic content generation, authentication, reverse proxying, etc.; while still providing compression (``gzip``, ``zopfli``, ``brotli``), plus HTML-CSS-JS minification (TODO), without affecting its performance (due to its unique architecture as seen below).
What ``kawipiko`` does provide is something unique that no other HTTP server offers: the static content is served from a `CDB database <#why-cdb>`__ with almost no latency (as compared to classical static servers that still have to pass through the OS via the ``open-read-close`` syscalls).
Moreover, as noted earlier, the static content can be compressed (with either ``gzip``, ``zopfli`` or ``brotli``) or minified ahead of time, thus reducing not only CPU, but also bandwidth and latency.
`CDB databases <#why-cdb>`__ are binary files that provide efficient read-only key-value lookup tables, initially used in some DNS and SMTP servers, mainly for their low overhead lookup operations, zero locking in multi-threaded / multi-process scenarios, and "atomic" multi-record updates.
For those familiar with Netlify (or competitors like CloudFlare Pages, GitHub Pages, etc.), ``kawipiko`` is a "host-it-yourself" alternative featuring:
* self-contained deployment with simple configuration; (i.e. just `fetch the executable <#installation>`__ and use the `proper flags <#kawipiko-server>`__;)
* (hopefully) extremely secure; (i.e. it doesn't launch processes, it doesn't connect to other services or databases, it doesn't open any files, etc.; basically you can easily ``chroot`` it, or containerize it as is the fashion these days;)
* highly portable, supporting at least Linux (the main development, testing and deployment platform), FreeBSD, OpenBSD, and OSX;
With regard to performance, as described in the `benchmarks section <#benchmarks>`__, ``kawipiko`` is at least on-par with NGinx, sustaining over 100K requests / second with 0.25ms latency for 99% of the requests even on my 6-year-old laptop.
However the main advantage over NGinx is not raw performance, but deployment and configuration simplicity, plus the efficient management and storage of large collections of small files.
* ``kawipiko`` -- an all-in-one executable that bundles all functionality; (i.e. ``kawipiko server ...`` or ``kawipiko archiver ...``);
Unlike most (if not all) other servers out there, with which you just point the server at the folder holding the static website content root, ``kawipiko`` takes a radically different approach.
In order to serve the static content, one has to first "archive" the content into the CDB file through ``kawipiko-archiver``, and then one can "serve" it from the CDB file through ``kawipiko-server``.
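For example, a complete round-trip could look like the following; (this is only a sketch, and the flag names are illustrative, therefore consult the `proper flags <#kawipiko-server>`__ section for the actual options;)

::

    # archive the static website content into a CDB file;
    # (the compression flag illustrates the ahead-of-time gzip / zopfli / brotli support;)
    kawipiko-archiver \
            --sources ./website-content \
            --archive ./website.cdb \
            --compress gzip

    # serve the resulting CDB file over HTTP;
    # (preloading or mmap-ing the archive, as mentioned below, is optional;)
    kawipiko-server \
            --archive ./website.cdb \
            --bind 127.0.0.1:8080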
This two-phase approach also presents a few opportunities:
* one can decouple the "building", "testing", and "publishing" phases of a static website, by using a CI/CD pipeline similar to the one used for other software projects;
* one can instantaneously roll back to a previous version if the newly published one has issues;
* (optionally) in order to reduce the serving latency even further, one can preload the entire CDB database in memory, or alternatively map it into memory (mmap_); this trades memory for CPU;
* "atomic" static website content changes; because the entire content is held in a single CDB database file, and because the file replacement is atomically achieved via the ``rename`` syscall (or the ``mv`` tool), all resources are "changed" at the same time;
* ``_wildcard.*`` files (where ``.*`` are the regular extensions like ``.txt``, ``.html``, etc.) which will be used if an actual resource is not found under that folder; (these files respect the hierarchical tree structure, i.e. "deeper" ones override the ones closer to the "root"; see the layout example after this list;)
* support for custom HTTP response headers (for specific files, for specific folders, etc.); (currently only ``Content-Type``, ``Content-Length``, ``Content-Encoding`` and optionally ``ETag`` are included; additionally ``Cache-Control: public, immutable, max-age=3600`` and a few security related headers are also included;)
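As an illustration of the ``_wildcard.*`` fallback described above (a hypothetical content layout, not taken from an actual deployment):

::

    ./website-content/
    ├── index.html
    ├── _wildcard.html        <- used when a resource is not found anywhere under /;
    └── blog/
        ├── first-post.html
        └── _wildcard.html    <- "deeper", thus it overrides the root one for /blog/...;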
* (won't fix) the CDB database **maximum size is 4 GiB**; (however if you have a static website this large, you are probably doing something extremely wrong, as large files should be offloaded to something like AWS S3 and served through a CDN like CloudFlare or AWS CloudFront;)
* (won't fix) the server **does not support per-request decompression / recompression**; this implies that if the content was saved in the CDB database with compression (say ``gzip``), the server will serve all resources compressed (i.e. ``Content-Encoding: gzip``), regardless of what the browser accepts (i.e. ``Accept-Encoding: gzip``); the same applies for uncompressed content; (however always using ``gzip`` compression is safe enough as it is implemented in virtually all browsers and HTTP clients out there; see also the example after this list;)
* (won't fix) regarding the "atomic" static website changes, there is a small time window in which a client that has fetched an "old" version of a resource (say an HTML page), but has not yet fetched the resources it requires (say the CSS or JS files), will, if the CDB database is swapped in the meantime, fetch the "new" version of those required resources; however due to the low latency serving, this time window is extremely small; (**this is not a limitation of this HTTP server, but a limitation of the way the "web" is built;** always use fingerprints in your resources' URLs, and perhaps always include both the current and the previous version on each deploy;)
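For example, to check what the server actually sends for content that was archived with compression, one can just inspect the response headers of a plain ``GET`` request; (only standard ``curl`` options are used below, and the URL is a placeholder;)

::

    # dump only the response headers of a GET request, discarding the body;
    curl --silent --dump-header - --output /dev/null http://127.0.0.1:8080/index.html

    # if the archive was created with gzip compression, expect to see
    # "Content-Encoding: gzip" regardless of the "Accept-Encoding" header
    # sent (or not sent) by the client;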
Until I expand upon why I have chosen to use CDB for serving static website content, you can read about `sparkey <https://github.com/spotify/sparkey>`__ from Spotify.
Because Go is highly portable, highly stable, and especially because it easily supports cross-compiling statically linked binaries for any platform it supports.
For details about the copyright and licensing, please consult the `notice <./documentation/licensing/notice.txt>`__ file in the `documentation/licensing <./documentation/licensing>`__ folder.
* `Benchmarking LevelDB vs. RocksDB vs. HyperLevelDB vs. LMDB Performance for InfluxDB <https://www.influxdata.com/blog/benchmarking-leveldb-vs-rocksdb-vs-hyperleveldb-vs-lmdb-performance-for-influxdb/>`__ (article);
* `Badger vs LMDB vs BoltDB: Benchmarking key-value databases in Go <https://blog.dgraph.io/post/badger-lmdb-boltdb/>`__ (article);
* `Benchmarking BDB, CDB and Tokyo Cabinet on large datasets <https://www.dmo.ca/blog/benchmarking-hash-databases-on-large-data/>`__ (article);