diff --git "a/database/web.json" "b/database/web.json" new file mode 100644--- /dev/null +++ "b/database/web.json" @@ -0,0 +1 @@ +[{"avatar_url": "https://avatars.githubusercontent.com/u/125233599?v=4", "name": "zap", "full_name": "zigzap/zap", "created_at": "2023-01-12T21:36:31Z", "description": "blazingly fast backends in zig", "default_branch": "master", "open_issues": 20, "stargazers_count": 2825, "forks_count": 92, "watchers_count": 2825, "tags_url": "https://api.github.com/repos/zigzap/zap/tags", "license": "-", "topics": ["api", "blazingly", "fast", "http", "rest", "zig", "zig-package"], "size": 11378, "fork": false, "updated_at": "2025-04-12T21:03:27Z", "has_build_zig": true, "has_build_zig_zon": true, "readme_content": "# \u26a1zap\u26a1 - blazingly fast backends in zig\n\n![](https://github.com/zigzap/zap/actions/workflows/build-current-zig.yml/badge.svg) ![](https://github.com/zigzap/zap/actions/workflows/mastercheck.yml/badge.svg) [![Discord](https://img.shields.io/discord/1107835896356675706?label=chat&logo=discord&style=plastic)](https://discord.gg/jQAAN6Ubyj)\n\nZap is the [zig](https://ziglang.org) replacement for the REST APIs I used to\nwrite in [python](https://python.org) with\n[Flask](https://flask.palletsprojects.com) and\n[mongodb](https://www.mongodb.com), etc. It can be considered to be a\nmicroframework for web applications.\n\nWhat I needed as a replacement was a blazingly fast and robust HTTP server that\nI could use with Zig, and I chose to wrap the superb evented networking C\nlibrary [facil.io](https://facil.io). 
Zap wraps and patches [facil.io - the C\nweb application framework](https://facil.io).\n\n## **\u26a1ZAP\u26a1 IS FAST, ROBUST, AND STABLE**\n\n\nAfter having used ZAP in production for years, I can confidently assert that it\nproved to be:\n\n- \u26a1 **blazingly fast** \u26a1\n- \ud83d\udcaa **extremely robust** \ud83d\udcaa\n\n## FAQ:\n\n- Q: **What version of Zig does Zap support?**\n - Zap uses the latest stable zig release (0.14.0), so you don't have to keep\n up with frequent breaking changes. It's an \"LTS feature\".\n- Q: **Can Zap build with Zig's master branch?**\n - See the `zig-master` branch. Please note that the zig-master branch is not\n the official master branch of ZAP. Be aware that I don't provide tagged\n releases for it. If you know what you are doing, that shouldn't stop you\n from using it with zig master though.\n- Q: **Where is the API documentation?**\n - Docs are a work in progress. You can check them out\n [here](https://zigzap.org/zap).\n - Run `zig build run-docserver` to serve them locally.\n- Q: **Does ZAP work on Windows?**\n - No. This is due to the underlying facil.io C library. Future versions\n of facil.io might support Windows but there is no timeline yet. Your best\n options on Windows are **WSL2 or a docker container**.\n- Q: **Does ZAP support TLS / HTTPS?**\n - Yes, ZAP supports using the system's openssl. 
See the\n [https](./examples/https/https.zig) example and make sure to build with\n the `-Dopenssl` flag or the environment variable `ZAP_USE_OPENSSL=true`:\n - `.openssl = true,` (in dependent projects' build.zig,\n `b.dependency(\"zap\" .{...})`)\n - `ZAP_USE_OPENSSL=true zig build https`\n - `zig build -Dopenssl=true https`\n\n## Here's what works\n\nI recommend checking out **the new App-based** or the Endpoint-based\nexamples, as they reflect how I intended Zap to be used.\n\nMost of the examples are super stripped down to only include what's necessary to\nshow a feature.\n\n**To see API docs, run `zig build run-docserver`.** To specify a custom\nport and docs dir: `zig build docserver && zig-out/bin/docserver --port=8989\n--docs=path/to/docs`.\n\n### New App-Based Examples\n\n- **[app_basic](examples/app/basic.zig)**: Shows how to use zap.App with a\nsimple Endpoint.\n- **[app_auth](examples/app/auth.zig)**: Shows how to use zap.App with an\nEndpoint using an Authenticator.\n\nSee the other examples for specific uses of Zap.\n\nBenefits of using `zap.App`:\n\n- Provides a global, user-defined \"Application Context\" to all endpoints.\n- Made to work with \"Endpoints\": an endpoint is a struct that covers a `/slug`\n of the requested URL and provides a callback for each supported request method\n (get, put, delete, options, post, head, patch).\n- Each request callback receives:\n - a per-thread arena allocator you can use for throwaway allocations without\n worrying about freeing them.\n - the global \"Application Context\" of your app's choice\n- Endpoint request callbacks are allowed to return errors:\n - you can use `try`.\n - the endpoint's ErrorStrategy defines if runtime errors should be reported to\n the console, to the response (=browser for debugging), or if the error\n should be returned.\n\n### Legacy Endpoint-based examples\n\n- **[endpoint](examples/endpoint/)**: a simple JSON REST API example featuring a\n `/users` endpoint for performing 
PUT/DELETE/GET/POST operations and listing\n users, together with a simple frontend to play with. **It also introduces a\n `/stop` endpoint** that shuts down Zap, so **memory leak detection** can be\n performed in main().\n - Check out how [main.zig](examples/endpoint/main.zig) uses ZIG's awesome\n `GeneralPurposeAllocator` to report memory leaks when ZAP is shut down.\n The [StopEndpoint](examples/endpoint/stopendpoint.zig) just stops ZAP when\n receiving a request on the `/stop` route.\n- **[endpoint authentication](examples/endpoint_auth/endpoint_auth.zig)**: a\n simple authenticated endpoint. Read more about authentication\n [here](./doc/authentication.md).\n\n\n### Legacy Middleware-Style examples\n\n- **[MIDDLEWARE support](examples/middleware/middleware.zig)**: chain together\n request handlers in middleware style. Provide custom context structs, totally\n type-safe. If you come from GO this might appeal to you.\n- **[MIDDLEWARE with endpoint\n support](examples/middleware_with_endpoint/middleware_with_endpoint.zig)**:\n Same as the example above, but this time we use an endpoint at the end of the\n chain, by wrapping it via `zap.Middleware.EndpointHandler`. Mixing endpoints\n in your middleware chain allows for usage of Zap's authenticated endpoints and\n your custom endpoints. Since Endpoints use a simpler API, you have to use\n `r.setUserContext()` and `r.getUserContext()` with the request if you want to\n access the middleware context from a wrapped endpoint. Since this mechanism\n uses an `*anyopaque` pointer underneath (to not break the Endpoint API), it is\n less type-safe than `zap.Middleware`'s use of contexts.\n- [**Per Request Contexts**](./src/zap.zig#L102) : With the introduction of\n `setUserContext()` and `getUserContext()`, you can, of course use those two in\n projects that don't use `zap.Endpoint` or `zap.Middleware`, too, if you\n really, really, absolutely don't find another way to solve your context\n problem. 
**We recommend using a `zap.Endpoint`** inside of a struct that\n can provide all the context you need **instead**. You get access to your\n struct in the callbacks via the `@fieldParentPtr()` trick that is used\n extensively in Zap's examples, like the [endpoint\n example](examples/endpoint/endpoint.zig).\n\n### Specific and Very Basic Examples\n\n- **[hello](examples/hello/hello.zig)**: welcomes you with some static HTML\n- **[routes](examples/routes/routes.zig)**: a super easy example dispatching on\n the HTTP path. **NOTE**: The dispatch in the example is a super-basic\n DIY-style dispatch. See endpoint-based examples for more realistic use cases.\n- [**simple_router**](examples/simple_router/simple_router.zig): See how you\n can use `zap.Router` to dispatch to handlers by HTTP path.\n- **[serve](examples/serve/serve.zig)**: the traditional static web server with\n optional dynamic request handling\n- **[sendfile](examples/sendfile/sendfile.zig)**: simple example of how to send\n a file, honoring compression headers, etc.\n- **[bindataformpost](examples/bindataformpost/bindataformpost.zig)**: example\n to receive binary files via form post.\n- **[hello_json](examples/hello_json/hello_json.zig)**: serves you json\n dependent on HTTP path\n- **[mustache](examples/mustache/mustache.zig)**: a simple example using\n [mustache](https://mustache.github.io/) templating.\n- **[http parameters](examples/http_params/http_params.zig)**: a simple example\n sending itself query parameters of all supported types.\n- **[cookies](examples/cookies/cookies.zig)**: a simple example sending itself a\n cookie and responding with a session cookie.\n- **[websockets](examples/websockets/)**: a simple websockets chat for the\n browser.\n- **[Username/Password Session\n Authentication](./examples/userpass_session_auth/)**: A convenience\n authenticator that redirects un-authenticated requests to a login page and\n sends cookies containing session tokens based on username/password pairs\n 
received via POST request.\n- [**Error Trace Responses**](./examples/senderror/senderror.zig): You can now\n call `r.sendError(err, status_code)` when you catch an error and a stack trace\n will be returned to the client / browser.\n- [**HTTPS**](examples/https/https.zig): Shows how easy it is to use facil.io's\n openssl support. Must be compiled with `-Dopenssl=true` or the environment\n variable `ZAP_USE_OPENSSL` set to `true` and requires openssl dev dependencies\n (headers, lib) to be installed on the system.\n - run it like this: `ZAP_USE_OPENSSL=true zig build run-https`\n OR like this: `zig build -Dopenssl=true run-https`\n - it will tell you how to generate certificates\n\n\n## \u26a1blazingly fast\u26a1\n\nClaiming to be blazingly fast is the new black. At least, Zap doesn't slow you\ndown and if your server performs poorly, it's probably not exactly Zap's fault.\nZap relies on the [facil.io](https://facil.io) framework and so it can't really\nclaim any performance fame for itself. In this initial implementation of Zap,\nI didn't care about optimizations at all.\n\nBut, how fast is it? Being blazingly fast is relative. When compared with a\nsimple GO HTTP server, a simple Zig Zap HTTP server performed really well on my\nmachine (x86_64-linux):\n\n- Zig Zap was nearly 30% faster than GO\n- Zig Zap had over 50% more throughput than GO\n- **YMMV!!!**\n\nSo, being somewhere in the ballpark of basic GO performance, zig zap seems to be\n... 
of reasonable performance \ud83d\ude0e.\n\nI can rest my case that developing ZAP was a good idea because it's faster than\nboth alternatives: a) staying with Python, and b) creating a GO + Zig hybrid.\n\n### On (now missing) Micro-Benchmarks\n\nI used to have some micro-benchmarks in this repo, showing that Zap beat all the\nother things I tried, and eventually got tired of the meaningless discussions\nthey provoked, the endless issues and PRs that followed, wanting me to add and\nmaintain even more contestants, do more justice to beloved other frameworks,\netc.\n\nCase in point, even for me the micro-benchmarks became meaningless. They were\njust some rough indicator to me confirming that I didn't do anything terribly\nwrong to facil.io, and that facil.io proved to be a reasonable choice, also from\na performance perspective.\n\nHowever, none of the projects I use Zap for, ever even remotely resembled\nanything close to a static HTTP response micro-benchmark.\n\nFor my more CPU-heavy than IO-heavy use-cases, a thread-based microframework\nthat's super robust is still my preferred choice, to this day.\n\nHaving said that, I would **still love** for other, pure-zig HTTP frameworks to\neventually make Zap obsolete. Now, in 2025, the list of candidates is looking\nreally promising.\n\n### \ud83d\udce3 Shout-Outs\n\n- [http.zig](https://github.com/karlseguin/http.zig) : Pure Zig! Close to Zap's\n model. Performance = good!\n- [jetzig](https://github.com/jetzig-framework/jetzig) : Comfortably develop\n modern web applications quickly, using http.zig under the hood\n- [zzz](https://github.com/tardy-org/zzz) : Super promising, super-fast,\n especially for IO-heavy tasks, io_uring support - need I say more?\n\n\n## \ud83d\udcaa Robust\n\nZAP is **very robust**. In fact, it is so robust that I was confidently able to\nonly work with in-memory data (RAM) in all my ZAP projects so far: over 5 large\nonline research experiments. 
No database, no file persistence, until I hit\n\"save\" at the end \ud83d\ude0a.\n\nSo I was able to postpone my cunning data persistence strategy that's similar to\na mark-and-sweep garbage collector and would only persist \"dirty\" data when\ntraffic is low, in favor of getting stuff online more quickly. But even if\nimplemented, such a persistence strategy is risky because when traffic is not\nlow, it means the system is under (heavy) load. Would you confidently NOT save\ndata when load is high and the data changes most frequently -> the potential\ndata loss is maximized?\n\nTo answer that question, I just skipped it. I skipped saving any data until\nreceiving a \"save\" signal via API. And it worked. ZAP just kept on zapping. When\ntraffic calmed down or all experiment participants had finished, I hit \"save\"\nand went on analyzing the data.\n\nHandling all errors does pay off after all. No hidden control flow, no hidden\nerrors or exceptions is one of Zig's strengths.\n\nTo be honest: There are still pitfalls. E.g. if you request large stack sizes\nfor worker threads, Zig won't like that and panic. So make sure you don't have\nlocal variables that require tens of megabytes of stack space.\n\n\n### \ud83d\udee1\ufe0f Memory-safe\n\nSee the [StopEndpoint](examples/endpoint/stopendpoint.zig) in the\n[endpoint](examples/endpoint) example. The `StopEndpoint` just stops ZAP when\nreceiving a request on the `/stop` route. That example uses ZIG's awesome\n`GeneralPurposeAllocator` in [main.zig](examples/endpoint/main.zig) to report\nmemory leaks when ZAP is shut down.\n\nYou can use the same strategy in your debug builds and tests to check if your\ncode leaks memory.\n\n\n\n## Getting started\n\nMake sure you have **zig 0.14.0** installed. Fetch it from\n[here](https://ziglang.org/download).\n\n```shell\n$ git clone https://github.com/zigzap/zap.git\n$ cd zap\n$ zig build run-hello\n$ # open http://localhost:3000 in your browser\n```\n... 
and open [http://localhost:3000](http://localhost:3000) in your browser.\n\n## Using \u26a1zap\u26a1 in your own projects\n\nMake sure you have **the latest zig release (0.14.0)** installed. Fetch it from\n[here](https://ziglang.org/download).\n\nIf you don't have an existing zig project, create one like this:\n\n```shell\n$ mkdir zaptest && cd zaptest\n$ zig init\n```\n\nWith an existing Zig project, adding Zap to it is easy:\n\n1. Zig fetch zap\n2. Add zap to your `build.zig`\n\nIn your zig project folder (where `build.zig` is located), run:\n\n\n```\nzig fetch --save \"git+https://github.com/zigzap/zap#v0.10.1\"\n```\n\n\nThen, in your `build.zig`'s `build` function, add the following before\n`b.installArtifact(exe)`:\n\n```zig\n const zap = b.dependency(\"zap\", .{\n .target = target,\n .optimize = optimize,\n .openssl = false, // set to true to enable TLS support\n });\n\n exe.root_module.addImport(\"zap\", zap.module(\"zap\"));\n```\n\nFrom then on, you can use the Zap package in your project via `const zap =\n@import(\"zap\");`. Check out the examples to see how to use Zap.\n\n\n## Contribute to \u26a1zap\u26a1 - blazingly fast\n\nAt the current time, I can only add to zap what I need for my personal and\nprofessional projects. While this happens **blazingly fast**, some if not all\nnice-to-have additions will have to wait. You are very welcome to help make the\nworld a blazingly fast place by providing patches or pull requests, add\ndocumentation or examples, or interesting issues and bug reports - you'll know\nwhat to do when you receive your calling \ud83d\udc7c.\n\n**We have our own [ZAP discord](https://discord.gg/jQAAN6Ubyj) server!!!**\n\n## Support \u26a1zap\u26a1\n\nBeing blazingly fast requires a constant feed of caffeine. I usually manage to\nprovide that to myself for myself. 
However, to support keeping the juices\nflowing and putting a smile on my face and that warm and cozy feeling into my\nheart, you can always [buy me a coffee](https://buymeacoffee.com/renerocksai)\n\u2615. All donations are welcomed \ud83d\ude4f blazingly fast! That being said, just saying\n\"hi\" also works wonders with the smiles, warmth, and coziness \ud83d\ude0a.\n\n## Examples\n\nYou build and run the examples via:\n\n```shell\n$ zig build [EXAMPLE]\n$ ./zig-out/bin/[EXAMPLE]\n```\n\n... where `[EXAMPLE]` is one of `hello`, `routes`, `serve`, ... see the [list of\nexamples above](#heres-what-works).\n\nExample: building and running the hello example:\n\n```shell\n$ zig build hello\n$ ./zig-out/bin/hello\n```\n\nTo just run an example, like `routes`, without generating an executable, run:\n\n```shell\n$ zig build run-[EXAMPLE]\n```\n\nExample: building and running the routes example:\n\n```shell\n$ zig build run-routes\n```\n\n### [hello](examples/hello/hello.zig)\n\n```zig\nconst std = @import(\"std\");\nconst zap = @import(\"zap\");\n\nfn on_request(r: zap.Request) void {\n if (r.path) |the_path| {\n std.debug.print(\"PATH: {s}\\n\", .{the_path});\n }\n\n if (r.query) |the_query| {\n std.debug.print(\"QUERY: {s}\\n\", .{the_query});\n }\n r.sendBody(\"
<html><body><h1>Hello from ZAP!!!</h1></body></html>
\") catch return;\n}\n\npub fn main() !void {\n var listener = zap.HttpListener.init(.{\n .port = 3000,\n .on_request = on_request,\n .log = true,\n });\n try listener.listen();\n\n std.debug.print(\"Listening on 0.0.0.0:3000\\n\", .{});\n\n // start worker threads\n zap.start(.{\n .threads = 2,\n .workers = 2,\n });\n}\n```\n\n\n\n\n\n"}, {"avatar_url": "https://avatars.githubusercontent.com/u/157241284?v=4", "name": "jetzig", "full_name": "jetzig-framework/jetzig", "created_at": "2024-01-20T18:50:29Z", "description": "Jetzig is a web framework written in Zig", "default_branch": "main", "open_issues": 28, "stargazers_count": 904, "forks_count": 43, "watchers_count": 904, "tags_url": "https://api.github.com/repos/jetzig-framework/jetzig/tags", "license": "-", "topics": ["zig-package"], "size": 885, "fork": false, "updated_at": "2025-04-13T03:36:39Z", "has_build_zig": true, "has_build_zig_zon": true, "readme_content": "[![CI](https://github.com/jetzig-framework/jetzig/actions/workflows/CI.yml/badge.svg)](https://github.com/jetzig-framework/jetzig/actions/workflows/CI.yml)\n\n![Jetzig Logo](demo/public/jetzig.png)\n\n_Jetzig_ is a web framework written in 100% pure [Zig](https://ziglang.org) :lizard: for _Linux_, _OS X_, _Windows_, and any _OS_ that can compile _Zig_ code.\n\nOfficial website: [jetzig.dev](https://www.jetzig.dev/)\n\n_Jetzig_ aims to provide a rich set of user-friendly tools for building modern web applications quickly. See the checklist below.\n\nJoin us on Discord ! 
[https://discord.gg/eufqssz7X6](https://discord.gg/eufqssz7X6).\n\nIf you are interested in _Jetzig_ you will probably find these tools interesting too:\n\n* [Zap](https://github.com/zigzap/zap)\n* [http.zig](https://github.com/karlseguin/http.zig) (_Jetzig_'s backend)\n* [tokamak](https://github.com/cztomsik/tokamak)\n* [zig-router](https://github.com/Cloudef/zig-router)\n* [zig-webui](https://github.com/webui-dev/zig-webui/)\n* [ZTS](https://github.com/zigster64/zts)\n* [Zine](https://github.com/kristoff-it/zine)\n* [Zinc](https://github.com/zon-dev/zinc/)\n* [zUI](https://github.com/thienpow/zui)\n\n## Checklist\n\n* :white_check_mark: File system-based routing with [slug] matching.\n* :white_check_mark: _HTML_ and _JSON_ response (inferred from extension and/or `Accept` header).\n* :white_check_mark: _JSON_-compatible response data builder.\n* :white_check_mark: _HTML_ templating (see [Zmpl](https://github.com/jetzig-framework/zmpl)).\n* :white_check_mark: Per-request arena allocator.\n* :white_check_mark: Sessions.\n* :white_check_mark: Cookies.\n* :white_check_mark: Error handling.\n* :white_check_mark: Static content from /public directory.\n* :white_check_mark: Request/response headers.\n* :white_check_mark: Stack trace output on error.\n* :white_check_mark: Static content generation.\n* :white_check_mark: Param/JSON payload parsing/abstracting.\n* :white_check_mark: Static content parameter definitions.\n* :white_check_mark: Middleware interface.\n* :white_check_mark: MIME type inference.\n* :white_check_mark: Email delivery.\n* :white_check_mark: Background jobs.\n* :white_check_mark: General-purpose cache.\n* :white_check_mark: Development server auto-reload.\n* :white_check_mark: Testing helpers for testing HTTP requests/responses.\n* :white_check_mark: Custom/non-conventional routes.\n* :white_check_mark: Database integration.\n* :x: Environment configurations (development/production/etc.)\n* :x: Email receipt (via SendGrid/AWS SES/etc.)\n\n## 
LICENSE\n\n[MIT](LICENSE)\n\n## Contributors\n\n* [Zackary Housend](https://github.com/z1fire)\n* [Andreas St\u00fchrk](https://github.com/Trundle)\n* [Karl Seguin](https://github.com/karlseguin)\n* [Bob Farrell](https://github.com/bobf)\n"}, {"avatar_url": "https://avatars.githubusercontent.com/u/206480?v=4", "name": "http.zig", "full_name": "karlseguin/http.zig", "created_at": "2023-03-13T08:51:53Z", "description": "An HTTP/1.1 server for zig", "default_branch": "master", "open_issues": 4, "stargazers_count": 926, "forks_count": 60, "watchers_count": 926, "tags_url": "https://api.github.com/repos/karlseguin/http.zig/tags", "license": "-", "topics": ["http-server", "zig", "zig-library", "zig-package"], "size": 1080, "fork": false, "updated_at": "2025-04-13T12:59:17Z", "has_build_zig": true, "has_build_zig_zon": true, "readme_content": "# An HTTP/1.1 server for Zig.\n\n```zig\nconst std = @import(\"std\");\nconst httpz = @import(\"httpz\");\n\npub fn main() !void {\n var gpa = std.heap.GeneralPurposeAllocator(.{}){};\n const allocator = gpa.allocator();\n\n // More advance cases will use a custom \"Handler\" instead of \"void\".\n // The last parameter is our handler instance, since we have a \"void\"\n // handler, we passed a void ({}) value.\n var server = try httpz.Server(void).init(allocator, .{.port = 5882}, {});\n defer {\n // clean shutdown, finishes serving any live request\n server.stop();\n server.deinit();\n }\n \n var router = try server.router(.{});\n router.get(\"/api/user/:id\", getUser, .{});\n\n // blocks\n try server.listen(); \n}\n\nfn getUser(req: *httpz.Request, res: *httpz.Response) !void {\n res.status = 200;\n try res.json(.{.id = req.param(\"id\").?, .name = \"Teg\"}, .{});\n}\n```\n\n# Table of Contents\n* [Examples](#examples)\n* [Installation](#installation)\n* [Alternatives](#alternatives)\n* [Handler](#handler)\n - [Custom Dispatch](#custom-dispatch)\n - [Per-Request Context](#per-request-context)\n - [Custom Not Found](#not-found)\n - 
[Custom Error Handler](#error-handler)\n - [Dispatch Takeover](#takover)\n* [Memory And Arenas](#memory-and-arenas)\n* [httpz.Request](httpzrequest)\n* [httpz.Response](httpzresponse)\n* [Router](#router)\n* [Middlewares](#middlewares)\n* [Configuration](#configuration)\n* [Metrics](#metrics)\n* [Testing](#testing)\n* [HTTP Compliance](#http-compliance)\n* [Server Side Events](#server-side-events)\n* [Websocket](#websocket)\n\n# Versions\nThe `master` branch targets Zig 0.14. The `dev` branch targets the latest version of Zig.\n\n# Examples\nSee the [examples](https://github.com/karlseguin/http.zig/tree/master/examples) folder for examples. If you clone this repository, you can run `zig build example_#` to run a specific example:\n\n```bash\n$ zig build example_1\nlistening http://localhost:8800/\n```\n\n# Installation\n1) Add http.zig as a dependency in your `build.zig.zon`:\n\n```bash\nzig fetch --save git+https://github.com/karlseguin/http.zig#master\n```\n\n2) In your `build.zig`, add the `httpz` module as a dependency you your program:\n\n```zig\nconst httpz = b.dependency(\"httpz\", .{\n .target = target,\n .optimize = optimize,\n});\n\n// the executable from your call to b.addExecutable(...)\nexe.root_module.addImport(\"httpz\", httpz.module(\"httpz\"));\n```\n\nThe library tracks Zig master. If you're using a specific version of Zig, use the appropriate branch.\n\n# Alternatives\nIf you're looking for a higher level web framework with more included functionality, consider [JetZig](https://www.jetzig.dev/) or [Tokamak](https://github.com/cztomsik/tokamak) which are built on top of httpz.\n\n## Why not std.http.Server\n`std.http.Server` is very slow and assumes well-behaved clients.\n\nThere are many Zig HTTP server implementations. Most wrap `std.http.Server` and tend to be slow. Benchmark it, you'll see. A few wrap C libraries and are faster (though some of these are slow too!). \n\nhttp.zig is written in Zig, without using `std.http.Server`. 
On an M2, a basic request can hit 140K requests per seconds.\n\n# Handler\nWhen a non-void Handler is used, the value given to `Server(H).init` is passed to every action. This is how application-specific data can be passed into your actions.\n\nFor example, using [pg.zig](https://github.com/karlseguin/pg.zig), we can make a database connection pool available to each action:\n\n```zig\nconst pg = @import(\"pg\");\nconst std = @import(\"std\");\nconst httpz = @import(\"httpz\");\n\npub fn main() !void {\n var gpa = std.heap.GeneralPurposeAllocator(.{}){};\n const allocator = gpa.allocator();\n\n var db = try pg.Pool.init(allocator, .{\n .connect = .{ .port = 5432, .host = \"localhost\"},\n .auth = .{.username = \"user\", .database = \"db\", .password = \"pass\"}\n });\n defer db.deinit();\n\n var app = App{\n .db = db,\n };\n\n var server = try httpz.Server(*App).init(allocator, .{.port = 5882}, &app);\n var router = try server.router(.{});\n router.get(\"/api/user/:id\", getUser, .{});\n try server.listen();\n}\n\nconst App = struct {\n db: *pg.Pool,\n};\n\nfn getUser(app: *App, req: *httpz.Request, res: *httpz.Response) !void {\n const user_id = req.param(\"id\").?;\n\n var row = try app.db.row(\"select name from users where id = $1\", .{user_id}) orelse {\n res.status = 404;\n res.body = \"Not found\";\n return;\n };\n defer row.deinit() catch {};\n\n try res.json(.{\n .id = user_id,\n .name = row.get([]u8, 0),\n }, .{});\n}\n```\n\n## Custom Dispatch\nBeyond sharing state, your custom handler can be used to control how httpz behaves. By defining a public `dispatch` method you can control how (or even **if**) actions are executed. 
For example, to log timing, you could do:\n\n```zig\nconst App = struct {\n pub fn dispatch(self: *App, action: httpz.Action(*App), req: *httpz.Request, res: *httpz.Response) !void {\n var timer = try std.time.Timer.start();\n\n // your `dispatch` doesn't _have_ to call the action\n try action(self, req, res);\n\n const elapsed = timer.lap() / 1000; // ns -> us\n std.log.info(\"{} {s} {d}\", .{req.method, req.url.path, elapsed});\n }\n};\n```\n\n### Per-Request Context\nThe 2nd parameter, `action`, is of type `httpz.Action(*App)`. This is a function pointer to the function you specified when setting up the routes. As we've seen, this works well to share global data. But, in many cases, you'll want to have request-specific data.\n\nConsider the case where you want your `dispatch` method to conditionally load a user (maybe from the `Authorization` header of the request). How would you pass this `User` to the action? You can't use the `*App` directly, as this is shared concurrently across all requests.\n\nTo achieve this, we'll add another structure called `RequestContext`. 
You can call this whatever you want, and it can contain any fields of methods you want.\n\n```zig\nconst RequestContext = struct {\n // You don't have to put a reference to your global data.\n // But chances are you'll want.\n app: *App,\n user: ?User,\n};\n```\n\nWe can now change the definition of our actions and `dispatch` method:\n\n```zig\nfn getUser(ctx: *RequestContext, req: *httpz.Request, res: *httpz.Response) !void {\n // can check if ctx.user is != null\n}\n\nconst App = struct {\n pub fn dispatch(self: *App, action: httpz.Action(*RequestContext), req: *httpz.Request, res: *httpz.Response) !void {\n var ctx = RequestContext{\n .app = self,\n .user = self.loadUser(req),\n }\n return action(&ctx, req, res);\n }\n\n fn loadUser(self: *App, req: *httpz.Request) ?User {\n // todo, maybe using req.header(\"authorizaation\")\n }\n};\n\n```\n\nhttpz infers the type of the action based on the 2nd parameter of your handler's `dispatch` method. If you use a `void` handler or your handler doesn't have a `dispatch` method, then you won't interact with `httpz.Action(H)` directly.\n\n## Not Found\nIf your handler has a public `notFound` method, it will be called whenever a path doesn't match a found route:\n\n```zig\nconst App = struct {\n pub fn notFound(_: *App, req: *httpz.Request, res: *httpz.Response) !void {\n std.log.info(\"404 {} {s}\", .{req.method, req.url.path});\n res.status = 404;\n res.body = \"Not Found\";\n }\n};\n```\n\n## Error Handler\nIf your handler has a public `uncaughtError` method, it will be called whenever there's an unhandled error. This could be due to some internal httpz bug, or because your action return an error. 
\n\n```zig\nconst App = struct {\n pub fn uncaughtError(_: *App, req: *httpz.Request, res: *httpz.Response, err: anyerror) void {\n std.log.info(\"500 {} {s} {}\", .{req.method, req.url.path, err});\n res.status = 500;\n res.body = \"sorry\";\n }\n};\n```\n\nNotice that, unlike `notFound` and other normal actions, the `uncaughtError` method cannot return an error itself.\n\n## Takeover\nFor the most control, you can define a `handle` method. This circumvents most of Httpz's dispatching, including routing. Frameworks like JetZig hook use `handle` in order to provide their own routing and dispatching. When you define a `handle` method, then any `dispatch`, `notFound` and `uncaughtError` methods are ignored by httpz.\n\n```zig\nconst App = struct {\n pub fn handle(app: *App, req: *httpz.Request, res: *httpz.Response) void {\n // todo\n }\n};\n```\n\nThe behavior `httpz.Server(H)` is controlled by \nThe library supports both simple and complex use cases. A simple use case is shown below. It's initiated by the call to `httpz.Server()`:\n\n```zig\nconst std = @import(\"std\");\nconst httpz = @import(\"httpz\");\n\npub fn main() !void {\n var gpa = std.heap.GeneralPurposeAllocator(.{}){};\n const allocator = gpa.allocator();\n\n var server = try httpz.Server().init(allocator, .{.port = 5882});\n \n // overwrite the default notFound handler\n server.notFound(notFound);\n\n // overwrite the default error handler\n server.errorHandler(errorHandler); \n\n var router = try server.router(.{});\n\n // use get/post/put/head/patch/options/delete\n // you can also use \"all\" to attach to all methods\n router.get(\"/api/user/:id\", getUser, .{});\n\n // start the server in the current thread, blocking.\n try server.listen(); \n}\n\nfn getUser(req: *httpz.Request, res: *httpz.Response) !void {\n // status code 200 is implicit. 
\n\n // The json helper will automatically set the res.content_type = httpz.ContentType.JSON;\n // Here we're passing an inferred anonymous structure, but you can pass anytype \n // (so long as it can be serialized using std.json.stringify)\n\n try res.json(.{.id = req.param(\"id\").?, .name = \"Teg\"}, .{});\n}\n\nfn notFound(_: *httpz.Request, res: *httpz.Response) !void {\n res.status = 404;\n\n // you can set the body directly to a []u8, but note that the memory\n // must be valid beyond your handler. Use the res.arena if you need to allocate\n // memory for the body.\n res.body = \"Not Found\";\n}\n\n// note that the error handler return `void` and not `!void`\nfn errorHandler(req: *httpz.Request, res: *httpz.Response, err: anyerror) void {\n res.status = 500;\n res.body = \"Internal Server Error\";\n std.log.warn(\"httpz: unhandled exception for request: {s}\\nErr: {}\", .{req.url.raw, err});\n}\n```\n\n# Memory and Arenas\nAny allocations made for the response, such as the body or a header, must remain valid until **after** the action returns. To achieve this, use `res.arena` or the `res.writer()`:\n\n```zig\nfn arenaExample(req: *httpz.Request, res: *httpz.Response) !void {\n const query = try req.query();\n const name = query.get(\"name\") orelse \"stranger\";\n res.body = try std.fmt.allocPrint(res.arena, \"Hello {s}\", .{name});\n}\n\nfn writerExample(req: *httpz.Request, res: *httpz.Response) !void {\n const query = try req.query();\n const name = query.get(\"name\") orelse \"stranger\";\n try std.fmt.format(res.writer(), \"Hello {s}\", .{name});\n}\n```\n\nAlternatively, you can explicitly call `res.write()`. Once `res.write()` returns, the response is sent and your action can cleanup/release any resources.\n\n`res.arena` is actually a configurable-sized thread-local buffer that fallsback to an `std.heap.ArenaAllocator`. 
In other words, it's fast, so it should be your first option for data that needs to live only until your action exits.\n\n# httpz.Request\nThe following fields are the most useful:\n\n* `method` - httpz.Method enum\n* `method_string` - Only set if `method == .OTHER`, else empty. Used when using custom methods.\n* `arena` - A fast thread-local buffer that falls back to an ArenaAllocator, same as `res.arena`.\n* `url.path` - the path of the request (`[]const u8`)\n* `address` - the std.net.Address of the client\n\nIf you give your route a `data` configuration, the value can be retrieved from the optional `route_data` field of the request.\n\n## Path Parameters\nThe `param` method of `*Request` returns a `?[]const u8`. For example, given the following path:\n\n```zig\nrouter.get(\"/api/users/:user_id/favorite/:id\", user.getFavorite, .{});\n```\n\nThen we could access the `user_id` and `id` via:\n\n```zig\npub fn getFavorite(req: *httpz.Request, res: *httpz.Response) !void {\n const user_id = req.param(\"user_id\").?;\n const favorite_id = req.param(\"id\").?;\n ...\n```\n\nIn the above, passing any other value to `param` would return null (since the route associated with `getFavorite` only defines these 2 parameters). Given that routes are generally statically defined, it should not be possible for `req.param` to return an unexpected null. However, it *is* possible to define two routes to the same action:\n\n```zig\nrouter.put(\"/api/users/:user_id/favorite/:id\", user.updateFavorite, .{});\n\n// currently logged in user, maybe?\nrouter.put(\"/api/use/favorite/:id\", user.updateFavorite, .{});\n```\n\nIn which case the optional return value of `param` might be useful.\n\n## Header Values\nSimilar to `param`, header values can be fetched via the `header` function, which also returns a `?[]const u8`:\n\n```zig\nif (req.header(\"authorization\")) |auth| {\n\n} else { \n // not logged in?\n}\n```\n\nHeader names are lowercase. 
Values maintain their original casing.\n\nTo iterate over all headers, use:\n\n```zig\nvar it = req.headers.iterator();\nwhile (it.next()) |kv| {\n // kv.key\n // kv.value\n}\n```\n\n## QueryString\nThe framework does not automatically parse the query string. Therefore, its API is slightly different.\n\n```zig\nconst query = try req.query();\nif (query.get(\"search\")) |search| {\n\n} else {\n // no search parameter\n}\n```\n\nOn first call, the `query` function attempts to parse the querystring. This requires memory allocations to unescape encoded values. The parsed value is internally cached, so subsequent calls to `query()` are fast and cannot fail.\n\nThe original casing of both the key and the value is preserved.\n\nTo iterate over all query parameters, use:\n\n```zig\nvar it = (try req.query()).iterator();\nwhile (it.next()) |kv| {\n // kv.key\n // kv.value\n}\n```\n\n## Body\nThe body of the request, if any, can be accessed using `req.body()`. This returns a `?[]const u8`.\n\n### Json Body\nThe `req.json(TYPE)` function is a wrapper around the `body()` function which will call `std.json.parse` on the body. This function does not consider the content-type of the request and will try to parse any body.\n\n```zig\nif (try req.json(User)) |user| {\n\n}\n```\n\n### JsonValueTree Body\nThe `req.jsonValueTree()` function is a wrapper around the `body()` function which will call `std.json.parse` on the body, returning a `!?std.json.ValueTree`. 
This function does not consider the content-type of the request and will try to parse any body.\n\n```zig\nif (try req.jsonValueTree()) |t| {\n // probably want to be more defensive than this\n const product_type = t.root.Object.get(\"type\").?.String;\n //...\n}\n```\n\n### JsonObject Body\nThe even more specific `jsonObject()` function will return a `std.json.ObjectMap`, provided the body is a map.\n\n```zig\nif (try req.jsonObject()) |t| {\n // probably want to be more defensive than this\n const product_type = t.get(\"type\").?.String;\n //...\n}\n```\n\n### Form Data\nThe body of the request, if any, can be parsed as an \"x-www-form-urlencoded\" value using `req.formData()`. The `request.max_form_count` configuration value must be set to the maximum number of form fields to support. This defaults to 0.\n\nThis behaves similarly to `query()`.\n\nOn first call, the `formData` function attempts to parse the body. This can require memory allocations to unescape encoded values. The parsed value is internally cached, so subsequent calls to `formData()` are fast and cannot fail.\n\nThe original casing of both the key and the value is preserved.\n\nTo iterate over all fields, use:\n\n```zig\nvar it = (try req.formData()).iterator();\nwhile (it.next()) |kv| {\n // kv.key\n // kv.value\n}\n```\n\nOnce this function is called, `req.multiFormData()` will no longer work (because the body is assumed parsed).\n\n### Multi Part Form Data\nSimilar to the above, `req.multiFormData()` can be called to parse requests with a \"multipart/form-data\" content type. The `request.max_multiform_count` configuration value must be set to the maximum number of form fields to support. This defaults to 0.\n\nThis is a different API than `formData` because the return type is different. Rather than a simple string=>value type, the multi part form data value consists of a `value: []const u8` and a `filename: ?[]const u8`.\n\nOn first call, the `multiFormData` function attempts to parse the body. 
The parsed value is internally cached, so subsequent calls to `multiFormData()` are fast and cannot fail.\n\nThe original casing of both the key and the value is preserved.\n\nTo iterate over all fields, use:\n\n```zig\nvar it = (try req.multiFormData()).iterator();\nwhile (it.next()) |kv| {\n // kv.key\n // kv.value.value\n // kv.value.filename (optional)\n}\n```\n\nOnce this function is called, `req.formData()` will no longer work (because the body is assumed parsed).\n\nAdvance warning: This is one of the few methods that can modify the request in-place. For most people this won't be an issue, but if you use `req.body()` and `req.multiFormData()`, say to log the raw body, the content-disposition field names are escaped in-place. It's still safe to use `req.body()` but any content-disposition name that was escaped will be a little off.\n\n### Lazy Loading\nBy default, httpz reads the full request body into memory. Depending on httpz configuration and the request size, the body will be stored in the static request buffer, a large buffer pool, or dynamically allocated.\n\nAs an alternative, when `config.request.lazy_read_size` is set, bodies larger than the configured size will not be read into memory. Instead, applications can create an `io.Reader` by calling `req.reader(timeout_in_ms)`. \n\n```zig\n// 5000 millisecond read timeout on a per-read basis\nvar reader = try req.reader(5000);\nvar buf: [4096]u8 = undefined;\nwhile (true) {\n const n = try reader.read(&buf);\n if (n == 0) break;\n // buf[0..n] is what was read\n}\n```\n\n`req.reader` can safely be used whether or not the full body was already in-memory - the API abstracts reading from already-loaded bytes and bytes still waiting to be received on the socket. You can check `req.unread_body > 0` to know whether lazy loading is in effect.\n\nA few notes about the implementation.\n\nIf the body is larger than the configured `lazy_read_size`, part of the body might still be read into the request's static buffer. 
The `io.Reader` returned by `req.reader()` will abstract this detail away and return the full body.\n\nAlso, if the body isn't fully read, but the connection is marked for keepalive (which is generally the default), httpz will still read the full body, but will do so in 4K chunks.\n\nWhile the `io.Reader` can be used for non-lazy loaded bodies, there's overhead to this. It is better to use it only when you know that the body is large (e.g., a file upload).\n\n## Cookies\nUse the `req.cookies` method to get a `Request.Cookie` object. Use `get` to get an optional cookie value for a given cookie name. The cookie name is case-sensitive.\n\n```zig\nvar cookies = req.cookies();\nif (cookies.get(\"auth\")) |auth| {\n // ...\n}\n```\n\n# httpz.Response\nThe following fields are the most useful:\n\n* `status` - set the status code; by default, each response starts off with a 200 status code\n* `content_type` - an httpz.ContentType enum value. This is a convenience and optimization over using the `res.header` function.\n* `arena` - A fast thread-local buffer that falls back to an ArenaAllocator, same as `req.arena`.\n\n## Body\nThe simplest way to set a body is to set `res.body` to a `[]const u8`. **However**, the provided value must remain valid until the body is written, which happens after the action exits or when `res.write()` is explicitly called.\n\n## Dynamic Content\nYou can use the `res.arena` allocator to create dynamic content:\n\n```zig\nconst query = try req.query();\nconst name = query.get(\"name\") orelse \"stranger\";\nres.body = try std.fmt.allocPrint(res.arena, \"Hello {s}\", .{name});\n```\n\nMemory allocated with `res.arena` will exist until the response is sent.\n\n## io.Writer\n`res.writer()` returns an `std.io.Writer`. Various types support writing to an io.Writer. 
For example, the built-in JSON stream writer can use this writer:\n\n```zig\nvar ws = std.json.writeStream(res.writer(), 4);\ntry ws.beginObject();\ntry ws.objectField(\"name\");\ntry ws.emitString(req.param(\"name\").?);\ntry ws.endObject();\n```\n\n## JSON\nThe `res.json` function will set the content_type to `httpz.ContentType.JSON` and serialize the provided value using `std.json.stringify`. The 2nd argument to the json function is the `std.json.StringifyOptions` to pass to the `stringify` function.\n\nThis function uses `res.writer()`, explained above.\n\n## Header Value\nSet header values using the `res.header(NAME, VALUE)` function:\n\n```zig\nres.header(\"Location\", \"/\");\n```\n\nThe header name and value are sent as provided. Both the name and value must remain valid until the response is sent, which will happen outside of the action. Dynamic names and/or values should be created and/or duped with `res.arena`. \n\n`res.headerOpts(NAME, VALUE, OPTS)` can be used to dupe the name and/or value:\n\n```zig\ntry res.headerOpts(\"Location\", location, .{.dupe_value = true});\n```\n\n`HeaderOpts` currently supports `dupe_name: bool` and `dupe_value: bool`, both defaulting to `false`.\n\n## Cookies\nYou can use `res.setCookie(name, value, opts)` to set the \"Set-Cookie\" header.\n\n```zig\ntry res.setCookie(\"cookie_name3\", \"cookie value 3\", .{\n .path = \"/auth/\",\n .domain = \"www.openmymind.net\",\n .max_age = 9001,\n .secure = true,\n .http_only = true,\n .partitioned = true,\n .same_site = .lax, // or .none, or .strict (or null to leave out)\n});\n```\n\n`setCookie` does not validate the name, value, path or domain - it assumes you're setting proper values. 
It *will* double-quote values which contain spaces or commas (as required).\n\nIf, for whatever reason, `res.setCookie` doesn't work for you, you always have full control over the cookie value via `res.header(\"Set-Cookie\", value)`.\n\n## Writing\nBy default, httpz will automatically flush your response. In more advanced cases, you can use `res.write()` to explicitly flush it. This is useful in cases where you have resources that need to be freed/released only after the response is written. For example, my [LRU cache](https://github.com/karlseguin/cache.zig) uses atomic reference counting to safely allow concurrent access to cached data. This requires callers to \"release\" the cached entry:\n\n```zig\npub fn info(app: *MyApp, _: *httpz.Request, res: *httpz.Response) !void {\n const cached = app.cache.get(\"info\") orelse {\n // load the info\n };\n defer cached.release();\n\n res.body = cached.value;\n return res.write();\n}\n```\n\n# Router\nYou get an instance of the router by calling `server.router(.{})`. Currently, the configuration takes a single parameter:\n\n* `middlewares` - A list of middlewares to apply to each request. These middlewares will be executed even for requests with no matching route (i.e. not found). An individual route can opt out of these middlewares; see the `middleware_strategy` route configuration.\n\nYou can use the `get`, `put`, `post`, `head`, `patch`, `trace`, `delete`, `options` or `connect` method of the router to define a route. You can also use the special `all` method to add a route for all methods.\n\nThese functions can all `@panic` as they allocate memory. 
Each function has an equivalent `tryXYZ` variant which will return an error rather than panicking:\n\n```zig\n// this can panic if it fails to create the route\nrouter.get(\"/\", index, .{});\n\n// this returns a !void (which you can try/catch)\nrouter.tryGet(\"/\", index, .{});\n```\n\nThe 3rd parameter is a route configuration. It allows you to specify a different `handler`, `dispatcher` and/or `middlewares`.\n\n```zig\n// this can panic if it fails to create the route\nrouter.get(\"/\", index, .{\n .dispatcher = Handler.dispatchAuth,\n .handler = &auth_handler,\n .middlewares = &.{cors_middleware},\n});\n```\n\n## Configuration\nThe last parameter to the various `router` methods is a route configuration. In many cases, you'll probably use an empty configuration (`.{}`). The route configuration has five fields:\n\n* `dispatcher` - The dispatch method to use. This overrides the default dispatcher, which is either httpz's built-in dispatcher or [your handler's `dispatch` method](#custom-dispatch).\n* `handler` - The handler instance to use. The default handler is the 3rd parameter passed to `Server(H).init` but you can override this on a route-per-route basis.\n* `middlewares` - A list of [middlewares](#middlewares) to run. By default, this list is appended to the list given to `server.router(.{.middlewares = .{...}})`.\n* `middleware_strategy` - How the given middlewares should be merged with the global middlewares. Defaults to `.append`, can also be `.replace`.\n* `data` - Arbitrary data (`*const anyopaque`) to make available to `req.route_data`. This must be a `const`.\n\nYou can specify a separate configuration for each route. To change the configuration for a group of routes, you have two options. The first is to directly change the router's `handler`, `dispatcher` and `middlewares` fields. 
Any subsequent routes will use these values:\n\n```zig\nvar server = try httpz.Server(Handler).init(allocator, .{.port = 5882}, &handler);\n \nvar router = try server.router(.{});\n\n// Will use Handler.dispatch on the &handler instance passed to init\n// No middleware\nrouter.get(\"/route1\", route1, .{});\n\nrouter.dispatcher = Handler.dispatchAuth;\n// uses the new dispatcher\nrouter.get(\"/route2\", route2, .{}); \n\nrouter.handler = &Handler{.public = true};\n// uses the new dispatcher + new handler\nrouter.get(\"/route3\", route3, .{});\n```\n\nThis approach is error-prone though. New routes need to be carefully added in the correct order so that the desired handler, dispatcher and middlewares are used.\n\nA more scalable option is to use route groups.\n\n## Groups\nDefining a custom dispatcher or custom global data on each route can be tedious. Instead, consider using a router group:\n\n```zig\nvar admin_routes = router.group(\"/admin\", .{\n .handler = &auth_handler,\n .dispatcher = Handler.dispatchAuth,\n .middlewares = &.{cors_middleware},\n});\nadmin_routes.get(\"/users\", listUsers, .{});\nadmin_routes.delete(\"/users/:id\", deleteUsers, .{});\n```\n\nThe first parameter to `group` is a prefix to prepend to each route in the group. An empty prefix is acceptable. Thus, route groups can be used to configure a common prefix and/or a common configuration across multiple routes.\n\n## Casing\nYou **must** use a lowercase route. You can use any casing with parameter names, as long as you use that same casing when getting the parameter.\n\n## Parameters\nRouting supports parameters, via `:CAPTURE_NAME`. The captured values are available via `req.params.get(name: []const u8) ?[]const u8`. \n\n## Glob\nYou can glob an individual path segment, or the entire path suffix. 
For a suffix glob, it is important that no trailing slash is present.\n\n```zig\n// prefer using a custom `notFound` handler rather than a global glob.\nrouter.all(\"/*\", not_found, .{});\nrouter.get(\"/api/*/debug\", debug, .{});\n```\n\nWhen multiple globs are used, the most specific will be selected. E.g., given the following two routes:\n\n```zig\nrouter.get(\"/*\", not_found, .{});\nrouter.get(\"/info/*\", any_info, .{});\n```\n\nA request for \"/info/debug/all\" will be routed to `any_info`, whereas a request for \"/over/9000\" will be routed to `not_found`.\n\n## Custom Methods\nYou can use the `method` function to route a custom method:\n\n```zig\nrouter.method(\"TEA\", \"/\", teaList, .{});\n```\n\nIn such cases, `request.method` will be `.OTHER` and you can use `request.method_string` for the string value. The method name, `TEA` above, is cloned by the router and does not need to exist beyond the function call. The method name should only be uppercase ASCII letters.\n\nThe `router.all` method **does not** route to custom methods.\n \n## Limitations\nThe router has several limitations which might not get fixed. These specifically revolve around the interaction of globs, parameters and static path segments.\n\nGiven the following routes:\n\n```zig\nrouter.get(\"/:any/users\", route1, .{});\nrouter.get(\"/hello/users/test\", route2, .{});\n```\n\nYou would expect a request to \"/hello/users\" to be routed to `route1`. However, no route will be found. \n\nGlobs interact similarly poorly with parameters and static path segments.\n\nResolving this issue requires keeping a stack (or visiting the routes recursively), in order to back out of a dead-end and try a different path.\nThis seems like an unnecessarily expensive thing to do, on each request, when, in my opinion, such route hierarchies are uncommon. \n\n# Middlewares\nIn general, use a [custom dispatch](#custom-dispatch) function to apply custom logic, such as logging, authentication and authorization. 
If you have complex route-specific logic, middleware can also be leveraged.\n\nA middleware is a struct which exposes a nested `Config` type, a public `init` function and a public `execute` method. It can optionally define a `deinit` method. See the built-in [CORS middleware](https://github.com/karlseguin/http.zig/blob/master/src/middleware/Cors.zig) or the sample [logger middleware](https://github.com/karlseguin/http.zig/blob/master/examples/middleware/Logger.zig) for examples.\n\nA middleware instance is created using `server.middleware()` and can then be used with the router:\n\n```zig\nvar server = try httpz.Server(void).init(allocator, .{.port = 5882}, {});\n\n// the middleware method takes the struct name and its configuration\nconst cors = try server.middleware(httpz.middleware.Cors, .{\n .origin = \"https://www.openmymind.net/\",\n});\n\n// apply this middleware to all routes (unless the route \n// explicitly opts out)\nvar router = try server.router(.{.middlewares = &.{cors}});\n\n// or we could add middleware on a route-per-route basis\nrouter.get(\"/v1/users\", user, .{.middlewares = &.{cors}});\n\n// by default, middlewares on a route are appended to the global middlewares\n// we can replace them instead by specifying a middleware_strategy\nrouter.get(\"/v1/metrics\", metrics, .{.middlewares = &.{cors}, .middleware_strategy = .replace});\n```\n\n## Cors\nhttpz comes with a built-in CORS middleware: `httpz.middleware.Cors`. Its configuration is:\n\n* `origin: []const u8`\n* `headers: ?[]const u8 = null`\n* `methods: ?[]const u8 = null`\n* `max_age: ?[]const u8 = null`\n\nThe CORS middleware will add an `Access-Control-Allow-Origin: $origin` header to every response. 
For an OPTIONS request where the `sec-fetch-mode` is set to `cors`, the `Access-Control-Allow-Headers`, `Access-Control-Allow-Methods` and `Access-Control-Max-Age` response headers will optionally be set based on the configuration.\n\n# Configuration\nThe second parameter given to `Server(H).init` is an `httpz.Config`. When running in blocking mode (e.g. on Windows) a few of these behave slightly, but not drastically, differently.\n\nThere are many configuration options. \n\n`thread_pool.buffer_size` is the single most important value to tweak. Usage of `req.arena`, `res.arena`, `res.writer()` and `res.json()` all use a fallback allocator which first uses a fast thread-local buffer and then an underlying arena. The total memory this will require is `thread_pool.count * thread_pool.buffer_size`. Since `thread_pool.count` is usually small, a large `buffer_size` is reasonable.\n\n`request.buffer_size` must be large enough to fit the request header. Any extra space might be used to read the body. However, there can be up to `workers.count * workers.max_conn` pending requests, so a large `request.buffer_size` can take up a lot of memory. Instead, consider keeping `request.buffer_size` only large enough for the header (plus a bit of overhead for decoding URL-escaped values) and set `workers.large_buffer_size` to a reasonable size for your incoming request bodies. This will take `workers.count * workers.large_buffer_count * workers.large_buffer_size` memory. 
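\n\nAs a rough, hypothetical sizing exercise using the default values listed further below (1 worker, 32 thread-pool threads with 8192-byte buffers, 16 large buffers of 65536 bytes, 64 minimum connections with 4096-byte request buffers), the fixed buffer cost works out to roughly 1.5MB:\n\n```zig\nconst thread_buffers = 32 * 8192; // thread_pool.count * thread_pool.buffer_size = 262144\nconst large_buffers = 1 * 16 * 65536; // workers.count * large_buffer_count * large_buffer_size = 1048576\nconst request_buffers = 1 * 64 * 4096; // workers.count * min_conn * request.buffer_size = 262144\nconst total = thread_buffers + large_buffers + request_buffers; // 1572864 bytes, ~1.5MB\n```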
\n\nBuffers for request bodies larger than `workers.large_buffer_size` but smaller than `request.max_body_size` will be dynamically allocated.\n\nIn addition to a bit of overhead, at a minimum, httpz will use:\n\n```zig\n(thread_pool.count * thread_pool.buffer_size) +\n(workers.count * workers.large_buffer_count * workers.large_buffer_size) +\n(workers.count * workers.min_conn * request.buffer_size)\n```\n\nPossible values, along with their defaults, are:\n\n```zig\ntry httpz.listen(allocator, &router, .{\n // Port to listen on\n .port = 5882, \n\n // Interface address to bind to\n .address = \"127.0.0.1\",\n\n // unix socket to listen on (mutually exclusive with host&port)\n .unix_path = null,\n\n // configure the workers which are responsible for:\n // 1 - accepting connections\n // 2 - reading and parsing requests\n // 3 - passing requests to the thread pool\n .workers = .{\n // Number of worker threads\n // (blocking mode: handled differently)\n .count = 1,\n\n // Maximum number of concurrent connections each worker can handle\n // (blocking mode: currently ignored)\n .max_conn = 8_192,\n\n // Minimum number of connection states each worker should maintain\n // (blocking mode: currently ignored)\n .min_conn = 64,\n\n // A pool of larger buffers that can be used for any data larger than configured\n // static buffers. For example, if response headers don't fit in \n // $response.header_buffer_size, a buffer will be pulled from here.\n // This is per-worker. \n .large_buffer_count = 16,\n\n // The size of each large buffer.\n .large_buffer_size = 65536,\n\n // Number of bytes retained for the connection arena between uses. This will\n // result in up to `count * min_conn * retain_allocated_bytes` of memory usage.\n .retain_allocated_bytes = 4096,\n },\n\n // configures the threadpool which processes requests. The threadpool is \n // where your application code runs.\n .thread_pool = .{\n // Number of threads. 
If your handlers are doing a lot of i/o, a higher\n // number might provide better throughput\n // (blocking mode: handled differently)\n .count = 32,\n\n // The maximum number of pending requests that the thread pool will accept\n // This applies back pressure to the above workers and ensures that, under load,\n // pending requests get precedence over processing new requests.\n .backlog = 500,\n\n // Size of the static buffer to give each thread. Memory usage will be \n // `count * buffer_size`. If you're making heavy use of either `req.arena` or\n // `res.arena`, this is likely the single easiest way to gain performance. \n .buffer_size = 8192,\n },\n\n // options for tweaking request processing\n .request = .{\n // Maximum request body size that we'll process. We can allocate up \n // to this much memory per request for the body. Internally, we might\n // keep this memory around for a number of requests as an optimization.\n .max_body_size: usize = 1_048_576,\n\n // When set, if the request body is larger than this value, the body won't be\n // eagerly read. The application can use `req.reader()` to create a reader\n // to read the body. Prevents loading large bodies completely in memory.\n // When set, max_body_size is ignored.\n .lazy_read_size: ?usize = null,\n\n // This memory is allocated upfront. The request header _must_ fit into\n // this space, else the request will be rejected.\n .buffer_size: usize = 4_096,\n\n // Maximum number of headers to accept. \n // Additional headers will be silently ignored.\n .max_header_count: usize = 32,\n\n // Maximum number of URL parameters to accept.\n // Additional parameters will be silently ignored.\n .max_param_count: usize = 10,\n\n // Maximum number of query string parameters to accept.\n // Additional parameters will be silently ignored.\n .max_query_count: usize = 32,\n\n // Maximum number of x-www-form-urlencoded fields to support.\n // Additional parameters will be silently ignored. 
This must be\n // set to a value greater than 0 (the default) if you're going\n // to use the req.formData() method.\n .max_form_count: usize = 0,\n\n // Maximum number of multipart/form-data fields to support.\n // Additional parameters will be silently ignored. This must be\n // set to a value greater than 0 (the default) if you're going\n // to use the req.multiFormData() method.\n .max_multiform_count: usize = 0,\n },\n\n // options for tweaking response object\n .response = .{\n // The maximum number of headers to accept. \n // Additional headers will be silently ignored.\n .max_header_count: usize = 16,\n },\n\n .timeout = .{\n // Time in seconds that keepalive connections will be kept alive while inactive\n .keepalive = null,\n\n // Time in seconds that a connection has to send a complete request\n .request = null,\n\n // Maximum number of requests allowed on a single keepalive connection\n .request_count = null,\n },\n .websocket = .{\n // refer to https://github.com/karlseguin/websocket.zig#config\n .max_message_size: ?usize = null,\n .small_buffer_size: ?usize = null,\n .small_buffer_pool: ?usize = null,\n .large_buffer_size: ?usize = null,\n .large_buffer_pool: ?u16 = null,\n .compression: bool = false,\n .compression_retain_writer: bool = true,\n // if compression is true, and this is null, then\n // we accept compressed messages from the client, but never send\n // compressed messages\n .compression_write_treshold: ?usize = null,\n },\n});\n```\n\n## Blocking Mode\nkqueue (BSD, MacOS) or epoll (Linux) are used on supported platforms. 
On all other platforms (most notably Windows), a more naive thread-per-connection with blocking sockets is used.\n\nThe comptime-safe `httpz.blockingMode() bool` function can be called to determine which mode httpz is running in (when it returns `true`, then you're running the simpler blocking mode).\n\nWhile you should always run httpz behind a reverse proxy, it's particularly important to do so in blocking mode due to the ease with which external connections can DoS the server.\n\nIn blocking mode, `config.workers.count` is hard-coded to 1. (This worker does considerably less work than the non-blocking workers). If `config.workers.count` is > 1, then those extra workers will go towards `config.thread_pool.count`. In other words:\n\nIn non-blocking mode, if `config.workers.count = 2` and `config.thread_pool.count = 4`, then you'll have 6 threads: 2 threads that read+parse requests and send replies, and 4 threads to execute application code.\n\nIn blocking mode, the same config will also use 6 threads, but there will only be: 1 thread that accepts connections, and 5 threads to read+parse requests, send replies and execute application code.\n\nThe goal is for the same configuration to result in the same number of threads regardless of the mode, and to have more thread_pool threads in blocking mode since they do more work.\n\nIn blocking mode, `config.workers.large_buffer_count` defaults to the size of the thread pool.\n\nIn blocking mode, `config.workers.max_conn` and `config.workers.min_conn` are ignored. The maximum number of connections is simply the size of the thread_pool.\n\nIf you aren't using a reverse proxy, you should always set the `config.timeout.request`, `config.timeout.keepalive` and `config.timeout.request_count` settings. In blocking mode, consider using conservative values: say 5/5/5 (5 second request timeout, 5 second keepalive timeout, and 5 keepalive count). 
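\n\nFor example, that 5/5/5 suggestion maps onto the configuration like so:\n\n```zig\nvar server = try httpz.Server(void).init(allocator, .{\n .port = 5882,\n .timeout = .{\n .request = 5, // seconds to receive a complete request\n .keepalive = 5, // seconds an idle keepalive connection may wait\n .request_count = 5, // max requests per keepalive connection\n },\n}, {});\n```\n\n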
You can monitor the `httpz_timeout_active` metric to see if the request timeout is too low.\n\n## Timeouts\nThe configuration settings under the `timeout` section are designed to help protect the system against basic DoS attacks (say, by connecting and not sending data). However, it is recommended that you leave these null (disabled) and use the appropriate timeout in your reverse proxy (e.g. NGINX). \n\nThe `timeout.request` is the time, in seconds, that a connection has to send a complete request. The `timeout.keepalive` is the time, in seconds, that a connection can stay connected without sending a request (after the initial request has been sent).\n\nThe connection alternates between these two timeouts. It starts with a timeout of `timeout.request` and after the response is sent and the connection is placed in the \"keepalive list\", switches to the `timeout.keepalive`. When new data is received, it switches back to `timeout.request`. When `null`, both timeouts default to 2_147_483_647 seconds (so not completely disabled, but close enough).\n\nThe `timeout.request_count` is the number of individual requests allowed within a single keepalive session. This protects against a client consuming the connection by sending unlimited meaningless but valid HTTP requests.\n\nWhen the three are combined, it should be difficult for a problematic client to stay connected indefinitely.\n\nIf you're running httpz on Windows (or, more generally, where httpz.blockingMode() returns true), please read the [Blocking Mode](#blocking-mode) section, as this mode of operation is more susceptible to DoS.\n\n# Metrics\nA few basic metrics are collected using [metrics.zig](https://github.com/karlseguin/metrics.zig), a prometheus-compatible library. These can be written to an `std.io.Writer` using `try httpz.writeMetrics(writer)`. 
As an example:\n\n```zig\npub fn metrics(_: *httpz.Request, res: *httpz.Response) !void {\n const writer = res.writer();\n try httpz.writeMetrics(writer);\n\n // if we were also using pg.zig \n // try pg.writeMetrics(writer);\n}\n```\n\nSince httpz does not provide any authorization, care should be taken before exposing this. \n\nThe metrics are:\n\n* `httpz_connections` - counts each TCP connection\n* `httpz_requests` - counts each request (should be >= httpz_connections due to keepalive)\n* `httpz_timeout_active` - counts each time an \"active\" connection is timed out. An \"active\" connection is one that has (a) just connected or (b) started to send bytes. The timeout is controlled by the `timeout.request` configuration.\n* `httpz_timeout_keepalive` - counts each time a \"keepalive\" connection is timed out. A \"keepalive\" connection has already received at least 1 response and the server is waiting for a new request. The timeout is controlled by the `timeout.keepalive` configuration.\n* `httpz_alloc_buffer_empty` - counts the number of bytes allocated due to the large buffer pool being empty. This may indicate that `workers.large_buffer_count` should be larger.\n* `httpz_alloc_buffer_large` - counts the number of bytes allocated due to the large buffer pool being too small. This may indicate that `workers.large_buffer_size` should be larger.\n* `httpz_alloc_unescape` - counts the number of bytes allocated due to unescaping query or form parameters. This may indicate that `request.buffer_size` should be larger.\n* `httpz_internal_error` - counts the number of unexpected errors within httpz. Such errors normally result in the connection being abruptly closed. 
For example, a failing syscall to epoll/kqueue would increment this counter.\n* `httpz_invalid_request` - counts the number of requests which httpz could not parse (where the request is invalid).\n* `httpz_header_too_big` - counts the number of requests which httpz rejects due to a header being too big (does not fit in `request.buffer_size` config).\n* `httpz_body_too_big` - counts the number of requests which httpz rejects due to a body being too big (is larger than `request.max_body_size` config).\n\n# Testing\nThe `httpz.testing` namespace exists to help application developers set up an `*httpz.Request` and assert an `*httpz.Response`.\n\nImagine we have the following partial action:\n\n```zig\nfn search(req: *httpz.Request, res: *httpz.Response) !void {\n const query = try req.query();\n const search_value = query.get(\"search\") orelse return missingParameter(res, \"search\");\n _ = search_value; // TODO ...\n}\n\nfn missingParameter(res: *httpz.Response, parameter: []const u8) !void {\n res.status = 400;\n return res.json(.{.@\"error\" = \"missing parameter\", .parameter = parameter}, .{});\n}\n```\n\nWe can test the above error case like so:\n\n```zig\nconst ht = @import(\"httpz\").testing;\n\ntest \"search: missing parameter\" {\n // init takes the same Configuration used when creating the real server\n // but only the config.request and config.response settings have any impact\n var web_test = ht.init(.{});\n defer web_test.deinit();\n\n try search(web_test.req, web_test.res);\n try web_test.expectStatus(400);\n try web_test.expectJson(.{.@\"error\" = \"missing parameter\", .parameter = \"search\"});\n}\n```\n\n## Building the test Request\nThe testing structure returned from `httpz.testing.init` exposes helper functions to set param, query and header values as well as the body:\n\n```zig\nvar web_test = ht.init(.{});\ndefer web_test.deinit();\n\nweb_test.param(\"id\", \"99382\");\nweb_test.query(\"search\", \"tea\");\nweb_test.header(\"Authorization\", 
\"admin\");\n\nweb_test.body(\"over 9000!\");\n// OR\nweb_test.json(.{.over = 9000});\n// OR \n// This requires ht.init(.{.request = .{.max_form_count = 10}})\nweb_test.form(.{.over = \"9000\"});\n\n// at this point, web_test.req has a param value, a query string value, a header value and a body.\n```\n\nAs an alternative to the `query` function, the full URL can also be set. If you use `query` AND `url`, the query parameters of the URL will be ignored:\n\n```zig\nweb_test.url(\"/power?over=9000\");\n```\n\n## Asserting the Response\nThere are various methods to assert the response:\n\n```zig\ntry web_test.expectStatus(200);\ntry web_test.expectHeader(\"Location\", \"/\");\ntry web_test.expectBody(\"{\\\"over\\\":9000}\");\n```\n\nIf the expected body is JSON, there are two helpers available. First, to assert the entire JSON body, you can use `expectJson`:\n\n```zig\ntry web_test.expectJson(.{.over = 9000});\n```\n\nOr, you can retrieve a `std.json.Value` object by calling `getJson`:\n\n```zig\nconst json = try web_test.getJson();\ntry std.testing.expectEqual(@as(i64, 9000), json.Object.get(\"over\").?.Integer);\n```\n\nFor more advanced validation, use the `parseResponse` function to return a structure representing the parsed response:\n\n```zig\nconst res = try web_test.parseResponse();\ntry std.testing.expectEqual(@as(u16, 200), res.status);\n// use res.body for a []const u8 \n// use res.headers for a std.StringHashMap([]const u8)\n// use res.raw for the full raw response\n```\n\n# HTTP Compliance\nThis implementation may never be fully HTTP/1.1 compliant, as it is built with the assumption that it will sit behind a reverse proxy that is tolerant of non-compliant upstreams (e.g. nginx). (One example I know of is that the server doesn't include the mandatory Date header in the response.)\n\n# Server-Sent Events\nServer-Sent Events can be enabled by calling `res.startEventStream()`. 
This method takes an arbitrary context and a function pointer. The provided function will be executed in a new thread, receiving the provided context and an `std.net.Stream`. Headers can be added (via `res.headers.add`) before calling `startEventStream()`. `res.body` must not be set (directly or indirectly).\n\nCalling `startEventStream()` automatically sets the `Content-Type`, `Cache-Control` and `Connection` headers.\n\n```zig\nfn handler(_: *Request, res: *Response) !void {\n try res.startEventStream(StreamContext{}, StreamContext.handle);\n}\n\nconst StreamContext = struct {\n fn handle(_: StreamContext, stream: std.net.Stream) void {\n while (true) {\n // some event loop\n stream.writeAll(\"event: ....\") catch return;\n }\n }\n};\n```\n\n# Websocket\nhttp.zig integrates with [https://github.com/karlseguin/websocket.zig](https://github.com/karlseguin/websocket.zig) by calling `httpz.upgradeWebsocket()`. First, your handler must have a `WebsocketHandler` declaration which is the WebSocket handler type used by `websocket.Server(H)`.\n\n```zig\nconst websocket = httpz.websocket;\n\nconst Handler = struct {\n // App-specific data you want to pass when initializing\n // your WebSocketHandler\n const WebsocketContext = struct {\n\n };\n\n // See the websocket.zig documentation. 
But essentially this is your\n // Application's wrapper around 1 websocket connection\n pub const WebsocketHandler = struct {\n conn: *websocket.Conn,\n\n // ctx is arbitrary data you pass to httpz.upgradeWebsocket\n pub fn init(conn: *websocket.Conn, _: WebsocketContext) !WebsocketHandler {\n return .{\n .conn = conn,\n };\n }\n\n // echo back\n pub fn clientMessage(self: *WebsocketHandler, data: []const u8) !void {\n try self.conn.write(data);\n }\n };\n};\n```\n\nWith this in place, you can call `httpz.upgradeWebsocket()` within an action:\n\n```zig\nfn ws(req: *httpz.Request, res: *httpz.Response) !void {\n if (try httpz.upgradeWebsocket(WebsocketHandler, req, res, WebsocketContext{}) == false) {\n // this was not a valid websocket handshake request\n // you should probably return with an error\n res.status = 400;\n res.body = \"invalid websocket handshake\";\n return;\n }\n // Do not use `res` from this point on\n}\n```\n\nIn websocket.zig, `init` is passed a `websocket.Handshake`. This is not the case with the httpz integration - you are expected to do any necessary validation of the request in the action.\n\nIt is undefined behavior if `Handler.WebsocketHandler` is not the same type passed to `httpz.upgradeWebsocket`.\n"}, {"avatar_url": "https://avatars.githubusercontent.com/u/206480?v=4", "name": "websocket.zig", "full_name": "karlseguin/websocket.zig", "created_at": "2022-05-28T14:35:01Z", "description": "A websocket implementation for zig", "default_branch": "master", "open_issues": 5, "stargazers_count": 393, "forks_count": 37, "watchers_count": 393, "tags_url": "https://api.github.com/repos/karlseguin/websocket.zig/tags", "license": "-", "topics": ["websocket", "zig", "zig-library", "zig-package"], "size": 524, "fork": false, "updated_at": "2025-04-07T04:21:41Z", "has_build_zig": true, "has_build_zig_zon": true, "readme_content": "# A zig websocket server.\nThis project follows Zig master. 
See available branches if you're targeting a specific version.\n\nSkip to the [client section](#client).\n\nIf you're upgrading from a previous version, check out the [Server Migration](https://github.com/karlseguin/websocket.zig/wiki/Server-Migration) and [Client Migration](https://github.com/karlseguin/websocket.zig/wiki/Client-Migration) wikis.\n\n# Server\n```zig\nconst std = @import(\"std\");\nconst ws = @import(\"websocket\");\n\npub fn main() !void {\n var gpa = std.heap.GeneralPurposeAllocator(.{}){};\n const allocator = gpa.allocator();\n\n var server = try ws.Server(Handler).init(allocator, .{\n .port = 9224,\n .address = \"127.0.0.1\",\n .handshake = .{\n .timeout = 3,\n .max_size = 1024,\n // since we aren't using handshake.headers\n // we can set this to 0 to save a few bytes.\n .max_headers = 0,\n },\n });\n\n // Arbitrary (application-specific) data to pass into each handler\n // Pass void ({}) into listen if you have none\n var app = App{};\n\n // this blocks\n try server.listen(&app);\n}\n\n// This is your application-specific wrapper around a websocket connection\nconst Handler = struct {\n app: *App,\n conn: *ws.Conn,\n\n // You must define a public init function which takes a ws.Handshake,\n // a *ws.Conn and the app-specific data passed into listen\n pub fn init(h: ws.Handshake, conn: *ws.Conn, app: *App) !Handler {\n // `h` contains the initial websocket \"handshake\" request\n // It can be used to apply application-specific logic to verify / allow\n // the connection (e.g. 
valid url, query string parameters, or headers)\n\n _ = h; // we're not using this in our simple case\n\n return .{\n .app = app,\n .conn = conn,\n };\n }\n\n // You must define a public clientMessage method\n pub fn clientMessage(self: *Handler, data: []const u8) !void {\n try self.conn.write(data); // echo the message back\n }\n};\n\n// This is application-specific data you want passed into your Handler's\n// init function.\nconst App = struct {\n // maybe a db pool\n // maybe a list of rooms\n};\n```\n\n## Handler\nWhen you create a `websocket.Server(Handler)`, the specified `Handler` is your structure which will receive messages. It must have a public `init` function and `clientMessage` method. Other methods, such as `close`, can optionally be defined.\n\n### init\nThe `init` method is called with a `websocket.Handshake`, a `*websocket.Conn` and whatever app-specific value was passed into `listen`. \n\nWhen `init` is called, the handshake response has not yet been sent to the client (this allows your `init` method to return an error which will cause websocket.zig to send an error response and close the connection). As such, you should not use/write to the `*websocket.Conn` at this point. Instead, use the `afterInit` method, described next.\n\nThe websocket specification requires the initial \"handshake\" to contain certain headers and values. The library validates these headers. However, applications may have additional requirements before allowing the connection to be \"upgraded\" to a websocket connection. For example, a one-time-use token could be required in the querystring. 
Applications should use the provided `websocket.Handshake` to apply any application-specific verification and optionally return an error to terminate the connection.\n\nThe `websocket.Handshake` exposes the following fields:\n\n* `url: []const u8` - URL of the request in its original casing\n* `method: []const u8` - Method of the request in its original casing\n* `raw_headers: []const u8` - The raw \"key1: value1\\r\\nkey2: value2\\r\\n\" headers. Keys are lowercase.\n\nIf you set the `max_headers` configuration value to > 0, then you can use `h.headers.get(\"header-name\")` to extract a header value from the given name:\n\n```zig\n// the last parameter, an *App in this case, is an application-specific\n// value that you passed into server.listen()\npub fn init(h: websocket.Handshake, conn: *websocket.Conn, app: *App) !Handler {\n // get returns a ?[]const u8\n // the name is lowercase\n // the value is in its original case\n const token = h.headers.get(\"authorization\") orelse {\n return error.NotAuthorized;\n };\n _ = token; // TODO: validate the token\n\n return .{\n .app = app,\n .conn = conn,\n };\n}\n```\n\nYou can iterate through all the headers:\n```zig\nvar it = handshake.headers.iterator();\nwhile (it.next()) |kv| {\n std.debug.print(\"{s} = {s}\\n\", .{kv.key, kv.value});\n}\n```\n\nMemory referenced by the `websocket.Handshake`, including headers from `handshake.headers`, will be freed after the call to `init` completes. Applications that need these values to exist beyond the call to `init` must make a copy.\n\n### afterInit\nIf your handler defines an `afterInit(handler: *Handler) !void` method, the method is called after the handshake response has been sent. This is the first time the connection can safely be used.\n\n`afterInit` supports two overloads:\n\n```zig\npub fn afterInit(handler: *Handler) !void\npub fn afterInit(handler: *Handler, ctx: anytype) !void\n```\n\nThe `ctx` is the same `ctx` passed into `init`. 
It is passed here for cases where the value is only needed once when the connection is established.\n\n### clientMessage\nThe `clientMessage` method is called whenever a text or binary message is received.\n\nThe `clientMessage` method can take one of four shapes. The simplest, shown in the first example, is:\n\n```zig\n// simple clientMessage\nclientMessage(h: *Handler, data: []u8) !void\n```\n\nThe websocket specification has distinct message types for text and binary. Text messages must be valid UTF-8. Websocket.zig does not do this validation (it's expensive and most apps don't care). However, if you do care about the distinction, your `clientMessage` can take another parameter:\n\n```zig\n// clientMessage that accepts a tpe to differentiate between messages\n// sent as `text` vs those sent as `binary`. Either way, Websocket.zig\n// does not validate that text data is valid UTF-8.\nclientMessage(h: *Handler, data: []u8, tpe: ws.MessageTextType) !void\n```\n\nFinally, `clientMessage` can take an optional `std.mem.Allocator`. If you need to dynamically allocate memory within `clientMessage`, consider using this allocator. It is a fast thread-local buffer that falls back to an arena allocator. Allocations made with this allocator are freed after `clientMessage` returns:\n\n```zig\n// clientMessage that takes an allocator\nclientMessage(h: *Handler, allocator: Allocator, data: []u8) !void\n\n// clientMessage that takes an allocator AND a MessageTextType\nclientMessage(h: *Handler, allocator: Allocator, data: []u8, tpe: ws.MessageTextType) !void\n```\n\nIf `clientMessage` returns an error, the connection is closed. You can also call `conn.close()` within the method.\n\n### close\nIf your handler defines a `close(handler: *Handler)` method, the method is called whenever the connection is being closed. Guaranteed to be called exactly once, so it is safe to deinitialize the `handler` at this point. 
This is called no matter the reason for the closure (on shutdown, if the client closed the connection, if your code closes the connection, ...)\n\nThe socket may or may not still be alive.\n\n### clientClose\nIf your handler defines a `clientClose(handler: *Handler, data: []u8) !void` method, the function will be called whenever a `close` message is received from the client. \n\nYou almost certainly *do not* want to define this method and instead want to use `close()`. When not defined, websocket.zig follows the websocket specification and replies with its own matching close message.\n\n### clientPong\nIf your handler defines a `clientPong(handler: *Handler, data: []u8) !void` method, the function will be called whenever a `pong` message is received from the client. When not defined, no action is taken.\n\n### clientPing\nIf your handler defines a `clientPing(handler: *Handler, data: []u8) !void` method, the function will be called whenever a `ping` message is received from the client. When not defined, websocket.zig will write a corresponding `pong` reply.\n\n## websocket.Conn\nThe call to `init` includes a `*websocket.Conn`. It is expected that handlers will keep a reference to it. The main purpose of the `*Conn` is to write data via `conn.write([]const u8)` and `conn.writeBin([]const u8)`. The websocket protocol differentiates between a \"text\" and \"binary\" message, with the only difference being that \"text\" must be valid UTF-8. This library does not enforce this. Which you use really depends on what your client expects. For browsers, text messages appear as strings, and binary messages appear as a Blob or ArrayBuffer (depending on how the client is configured).\n\n`conn.close(.{})` can also be called to close the connection. Calling `conn.close()` **will** result in the handler's `close` callback being called. 
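For example, a handler that closes the connection itself and cleans up in `close` might look like this (a minimal sketch; the `void` context and the \"bye\" trigger are made up for illustration):

```zig
const std = @import("std");
const ws = @import("websocket");

const Handler = struct {
    conn: *ws.Conn,

    pub fn init(h: ws.Handshake, conn: *ws.Conn, _: void) !Handler {
        _ = h;
        return .{ .conn = conn };
    }

    pub fn clientMessage(self: *Handler, data: []const u8) !void {
        if (std.mem.eql(u8, data, "bye")) {
            // closing here triggers the `close` callback below
            return self.conn.close(.{});
        }
        try self.conn.write(data); // echo everything else
    }

    // called exactly once, no matter how the connection ends
    pub fn close(self: *Handler) void {
        _ = self; // release any per-connection state here
    }
};
```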
\n\n`close` takes an optional value where you can specify the `code` and/or `reason`: `conn.close(.{.code = 4000, .reason = \"bye bye\"})`. Refer to [RFC6455](https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1) for valid codes. The `reason` must be <= 123 bytes.\n\n### Writer\nIt's possible to get a `std.io.Writer` from a `*Conn`. Because websocket messages are framed, the writer will buffer the message in memory and requires an explicit \"flush\". Buffering requires an allocator. \n\n```zig\n// .text or .binary\nvar wb = conn.writeBuffer(allocator, .text);\ndefer wb.deinit();\ntry std.fmt.format(wb.writer(), \"it's over {d}!!!\", .{9000});\ntry wb.flush();\n```\n\nConsider using the `clientMessage` overload which accepts an allocator. Not only is this allocator fast (it's a thread-local buffer that falls back to an arena), but it also eliminates the need to call `deinit`:\n\n```zig\npub fn clientMessage(h: *Handler, allocator: Allocator, data: []const u8) !void {\n // Use the provided allocator.\n // It's faster and doesn't require `deinit` to be called\n\n var wb = h.conn.writeBuffer(allocator, .text);\n try std.fmt.format(wb.writer(), \"it's over {d}!!!\", .{9000});\n try wb.flush();\n}\n```\n\n## Thread Safety\nWebsocket.zig ensures that only 1 message per connection/handler is processed at a time. Therefore, you will never have concurrent calls to `clientMessage`, `clientPing`, `clientPong` or `clientClose`. Conversely, concurrent calls to methods of `*websocket.Conn` are allowed (i.e. `conn.write` and `conn.close`). \n\n## Config\nThe 2nd parameter to `Server(H).init` is a configuration object. \n\n```zig\npub const Config = struct {\n port: u16 = 9882,\n\n // Ignored if unix_path is set\n address: []const u8 = \"127.0.0.1\",\n\n // Not valid on windows\n unix_path: ?[]const u8 = null,\n\n // In nonblocking mode (Linux/Mac/BSD), sets the number of\n // listening threads. 
Defaults to 1.\n // In blocking mode, this is ignored and always set to 1.\n worker_count: ?u8 = null,\n\n // The maximum number of connections, per worker.\n // default: 16_384\n max_conn: ?usize = null,\n\n // The maximum allowed message size. \n // A websocket message can have up to 14 bytes of overhead/header\n // Default: 65_536\n max_message_size: ?usize = null,\n\n handshake: Config.Handshake = .{},\n thread_pool: ThreadPool = .{},\n buffers: Config.Buffers = .{},\n\n // compression is disabled by default\n compression: ?Compression = null,\n\n // In blocking mode the thread pool isn't used \n pub const ThreadPool = struct {\n // Number of threads to process messages.\n // These threads are where your `clientXYZ` method will execute.\n // Default: 4.\n count: ?u16 = null,\n\n // The maximum number of pending requests that the thread pool will accept.\n // This applies back pressure to workers and ensures that, under load,\n // pending requests get precedence over processing new requests.\n // Default: 500.\n backlog: ?u32 = null,\n\n // Size of the static buffer to give each thread. Memory usage will be \n // `count * buffer_size`.\n // If clientMessage isn't defined with an Allocator, this defaults to 0.\n // Else it defaults to 32768\n buffer_size: ?usize = null,\n };\n\n const Handshake = struct {\n // time, in seconds, to timeout the initial handshake request\n timeout: u32 = 10,\n\n // Max size, in bytes, allowed for the initial handshake request.\n // If you're expecting a large handshake (many headers, large cookies, etc)\n // you'll need to set this larger.\n // Default: 1024\n max_size: ?u16 = null,\n\n // Max number of headers to capture. These become available as\n // handshake.headers.get(...).\n // Default: 0\n max_headers: ?u16 = null,\n\n // Count of handshake objects to keep in a pool. 
More are created\n // as needed.\n // Default: 32\n count: ?u16 = null,\n };\n\n const Buffers = struct {\n // The number of \"small\" buffers to keep pooled.\n //\n // When `null`, the small buffer pool is disabled and each connection\n // gets its own dedicated small buffer (of `size`). This is reasonable\n // when you expect most clients to be sending a steady stream of data.\n //\n // When set > 0, a pool is created (of `size` buffers) and buffers are \n // assigned as messages are received. This is reasonable when you expect\n // sporadic messages from clients.\n //\n // Default: `null`\n small_pool: ?usize = null,\n\n // The size of each \"small\" buffer. Depending on the value of `pool`\n // this is either a per-connection buffer, or the size of pool buffers\n // shared between all connections.\n // Default: 2048\n small_size: ?usize = null,\n\n // The number of large buffers to have in the pool.\n // Messages larger than `buffers.small_size` but smaller than `max_message_size`\n // will attempt to use a large buffer from the pool.\n // If the pool is empty, a dynamic buffer is created.\n // Default: 8\n large_pool: ?u16 = null,\n\n // The size of each large buffer.\n // Default: min(2 * buffers.small_size, max_message_size)\n large_size: ?usize = null,\n };\n\n // Compression is disabled by default, to enable it and accept the default\n // values, set it to a default struct: .{}\n const Compression = struct {\n // The minimum size of data before messages will be compressed\n // null = messages are never compressed when writing messages to the client\n // If you want to enable compression, 512 is a reasonable default\n write_threshold: ?usize = null,\n\n // When compression is enabled and write_threshold != null, every connection\n // gets an std.ArrayList(u8) to write the compressed message to. When this\n // is true, the memory allocated to the ArrayList is kept for subsequent\n // messages (i.e. it calls `clearRetainingCapacity`). 
When false, the memory\n // is freed after each message. \n // true = more memory, but fewer allocations\n retain_write_buffer: bool = true,\n\n // Advanced options that are part of the permessage-deflate specification.\n // You can set these to true to try and save a bit of memory. But if you\n // want to save memory, don't use compression at all.\n client_no_context_takeover: bool = false,\n server_no_context_takeover: bool = false,\n };\n};\n```\n\n## Logging\nwebsocket.zig uses Zig's built-in scope logging. You can control the log level by having an `std_options` declaration in your program's main file:\n\n```zig\npub const std_options = std.Options{\n .log_scope_levels = &[_]std.log.ScopeLevel{\n .{ .scope = .websocket, .level = .err },\n }\n};\n```\n\n## Advanced\n\n### Pre-Framed Comptime Message\nWebsocket messages have their own special framing. When you use `conn.write` or `conn.writeBin` the data you provide is \"framed\" into a correct websocket message. Framing is fast and cheap (e.g., it DOES NOT require an O(N) loop through the data). Nonetheless, there may be cases where pre-framing messages at compile-time is desired. The `websocket.frameText` and `websocket.frameBin` functions can be used for this purpose:\n\n```zig\nconst UNKNOWN_COMMAND = websocket.frameText(\"unknown command\");\n...\n\npub fn clientMessage(self: *Handler, data: []const u8) !void {\n if (std.mem.startsWith(u8, data, \"join: \")) {\n self.handleJoin(data);\n } else if (std.mem.startsWith(u8, data, \"leave: \")) {\n self.handleLeave(data);\n } else {\n try self.conn.writeFramed(UNKNOWN_COMMAND);\n }\n}\n```\n\n### Blocking Mode\nkqueue (BSD, MacOS) or epoll (Linux) are used on supported platforms. 
On all other platforms (most notably Windows), a more naive thread-per-connection with blocking sockets is used.\n\nThe comptime-safe `websocket.blockingMode() bool` function can be called to determine which mode websocket is running in (when it returns `true`, then you're running the simpler blocking mode).\n\n### Per-Connection Buffers\nIn non-blocking mode, the `buffers.small_pool` and `buffers.small_size` should be set for your particular use case. When `buffers.small_pool == null`, each connection gets its own buffer of `buffers.small_size` bytes. This is a good option if you expect most of your clients to be sending a steady stream of data. While it might take more memory (# of connections * buffers.small_size), it's faster and minimizes multi-threading overhead.\n\nHowever, if you expect clients to only send messages sporadically, such as a chat application, enabling the pool can reduce memory usage at the cost of a bit of overhead.\n\nIn blocking mode, these settings are ignored and each connection always gets its own buffer (though there is still a shared large buffer pool).\n\n### Stopping\n`server.stop()` can be called to stop the webserver. It is safe to call this from a different thread (e.g. a `sigaction` handler).\n\n## Testing\nThe library comes with some helpers for testing.\n\n```zig\nconst wt = @import(\"websocket\").testing;\n\ntest \"handler: echo\" {\n var wtt = wt.init();\n defer wtt.deinit();\n\n // create an instance of your handler (however you want)\n // and use &wtt.conn as the *ws.Conn field\n var handler = Handler{\n .conn = &wtt.conn,\n };\n\n // execute the methods of your handler\n try handler.clientMessage(\"hello world\");\n\n // assert what the client should have received\n try wtt.expectMessage(.text, \"hello world\");\n}\n```\n\nBesides `expectMessage` you can also call `expectClose()`.\n\nNote that this testing is heavy-handed. 
It opens up a pair of sockets with one side listening on `127.0.0.1` and accepting a connection from the other. `wtt.conn` is the \"server\" side of the connection, and assertion happens on the client side.\n\n# Client\nThe `*websocket.Client` can be used in one of two ways. At its simplest, after creating a client and initiating a handshake, you simply use `write` to send messages and `read` to receive them. First, we create the client and initiate the handshake:\n\n```zig\npub fn main() !void {\n var gpa = std.heap.GeneralPurposeAllocator(.{}){};\n const allocator = gpa.allocator();\n\n // create the client\n var client = try websocket.Client.init(allocator, .{\n .port = 9224,\n .host = \"localhost\",\n });\n defer client.deinit();\n\n // send the initial handshake request\n const request_path = \"/ws\";\n try client.handshake(request_path, .{\n .timeout_ms = 1000,\n // Raw headers to send, if any. \n // A lot of servers require a Host header.\n // Separate multiple headers using \\r\\n\n .headers = \"Host: localhost:9224\",\n });\n}\n```\n\nWe can then use `read` and `write`. By default, `read` blocks until a message is received (or an error occurs). We can make it return `null` by setting a timeout:\n\n```zig\n // optional, read will return null after 1 second\n try client.readTimeout(std.time.ms_per_s * 1);\n\n // echo messages back to the server until the connection is closed\n while (true) {\n // with the timeout set above, client.read() returns null\n // if no message is received within 1 second\n const message = (try client.read()) orelse {\n // no message after our 1 second\n std.debug.print(\".\", .{});\n continue;\n };\n\n // must be called once you're done processing the message\n defer client.done(message);\n\n switch (message.type) {\n .text, .binary => {\n std.debug.print(\"received: {s}\\n\", .{message.data});\n try client.write(message.data);\n },\n .ping => try client.writePong(message.data),\n .pong => {},\n .close => {\n try client.close(.{});\n break;\n }\n }\n }\n}\n```\n\n### Config\nWhen creating a Client, the 2nd parameter is a configuration object:\n\n* `port` - The port to connect to. Required.\n* `host` - The host/IP address to connect to. The Host:IP value IS NOT automatically put in the header of the handshake request. Required.\n* `max_size` - Maximum incoming message size to allow. The library will dynamically allocate up to this much space per request. Default: `65536`.\n* `buffer_size` - Size of the static buffer that's available for the client to process incoming messages. While there's other overhead, the minimal memory usage of the client will be `# of active clients * buffer_size`. Default: `4096`.\n* `tls` - Whether or not to connect over TLS. Only TLS 1.3 is supported. Default: `false`.\n* `ca_bundle` - Provide a custom `std.crypto.Certificate.Bundle`. Only meaningful when `tls = true`. Default: `null`.\n\nSetting `max_size == buffer_size` is valid and will ensure that no dynamic memory allocation occurs once the connection is established.\n\nZig only supports TLS 1.3, so this library can only connect to hosts using TLS 1.3. If no `ca_bundle` is provided, the library will create a default bundle per connection.\n\n### Handshake\n`client.handshake()` takes two parameters. The first is the request path. The second is a handshake configuration value:\n\n* `timeout_ms` - Timeout, in milliseconds, for the handshake. Default: `10_000` (10 seconds).\n* `headers` - Raw headers to include in the handshake. 
Multiple headers should be separated by \"\\r\\n\". Many servers require a Host header. Example: `\"Host: server\\r\\nAuthorization: Something\"`. Default: `null`\n\n### Custom Wrapper\nIn more advanced cases, you'll likely want to wrap a `*ws.Client` in your own type and use a background read loop with \"callback\" methods. Like in the above example, you'll first want to create a client and initiate a handshake:\n\n```zig\nconst ws = @import(\"websocket\");\n\nconst Handler = struct {\n client: ws.Client,\n\n fn init(allocator: std.mem.Allocator) !Handler {\n var client = try ws.Client.init(allocator, .{\n .port = 9224,\n .host = \"localhost\",\n });\n errdefer client.deinit();\n\n // send the initial handshake request\n const request_path = \"/ws\";\n try client.handshake(request_path, .{\n .timeout_ms = 1000,\n .headers = \"host: localhost:9224\\r\\n\",\n });\n\n return .{\n .client = client,\n };\n }\n```\n\nYou can then call `client.readLoopInNewThread()` to start a background listener. Your handler must define a `serverMessage` method:\n\n```zig\n pub fn startLoop(self: *Handler) !void {\n // use readLoop for a blocking version\n const thread = try self.client.readLoopInNewThread(self);\n thread.detach();\n }\n\n pub fn serverMessage(self: *Handler, data: []u8) !void {\n // echo back to server\n return self.client.write(data);\n }\n}\n```\n\nWebsockets have a number of different message types. `serverMessage` only receives text and binary messages. If you care about the distinction, you can use an overload:\n\n```zig\npub fn serverMessage(self: *Handler, data: []u8, tpe: ws.MessageTextType) !void\n```\n\nwhere `tpe` will be either `.text` or `.binary`. 
Different callbacks are used for the other message types.\n\n#### Optional Callbacks\nIn addition to the required `serverMessage`, you can define optional callbacks.\n\n```zig\n// Guaranteed to be called exactly once when the readLoop exits\npub fn close(self: *Handler) void\n\n// If omitted, websocket.zig will automatically reply with a pong\npub fn serverPing(self: *Handler, data: []u8) !void\n\n// If omitted, websocket.zig will ignore this message\npub fn serverPong(self: *Handler) !void\n\n// If omitted, websocket.zig will automatically reply with a close message\npub fn serverClose(self: *Handler) !void\n```\n\nYou almost certainly **do not** want to define a `serverClose` method, but instead want to define a `close` method. In your `close` callback, you should call `client.close(.{})` (and optionally pass a code and reason).\n\n## Client\nWhether you're calling `client.read()` explicitly or using `client.readLoopInNewThread()` (or `client.readLoop()`), the `client` API is the same. In both cases, the various `write` methods, as well as `close()`, are thread-safe.\n\n### Writing\nIt may come as a surprise, but every variation of `write` expects a `[]u8`, not a `[]const u8`. Websocket payloads sent from a client need to be masked, which the websocket.zig library handles. It is obviously more efficient to mutate the given payload versus creating a copy. By taking a `[]u8`, applications with mutable buffers benefit from avoiding the clone. 
Applications that have immutable buffers will need to create a mutable clone.\n\n```zig\n// write a text message\npub fn write(self: *Client, data: []u8) !void\n\n// write a text message (same as client.write(data))\npub fn writeText(self: *Client, data: []u8) !void\n\n// write a binary message\npub fn writeBin(self: *Client, data: []u8) !void\n\n// write a ping message\npub fn writePing(self: *Client, data: []u8) !void\n\n// write a pong message\n// if you don't define a serverPing method, websocket.zig\n// will automatically answer any ping with a pong\npub fn writePong(self: *Client, data: []u8) !void\n\n// lower-level, used by all of the above\npub fn writeFrame(self: *Client, op_code: proto.OpCode, data: []u8) !void\n```\n\n### Reading\nAs seen above, most applications will either choose to call `read()` explicitly or use a `readLoop`. It is **not safe** to call `read` while the read loop is running.\n\n```zig\n// Reads 1 message. Returns null on timeout\n// Set a timeout using `client.readTimeout(ms)`\npub fn read(self: *Client) !?ws.Message\n\n\n// Starts a readloop in the calling thread. \n// `@TypeOf(handler)` must define the `serverMessage` callback\n// (and may define other optional callbacks)\npub fn readLoop(self: *Client, handler: anytype) !void\n\n// Same as `readLoop` but starts the readLoop in a new thread\npub fn readLoopInNewThread(self: *Client, h: anytype) !std.Thread\n```\n\n### Closing\nUse `try client.close(.{.code = 4000, .reason = \"bye\"})` to both send a close frame and close the connection. Noop if the connection is already known to be closed. Thread safe.\n\nBoth `code` and `reason` are optional. Refer to [RFC6455](https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1) for valid codes. 
The `reason` must be <= 123 bytes.\n\n### Performance Optimization 1 - CA Bundle\nFor a high number of connections, it might be beneficial to manage your own CA bundle:\n\n```zig\n// some global data\nvar ca_bundle = std.crypto.Certificate.Bundle{};\ntry ca_bundle.rescan(allocator);\ndefer ca_bundle.deinit(allocator);\n```\n\nThen assign this `ca_bundle` to the configuration's `ca_bundle` field. This way the library does not have to create and scan the installed CA certificates for each client connection.\n\n### Performance Optimization 2 - Buffer Provider\nFor a high number of connections, a large buffer pool can be created and provided to each client:\n\n```zig\n// Create a buffer pool of 10 buffers, each being 32K\nconst buffer_provider = try websocket.bufferProvider(allocator, 10, 32768);\ndefer buffer_provider.deinit();\n\n// create your client(s) using the above created buffer_provider\nvar client = try websocket.connect(allocator, \"localhost\", 9001, .{\n ...\n .buffer_provider = buffer_provider,\n});\n```\n\nThis allows each client to have a reasonable `buffer_size` that can accommodate most messages, while having an efficient fallback for the occasional large message. 
When `max_size` is greater than the large buffer pool size (32K in the above example) or when all pooled buffers are used, a dynamic buffer is created.\n"}, {"avatar_url": "https://avatars.githubusercontent.com/u/3932972?v=4", "name": "zig-network", "full_name": "ikskuh/zig-network", "created_at": "2020-04-28T11:39:38Z", "description": "A smallest-common-subset of socket functions for crossplatform networking, TCP & UDP", "default_branch": "master", "open_issues": 11, "stargazers_count": 544, "forks_count": 67, "watchers_count": 544, "tags_url": "https://api.github.com/repos/ikskuh/zig-network/tags", "license": "-", "topics": ["networking", "tcp", "tcp-client", "tcp-server", "udp", "zig", "zig-package"], "size": 287, "fork": false, "updated_at": "2025-04-08T01:56:25Z", "has_build_zig": true, "has_build_zig_zon": true, "readme_content": "# Zig Network Abstraction\n\nSmall network abstraction layer around TCP & UDP.\n\n## Features\n- Implements the minimal API surface for basic networking\n- Provides cross-platform abstractions\n- Supports blocking and non-blocking I/O via `select`/`poll`\n- UDP multicast support\n\n## Usage\n\n### Use with the package manager\n\n`build.zig.zon`:\n```zig\n.{\n .name = \"appname\",\n .version = \"0.0.0\",\n .dependencies = .{\n .network = .{\n .url = \"https://github.com/MasterQ32/zig-network/archive/.tar.gz\",\n .hash = \"HASH_GOES_HERE\",\n },\n },\n}\n```\n(To acquire the hash, remove the line containing `.hash`; the compiler will then tell you which line to put back.)\n\n`build.zig`:\n```zig\nexe.addModule(\"network\", b.dependency(\"network\", .{}).module(\"network\"));\n```\n\n### Usage example\n\n```zig\nconst network = @import(\"network\");\n\ntest \"Connect to an echo server\" {\n try network.init();\n defer network.deinit();\n\n const sock = try network.connectToHost(std.heap.page_allocator, \"tcpbin.com\", 4242, .tcp);\n defer sock.close();\n\n const msg = \"Hi from socket!\\n\";\n try sock.writer().writeAll(msg);\n\n var 
buf: [128]u8 = undefined;\n std.debug.print(\"Echo: {s}\", .{buf[0..try sock.reader().readAll(buf[0..msg.len])]});\n}\n```\n\nSee [async.zig](examples/async.zig) for a more complete example of how to use asynchronous I/O to make a small TCP server.\n\n## Run examples\n\nBuild all examples:\n\n```bash\n$ zig build examples\n```\n\nBuild a specific example:\n\n```bash\n$ zig build sync-examples\n```\n\nTo test an example, e.g. `echo`:\n\n```bash\n$ ./zig-out/bin/echo 3000\n```\n\nThen, in another terminal:\n\n```bash\n$ nc localhost 3000\nhello\nhello\nhow are you\nhow are you\n```\n\n## Notes\nOn Windows, receive and send function calls are asynchronous and cooperate with the standard library event loop\nwhen `io_mode = .evented` is set in the root file of your program.\nOther calls (connect, listen, accept, etc.) are blocking.\n"}]