`repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/architecture/psa-crypto-implementation-structure.md`

PSA Cryptography API implementation and PSA driver interface
=============================================================

## Introduction

The [PSA Cryptography API specification](https://armmbed.github.io/mbed-crypto/psa/#application-programming-interface) defines an interface to cryptographic operations for which the Mbed TLS library provides a reference implementation. The PSA Cryptography API specification is complemented by the PSA driver interface specification, which defines an interface for cryptoprocessor drivers.

This document describes the high level organization of the Mbed TLS PSA Cryptography API implementation, which is tightly related to the PSA driver interface.

## High level organization of the Mbed TLS PSA Cryptography API implementation

In one sentence, the Mbed TLS PSA Cryptography API implementation is made of a core and PSA drivers as defined in the PSA driver interface. The key point is that software cryptographic operations are organized as PSA drivers: they interact with the core through the PSA driver interface.

### Rationale

* Addressing software and hardware cryptographic implementations through the same C interface reduces the core code size and its call graph complexity. The core and its dispatching to software and hardware implementations are consequently easier to test and validate.
* The organization of the software cryptographic implementations in drivers promotes the modularization of those implementations.
* Like hardware capabilities, software cryptographic functionalities can be described by a JSON driver description file as defined in the PSA driver interface.
* Along with JSON driver description files, the PSA driver specification defines the deliverables for a driver to be included into the Mbed TLS PSA Cryptography implementation. This provides a natural framework to integrate third-party or alternative software implementations of cryptographic operations.

## The Mbed TLS PSA Cryptography API implementation core

The core implements all the APIs as defined in the PSA Cryptography API specification but does not perform on its own any cryptographic operation. The core relies on PSA drivers to actually perform the cryptographic operations. The core is responsible for:

* the key store.
* checking PSA API arguments and translating them into valid arguments for the necessary calls to the PSA driver interface.
* dispatching the cryptographic operations to the appropriate PSA drivers.

The sketch of an Mbed TLS PSA cryptographic API implementation is thus:

```C
psa_status_t psa_api( ... )
{
    psa_status_t status;

    /* Pre driver interface call processing: validation of arguments, building
     * of arguments for the call to the driver interface, ... */

    ...

    /* Call to the driver interface */
    status = psa_driver_wrapper_<entry_point>( ... );
    if( status != PSA_SUCCESS )
        return( status );

    /* Post driver interface call processing: validation of the values returned
     * by the driver, finalization of the values to return to the caller,
     * clean-up in case of error ... */
}
```

The code of most PSA APIs is expected to match precisely the above layout. However, it is likely that the code structure of some APIs will be more complicated, with several calls to the driver interface, mainly to encompass a larger variety of hardware designs.
For example, to encompass hardware accelerators that are capable of verifying a MAC as well as those that are only capable of computing a MAC, the psa_mac_verify() API could first call psa_driver_wrapper_mac_verify() and then fall back to psa_driver_wrapper_mac_compute().

The implementations of `psa_driver_wrapper_<entry_point>` functions are generated by the build system, based on the JSON driver description files of the various PSA drivers making up the Mbed TLS PSA Cryptography API implementation. The implementations are generated in a psa_crypto_driver_wrappers.c C file and the function prototypes declared in a psa_crypto_driver_wrappers.h header file.

The psa_driver_wrapper_<entry_point>() functions dispatch cryptographic operations to accelerator drivers and secure element drivers, as well as to the software implementations of cryptographic operations.

Note that the implementation makes it possible to build the library with only a C compiler, by shipping a generated file corresponding to a pure software implementation. The driver entry points and their code in this generated file are guarded by pre-processor directives based on PSA_WANT_xyz macros (see [Conditional inclusion of cryptographic mechanism through the PSA API in Mbed TLS](psa-conditional-inclusion-c.html)). That way, it is possible to compile and include in the library only the desired cryptographic operations.

### Key creation

Key creation in the Mbed TLS PSA core is articulated around three internal functions: psa_start_key_creation(), psa_finish_key_creation() and psa_fail_key_creation(). The implementations of the key creation PSA APIs, namely psa_import_key(), psa_generate_key(), psa_key_derivation_output_key() and psa_copy_key(), follow this sequence:

1. Check the input parameters.
2. Call psa_start_key_creation(), which allocates a key slot, prepares it with the specified key attributes, and in the case of a volatile key assigns it a volatile key identifier.
3. Generate or copy the key material into the key slot. This entails the allocation of the buffer to store the key material.
4. Call psa_finish_key_creation(), which mostly saves persistent keys into persistent storage.

If any error occurs at step 3 or 4, psa_fail_key_creation() is called. It wipes and cleans up the slot, in particular the key material: the RAM that contained the key material is reset to zero and the allocated buffer is freed.

## Mbed TLS PSA Cryptography API implementation drivers

A driver of the Mbed TLS PSA Cryptography API implementation (Mbed TLS PSA driver in the following) is a driver in the sense that it is compliant with the PSA driver interface specification. But it is not an actual driver that drives some hardware: it implements cryptographic operations purely in software.

An Mbed TLS PSA driver C file is named psa_crypto_<driver_name>.c and its associated header file psa_crypto_<driver_name>.h. The functions implementing a driver entry point as defined in the PSA driver interface specification are named mbedtls_psa_<driver name>_<entry point>().

As an example, psa_crypto_rsa.c and psa_crypto_rsa.h are the files containing the Mbed TLS PSA driver implementing RSA cryptographic operations. Among other entry points, this RSA driver implements the "import_key" entry point; the function implementing this entry point is named mbedtls_psa_rsa_import_key().
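To make the key creation sequence concrete, here is a minimal sketch of how psa_import_key() could be structured around the three internal functions. The signatures of psa_start_key_creation(), psa_finish_key_creation() and psa_fail_key_creation() are simplified assumptions for this illustration, and copy_key_material_into_slot() is a hypothetical helper standing in for step 3.

```C
/* Illustrative sketch only: the signatures of psa_start_key_creation(),
 * psa_finish_key_creation() and psa_fail_key_creation() are simplified
 * assumptions, not the actual internal prototypes, and
 * copy_key_material_into_slot() is a hypothetical helper. */
psa_status_t psa_import_key( const psa_key_attributes_t *attributes,
                             const uint8_t *data, size_t data_length,
                             mbedtls_svc_key_id_t *key )
{
    psa_status_t status;
    psa_key_slot_t *slot = NULL;

    /* 1. Check the input parameters (elided). */

    /* 2. Allocate a key slot and prepare it with the key attributes. */
    status = psa_start_key_creation( attributes, &slot );
    if( status != PSA_SUCCESS )
        return( status );

    /* 3. Copy the key material into the slot (allocates the buffer). */
    status = copy_key_material_into_slot( slot, data, data_length );

    /* 4. Save persistent keys into persistent storage. */
    if( status == PSA_SUCCESS )
        status = psa_finish_key_creation( slot, key );

    /* On any error at step 3 or 4, wipe and free the slot. */
    if( status != PSA_SUCCESS )
        psa_fail_key_creation( slot );
    return( status );
}
```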

---

`repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/architecture/mbed-crypto-storage-specification.md`

Mbed Crypto storage specification
=================================

This document specifies how Mbed Crypto uses storage.

Mbed Crypto may be upgraded on an existing device with the storage preserved. Therefore:

1. Any change may break existing installations and may require an upgrade path.
1. This document retains historical information about all past released versions. Do not remove information from this document unless it has always been incorrect or it is about a version that you are sure was never released.

Mbed Crypto 0.1.0
-----------------

Tags: mbedcrypto-0.1.0b, mbedcrypto-0.1.0b2

Released in November 2018. <br>
Integrated in Mbed OS 5.11.

Supported backends:

* [PSA ITS](#file-namespace-on-its-for-0.1.0)
* [C stdio](#file-namespace-on-stdio-for-0.1.0)

Supported features:

* [Persistent transparent keys](#key-file-format-for-0.1.0) designated by a [slot number](#key-names-for-0.1.0).
* [Nonvolatile random seed](#nonvolatile-random-seed-file-format-for-0.1.0) on ITS only.

This is a beta release, and we do not promise backward compatibility, with one exception:

> On Mbed OS, if a device has a nonvolatile random seed file produced with Mbed OS 5.11.x and is upgraded to a later version of Mbed OS, the nonvolatile random seed file is preserved or upgraded.

We do not make any promises regarding key storage, or regarding the nonvolatile random seed file on other platforms.

### Key names for 0.1.0

Information about each key is stored in a dedicated file whose name is constructed from the key identifier. The way in which the file name is constructed depends on the storage backend. The content of the file is described [below](#key-file-format-for-0.1.0).

The valid values for a key identifier are the range from 1 to 0xfffeffff. This limitation on the range is not documented in user-facing documentation: according to the user-facing documentation, arbitrary 32-bit values are valid. The code uses the following constant in an internal header (note that despite the name, this value is actually one plus the maximum permitted value):

    #define PSA_MAX_PERSISTENT_KEY_IDENTIFIER 0xffff0000

There is a shared namespace for all callers.

### Key file format for 0.1.0

All integers are encoded in little-endian order in 8-bit bytes.

The layout of a key file is:

* magic (8 bytes): `"PSA\0KEY\0"`
* version (4 bytes): 0
* type (4 bytes): `psa_key_type_t` value
* policy usage flags (4 bytes): `psa_key_usage_t` value
* policy usage algorithm (4 bytes): `psa_algorithm_t` value
* key material length (4 bytes)
* key material: output of `psa_export_key`
* Any trailing data is rejected on load.

### Nonvolatile random seed file format for 0.1.0

The nonvolatile random seed file contains a seed for the random generator. If present, it is rewritten at each boot as part of the random generator initialization.

The file format is just the seed as a byte string with no metadata or encoding of any kind.

### File namespace on ITS for 0.1.0

Assumption: ITS provides a 32-bit file identifier namespace. The Crypto service can use arbitrary file identifiers and no other part of the system accesses the same file identifier namespace.

* File 0: unused.
* Files 1 through 0xfffeffff: [content](#key-file-format-for-0.1.0) of the [key whose identifier is the file identifier](#key-names-for-0.1.0).
* File 0xffffff52 (`PSA_CRYPTO_ITS_RANDOM_SEED_UID`): [nonvolatile random seed](#nonvolatile-random-seed-file-format-for-0.1.0).
* Files 0xffff0000 through 0xffffff51, 0xffffff53 through 0xffffffff: unused.
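As an informal illustration of the 0.1.0 key file layout described above, the following sketch writes the fixed-size header; the helper names (`store_u32_le`, `write_key_file_header_0_1_0`) are invented for this example and are not the Mbed Crypto storage code.

```C
#include <stdint.h>
#include <string.h>

/* Encode a 32-bit integer in little-endian order, as specified above. */
static void store_u32_le( uint8_t *out, uint32_t x )
{
    out[0] = (uint8_t)( x & 0xff );
    out[1] = (uint8_t)( ( x >> 8 ) & 0xff );
    out[2] = (uint8_t)( ( x >> 16 ) & 0xff );
    out[3] = (uint8_t)( ( x >> 24 ) & 0xff );
}

/* Write the fixed 28-byte header of a 0.1.0 key file; the key material
 * (material_length bytes) follows at offset 28, with no trailing data. */
static void write_key_file_header_0_1_0( uint8_t header[28],
                                         uint32_t type,
                                         uint32_t usage_flags,
                                         uint32_t usage_alg,
                                         uint32_t material_length )
{
    memcpy( header, "PSA\0KEY\0", 8 );            /* magic */
    store_u32_le( header + 8, 0 );                /* version */
    store_u32_le( header + 12, type );            /* psa_key_type_t */
    store_u32_le( header + 16, usage_flags );     /* psa_key_usage_t */
    store_u32_le( header + 20, usage_alg );       /* psa_algorithm_t */
    store_u32_le( header + 24, material_length ); /* key material length */
}
```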
### File namespace on stdio for 0.1.0

Assumption: C stdio, allowing names containing lowercase letters, digits and underscores, of length up to 23.

An undocumented build-time configuration value `CRYPTO_STORAGE_FILE_LOCATION` allows storing the key files in a directory other than the current directory. This value is simply prepended to the file name (so it must end with a directory separator to put the keys in a different directory).

* `CRYPTO_STORAGE_FILE_LOCATION "psa_key_slot_0"`: used as a temporary file. Must be writable. May be overwritten or deleted if present.
* `sprintf(CRYPTO_STORAGE_FILE_LOCATION "psa_key_slot_%lu", key_id)`: [content](#key-file-format-for-0.1.0) of the [key whose identifier](#key-names-for-0.1.0) is `key_id`.
* Other files: unused.

Mbed Crypto 1.0.0
-----------------

Tags: mbedcrypto-1.0.0d4, mbedcrypto-1.0.0

Released in February 2019. <br>
Integrated in Mbed OS 5.12.

Supported integrations:

* [PSA platform](#file-namespace-on-a-psa-platform-for-1.0.0)
* [library using PSA ITS](#file-namespace-on-its-as-a-library-for-1.0.0)
* [library using C stdio](#file-namespace-on-stdio-for-1.0.0)

Supported features:

* [Persistent transparent keys](#key-file-format-for-1.0.0) designated by a [key identifier and owner](#key-names-for-1.0.0).
* [Nonvolatile random seed](#nonvolatile-random-seed-file-format-for-1.0.0) on ITS only.

Backward compatibility commitments: TBD

### Key names for 1.0.0

Information about each key is stored in a dedicated file designated by the key identifier. In integrations where there is no concept of key owner (in particular, in library integrations), the key identifier is exactly the key identifier as defined in the PSA Cryptography API specification (`psa_key_id_t`). In integrations where there is a concept of key owner (integration into a service, for example), the key identifier is made of an owner identifier (its semantics and type are integration specific) and of the key identifier (`psa_key_id_t`) from the key owner's point of view.

The way in which the file name is constructed from the key identifier depends on the storage backend. The content of the file is described [below](#key-file-format-for-1.0.0).

* Library integration: the key file name is just the key identifier as defined in the PSA crypto specification. This is a 32-bit value.
* PSA service integration: the key file name is `(uint64_t)(uint32_t)owner_uid << 32 | key_id` where `key_id` is the key identifier from the owner's point of view and `owner_uid` (of type `int32_t`) is the calling partition identifier provided to the server by the partition manager. This is a 64-bit value.

### Key file format for 1.0.0

The layout is identical to [0.1.0](#key-file-format-for-0.1.0) so far. However, note that the encoding of key types, algorithms and key material has changed, therefore the storage format is not compatible (despite using the same value in the version field so far).

### Nonvolatile random seed file format for 1.0.0

[Identical to 0.1.0](#nonvolatile-random-seed-file-format-for-0.1.0).

### File namespace on a PSA platform for 1.0.0

Assumption: ITS provides a 64-bit file identifier namespace. The Crypto service can use arbitrary file identifiers and no other part of the system accesses the same file identifier namespace.

Assumption: the owner identifier is a nonzero value of type `int32_t`.

* Files 0 through 0xffffff51, 0xffffff53 through 0xffffffff: unused, reserved for internal use of the crypto library or crypto service.
* File 0xffffff52 (`PSA_CRYPTO_ITS_RANDOM_SEED_UID`): [nonvolatile random seed](#nonvolatile-random-seed-file-format-for-0.1.0).
* Files 0x100000000 through 0xffffffffffff: [content](#key-file-format-for-1.0.0) of the [key whose identifier is the file identifier](#key-names-for-1.0.0). The upper 32 bits determine the owner.

### File namespace on ITS as a library for 1.0.0

Assumption: ITS provides a 64-bit file identifier namespace. The entity using the crypto library can use arbitrary file identifiers and no other part of the system accesses the same file identifier namespace.

This is a library integration, so there is no owner. The key file identifier is identical to the key identifier.

* File 0: unused.
* Files 1 through 0xfffeffff: [content](#key-file-format-for-1.0.0) of the [key whose identifier is the file identifier](#key-names-for-1.0.0).
* File 0xffffff52 (`PSA_CRYPTO_ITS_RANDOM_SEED_UID`): [nonvolatile random seed](#nonvolatile-random-seed-file-format-for-1.0.0).
* Files 0xffff0000 through 0xffffff51, 0xffffff53 through 0xffffffff, 0x100000000 through 0xffffffffffffffff: unused.

### File namespace on stdio for 1.0.0

This is a library integration, so there is no owner. The key file identifier is identical to the key identifier.

[Identical to 0.1.0](#file-namespace-on-stdio-for-0.1.0).

### Upgrade from 0.1.0 to 1.0.0

* Delete files 1 through 0xfffeffff, which contain keys in a format that is no longer supported.

### Suggested changes to make before 1.0.0

The library integration and the PSA platform integration use different sets of file names. This is annoyingly non-uniform. For example, if we want to store non-key files, we have room in different ranges (0 through 0xffffffff on a PSA platform, 0xffff0000 through 0xffffffffffffffff in a library integration).

It would simplify things to always have a 32-bit owner, with a nonzero value, and thus reserve the range 0–0xffffffff for internal library use.

Mbed Crypto 1.1.0
-----------------

Tags: mbedcrypto-1.1.0

Released in early June 2019. <br>
Integrated in Mbed OS 5.13.

Identical to [1.0.0](#mbed-crypto-1.0.0) except for some changes in the key file format.

### Key file format for 1.1.0

The key file format is identical to [1.0.0](#key-file-format-for-1.0.0), except for the following changes:

* A new policy field, marked as [NEW:1.1.0] below.
* The encoding of key types, algorithms and key material has changed, therefore the storage format is not compatible (despite using the same value in the version field so far).

A self-contained description of the file layout follows.

All integers are encoded in little-endian order in 8-bit bytes.

The layout of a key file is:

* magic (8 bytes): `"PSA\0KEY\0"`
* version (4 bytes): 0
* type (4 bytes): `psa_key_type_t` value
* policy usage flags (4 bytes): `psa_key_usage_t` value
* policy usage algorithm (4 bytes): `psa_algorithm_t` value
* policy enrollment algorithm (4 bytes): `psa_algorithm_t` value [NEW:1.1.0]
* key material length (4 bytes)
* key material: output of `psa_export_key`
* Any trailing data is rejected on load.

Mbed Crypto TBD
---------------

Tags: TBD

Released in TBD 2019. <br>
Integrated in Mbed OS TBD.

### Changes introduced in TBD

* The layout of a key file now has a lifetime field before the type field.
* Key files can store references to keys in a secure element. In such key files, the key material contains the slot number.

### File namespace on a PSA platform on TBD

Assumption: ITS provides a 64-bit file identifier namespace.
The Crypto service can use arbitrary file identifiers and no other part of the system accesses the same file identifier namespace.

Assumption: the owner identifier is a nonzero value of type `int32_t`.

* Files 0 through 0xfffeffff: unused.
* Files 0xffff0000 through 0xffffffff: reserved for internal use of the crypto library or crypto service. See [non-key files](#non-key-files-on-tbd).
* Files 0x100000000 through 0xffffffffffff: [content](#key-file-format-for-1.0.0) of the [key whose identifier is the file identifier](#key-names-for-1.0.0). The upper 32 bits determine the owner.

### File namespace on ITS as a library on TBD

Assumption: ITS provides a 64-bit file identifier namespace. The entity using the crypto library can use arbitrary file identifiers and no other part of the system accesses the same file identifier namespace.

This is a library integration, so there is no owner. The key file identifier is identical to the key identifier.

* File 0: unused.
* Files 1 through 0xfffeffff: [content](#key-file-format-for-1.0.0) of the [key whose identifier is the file identifier](#key-names-for-1.0.0).
* Files 0xffff0000 through 0xffffffff: reserved for internal use of the crypto library or crypto service. See [non-key files](#non-key-files-on-tbd).
* Files 0x100000000 through 0xffffffffffffffff: unused.

### Non-key files on TBD

File identifiers in the range 0xffff0000 through 0xffffffff are reserved for internal use in Mbed Crypto.

* Files 0xfffffe02 through 0xfffffeff (`PSA_CRYPTO_SE_DRIVER_ITS_UID_BASE + lifetime`): secure element driver storage. The content of the file is the secure element driver's persistent data.
* File 0xffffff52 (`PSA_CRYPTO_ITS_RANDOM_SEED_UID`): [nonvolatile random seed](#nonvolatile-random-seed-file-format-for-1.0.0).
* File 0xffffff54 (`PSA_CRYPTO_ITS_TRANSACTION_UID`): [transaction file](#transaction-file-format-for-tbd).
* Other files are unused and reserved for future use.

### Key file format for TBD

All integers are encoded in little-endian order in 8-bit bytes except where otherwise indicated.

The layout of a key file is:

* magic (8 bytes): `"PSA\0KEY\0"`.
* version (4 bytes): 0.
* lifetime (4 bytes): `psa_key_lifetime_t` value.
* type (4 bytes): `psa_key_type_t` value.
* policy usage flags (4 bytes): `psa_key_usage_t` value.
* policy usage algorithm (4 bytes): `psa_algorithm_t` value.
* policy enrollment algorithm (4 bytes): `psa_algorithm_t` value.
* key material length (4 bytes).
* key material:
    * For a transparent key: output of `psa_export_key`.
    * For an opaque key (unified driver interface): driver-specific opaque key blob.
    * For an opaque key (key in a secure element): slot number (8 bytes), in platform endianness.
* Any trailing data is rejected on load.

### Transaction file format for TBD

The transaction file contains data about an ongoing action that cannot be completed atomically. It exists only if there is an ongoing transaction.

All integers are encoded in platform endianness.

All currently existing transactions concern a key in a secure element.

The layout of a transaction file is:

* type (2 bytes): the [transaction type](#transaction-types-on-tbd).
* unused (2 bytes)
* lifetime (4 bytes): `psa_key_lifetime_t` value that corresponds to a key in a secure element.
* slot number (8 bytes): `psa_key_slot_number_t` value. This is the unique designation of the key for the secure element driver.
* key identifier (4 bytes in a library integration, 8 bytes on a PSA platform): the internal representation of the key identifier.
  On a PSA platform, this encodes the key owner in the same way as [in file identifiers for key files](#file-namespace-on-a-psa-platform-on-tbd).

#### Transaction types on TBD

* 0x0001: key creation. The following locations may or may not contain data about the key that is being created:
    * The slot in the secure element designated by the slot number.
    * The file containing the key metadata designated by the key identifier.
    * The driver persistent data.
* 0x0002: key destruction. The following locations may or may not still contain data about the key that is being destroyed:
    * The slot in the secure element designated by the slot number.
    * The file containing the key metadata designated by the key identifier.
    * The driver persistent data.

Mbed Crypto TBD
---------------

Tags: TBD

Released in TBD 2020. <br>
Integrated in Mbed OS TBD.

### Changes introduced in TBD

* The type field has been split into a type field and a bits field of 2 bytes each.

### Key file format for TBD

All integers are encoded in little-endian order in 8-bit bytes except where otherwise indicated.

The layout of a key file is:

* magic (8 bytes): `"PSA\0KEY\0"`.
* version (4 bytes): 0.
* lifetime (4 bytes): `psa_key_lifetime_t` value.
* type (2 bytes): `psa_key_type_t` value.
* bits (2 bytes): `psa_key_bits_t` value.
* policy usage flags (4 bytes): `psa_key_usage_t` value.
* policy usage algorithm (4 bytes): `psa_algorithm_t` value.
* policy enrollment algorithm (4 bytes): `psa_algorithm_t` value.
* key material length (4 bytes).
* key material:
    * For a transparent key: output of `psa_export_key`.
    * For an opaque key (unified driver interface): driver-specific opaque key blob.
    * For an opaque key (key in a secure element): slot number (8 bytes), in platform endianness.
* Any trailing data is rejected on load.
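As a companion illustration of the final layout above (the variant with the split type and bits fields), here is a sketch of a reader for the fixed-size header; all names are invented for this example and the real Mbed TLS parsing code differs.

```C
#include <stdint.h>
#include <string.h>

/* Invented names; mirrors the final (type/bits split) layout above. */
typedef struct {
    uint32_t version;
    uint32_t lifetime;
    uint16_t type;
    uint16_t bits;
    uint32_t usage_flags;
    uint32_t usage_alg;
    uint32_t enrollment_alg;
    uint32_t material_length;
} key_file_header_t;

static uint32_t load_u32_le( const uint8_t *p )
{
    return (uint32_t)p[0] | (uint32_t)p[1] << 8
         | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static uint16_t load_u16_le( const uint8_t *p )
{
    return (uint16_t)( p[0] | p[1] << 8 );
}

/* Returns 0 on success, -1 on a bad magic value. The key material starts
 * at offset 36 and must be exactly material_length bytes long, since any
 * trailing data is rejected on load. */
static int parse_key_file_header( const uint8_t data[36],
                                  key_file_header_t *h )
{
    if( memcmp( data, "PSA\0KEY\0", 8 ) != 0 )
        return( -1 );
    h->version         = load_u32_le( data + 8 );
    h->lifetime        = load_u32_le( data + 12 );
    h->type            = load_u16_le( data + 16 );
    h->bits            = load_u16_le( data + 18 );
    h->usage_flags     = load_u32_le( data + 20 );
    h->usage_alg       = load_u32_le( data + 24 );
    h->enrollment_alg  = load_u32_le( data + 28 );
    h->material_length = load_u32_le( data + 32 );
    return( 0 );
}
```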

---

`repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/architecture/tls13-experimental.md`

TLS 1.3 Experimental Developments
=================================

Overview
--------

Mbed TLS doesn't support the TLS 1.3 protocol yet, but a prototype is in development. Stable parts of this prototype that can be independently tested are being successively upstreamed under the guard of the following macro:

```
MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL
```

This macro will likely be renamed to `MBEDTLS_SSL_PROTO_TLS1_3` once a minimal viable implementation of the TLS 1.3 protocol is available.

See the [documentation of `MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL`](../../include/mbedtls/config.h) for more information.

Status
------

The following lists which parts of the TLS 1.3 prototype have already been upstreamed, together with their level of testing:

- TLS 1.3 record protection mechanisms

  The record protection routines `mbedtls_ssl_{encrypt|decrypt}_buf()` have been extended to support the modified TLS 1.3 record protection mechanism, including the modified computation of AAD and IV, and the introduction of a flexible padding.

  Those record protection routines have unit tests in `test_suite_ssl` alongside the tests for the other record protection routines. TODO: Add some test vectors from RFC 8448.

- The HKDF key derivation function, on which the TLS 1.3 key schedule is based, is already present as an independent module controlled by `MBEDTLS_HKDF_C`, independently of the development of the TLS 1.3 prototype.

- The TLS 1.3-specific HKDF-based key derivation functions (see RFC 8446):
  * HKDF-Expand-Label
  * Derive-Secret
  * Secret evolution
  * The traffic {Key, IV} generation from a secret

  Those functions are implemented in `library/ssl_tls13_keys.c` and tested in `test_suite_ssl` using test vectors from RFC 8448 and <https://tls13.ulfheim.net/>.

- New TLS Message Processing Stack (MPS)

  The TLS 1.3 prototype is developed alongside a rewrite of the TLS messaging layer, encompassing low-level details such as record parsing, handshake reassembly, and the DTLS retransmission state machine.

  MPS has the following components:
  - Layer 1 (Datagram handling)
  - Layer 2 (Record handling)
  - Layer 3 (Message handling)
  - Layer 4 (Retransmission State Machine)
  - Reader (Abstracted pointer arithmetic and reassembly logic for incoming data)
  - Writer (Abstracted pointer arithmetic and fragmentation logic for outgoing data)

  Of those components, the following have been upstreamed as part of `MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL`:

  - Reader ([`library/mps_reader.h`](../../library/mps_reader.h))
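As a hedged illustration of what HKDF-Expand-Label computes, the following sketch builds the `HkdfLabel` structure from RFC 8446 section 7.1, which is fed to HKDF-Expand as its `info` parameter. This is not the Mbed TLS code from `library/ssl_tls13_keys.c`; the function name and signature are invented for this example.

```C
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Build the HkdfLabel "info" input of HKDF-Expand-Label (RFC 8446, 7.1):
 *     struct {
 *         uint16 length = Length;
 *         opaque label<7..255> = "tls13 " + Label;
 *         opaque context<0..255> = Context;
 *     } HkdfLabel;
 * Returns the number of bytes written, or 0 if the inputs do not fit. */
static size_t build_hkdf_label( uint8_t *out, size_t out_size,
                                uint16_t length,
                                const char *label, size_t label_len,
                                const uint8_t *context, size_t context_len )
{
    static const char prefix[] = "tls13 ";  /* 6 bytes, no terminator */
    size_t full_label_len = sizeof( prefix ) - 1 + label_len;
    size_t total = 2 + 1 + full_label_len + 1 + context_len;

    if( full_label_len > 255 || context_len > 255 || total > out_size )
        return( 0 );

    out[0] = (uint8_t)( length >> 8 );      /* uint16 length, big-endian */
    out[1] = (uint8_t)( length & 0xff );
    out[2] = (uint8_t)full_label_len;       /* label length prefix */
    memcpy( out + 3, prefix, sizeof( prefix ) - 1 );
    memcpy( out + 3 + sizeof( prefix ) - 1, label, label_len );
    out[3 + full_label_len] = (uint8_t)context_len; /* context length prefix */
    memcpy( out + 4 + full_label_len, context, context_len );
    return( total );
}
```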

---

`repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/architecture/testing/invasive-testing.md`

# Mbed TLS invasive testing strategy

## Introduction

In Mbed TLS, we use black-box testing as much as possible: test the documented behavior of the product, in a realistic environment. However this is not always sufficient.

The goal of this document is to identify areas where black-box testing is insufficient and to propose solutions.

This is a test strategy document, not a test plan. A description of exactly what is tested is out of scope.

This document is structured as follows:

* [“Rules”](#rules) gives general rules and is written for brevity.
* [“Requirements”](#requirements) explores the reasons why invasive testing is needed and how it should be done.
* [“Possible approaches”](#possible-approaches) discusses some general methods for non-black-box testing.
* [“Solutions”](#solutions) explains how we currently solve, or intend to solve, specific problems.

### TLS

This document currently focuses on data structure manipulation and storage, which is what the crypto/keystore and X.509 parts of the library are about. More work is needed to fully take TLS into account.

## Rules

Always follow these rules unless you have a good reason not to. If you deviate, document the rationale somewhere.

See the section [“Possible approaches”](#possible-approaches) for a rationale.

### Interface design for testing

Do not add test-specific interfaces if there's a practical way of doing it another way. All public interfaces should be useful in at least some configurations. Features with a significant impact on the code size or attack surface should have a compile-time guard.

### Reliance on internal details

In unit tests and in test programs, it's ok to include header files from `library/`. Do not define non-public interfaces in public headers (`include/mbedtls` has `*_internal.h` headers for legacy reasons, but this approach is deprecated). In contrast, sample programs must not include header files from `library/`.

Sometimes it makes sense to have unit tests on functions that aren't part of the public API. Declare such functions in `library/*.h` and include the corresponding header in the test code. If the function should be `static` for optimization but can't be `static` for testing, declare it as `MBEDTLS_STATIC_TESTABLE`, and make the tests that use it depend on `MBEDTLS_TEST_HOOKS` (see [“rules for compile-time options”](#rules-for-compile-time-options)).

If test code or test data depends on internal details of the library and not just on its documented behavior, add a comment in the code that explains the dependency. For example:

> ```
> /* This test file is specific to the ITS implementation in PSA Crypto
>  * on top of stdio. It expects to know what the stdio name of a file is
>  * based on its keystore name.
>  */
> ```

> ```
> # This test assumes that PSA_MAX_KEY_BITS (currently 65536-8 bits = 8191 bytes
> # and not expected to be raised any time soon) is less than the maximum
> # output from HKDF-SHA512 (255*64 = 16320 bytes).
> ```

### Rules for compile-time options

If the most practical way to test something is to add code to the product that is only useful for testing, do so, but obey the following rules. For more information, see the [rationale](#guidelines-for-compile-time-options).

* **Only use test-specific code when necessary.** Anything that can be tested through the documented API must be tested through the documented API.
* **Test-specific code must be guarded by `#if defined(MBEDTLS_TEST_HOOKS)`.** Do not create fine-grained guards for test-specific code.
* **Do not use `MBEDTLS_TEST_HOOKS` for security checks or assertions.** Security checks belong in the product.
* **Merely defining `MBEDTLS_TEST_HOOKS` must not change the behavior.** It may define extra functions. It may add fields to structures, but if so, make it very clear that these fields have no impact on non-test-specific fields.
* **Where tests must be able to change the behavior, do it by function substitution.** See [“rules for function substitution”](#rules-for-function-substitution) for more details.

#### Rules for function substitution

This section explains how to replace a library function `mbedtls_foo()` by alternative code for test purposes. That is, library code calls `mbedtls_foo()`, and there is a mechanism to arrange for these calls to invoke different code.

Often `mbedtls_foo` is a macro which is defined to be a system function (like `mbedtls_calloc` or `mbedtls_fopen`), which we replace to mock or wrap the system function. This is useful to simulate I/O failure, for example. Note that if the macro can be replaced at compile time to support alternative platforms, the test code should be compatible with this compile-time configuration so that it works on these alternative platforms as well.

Sometimes the substitutable function is a `static inline` function that does nothing (not a macro, to avoid accidentally skipping side effects in its parameters), to provide a hook for test code; such functions should have a name that starts with the prefix `mbedtls_test_hook_`. In such cases, the function should generally not modify its parameters, so any pointer argument should be const. The function should return void.

With `MBEDTLS_TEST_HOOKS` set, `mbedtls_foo` is a global variable of function pointer type. This global variable is initialized to the system function, or to a function that does nothing. The global variable is declared in a header in the `library` directory such as `psa_crypto_invasive.h`. This is similar to the platform function configuration mechanism with `MBEDTLS_PLATFORM_xxx_ALT`.

In unit test code that needs to modify the internal behavior:

* The test function (or the whole test file) must depend on `MBEDTLS_TEST_HOOKS`.
* At the beginning of the test function, set the global function pointers to the desired value.
* In the test function's cleanup code, restore the global function pointers to their default value.

## Requirements

### General goals

We need to balance the following goals, which are sometimes contradictory.

* Coverage: we need to test behaviors which are not easy to trigger by using the API or which cannot be triggered deterministically, for example I/O failures.
* Correctness: we want to test the actual product, not a modified version, since conclusions drawn from a test of a modified product may not apply to the real product.
* Effacement: the product should not include features that are solely present for test purposes, since these increase the attack surface and the code size.
* Portability: tests should work on every platform. Skipping tests on certain platforms may hide errors that are only apparent on such platforms.
* Maintainability: tests should only enforce the documented behavior of the product, to avoid extra work when the product's internal or implementation-specific behavior changes. We should also not give the impression that whatever the tests check is guaranteed behavior of the product which cannot change in future versions.
Where those goals conflict, we should at least mitigate the goals that cannot be fulfilled, and document the architectural choices and their rationale.

### Problem areas

#### Allocation

Resource allocation can fail, but rarely does so in a typical test environment. How does the product cope if some allocations fail?

Resources include:

* Memory.
* Files in storage (PSA API only — in the Mbed TLS API, black-box unit tests are sufficient).
* Key slots (PSA API only).
* Key slots in a secure element (PSA SE HAL).
* Communication handles (PSA crypto service only).

#### Storage

Storage can fail, either due to hardware errors or to active attacks on trusted storage. How does the code cope if some storage accesses fail?

We also need to test resilience: if the system is reset during an operation, does it restart in a correct state?

#### Cleanup

When code should clean up resources, how do we know that they have truly been cleaned up?

* Zeroization of confidential data after use.
* Freeing memory.
* Freeing key slots.
* Freeing key slots in a secure element.
* Deleting files in storage (PSA API only).

#### Internal data

Sometimes it is useful to peek or poke internal data.

* Check the consistency of internal data (e.g. the output of key generation).
* Check the format of files (which matters so that the product can still read old files after an upgrade).
* Inject faults and test corruption checks inside the product.

## Possible approaches

Key to requirement tables:

* ++ requirement is fully met
* \+ requirement is mostly met
* ~ requirement is partially met but there are limitations
* ! requirement is somewhat problematic
* !! requirement is very problematic

### Fine-grained public interfaces

We can include all the features we want to test in the public interface. Then the tests can be truly black-box. The limitation of this approach is that it requires adding a lot of interfaces that are not useful in production. These interfaces have costs: they increase the code size, the attack surface, and the testing burden (exponentially, because we need to test all these interfaces in combination).

As a rule, we do not add public interfaces solely for testing purposes. We only add public interfaces if they are also useful in production, at least sometimes. For example, the main purpose of `mbedtls_psa_crypto_free` is to clean up all resources in tests, but this is also useful in production in some applications that only want to use PSA Crypto during part of their lifetime.

Mbed TLS traditionally has very fine-grained public interfaces, with many platform functions that can be substituted (`MBEDTLS_PLATFORM_xxx` macros). PSA Crypto has more opacity and fewer platform substitution macros.

| Requirement | Analysis |
| ----------- | -------- |
| Coverage | ~ Many useful tests are not reasonably achievable |
| Correctness | ++ Ideal |
| Effacement | !! Requires adding many otherwise-useless interfaces |
| Portability | ++ Ideal; the additional interfaces may be useful for portability beyond testing |
| Maintainability | !! Combinatorial explosion on the testing burden |
| | ! Public interfaces must remain for backward compatibility even if the test architecture changes |

### Fine-grained undocumented interfaces

We can include all the features we want to test in undocumented interfaces. Undocumented interfaces are described in public headers for the sake of the C compiler, but are described as “do not use” in comments (or not described at all) and are not included in Doxygen-rendered documentation.
This mitigates some of the downsides of [fine-grained public interfaces](#fine-grained-public-interfaces), but not all. In particular, the extra interfaces do increase the code size, the attack surface and the test surface.

Mbed TLS traditionally has a few internal interfaces, mostly intended for cross-module abstraction leakage rather than for testing. For the PSA API, we favor [internal interfaces](#internal-interfaces).

| Requirement | Analysis |
| ----------- | -------- |
| Coverage | ~ Many useful tests are not reasonably achievable |
| Correctness | ++ Ideal |
| Effacement | !! Requires adding many otherwise-useless interfaces |
| Portability | ++ Ideal; the additional interfaces may be useful for portability beyond testing |
| Maintainability | ! Combinatorial explosion on the testing burden |

### Internal interfaces

We can write tests that call internal functions that are not exposed in the public interfaces.

This is nice when it works, because it lets us test the unchanged product without compromising the design of the public interface. A limitation is that these interfaces must exist in the first place. If they don't, this has mostly the same downside as public interfaces: the extra interfaces increase the code size and the attack surface for no direct benefit to the product.

Another limitation is that internal interfaces need to be used correctly. We may accidentally rely on internal details in the tests that are not necessarily always true (for example that are platform-specific). We may accidentally use these internal interfaces in ways that don't correspond to the actual product.

This approach is mostly portable since it only relies on C interfaces. A limitation is that the test-only interfaces must not be hidden at link time (but link-time hiding is not something we currently do). Another limitation is that this approach does not work for users who patch the library by replacing some modules; this is a secondary concern since we do not officially offer this as a feature.

| Requirement | Analysis |
| ----------- | -------- |
| Coverage | ~ Many useful tests require additional internal interfaces |
| Correctness | + Does not require a product change |
| | ~ The tests may call internal functions in a way that does not reflect actual usage inside the product |
| Effacement | ++ Fine as long as the internal interfaces aren't added solely for test purposes |
| Portability | + Fine as long as we control how the tests are linked |
| | ~ Doesn't work if the users rewrite an internal module |
| Maintainability | + Tests interfaces that are documented; dependencies in the tests are easily noticed when changing these interfaces |

### Static analysis

If we guarantee certain properties through static analysis, we don't need to test them. This puts some constraints on the properties:

* We need to have confidence in the specification (but we can gain this confidence by evaluating the specification on test data).
* This does not work for platform-dependent properties unless we have a formal model of the platform.
| Requirement | Analysis |
| ----------- | -------- |
| Coverage | ~ Good for platform-independent properties, if we can guarantee them statically |
| Correctness | + Good as long as we have confidence in the specification |
| Effacement | ++ Zero impact on the code |
| Portability | ++ Zero runtime burden |
| Maintainability | ~ Static analysis is hard, but it's also helpful |

### Compile-time options

If there's code that we want to have in the product for testing, but not in production, we can add a compile-time option to enable it. This is very powerful and usually easy to use, but comes with a major downside: we aren't testing the same code anymore.

| Requirement | Analysis |
| ----------- | -------- |
| Coverage | ++ Most things can be tested that way |
| Correctness | ! Difficult to ensure that what we test is what we run |
| Effacement | ++ No impact on the product when built normally or on the documentation, if done right |
| | ! Risk of getting “no impact” wrong |
| Portability | ++ It's just C code so it works everywhere |
| | ~ Doesn't work if the users rewrite an internal module |
| Maintainability | + Test interfaces impact the product source code, but at least they're clearly marked as such in the code |

#### Guidelines for compile-time options

* **Minimize the number of compile-time options.**<br>
  Either we're testing or we're not. Fine-grained options for testing would require more test builds, especially if combinatorics enters the play.
* **Merely enabling the compile-time option should not change the behavior.**<br>
  When building in test mode, the code should have exactly the same behavior. Changing the behavior should require some action at runtime (calling a function or changing a variable).
* **Minimize the impact on code.**<br>
  We should not have test-specific conditional compilation littered through the code, as that makes the code hard to read.

### Runtime instrumentation

Some properties can be tested through runtime instrumentation: have the compiler or a similar tool inject something into the binary.

* Sanitizers check for certain bad usage patterns (ASan, MSan, UBSan, Valgrind).
* We can inject external libraries at link time. This can be a way to make system functions fail.

| Requirement | Analysis |
| ----------- | -------- |
| Coverage | ! Limited scope |
| Correctness | + Instrumentation generally does not affect the program's functional behavior |
| Effacement | ++ Zero impact on the code |
| Portability | ~ Depends on the method |
| Maintainability | ~ Depending on the instrumentation, this may require additional builds and scripts |
| | + Many properties come for free, but some require effort (e.g. the test code itself must be leak-free to avoid false positives in a leak detector) |

### Debugger-based testing

If we want to do something in a test that the product isn't capable of doing, we can use a debugger to read or modify the memory, or hook into the code at arbitrary points.

This is a very powerful approach, but it comes with limitations:

* The debugger may introduce behavior changes (e.g. timing). If we modify data structures in memory, we may do so in a way that the code doesn't expect.
* Due to compiler optimizations, the memory may not have the layout that we expect.
* Writing reliable debugger scripts is hard. We need to have confidence that we're testing what we mean to test, even in the face of compiler optimizations.
  Debugger scripting languages such as gdb's make it hard to automate even relatively simple things such as finding the place(s) in the binary corresponding to some place in the source code.
* Debugger scripts are very much non-portable.

| Requirement | Analysis |
| ----------- | -------- |
| Coverage | ++ The sky is the limit |
| Correctness | ++ The code is unmodified, and tested as compiled (so we even detect compiler-induced bugs) |
| | ! Compiler optimizations may hinder |
| | ~ Modifying the execution may introduce divergence |
| Effacement | ++ Zero impact on the code |
| Portability | !! Not all environments have a debugger, and even if they do, we'd need completely different scripts for every debugger |
| Maintainability | ! Writing reliable debugger scripts is hard |
| | !! Very tight coupling with the details of the source code and even with the compiler |

## Solutions

This section lists some strategies that are currently used for invasive testing, or planned to be used. This list is not intended to be exhaustive.

### Memory management

#### Zeroization testing

Goal: test that `mbedtls_platform_zeroize` does wipe the memory buffer.

Solution ([debugger](#debugger-based-testing)): implemented in `tests/scripts/test_zeroize.gdb`.

Rationale: this cannot be tested by adding C code, because the danger is that the compiler optimizes the zeroization away, and any C code that observes the zeroization would cause the compiler not to optimize it away.

#### Memory cleanup

Goal: test the absence of memory leaks.

Solution ([instrumentation](#runtime-instrumentation)): run tests with ASan. (We also use Valgrind, but it's slower than ASan, so we favor ASan.)

Since we run many test jobs with a memory leak detector, each test function or test program must clean up after itself. Use the cleanup code (after the `exit` label in test functions) to free any memory that the function may have allocated.

#### Robustness against memory allocation failure

Solution: TODO. We don't test this at all at this point.

#### PSA key store memory cleanup

Goal: test the absence of resource leaks in the PSA key store code, in particular that `psa_close_key` and `psa_destroy_key` work correctly.

Solution ([internal interface](#internal-interfaces)): in most tests involving PSA functions, the cleanup code explicitly calls `PSA_DONE()` instead of `mbedtls_psa_crypto_free()`. `PSA_DONE` fails the test if the key store in memory is not empty. Note that there must also be tests that call `mbedtls_psa_crypto_free` with keys still open, to verify that it does close all keys.

`PSA_DONE` is a macro defined in `psa_crypto_helpers.h` which uses `mbedtls_psa_get_stats()` to get information about the keystore content before calling `mbedtls_psa_crypto_free()`. This feature is mostly but not exclusively useful for testing, and may be moved under `MBEDTLS_TEST_HOOKS`.

### PSA storage

#### PSA storage cleanup on success

Goal: test that no stray files are left over in the key store after a test that succeeded.

Solution: TODO. Currently the various test suites do it differently.

#### PSA storage cleanup on failure

Goal: ensure that no stray files are left over in the key store even if a test has failed (as that could cause other tests to fail).

Solution: TODO. Currently the various test suites do it differently.

#### PSA storage resilience

Goal: test the resilience of PSA storage against power failures.

Solution: TODO. See the [secure element driver interface test strategy](driver-interface-test-strategy.html) for more information.
#### Corrupted storage

Goal: test the robustness against corrupted storage.

Solution ([internal interface](#internal-interfaces)): call `psa_its` functions to modify the storage.

#### Storage read failure

Goal: test the robustness against read errors.

Solution: TODO

#### Storage write failure

Goal: test the robustness against write errors (`STORAGE_FAILURE` or `INSUFFICIENT_STORAGE`).

Solution: TODO

#### Storage format stability

Goal: test that the storage format does not change between versions (or, if it does, that an upgrade path is provided).

Solution ([internal interface](#internal-interfaces)): call internal functions to inspect the content of the file.

Note that the storage format is defined not only by the general layout, but also by the numerical values of the encodings for key types and other metadata. For numerical values, there is a risk that we would accidentally modify a single value or a few values, so the tests should be exhaustive. This probably requires some compile-time analysis (perhaps the automation for `psa_constant_names` can be used here). TODO

### Other fault injection

#### PSA crypto init failure

Goal: test the failure of `psa_crypto_init`.

Solution ([compile-time option](#compile-time-options)): replace the entropy initialization functions by functions that can fail. This is the only failure point for `psa_crypto_init` that is present in all builds.

When we implement the PSA entropy driver interface, this should be reworked to use the entropy driver interface.

#### PSA crypto data corruption

The PSA crypto subsystem has a few checks to detect corrupted data in memory. We currently don't have a way to exercise those checks.

Solution: TODO. Corrupting a multipart operation structure can be done by looking inside the structure content, but only when running without isolation. To corrupt the key store, we would need to add a function to the library or to use a debugger.
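To make the function-substitution mechanism described under “Rules for function substitution” concrete, here is a minimal sketch. The names `mbedtls_test_hook_foo`, `test_hook_foo_nop` and `library_function` are invented for this illustration; real hooks are declared in headers such as `psa_crypto_invasive.h`.

```C
#include <stddef.h>

#if defined(MBEDTLS_TEST_HOOKS)
/* Test builds: the hook is a global function pointer that test code may
 * substitute. It defaults to a function that does nothing. */
static void test_hook_foo_nop( const void *data, size_t size )
{
    (void) data;
    (void) size;
}
void ( *mbedtls_test_hook_foo )( const void *, size_t ) = test_hook_foo_nop;
#else
/* Production builds: a static inline function that does nothing (not a
 * macro, so that side effects in the arguments are never skipped). */
static inline void mbedtls_test_hook_foo( const void *data, size_t size )
{
    (void) data;
    (void) size;
}
#endif

/* Library code calls the hook unconditionally at the point of interest. */
void library_function( const void *data, size_t size )
{
    mbedtls_test_hook_foo( data, size );
    /* ... normal processing ... */
}
```

Following the rules above, a unit test guarded by `MBEDTLS_TEST_HOOKS` would set `mbedtls_test_hook_foo` to its own function at the beginning of the test and restore the default value in its cleanup code.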

---

`repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/architecture/testing/test-framework.md`

# Mbed TLS test framework

This document is an overview of the Mbed TLS test framework and test tools.

This document is incomplete. You can help by expanding it.

## Unit tests

See <https://tls.mbed.org/kb/development/test_suites>

### Unit test descriptions

Each test case has a description which succinctly describes, for a human audience, what the test does. The first non-comment line of each paragraph in a `.data` file is the test description. The following rules and guidelines apply:

* Test descriptions may not contain semicolons, line breaks and other control characters, or non-ASCII characters. <br>
  Rationale: keep the tools that process test descriptions (`generate_test_code.py`, [outcome file](#outcome-file) tools) simple.
* Test descriptions must be unique within a `.data` file. If you can't think of a better description, the convention is to append `#1`, `#2`, etc. <br>
  Rationale: make it easy to relate a failure log to the test data. Avoid confusion between cases in the [outcome file](#outcome-file).
* Test descriptions should be a maximum of **66 characters**. <br>
  Rationale: 66 characters is what our various tools assume (leaving room for 14 more characters on an 80-column line). Longer descriptions may be truncated or may break a visual alignment. <br>
  We have a lot of test cases with longer descriptions, but they should be avoided. At least please make sure that the first 66 characters describe the test uniquely.
* Make the description descriptive. “foo: x=2, y=4” is more descriptive than “foo #2”. “foo: 0<x<y, both even” is even better if these inequalities and parities are why this particular test data was chosen.
* Avoid changing the description of an existing test case without a good reason. This breaks the tracking of failures across CI runs, since this tracking is based on the descriptions.

`tests/scripts/check_test_cases.py` enforces some rules and warns if some guidelines are violated.

## TLS tests

### SSL extension tests

#### SSL test case descriptions

Each test case in `ssl-opt.sh` has a description which succinctly describes, for a human audience, what the test does. The test description is the first parameter to `run_test`.

The same rules and guidelines apply as for [unit test descriptions](#unit-test-descriptions). In addition, the description must be written on the same line as `run_test`, in double quotes, for the sake of `check_test_cases.py`.

## Running tests

### Outcome file

#### Generating an outcome file

Unit tests and `ssl-opt.sh` record the outcome of each test case in a **test outcome file**. This feature is enabled if the environment variable `MBEDTLS_TEST_OUTCOME_FILE` is set. Set it to the path of the desired file.

If you run `all.sh --outcome-file test-outcome.csv`, this collects the outcome of all the test cases in `test-outcome.csv`.

#### Outcome file format

The outcome file is in a CSV format using `;` (semicolon) as the delimiter and no quoting. This means that fields may not contain newlines or semicolons. There is no title line.

The outcome file has 6 fields:

* **Platform**: a description of the platform, e.g. `Linux-x86_64` or `Linux-x86_64-gcc7-msan`.
* **Configuration**: a unique description of the configuration (`config.h`).
* **Test suite**: `test_suite_xxx` or `ssl-opt`.
* **Test case**: the description of the test case.
* **Result**: one of `PASS`, `SKIP` or `FAIL`.
* **Cause**: more information explaining the result.
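For illustration, one record of an outcome file could look as follows; every field value here is hypothetical. Note the trailing semicolon: the **Cause** field is present but empty for a plain `PASS`.

```
Linux-x86_64-gcc7;full_config;test_suite_rsa;RSA generate key #1;PASS;
```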

---

`repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/architecture/testing/driver-interface-test-strategy.md`

# Mbed Crypto driver interface test strategy

This document describes the test strategy for the driver interfaces in Mbed Crypto. Mbed Crypto has interfaces for secure element drivers, accelerator drivers and entropy drivers. This document is about testing Mbed Crypto itself; testing drivers is out of scope.

The driver interfaces are standardized through PSA Cryptography functional specifications.

## Secure element driver interface testing

### Secure element driver interfaces

#### Opaque driver interface

The [unified driver interface](../../proposed/psa-driver-interface.md) supports both transparent drivers (for accelerators) and opaque drivers (for secure elements). Drivers exposing this interface need to be registered at compile time by declaring their JSON description file.

#### Dynamic secure element driver interface

The dynamic secure element driver interface (SE interface for short) is defined by [`psa/crypto_se_driver.h`](../../../include/psa/crypto_se_driver.h). This is an interface between Mbed Crypto and one or more third-party drivers.

The SE interface consists of one function provided by Mbed Crypto (`psa_register_se_driver`) and many functions that drivers must implement. To make a driver usable by Mbed Crypto, the initialization code must call `psa_register_se_driver` with a structure that describes the driver. The structure mostly contains function pointers, pointing to the driver's methods. All calls to a driver function are triggered by a call to a PSA crypto API function.

### SE driver interface unit tests

This section describes unit tests that must be implemented to validate the secure element driver interface. Note that a test case may cover multiple requirements; for example a “good case” test can validate that the proper function is called, that it receives the expected inputs and that it produces the expected outputs.

Many SE driver interface unit tests could be covered by running the existing API tests with a key in a secure element.

#### SE driver registration

This applies to dynamic drivers only.

* Test `psa_register_se_driver` with valid and with invalid arguments.
* Make at least one failing call to `psa_register_se_driver` followed by a successful call.
* Make at least one test that successfully registers the maximum number of drivers and fails to register one more.

#### Dispatch to SE driver

For each API function that can lead to a driver call (more precisely, for each driver method call site, but this is practically equivalent):

* Make at least one test with a key in a secure element that checks that the driver method is called. A few API functions involve multiple driver methods; these should validate that all the expected driver methods are called.
* Make at least one test with a key that is not in a secure element that checks that the driver method is not called.
* Make at least one test with a key in a secure element with a driver that does not have the requisite method (i.e. the method pointer is `NULL`) but has the substructure containing that method, and check that the return value is `PSA_ERROR_NOT_SUPPORTED`.
* Make at least one test with a key in a secure element with a driver that does not have the substructure containing that method (i.e. the pointer to the substructure is `NULL`), and check that the return value is `PSA_ERROR_NOT_SUPPORTED`.
* At least one test should register multiple drivers with a key in each driver and check that the expected driver is called.
  This does not need to be done for all operations (use a white-box approach to determine whether operations may use different code paths to choose the driver).
* At least one test should register the same driver structure with multiple lifetime values and check that the driver receives the expected lifetime value.

Some methods only make sense as a group (for example, a driver that provides the MAC methods must provide all or none). In those cases, test with all of them null and with none of them null.

#### SE driver inputs

For each API function that can lead to a driver call (more precisely, for each driver method call site, but this is practically equivalent):

* Wherever the specification guarantees parameters that satisfy certain preconditions, check these preconditions whenever practical.
* If the API function can take parameters that are invalid and must not reach the driver, call the API function with such parameters and verify that the driver method is not called.
* Check that the expected inputs reach the driver. This may be implicit in a test that checks the outputs, if the only realistic way to obtain the correct outputs is to start from the expected inputs (as is often the case for cryptographic material, but not for metadata).

#### SE driver outputs

For each API function that leads to a driver call, call it with parameters that cause a driver to be invoked and check how Mbed Crypto handles the outputs:

* Correct outputs.
* Incorrect outputs such as an invalid output length.
* Expected errors (e.g. `PSA_ERROR_INVALID_SIGNATURE` from a signature verification method).
* Unexpected errors. At least test that if the driver returns `PSA_ERROR_GENERIC_ERROR`, this is propagated correctly.

Key creation functions invoke multiple methods and need more complex error handling:

* Check the consequence of errors detected at each stage (slot number allocation or validation, key creation method, storage accesses).
* Check that the storage ends up in the expected state. At least make sure that no intermediate file remains after a failure.

#### Persistence of SE keys

The following tests must be performed at least once for each key creation method (import, generate, ...):

* Test that keys in a secure element survive `psa_close_key(); psa_open_key()`.
* Test that keys in a secure element survive `mbedtls_psa_crypto_free(); psa_crypto_init()`.
* Test that the driver's persistent data survives `mbedtls_psa_crypto_free(); psa_crypto_init()`.
* Test that `psa_destroy_key()` does not leave any trace of the key.

#### Resilience for SE drivers

Creating or removing a key in a secure element involves multiple storage modifications (M<sub>1</sub>, ..., M<sub>n</sub>). If the operation is interrupted by a reset at any point, it must be either rolled back or completed.

* For each potential interruption point (before M<sub>1</sub>, between M<sub>1</sub> and M<sub>2</sub>, ..., after M<sub>n</sub>), call `mbedtls_psa_crypto_free(); psa_crypto_init()` at that point and check that this either rolls back or completes the operation that was started.
* This must be done for each key creation method and for key destruction.
* This must be done for each possible flow, including error cases (e.g. a key creation that fails midway due to `OUT_OF_MEMORY`).
* The recovery during `psa_crypto_init` can itself be interrupted. Test those interruptions too.
* Two things need to be tested: the key that is being created or destroyed, and the driver's persistent storage.
* Check both that the storage has the expected content (this can be done by e.g. using a key that is supposed to be present) and that it does not have any unexpected content (for keys, this can be done by checking that `psa_open_key` fails with `PSA_ERROR_DOES_NOT_EXIST`).

This requires instrumenting the storage implementation, either to force it to fail at each point or to record successive storage states and replay each of them. Each `psa_its_xxx` function call is assumed to be atomic.

### SE driver system tests

#### Real-world use case

We must have at least one driver that is close to real-world conditions:

* With its own source tree.
* Running on actual hardware.
* Run the full driver validation test suite (which does not yet exist).
* Run at least one test application (e.g. the Mbed OS TLS example).

This requirement shall be fulfilled by the [Microchip ATECC508A driver](https://github.com/ARMmbed/mbed-os-atecc608a/).

#### Complete driver

We should have at least one driver that covers the whole interface:

* With its own source tree.
* Implementing all the methods.
* Run the full driver validation test suite (which does not yet exist).

A PKCS#11 driver would be a good candidate. It would be useful as part of our product offering.

## Transparent driver interface testing

The [unified driver interface](../../proposed/psa-driver-interface.md) defines interfaces for accelerators.

### Test requirements

#### Requirements for transparent driver testing

Every cryptographic mechanism for which a transparent driver interface exists (key creation, cryptographic operations, …) must be exercised in at least one build. The test must verify that the driver code is called.

#### Requirements for fallback

The driver interface includes a fallback mechanism so that a driver can reject a request at runtime and let another driver handle the request. For each entry point, there must be at least three test runs with two or more drivers available, with driver A configured to fall back to driver B:

* one run where A returns `PSA_SUCCESS`;
* one where A returns `PSA_ERROR_NOT_SUPPORTED` and B is invoked;
* one where A returns a different error and B is not invoked.

## Entropy and randomness interface testing

TODO
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/architecture/testing/psa-storage-format-testing.md
# Mbed TLS PSA keystore format stability testing strategy

## Introduction

The PSA crypto subsystem includes a persistent key store. It is possible to create a persistent key and read it back later. This must work even if Mbed TLS has been upgraded in the meantime (except for deliberate breaks in the backward compatibility of the storage).

The goal of this document is to define a test strategy for the key store that not only validates that it's possible to load a key that was saved with the version of Mbed TLS under test, but also that it's possible to load a key that was saved with previous versions of Mbed TLS.

Interoperability is not a goal: PSA crypto implementations are not intended to have compatible storage formats. Downgrading is not required to work.

## General approach

### Limitations of a direct approach

The goal of storage format stability testing is: as a user of Mbed TLS, I want to store a key under version V and read it back under version W, with W ≥ V. Doing the testing this way would be difficult because we'd need to have version V of Mbed TLS available when testing version W.

An alternative, semi-direct approach consists of generating test data under version V, and reading it back under version W. Done naively, this would require keeping a large amount of test data (full test coverage multiplied by the number of versions that we want to preserve backward compatibility with).

### Save-and-compare approach

Importing and saving a key is deterministic. Therefore we can ensure the stability of the storage format by creating test cases under a version V of Mbed TLS, where the test case parameters include both the parameters to pass to key creation and the expected state of the storage after the key is created. The test case creates a key as indicated by the parameters, then compares the actual state of the storage with the expected state. In addition, the test case also loads the key and checks that it has the expected data and metadata.

If the test passes with version V, this means that the test data is consistent with what the implementation does. When the test later runs under version W ≥ V, it creates and reads back a storage state which is known to be identical to the state that V would have produced. Thus, this approach validates that W can read storage states created by V.

Use a similar approach for files other than keys where possible and relevant.

### Keeping up with storage format evolution

Test cases should normally not be removed from the code base: if something has worked before, it should keep working in future versions, so we should keep testing it.

If the way certain keys are stored changes, and we don't deliberately decide to stop supporting old keys (which should only be done by retiring a version of the storage format), then we should keep the corresponding test cases in load-only mode: create a file with the expected content, load it and check the data that it contains.

## Storage architecture overview

The PSA subsystem provides storage on top of the PSA trusted storage interface. The state of the storage is a mapping from file identifier (a 64-bit number) to file content (a byte array). These files include:

* [Key files](#key-storage) (files containing one key's metadata and, except for some secure element keys, key material).
* The [random generator injected seed or state file](#random-generator-state) (`PSA_CRYPTO_ITS_RANDOM_SEED_UID`).
* [Storage transaction file](#storage-transaction-resumption).
* [Driver state files](#driver-state-files).
For a more detailed description, refer to the [Mbed Crypto storage specification](../mbed-crypto-storage-specification.md).

In addition, Mbed TLS includes an implementation of the PSA trusted storage interface on top of C stdio. This document addresses the test strategy for [PSA ITS over file](#psa-its-over-file) in a separate section below.

## Key storage testing

This section describes the desired test cases for keys created with the current storage format version. When the storage format changes, if backward compatibility is desired, old test data should be kept as described under [“Keeping up with storage format evolution”](#keeping-up-with-storage-format-evolution).

### Keystore layout

Objective: test that the key file name corresponds to the key identifier.

Method: Create a key with a given identifier (using `psa_import_key`) and verify that a file with the expected name is created, and no other. Repeat for different identifiers.

### General key format

Objective: test the format of the key file: which field goes where and how big it is.

Method: Create a key with certain metadata with `psa_import_key`. Read the file content and validate that it has the expected layout, deduced from the storage specification. Repeat with different metadata. Ensure that there are test cases covering all fields.

### Enumeration of test cases for keys

Objective: ensure that the coverage is sufficient to have assurance that all keys are stored correctly. This requires a sufficient selection of key types, sizes, policies, etc.

In particular, the tests must validate that each `PSA_xxx` constant that is stored in a key is covered by at least one test case:

* Usage flags: `PSA_KEY_USAGE_xxx`.
* Algorithms in policies: `PSA_ALG_xxx`.
* Key types: `PSA_KEY_TYPE_xxx`, `PSA_ECC_FAMILY_xxx`, `PSA_DH_FAMILY_xxx`.

Method: Each test case creates a key with `psa_import_key`, purges it from memory, then reads it back and exercises it. Generate test cases automatically based on an enumeration of available constants and some knowledge of what attributes (sizes, algorithms, …) and content to use for keys of a certain type. Note that the generated test cases will be checked into the repository (generating test cases at runtime would not allow us to test the stability of the format, only that a given version is internally consistent).

### Testing with alternative lifetime values

Objective: have test coverage for lifetimes other than the default persistent lifetime (`PSA_KEY_LIFETIME_PERSISTENT`).

Method:

* For alternative locations: have tests conditional on the presence of a driver for that location.
* For alternative persistence levels: TODO

## Random generator state

TODO

## Driver state files

Not yet implemented. TODO

## Storage transaction resumption

Only relevant for secure element support. Not yet fully implemented. TODO

## PSA ITS over file

TODO
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/proposed/psa-driver-integration-guide.md
Building Mbed TLS with PSA cryptoprocessor drivers
==================================================

**This is a specification of work in progress. The implementation is not yet merged into Mbed TLS.**

This document describes how to build Mbed TLS with additional cryptoprocessor drivers that follow the PSA cryptoprocessor driver interface.

The interface is not fully implemented in Mbed TLS yet and is disabled by default. You can enable the experimental work in progress by setting `MBEDTLS_PSA_CRYPTO_DRIVERS` in the compile-time configuration. Please note that the interface may still change: until further notice, we do not guarantee backward compatibility with existing driver code when `MBEDTLS_PSA_CRYPTO_DRIVERS` is enabled.

## Introduction

The PSA cryptography driver interface provides a way to build Mbed TLS with additional code that implements certain cryptographic primitives. This is primarily intended to support platform-specific hardware.

Note that such drivers are only available through the PSA cryptography API (crypto functions beginning with `psa_`, and X.509 and TLS interfaces that reference PSA types).

Concretely speaking, a driver consists of one or more **driver description files** in JSON format and some code to include in the build. The driver code can either be provided in binary form as an additional object file to link, or in source form.

## How to build Mbed TLS with drivers

To build Mbed TLS with drivers:

1. Activate `MBEDTLS_PSA_CRYPTO_DRIVERS` in the library configuration.

   ```
   cd /path/to/mbedtls
   scripts/config.py set MBEDTLS_PSA_CRYPTO_DRIVERS
   ```

2. Pass the driver description files through the Make variable `PSA_DRIVERS` when building the library.

   ```
   cd /path/to/mbedtls
   make PSA_DRIVERS="/path/to/acme/driver.json /path/to/nadir/driver.json" lib
   ```

3. Link your application with the implementation of the driver functions.

   ```
   cd /path/to/application
   ld myapp.o -L/path/to/acme -lacmedriver -L/path/to/nadir -lnadirdriver -L/path/to/mbedtls -lmbedcrypto
   ```

<!-- TODO: what if the driver is provided as C source code? -->

<!-- TODO: what about additional include files? -->
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/proposed/psa-conditional-inclusion-c.md
Conditional inclusion of cryptographic mechanism through the PSA API in Mbed TLS
================================================================================

This document is a proposed interface for deciding at build time which cryptographic mechanisms to include in the PSA Cryptography interface.

This is currently a proposal for Mbed TLS. It is not currently on track for standardization in PSA.

## Introduction

### Purpose of this specification

The [PSA Cryptography API specification](https://armmbed.github.io/mbed-crypto/psa/#application-programming-interface) specifies the interface between a PSA Cryptography implementation and an application. The interface defines a number of categories of cryptographic algorithms (hashes, MAC, signatures, etc.). In each category, a typical implementation offers many algorithms (e.g. for signatures: RSA-PKCS#1v1.5, RSA-PSS, ECDSA). When building the implementation for a specific use case, it is often desirable to include only a subset of the available cryptographic mechanisms, primarily in order to reduce the code footprint of the compiled system.

The present document proposes a way for an application using the PSA cryptography interface to declare which mechanisms it requires.

### Conditional inclusion of legacy cryptography modules

Mbed TLS offers a way to select which cryptographic mechanisms are included in a build through its configuration file (`config.h`). This mechanism is based on two main sets of symbols: `MBEDTLS_xxx_C` controls the availability of the mechanism to the application, and `MBEDTLS_xxx_ALT` controls the availability of an alternative implementation, so the software implementation is only included if `MBEDTLS_xxx_C` is defined but not `MBEDTLS_xxx_ALT`.

### PSA evolution

In the PSA cryptography interface, the **core** (built-in implementations of cryptographic mechanisms) can be augmented with drivers. **Transparent drivers** replace the built-in implementation of a cryptographic mechanism (or, with **fallback**, the built-in implementation is tried if the driver only has partial support for the mechanism). **Opaque drivers** implement cryptographic mechanisms on keys which are stored in a separate domain such as a secure element, for which the core only does key management and dispatch using wrapped key blobs or key identifiers.

The current model is difficult to adapt to the PSA interface for several reasons. The `MBEDTLS_xxx_ALT` symbols are somewhat inconsistent, and in particular do not work well for asymmetric cryptography. For example, many parts of the ECC code have no `MBEDTLS_xxx_ALT` symbol, so a platform with ECC acceleration that can perform all ECDSA and ECDH operations in the accelerator would still include the `bignum` module and large parts of the `ecp_curves`, `ecp` and `ecdsa` modules. Also, the availability of a transparent driver for a mechanism does not translate directly to `MBEDTLS_xxx` symbols.

### Requirements

[Req.interface] The application can declare which cryptographic mechanisms it needs.

[Req.inclusion] If the application does not require a mechanism, a suitably configured Mbed TLS build must not include it. The granularity of mechanisms must work for typical use cases and has [acceptable limitations](#acceptable-limitations).

[Req.drivers] If a PSA driver is available in the build, a suitably configured Mbed TLS build must not include the corresponding software code (unless a software fallback is needed).
[Req.c] The configuration mechanism consists of C preprocessor definitions, and the build does not require tools other than a C compiler. This is necessary to allow building an application and Mbed TLS in development environments that do not allow third-party tools. [Req.adaptability] The implementation of the mechanism must be adaptable with future evolution of the PSA cryptography specifications and Mbed TLS. Therefore the interface must remain sufficiently simple and abstract. ### Acceptable limitations [Limitation.matrix] If a mechanism is defined by a combination of algorithms and key types, for example a block cipher mode (CBC, CTR, CFB, …) and a block permutation (AES, CAMELLIA, ARIA, …), there is no requirement to include only specific combinations. [Limitation.direction] For mechanisms that have multiple directions (for example encrypt/decrypt, sign/verify), there is no requirement to include only one direction. [Limitation.size] There is no requirement to include only support for certain key sizes. [Limitation.multipart] Where there are multiple ways to perform an operation, for example single-part and multi-part, there is no mechanism to select only one or a subset of the possible ways. ## Interface ### PSA Crypto configuration file The PSA Crypto configuration file `psa/crypto_config.h` defines a series of symbols of the form `PSA_WANT_xxx` where `xxx` describes the feature that the symbol enables. The symbols are documented in the section [“PSA Crypto configuration symbols”](#psa-crypto-configuration-symbols) below. The symbol `MBEDTLS_PSA_CRYPTO_CONFIG` in `mbedtls/config.h` determines whether `psa/crypto_config.h` is used. * If `MBEDTLS_PSA_CRYPTO_CONFIG` is unset, which is the default at least in Mbed TLS 2.x versions, things are as they are today: the PSA subsystem includes generic code unconditionally, and includes support for specific mechanisms conditionally based on the existing `MBEDTLS_xxx_` symbols. * If `MBEDTLS_PSA_CRYPTO_CONFIG` is set, the necessary software implementations of cryptographic algorithms are included based on both the content of the PSA Crypto configuration file and the Mbed TLS configuration file. For example, the code in `aes.c` is enabled if either `mbedtls/config.h` contains `MBEDTLS_AES_C` or `psa/crypto_config.h` contains `PSA_WANT_KEY_TYPE_AES`. ### PSA Crypto configuration symbols #### Configuration symbol syntax A PSA Crypto configuration symbol is a C preprocessor symbol whose name starts with `PSA_WANT_`. * If the symbol is not defined, the corresponding feature is not included. * If the symbol is defined to a preprocessor expression with the value `1`, the corresponding feature is included. * If the symbol is defined with a different value, the behavior is currently undefined and reserved for future use. #### Configuration symbol usage The presence of a symbol `PSA_WANT_xxx` in the Mbed TLS configuration determines whether a feature is available through the PSA API. These symbols should be used in any place that requires conditional compilation based on the availability of a cryptographic mechanism through the PSA API, including: * In Mbed TLS test code. * In Mbed TLS library code using `MBEDTLS_USE_PSA_CRYPTO`, for example in TLS to determine which cipher suites to enable. * In application code that provides additional features based on cryptographic capabilities, for example additional key parsing and formatting functions, or cipher suite availability for network protocols. 
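As an illustration of this last use, here is a minimal sketch of application code guarded by a configuration symbol. The helper function `app_hash_message` is hypothetical and not part of Mbed TLS; only `psa_hash_compute` and `PSA_WANT_ALG_SHA_256` come from the interfaces described in this document.

```C
/* Minimal sketch: app_hash_message is a hypothetical application helper. */
#include <psa/crypto.h>

#if defined(PSA_WANT_ALG_SHA_256)
/* Only compiled when SHA-256 is requested through the PSA API. */
psa_status_t app_hash_message(const uint8_t *input, size_t input_length,
                              uint8_t *hash, size_t hash_size,
                              size_t *hash_length)
{
    return psa_hash_compute(PSA_ALG_SHA_256, input, input_length,
                            hash, hash_size, hash_length);
}
#endif /* PSA_WANT_ALG_SHA_256 */
```

Guarding on `PSA_WANT_xxx` rather than on legacy `MBEDTLS_xxx_C` symbols keeps such code correct regardless of whether the mechanism is provided by built-in software or by a driver.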
#### Configuration symbol semantics

If a feature is not requested for inclusion in the PSA Crypto configuration file, it may still be included in the build, either because the feature has been requested in some other way, or because the library does not support the exclusion of this feature. Mbed TLS should make a best effort to support the exclusion of all features, but in some cases this may be judged too much effort for too little benefit.

#### Configuration symbols for key types

For each constant or constructor macro of the form `PSA_KEY_TYPE_xxx`, the symbol **`PSA_WANT_KEY_TYPE_xxx`** indicates that support for this key type is desired.

For asymmetric cryptography, `PSA_WANT_KEY_TYPE_xxx_KEY_PAIR` determines whether private-key operations are desired, and `PSA_WANT_KEY_TYPE_xxx_PUBLIC_KEY` determines whether public-key operations are desired. `PSA_WANT_KEY_TYPE_xxx_KEY_PAIR` implicitly enables `PSA_WANT_KEY_TYPE_xxx_PUBLIC_KEY`: there is no way to only include private-key operations (which typically saves little code).

#### Configuration symbols for elliptic curves

For elliptic curve key types, only the specified curves are included. To include a curve, include a symbol of the form **`PSA_WANT_ECC_family_size`**. For example: `PSA_WANT_ECC_SECP_R1_256` for secp256r1, `PSA_WANT_ECC_MONTGOMERY_255` for Curve25519. It is an error to require an ECC key type but no curve, and Mbed TLS will reject this at compile time.

Rationale: this is a deviation from the general principle that `PSA_ECC_FAMILY_xxx` would have a corresponding symbol `PSA_WANT_ECC_FAMILY_xxx`. This deviation is justified by the fact that it is very common to wish to include only certain curves in a family, and that can lead to a significant gain in code size.

#### Configuration symbols for Diffie-Hellman groups

There are no configuration symbols for Diffie-Hellman groups (`PSA_DH_GROUP_xxx`).

Rationale: Finite-field Diffie-Hellman code is usually not specialized for any particular group, so reducing the number of available groups at compile time only saves a little code space. Constrained implementations tend to omit FFDH anyway, so the small code size gain is not important.

#### Configuration symbols for algorithms

For each constant or constructor macro of the form `PSA_ALG_xxx`, the symbol **`PSA_WANT_ALG_xxx`** indicates that support for this algorithm is desired.

For parametrized algorithms, the `PSA_WANT_ALG_xxx` symbol indicates whether the base mechanism is supported. Parameters must themselves be included through their own `PSA_WANT_ALG_xxx` symbols. It is an error to include a base mechanism without at least one possible parameter, and Mbed TLS will reject this at compile time. For example, `PSA_WANT_ALG_ECDSA` requires the inclusion of randomized ECDSA for all hash algorithms whose corresponding symbol `PSA_WANT_ALG_xxx` is enabled.

## Implementation

### Additional non-public symbols

#### Accounting for transparent drivers

In addition to the [configuration symbols](#psa-crypto-configuration-symbols), we need two parallel or mostly parallel sets of symbols:

* **`MBEDTLS_PSA_ACCEL_xxx`** indicates whether a fully-featured, fallback-free transparent driver is available.
* **`MBEDTLS_PSA_BUILTIN_xxx`** indicates whether the software implementation is needed.

`MBEDTLS_PSA_ACCEL_xxx` is one of the outputs of the transpilation of a driver description, alongside the glue code for calling the drivers. `MBEDTLS_PSA_BUILTIN_xxx` is enabled when `PSA_WANT_xxx` is enabled and `MBEDTLS_PSA_ACCEL_xxx` is disabled.
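For example, the deduction for one mechanism can be sketched as follows. This illustrates the rule just stated; it is not the literal contents of `mbedtls/config_psa.h`, and the SHA-256 symbol names are assumed from the naming scheme described in this document.

```C
/* Sketch of the deduction rule for one mechanism; illustrative only. */
#if defined(PSA_WANT_ALG_SHA_256) && !defined(MBEDTLS_PSA_ACCEL_ALG_SHA_256)
/* No fallback-free accelerator: the software implementation is needed. */
#define MBEDTLS_PSA_BUILTIN_ALG_SHA_256 1
#define MBEDTLS_SHA256_C
#endif
```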
These symbols are not part of the public interface of Mbed TLS towards applications or to drivers, regardless of whether the symbols are actually visible. ### Architecture of symbol definitions #### New-style definition of configuration symbols When `MBEDTLS_PSA_CRYPTO_CONFIG` is set, the header file `mbedtls/config.h` needs to define all the `MBEDTLS_xxx_C` configuration symbols, including the ones deduced from the PSA Crypto configuration. It does this by including the new header file **`mbedtls/config_psa.h`**, which defines the `MBEDTLS_PSA_BUILTIN_xxx` symbols and deduces the corresponding `MBEDTLS_xxx_C` (and other) symbols. `mbedtls/config_psa.h` includes `psa/crypto_config.h`, the user-editable file that defines application requirements. #### Old-style definition of configuration symbols When `MBEDTLS_PSA_CRYPTO_CONFIG` is not set, the configuration of Mbed TLS works as before, and the inclusion of non-PSA code only depends on `MBEDTLS_xxx` symbols defined (or not) in `mbedtls/config.h`. Furthermore, the new header file **`mbedtls/config_psa.h`** deduces PSA configuration symbols (`PSA_WANT_xxx`, `MBEDTLS_PSA_BUILTIN_xxx`) from classic configuration symbols (`MBEDTLS_xxx`). The `PSA_WANT_xxx` definitions in `mbedtls/config_psa.h` are needed not only to build the PSA parts of the library, but also to build code that uses these parts. This includes structure definitions in `psa/crypto_struct.h`, size calculations in `psa/crypto_sizes.h`, and application code that's specific to a given cryptographic mechanism. In Mbed TLS itself, code under `MBEDTLS_USE_PSA_CRYPTO` and conditional compilation guards in tests and sample programs need `PSA_WANT_xxx`. Since some existing applications use a handwritten `mbedtls/config.h` or an edited copy of `mbedtls/config.h` from an earlier version of Mbed TLS, `mbedtls/config_psa.h` must be included via an already existing header that is not `mbedtls/config.h`, so it is included via `psa/crypto.h` (for example from `psa/crypto_platform.h`). #### Summary of definitions of configuration symbols Whether `MBEDTLS_PSA_CRYPTO_CONFIG` is set or not, `mbedtls/config_psa.h` includes `mbedtls/crypto_drivers.h`, a header file generated by the transpilation of the driver descriptions. It defines `MBEDTLS_PSA_ACCEL_xxx` symbols according to the availability of transparent drivers without fallback. The following table summarizes where symbols are defined depending on the configuration mode. * (U) indicates a symbol that is defined by the user (application). * (D) indicates a symbol that is deduced from other symbols by code that ships with Mbed TLS. * (G) indicates a symbol that is generated from driver descriptions. | Symbols | With `MBEDTLS_PSA_CRYPTO_CONFIG` | Without `MBEDTLS_PSA_CRYPTO_CONFIG` | | ------------------------- | -------------------------------- | ----------------------------------- | | `MBEDTLS_xxx_C` | `mbedtls/config.h` (U) or | `mbedtls/config.h` (U) | | | `mbedtls/config_psa.h` (D) | | | `PSA_WANT_xxx` | `psa/crypto_config.h` (U) | `mbedtls/config_psa.h` (D) | | `MBEDTLS_PSA_BUILTIN_xxx` | `mbedtls/config_psa.h` (D) | `mbedtls/config_psa.h` (D) | | `MBEDTLS_PSA_ACCEL_xxx` | `mbedtls/crypto_drivers.h` (G) | N/A | #### Visibility of internal symbols Ideally, the `MBEDTLS_PSA_ACCEL_xxx` and `MBEDTLS_PSA_BUILTIN_xxx` symbols should not be visible to application code or driver code, since they are not part of the public interface of the library. 
However these symbols are needed to deduce whether to include library modules (for example `MBEDTLS_AES_C` has to be enabled if `MBEDTLS_PSA_BUILTIN_KEY_TYPE_AES` is enabled), which makes it difficult to keep them private. #### Compile-time checks The header file **`library/psa_check_config.h`** applies sanity checks to the configuration, throwing `#error` if something is wrong. A mechanism similar to `mbedtls/check_config.h` detects errors such as enabling ECDSA but no curve. Since configuration symbols must be undefined or 1, any other value should trigger an `#error`. #### Automatic generation of preprocessor symbol manipulations A lot of the preprocessor symbol manipulation is systematic calculations that analyze the configuration. `mbedtls/config_psa.h` and `library/psa_check_config.h` should be generated automatically, in the same manner as `version_features.c`. ### Structure of PSA Crypto library code #### Conditional inclusion of library entry points An entry point can be eliminated entirely if no algorithm requires it. #### Conditional inclusion of mechanism-specific code Code that is specific to certain key types or to certain algorithms must be guarded by the applicable symbols: `PSA_WANT_xxx` for code that is independent of the application, and `MBEDTLS_PSA_BUILTIN_xxx` for code that calls an Mbed TLS software implementation. ## PSA standardization ### JSON configuration mechanism At the time of writing, the preferred configuration mechanism for a PSA service is in JSON syntax. The translation from JSON to build instructions is not specified by PSA. For PSA Crypto, the preferred configuration mechanism would be similar to capability specifications of transparent drivers. The same JSON properties that are used to mean “this driver can perform that mechanism” in a driver description would be used to mean “the application wants to perform that mechanism” in the application configuration. ### From JSON to C The JSON capability language allows a more fine-grained selection than the C mechanism proposed here. For example, it allows requesting only single-part mechanisms, only certain key sizes, or only certain combinations of algorithms and key types. The JSON capability language can be translated approximately to the boolean symbol mechanism proposed here. The approximation considers a feature to be enabled if any part of it is enabled. For example, if there is a capability for AES-CTR and one for CAMELLIA-GCM, the translation to boolean symbols will also include AES-GCM and CAMELLIA-CTR. If there is a capability for AES-128, the translation will also include AES-192 and AES-256. The boolean symbol mechanism proposed here can be translated to a list of JSON capabilities: for each included algorithm, include a capability with that algorithm, the key types that apply to that algorithm, no size restriction, and all the entry points that apply to that algorithm. ## Open questions ### Open questions about the interface #### Naming of symbols The names of [elliptic curve symbols](#configuration-symbols-for-elliptic-curves) are a bit weird: `SECP_R1_256` instead of `SECP256R1`, `MONTGOMERY_255` instead of `CURVE25519`. Should we make them more classical, but less systematic? #### Impossible combinations What does it mean to have `PSA_WANT_ALG_ECDSA` enabled but with only Curve25519? Is it a mandatory error? #### Diffie-Hellman Way to request only specific groups? Not a priority: constrained devices don't do FFDH. Specify it as may change in future versions. 
#### Coexistence with the current Mbed TLS configuration The two mechanisms have very different designs. Is there serious potential for confusion? Do we understand how the combinations work? ### Open questions about the design #### Algorithms without a key type or vice versa Is it realistic to mandate a compile-time error if a key type is required, but no matching algorithm, or vice versa? Is it always the right thing, for example if there is an opaque driver that manipulates this key type? #### Opaque-only mechanisms If a mechanism should only be supported in an opaque driver, what does the core need to know about it? Do we have all the information we need? This is especially relevant to suppress a mechanism completely if there is no matching algorithm. For example, if there is no transparent implementation of RSA or ECDSA, `psa_sign_hash` and `psa_verify_hash` may still be needed if there is an opaque signature driver. ### Open questions about the implementation #### Testability Is this proposal decently testable? There are a lot of combinations. What combinations should we test? <!-- Local Variables: time-stamp-line-limit: 40 time-stamp-start: "Time-stamp: *\"" time-stamp-end: "\"" time-stamp-format: "%04Y/%02m/%02d %02H:%02M:%02S %Z" time-stamp-time-zone: "GMT" End: -->
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/proposed/psa-driver-interface.md
PSA Cryptoprocessor Driver Interface ==================================== This document describes an interface for cryptoprocessor drivers in the PSA cryptography API. This interface complements the [PSA Cryptography API specification](https://armmbed.github.io/mbed-crypto/psa/#application-programming-interface), which describes the interface between a PSA Cryptography implementation and an application. This specification is work in progress and should be considered to be in a beta stage. There is ongoing work to implement this interface in Mbed TLS, which is the reference implementation of the PSA Cryptography API. At this stage, Arm does not expect major changes, but minor changes are expected based on experience from the first implementation and on external feedback. ## Introduction ### Purpose of the driver interface The PSA Cryptography API defines an interface that allows applications to perform cryptographic operations in a uniform way regardless of how the operations are performed. Under the hood, different keys may be stored and used in different hardware or in different logical partitions, and different algorithms may involve different hardware or software components. The driver interface allows implementations of the PSA Cryptography API to be built compositionally. An implementation of the PSA Cryptography API is composed of a **core** and zero or more **drivers**. The core handles key management, enforces key usage policies, and dispatches cryptographic operations either to the applicable driver or to built-in code. Functions in the PSA Cryptography API invoke functions in the core. Code from the core calls drivers as described in the present document. ### Types of drivers The PSA Cryptography driver interface supports two types of cryptoprocessors, and accordingly two types of drivers. * **Transparent** drivers implement cryptographic operations on keys that are provided in cleartext at the beginning of each operation. They are typically used for hardware **accelerators**. When a transparent driver is available for a particular combination of parameters (cryptographic algorithm, key type and size, etc.), it is used instead of the default software implementation. Transparent drivers can also be pure software implementations that are distributed as plug-ins to a PSA Cryptography implementation (for example, an alternative implementation with different performance characteristics, or a certified implementation). * **Opaque** drivers implement cryptographic operations on keys that can only be used inside a protected environment such as a **secure element**, a hardware security module, a smartcard, a secure enclave, etc. An opaque driver is invoked for the specific [key location](#lifetimes-and-locations) that the driver is registered for: the dispatch is based on the key's lifetime. ### Requirements The present specification was designed to fulfill the following high-level requirements. [Req.plugins] It is possible to combine multiple drivers from different providers into the same implementation, without any prior arrangement other than choosing certain names and values from disjoint namespaces. [Req.compile] It is possible to compile the code of each driver and of the core separately, and link them together. A small amount of glue code may need to be compiled once the list of drivers is available. 
[Req.types] Support drivers for the following types of hardware: accelerators that operate on keys in cleartext; cryptoprocessors that can wrap keys with a built-in key but not store user keys; and cryptoprocessors that store key material.

[Req.portable] The interface between drivers and the core does not involve any platform-specific consideration. Driver calls are simple C function calls. Interactions with platform-specific hardware happen only inside the driver (and in fact a driver need not involve any hardware at all).

[Req.location] Applications can tell which location values correspond to which secure element drivers.

[Req.fallback] Accelerator drivers can specify that they do not fully support a cryptographic mechanism and that a fallback to core code may be necessary. Conversely, if an accelerator fully supports a cryptographic mechanism, the core must be able to omit code for this mechanism.

[Req.mechanisms] Drivers can specify which mechanisms they support. A driver's code will not be invoked for cryptographic mechanisms that it does not support.

## Overview of drivers

### Deliverables for a driver

To write a driver, you need to implement some functions with C linkage, and to declare these functions in a **driver description file**. The driver description file declares which functions the driver implements and what cryptographic mechanisms they support. If the driver description references custom types, macros or constants, you also need to provide C header files defining those elements.

The concrete syntax for a driver description file is JSON. The structure of this JSON file is specified in the section [“Driver description syntax”](#driver-description-syntax).

A driver therefore consists of:

* A driver description file (in JSON format).
* C header files defining the types required by the driver description. The names of these header files are declared in the driver description file.
* An object file compiled for the target platform defining the entry point functions specified by the driver description. Implementations may allow drivers to be provided as source files and compiled with the core instead of being pre-compiled.

How to provide the driver description file, the C header files and the object code is implementation-dependent.

### Driver description syntax

The concrete syntax for a driver description file is JSON.

#### Driver description list

PSA Cryptography core implementations should support multiple drivers. The driver description files are passed to the implementation as an ordered list in an unspecified manner. This may be, for example, a list of file names passed on a command line, or a JSON list whose elements are individual driver descriptions.

#### Driver description top-level element

A driver description is a JSON object containing the following properties:

* `"prefix"` (mandatory, string). This must be a valid prefix for a C identifier. All the types and functions provided by the driver have a name that starts with this prefix unless overridden with a `"name"` element in the applicable capability as described below.
* `"type"` (mandatory, string). One of `"transparent"` or `"opaque"`.
* `"headers"` (optional, array of strings). A list of header files. These header files must define the types, macros and constants referenced by the driver description. They may declare the entry point functions, but this is not required. They may include other PSA headers and standard headers of the platform. Whether they may include other headers is implementation-specific.
  If omitted, the list of headers is empty. The header files must be present at the specified location relative to a directory on the compiler's include path when compiling glue code between the core and the drivers.
* `"capabilities"` (mandatory, array of [capabilities](#driver-description-capability)). A list of **capabilities**. Each capability describes a family of functions that the driver implements for a certain class of cryptographic mechanisms.
* `"key_context"` (not permitted for transparent drivers, mandatory for opaque drivers): information about the [representation of keys](#key-format-for-opaque-drivers).
* `"persistent_state_size"` (not permitted for transparent drivers, optional for opaque drivers, integer or string). The size in bytes of the [persistent state of the driver](#opaque-driver-persistent-state). This may be either a non-negative integer or a C constant expression of type `size_t`.
* `"location"` (not permitted for transparent drivers, optional for opaque drivers, integer or string). The [location value](#lifetimes-and-locations) for which this driver is invoked. In other words, this determines the lifetimes for which the driver is invoked. This may be either a non-negative integer or a C constant expression of type `psa_key_location_t`.

### Driver description capability

#### Capability syntax

A capability declares a family of functions that the driver implements for a certain class of cryptographic mechanisms. The capability specifies which key types and algorithms are covered and the names of the types and functions that implement it.

A capability is a JSON object containing the following properties:

* `"entry_points"` (mandatory, list of strings). Each element is the name of a [driver entry point](#driver-entry-points) or driver entry point family. An entry point is a function defined by the driver. If specified, the core will invoke this capability of the driver only when performing one of the specified operations. The driver must implement all the specified entry points, as well as the types if applicable.
* `"algorithms"` (optional, list of strings). Each element is an [algorithm specification](#algorithm-specifications). If specified, the core will invoke this capability of the driver only when performing one of the specified algorithms. If omitted, the core will invoke this capability for all applicable algorithms.
* `"key_types"` (optional, list of strings). Each element is a [key type specification](#key-type-specifications). If specified, the core will invoke this capability of the driver only for operations involving a key with one of the specified key types. If omitted, the core will invoke this capability of the driver for all applicable key types.
* `"key_sizes"` (optional, list of integers). If specified, the core will invoke this capability of the driver only for operations involving a key with one of the specified key sizes. If omitted, the core will invoke this capability of the driver for all applicable key sizes. Key sizes are expressed in bits.
* `"names"` (optional, object). A mapping from entry point names described by the `"entry_points"` property, to the name of the C function in the driver that implements the corresponding function. If a function is not listed here, the name of the driver function that implements it is the driver's prefix followed by an underscore (`_`) followed by the function name.
  If this property is omitted, it is equivalent to an empty object (so each entry point *suffix* is implemented by a function called *prefix*`_`*suffix*).
* `"fallback"` (optional for transparent drivers, not permitted for opaque drivers, boolean). If present and true, the driver may return `PSA_ERROR_NOT_SUPPORTED`, in which case the core should call another driver or use built-in code to perform this operation. If absent or false, the driver is expected to fully support the mechanisms described by this capability. See the section “[Fallback](#fallback)” for more information.

#### Capability semantics

When the PSA Cryptography implementation performs a cryptographic mechanism, it invokes available driver entry points as described in the section [“Driver entry points”](#driver-entry-points).

A driver is considered available for a cryptographic mechanism that invokes a given entry point if all of the following conditions are met:

* The driver specification includes a capability whose `"entry_points"` list either includes the entry point or includes an entry point family that includes the entry point.
* If the mechanism involves an algorithm:
    * either the capability does not have an `"algorithms"` property;
    * or the value of the capability's `"algorithms"` property includes an [algorithm specification](#algorithm-specifications) that matches this algorithm.
* If the mechanism involves a key:
    * either the key is transparent (its location is `PSA_KEY_LOCATION_LOCAL_STORAGE`) and the driver is transparent;
    * or the key is opaque (its location is not `PSA_KEY_LOCATION_LOCAL_STORAGE`) and the driver is an opaque driver whose location is the key's location.
* If the mechanism involves a key:
    * either the capability does not have a `"key_types"` property;
    * or the value of the capability's `"key_types"` property includes a [key type specification](#key-type-specifications) that matches this key type.
* If the mechanism involves a key:
    * either the capability does not have a `"key_sizes"` property;
    * or the value of the capability's `"key_sizes"` property includes the key's size.

If a driver includes multiple applicable capabilities for a given combination of entry point, algorithm, key type and key size, and all the capabilities map the entry point to the same function name, the driver is considered available for this cryptographic mechanism. If a driver includes multiple applicable capabilities for a given combination of entry point, algorithm, key type and key size, and at least two of these capabilities map the entry point to different function names, the driver specification is invalid.

If multiple transparent drivers have applicable capabilities for a given combination of entry point, algorithm, key type and key size, the first matching driver in the [specification list](#driver-description-list) is invoked. If the capability has [fallback](#fallback) enabled and the first driver returns `PSA_ERROR_NOT_SUPPORTED`, the next matching driver is invoked, and so on.

If multiple opaque drivers have the same location, the list of driver specifications is invalid.

#### Capability examples

Example 1: the following capability declares that the driver can perform deterministic ECDSA signatures (but not signature verification) using any hash algorithm and any curve that the core supports. If the prefix of this driver is `"acme"`, the function that performs the signature is called `acme_sign_hash`.
```
{
    "entry_points": ["sign_hash"],
    "algorithms": ["PSA_ALG_DETERMINISTIC_ECDSA(PSA_ALG_ANY_HASH)"]
}
```

Example 2: the following capability declares that the driver can perform deterministic ECDSA signatures using SHA-256 or SHA-384 with a SECP256R1 or SECP384R1 private key (with either hash being possible in combination with either curve). If the prefix of this driver is `"acme"`, the function that performs the signature is called `acme_sign_hash`.

```
{
    "entry_points": ["sign_hash"],
    "algorithms": ["PSA_ALG_DETERMINISTIC_ECDSA(PSA_ALG_SHA_256)",
                   "PSA_ALG_DETERMINISTIC_ECDSA(PSA_ALG_SHA_384)"],
    "key_types": ["PSA_KEY_TYPE_ECC_KEY_PAIR(PSA_ECC_FAMILY_SECP_R1)"],
    "key_sizes": [256, 384]
}
```

### Algorithm and key specifications

#### Algorithm specifications

An algorithm specification is a string consisting of a `PSA_ALG_xxx` macro that specifies a cryptographic algorithm or an algorithm wildcard policy defined by the PSA Cryptography API. If the macro takes arguments, the string must have the syntax of a C macro call and each argument must be an algorithm specification or a decimal or hexadecimal literal with no suffix, depending on the expected type of argument.

Spaces are optional after commas. Whether other whitespace is permitted is implementation-specific.

Valid examples:
```
PSA_ALG_SHA_256
PSA_ALG_HMAC(PSA_ALG_SHA_256)
PSA_ALG_KEY_AGREEMENT(PSA_ALG_ECDH, PSA_ALG_HKDF(PSA_ALG_SHA_256))
PSA_ALG_RSA_PSS(PSA_ALG_ANY_HASH)
```

#### Key type specifications

A key type specification is a string consisting of a `PSA_KEY_TYPE_xxx` macro that specifies a key type defined by the PSA Cryptography API. If the macro takes an argument, the string must have the syntax of a C macro call and each argument must be the name of a constant of suitable type (curve or group).

The name `_` may be used instead of a curve or group to indicate that the capability concerns all curves or groups.

Valid examples:
```
PSA_KEY_TYPE_AES
PSA_KEY_TYPE_ECC_KEY_PAIR(PSA_ECC_FAMILY_SECP_R1)
PSA_KEY_TYPE_ECC_KEY_PAIR(_)
```

### Driver entry points

#### Overview of driver entry points

Drivers define functions, each of which implements an aspect of a capability of a driver, such as a cryptographic operation, a part of a cryptographic operation, or a key management action. These functions are called the **entry points** of the driver. Most driver entry points correspond to a particular function in the PSA Cryptography API. For example, if a call to `psa_sign_hash()` is dispatched to a driver, it invokes the driver's `sign_hash` function.

All driver entry points return a status of type `psa_status_t` which should use the status codes documented for PSA services in general and for PSA Cryptography in particular: `PSA_SUCCESS` indicates that the function succeeded, and `PSA_ERROR_xxx` values indicate that an error occurred.

The signature of a driver entry point generally looks like the signature of the PSA Cryptography API that it implements, with some modifications. This section gives an overview of modifications that apply to whole classes of entry points. Refer to the reference section for each entry point or entry point family for details.

* For entry points that operate on an existing key, the `psa_key_id_t` parameter is replaced by a sequence of three parameters that describe the key:

    1. `const psa_key_attributes_t *attributes`: the key attributes.
    2. `const uint8_t *key_buffer`: a key material or key context buffer.
    3. `size_t key_buffer_size`: the size of the key buffer in bytes.
  For transparent drivers, the key buffer contains the key material, in the same format as defined for `psa_export_key()` and `psa_export_public_key()` in the PSA Cryptography API. For opaque drivers, the content of the key buffer is entirely up to the driver.

* For entry points that involve a multi-part operation, the operation state type (`psa_XXX_operation_t`) is replaced by a driver-specific operation state type (*prefix*`_XXX_operation_t`).
* For entry points that are involved in key creation, the `psa_key_id_t *` output parameter is replaced by a sequence of parameters that convey the key context:

    1. `uint8_t *key_buffer`: a buffer for the key material or key context.
    2. `size_t key_buffer_size`: the size of the key buffer in bytes.
    3. `size_t *key_buffer_length`: the length of the data written to the key buffer in bytes.

Some entry points are grouped in families that must be implemented as a whole. If a driver supports an entry point family, it must provide all the entry points in the family.

Drivers can also have entry points related to random generation. A transparent driver can provide a [random generation interface](#random-generation-entry-points). Separately, transparent and opaque drivers can have [entropy collection entry points](#entropy-collection-entry-point).

#### General considerations on driver entry point parameters

Buffer parameters for driver entry points obey the following conventions:

* An input buffer has the type `const uint8_t *` and is immediately followed by a parameter of type `size_t` that indicates the buffer size.
* An output buffer has the type `uint8_t *` and is immediately followed by a parameter of type `size_t` that indicates the buffer size. A third parameter of type `size_t *` is provided to report the actual length of the data written in the buffer if the function succeeds.
* An in-out buffer has the type `uint8_t *` and is immediately followed by a parameter of type `size_t` that indicates the buffer size. In-out buffers are only used when the input and the output have the same length.

Buffers of size 0 may be represented with either a null pointer or a non-null pointer.

Input buffers and other input-only parameters (`const` pointers) may be in read-only memory. Overlap is possible between input buffers, and between an input buffer and an output buffer, but not between two output buffers or between a non-buffer parameter and another parameter.

#### Driver entry points for single-part cryptographic operations

The following driver entry points perform a cryptographic operation in one shot (single-part operation):

* `"hash_compute"` (transparent drivers only): calculation of a hash. Called by `psa_hash_compute()` and `psa_hash_compare()`. To verify a hash with `psa_hash_compare()`, the core calls the driver's `"hash_compute"` entry point and compares the result with the reference hash value.
* `"mac_compute"`: calculation of a MAC. Called by `psa_mac_compute()` and possibly `psa_mac_verify()`. To verify a MAC with `psa_mac_verify()`, the core calls an applicable driver's `"mac_verify"` entry point if there is one, otherwise the core calls an applicable driver's `"mac_compute"` entry point and compares the result with the reference MAC value.
* `"mac_verify"`: verification of a MAC. Called by `psa_mac_verify()`. This entry point is mainly useful for drivers of secure elements that verify a MAC without revealing the correct MAC.
  Although transparent drivers may implement this entry point in addition to `"mac_compute"`, it is generally not useful because the core can call the `"mac_compute"` entry point and compare with the expected MAC value.
* `"cipher_encrypt"`: unauthenticated symmetric cipher encryption. Called by `psa_cipher_encrypt()`.
* `"cipher_decrypt"`: unauthenticated symmetric cipher decryption. Called by `psa_cipher_decrypt()`.
* `"aead_encrypt"`: authenticated encryption with associated data. Called by `psa_aead_encrypt()`.
* `"aead_decrypt"`: authenticated decryption with associated data. Called by `psa_aead_decrypt()`.
* `"asymmetric_encrypt"`: asymmetric encryption. Called by `psa_asymmetric_encrypt()`.
* `"asymmetric_decrypt"`: asymmetric decryption. Called by `psa_asymmetric_decrypt()`.
* `"sign_hash"`: signature of an already calculated hash. Called by `psa_sign_hash()` and possibly `psa_sign_message()`. To sign a message with `psa_sign_message()`, the core calls an applicable driver's `"sign_message"` entry point if there is one, otherwise the core calls an applicable driver's `"hash_compute"` entry point followed by an applicable driver's `"sign_hash"` entry point.
* `"verify_hash"`: verification of an already calculated hash. Called by `psa_verify_hash()` and possibly `psa_verify_message()`. To verify a message with `psa_verify_message()`, the core calls an applicable driver's `"verify_message"` entry point if there is one, otherwise the core calls an applicable driver's `"hash_compute"` entry point followed by an applicable driver's `"verify_hash"` entry point.
* `"sign_message"`: signature of a message. Called by `psa_sign_message()`.
* `"verify_message"`: verification of a message. Called by `psa_verify_message()`.
* `"key_agreement"`: key agreement without a subsequent key derivation. Called by `psa_raw_key_agreement()` and possibly `psa_key_derivation_key_agreement()`.

### Driver entry points for multi-part operations

#### General considerations on multi-part operations

The entry points that implement each step of a multi-part operation are grouped into a family. A driver that implements a multi-part operation must define all of the entry points in this family as well as a type that represents the operation context. The lifecycle of a driver operation context is similar to the lifecycle of an API operation context:

1. The core initializes operation context objects to either all-bits-zero or to logical zero (`{0}`), at its discretion.
1. The core calls the `xxx_setup` entry point for this operation family. If this fails, the core destroys the operation context object without calling any other driver entry point on it.
1. The core calls other entry points that manipulate the operation context object, respecting the constraints.
1. If any entry point fails, the core calls the driver's `xxx_abort` entry point for this operation family, then destroys the operation context object without calling any other driver entry point on it.
1. If a “finish” entry point fails, the core destroys the operation context object without calling any other driver entry point on it. The finish entry points are: *prefix*`_mac_sign_finish`, *prefix*`_mac_verify_finish`, *prefix*`_cipher_finish`, *prefix*`_aead_finish`, *prefix*`_aead_verify`.

If a driver implements a multi-part operation but not the corresponding single-part operation, the core calls the driver's multipart operation entry points to perform the single-part operation.
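As a sketch of this last rule and of the lifecycle above, core-side code serving `psa_hash_compute()` through a driver that only implements the multi-part family might look as follows. This is illustrative, not actual Mbed TLS code: the function name `core_hash_compute_via_multipart` is hypothetical, and the `acme_hash_xxx` names anticipate the `"hash_multipart"` example below.

```
/* Illustrative core-side sketch: perform a single-part hash using a
 * driver's multi-part entry points, following the lifecycle rules above. */
static psa_status_t core_hash_compute_via_multipart(
    psa_algorithm_t alg,
    const uint8_t *input, size_t input_length,
    uint8_t *hash, size_t hash_size, size_t *hash_length)
{
    acme_hash_operation_t operation = {0}; /* logical zero initialization */
    psa_status_t status;

    status = acme_hash_setup(&operation, alg);
    if (status != PSA_SUCCESS)
        return status; /* failed setup: no abort call */

    status = acme_hash_update(&operation, input, input_length);
    if (status != PSA_SUCCESS) {
        (void) acme_hash_abort(&operation); /* non-finish failure: abort */
        return status;
    }

    /* A failed finish ends the lifecycle without a further abort call. */
    return acme_hash_finish(&operation, hash, hash_size, hash_length);
}
```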
#### Multi-part operation entry point family `"hash_multipart"` This family corresponds to the calculation of a hash in multiple steps. This family applies to transparent drivers only. This family requires the following type and entry points: * Type `"hash_operation_t"`: the type of a hash operation context. It must be possible to copy a hash operation context byte by byte, therefore hash operation contexts must not contain any embedded pointers (except pointers to global data that do not change after the setup step). * `"hash_setup"`: called by `psa_hash_setup()`. * `"hash_update"`: called by `psa_hash_update()`. * `"hash_finish"`: called by `psa_hash_finish()` and `psa_hash_verify()`. * `"hash_abort"`: called by all multi-part hash functions of the PSA Cryptography API. To verify a hash with `psa_hash_verify()`, the core calls the driver's *prefix*`_hash_finish` entry point and compares the result with the reference hash value. For example, a driver with the prefix `"acme"` that implements the `"hash_multipart"` entry point family must define the following type and entry points (assuming that the capability does not use the `"names"` property to declare different type and entry point names): ``` typedef ... acme_hash_operation_t; psa_status_t acme_hash_setup(acme_hash_operation_t *operation, psa_algorithm_t alg); psa_status_t acme_hash_update(acme_hash_operation_t *operation, const uint8_t *input, size_t input_length); psa_status_t acme_hash_finish(acme_hash_operation_t *operation, uint8_t *hash, size_t hash_size, size_t *hash_length); psa_status_t acme_hash_abort(acme_hash_operation_t *operation); ``` #### Operation family `"mac_multipart"` TODO #### Operation family `"mac_verify_multipart"` TODO #### Operation family `"cipher_encrypt_multipart"` TODO #### Operation family `"cipher_decrypt_multipart"` TODO #### Operation family `"aead_encrypt_multipart"` TODO #### Operation family `"aead_decrypt_multipart"` TODO #### Operation family `"key_derivation"` This family requires the following type and entry points: * Type `"key_derivation_operation_t"`: the type of a key derivation operation context. * `"key_derivation_setup"`: called by `psa_key_derivation_setup()`. * `"key_derivation_set_capacity"`: called by `psa_key_derivation_set_capacity()`. The core will always enforce the capacity, therefore this function does not need to do anything for algorithms where the output stream only depends on the effective generated length and not on the capacity. * `"key_derivation_input_bytes"`: called by `psa_key_derivation_input_bytes()` and `psa_key_derivation_input_key()`. For transparent drivers, when processing a call to `psa_key_derivation_input_key()`, the core always calls the applicable driver's `"key_derivation_input_bytes"` entry point. * `"key_derivation_input_key"` (opaque drivers only) * `"key_derivation_output_bytes"`: called by `psa_key_derivation_output_bytes()`; also by `psa_key_derivation_output_key()` for transparent drivers. * `"key_derivation_output_key"`: called by `psa_key_derivation_output_key()` for transparent drivers when deriving an asymmetric key pair, and also for opaque drivers. * `"key_derivation_abort"`: called by all key derivation functions of the PSA Cryptography API. 
TODO: key input and output for opaque drivers; deterministic key generation for transparent drivers

### Driver entry points for key management

The driver entry points for key management differ significantly between [transparent drivers](#key-management-with-transparent-drivers) and [opaque drivers](#key-management-with-opaque-drivers). This section describes common elements. Refer to the applicable section for each driver type for more information.

The entry points that create or format key data have the following prototypes for a driver with the prefix `"acme"`:

```
psa_status_t acme_import_key(const psa_key_attributes_t *attributes,
                             const uint8_t *data,
                             size_t data_length,
                             uint8_t *key_buffer,
                             size_t key_buffer_size,
                             size_t *key_buffer_length,
                             size_t *bits); // additional parameter, see below
psa_status_t acme_generate_key(const psa_key_attributes_t *attributes,
                               uint8_t *key_buffer,
                               size_t key_buffer_size,
                               size_t *key_buffer_length);
```

TODO: derivation, copy

* The key attributes (`attributes`) have the same semantics as in the PSA Cryptography application interface.
* For the `"import_key"` entry point, the input in the `data` buffer is either the export format or an implementation-specific format that the core documents as an acceptable input format for `psa_import_key()`.
* The size of the key data buffer `key_buffer` is sufficient for the internal representation of the key. For a transparent driver, this is the key's [export format](#key-format-for-transparent-drivers). For an opaque driver, this is the size determined from the driver description and the key attributes, as specified in the section [“Key format for opaque drivers”](#key-format-for-opaque-drivers).
* For an opaque driver with an `"allocate_key"` entry point, the content of the key data buffer on entry is the output of that entry point.
* The `"import_key"` entry point must determine or validate the key size and set `*bits` as described in the section [“Key size determination on import”](#key-size-determination-on-import) below.

All key creation entry points must ensure that the resulting key is valid as specified in the section [“Key validation”](#key-validation) below. This is primarily important for import entry points since the key data comes from the application.

#### Key size determination on import

The `"import_key"` entry point must determine or validate the key size. The PSA Cryptography API exposes the key size as part of the key attributes. When importing a key, the key size recorded in the key attributes can be either a size specified by the caller of the API (who may not be trusted), or `0` which indicates that the size must be calculated from the data.

When the core calls the `"import_key"` entry point to process a call to `psa_import_key`, it passes an `attributes` structure such that `psa_get_key_bits(attributes)` is the size passed by the caller of `psa_import_key`. If this size is `0`, the `"import_key"` entry point must set the `bits` input-output parameter to the correct key size. The semantics of `bits` is as follows:

* The core sets `*bits` to `psa_get_key_bits(attributes)` before calling the `"import_key"` entry point.
* If `*bits == 0`, the driver must determine the key size from the data and set `*bits` to this size. If the key size cannot be determined from the data, the driver must return `PSA_ERROR_INVALID_ARGUMENT` (as of version 1.0 of the PSA Cryptography API specification, it is possible to determine the key size for all standard key types).
* If `*bits != 0`, the driver must check the value of `*bits` against the data and return `PSA_ERROR_INVALID_ARGUMENT` if it does not match. If the driver entry point changes `*bits` to a different value but returns `PSA_SUCCESS`, the core will consider the key as invalid and the import will fail. #### Key validation Key creation entry points must produce valid key data. Key data is _valid_ if operations involving the key are guaranteed to work functionally and not to cause indirect security loss. Operation functions are supposed to receive valid keys, and should not have to check and report invalid keys. For example: * If a cryptographic mechanism is defined as having keying material of a certain size, or if the keying material involves integers that have to be in a certain range, key creation must ensure that the keying material has an appropriate size and falls within an appropriate range. * If a cryptographic operation involves a division by an integer which is provided as part of a key, key creation must ensure that this integer is nonzero. * If a cryptographic operation involves two keys A and B (or more), then the creation of A must ensure that using it does not risk compromising B. This applies even if A's policy does not explicitly allow a problematic operation, but A is exportable. In particular, public keys that can potentially be used for key agreement are considered invalid and must not be created if they risk compromising the private key. * On the other hand, it is acceptable for import to accept a key that cannot be verified as valid if using this key would at most compromise the key itself and material that is secured with this key. For example, RSA key import does not need to verify that the primes are actually prime. Key import may accept an insecure key if the consequences of the insecurity are no worse than a leak of the key prior to its import. With opaque drivers, the key context can only be used by code from the same driver, so key validity is primarily intended to report key creation errors at creation time rather than during an operation. With transparent drivers, the key context can potentially be used by code from a different provider, so key validity is critical for interoperability. This section describes some minimal validity requirements for standard key types. * For symmetric key types, check that the key size is suitable for the type. * For DES (`PSA_KEY_TYPE_DES`), additionally verify the parity bits. * For RSA (`PSA_KEY_TYPE_RSA_PUBLIC_KEY`, `PSA_KEY_TYPE_RSA_KEY_PAIR`), check the syntax of the key and make sanity checks on its components. TODO: what sanity checks? Value ranges (e.g. p < n), sanity checks such as parity, minimum and maximum size, what else? * For elliptic curve private keys (`PSA_KEY_TYPE_ECC_KEY_PAIR`), check the size and range. TODO: what else? * For elliptic curve public keys (`PSA_KEY_TYPE_ECC_PUBLIC_KEY`), check the size and range, and that the point is on the curve. TODO: what else? ### Entropy collection entry point A driver can declare an entropy source by providing a `"get_entropy"` entry point. This entry point has the following prototype for a driver with the prefix `"acme"`: ``` psa_status_t acme_get_entropy(uint32_t flags, size_t *estimate_bits, uint8_t *output, size_t output_size); ``` The semantics of the parameters is as follows: * `flags`: a bit-mask of [entropy collection flags](#entropy-collection-flags). * `estimate_bits`: on success, an estimate of the amount of entropy that is present in the `output` buffer, in bits. 
This must be at least `1` on success. The value is ignored on failure. Drivers should return a conservative estimate, even in circumstances where the quality of the entropy source is degraded due to environmental conditions (e.g. undervolting, low temperature, etc.).
* `output`: on success, this buffer contains non-deterministic data with an estimated entropy of at least `*estimate_bits` bits. When the entropy is coming from a hardware peripheral, this should preferably be raw or lightly conditioned measurements from a physical process, such that statistical tests run over a sufficiently large amount of output can confirm the entropy estimates. But this specification also permits entropy sources that are fully conditioned, for example when the PSA Cryptography system is running as an application in an operating system and `"get_entropy"` returns data from the random generator in the operating system's kernel.
* `output_size`: the size of the `output` buffer in bytes. This size should be large enough to allow a driver to pass unconditioned data with a low density of entropy; for example a peripheral that returns eight bytes of data with an estimated one bit of entropy cannot provide meaningful output in less than 8 bytes.

Note that there is no output parameter indicating how many bytes the driver wrote to the buffer. Such an output length indication is not necessary because the entropy may be located anywhere in the buffer, so the driver may write less than `output_size` bytes but the core does not need to know this. The output parameter `estimate_bits` contains the amount of entropy, expressed in bits, which may be significantly less than `output_size * 8`.

The entry point may return the following statuses:

* `PSA_SUCCESS`: success. The output buffer contains some entropy.
* `PSA_ERROR_INSUFFICIENT_ENTROPY`: no entropy is available without blocking. This is only permitted if the `PSA_DRIVER_GET_ENTROPY_BLOCK` flag is clear. The core may call `get_entropy` again later, giving time for entropy to be gathered or for adverse environmental conditions to be rectified.
* Other error codes indicate a transient or permanent failure of the entropy source.

Unlike most other entry points, if multiple transparent drivers include a `"get_entropy"` entry point, the core will call all of them (as well as the entry points from opaque drivers). Fallback is not applicable to `"get_entropy"`.

#### Entropy collection flags

* `PSA_DRIVER_GET_ENTROPY_BLOCK`: If this flag is set, the driver should block until it has at least one bit of entropy. If this flag is clear, the driver should avoid blocking if no entropy is readily available.
* `PSA_DRIVER_GET_ENTROPY_KEEPALIVE`: This flag is intended to help with energy management for entropy-generating peripherals. If this flag is set, the driver should expect another call to `acme_get_entropy` after a short time. If this flag is clear, the core is not expecting to call the `"get_entropy"` entry point again within a short amount of time (but it may do so nonetheless).

#### Entropy collection and blocking

The intent of the `BLOCK` and `KEEPALIVE` [flags](#entropy-collection-flags) is to support drivers for TRNG (True Random Number Generator, i.e. an entropy source peripheral) that have a long ramp-up time, especially on platforms with multiple entropy sources. Here is a suggested call sequence for entropy collection that leverages these flags:

1.
The core makes a first round of calls to `"get_entropy"` on every source with the `BLOCK` flag clear and the `KEEPALIVE` flag set, so that drivers can prepare the TRNG peripheral.
2. The core makes a second round of calls with the `BLOCK` flag set and the `KEEPALIVE` flag clear to gather needed entropy.
3. If the second round does not collect enough entropy, the core makes more similar rounds, until the total amount of collected entropy is sufficient.

### Miscellaneous driver entry points

#### Driver initialization

A driver may declare an `"init"` entry point in a capability with no algorithm, key type or key size. If so, the core calls this entry point once during the initialization of the PSA Cryptography subsystem. If the init entry point of any driver fails, the initialization of the PSA Cryptography subsystem fails.

When multiple drivers have an init entry point, the order in which they are called is unspecified. It is also unspecified whether other drivers' `"init"` entry points are called if one or more init entry points fail.

On platforms where the PSA Cryptography implementation is a subsystem of a single application, the initialization of the PSA Cryptography subsystem takes place during the call to `psa_crypto_init()`. On platforms where the PSA Cryptography implementation is separate from the application or applications, the initialization of the PSA Cryptography subsystem takes place before or during the first time an application calls `psa_crypto_init()`.

The init entry point does not take any parameter.

### Combining multiple drivers

To declare that a cryptoprocessor can handle both cleartext and wrapped keys, you need to provide two driver descriptions, one for a transparent driver and one for an opaque driver. You can use the mapping in capabilities' `"names"` property to arrange for multiple driver entry points to map to the same C function.

## Transparent drivers

### Key format for transparent drivers

The format of a key for transparent drivers is the same as in applications. Refer to the documentation of [`psa_export_key()`](https://armmbed.github.io/mbed-crypto/html/api/keys/management.html#c.psa_export_key) and [`psa_export_public_key()`](https://armmbed.github.io/mbed-crypto/html/api/keys/management.html#c.psa_export_public_key) in the PSA Cryptography API specification. For custom key types defined by an implementation, refer to the documentation of that implementation.

### Key management with transparent drivers

Transparent drivers may provide the following key management entry points:

* [`"import_key"`](#key-import-with-transparent-drivers): called by `psa_import_key()`, only when importing a key pair or a public key (key such that `PSA_KEY_TYPE_IS_ASYMMETRIC` is true).
* `"generate_key"`: called by `psa_generate_key()`, only when generating a key pair (key such that `PSA_KEY_TYPE_IS_KEY_PAIR` is true).
* `"key_derivation_output_key"`: called by `psa_key_derivation_output_key()`, only when deriving a key pair (key such that `PSA_KEY_TYPE_IS_KEY_PAIR` is true).
* `"export_public_key"`: called by the core to obtain the public key of a key pair. The core may call this function at any time to obtain the public key, which can be for `psa_export_public_key()` but also at other times, including during a cryptographic operation that requires the public key such as a call to `psa_verify_message()` on a key pair object.

Transparent drivers are not involved when exporting, copying or destroying keys, or when importing, generating or deriving symmetric keys.
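
For reference, a plausible prototype for the `"export_public_key"` entry point of a transparent driver with the prefix `"acme"` is sketched below. The specification shows this prototype explicitly only for opaque drivers (see [“Key export entry points in opaque drivers”](#key-export-entry-points-in-opaque-drivers)); the transparent-driver form here is an assumption that mirrors it, with `key_buffer` holding the key in its export format.

```
psa_status_t acme_export_public_key(const psa_key_attributes_t *attributes,
                                    const uint8_t *key_buffer,
                                    size_t key_buffer_size,
                                    uint8_t *data,
                                    size_t data_size,
                                    size_t *data_length);
```
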
#### Key import with transparent drivers

As discussed in [the general section about key management entry points](#driver-entry-points-for-key-management), the key import entry point has the following prototype for a driver with the prefix `"acme"`:

```
psa_status_t acme_import_key(const psa_key_attributes_t *attributes,
                             const uint8_t *data,
                             size_t data_length,
                             uint8_t *key_buffer,
                             size_t key_buffer_size,
                             size_t *key_buffer_length,
                             size_t *bits);
```

This entry point has several roles:

1. Parse the key data in the input buffer `data`. The driver must support the export format for the key types that the entry point is declared for. It may support additional formats as specified in the description of [`psa_import_key()`](https://armmbed.github.io/mbed-crypto/html/api/keys/management.html#c.psa_import_key) in the PSA Cryptography API specification.
2. Validate the key data. The necessary validation is described in the section [“Key validation”](#key-validation) above.
3. [Determine the key size](#key-size-determination-on-import) and output it through `*bits`.
4. Copy the validated key data from `data` to `key_buffer`. The output must be in the canonical format documented for [`psa_export_key()`](https://armmbed.github.io/mbed-crypto/html/api/keys/management.html#c.psa_export_key) or [`psa_export_public_key()`](https://armmbed.github.io/mbed-crypto/html/api/keys/management.html#c.psa_export_public_key), so if the input is not in this format, the entry point must convert it.

### Random generation entry points

A transparent driver may provide an operation family that can be used as a cryptographic random number generator. The random generation mechanism must obey the following requirements:

* The random output must be of cryptographic quality, with a uniform distribution. Therefore, if the random generator includes an entropy source, this entropy source must be fed through a CSPRNG (cryptographically secure pseudo-random number generator).
* Random generation is expected to be fast. (If a device can provide entropy but is slow at generating random data, declare it as an [entropy driver](#entropy-collection-entry-point) instead.)
* The random generator should be able to incorporate entropy provided by an outside source. If it cannot, the random generator can only be used if it's the only entropy source on the platform. (A random generator peripheral can be declared as an [entropy source](#entropy-collection-entry-point) instead of a random generator; this way the core will combine it with other entropy sources.)
* The random generator may either be deterministic (in the sense that it always returns the same data when given the same entropy inputs) or non-deterministic (including its own entropy source). In other words, this interface is suitable both for PRNG (pseudo-random number generator, also known as DRBG (deterministic random bit generator)) and for NRBG (non-deterministic random bit generator).

If no driver implements the random generation entry point family, the core provides an unspecified random generation mechanism.

This operation family requires the following type, entry points and parameters (TODO: where exactly are the parameters in the JSON structure?):

* Type `"random_context_t"`: the type of a random generation context.
* `"init_random"` (entry point, optional): if this function is present, [the core calls it once](#random-generator-initialization) after allocating a `"random_context_t"` object.
* `"add_entropy"` (entry point, optional): the core calls this function to [inject entropy](#entropy-injection). This entry point is optional if the driver is for a peripheral that includes an entropy source of its own; however, [random generator drivers without entropy injection](#random-generator-drivers-without-entropy-injection) have limited portability since they can only be used on platforms with no other entropy source. This entry point is mandatory if `"initial_entropy_size"` is nonzero.
* `"get_random"` (entry point, mandatory): the core calls this function whenever it needs to [obtain random data](#the-get_random-entry-point).
* `"initial_entropy_size"` (integer, mandatory): the minimum number of bytes of entropy that the core must supply before the driver can output random data. This can be `0` if the driver is for a peripheral that includes an entropy source of its own.
* `"reseed_entropy_size"` (integer, optional): the minimum number of bytes of entropy that the core should supply via [`"add_entropy"`](#entropy-injection) when the driver runs out of entropy. This value is also a hint for the size to supply if the core makes additional calls to `"add_entropy"`, for example to enforce prediction resistance. If omitted, the core should pass an amount of entropy corresponding to the expected security strength of the device (for example, pass 32 bytes of entropy when reseeding to achieve a security strength of 256 bits). If specified, the core should pass the larger of `"reseed_entropy_size"` and the amount corresponding to the security strength.

Random generation is not parametrized by an algorithm. The choice of algorithm is up to the driver.

#### Random generator initialization

The `"init_random"` entry point has the following prototype for a driver with the prefix `"acme"`:

```
psa_status_t acme_init_random(acme_random_context_t *context);
```

The core calls this entry point once after allocating a random generation context. Initially, the context object is all-bits-zero.

If a driver does not have an `"init_random"` entry point, the context object passed to the first call to `"add_entropy"` or `"get_random"` will be all-bits-zero.

#### Entropy injection

The `"add_entropy"` entry point has the following prototype for a driver with the prefix `"acme"`:

```
psa_status_t acme_add_entropy(acme_random_context_t *context,
                              const uint8_t *entropy,
                              size_t entropy_size);
```

The semantics of the parameters is as follows:

* `context`: a random generation context. On the first call to `"add_entropy"`, this object has been initialized by a call to the driver's `"init_random"` entry point if one is present, and to all-bits-zero otherwise.
* `entropy`: a buffer containing full-entropy data to seed the random generator. “Full-entropy” means that the data is uniformly distributed and independent of any other observable quantity.
* `entropy_size`: the size of the `entropy` buffer in bytes. It is guaranteed to be at least `1`, but it may be smaller than the amount of entropy that the driver needs to deliver random data, in which case the core will call the `"add_entropy"` entry point again to supply more entropy.

The core calls this function to supply entropy to the driver. The driver must mix this entropy into its internal state. The driver must mix the whole supplied entropy, even if there is more than what the driver requires, to ensure that all entropy sources are mixed into the random generator state. The driver may mix additional entropy of its own.
The core may call this function at any time. For example, to enforce prediction resistance, the core can call `"add_entropy"` immediately after each call to `"get_random"`. The core must call this function in two circumstances:

* Before the first call to the `"get_random"` entry point, to supply `"initial_entropy_size"` bytes of entropy.
* After a call to the `"get_random"` entry point returns less than the required amount of random data, to supply at least `"reseed_entropy_size"` bytes of entropy.

When the driver requires entropy, the core can supply it with one or more successive calls to the `"add_entropy"` entry point. If the required entropy size is zero, the core does not need to call `"add_entropy"`.

#### Combining entropy sources with a random generation driver

This section provides guidance on combining one or more [entropy sources](#entropy-collection-entry-point) (each having a `"get_entropy"` entry point) with a random generation driver (with an `"add_entropy"` entry point).

Note that `"get_entropy"` returns data with an estimated amount of entropy that is in general less than the buffer size. The core must apply a mixing algorithm to the output of `"get_entropy"` to obtain full-entropy data.

For example, the core may use a simple mixing scheme based on a pseudorandom function family $(F_k)$ with an $E$-bit output where $E = 8 \cdot \mathtt{entropy\_size}$ and $\mathtt{entropy\_size}$ is the desired amount of entropy in bytes (typically the random driver's `"initial_entropy_size"` property for the initial seeding and the `"reseed_entropy_size"` property for subsequent reseeding). The core calls the `"get_entropy"` entry points of the available entropy drivers, outputting a string $s_i$ and an entropy estimate $e_i$ on the $i$th call. It does so until the total entropy estimate $e_1 + e_2 + \ldots + e_n$ is at least $E$. The core then calculates $F_k(0)$ where $k = s_1 || s_2 || \ldots || s_n$. This value is a string of $\mathtt{entropy\_size}$ bytes, and since $(F_k)$ is a pseudorandom function family, $F_k(0)$ is uniformly distributed over strings of $\mathtt{entropy\_size}$ bytes. Therefore $F_k(0)$ is a suitable value to pass to `"add_entropy"`.

Note that the mechanism above is only given as an example. Implementations may choose a different mechanism, for example involving multiple pools or intermediate compression functions.

#### Random generator drivers without entropy injection

Random generator drivers should have the capability to inject additional entropy through the `"add_entropy"` entry point. This ensures that the random generator depends on all the entropy sources that are available on the platform. A driver where a call to `"add_entropy"` does not affect the state of the random generator is not compliant with this specification.

However, a driver may omit the `"add_entropy"` entry point. This limits the driver's portability: implementations of the PSA Cryptography specification may reject drivers without an `"add_entropy"` entry point, or only accept such drivers in certain configurations. In particular, the `"add_entropy"` entry point is required if:

* the integration of PSA Cryptography includes an entropy source that is outside the driver; or
* the core saves random data in persistent storage to be preserved across platform resets.
#### The `"get_random"` entry point

The `"get_random"` entry point has the following prototype for a driver with the prefix `"acme"`:

```
psa_status_t acme_get_random(acme_random_context_t *context,
                             uint8_t *output,
                             size_t output_size,
                             size_t *output_length);
```

The semantics of the parameters is as follows:

* `context`: a random generation context. If the driver's `"initial_entropy_size"` property is nonzero, the core must have called `"add_entropy"` at least once with a total of at least `"initial_entropy_size"` bytes of entropy before it calls `"get_random"`. Alternatively, if the driver's `"initial_entropy_size"` property is zero and the core did not call `"add_entropy"`, or if the driver has no `"add_entropy"` entry point, the core must have called `"init_random"` if present, and otherwise the context is all-bits-zero.
* `output`: on success (including partial success), the first `*output_length` bytes of this buffer contain cryptographic-quality random data. The output is not used on error.
* `output_size`: the size of the `output` buffer in bytes.
* `*output_length`: on success (including partial success), the number of bytes of random data that the driver has written to the `output` buffer. This is preferably `output_size`, but the driver is allowed to return less data if it runs out of entropy as described below. The core sets this value to 0 on entry. The value is not used on error.

The driver may return the following status codes:

* `PSA_SUCCESS`: the `output` buffer contains `*output_length` bytes of cryptographic-quality random data. Note that this may be less than `output_size`; in this case the core should call the driver's `"add_entropy"` entry point to supply at least `"reseed_entropy_size"` bytes of entropy before calling `"get_random"` again.
* `PSA_ERROR_INSUFFICIENT_ENTROPY`: the core must supply additional entropy by calling the `"add_entropy"` entry point with at least `"reseed_entropy_size"` bytes.
* `PSA_ERROR_NOT_SUPPORTED`: the random generator is not available. This is only permitted if the driver specification for random generation has the [fallback property](#fallback) enabled.
* Other error codes such as `PSA_ERROR_COMMUNICATION_FAILURE` or `PSA_ERROR_HARDWARE_FAILURE` indicate a transient or permanent error.

### Fallback

Sometimes cryptographic accelerators only support certain cryptographic mechanisms partially. The capability description language allows specifying some restrictions, including restrictions on key sizes, but it cannot cover all the possibilities that may arise in practice. Furthermore, it may be desirable to deploy the same binary image on different devices, only some of which have a cryptographic accelerator.

For these purposes, a transparent driver can declare that it only supports a [capability](#driver-description-capability) partially, by setting the capability's `"fallback"` property to true.

If a transparent driver entry point is part of a capability which has a true `"fallback"` property and returns `PSA_ERROR_NOT_SUPPORTED`, the core will call the next transparent driver that supports the mechanism, if there is one. The core considers drivers in the order given by the [driver description list](#driver-description-list).

If all the available drivers have fallback enabled and return `PSA_ERROR_NOT_SUPPORTED`, the core will perform the operation using built-in code.
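
The following sketch illustrates this dispatch logic for a hash computation. It assumes a hypothetical core with two fallback-enabled transparent drivers (`acme` and `nadir`, in driver description list order) and a built-in implementation; none of these names are part of the specification.

```
/* Hypothetical driver and built-in function names, for illustration only. */
typedef psa_status_t hash_compute_fn( psa_algorithm_t alg,
                                      const uint8_t *input, size_t input_length,
                                      uint8_t *hash, size_t hash_size,
                                      size_t *hash_length );
hash_compute_fn acme_hash_compute, nadir_hash_compute, builtin_hash_compute;

static hash_compute_fn *const hash_drivers[] =
{
    acme_hash_compute,   /* first in the driver description list */
    nadir_hash_compute,  /* second in the driver description list */
};

static psa_status_t core_hash_compute( psa_algorithm_t alg,
                                       const uint8_t *input, size_t input_length,
                                       uint8_t *hash, size_t hash_size,
                                       size_t *hash_length )
{
    for( size_t i = 0; i < sizeof( hash_drivers ) / sizeof( hash_drivers[0] ); i++ )
    {
        psa_status_t status = hash_drivers[i]( alg, input, input_length,
                                               hash, hash_size, hash_length );
        if( status != PSA_ERROR_NOT_SUPPORTED )
            return( status ); /* success or a genuine error stops the search */
    }
    /* Every fallback-enabled driver declined: use built-in code. */
    return( builtin_hash_compute( alg, input, input_length,
                                  hash, hash_size, hash_length ) );
}
```
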
As soon as a driver returns any value other than `PSA_ERROR_NOT_SUPPORTED` (`PSA_SUCCESS` or a different error code), this value is returned to the application, without attempting to call any other driver or built-in code.

If a transparent driver entry point is part of a capability where the `"fallback"` property is false or omitted, the core should not include any other code for this capability, whether built in or in another transparent driver.

## Opaque drivers

Opaque drivers allow a PSA Cryptography implementation to delegate cryptographic operations to a separate environment that might not allow exporting key material in cleartext. The opaque driver interface is designed so that the core never inspects the representation of a key. The opaque driver interface is designed to support two subtypes of cryptoprocessors:

* Some cryptoprocessors do not have persistent storage for individual keys. The representation of a key is the key material wrapped with a master key which is located in the cryptoprocessor and never exported from it. The core stores this wrapped key material on behalf of the cryptoprocessor.
* Some cryptoprocessors have persistent storage for individual keys. The representation of a key is an identifier such as a label or slot number. The core stores this identifier.

### Key format for opaque drivers

The format of a key for opaque drivers is an opaque blob. The content of this blob is fully up to the driver. The core merely stores this blob.

Note that since the core stores the key context blob as it is in memory, it must only contain data that is meaningful after a reboot. In particular, it must not contain any pointers or transient handles.

The `"key_context"` property in the [driver description](#driver-description-top-level-element) specifies how to calculate the size of the key context as a function of the key type and size. This is an object with the following properties:

* `"base_size"` (integer or string, optional): this many bytes are included in every key context. If omitted, this value defaults to 0.
* `"key_pair_size"` (integer or string, optional): this many bytes are included in every key context for a key pair. If omitted, this value defaults to 0.
* `"public_key_size"` (integer or string, optional): this many bytes are included in every key context for a public key. If omitted, this value defaults to 0.
* `"symmetric_factor"` (integer or string, optional): every key context for a symmetric key includes this many times the key size. If omitted, this value defaults to 0.
* `"store_public_key"` (boolean, optional): If specified and true, for a key pair, the key context includes space for the public key. If omitted or false, no additional space is added for the public key.
* `"size_function"` (string, optional): the name of a function that returns the number of bytes that the driver needs in a key context for a key. This may be a pointer to a function. This must be a C identifier; more complex expressions are not permitted. If the core uses this function, it supersedes all the other properties except for `"builtin_key_size"` (where applicable, if present).
* `"builtin_key_size"` (integer or string, optional): If specified, this overrides all other methods (including the `"size_function"` entry point) to determine the size of the key context for [built-in keys](#built-in-keys). This allows drivers to efficiently represent application keys as wrapped key material, while representing built-in keys by an internal identifier that takes up less space.
The integer properties must be C language constants. A typical value for `"base_size"` is `sizeof(acme_key_context_t)` where `acme_key_context_t` is a type defined in a driver header file. #### Size of a dynamically allocated key context If the core supports dynamic allocation for the key context and chooses to use it, and the driver specification includes the `"size_function"` property, the size of the key context is at least ``` size_function(key_type, key_bits) ``` where `size_function` is the function named in the `"size_function"` property, `key_type` is the key type and `key_bits` is the key size in bits. The prototype of the size function is ``` size_t size_function(psa_key_type_t key_type, size_t key_bits); ``` #### Size of a statically allocated key context If the core does not support dynamic allocation for the key context or chooses not to use it, or if the driver specification does not include the `"size_function"` property, the size of the key context for a key of type `key_type` and of size `key_bits` bits is: * For a key pair (`PSA_KEY_TYPE_IS_KEY_PAIR(key_type)` is true): ``` base_size + key_pair_size + public_key_overhead ``` where `public_key_overhead = PSA_EXPORT_PUBLIC_KEY_MAX_SIZE(key_type, key_bits)` if the `"store_public_key"` property is true and `public_key_overhead = 0` otherwise. * For a public key (`PSA_KEY_TYPE_IS_PUBLIC_KEY(key_type)` is true): ``` base_size + public_key_size ``` * For a symmetric key (not a key pair or public key): ``` base_size + symmetric_factor * key_bytes ``` where `key_bytes = ((key_bits + 7) / 8)` is the key size in bytes. #### Key context size for a secure element with storage If the key is stored in the secure element and the driver only needs to store a label for the key, use `"base_size"` as the size of the label plus any other metadata that the driver needs to store, and omit the other properties. If the key is stored in the secure element, but the secure element does not store the public part of a key pair and cannot recompute it on demand, additionally use the `"store_public_key"` property with the value `true`. Note that this only influences the size of the key context: the driver code must copy the public key to the key context and retrieve it on demand in its `export_public_key` entry point. #### Key context size for a secure element without storage If the key is stored in wrapped form outside the secure element, and the wrapped form of the key plus any metadata has up to *N* bytes of overhead, use *N* as the value of the `"base_size"` property and set the `"symmetric_factor"` property to 1. Set the `"key_pair_size"` and `"public_key_size"` properties appropriately for the largest supported key pair and the largest supported public key respectively. ### Key management with opaque drivers Opaque drivers may provide the following key management entry points: * `"export_key"`: called by `psa_export_key()`, or by `psa_copy_key()` when copying a key from or to a different [location](#lifetimes-and-locations). * `"export_public_key"`: called by the core to obtain the public key of a key pair. The core may call this entry point at any time to obtain the public key, which can be for `psa_export_public_key()` but also at other times, including during a cryptographic operation that requires the public key such as a call to `psa_verify_message()` on a key pair object. * `"import_key"`: called by `psa_import_key()`, or by `psa_copy_key()` when copying a key from another location. * `"generate_key"`: called by `psa_generate_key()`. 
* `"key_derivation_output_key"`: called by `psa_key_derivation_output_key()`.
* `"copy_key"`: called by `psa_copy_key()` when copying a key within the same [location](#lifetimes-and-locations).
* `"get_builtin_key"`: called by functions that access a key to retrieve information about a [built-in key](#built-in-keys).

In addition, secure elements that store the key material internally must provide the following two entry points:

* `"allocate_key"`: called by `psa_import_key()`, `psa_generate_key()`, `psa_key_derivation_output_key()` or `psa_copy_key()` before creating a key in the location of this driver.
* `"destroy_key"`: called by `psa_destroy_key()`.

#### Key creation in a secure element without storage

This section describes the key creation process for secure elements that do not store the key material. The driver must obtain a wrapped form of the key material which the core will store. A driver for such a secure element has no `"allocate_key"` or `"destroy_key"` entry point.

When creating a key with an opaque driver which does not have an `"allocate_key"` or `"destroy_key"` entry point:

1. The core allocates memory for the key context.
2. The core calls the driver's import, generate, derive or copy entry point.
3. The core saves the resulting wrapped key material and any other data that the key context may contain.

To destroy a key, the core simply destroys the wrapped key material, without invoking driver code.

#### Key management in a secure element with storage

This section describes the key creation and key destruction processes for secure elements that have persistent storage for the key material. A driver for such a secure element has two mandatory entry points:

* `"allocate_key"`: this function obtains an internal identifier for the key. This may be, for example, a unique label or a slot number.
* `"destroy_key"`: this function invalidates the internal identifier and destroys the associated key material.

These functions have the following prototypes for a driver with the prefix `"acme"`:

```
psa_status_t acme_allocate_key(const psa_key_attributes_t *attributes,
                               uint8_t *key_buffer,
                               size_t key_buffer_size);
psa_status_t acme_destroy_key(const psa_key_attributes_t *attributes,
                              const uint8_t *key_buffer,
                              size_t key_buffer_size);
```

When creating a persistent key with an opaque driver which has an `"allocate_key"` entry point:

1. The core calls the driver's `"allocate_key"` entry point. This function typically allocates an internal identifier for the key and stores it in the key context. This function should not modify the state of the secure element. It may modify the copy of the persistent state of the driver in memory.
1. The core saves the key context to persistent storage.
1. The core calls the driver's key creation entry point.
1. The core saves the updated key context to persistent storage.

If a failure occurs after the `"allocate_key"` step but before the call to the second driver entry point, the core will do one of the following:

* Fail the creation of the key without indicating this to the driver. This can happen, in particular, if the device loses power immediately after the key allocation entry point returns.
* Call the driver's `"destroy_key"` entry point.

To destroy a key, the core calls the driver's `"destroy_key"` entry point.

Note that the key allocation and destruction entry points must not rely solely on the key identifier in the key attributes to identify a key.
Some implementations of the PSA Cryptography API store keys on behalf of multiple clients, and different clients may use the same key identifier to designate different keys. The manner in which the core distinguishes keys that have the same identifier but are part of the key namespace for different clients is implementation-dependent and is not accessible to drivers. Some typical strategies to allocate an internal key identifier are:

* Maintain a set of free slot numbers which is stored either in the secure element or in the driver's persistent storage. To allocate a key slot, find a free slot number, mark it as occupied and store the number in the key context. When the key is destroyed, mark the slot number as free.
* Maintain a monotonic counter with a practically unbounded range in the secure element or in the driver's persistent storage. To allocate a key slot, increment the counter and store the current value in the key context. Destroying a key does not change the counter.

TODO: explain constraints on how the driver updates its persistent state for resilience

TODO: some of the above doesn't apply to volatile keys

#### Key creation entry points in opaque drivers

The key creation entry points have the following prototypes for a driver with the prefix `"acme"`:

```
psa_status_t acme_import_key(const psa_key_attributes_t *attributes,
                             const uint8_t *data,
                             size_t data_length,
                             uint8_t *key_buffer,
                             size_t key_buffer_size,
                             size_t *key_buffer_length,
                             size_t *bits);
psa_status_t acme_generate_key(const psa_key_attributes_t *attributes,
                               uint8_t *key_buffer,
                               size_t key_buffer_size,
                               size_t *key_buffer_length);
```

If the driver has an [`"allocate_key"` entry point](#key-management-in-a-secure-element-with-storage), the core calls the `"allocate_key"` entry point with the same attributes on the same key buffer before calling the key creation entry point.

TODO: derivation, copy

#### Key export entry points in opaque drivers

The key export entry points have the following prototypes for a driver with the prefix `"acme"`:

```
psa_status_t acme_export_key(const psa_key_attributes_t *attributes,
                             const uint8_t *key_buffer,
                             size_t key_buffer_size,
                             uint8_t *data,
                             size_t data_size,
                             size_t *data_length);
psa_status_t acme_export_public_key(const psa_key_attributes_t *attributes,
                                    const uint8_t *key_buffer,
                                    size_t key_buffer_size,
                                    uint8_t *data,
                                    size_t data_size,
                                    size_t *data_length);
```

The core will only call `acme_export_public_key` on a private key. Driver implementers may choose to store the public key in the key context buffer or to recalculate it on demand. If the key context includes the public key, it needs to have an adequate size; see [“Key format for opaque drivers”](#key-format-for-opaque-drivers).

The core guarantees that the size of the output buffer (`data_size`) is sufficient to export any key with the given attributes. The driver must set `*data_length` to the exact size of the exported key.

### Opaque driver persistent state

The core maintains persistent state on behalf of an opaque driver. This persistent state consists of a single byte array whose size is given by the `"persistent_state_size"` property in the [driver description](#driver-description-top-level-element).

The core loads the persistent state in memory before it calls the driver's [init entry point](#driver-initialization). It is adjusted to match the size declared by the driver, in case a driver upgrade changes the size:

* The first time the driver is loaded on a system, the persistent state is all-bits-zero.
* If the stored persistent state is smaller than the declared size, the core pads the persistent state with all-bits-zero at the end.
* If the stored persistent state is larger than the declared size, the core truncates the persistent state to the declared size.

The core provides the following callback functions, which an opaque driver may call while it is processing a call from the core:

```
psa_status_t psa_crypto_driver_get_persistent_state(uint8_t **persistent_state_ptr);
psa_status_t psa_crypto_driver_commit_persistent_state(size_t from, size_t length);
```

`psa_crypto_driver_get_persistent_state` sets `*persistent_state_ptr` to a pointer to the first byte of the persistent state. This pointer remains valid during a call to a driver entry point. Once the entry point returns, the pointer is no longer valid. The core guarantees that calls to `psa_crypto_driver_get_persistent_state` within the same entry point return the same address for the persistent state, but this address may change between calls to an entry point.

`psa_crypto_driver_commit_persistent_state` updates the persistent state in persistent storage. Only the portion at byte offsets `from` inclusive to `from + length` exclusive is guaranteed to be updated; it is unspecified whether changes made to other parts of the state are taken into account. The driver must call this function after updating the persistent state in memory and before returning from the entry point, otherwise it is unspecified whether the persistent state is updated.

The core will not update the persistent state in storage while an entry point is running except when the entry point calls `psa_crypto_driver_commit_persistent_state`. It may update the persistent state in storage after an entry point returns.

In a multithreaded environment, the driver may only call these two functions from the thread that is executing the entry point.

#### Built-in keys

Opaque drivers may declare built-in keys. Built-in keys can be accessed, but not created, through the PSA Cryptography API.

A built-in key is identified by its location and its **slot number**. Drivers that support built-in keys must provide a `"get_builtin_key"` entry point to retrieve the key data and metadata. The core calls this entry point when it needs to access the key, typically because the application requested an operation on the key. The core may keep information about the key in cache, and successive calls to access the same slot number should return the same data. This entry point has the following prototype:

```
psa_status_t acme_get_builtin_key(psa_drv_slot_number_t slot_number,
                                  psa_key_attributes_t *attributes,
                                  uint8_t *key_buffer,
                                  size_t key_buffer_size,
                                  size_t *key_buffer_length);
```

If this function returns `PSA_SUCCESS` or `PSA_ERROR_BUFFER_TOO_SMALL`, it must fill `attributes` with the attributes of the key (except for the key identifier). On success, this function must also fill `key_buffer` with the key context.

On entry, `psa_get_key_lifetime(attributes)` is the location at which the driver was declared and a persistence level with which the platform is attempting to register the key. The driver entry point may choose to change the lifetime (`psa_set_key_lifetime(attributes, lifetime)`) of the reported key attributes to one with the same location but a different persistence level, in case the driver has more specific knowledge about the actual persistence level of the key which is being retrieved.
For example, if a driver knows it cannot delete a key, it may override the persistence level in the lifetime to `PSA_KEY_PERSISTENCE_READ_ONLY`. The standard attributes other than the key identifier and lifetime have the value conveyed by `PSA_KEY_ATTRIBUTES_INIT`.

The output parameter `key_buffer` points to a writable buffer of `key_buffer_size` bytes. If the driver has a [`"builtin_key_size"` property](#key-format-for-opaque-drivers), `key_buffer_size` has this value, otherwise `key_buffer_size` has the value determined from the key type and size.

Typically, for a built-in key, the key context is a reference to key material that is kept inside the secure element, similar to the format returned by [`"allocate_key"`](#key-management-in-a-secure-element-with-storage). A driver may have built-in keys even if it doesn't have an `"allocate_key"` entry point.

This entry point may return the following status values:

* `PSA_SUCCESS`: the requested key exists, and the output parameters `attributes` and `key_buffer` contain the key metadata and key context respectively, and `*key_buffer_length` contains the length of the data written to `key_buffer`.
* `PSA_ERROR_BUFFER_TOO_SMALL`: `key_buffer_size` is insufficient. In this case, the driver must pass the key's attributes in `*attributes`. In particular, `get_builtin_key(slot_number, &attributes, NULL, 0)` is a way for the core to obtain the key's attributes.
* `PSA_ERROR_DOES_NOT_EXIST`: the requested key does not exist.
* Other error codes such as `PSA_ERROR_COMMUNICATION_FAILURE` or `PSA_ERROR_HARDWARE_FAILURE` indicate a transient or permanent error.

The core will pass authorized requests to destroy a built-in key to the [`"destroy_key"`](#key-management-in-a-secure-element-with-storage) entry point if there is one. If built-in keys must not be destroyed, it is up to the driver to reject such requests.

## How to use drivers from an application

### Using transparent drivers

Transparent drivers linked into the library are automatically used for the mechanisms that they implement.

### Using opaque drivers

Each opaque driver is assigned a [location](#lifetimes-and-locations). The driver is invoked for all actions that use a key in that location. A key's location is indicated by its lifetime. The application chooses the key's lifetime when it creates the key.

For example, the following snippet creates an AES-GCM key which is only accessible inside the secure element designated by the location `PSA_KEY_LOCATION_acme`.

```
psa_key_attributes_t attributes = PSA_KEY_ATTRIBUTES_INIT;
psa_set_key_lifetime(&attributes, PSA_KEY_LIFETIME_FROM_PERSISTENCE_AND_LOCATION(
        PSA_KEY_PERSISTENCE_DEFAULT, PSA_KEY_LOCATION_acme));
psa_set_key_id(&attributes, 42);
psa_set_key_type(&attributes, PSA_KEY_TYPE_AES);
psa_set_key_bits(&attributes, 128);
psa_set_key_algorithm(&attributes, PSA_ALG_GCM);
psa_set_key_usage_flags(&attributes, PSA_KEY_USAGE_ENCRYPT | PSA_KEY_USAGE_DECRYPT);
psa_key_id_t key;
psa_generate_key(&attributes, &key);
```

## Using opaque drivers from an application

### Lifetimes and locations

The PSA Cryptography API, version 1.0.0, defines [lifetimes](https://armmbed.github.io/mbed-crypto/html/api/keys/attributes.html?highlight=psa_key_lifetime_t#c.psa_key_lifetime_t) as an attribute of a key that indicates where the key is stored and which application and system actions will create and destroy it. The lifetime is expressed as a 32-bit value (`typedef uint32_t psa_key_lifetime_t`).
An upcoming version of the PSA Cryptography API defines more structure for lifetime values to separate these two aspects of the lifetime:

* Bits 0–7 are a _persistence level_. This value indicates what device management actions can cause the key to be destroyed. In particular, it indicates whether the key is volatile or persistent.
* Bits 8–31 are a _location indicator_. This value indicates where the key material is stored and where operations on the key are performed. Location values can be stored in a variable of type `psa_key_location_t`.

An opaque driver is attached to a specific location. Keys in the default location (`PSA_KEY_LOCATION_LOCAL_STORAGE = 0`) are transparent: the core has direct access to the key material. For keys in a location that is managed by an opaque driver, only the secure element has access to the key material and can perform operations on the key, while the core only manipulates a wrapped form of the key or an identifier of the key.

### Creating a key in a secure element

The core defines, for each opaque driver, a compile-time constant indicating its location, called `PSA_KEY_LOCATION_`*prefix* where *prefix* is the value of the `"prefix"` property in the driver description. For convenience, Mbed TLS also declares a compile-time constant for the corresponding lifetime with the default persistence called `PSA_KEY_LIFETIME_`*prefix*. Therefore, to declare an opaque key in the location with the prefix `foo` with the default persistence, call `psa_set_key_lifetime` during the key creation as follows:

```
psa_set_key_lifetime(&attributes, PSA_KEY_LIFETIME_foo);
```

To declare a volatile key:

```
psa_set_key_lifetime(&attributes, PSA_KEY_LIFETIME_FROM_PERSISTENCE_AND_LOCATION(
        PSA_KEY_PERSISTENCE_VOLATILE,
        PSA_KEY_LOCATION_foo));
```

Generally speaking, to declare a key with a specified persistence:

```
psa_set_key_lifetime(&attributes, PSA_KEY_LIFETIME_FROM_PERSISTENCE_AND_LOCATION(
        persistence,
        PSA_KEY_LOCATION_foo));
```

## Open questions

### Value representation

#### Integers

It would be better if there were a uniform requirement on integer values. Do they have to be JSON integers? C preprocessor integers (which could be e.g. a macro defined in some header file)? C compile-time constants (allowing `sizeof`)?

This choice is partly driven by the use of the values, so they might not be uniform.

Note that if the value can be zero and it's plausible that the core would want to statically allocate an array of the given size, the core needs to know whether the value is 0 so that it could use code like

```
#if ACME_FOO_SIZE != 0
uint8_t foo[ACME_FOO_SIZE];
#endif
```

### Driver declarations

#### Declaring driver entry points

The core may want to provide declarations for the driver entry points so that it can compile code using them. At the time of writing this paragraph, the driver headers must define types but there is no obligation for them to declare functions. The core knows what the function names and argument types are, so it can generate prototypes.

It should be ok for driver functions to be function-like macros or function pointers.

#### Driver location values

How does a driver author decide which location values to use? It should be possible to combine drivers from different sources. Use the same vendor assignment as for PSA services? Can the driver assembly process generate distinct location values as needed? This can be convenient, but it's also risky: if you upgrade a device, you need the location values to be the same between builds.
The current plan is for Arm to maintain a registry of vendors and assign a location namespace to each vendor. Parts of the namespace would be reserved for implementations and integrators.

#### Multiple transparent drivers

When multiple transparent drivers implement the same mechanism, which one is called? The first one? The last one? Unspecified? Or is this an error (excluding capabilities with fallback enabled)?

The current choice is that the first one is used, which allows having a preference order on drivers, but may mask integration errors.

### Driver function interfaces

#### Driver function parameter conventions

Should 0-size buffers be guaranteed to have a non-null pointer?

Should drivers really have to cope with overlap?

Should the core guarantee that the output buffer size has the size indicated by the applicable buffer size macro (which may be an overestimation)?

### Partial computations in drivers

#### Substitution points

Earlier drafts of the driver interface had a concept of _substitution points_: places in the calculation where a driver may be called. Some hardware doesn't do the whole calculation, but only the “main” part. This applies to both transparent and opaque drivers. Some common examples:

* A processor that performs the RSA exponentiation, but not the padding. The driver should be able to leverage the padding code in the core.
* A processor that performs a block cipher operation only for a single block, or only in ECB mode, or only in CTR mode. The core would perform the block mode (CBC, CTR, CCM, ...).

This concept, or some other way to reuse portable code such as specifying inner functions like `psa_rsa_pad` in the core, should be added to the specification.

### Key management

#### Mixing drivers in key derivation

How does `psa_key_derivation_output_key` work when the extraction part and the expansion part use different drivers?

#### Public key calculation

ECC key pairs are represented as the private key value only. The public key needs to be calculated from that. Both transparent drivers and opaque drivers provide a function to calculate the public key (`"export_public_key"`).

The specification doesn't mention when the public key might be calculated. The core may calculate it on creation, on demand, or anything in between. Opaque drivers have a choice of storing the public key in the key context or calculating it on demand and can convey whether the core should store the public key with the `"store_public_key"` property. Is this good enough or should the specification include non-functional requirements?

#### Symmetric key validation with transparent drivers

Should the entry point be called for symmetric keys as well?

#### Support for custom import formats

[“Driver entry points for key management”](#driver-entry-points-for-key-management) states that the input to `"import_key"` can be an implementation-defined format. Is this a good idea? It reduces driver portability, since a core that accepts a custom format would not work with a driver that doesn't accept this format. On the other hand, if a driver accepts a custom format, the core should let it through because the driver presumably handles it more efficiently (in terms of speed and code size) than the core could.

Allowing custom formats also causes a problem with import: the core can't know the size of the key representation until it knows the bit-size of the key, but determining the bit-size of the key is part of the job of the `"import_key"` entry point.
For standard key types, this could plausibly be an issue for RSA private keys, where an implementation might accept a custom format that omits the CRT parameters (or that omits *d*).

### Opaque drivers

#### Opaque driver persistent state

The driver is allowed to update the state at any time. Is this ok? An example use case for updating the persistent state at arbitrary times is to renew a key that is used to encrypt communications between the application processor and the secure element.

`psa_crypto_driver_get_persistent_state` does not identify the calling driver, so the core needs to remember which driver it's calling. This may require a thread-local variable in a multithreaded core. Is this ok?

### Randomness

#### Input to `"add_entropy"`

Should the input to the [`"add_entropy"` entry point](#entropy-injection) be a full-entropy buffer (with data from all entropy sources already mixed), raw entropy direct from the entropy sources, or give the core a choice?

* Raw data: drivers must implement entropy mixing. `"add_entropy"` needs an extra parameter to indicate the amount of entropy in the data. The core must not do any conditioning.
* Choice: drivers must implement entropy mixing. `"add_entropy"` needs an extra parameter to indicate the amount of entropy in the data. The core may do conditioning if it wants, but doesn't have to.
* Full entropy: drivers don't need to do entropy mixing.

#### Flags for `"get_entropy"`

Are the [entropy collection flags](#entropy-collection-flags) well-chosen?

#### Random generator instantiations

May the core instantiate a random generation context more than once? In other words, can there be multiple objects of type `acme_random_context_t`?

Functionally, one RNG is as good as another. If the core wants some parts of the system to use a deterministic generator for reproducibility, it can't use this interface anyway, since the RNG is not necessarily deterministic. However, for performance on multiprocessor systems, a multithreaded core could prefer to use one RNG instance per thread.
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/docs/proposed/psa-driver-developer-guide.md
PSA Cryptoprocessor driver developer's guide
============================================

**This is a specification of work in progress. The implementation is not yet merged into Mbed TLS.**

This document describes how to write drivers for cryptoprocessors such as accelerators and secure elements for the PSA cryptography subsystem of Mbed TLS.

This document focuses on behavior that is specific to Mbed TLS. For a reference of the interface between Mbed TLS and drivers, refer to the [PSA Cryptoprocessor Driver Interface specification](psa-driver-interface.html).

The interface is not fully implemented in Mbed TLS yet and is disabled by default. You can enable the experimental work in progress by setting `MBEDTLS_PSA_CRYPTO_DRIVERS` in the compile-time configuration. Please note that the interface may still change: until further notice, we do not guarantee backward compatibility with existing driver code when `MBEDTLS_PSA_CRYPTO_DRIVERS` is enabled.

## Introduction

### Purpose

The PSA cryptography driver interface provides a way to build Mbed TLS with additional code that implements certain cryptographic primitives. This is primarily intended to support platform-specific hardware.

There are two types of drivers:

* **Transparent** drivers implement cryptographic operations on keys that are provided in cleartext at the beginning of each operation. They are typically used for hardware **accelerators**. When a transparent driver is available for a particular combination of parameters (cryptographic algorithm, key type and size, etc.), it is used instead of the default software implementation. Transparent drivers can also be pure software implementations that are distributed as plug-ins to a PSA Crypto implementation.
* **Opaque** drivers implement cryptographic operations on keys that can only be used inside a protected environment such as a **secure element**, a hardware security module, a smartcard, a secure enclave, etc. An opaque driver is invoked for the specific key location that the driver is registered for: the dispatch is based on the key's lifetime.

### Deliverables for a driver

To write a driver, you need to implement some functions with C linkage, and to declare these functions in a **driver description file**. The driver description file declares which functions the driver implements and what cryptographic mechanisms they support. Depending on the driver type, you may also need to define some C types and macros in a header file.

The concrete syntax for a driver description file is JSON. The structure of this JSON file is specified in the section [“Driver description syntax”](psa-driver-interface.html#driver-description-syntax) of the PSA cryptography driver interface specification.

A driver therefore consists of:

* A driver description file (in JSON format).
* C header files defining the types required by the driver description. The names of these header files are declared in the driver description file.
* An object file compiled for the target platform defining the functions required by the driver description. Implementations may allow drivers to be provided as source files and compiled with the core instead of being pre-compiled.

## Driver C interfaces

Mbed TLS calls driver entry points [as specified in the PSA Cryptography Driver Interface specification](psa-driver-interface.html#driver-entry-points) except as otherwise indicated in this section.

## Building and testing your driver

<!-- TODO -->

## Dependencies on the Mbed TLS configuration

<!-- TODO -->
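For illustration, the sketch below shows what the C-linkage deliverable described under “Deliverables for a driver” might look like for a hypothetical transparent accelerator. The `acme` names and the file name are invented, and the entry-point signature is assumed to follow the `"hash_compute"` shape from the driver interface specification.

```C
/* acme_driver.h -- hypothetical transparent-driver header (illustrative only) */
#ifndef ACME_DRIVER_H
#define ACME_DRIVER_H

#include <psa/crypto.h>

#ifdef __cplusplus
extern "C" {
#endif

/* Entry point declared in the driver description file and implemented
 * with C linkage in the driver's object file. */
psa_status_t acme_hash_compute( psa_algorithm_t alg,
                                const uint8_t *input, size_t input_length,
                                uint8_t *hash, size_t hash_size,
                                size_t *hash_length );

#ifdef __cplusplus
}
#endif

#endif /* ACME_DRIVER_H */
```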
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/CMakeLists.txt
list (APPEND thirdparty_src)
list (APPEND thirdparty_lib)
list (APPEND thirdparty_inc_public)
list (APPEND thirdparty_inc)
list (APPEND thirdparty_def)

# Build the Everest Curve25519 code only if the corresponding option is
# enabled in the mbedtls configuration (config.py's "get" returns 0 when
# the symbol is set).
execute_process(COMMAND ${MBEDTLS_PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/../scripts/config.py -f ${CMAKE_CURRENT_SOURCE_DIR}/../include/mbedtls/config.h get MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED RESULT_VARIABLE result)

if(${result} EQUAL 0)
    add_subdirectory(everest)
endif()

# Propagate the accumulated lists to the parent scope.
set(thirdparty_src ${thirdparty_src} PARENT_SCOPE)
set(thirdparty_lib ${thirdparty_lib} PARENT_SCOPE)
set(thirdparty_inc_public ${thirdparty_inc_public} PARENT_SCOPE)
set(thirdparty_inc ${thirdparty_inc} PARENT_SCOPE)
set(thirdparty_def ${thirdparty_def} PARENT_SCOPE)
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/Makefile.inc
THIRDPARTY_DIR = $(dir $(lastword $(MAKEFILE_LIST))) include $(THIRDPARTY_DIR)/everest/Makefile.inc
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/CMakeLists.txt
list (APPEND everest_src) list (APPEND everest_inc_public) list (APPEND everest_inc) list (APPEND everest_def) set(everest_src ${CMAKE_CURRENT_SOURCE_DIR}/library/everest.c ${CMAKE_CURRENT_SOURCE_DIR}/library/x25519.c ${CMAKE_CURRENT_SOURCE_DIR}/library/Hacl_Curve25519_joined.c ) list(APPEND everest_inc_public ${CMAKE_CURRENT_SOURCE_DIR}/include) list(APPEND everest_inc ${CMAKE_CURRENT_SOURCE_DIR}/include/everest ${CMAKE_CURRENT_SOURCE_DIR}/include/everest/kremlib) if(INSTALL_MBEDTLS_HEADERS) install(DIRECTORY include/everest DESTINATION include FILE_PERMISSIONS OWNER_READ OWNER_WRITE GROUP_READ WORLD_READ DIRECTORY_PERMISSIONS OWNER_READ OWNER_WRITE OWNER_EXECUTE GROUP_READ GROUP_EXECUTE WORLD_READ WORLD_EXECUTE FILES_MATCHING PATTERN "*.h") endif(INSTALL_MBEDTLS_HEADERS) set(thirdparty_src ${thirdparty_src} ${everest_src} PARENT_SCOPE) set(thirdparty_inc_public ${thirdparty_inc_public} ${everest_inc_public} PARENT_SCOPE) set(thirdparty_inc ${thirdparty_inc} ${everest_inc} PARENT_SCOPE) set(thirdparty_def ${thirdparty_def} ${everest_def} PARENT_SCOPE)
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/Makefile.inc
THIRDPARTY_INCLUDES+=-I../3rdparty/everest/include -I../3rdparty/everest/include/everest -I../3rdparty/everest/include/everest/kremlib THIRDPARTY_CRYPTO_OBJECTS+= \ ../3rdparty/everest/library/everest.o \ ../3rdparty/everest/library/x25519.o \ ../3rdparty/everest/library/Hacl_Curve25519_joined.o
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/README.md
The files in this directory stem from [Project Everest](https://project-everest.github.io/) and are distributed under the Apache 2.0 license.

This is a formally verified implementation of Curve25519-based handshakes. The C code is automatically derived from the (verified) [original implementation](https://github.com/project-everest/hacl-star/tree/master/code/curve25519) in the [F* language](https://github.com/fstarlang/fstar) by [KreMLin](https://github.com/fstarlang/kremlin). In addition to the improved safety and security of the implementation, it is also significantly faster than the default implementation of Curve25519 in mbedTLS.

The caveat is that not all platforms are supported, although the version in `everest/library/legacy` should work on most systems. The main issue is that some platforms do not provide a 128-bit integer type and KreMLin therefore has to use additional (also verified) code to simulate them, resulting in less of a performance gain overall. Explicitly supported platforms are currently `x86` and `x86_64` using gcc or clang, and Visual C (2010 and later).
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlib.h
/* * Copyright 2016-2018 INRIA and Microsoft Corporation * * SPDX-License-Identifier: Apache-2.0 * * Licensed under the Apache License, Version 2.0 (the "License"); you may * not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * This file is part of Mbed TLS (https://tls.mbed.org) and * originated from Project Everest (https://project-everest.github.io/) */ #ifndef __KREMLIB_H #define __KREMLIB_H #include "kremlin/internal/target.h" #include "kremlin/internal/types.h" #include "kremlin/c_endianness.h" #endif /* __KREMLIB_H */
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/x25519.h
/*
 *  ECDH with curve-optimized implementation multiplexing
 *
 *  Copyright 2016-2018 INRIA and Microsoft Corporation
 *  SPDX-License-Identifier: Apache-2.0
 *
 *  Licensed under the Apache License, Version 2.0 (the "License"); you may
 *  not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 *  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 *
 *  This file is part of mbed TLS (https://tls.mbed.org)
 */

#ifndef MBEDTLS_X25519_H
#define MBEDTLS_X25519_H

#ifdef __cplusplus
extern "C" {
#endif

#define MBEDTLS_ECP_TLS_CURVE25519 0x1d
#define MBEDTLS_X25519_KEY_SIZE_BYTES 32

/**
 * Defines the source of the imported EC key.
 */
typedef enum
{
    MBEDTLS_X25519_ECDH_OURS,   /**< Our key. */
    MBEDTLS_X25519_ECDH_THEIRS, /**< The key of the peer. */
} mbedtls_x25519_ecdh_side;

/**
 * \brief           The x25519 context structure.
 */
typedef struct
{
    unsigned char our_secret[MBEDTLS_X25519_KEY_SIZE_BYTES];
    unsigned char peer_point[MBEDTLS_X25519_KEY_SIZE_BYTES];
} mbedtls_x25519_context;

/**
 * \brief           This function initializes an x25519 context.
 *
 * \param ctx       The x25519 context to initialize.
 */
void mbedtls_x25519_init( mbedtls_x25519_context *ctx );

/**
 * \brief           This function frees a context.
 *
 * \param ctx       The context to free.
 */
void mbedtls_x25519_free( mbedtls_x25519_context *ctx );

/**
 * \brief           This function generates a public key and a TLS
 *                  ServerKeyExchange payload.
 *
 *                  This is the first function used by a TLS server for x25519.
 *
 * \param ctx       The x25519 context.
 * \param olen      The number of Bytes written.
 * \param buf       The destination buffer.
 * \param blen      The length of the destination buffer.
 * \param f_rng     The RNG function.
 * \param p_rng     The RNG context.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_x25519_make_params( mbedtls_x25519_context *ctx, size_t *olen,
                                unsigned char *buf, size_t blen,
                                int( *f_rng )(void *, unsigned char *, size_t),
                                void *p_rng );

/**
 * \brief           This function parses and processes a TLS ServerKeyExchange
 *                  payload.
 *
 * \param ctx       The x25519 context.
 * \param buf       The pointer to the start of the input buffer.
 * \param end       The address for one Byte past the end of the buffer.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 *
 */
int mbedtls_x25519_read_params( mbedtls_x25519_context *ctx,
                                const unsigned char **buf,
                                const unsigned char *end );

/**
 * \brief           This function sets up an x25519 context from an EC key.
 *
 *                  It is used by clients and servers in place of the
 *                  ServerKeyExchange for static ECDH, and imports ECDH
 *                  parameters from the EC key information of a certificate.
 *
 * \see             ecp.h
 *
 * \param ctx       The x25519 context to set up.
 * \param key       The EC key to use.
 * \param side      Defines the source of the key: \c MBEDTLS_X25519_ECDH_OURS:
 *                  Our key, or \c MBEDTLS_X25519_ECDH_THEIRS: The key of the
 *                  peer.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 *
 */
int mbedtls_x25519_get_params( mbedtls_x25519_context *ctx,
                               const mbedtls_ecp_keypair *key,
                               mbedtls_x25519_ecdh_side side );

/**
 * \brief           This function derives and exports the shared secret.
 *
 *                  This is the last function used by both TLS clients
 *                  and servers.
 *
 * \param ctx       The x25519 context.
 * \param olen      The number of Bytes written.
 * \param buf       The destination buffer.
 * \param blen      The length of the destination buffer.
 * \param f_rng     The RNG function.
 * \param p_rng     The RNG context.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_x25519_calc_secret( mbedtls_x25519_context *ctx, size_t *olen,
                                unsigned char *buf, size_t blen,
                                int( *f_rng )(void *, unsigned char *, size_t),
                                void *p_rng );

/**
 * \brief           This function generates a public key and a TLS
 *                  ClientKeyExchange payload.
 *
 *                  This is the second function used by a TLS client for x25519.
 *
 * \see             ecp.h
 *
 * \param ctx       The x25519 context.
 * \param olen      The number of Bytes written.
 * \param buf       The destination buffer.
 * \param blen      The size of the destination buffer.
 * \param f_rng     The RNG function.
 * \param p_rng     The RNG context.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_x25519_make_public( mbedtls_x25519_context *ctx, size_t *olen,
                                unsigned char *buf, size_t blen,
                                int( *f_rng )(void *, unsigned char *, size_t),
                                void *p_rng );

/**
 * \brief           This function parses and processes a TLS ClientKeyExchange
 *                  payload.
 *
 *                  This is the second function used by a TLS server for x25519.
 *
 * \see             ecp.h
 *
 * \param ctx       The x25519 context.
 * \param buf       The start of the input buffer.
 * \param blen      The length of the input buffer.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_x25519_read_public( mbedtls_x25519_context *ctx,
                                const unsigned char *buf, size_t blen );

#ifdef __cplusplus
}
#endif

#endif /* x25519.h */
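The doc comments above prescribe a call order. As an illustration only (this helper is not part of the source tree), a TLS-server-side exchange would chain the functions as follows, with the RNG callback pair supplied by the caller, for example from a CTR_DRBG instance:

```C
#include <stddef.h>
#include "mbedtls/ecp.h"      /* for mbedtls_ecp_keypair, referenced by x25519.h */
#include "everest/x25519.h"

int x25519_server_exchange( unsigned char *params, size_t params_size,
                            size_t *params_len,
                            const unsigned char *peer_payload,
                            size_t peer_payload_len,
                            unsigned char secret[MBEDTLS_X25519_KEY_SIZE_BYTES],
                            int( *f_rng )(void *, unsigned char *, size_t),
                            void *p_rng )
{
    mbedtls_x25519_context ctx;
    size_t olen;
    int ret;

    mbedtls_x25519_init( &ctx );

    /* 1. Generate our key pair and the ServerKeyExchange payload. */
    ret = mbedtls_x25519_make_params( &ctx, params_len, params, params_size,
                                      f_rng, p_rng );
    if( ret != 0 )
        goto cleanup;

    /* 2. Read the peer's public key from the ClientKeyExchange payload. */
    ret = mbedtls_x25519_read_public( &ctx, peer_payload, peer_payload_len );
    if( ret != 0 )
        goto cleanup;

    /* 3. Derive the shared secret. */
    ret = mbedtls_x25519_calc_secret( &ctx, &olen, secret,
                                      MBEDTLS_X25519_KEY_SIZE_BYTES,
                                      f_rng, p_rng );

cleanup:
    mbedtls_x25519_free( &ctx );
    return( ret );
}
```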
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/Hacl_Curve25519.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file was generated by KreMLin <https://github.com/FStarLang/kremlin> * KreMLin invocation: /mnt/e/everest/verify/kremlin/krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -fbuiltin-uint128 -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -I /mnt/e/everest/verify/hacl-star/code/lib/kremlin -I /mnt/e/everest/verify/kremlin/kremlib/compat -I /mnt/e/everest/verify/hacl-star/specs -I /mnt/e/everest/verify/hacl-star/specs/old -I . -ccopt -march=native -verbose -ldopt -flto -tmpdir x25519-c -I ../bignum -bundle Hacl.Curve25519=* -minimal -add-include "kremlib.h" -skip-compilation x25519-c/out.krml -o x25519-c/Hacl_Curve25519.c * F* version: 059db0c8 * KreMLin version: 916c37ac */ #ifndef __Hacl_Curve25519_H #define __Hacl_Curve25519_H #include "kremlib.h" void Hacl_Curve25519_crypto_scalarmult(uint8_t *mypublic, uint8_t *secret, uint8_t *basepoint); #define __Hacl_Curve25519_H_DEFINED #endif
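As a usage note (not part of the generated header): `Hacl_Curve25519_crypto_scalarmult` computes the scalar multiple of a curve point, so a public key is conventionally derived by multiplying the 32-byte secret with the standard base point 9, as in this illustrative helper:

```C
#include <stdint.h>
#include "everest/Hacl_Curve25519.h"

/* Derive a Curve25519 public key from a 32-byte secret scalar by
 * multiplying with the standard base point (9 followed by zeros). */
void derive_public_key( uint8_t public_key[32], uint8_t secret[32] )
{
    uint8_t basepoint[32] = { 9 };
    Hacl_Curve25519_crypto_scalarmult( public_key, secret, basepoint );
}
```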
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/everest.h
/*
 *  Interface to code from Project Everest
 *
 *  Copyright 2016-2018 INRIA and Microsoft Corporation
 *  SPDX-License-Identifier: Apache-2.0
 *
 *  Licensed under the Apache License, Version 2.0 (the "License"); you may
 *  not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 *  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 *
 *  This file is part of Mbed TLS (https://tls.mbed.org).
 */

#ifndef MBEDTLS_EVEREST_H
#define MBEDTLS_EVEREST_H

#include "everest/x25519.h"

#ifdef __cplusplus
extern "C" {
#endif

/**
 * Defines the source of the imported EC key.
 */
typedef enum
{
    MBEDTLS_EVEREST_ECDH_OURS,   /**< Our key. */
    MBEDTLS_EVEREST_ECDH_THEIRS, /**< The key of the peer. */
} mbedtls_everest_ecdh_side;

typedef struct
{
    mbedtls_x25519_context ctx;
} mbedtls_ecdh_context_everest;

/**
 * \brief           This function sets up the ECDH context with the information
 *                  given.
 *
 *                  This function should be called after mbedtls_ecdh_init() but
 *                  before mbedtls_ecdh_make_params(). There is no need to call
 *                  this function before mbedtls_ecdh_read_params().
 *
 *                  This is the first function used by a TLS server for ECDHE
 *                  ciphersuites.
 *
 * \param ctx       The ECDH context to set up.
 * \param grp_id    The group id of the group to set up the context for.
 *
 * \return          \c 0 on success.
 */
int mbedtls_everest_setup( mbedtls_ecdh_context_everest *ctx, int grp_id );

/**
 * \brief           This function frees a context.
 *
 * \param ctx       The context to free.
 */
void mbedtls_everest_free( mbedtls_ecdh_context_everest *ctx );

/**
 * \brief           This function generates a public key and a TLS
 *                  ServerKeyExchange payload.
 *
 *                  This is the second function used by a TLS server for ECDHE
 *                  ciphersuites. (It is called after mbedtls_ecdh_setup().)
 *
 * \note            This function assumes that the ECP group (grp) of the
 *                  \p ctx context has already been properly set,
 *                  for example, using mbedtls_ecp_group_load().
 *
 * \see             ecp.h
 *
 * \param ctx       The ECDH context.
 * \param olen      The number of Bytes written.
 * \param buf       The destination buffer.
 * \param blen      The length of the destination buffer.
 * \param f_rng     The RNG function.
 * \param p_rng     The RNG context.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_everest_make_params( mbedtls_ecdh_context_everest *ctx, size_t *olen,
                                 unsigned char *buf, size_t blen,
                                 int( *f_rng )( void *, unsigned char *, size_t ),
                                 void *p_rng );

/**
 * \brief           This function parses and processes a TLS ServerKeyExchange
 *                  payload.
 *
 *                  This is the first function used by a TLS client for ECDHE
 *                  ciphersuites.
 *
 * \see             ecp.h
 *
 * \param ctx       The ECDH context.
 * \param buf       The pointer to the start of the input buffer.
 * \param end       The address for one Byte past the end of the buffer.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 *
 */
int mbedtls_everest_read_params( mbedtls_ecdh_context_everest *ctx,
                                 const unsigned char **buf,
                                 const unsigned char *end );

/**
 * \brief           This function sets up an ECDH context from an EC key.
 *
 *                  It is used by clients and servers in place of the
 *                  ServerKeyExchange for static ECDH, and imports ECDH
 *                  parameters from the EC key information of a certificate.
 *
 * \see             ecp.h
 *
 * \param ctx       The ECDH context to set up.
 * \param key       The EC key to use.
 * \param side      Defines the source of the key: \c MBEDTLS_EVEREST_ECDH_OURS:
 *                  Our key, or \c MBEDTLS_EVEREST_ECDH_THEIRS: The key of the
 *                  peer.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 *
 */
int mbedtls_everest_get_params( mbedtls_ecdh_context_everest *ctx,
                                const mbedtls_ecp_keypair *key,
                                mbedtls_everest_ecdh_side side );

/**
 * \brief           This function generates a public key and a TLS
 *                  ClientKeyExchange payload.
 *
 *                  This is the second function used by a TLS client for ECDH(E)
 *                  ciphersuites.
 *
 * \see             ecp.h
 *
 * \param ctx       The ECDH context.
 * \param olen      The number of Bytes written.
 * \param buf       The destination buffer.
 * \param blen      The size of the destination buffer.
 * \param f_rng     The RNG function.
 * \param p_rng     The RNG context.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_everest_make_public( mbedtls_ecdh_context_everest *ctx, size_t *olen,
                                 unsigned char *buf, size_t blen,
                                 int( *f_rng )( void *, unsigned char *, size_t ),
                                 void *p_rng );

/**
 * \brief           This function parses and processes a TLS ClientKeyExchange
 *                  payload.
 *
 *                  This is the third function used by a TLS server for ECDH(E)
 *                  ciphersuites. (It is called after mbedtls_ecdh_setup() and
 *                  mbedtls_ecdh_make_params().)
 *
 * \see             ecp.h
 *
 * \param ctx       The ECDH context.
 * \param buf       The start of the input buffer.
 * \param blen      The length of the input buffer.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_everest_read_public( mbedtls_ecdh_context_everest *ctx,
                                 const unsigned char *buf, size_t blen );

/**
 * \brief           This function derives and exports the shared secret.
 *
 *                  This is the last function used by both TLS clients
 *                  and servers.
 *
 * \note            If \p f_rng is not NULL, it is used to implement
 *                  countermeasures against side-channel attacks.
 *                  For more information, see mbedtls_ecp_mul().
 *
 * \see             ecp.h
 *
 * \param ctx       The ECDH context.
 * \param olen      The number of Bytes written.
 * \param buf       The destination buffer.
 * \param blen      The length of the destination buffer.
 * \param f_rng     The RNG function.
 * \param p_rng     The RNG context.
 *
 * \return          \c 0 on success.
 * \return          An \c MBEDTLS_ERR_ECP_XXX error code on failure.
 */
int mbedtls_everest_calc_secret( mbedtls_ecdh_context_everest *ctx, size_t *olen,
                                 unsigned char *buf, size_t blen,
                                 int( *f_rng )( void *, unsigned char *, size_t ),
                                 void *p_rng );

#ifdef __cplusplus
}
#endif

#endif /* MBEDTLS_EVEREST_H */
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlib/FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file was generated by KreMLin <https://github.com/FStarLang/kremlin> * KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir dist/minimal -skip-compilation -extract-uints -add-include <inttypes.h> -add-include <stdbool.h> -add-include "kremlin/internal/compat.h" -add-include "kremlin/internal/types.h" -bundle FStar.UInt64+FStar.UInt32+FStar.UInt16+FStar.UInt8=* extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml * F* version: 059db0c8 * KreMLin version: 916c37ac */ #ifndef __FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8_H #define __FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8_H #include <inttypes.h> #include <stdbool.h> #include "kremlin/internal/compat.h" #include "kremlin/internal/types.h" extern Prims_int FStar_UInt64_n; extern Prims_int FStar_UInt64_v(uint64_t x0); extern uint64_t FStar_UInt64_uint_to_t(Prims_int x0); extern uint64_t FStar_UInt64_add(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_add_underspec(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_add_mod(uint64_t x0, uint64_t x1); extern uint64_t 
FStar_UInt64_sub(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_sub_underspec(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_sub_mod(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_mul(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_mul_underspec(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_mul_mod(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_mul_div(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_div(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_rem(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_logand(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_logxor(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_logor(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_lognot(uint64_t x0); extern uint64_t FStar_UInt64_shift_right(uint64_t x0, uint32_t x1); extern uint64_t FStar_UInt64_shift_left(uint64_t x0, uint32_t x1); extern bool FStar_UInt64_eq(uint64_t x0, uint64_t x1); extern bool FStar_UInt64_gt(uint64_t x0, uint64_t x1); extern bool FStar_UInt64_gte(uint64_t x0, uint64_t x1); extern bool FStar_UInt64_lt(uint64_t x0, uint64_t x1); extern bool FStar_UInt64_lte(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_minus(uint64_t x0); extern uint32_t FStar_UInt64_n_minus_one; uint64_t FStar_UInt64_eq_mask(uint64_t a, uint64_t b); uint64_t FStar_UInt64_gte_mask(uint64_t a, uint64_t b); extern Prims_string FStar_UInt64_to_string(uint64_t x0); extern uint64_t FStar_UInt64_of_string(Prims_string x0); extern Prims_int FStar_UInt32_n; extern Prims_int FStar_UInt32_v(uint32_t x0); extern uint32_t FStar_UInt32_uint_to_t(Prims_int x0); extern uint32_t FStar_UInt32_add(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_add_underspec(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_add_mod(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_sub(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_sub_underspec(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_sub_mod(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_mul(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_mul_underspec(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_mul_mod(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_mul_div(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_div(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_rem(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_logand(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_logxor(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_logor(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_lognot(uint32_t x0); extern uint32_t FStar_UInt32_shift_right(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_shift_left(uint32_t x0, uint32_t x1); extern bool FStar_UInt32_eq(uint32_t x0, uint32_t x1); extern bool FStar_UInt32_gt(uint32_t x0, uint32_t x1); extern bool FStar_UInt32_gte(uint32_t x0, uint32_t x1); extern bool FStar_UInt32_lt(uint32_t x0, uint32_t x1); extern bool FStar_UInt32_lte(uint32_t x0, uint32_t x1); extern uint32_t FStar_UInt32_minus(uint32_t x0); extern uint32_t FStar_UInt32_n_minus_one; uint32_t FStar_UInt32_eq_mask(uint32_t a, uint32_t b); uint32_t FStar_UInt32_gte_mask(uint32_t a, uint32_t b); extern Prims_string FStar_UInt32_to_string(uint32_t x0); extern uint32_t FStar_UInt32_of_string(Prims_string x0); extern Prims_int FStar_UInt16_n; extern Prims_int FStar_UInt16_v(uint16_t x0); extern uint16_t FStar_UInt16_uint_to_t(Prims_int x0); extern uint16_t 
FStar_UInt16_add(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_add_underspec(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_add_mod(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_sub(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_sub_underspec(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_sub_mod(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_mul(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_mul_underspec(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_mul_mod(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_mul_div(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_div(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_rem(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_logand(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_logxor(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_logor(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_lognot(uint16_t x0); extern uint16_t FStar_UInt16_shift_right(uint16_t x0, uint32_t x1); extern uint16_t FStar_UInt16_shift_left(uint16_t x0, uint32_t x1); extern bool FStar_UInt16_eq(uint16_t x0, uint16_t x1); extern bool FStar_UInt16_gt(uint16_t x0, uint16_t x1); extern bool FStar_UInt16_gte(uint16_t x0, uint16_t x1); extern bool FStar_UInt16_lt(uint16_t x0, uint16_t x1); extern bool FStar_UInt16_lte(uint16_t x0, uint16_t x1); extern uint16_t FStar_UInt16_minus(uint16_t x0); extern uint32_t FStar_UInt16_n_minus_one; uint16_t FStar_UInt16_eq_mask(uint16_t a, uint16_t b); uint16_t FStar_UInt16_gte_mask(uint16_t a, uint16_t b); extern Prims_string FStar_UInt16_to_string(uint16_t x0); extern uint16_t FStar_UInt16_of_string(Prims_string x0); extern Prims_int FStar_UInt8_n; extern Prims_int FStar_UInt8_v(uint8_t x0); extern uint8_t FStar_UInt8_uint_to_t(Prims_int x0); extern uint8_t FStar_UInt8_add(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_add_underspec(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_add_mod(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_sub(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_sub_underspec(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_sub_mod(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_mul(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_mul_underspec(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_mul_mod(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_mul_div(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_div(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_rem(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_logand(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_logxor(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_logor(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_lognot(uint8_t x0); extern uint8_t FStar_UInt8_shift_right(uint8_t x0, uint32_t x1); extern uint8_t FStar_UInt8_shift_left(uint8_t x0, uint32_t x1); extern bool FStar_UInt8_eq(uint8_t x0, uint8_t x1); extern bool FStar_UInt8_gt(uint8_t x0, uint8_t x1); extern bool FStar_UInt8_gte(uint8_t x0, uint8_t x1); extern bool FStar_UInt8_lt(uint8_t x0, uint8_t x1); extern bool FStar_UInt8_lte(uint8_t x0, uint8_t x1); extern uint8_t FStar_UInt8_minus(uint8_t x0); extern uint32_t FStar_UInt8_n_minus_one; uint8_t FStar_UInt8_eq_mask(uint8_t a, uint8_t b); uint8_t FStar_UInt8_gte_mask(uint8_t a, uint8_t b); extern Prims_string FStar_UInt8_to_string(uint8_t x0); extern uint8_t FStar_UInt8_of_string(Prims_string x0); typedef uint8_t FStar_UInt8_byte; #define 
__FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8_H_DEFINED #endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlib/FStar_UInt128.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file was generated by KreMLin <https://github.com/FStarLang/kremlin> * KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir dist/uint128 -skip-compilation -extract-uints -add-include <inttypes.h> -add-include <stdbool.h> -add-include "kremlin/internal/types.h" -bundle FStar.UInt128=* extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml * F* version: 059db0c8 * KreMLin version: 916c37ac */ #ifndef __FStar_UInt128_H #define __FStar_UInt128_H #include <inttypes.h> #include <stdbool.h> #include "kremlin/internal/types.h" uint64_t FStar_UInt128___proj__Mkuint128__item__low(FStar_UInt128_uint128 projectee); uint64_t FStar_UInt128___proj__Mkuint128__item__high(FStar_UInt128_uint128 projectee); typedef FStar_UInt128_uint128 FStar_UInt128_t; FStar_UInt128_uint128 FStar_UInt128_add(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_add_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_add_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 
FStar_UInt128_sub(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_sub_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_sub_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_logand(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_logxor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_logor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_lognot(FStar_UInt128_uint128 a); FStar_UInt128_uint128 FStar_UInt128_shift_left(FStar_UInt128_uint128 a, uint32_t s); FStar_UInt128_uint128 FStar_UInt128_shift_right(FStar_UInt128_uint128 a, uint32_t s); bool FStar_UInt128_eq(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); bool FStar_UInt128_gt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); bool FStar_UInt128_lt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); bool FStar_UInt128_gte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); bool FStar_UInt128_lte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_eq_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_gte_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b); FStar_UInt128_uint128 FStar_UInt128_uint64_to_uint128(uint64_t a); uint64_t FStar_UInt128_uint128_to_uint64(FStar_UInt128_uint128 a); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Plus_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Plus_Question_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Plus_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Subtraction_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Subtraction_Question_Hat)( FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1 ); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Subtraction_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Amp_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Hat_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Bar_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Less_Less_Hat)(FStar_UInt128_uint128 x0, uint32_t x1); extern FStar_UInt128_uint128 (*FStar_UInt128_op_Greater_Greater_Hat)(FStar_UInt128_uint128 x0, uint32_t x1); extern bool (*FStar_UInt128_op_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern bool (*FStar_UInt128_op_Greater_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern bool (*FStar_UInt128_op_Less_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern bool (*FStar_UInt128_op_Greater_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); extern bool (*FStar_UInt128_op_Less_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1); FStar_UInt128_uint128 FStar_UInt128_mul32(uint64_t x, uint32_t y); FStar_UInt128_uint128 FStar_UInt128_mul_wide(uint64_t x, uint64_t y); #define __FStar_UInt128_H_DEFINED #endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/vs2010/inttypes.h
/*
 *  Custom inttypes.h for VS2010. KreMLin requires these definitions,
 *  but VS2010 doesn't provide them.
 *
 *  Copyright 2016-2018 INRIA and Microsoft Corporation
 *  SPDX-License-Identifier: Apache-2.0
 *
 *  Licensed under the Apache License, Version 2.0 (the "License"); you may
 *  not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 *  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 *
 *  This file is part of mbed TLS (https://tls.mbed.org)
 */

#ifndef _INTTYPES_H_VS2010
#define _INTTYPES_H_VS2010

#include <stdint.h>

#ifdef _MSC_VER
#define inline __inline
#endif

/* VS2010: 64-bit unsigned integers are printed with the I64 length modifier. */
#define PRIu64 "I64u"

#endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/vs2010/stdbool.h
/*
 *  Custom stdbool.h for VS2010. KreMLin requires these definitions,
 *  but VS2010 doesn't provide them.
 *
 *  Copyright 2016-2018 INRIA and Microsoft Corporation
 *  SPDX-License-Identifier: Apache-2.0
 *
 *  Licensed under the Apache License, Version 2.0 (the "License"); you may
 *  not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 *  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 *
 *  This file is part of mbed TLS (https://tls.mbed.org)
 */

#ifndef _STDBOOL_H_VS2010
#define _STDBOOL_H_VS2010

typedef int bool;

static bool true = 1;
static bool false = 0;

#endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/vs2010/Hacl_Curve25519.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file was generated by KreMLin <https://github.com/FStarLang/kremlin> * KreMLin invocation: /mnt/e/everest/verify/kremlin/krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -I /mnt/e/everest/verify/hacl-star/code/lib/kremlin -I /mnt/e/everest/verify/kremlin/kremlib/compat -I /mnt/e/everest/verify/hacl-star/specs -I /mnt/e/everest/verify/hacl-star/specs/old -I . -ccopt -march=native -verbose -ldopt -flto -tmpdir x25519-c -I ../bignum -bundle Hacl.Curve25519=* -minimal -add-include "kremlib.h" -skip-compilation x25519-c/out.krml -o x25519-c/Hacl_Curve25519.c * F* version: 059db0c8 * KreMLin version: 916c37ac */ #ifndef __Hacl_Curve25519_H #define __Hacl_Curve25519_H #include "kremlib.h" void Hacl_Curve25519_crypto_scalarmult(uint8_t *mypublic, uint8_t *secret, uint8_t *basepoint); #define __Hacl_Curve25519_H_DEFINED #endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/c_endianness.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ #ifndef __KREMLIN_ENDIAN_H #define __KREMLIN_ENDIAN_H #include <string.h> #include <inttypes.h> /******************************************************************************/ /* Implementing C.fst (part 2: endian-ness macros) */ /******************************************************************************/ /* ... for Linux */ #if defined(__linux__) || defined(__CYGWIN__) # include <endian.h> /* ... for OSX */ #elif defined(__APPLE__) # include <libkern/OSByteOrder.h> # define htole64(x) OSSwapHostToLittleInt64(x) # define le64toh(x) OSSwapLittleToHostInt64(x) # define htobe64(x) OSSwapHostToBigInt64(x) # define be64toh(x) OSSwapBigToHostInt64(x) # define htole16(x) OSSwapHostToLittleInt16(x) # define le16toh(x) OSSwapLittleToHostInt16(x) # define htobe16(x) OSSwapHostToBigInt16(x) # define be16toh(x) OSSwapBigToHostInt16(x) # define htole32(x) OSSwapHostToLittleInt32(x) # define le32toh(x) OSSwapLittleToHostInt32(x) # define htobe32(x) OSSwapHostToBigInt32(x) # define be32toh(x) OSSwapBigToHostInt32(x) /* ... for Solaris */ #elif defined(__sun__) # include <sys/byteorder.h> # define htole64(x) LE_64(x) # define le64toh(x) LE_64(x) # define htobe64(x) BE_64(x) # define be64toh(x) BE_64(x) # define htole16(x) LE_16(x) # define le16toh(x) LE_16(x) # define htobe16(x) BE_16(x) # define be16toh(x) BE_16(x) # define htole32(x) LE_32(x) # define le32toh(x) LE_32(x) # define htobe32(x) BE_32(x) # define be32toh(x) BE_32(x) /* ... for the BSDs */ #elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__DragonFly__) # include <sys/endian.h> #elif defined(__OpenBSD__) # include <endian.h> /* ... for Windows (MSVC)... not targeting XBOX 360! */ #elif defined(_MSC_VER) # include <stdlib.h> # define htobe16(x) _byteswap_ushort(x) # define htole16(x) (x) # define be16toh(x) _byteswap_ushort(x) # define le16toh(x) (x) # define htobe32(x) _byteswap_ulong(x) # define htole32(x) (x) # define be32toh(x) _byteswap_ulong(x) # define le32toh(x) (x) # define htobe64(x) _byteswap_uint64(x) # define htole64(x) (x) # define be64toh(x) _byteswap_uint64(x) # define le64toh(x) (x) /* ... for Windows (GCC-like, e.g. mingw or clang) */ #elif (defined(_WIN32) || defined(_WIN64)) && \ (defined(__GNUC__) || defined(__clang__)) # define htobe16(x) __builtin_bswap16(x) # define htole16(x) (x) # define be16toh(x) __builtin_bswap16(x) # define le16toh(x) (x) # define htobe32(x) __builtin_bswap32(x) # define htole32(x) (x) # define be32toh(x) __builtin_bswap32(x) # define le32toh(x) (x) # define htobe64(x) __builtin_bswap64(x) # define htole64(x) (x) # define be64toh(x) __builtin_bswap64(x) # define le64toh(x) (x) /* ... 
generic big-endian fallback code */ #elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ /* byte swapping code inspired by: * https://github.com/rweather/arduinolibs/blob/master/libraries/Crypto/utility/EndianUtil.h * */ # define htobe32(x) (x) # define be32toh(x) (x) # define htole32(x) \ (__extension__({ \ uint32_t _temp = (x); \ ((_temp >> 24) & 0x000000FF) | ((_temp >> 8) & 0x0000FF00) | \ ((_temp << 8) & 0x00FF0000) | ((_temp << 24) & 0xFF000000); \ })) # define le32toh(x) (htole32((x))) # define htobe64(x) (x) # define be64toh(x) (x) # define htole64(x) \ (__extension__({ \ uint64_t __temp = (x); \ uint32_t __low = htobe32((uint32_t)__temp); \ uint32_t __high = htobe32((uint32_t)(__temp >> 32)); \ (((uint64_t)__low) << 32) | __high; \ })) # define le64toh(x) (htole64((x))) /* ... generic little-endian fallback code */ #elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ # define htole32(x) (x) # define le32toh(x) (x) # define htobe32(x) \ (__extension__({ \ uint32_t _temp = (x); \ ((_temp >> 24) & 0x000000FF) | ((_temp >> 8) & 0x0000FF00) | \ ((_temp << 8) & 0x00FF0000) | ((_temp << 24) & 0xFF000000); \ })) # define be32toh(x) (htobe32((x))) # define htole64(x) (x) # define le64toh(x) (x) # define htobe64(x) \ (__extension__({ \ uint64_t __temp = (x); \ uint32_t __low = htobe32((uint32_t)__temp); \ uint32_t __high = htobe32((uint32_t)(__temp >> 32)); \ (((uint64_t)__low) << 32) | __high; \ })) # define be64toh(x) (htobe64((x))) /* ... couldn't determine endian-ness of the target platform */ #else # error "Please define __BYTE_ORDER__!" #endif /* defined(__linux__) || ... */ /* Loads and stores. These avoid undefined behavior due to unaligned memory * accesses, via memcpy. */ inline static uint16_t load16(uint8_t *b) { uint16_t x; memcpy(&x, b, 2); return x; } inline static uint32_t load32(uint8_t *b) { uint32_t x; memcpy(&x, b, 4); return x; } inline static uint64_t load64(uint8_t *b) { uint64_t x; memcpy(&x, b, 8); return x; } inline static void store16(uint8_t *b, uint16_t i) { memcpy(b, &i, 2); } inline static void store32(uint8_t *b, uint32_t i) { memcpy(b, &i, 4); } inline static void store64(uint8_t *b, uint64_t i) { memcpy(b, &i, 8); } #define load16_le(b) (le16toh(load16(b))) #define store16_le(b, i) (store16(b, htole16(i))) #define load16_be(b) (be16toh(load16(b))) #define store16_be(b, i) (store16(b, htobe16(i))) #define load32_le(b) (le32toh(load32(b))) #define store32_le(b, i) (store32(b, htole32(i))) #define load32_be(b) (be32toh(load32(b))) #define store32_be(b, i) (store32(b, htobe32(i))) #define load64_le(b) (le64toh(load64(b))) #define store64_le(b, i) (store64(b, htole64(i))) #define load64_be(b) (be64toh(load64(b))) #define store64_be(b, i) (store64(b, htobe64(i))) #endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/internal/compat.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ #ifndef KRML_COMPAT_H #define KRML_COMPAT_H #include <inttypes.h> /* A series of macros that define C implementations of types that are not Low*, * to facilitate porting programs to Low*. */ typedef const char *Prims_string; typedef struct { uint32_t length; const char *data; } FStar_Bytes_bytes; typedef int32_t Prims_pos, Prims_nat, Prims_nonzero, Prims_int, krml_checked_int_t; #define RETURN_OR(x) \ do { \ int64_t __ret = x; \ if (__ret < INT32_MIN || INT32_MAX < __ret) { \ KRML_HOST_PRINTF( \ "Prims.{int,nat,pos} integer overflow at %s:%d\n", __FILE__, \ __LINE__); \ KRML_HOST_EXIT(252); \ } \ return (int32_t)__ret; \ } while (0) #endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/internal/types.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
   Licensed under the Apache 2.0 License. */

#ifndef KRML_TYPES_H
#define KRML_TYPES_H

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

/* Types which are either abstract, meaning that they have to be implemented in
 * C, or which are models, meaning that they are swapped out at compile-time for
 * hand-written C types (in which case they're marked as noextract). */

typedef uint64_t FStar_UInt64_t, FStar_UInt64_t_;
typedef int64_t FStar_Int64_t, FStar_Int64_t_;
typedef uint32_t FStar_UInt32_t, FStar_UInt32_t_;
typedef int32_t FStar_Int32_t, FStar_Int32_t_;
typedef uint16_t FStar_UInt16_t, FStar_UInt16_t_;
typedef int16_t FStar_Int16_t, FStar_Int16_t_;
typedef uint8_t FStar_UInt8_t, FStar_UInt8_t_;
typedef int8_t FStar_Int8_t, FStar_Int8_t_;

/* Only useful when building Kremlib, because it's in the dependency graph of
 * FStar.Int.Cast. */
typedef uint64_t FStar_UInt63_t, FStar_UInt63_t_;
typedef int64_t FStar_Int63_t, FStar_Int63_t_;

typedef double FStar_Float_float;
typedef uint32_t FStar_Char_char;
typedef FILE *FStar_IO_fd_read, *FStar_IO_fd_write;

typedef void *FStar_Dyn_dyn;

typedef const char *C_String_t, *C_String_t_;

typedef int exit_code;
typedef FILE *channel;

typedef unsigned long long TestLib_cycles;

typedef uint64_t FStar_Date_dateTime, FStar_Date_timeSpan;

/* The uint128 type is a special case since we offer several implementations of
 * it, depending on the compiler and whether the user wants the verified
 * implementation or not. */
#if !defined(KRML_VERIFIED_UINT128) && defined(_MSC_VER) && defined(_M_X64)
#  include <emmintrin.h>
typedef __m128i FStar_UInt128_uint128;
#elif !defined(KRML_VERIFIED_UINT128) && !defined(_MSC_VER)
typedef unsigned __int128 FStar_UInt128_uint128;
#else
typedef struct FStar_UInt128_uint128_s
{
    uint64_t low;
    uint64_t high;
} FStar_UInt128_uint128;
#endif

typedef FStar_UInt128_uint128 FStar_UInt128_t, FStar_UInt128_t_, uint128_t;

#endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/internal/wasmsupport.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file is automatically included when compiling with -wasm -d force-c */ #define WasmSupport_check_buffer_size(X)
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/internal/debug.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ #ifndef __KREMLIN_DEBUG_H #define __KREMLIN_DEBUG_H #include <inttypes.h> #include "kremlin/internal/target.h" /******************************************************************************/ /* Debugging helpers - intended only for KreMLin developers */ /******************************************************************************/ /* In support of "-wasm -d force-c": we might need this function to be * forward-declared, because the dependency on WasmSupport appears very late, * after SimplifyWasm, and sadly, after the topological order has been done. */ void WasmSupport_check_buffer_size(uint32_t s); /* A series of GCC atrocities to trace function calls (kremlin's [-d c-calls] * option). Useful when trying to debug, say, Wasm, to compare traces. */ /* clang-format off */ #ifdef __GNUC__ #define KRML_FORMAT(X) _Generic((X), \ uint8_t : "0x%08" PRIx8, \ uint16_t: "0x%08" PRIx16, \ uint32_t: "0x%08" PRIx32, \ uint64_t: "0x%08" PRIx64, \ int8_t : "0x%08" PRIx8, \ int16_t : "0x%08" PRIx16, \ int32_t : "0x%08" PRIx32, \ int64_t : "0x%08" PRIx64, \ default : "%s") #define KRML_FORMAT_ARG(X) _Generic((X), \ uint8_t : X, \ uint16_t: X, \ uint32_t: X, \ uint64_t: X, \ int8_t : X, \ int16_t : X, \ int32_t : X, \ int64_t : X, \ default : "unknown") /* clang-format on */ # define KRML_DEBUG_RETURN(X) \ ({ \ __auto_type _ret = (X); \ KRML_HOST_PRINTF("returning: "); \ KRML_HOST_PRINTF(KRML_FORMAT(_ret), KRML_FORMAT_ARG(_ret)); \ KRML_HOST_PRINTF(" \n"); \ _ret; \ }) #endif #endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/internal/callconv.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
   Licensed under the Apache 2.0 License. */

#ifndef __KREMLIN_CALLCONV_H
#define __KREMLIN_CALLCONV_H

/******************************************************************************/
/* Some macros to ease compatibility                                          */
/******************************************************************************/

/* We want to generate __cdecl safely without worrying about it being undefined.
 * When using MSVC, these are always defined. When using MinGW, these are
 * defined too. They have no meaning for other platforms, so we define them to
 * be empty macros in other situations. */
#ifndef _MSC_VER
#ifndef __cdecl
#define __cdecl
#endif
#ifndef __stdcall
#define __stdcall
#endif
#ifndef __fastcall
#define __fastcall
#endif
#endif

/* Since KreMLin emits the inline keyword unconditionally, we follow the
 * guidelines at https://gcc.gnu.org/onlinedocs/gcc/Inline.html and make this
 * __inline__ to ensure the code compiles with -std=c90 and earlier. */
#ifdef __GNUC__
#  define inline __inline__
#endif

/* GCC-specific attribute syntax; everyone else gets the standard C inline
 * attribute. */
#ifdef __GNUC__
#  ifndef __clang__
#    define force_inline inline __attribute__((always_inline))
#  else
#    define force_inline inline
#  endif
#else
#  define force_inline inline
#endif

#endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/internal/target.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ #ifndef __KREMLIN_TARGET_H #define __KREMLIN_TARGET_H #include <stdlib.h> #include <stdio.h> #include <stdbool.h> #include <inttypes.h> #include <limits.h> #include "kremlin/internal/callconv.h" /******************************************************************************/ /* Macros that KreMLin will generate. */ /******************************************************************************/ /* For "bare" targets that do not have a C stdlib, the user might want to use * [-add-early-include '"mydefinitions.h"'] and override these. */ #ifndef KRML_HOST_PRINTF # define KRML_HOST_PRINTF printf #endif #if ( \ (defined __STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) && \ (!(defined KRML_HOST_EPRINTF))) # define KRML_HOST_EPRINTF(...) fprintf(stderr, __VA_ARGS__) #endif #ifndef KRML_HOST_EXIT # define KRML_HOST_EXIT exit #endif #ifndef KRML_HOST_MALLOC # define KRML_HOST_MALLOC malloc #endif #ifndef KRML_HOST_CALLOC # define KRML_HOST_CALLOC calloc #endif #ifndef KRML_HOST_FREE # define KRML_HOST_FREE free #endif #ifndef KRML_HOST_TIME # include <time.h> /* Prims_nat not yet in scope */ inline static int32_t krml_time() { return (int32_t)time(NULL); } # define KRML_HOST_TIME krml_time #endif /* In statement position, exiting is easy. */ #define KRML_EXIT \ do { \ KRML_HOST_PRINTF("Unimplemented function at %s:%d\n", __FILE__, __LINE__); \ KRML_HOST_EXIT(254); \ } while (0) /* In expression position, use the comma-operator and a malloc to return an * expression of the right size. KreMLin passes t as the parameter to the macro. */ #define KRML_EABORT(t, msg) \ (KRML_HOST_PRINTF("KreMLin abort at %s:%d\n%s\n", __FILE__, __LINE__, msg), \ KRML_HOST_EXIT(255), *((t *)KRML_HOST_MALLOC(sizeof(t)))) /* In FStar.Buffer.fst, the size of arrays is uint32_t, but it's a number of * *elements*. Do an ugly, run-time check (some of which KreMLin can eliminate). */ #ifdef __GNUC__ # define _KRML_CHECK_SIZE_PRAGMA \ _Pragma("GCC diagnostic ignored \"-Wtype-limits\"") #else # define _KRML_CHECK_SIZE_PRAGMA #endif #define KRML_CHECK_SIZE(size_elt, sz) \ do { \ _KRML_CHECK_SIZE_PRAGMA \ if (((size_t)(sz)) > ((size_t)(SIZE_MAX / (size_elt)))) { \ KRML_HOST_PRINTF( \ "Maximum allocatable size exceeded, aborting before overflow at " \ "%s:%d\n", \ __FILE__, __LINE__); \ KRML_HOST_EXIT(253); \ } \ } while (0) #if defined(_MSC_VER) && _MSC_VER < 1900 # define KRML_HOST_SNPRINTF(buf, sz, fmt, arg) _snprintf_s(buf, sz, _TRUNCATE, fmt, arg) #else # define KRML_HOST_SNPRINTF(buf, sz, fmt, arg) snprintf(buf, sz, fmt, arg) #endif #endif
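`KRML_CHECK_SIZE(size_elt, sz)` aborts before the multiplication `size_elt * sz` can overflow `size_t`, which is exactly the guard needed before allocating `sz` elements of `size_elt` bytes each. Written out without the macros, the same check looks like this (a minimal sketch; `checked_alloc` is a hypothetical helper, not part of the library):

```C
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Allocate `count` elements of `elt_size` bytes, refusing sizes whose
 * product would overflow size_t -- the check KRML_CHECK_SIZE performs
 * before allocations of KreMLin-generated buffers. */
static void *checked_alloc(size_t elt_size, size_t count)
{
    if (count > SIZE_MAX / elt_size) {
        fprintf(stderr, "allocation of %zu * %zu bytes would overflow\n",
                count, elt_size);
        exit(253); /* same exit code KRML_CHECK_SIZE uses */
    }
    return malloc(elt_size * count);
}

int main(void)
{
    uint64_t *limbs = checked_alloc(sizeof(uint64_t), 5);
    free(limbs);
    return 0;
}
```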
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/include/everest/kremlin/internal/builtin.h
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ #ifndef __KREMLIN_BUILTIN_H #define __KREMLIN_BUILTIN_H /* For alloca, when using KreMLin's -falloca */ #if (defined(_WIN32) || defined(_WIN64)) # include <malloc.h> #endif /* If some globals need to be initialized before the main, then kremlin will * generate and try to link last a function with this type: */ void kremlinit_globals(void); #endif
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library/everest.c
/* * Interface to code from Project Everest * * Copyright 2016-2018 INRIA and Microsoft Corporation * SPDX-License-Identifier: Apache-2.0 * * Licensed under the Apache License, Version 2.0 (the "License"); you may * not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * This file is part of Mbed TLS (https://tls.mbed.org). */ #include "common.h" #include <string.h> #include "mbedtls/ecdh.h" #include "everest/x25519.h" #include "everest/everest.h" #if defined(MBEDTLS_PLATFORM_C) #include "mbedtls/platform.h" #else #define mbedtls_calloc calloc #define mbedtls_free free #endif #if defined(MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED) int mbedtls_everest_setup( mbedtls_ecdh_context_everest *ctx, int grp_id ) { if( grp_id != MBEDTLS_ECP_DP_CURVE25519 ) return MBEDTLS_ERR_ECP_BAD_INPUT_DATA; mbedtls_x25519_init( &ctx->ctx ); return 0; } void mbedtls_everest_free( mbedtls_ecdh_context_everest *ctx ) { mbedtls_x25519_free( &ctx->ctx ); } int mbedtls_everest_make_params( mbedtls_ecdh_context_everest *ctx, size_t *olen, unsigned char *buf, size_t blen, int( *f_rng )( void *, unsigned char *, size_t ), void *p_rng ) { mbedtls_x25519_context *x25519_ctx = &ctx->ctx; return mbedtls_x25519_make_params( x25519_ctx, olen, buf, blen, f_rng, p_rng ); } int mbedtls_everest_read_params( mbedtls_ecdh_context_everest *ctx, const unsigned char **buf, const unsigned char *end ) { mbedtls_x25519_context *x25519_ctx = &ctx->ctx; return mbedtls_x25519_read_params( x25519_ctx, buf, end ); } int mbedtls_everest_get_params( mbedtls_ecdh_context_everest *ctx, const mbedtls_ecp_keypair *key, mbedtls_everest_ecdh_side side ) { mbedtls_x25519_context *x25519_ctx = &ctx->ctx; mbedtls_x25519_ecdh_side s = side == MBEDTLS_EVEREST_ECDH_OURS ? MBEDTLS_X25519_ECDH_OURS : MBEDTLS_X25519_ECDH_THEIRS; return mbedtls_x25519_get_params( x25519_ctx, key, s ); } int mbedtls_everest_make_public( mbedtls_ecdh_context_everest *ctx, size_t *olen, unsigned char *buf, size_t blen, int( *f_rng )( void *, unsigned char *, size_t ), void *p_rng ) { mbedtls_x25519_context *x25519_ctx = &ctx->ctx; return mbedtls_x25519_make_public( x25519_ctx, olen, buf, blen, f_rng, p_rng ); } int mbedtls_everest_read_public( mbedtls_ecdh_context_everest *ctx, const unsigned char *buf, size_t blen ) { mbedtls_x25519_context *x25519_ctx = &ctx->ctx; return mbedtls_x25519_read_public ( x25519_ctx, buf, blen ); } int mbedtls_everest_calc_secret( mbedtls_ecdh_context_everest *ctx, size_t *olen, unsigned char *buf, size_t blen, int( *f_rng )( void *, unsigned char *, size_t ), void *p_rng ) { mbedtls_x25519_context *x25519_ctx = &ctx->ctx; return mbedtls_x25519_calc_secret( x25519_ctx, olen, buf, blen, f_rng, p_rng ); } #endif /* MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED */
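These wrappers simply unwrap `mbedtls_ecdh_context_everest` and forward to the x25519 layer, so a complete exchange needs only setup, make_public, read_public and calc_secret. A sketch under the assumption that `MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED` is set; `demo_rng` and `demo_everest_ecdh` are illustrative names, and the fixed-pattern RNG must be replaced by a real DRBG (e.g. `mbedtls_ctr_drbg_random`) in any real use:

```C
#include <string.h>
#include "everest/everest.h"

/* Fixed-pattern RNG stub, for illustration only. */
static int demo_rng(void *p, unsigned char *out, size_t len)
{
    size_t i;
    (void)p;
    for (i = 0; i < len; i++)
        out[i] = (unsigned char)(0x42 + i);
    return 0;
}

/* Sketch of a full exchange: both sides derive the same 32-byte secret. */
int demo_everest_ecdh(void)
{
    mbedtls_ecdh_context_everest us, peer;
    unsigned char pub_us[33], pub_peer[33]; /* 1 length byte + 32-byte point */
    unsigned char sec_us[32], sec_peer[32];
    size_t olen;
    int ret = -1;

    mbedtls_everest_setup(&us, MBEDTLS_ECP_DP_CURVE25519);
    mbedtls_everest_setup(&peer, MBEDTLS_ECP_DP_CURVE25519);

    if (mbedtls_everest_make_public(&us, &olen, pub_us, sizeof pub_us,
                                    demo_rng, NULL) != 0 ||
        mbedtls_everest_make_public(&peer, &olen, pub_peer, sizeof pub_peer,
                                    demo_rng, NULL) != 0)
        goto cleanup;

    /* Feed each side the other's serialized public key. */
    if (mbedtls_everest_read_public(&us, pub_peer, sizeof pub_peer) != 0 ||
        mbedtls_everest_read_public(&peer, pub_us, sizeof pub_us) != 0)
        goto cleanup;

    /* f_rng is unused by the constant-time x25519 backend, so NULL is fine. */
    if (mbedtls_everest_calc_secret(&us, &olen, sec_us, sizeof sec_us,
                                    NULL, NULL) != 0 ||
        mbedtls_everest_calc_secret(&peer, &olen, sec_peer, sizeof sec_peer,
                                    NULL, NULL) != 0)
        goto cleanup;

    ret = memcmp(sec_us, sec_peer, sizeof sec_us) == 0 ? 0 : -1;

cleanup:
    mbedtls_everest_free(&us);
    mbedtls_everest_free(&peer);
    return ret;
}
```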
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library/Hacl_Curve25519.c
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file was generated by KreMLin <https://github.com/FStarLang/kremlin> * KreMLin invocation: /mnt/e/everest/verify/kremlin/krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -fbuiltin-uint128 -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -I /mnt/e/everest/verify/hacl-star/code/lib/kremlin -I /mnt/e/everest/verify/kremlin/kremlib/compat -I /mnt/e/everest/verify/hacl-star/specs -I /mnt/e/everest/verify/hacl-star/specs/old -I . -ccopt -march=native -verbose -ldopt -flto -tmpdir x25519-c -I ../bignum -bundle Hacl.Curve25519=* -minimal -add-include "kremlib.h" -skip-compilation x25519-c/out.krml -o x25519-c/Hacl_Curve25519.c * F* version: 059db0c8 * KreMLin version: 916c37ac */ #include "Hacl_Curve25519.h" extern uint64_t FStar_UInt64_eq_mask(uint64_t x0, uint64_t x1); extern uint64_t FStar_UInt64_gte_mask(uint64_t x0, uint64_t x1); extern uint128_t FStar_UInt128_add(uint128_t x0, uint128_t x1); extern uint128_t FStar_UInt128_add_mod(uint128_t x0, uint128_t x1); extern uint128_t FStar_UInt128_logand(uint128_t x0, uint128_t x1); extern uint128_t FStar_UInt128_shift_right(uint128_t x0, uint32_t x1); extern uint128_t FStar_UInt128_uint64_to_uint128(uint64_t x0); extern uint64_t FStar_UInt128_uint128_to_uint64(uint128_t x0); extern uint128_t FStar_UInt128_mul_wide(uint64_t x0, uint64_t x1); static void Hacl_Bignum_Modulo_carry_top(uint64_t *b) { uint64_t b4 = b[4U]; uint64_t b0 = b[0U]; uint64_t b4_ = b4 & (uint64_t)0x7ffffffffffffU; uint64_t b0_ = b0 + (uint64_t)19U * (b4 >> (uint32_t)51U); b[4U] = b4_; b[0U] = b0_; } inline static void Hacl_Bignum_Fproduct_copy_from_wide_(uint64_t *output, uint128_t *input) { uint32_t i; for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U) { uint128_t xi = input[i]; output[i] = (uint64_t)xi; } } inline static void Hacl_Bignum_Fproduct_sum_scalar_multiplication_(uint128_t *output, uint64_t *input, uint64_t s) { uint32_t i; for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U) { uint128_t xi = output[i]; uint64_t yi = input[i]; output[i] = xi + (uint128_t)yi * s; } } inline static void Hacl_Bignum_Fproduct_carry_wide_(uint128_t *tmp) { uint32_t i; for (i = (uint32_t)0U; i < (uint32_t)4U; i = i + (uint32_t)1U) { uint32_t ctr = i; uint128_t tctr = tmp[ctr]; uint128_t tctrp1 = tmp[ctr + (uint32_t)1U]; uint64_t r0 = (uint64_t)tctr & (uint64_t)0x7ffffffffffffU; uint128_t c = tctr >> (uint32_t)51U; tmp[ctr] = (uint128_t)r0; tmp[ctr + (uint32_t)1U] = tctrp1 + c; } } inline static void Hacl_Bignum_Fmul_shift_reduce(uint64_t *output) { uint64_t tmp = output[4U]; uint64_t b0; { uint32_t i; for (i = (uint32_t)0U; i < (uint32_t)4U; i = i + (uint32_t)1U) { uint32_t ctr = (uint32_t)5U - i - (uint32_t)1U; uint64_t z = output[ctr - (uint32_t)1U]; output[ctr] = z; } } output[0U] = tmp; b0 = output[0U]; output[0U] = (uint64_t)19U * b0; } static void Hacl_Bignum_Fmul_mul_shift_reduce_(uint128_t *output, uint64_t *input, uint64_t *input2) { uint32_t i; uint64_t input2i; { uint32_t i0; for (i0 = (uint32_t)0U; i0 < (uint32_t)4U; i0 = i0 + (uint32_t)1U) { uint64_t input2i0 = input2[i0]; Hacl_Bignum_Fproduct_sum_scalar_multiplication_(output, input, input2i0); Hacl_Bignum_Fmul_shift_reduce(input); } } i = (uint32_t)4U; input2i = input2[i]; Hacl_Bignum_Fproduct_sum_scalar_multiplication_(output, input, input2i); } inline static void Hacl_Bignum_Fmul_fmul(uint64_t *output, uint64_t *input, 
uint64_t *input2) { uint64_t tmp[5U] = { 0U }; memcpy(tmp, input, (uint32_t)5U * sizeof input[0U]); KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U); { uint128_t t[5U]; { uint32_t _i; for (_i = 0U; _i < (uint32_t)5U; ++_i) t[_i] = (uint128_t)(uint64_t)0U; } { uint128_t b4; uint128_t b0; uint128_t b4_; uint128_t b0_; uint64_t i0; uint64_t i1; uint64_t i0_; uint64_t i1_; Hacl_Bignum_Fmul_mul_shift_reduce_(t, tmp, input2); Hacl_Bignum_Fproduct_carry_wide_(t); b4 = t[4U]; b0 = t[0U]; b4_ = b4 & (uint128_t)(uint64_t)0x7ffffffffffffU; b0_ = b0 + (uint128_t)(uint64_t)19U * (uint64_t)(b4 >> (uint32_t)51U); t[4U] = b4_; t[0U] = b0_; Hacl_Bignum_Fproduct_copy_from_wide_(output, t); i0 = output[0U]; i1 = output[1U]; i0_ = i0 & (uint64_t)0x7ffffffffffffU; i1_ = i1 + (i0 >> (uint32_t)51U); output[0U] = i0_; output[1U] = i1_; } } } inline static void Hacl_Bignum_Fsquare_fsquare__(uint128_t *tmp, uint64_t *output) { uint64_t r0 = output[0U]; uint64_t r1 = output[1U]; uint64_t r2 = output[2U]; uint64_t r3 = output[3U]; uint64_t r4 = output[4U]; uint64_t d0 = r0 * (uint64_t)2U; uint64_t d1 = r1 * (uint64_t)2U; uint64_t d2 = r2 * (uint64_t)2U * (uint64_t)19U; uint64_t d419 = r4 * (uint64_t)19U; uint64_t d4 = d419 * (uint64_t)2U; uint128_t s0 = (uint128_t)r0 * r0 + (uint128_t)d4 * r1 + (uint128_t)d2 * r3; uint128_t s1 = (uint128_t)d0 * r1 + (uint128_t)d4 * r2 + (uint128_t)(r3 * (uint64_t)19U) * r3; uint128_t s2 = (uint128_t)d0 * r2 + (uint128_t)r1 * r1 + (uint128_t)d4 * r3; uint128_t s3 = (uint128_t)d0 * r3 + (uint128_t)d1 * r2 + (uint128_t)r4 * d419; uint128_t s4 = (uint128_t)d0 * r4 + (uint128_t)d1 * r3 + (uint128_t)r2 * r2; tmp[0U] = s0; tmp[1U] = s1; tmp[2U] = s2; tmp[3U] = s3; tmp[4U] = s4; } inline static void Hacl_Bignum_Fsquare_fsquare_(uint128_t *tmp, uint64_t *output) { uint128_t b4; uint128_t b0; uint128_t b4_; uint128_t b0_; uint64_t i0; uint64_t i1; uint64_t i0_; uint64_t i1_; Hacl_Bignum_Fsquare_fsquare__(tmp, output); Hacl_Bignum_Fproduct_carry_wide_(tmp); b4 = tmp[4U]; b0 = tmp[0U]; b4_ = b4 & (uint128_t)(uint64_t)0x7ffffffffffffU; b0_ = b0 + (uint128_t)(uint64_t)19U * (uint64_t)(b4 >> (uint32_t)51U); tmp[4U] = b4_; tmp[0U] = b0_; Hacl_Bignum_Fproduct_copy_from_wide_(output, tmp); i0 = output[0U]; i1 = output[1U]; i0_ = i0 & (uint64_t)0x7ffffffffffffU; i1_ = i1 + (i0 >> (uint32_t)51U); output[0U] = i0_; output[1U] = i1_; } static void Hacl_Bignum_Fsquare_fsquare_times_(uint64_t *input, uint128_t *tmp, uint32_t count1) { uint32_t i; Hacl_Bignum_Fsquare_fsquare_(tmp, input); for (i = (uint32_t)1U; i < count1; i = i + (uint32_t)1U) Hacl_Bignum_Fsquare_fsquare_(tmp, input); } inline static void Hacl_Bignum_Fsquare_fsquare_times(uint64_t *output, uint64_t *input, uint32_t count1) { KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U); { uint128_t t[5U]; { uint32_t _i; for (_i = 0U; _i < (uint32_t)5U; ++_i) t[_i] = (uint128_t)(uint64_t)0U; } memcpy(output, input, (uint32_t)5U * sizeof input[0U]); Hacl_Bignum_Fsquare_fsquare_times_(output, t, count1); } } inline static void Hacl_Bignum_Fsquare_fsquare_times_inplace(uint64_t *output, uint32_t count1) { KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U); { uint128_t t[5U]; { uint32_t _i; for (_i = 0U; _i < (uint32_t)5U; ++_i) t[_i] = (uint128_t)(uint64_t)0U; } Hacl_Bignum_Fsquare_fsquare_times_(output, t, count1); } } inline static void Hacl_Bignum_Crecip_crecip(uint64_t *out, uint64_t *z) { uint64_t buf[20U] = { 0U }; uint64_t *a0 = buf; uint64_t *t00 = buf + (uint32_t)5U; uint64_t *b0 = buf + (uint32_t)10U; uint64_t *t01; uint64_t *b1; uint64_t *c0; 
uint64_t *a; uint64_t *t0; uint64_t *b; uint64_t *c; Hacl_Bignum_Fsquare_fsquare_times(a0, z, (uint32_t)1U); Hacl_Bignum_Fsquare_fsquare_times(t00, a0, (uint32_t)2U); Hacl_Bignum_Fmul_fmul(b0, t00, z); Hacl_Bignum_Fmul_fmul(a0, b0, a0); Hacl_Bignum_Fsquare_fsquare_times(t00, a0, (uint32_t)1U); Hacl_Bignum_Fmul_fmul(b0, t00, b0); Hacl_Bignum_Fsquare_fsquare_times(t00, b0, (uint32_t)5U); t01 = buf + (uint32_t)5U; b1 = buf + (uint32_t)10U; c0 = buf + (uint32_t)15U; Hacl_Bignum_Fmul_fmul(b1, t01, b1); Hacl_Bignum_Fsquare_fsquare_times(t01, b1, (uint32_t)10U); Hacl_Bignum_Fmul_fmul(c0, t01, b1); Hacl_Bignum_Fsquare_fsquare_times(t01, c0, (uint32_t)20U); Hacl_Bignum_Fmul_fmul(t01, t01, c0); Hacl_Bignum_Fsquare_fsquare_times_inplace(t01, (uint32_t)10U); Hacl_Bignum_Fmul_fmul(b1, t01, b1); Hacl_Bignum_Fsquare_fsquare_times(t01, b1, (uint32_t)50U); a = buf; t0 = buf + (uint32_t)5U; b = buf + (uint32_t)10U; c = buf + (uint32_t)15U; Hacl_Bignum_Fmul_fmul(c, t0, b); Hacl_Bignum_Fsquare_fsquare_times(t0, c, (uint32_t)100U); Hacl_Bignum_Fmul_fmul(t0, t0, c); Hacl_Bignum_Fsquare_fsquare_times_inplace(t0, (uint32_t)50U); Hacl_Bignum_Fmul_fmul(t0, t0, b); Hacl_Bignum_Fsquare_fsquare_times_inplace(t0, (uint32_t)5U); Hacl_Bignum_Fmul_fmul(out, t0, a); } inline static void Hacl_Bignum_fsum(uint64_t *a, uint64_t *b) { uint32_t i; for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U) { uint64_t xi = a[i]; uint64_t yi = b[i]; a[i] = xi + yi; } } inline static void Hacl_Bignum_fdifference(uint64_t *a, uint64_t *b) { uint64_t tmp[5U] = { 0U }; uint64_t b0; uint64_t b1; uint64_t b2; uint64_t b3; uint64_t b4; memcpy(tmp, b, (uint32_t)5U * sizeof b[0U]); b0 = tmp[0U]; b1 = tmp[1U]; b2 = tmp[2U]; b3 = tmp[3U]; b4 = tmp[4U]; tmp[0U] = b0 + (uint64_t)0x3fffffffffff68U; tmp[1U] = b1 + (uint64_t)0x3ffffffffffff8U; tmp[2U] = b2 + (uint64_t)0x3ffffffffffff8U; tmp[3U] = b3 + (uint64_t)0x3ffffffffffff8U; tmp[4U] = b4 + (uint64_t)0x3ffffffffffff8U; { uint32_t i; for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U) { uint64_t xi = a[i]; uint64_t yi = tmp[i]; a[i] = yi - xi; } } } inline static void Hacl_Bignum_fscalar(uint64_t *output, uint64_t *b, uint64_t s) { KRML_CHECK_SIZE(sizeof (uint128_t), (uint32_t)5U); { uint128_t tmp[5U]; { uint32_t _i; for (_i = 0U; _i < (uint32_t)5U; ++_i) tmp[_i] = (uint128_t)(uint64_t)0U; } { uint128_t b4; uint128_t b0; uint128_t b4_; uint128_t b0_; { uint32_t i; for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U) { uint64_t xi = b[i]; tmp[i] = (uint128_t)xi * s; } } Hacl_Bignum_Fproduct_carry_wide_(tmp); b4 = tmp[4U]; b0 = tmp[0U]; b4_ = b4 & (uint128_t)(uint64_t)0x7ffffffffffffU; b0_ = b0 + (uint128_t)(uint64_t)19U * (uint64_t)(b4 >> (uint32_t)51U); tmp[4U] = b4_; tmp[0U] = b0_; Hacl_Bignum_Fproduct_copy_from_wide_(output, tmp); } } } inline static void Hacl_Bignum_fmul(uint64_t *output, uint64_t *a, uint64_t *b) { Hacl_Bignum_Fmul_fmul(output, a, b); } inline static void Hacl_Bignum_crecip(uint64_t *output, uint64_t *input) { Hacl_Bignum_Crecip_crecip(output, input); } static void Hacl_EC_Point_swap_conditional_step(uint64_t *a, uint64_t *b, uint64_t swap1, uint32_t ctr) { uint32_t i = ctr - (uint32_t)1U; uint64_t ai = a[i]; uint64_t bi = b[i]; uint64_t x = swap1 & (ai ^ bi); uint64_t ai1 = ai ^ x; uint64_t bi1 = bi ^ x; a[i] = ai1; b[i] = bi1; } static void Hacl_EC_Point_swap_conditional_(uint64_t *a, uint64_t *b, uint64_t swap1, uint32_t ctr) { if (!(ctr == (uint32_t)0U)) { uint32_t i; Hacl_EC_Point_swap_conditional_step(a, b, swap1, ctr); i = ctr - 
(uint32_t)1U; Hacl_EC_Point_swap_conditional_(a, b, swap1, i); } } static void Hacl_EC_Point_swap_conditional(uint64_t *a, uint64_t *b, uint64_t iswap) { uint64_t swap1 = (uint64_t)0U - iswap; Hacl_EC_Point_swap_conditional_(a, b, swap1, (uint32_t)5U); Hacl_EC_Point_swap_conditional_(a + (uint32_t)5U, b + (uint32_t)5U, swap1, (uint32_t)5U); } static void Hacl_EC_Point_copy(uint64_t *output, uint64_t *input) { memcpy(output, input, (uint32_t)5U * sizeof input[0U]); memcpy(output + (uint32_t)5U, input + (uint32_t)5U, (uint32_t)5U * sizeof (input + (uint32_t)5U)[0U]); } static void Hacl_EC_Format_fexpand(uint64_t *output, uint8_t *input) { uint64_t i0 = load64_le(input); uint8_t *x00 = input + (uint32_t)6U; uint64_t i1 = load64_le(x00); uint8_t *x01 = input + (uint32_t)12U; uint64_t i2 = load64_le(x01); uint8_t *x02 = input + (uint32_t)19U; uint64_t i3 = load64_le(x02); uint8_t *x0 = input + (uint32_t)24U; uint64_t i4 = load64_le(x0); uint64_t output0 = i0 & (uint64_t)0x7ffffffffffffU; uint64_t output1 = i1 >> (uint32_t)3U & (uint64_t)0x7ffffffffffffU; uint64_t output2 = i2 >> (uint32_t)6U & (uint64_t)0x7ffffffffffffU; uint64_t output3 = i3 >> (uint32_t)1U & (uint64_t)0x7ffffffffffffU; uint64_t output4 = i4 >> (uint32_t)12U & (uint64_t)0x7ffffffffffffU; output[0U] = output0; output[1U] = output1; output[2U] = output2; output[3U] = output3; output[4U] = output4; } static void Hacl_EC_Format_fcontract_first_carry_pass(uint64_t *input) { uint64_t t0 = input[0U]; uint64_t t1 = input[1U]; uint64_t t2 = input[2U]; uint64_t t3 = input[3U]; uint64_t t4 = input[4U]; uint64_t t1_ = t1 + (t0 >> (uint32_t)51U); uint64_t t0_ = t0 & (uint64_t)0x7ffffffffffffU; uint64_t t2_ = t2 + (t1_ >> (uint32_t)51U); uint64_t t1__ = t1_ & (uint64_t)0x7ffffffffffffU; uint64_t t3_ = t3 + (t2_ >> (uint32_t)51U); uint64_t t2__ = t2_ & (uint64_t)0x7ffffffffffffU; uint64_t t4_ = t4 + (t3_ >> (uint32_t)51U); uint64_t t3__ = t3_ & (uint64_t)0x7ffffffffffffU; input[0U] = t0_; input[1U] = t1__; input[2U] = t2__; input[3U] = t3__; input[4U] = t4_; } static void Hacl_EC_Format_fcontract_first_carry_full(uint64_t *input) { Hacl_EC_Format_fcontract_first_carry_pass(input); Hacl_Bignum_Modulo_carry_top(input); } static void Hacl_EC_Format_fcontract_second_carry_pass(uint64_t *input) { uint64_t t0 = input[0U]; uint64_t t1 = input[1U]; uint64_t t2 = input[2U]; uint64_t t3 = input[3U]; uint64_t t4 = input[4U]; uint64_t t1_ = t1 + (t0 >> (uint32_t)51U); uint64_t t0_ = t0 & (uint64_t)0x7ffffffffffffU; uint64_t t2_ = t2 + (t1_ >> (uint32_t)51U); uint64_t t1__ = t1_ & (uint64_t)0x7ffffffffffffU; uint64_t t3_ = t3 + (t2_ >> (uint32_t)51U); uint64_t t2__ = t2_ & (uint64_t)0x7ffffffffffffU; uint64_t t4_ = t4 + (t3_ >> (uint32_t)51U); uint64_t t3__ = t3_ & (uint64_t)0x7ffffffffffffU; input[0U] = t0_; input[1U] = t1__; input[2U] = t2__; input[3U] = t3__; input[4U] = t4_; } static void Hacl_EC_Format_fcontract_second_carry_full(uint64_t *input) { uint64_t i0; uint64_t i1; uint64_t i0_; uint64_t i1_; Hacl_EC_Format_fcontract_second_carry_pass(input); Hacl_Bignum_Modulo_carry_top(input); i0 = input[0U]; i1 = input[1U]; i0_ = i0 & (uint64_t)0x7ffffffffffffU; i1_ = i1 + (i0 >> (uint32_t)51U); input[0U] = i0_; input[1U] = i1_; } static void Hacl_EC_Format_fcontract_trim(uint64_t *input) { uint64_t a0 = input[0U]; uint64_t a1 = input[1U]; uint64_t a2 = input[2U]; uint64_t a3 = input[3U]; uint64_t a4 = input[4U]; uint64_t mask0 = FStar_UInt64_gte_mask(a0, (uint64_t)0x7ffffffffffedU); uint64_t mask1 = FStar_UInt64_eq_mask(a1, (uint64_t)0x7ffffffffffffU); 
uint64_t mask2 = FStar_UInt64_eq_mask(a2, (uint64_t)0x7ffffffffffffU); uint64_t mask3 = FStar_UInt64_eq_mask(a3, (uint64_t)0x7ffffffffffffU); uint64_t mask4 = FStar_UInt64_eq_mask(a4, (uint64_t)0x7ffffffffffffU); uint64_t mask = (((mask0 & mask1) & mask2) & mask3) & mask4; uint64_t a0_ = a0 - ((uint64_t)0x7ffffffffffedU & mask); uint64_t a1_ = a1 - ((uint64_t)0x7ffffffffffffU & mask); uint64_t a2_ = a2 - ((uint64_t)0x7ffffffffffffU & mask); uint64_t a3_ = a3 - ((uint64_t)0x7ffffffffffffU & mask); uint64_t a4_ = a4 - ((uint64_t)0x7ffffffffffffU & mask); input[0U] = a0_; input[1U] = a1_; input[2U] = a2_; input[3U] = a3_; input[4U] = a4_; } static void Hacl_EC_Format_fcontract_store(uint8_t *output, uint64_t *input) { uint64_t t0 = input[0U]; uint64_t t1 = input[1U]; uint64_t t2 = input[2U]; uint64_t t3 = input[3U]; uint64_t t4 = input[4U]; uint64_t o0 = t1 << (uint32_t)51U | t0; uint64_t o1 = t2 << (uint32_t)38U | t1 >> (uint32_t)13U; uint64_t o2 = t3 << (uint32_t)25U | t2 >> (uint32_t)26U; uint64_t o3 = t4 << (uint32_t)12U | t3 >> (uint32_t)39U; uint8_t *b0 = output; uint8_t *b1 = output + (uint32_t)8U; uint8_t *b2 = output + (uint32_t)16U; uint8_t *b3 = output + (uint32_t)24U; store64_le(b0, o0); store64_le(b1, o1); store64_le(b2, o2); store64_le(b3, o3); } static void Hacl_EC_Format_fcontract(uint8_t *output, uint64_t *input) { Hacl_EC_Format_fcontract_first_carry_full(input); Hacl_EC_Format_fcontract_second_carry_full(input); Hacl_EC_Format_fcontract_trim(input); Hacl_EC_Format_fcontract_store(output, input); } static void Hacl_EC_Format_scalar_of_point(uint8_t *scalar, uint64_t *point) { uint64_t *x = point; uint64_t *z = point + (uint32_t)5U; uint64_t buf[10U] = { 0U }; uint64_t *zmone = buf; uint64_t *sc = buf + (uint32_t)5U; Hacl_Bignum_crecip(zmone, z); Hacl_Bignum_fmul(sc, x, zmone); Hacl_EC_Format_fcontract(scalar, sc); } static void Hacl_EC_AddAndDouble_fmonty( uint64_t *pp, uint64_t *ppq, uint64_t *p, uint64_t *pq, uint64_t *qmqp ) { uint64_t *qx = qmqp; uint64_t *x2 = pp; uint64_t *z2 = pp + (uint32_t)5U; uint64_t *x3 = ppq; uint64_t *z3 = ppq + (uint32_t)5U; uint64_t *x = p; uint64_t *z = p + (uint32_t)5U; uint64_t *xprime = pq; uint64_t *zprime = pq + (uint32_t)5U; uint64_t buf[40U] = { 0U }; uint64_t *origx = buf; uint64_t *origxprime0 = buf + (uint32_t)5U; uint64_t *xxprime0 = buf + (uint32_t)25U; uint64_t *zzprime0 = buf + (uint32_t)30U; uint64_t *origxprime; uint64_t *xx0; uint64_t *zz0; uint64_t *xxprime; uint64_t *zzprime; uint64_t *zzzprime; uint64_t *zzz; uint64_t *xx; uint64_t *zz; uint64_t scalar; memcpy(origx, x, (uint32_t)5U * sizeof x[0U]); Hacl_Bignum_fsum(x, z); Hacl_Bignum_fdifference(z, origx); memcpy(origxprime0, xprime, (uint32_t)5U * sizeof xprime[0U]); Hacl_Bignum_fsum(xprime, zprime); Hacl_Bignum_fdifference(zprime, origxprime0); Hacl_Bignum_fmul(xxprime0, xprime, z); Hacl_Bignum_fmul(zzprime0, x, zprime); origxprime = buf + (uint32_t)5U; xx0 = buf + (uint32_t)15U; zz0 = buf + (uint32_t)20U; xxprime = buf + (uint32_t)25U; zzprime = buf + (uint32_t)30U; zzzprime = buf + (uint32_t)35U; memcpy(origxprime, xxprime, (uint32_t)5U * sizeof xxprime[0U]); Hacl_Bignum_fsum(xxprime, zzprime); Hacl_Bignum_fdifference(zzprime, origxprime); Hacl_Bignum_Fsquare_fsquare_times(x3, xxprime, (uint32_t)1U); Hacl_Bignum_Fsquare_fsquare_times(zzzprime, zzprime, (uint32_t)1U); Hacl_Bignum_fmul(z3, zzzprime, qx); Hacl_Bignum_Fsquare_fsquare_times(xx0, x, (uint32_t)1U); Hacl_Bignum_Fsquare_fsquare_times(zz0, z, (uint32_t)1U); zzz = buf + (uint32_t)10U; xx = buf + (uint32_t)15U; 
zz = buf + (uint32_t)20U; Hacl_Bignum_fmul(x2, xx, zz); Hacl_Bignum_fdifference(zz, xx); scalar = (uint64_t)121665U; Hacl_Bignum_fscalar(zzz, zz, scalar); Hacl_Bignum_fsum(zzz, xx); Hacl_Bignum_fmul(z2, zzz, zz); } static void Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step( uint64_t *nq, uint64_t *nqpq, uint64_t *nq2, uint64_t *nqpq2, uint64_t *q, uint8_t byt ) { uint64_t bit0 = (uint64_t)(byt >> (uint32_t)7U); uint64_t bit; Hacl_EC_Point_swap_conditional(nq, nqpq, bit0); Hacl_EC_AddAndDouble_fmonty(nq2, nqpq2, nq, nqpq, q); bit = (uint64_t)(byt >> (uint32_t)7U); Hacl_EC_Point_swap_conditional(nq2, nqpq2, bit); } static void Hacl_EC_Ladder_SmallLoop_cmult_small_loop_double_step( uint64_t *nq, uint64_t *nqpq, uint64_t *nq2, uint64_t *nqpq2, uint64_t *q, uint8_t byt ) { uint8_t byt1; Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(nq, nqpq, nq2, nqpq2, q, byt); byt1 = byt << (uint32_t)1U; Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(nq2, nqpq2, nq, nqpq, q, byt1); } static void Hacl_EC_Ladder_SmallLoop_cmult_small_loop( uint64_t *nq, uint64_t *nqpq, uint64_t *nq2, uint64_t *nqpq2, uint64_t *q, uint8_t byt, uint32_t i ) { if (!(i == (uint32_t)0U)) { uint32_t i_ = i - (uint32_t)1U; uint8_t byt_; Hacl_EC_Ladder_SmallLoop_cmult_small_loop_double_step(nq, nqpq, nq2, nqpq2, q, byt); byt_ = byt << (uint32_t)2U; Hacl_EC_Ladder_SmallLoop_cmult_small_loop(nq, nqpq, nq2, nqpq2, q, byt_, i_); } } static void Hacl_EC_Ladder_BigLoop_cmult_big_loop( uint8_t *n1, uint64_t *nq, uint64_t *nqpq, uint64_t *nq2, uint64_t *nqpq2, uint64_t *q, uint32_t i ) { if (!(i == (uint32_t)0U)) { uint32_t i1 = i - (uint32_t)1U; uint8_t byte = n1[i1]; Hacl_EC_Ladder_SmallLoop_cmult_small_loop(nq, nqpq, nq2, nqpq2, q, byte, (uint32_t)4U); Hacl_EC_Ladder_BigLoop_cmult_big_loop(n1, nq, nqpq, nq2, nqpq2, q, i1); } } static void Hacl_EC_Ladder_cmult(uint64_t *result, uint8_t *n1, uint64_t *q) { uint64_t point_buf[40U] = { 0U }; uint64_t *nq = point_buf; uint64_t *nqpq = point_buf + (uint32_t)10U; uint64_t *nq2 = point_buf + (uint32_t)20U; uint64_t *nqpq2 = point_buf + (uint32_t)30U; Hacl_EC_Point_copy(nqpq, q); nq[0U] = (uint64_t)1U; Hacl_EC_Ladder_BigLoop_cmult_big_loop(n1, nq, nqpq, nq2, nqpq2, q, (uint32_t)32U); Hacl_EC_Point_copy(result, nq); } void Hacl_Curve25519_crypto_scalarmult(uint8_t *mypublic, uint8_t *secret, uint8_t *basepoint) { uint64_t buf0[10U] = { 0U }; uint64_t *x0 = buf0; uint64_t *z = buf0 + (uint32_t)5U; uint64_t *q; Hacl_EC_Format_fexpand(x0, basepoint); z[0U] = (uint64_t)1U; q = buf0; { uint8_t e[32U] = { 0U }; uint8_t e0; uint8_t e31; uint8_t e01; uint8_t e311; uint8_t e312; uint8_t *scalar; memcpy(e, secret, (uint32_t)32U * sizeof secret[0U]); e0 = e[0U]; e31 = e[31U]; e01 = e0 & (uint8_t)248U; e311 = e31 & (uint8_t)127U; e312 = e311 | (uint8_t)64U; e[0U] = e01; e[31U] = e312; scalar = e; { uint64_t buf[15U] = { 0U }; uint64_t *nq = buf; uint64_t *x = nq; x[0U] = (uint64_t)1U; Hacl_EC_Ladder_cmult(nq, scalar, q); Hacl_EC_Format_scalar_of_point(mypublic, nq); } } }
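Internally the field elements use five 51-bit limbs (`fexpand`/`fcontract` convert to and from 32 little-endian bytes, so a value is the sum of limb_i * 2^(51*i) mod 2^255 - 19), and the exported entry point `Hacl_Curve25519_crypto_scalarmult` clamps the scalar per RFC 7748 (`e[0] &= 248; e[31] = (e[31] & 127) | 64`) before running the Montgomery ladder. A usage sketch, assuming `Hacl_Curve25519.h` is on the include path; the `demo_*` wrappers are illustrative:

```C
#include <stdint.h>
#include "Hacl_Curve25519.h"

/* Derive an X25519 public key by multiplying the canonical base point
 * (u = 9); clamping of the secret happens inside crypto_scalarmult. */
void demo_x25519_public_key(uint8_t public_key[32], uint8_t secret[32])
{
    uint8_t basepoint[32] = { 9 }; /* remaining bytes zero-initialized */
    Hacl_Curve25519_crypto_scalarmult(public_key, secret, basepoint);
}

/* Shared secret: scalarmult of our secret over the peer's public point. */
void demo_x25519_shared(uint8_t shared[32], uint8_t secret[32],
                        uint8_t peer_public[32])
{
    Hacl_Curve25519_crypto_scalarmult(shared, secret, peer_public);
}
```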
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library/Hacl_Curve25519_joined.c
/* * Interface to code from Project Everest * * Copyright 2016-2018 INRIA and Microsoft Corporation * SPDX-License-Identifier: Apache-2.0 * * Licensed under the Apache License, Version 2.0 (the "License"); you may * not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * This file is part of mbed TLS (https://tls.mbed.org) */ #include "common.h" #if defined(MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED) #if defined(__SIZEOF_INT128__) && (__SIZEOF_INT128__ == 16) #define MBEDTLS_HAVE_INT128 #endif #if defined(MBEDTLS_HAVE_INT128) #include "Hacl_Curve25519.c" #else #define KRML_VERIFIED_UINT128 #include "kremlib/FStar_UInt128_extracted.c" #include "legacy/Hacl_Curve25519.c" #endif #include "kremlib/FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.c" #endif /* defined(MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED) */
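This "joined" translation unit multiplexes between two builds of the same verified code: when the compiler predefines `__SIZEOF_INT128__` as 16 it includes the variant that uses native `unsigned __int128`; otherwise it defines `KRML_VERIFIED_UINT128` and includes the extracted struct-based `FStar_UInt128` plus the legacy curve file. The selection can be observed in isolation with a small probe (a sketch; GCC and Clang predefine the macro, MSVC does not):

```C
#include <stdio.h>

int main(void)
{
#if defined(__SIZEOF_INT128__) && (__SIZEOF_INT128__ == 16)
    /* Native path: Hacl_Curve25519.c compiled against unsigned __int128. */
    puts("using native 128-bit integers");
#else
    /* Fallback path: KRML_VERIFIED_UINT128 + FStar_UInt128_extracted.c. */
    puts("using the verified uint128 emulation");
#endif
    return 0;
}
```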
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library/x25519.c
/* * ECDH with curve-optimized implementation multiplexing * * Copyright 2016-2018 INRIA and Microsoft Corporation * SPDX-License-Identifier: Apache-2.0 * * Licensed under the Apache License, Version 2.0 (the "License"); you may * not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * This file is part of mbed TLS (https://tls.mbed.org) */ #include "common.h" #if defined(MBEDTLS_ECDH_C) && defined(MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED) #include <mbedtls/ecdh.h> #if !(defined(__SIZEOF_INT128__) && (__SIZEOF_INT128__ == 16)) #define KRML_VERIFIED_UINT128 #endif #include <Hacl_Curve25519.h> #include <mbedtls/platform_util.h> #include "x25519.h" #include <string.h> /* * Initialize context */ void mbedtls_x25519_init( mbedtls_x25519_context *ctx ) { mbedtls_platform_zeroize( ctx, sizeof( mbedtls_x25519_context ) ); } /* * Free context */ void mbedtls_x25519_free( mbedtls_x25519_context *ctx ) { if( ctx == NULL ) return; mbedtls_platform_zeroize( ctx->our_secret, MBEDTLS_X25519_KEY_SIZE_BYTES ); mbedtls_platform_zeroize( ctx->peer_point, MBEDTLS_X25519_KEY_SIZE_BYTES ); } int mbedtls_x25519_make_params( mbedtls_x25519_context *ctx, size_t *olen, unsigned char *buf, size_t blen, int( *f_rng )(void *, unsigned char *, size_t), void *p_rng ) { int ret = 0; uint8_t base[MBEDTLS_X25519_KEY_SIZE_BYTES] = {0}; if( ( ret = f_rng( p_rng, ctx->our_secret, MBEDTLS_X25519_KEY_SIZE_BYTES ) ) != 0 ) return ret; *olen = MBEDTLS_X25519_KEY_SIZE_BYTES + 4; if( blen < *olen ) return( MBEDTLS_ERR_ECP_BUFFER_TOO_SMALL ); *buf++ = MBEDTLS_ECP_TLS_NAMED_CURVE; *buf++ = MBEDTLS_ECP_TLS_CURVE25519 >> 8; *buf++ = MBEDTLS_ECP_TLS_CURVE25519 & 0xFF; *buf++ = MBEDTLS_X25519_KEY_SIZE_BYTES; base[0] = 9; Hacl_Curve25519_crypto_scalarmult( buf, ctx->our_secret, base ); base[0] = 0; if( memcmp( buf, base, MBEDTLS_X25519_KEY_SIZE_BYTES) == 0 ) return MBEDTLS_ERR_ECP_RANDOM_FAILED; return( 0 ); } int mbedtls_x25519_read_params( mbedtls_x25519_context *ctx, const unsigned char **buf, const unsigned char *end ) { if( end - *buf < MBEDTLS_X25519_KEY_SIZE_BYTES + 1 ) return( MBEDTLS_ERR_ECP_BAD_INPUT_DATA ); if( ( *(*buf)++ != MBEDTLS_X25519_KEY_SIZE_BYTES ) ) return( MBEDTLS_ERR_ECP_BAD_INPUT_DATA ); memcpy( ctx->peer_point, *buf, MBEDTLS_X25519_KEY_SIZE_BYTES ); *buf += MBEDTLS_X25519_KEY_SIZE_BYTES; return( 0 ); } int mbedtls_x25519_get_params( mbedtls_x25519_context *ctx, const mbedtls_ecp_keypair *key, mbedtls_x25519_ecdh_side side ) { size_t olen = 0; switch( side ) { case MBEDTLS_X25519_ECDH_THEIRS: return mbedtls_ecp_point_write_binary( &key->grp, &key->Q, MBEDTLS_ECP_PF_COMPRESSED, &olen, ctx->peer_point, MBEDTLS_X25519_KEY_SIZE_BYTES ); case MBEDTLS_X25519_ECDH_OURS: return mbedtls_mpi_write_binary_le( &key->d, ctx->our_secret, MBEDTLS_X25519_KEY_SIZE_BYTES ); default: return( MBEDTLS_ERR_ECP_BAD_INPUT_DATA ); } } int mbedtls_x25519_calc_secret( mbedtls_x25519_context *ctx, size_t *olen, unsigned char *buf, size_t blen, int( *f_rng )(void *, unsigned char *, size_t), void *p_rng ) { /* f_rng and p_rng are not used here because this implementation does not need blinding since it has constant trace. 
*/ (( void )f_rng); (( void )p_rng); *olen = MBEDTLS_X25519_KEY_SIZE_BYTES; if( blen < *olen ) return( MBEDTLS_ERR_ECP_BUFFER_TOO_SMALL ); Hacl_Curve25519_crypto_scalarmult( buf, ctx->our_secret, ctx->peer_point); /* Wipe the DH secret and reject the all-zero secret that results when the peer chooses a small-subgroup point */ mbedtls_platform_zeroize( ctx->our_secret, MBEDTLS_X25519_KEY_SIZE_BYTES ); if( memcmp( buf, ctx->our_secret, MBEDTLS_X25519_KEY_SIZE_BYTES) == 0 ) return MBEDTLS_ERR_ECP_RANDOM_FAILED; return( 0 ); } int mbedtls_x25519_make_public( mbedtls_x25519_context *ctx, size_t *olen, unsigned char *buf, size_t blen, int( *f_rng )(void *, unsigned char *, size_t), void *p_rng ) { int ret = 0; unsigned char base[MBEDTLS_X25519_KEY_SIZE_BYTES] = { 0 }; if( ctx == NULL ) return( MBEDTLS_ERR_ECP_BAD_INPUT_DATA ); if( ( ret = f_rng( p_rng, ctx->our_secret, MBEDTLS_X25519_KEY_SIZE_BYTES ) ) != 0 ) return ret; *olen = MBEDTLS_X25519_KEY_SIZE_BYTES + 1; if( blen < *olen ) return(MBEDTLS_ERR_ECP_BUFFER_TOO_SMALL); *buf++ = MBEDTLS_X25519_KEY_SIZE_BYTES; base[0] = 9; Hacl_Curve25519_crypto_scalarmult( buf, ctx->our_secret, base ); base[0] = 0; if( memcmp( buf, base, MBEDTLS_X25519_KEY_SIZE_BYTES ) == 0 ) return MBEDTLS_ERR_ECP_RANDOM_FAILED; return( ret ); } int mbedtls_x25519_read_public( mbedtls_x25519_context *ctx, const unsigned char *buf, size_t blen ) { if( blen < MBEDTLS_X25519_KEY_SIZE_BYTES + 1 ) return(MBEDTLS_ERR_ECP_BUFFER_TOO_SMALL); if( (*buf++ != MBEDTLS_X25519_KEY_SIZE_BYTES) ) return(MBEDTLS_ERR_ECP_BAD_INPUT_DATA); memcpy( ctx->peer_point, buf, MBEDTLS_X25519_KEY_SIZE_BYTES ); return( 0 ); } #endif /* MBEDTLS_ECDH_C && MBEDTLS_ECDH_VARIANT_EVEREST_ENABLED */
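For reference, `mbedtls_x25519_make_params` produces the TLS `ServerECDHParams` framing: a `curve_type` byte (`MBEDTLS_ECP_TLS_NAMED_CURVE`), the big-endian `named_curve` id (`MBEDTLS_ECP_TLS_CURVE25519`, 29 per RFC 8422), a one-byte point length of 32, and the raw public point, 36 bytes in all; `mbedtls_x25519_read_params` consumes only the trailing `length || point` part. A layout sketch (illustrative struct; the library itself writes raw bytes):

```C
#include <stdint.h>

/* Byte layout produced by mbedtls_x25519_make_params (36 bytes total):
 *
 *   offset 0      curve_type   = MBEDTLS_ECP_TLS_NAMED_CURVE
 *   offset 1..2   named_curve  = MBEDTLS_ECP_TLS_CURVE25519, big-endian
 *   offset 3      point_len    = 32
 *   offset 4..35  public point = X25519(our_secret, 9)
 */
typedef struct {
    uint8_t curve_type;
    uint8_t named_curve[2];
    uint8_t point_len;
    uint8_t point[32];
} x25519_server_params;
```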
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library/kremlib/FStar_UInt128_extracted.c
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file was generated by KreMLin <https://github.com/FStarLang/kremlin> * KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir extracted -warn-error +9+11 -skip-compilation -extract-uints -add-include <inttypes.h> -add-include "kremlib.h" -add-include "kremlin/internal/compat.h" extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml * F* version: 059db0c8 * KreMLin version: 916c37ac */ #include "FStar_UInt128.h" #include "kremlin/c_endianness.h" #include "FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.h" uint64_t FStar_UInt128___proj__Mkuint128__item__low(FStar_UInt128_uint128 projectee) { return projectee.low; } uint64_t FStar_UInt128___proj__Mkuint128__item__high(FStar_UInt128_uint128 projectee) { return projectee.high; } static uint64_t FStar_UInt128_constant_time_carry(uint64_t a, uint64_t b) { return (a ^ ((a ^ b) | ((a - b) ^ b))) >> (uint32_t)63U; } static uint64_t FStar_UInt128_carry(uint64_t a, uint64_t b) { return FStar_UInt128_constant_time_carry(a, b); } FStar_UInt128_uint128 FStar_UInt128_add(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { 
FStar_UInt128_uint128 flat = { a.low + b.low, a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_add_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low + b.low, a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_add_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low + b.low, a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_sub(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low - b.low, a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_sub_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low - b.low, a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low) }; return flat; } static FStar_UInt128_uint128 FStar_UInt128_sub_mod_impl(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low - b.low, a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_sub_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { return FStar_UInt128_sub_mod_impl(a, b); } FStar_UInt128_uint128 FStar_UInt128_logand(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low & b.low, a.high & b.high }; return flat; } FStar_UInt128_uint128 FStar_UInt128_logxor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low ^ b.low, a.high ^ b.high }; return flat; } FStar_UInt128_uint128 FStar_UInt128_logor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { a.low | b.low, a.high | b.high }; return flat; } FStar_UInt128_uint128 FStar_UInt128_lognot(FStar_UInt128_uint128 a) { FStar_UInt128_uint128 flat = { ~a.low, ~a.high }; return flat; } static uint32_t FStar_UInt128_u32_64 = (uint32_t)64U; static uint64_t FStar_UInt128_add_u64_shift_left(uint64_t hi, uint64_t lo, uint32_t s) { return (hi << s) + (lo >> (FStar_UInt128_u32_64 - s)); } static uint64_t FStar_UInt128_add_u64_shift_left_respec(uint64_t hi, uint64_t lo, uint32_t s) { return FStar_UInt128_add_u64_shift_left(hi, lo, s); } static FStar_UInt128_uint128 FStar_UInt128_shift_left_small(FStar_UInt128_uint128 a, uint32_t s) { if (s == (uint32_t)0U) { return a; } else { FStar_UInt128_uint128 flat = { a.low << s, FStar_UInt128_add_u64_shift_left_respec(a.high, a.low, s) }; return flat; } } static FStar_UInt128_uint128 FStar_UInt128_shift_left_large(FStar_UInt128_uint128 a, uint32_t s) { FStar_UInt128_uint128 flat = { (uint64_t)0U, a.low << (s - FStar_UInt128_u32_64) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_shift_left(FStar_UInt128_uint128 a, uint32_t s) { if (s < FStar_UInt128_u32_64) { return FStar_UInt128_shift_left_small(a, s); } else { return FStar_UInt128_shift_left_large(a, s); } } static uint64_t FStar_UInt128_add_u64_shift_right(uint64_t hi, uint64_t lo, uint32_t s) { return (lo >> s) + (hi << (FStar_UInt128_u32_64 - s)); } static uint64_t FStar_UInt128_add_u64_shift_right_respec(uint64_t hi, uint64_t lo, uint32_t s) { return FStar_UInt128_add_u64_shift_right(hi, lo, s); } static FStar_UInt128_uint128 FStar_UInt128_shift_right_small(FStar_UInt128_uint128 a, uint32_t s) { if (s == (uint32_t)0U) { return a; } else { FStar_UInt128_uint128 flat = { 
FStar_UInt128_add_u64_shift_right_respec(a.high, a.low, s), a.high >> s }; return flat; } } static FStar_UInt128_uint128 FStar_UInt128_shift_right_large(FStar_UInt128_uint128 a, uint32_t s) { FStar_UInt128_uint128 flat = { a.high >> (s - FStar_UInt128_u32_64), (uint64_t)0U }; return flat; } FStar_UInt128_uint128 FStar_UInt128_shift_right(FStar_UInt128_uint128 a, uint32_t s) { if (s < FStar_UInt128_u32_64) { return FStar_UInt128_shift_right_small(a, s); } else { return FStar_UInt128_shift_right_large(a, s); } } bool FStar_UInt128_eq(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { return a.low == b.low && a.high == b.high; } bool FStar_UInt128_gt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { return a.high > b.high || (a.high == b.high && a.low > b.low); } bool FStar_UInt128_lt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { return a.high < b.high || (a.high == b.high && a.low < b.low); } bool FStar_UInt128_gte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { return a.high > b.high || (a.high == b.high && a.low >= b.low); } bool FStar_UInt128_lte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { return a.high < b.high || (a.high == b.high && a.low <= b.low); } FStar_UInt128_uint128 FStar_UInt128_eq_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { FStar_UInt64_eq_mask(a.low, b.low) & FStar_UInt64_eq_mask(a.high, b.high), FStar_UInt64_eq_mask(a.low, b.low) & FStar_UInt64_eq_mask(a.high, b.high) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_gte_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) { FStar_UInt128_uint128 flat = { (FStar_UInt64_gte_mask(a.high, b.high) & ~FStar_UInt64_eq_mask(a.high, b.high)) | (FStar_UInt64_eq_mask(a.high, b.high) & FStar_UInt64_gte_mask(a.low, b.low)), (FStar_UInt64_gte_mask(a.high, b.high) & ~FStar_UInt64_eq_mask(a.high, b.high)) | (FStar_UInt64_eq_mask(a.high, b.high) & FStar_UInt64_gte_mask(a.low, b.low)) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_uint64_to_uint128(uint64_t a) { FStar_UInt128_uint128 flat = { a, (uint64_t)0U }; return flat; } uint64_t FStar_UInt128_uint128_to_uint64(FStar_UInt128_uint128 a) { return a.low; } FStar_UInt128_uint128 (*FStar_UInt128_op_Plus_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_add; FStar_UInt128_uint128 (*FStar_UInt128_op_Plus_Question_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_add_underspec; FStar_UInt128_uint128 (*FStar_UInt128_op_Plus_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_add_mod; FStar_UInt128_uint128 (*FStar_UInt128_op_Subtraction_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_sub; FStar_UInt128_uint128 (*FStar_UInt128_op_Subtraction_Question_Hat)( FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1 ) = FStar_UInt128_sub_underspec; FStar_UInt128_uint128 (*FStar_UInt128_op_Subtraction_Percent_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_sub_mod; FStar_UInt128_uint128 (*FStar_UInt128_op_Amp_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_logand; FStar_UInt128_uint128 (*FStar_UInt128_op_Hat_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_logxor; FStar_UInt128_uint128 (*FStar_UInt128_op_Bar_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_logor; FStar_UInt128_uint128 (*FStar_UInt128_op_Less_Less_Hat)(FStar_UInt128_uint128 x0, uint32_t x1) = FStar_UInt128_shift_left; FStar_UInt128_uint128 
(*FStar_UInt128_op_Greater_Greater_Hat)(FStar_UInt128_uint128 x0, uint32_t x1) = FStar_UInt128_shift_right; bool (*FStar_UInt128_op_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_eq; bool (*FStar_UInt128_op_Greater_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_gt; bool (*FStar_UInt128_op_Less_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_lt; bool (*FStar_UInt128_op_Greater_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_gte; bool (*FStar_UInt128_op_Less_Equals_Hat)(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1) = FStar_UInt128_lte; static uint64_t FStar_UInt128_u64_mod_32(uint64_t a) { return a & (uint64_t)0xffffffffU; } static uint32_t FStar_UInt128_u32_32 = (uint32_t)32U; static uint64_t FStar_UInt128_u32_combine(uint64_t hi, uint64_t lo) { return lo + (hi << FStar_UInt128_u32_32); } FStar_UInt128_uint128 FStar_UInt128_mul32(uint64_t x, uint32_t y) { FStar_UInt128_uint128 flat = { FStar_UInt128_u32_combine((x >> FStar_UInt128_u32_32) * (uint64_t)y + (FStar_UInt128_u64_mod_32(x) * (uint64_t)y >> FStar_UInt128_u32_32), FStar_UInt128_u64_mod_32(FStar_UInt128_u64_mod_32(x) * (uint64_t)y)), ((x >> FStar_UInt128_u32_32) * (uint64_t)y + (FStar_UInt128_u64_mod_32(x) * (uint64_t)y >> FStar_UInt128_u32_32)) >> FStar_UInt128_u32_32 }; return flat; } typedef struct K___uint64_t_uint64_t_uint64_t_uint64_t_s { uint64_t fst; uint64_t snd; uint64_t thd; uint64_t f3; } K___uint64_t_uint64_t_uint64_t_uint64_t; static K___uint64_t_uint64_t_uint64_t_uint64_t FStar_UInt128_mul_wide_impl_t_(uint64_t x, uint64_t y) { K___uint64_t_uint64_t_uint64_t_uint64_t flat = { FStar_UInt128_u64_mod_32(x), FStar_UInt128_u64_mod_32(FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y)), x >> FStar_UInt128_u32_32, (x >> FStar_UInt128_u32_32) * FStar_UInt128_u64_mod_32(y) + (FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y) >> FStar_UInt128_u32_32) }; return flat; } static uint64_t FStar_UInt128_u32_combine_(uint64_t hi, uint64_t lo) { return lo + (hi << FStar_UInt128_u32_32); } static FStar_UInt128_uint128 FStar_UInt128_mul_wide_impl(uint64_t x, uint64_t y) { K___uint64_t_uint64_t_uint64_t_uint64_t scrut = FStar_UInt128_mul_wide_impl_t_(x, y); uint64_t u1 = scrut.fst; uint64_t w3 = scrut.snd; uint64_t x_ = scrut.thd; uint64_t t_ = scrut.f3; FStar_UInt128_uint128 flat = { FStar_UInt128_u32_combine_(u1 * (y >> FStar_UInt128_u32_32) + FStar_UInt128_u64_mod_32(t_), w3), x_ * (y >> FStar_UInt128_u32_32) + (t_ >> FStar_UInt128_u32_32) + ((u1 * (y >> FStar_UInt128_u32_32) + FStar_UInt128_u64_mod_32(t_)) >> FStar_UInt128_u32_32) }; return flat; } FStar_UInt128_uint128 FStar_UInt128_mul_wide(uint64_t x, uint64_t y) { return FStar_UInt128_mul_wide_impl(x, y); }
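`FStar_UInt128_constant_time_carry(a, b)` evaluates `(a ^ ((a ^ b) | ((a - b) ^ b))) >> 63`, a branch-free computation of the predicate `a < b`; calling it as `carry(a.low + b.low, b.low)` therefore yields the carry out of the 64-bit addition without any secret-dependent branch. A small standalone self-check of that claim:

```C
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* The branch-free carry predicate used by FStar_UInt128_add: returns 1
 * when a < b (unsigned), 0 otherwise, with no data-dependent branches. */
static uint64_t ct_carry(uint64_t a, uint64_t b)
{
    return (a ^ ((a ^ b) | ((a - b) ^ b))) >> 63;
}

int main(void)
{
    uint64_t a = 0xFFFFFFFFFFFFFFFFULL, b = 2;
    uint64_t lo = a + b;              /* wraps to 1 */
    uint64_t carry = ct_carry(lo, b); /* 1: the addition overflowed */
    printf("lo=%" PRIu64 " carry=%" PRIu64 "\n", lo, carry);
    /* Cross-check against the obvious branchy definition. */
    return carry == (lo < b ? 1u : 0u) ? 0 : 1;
}
```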
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library/kremlib/FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.c
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. Licensed under the Apache 2.0 License. */ /* This file was generated by KreMLin <https://github.com/FStarLang/kremlin> * KreMLin invocation: ../krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrB9w -minimal -fparentheses -fcurly-braces -fno-shadow -header copyright-header.txt -minimal -tmpdir dist/minimal -skip-compilation -extract-uints -add-include <inttypes.h> -add-include <stdbool.h> -add-include "kremlin/internal/compat.h" -add-include "kremlin/internal/types.h" -bundle FStar.UInt64+FStar.UInt32+FStar.UInt16+FStar.UInt8=* extracted/prims.krml extracted/FStar_Pervasives_Native.krml extracted/FStar_Pervasives.krml extracted/FStar_Mul.krml extracted/FStar_Squash.krml extracted/FStar_Classical.krml extracted/FStar_StrongExcludedMiddle.krml extracted/FStar_FunctionalExtensionality.krml extracted/FStar_List_Tot_Base.krml extracted/FStar_List_Tot_Properties.krml extracted/FStar_List_Tot.krml extracted/FStar_Seq_Base.krml extracted/FStar_Seq_Properties.krml extracted/FStar_Seq.krml extracted/FStar_Math_Lib.krml extracted/FStar_Math_Lemmas.krml extracted/FStar_BitVector.krml extracted/FStar_UInt.krml extracted/FStar_UInt32.krml extracted/FStar_Int.krml extracted/FStar_Int16.krml extracted/FStar_Preorder.krml extracted/FStar_Ghost.krml extracted/FStar_ErasedLogic.krml extracted/FStar_UInt64.krml extracted/FStar_Set.krml extracted/FStar_PropositionalExtensionality.krml extracted/FStar_PredicateExtensionality.krml extracted/FStar_TSet.krml extracted/FStar_Monotonic_Heap.krml extracted/FStar_Heap.krml extracted/FStar_Map.krml extracted/FStar_Monotonic_HyperHeap.krml extracted/FStar_Monotonic_HyperStack.krml extracted/FStar_HyperStack.krml extracted/FStar_Monotonic_Witnessed.krml extracted/FStar_HyperStack_ST.krml extracted/FStar_HyperStack_All.krml extracted/FStar_Date.krml extracted/FStar_Universe.krml extracted/FStar_GSet.krml extracted/FStar_ModifiesGen.krml extracted/LowStar_Monotonic_Buffer.krml extracted/LowStar_Buffer.krml extracted/Spec_Loops.krml extracted/LowStar_BufferOps.krml extracted/C_Loops.krml extracted/FStar_UInt8.krml extracted/FStar_Kremlin_Endianness.krml extracted/FStar_UInt63.krml extracted/FStar_Exn.krml extracted/FStar_ST.krml extracted/FStar_All.krml extracted/FStar_Dyn.krml extracted/FStar_Int63.krml extracted/FStar_Int64.krml extracted/FStar_Int32.krml extracted/FStar_Int8.krml extracted/FStar_UInt16.krml extracted/FStar_Int_Cast.krml extracted/FStar_UInt128.krml extracted/C_Endianness.krml extracted/FStar_List.krml extracted/FStar_Float.krml extracted/FStar_IO.krml extracted/C.krml extracted/FStar_Char.krml extracted/FStar_String.krml extracted/LowStar_Modifies.krml extracted/C_String.krml extracted/FStar_Bytes.krml extracted/FStar_HyperStack_IO.krml extracted/C_Failure.krml extracted/TestLib.krml extracted/FStar_Int_Cast_Full.krml * F* version: 059db0c8 * KreMLin version: 916c37ac */ #include "FStar_UInt64_FStar_UInt32_FStar_UInt16_FStar_UInt8.h" uint64_t FStar_UInt64_eq_mask(uint64_t a, uint64_t b) { uint64_t x = a ^ b; uint64_t minus_x = ~x + (uint64_t)1U; uint64_t x_or_minus_x = x | minus_x; uint64_t xnx = x_or_minus_x >> (uint32_t)63U; return xnx - (uint64_t)1U; } uint64_t FStar_UInt64_gte_mask(uint64_t a, uint64_t b) { uint64_t x = a; uint64_t y = b; uint64_t x_xor_y = x ^ y; uint64_t x_sub_y = x - y; uint64_t x_sub_y_xor_y = x_sub_y ^ y; uint64_t q = x_xor_y | x_sub_y_xor_y; uint64_t x_xor_q = x ^ q; uint64_t x_xor_q_ = x_xor_q >> (uint32_t)63U; return x_xor_q_ - 
(uint64_t)1U; } uint32_t FStar_UInt32_eq_mask(uint32_t a, uint32_t b) { uint32_t x = a ^ b; uint32_t minus_x = ~x + (uint32_t)1U; uint32_t x_or_minus_x = x | minus_x; uint32_t xnx = x_or_minus_x >> (uint32_t)31U; return xnx - (uint32_t)1U; } uint32_t FStar_UInt32_gte_mask(uint32_t a, uint32_t b) { uint32_t x = a; uint32_t y = b; uint32_t x_xor_y = x ^ y; uint32_t x_sub_y = x - y; uint32_t x_sub_y_xor_y = x_sub_y ^ y; uint32_t q = x_xor_y | x_sub_y_xor_y; uint32_t x_xor_q = x ^ q; uint32_t x_xor_q_ = x_xor_q >> (uint32_t)31U; return x_xor_q_ - (uint32_t)1U; } uint16_t FStar_UInt16_eq_mask(uint16_t a, uint16_t b) { uint16_t x = a ^ b; uint16_t minus_x = ~x + (uint16_t)1U; uint16_t x_or_minus_x = x | minus_x; uint16_t xnx = x_or_minus_x >> (uint32_t)15U; return xnx - (uint16_t)1U; } uint16_t FStar_UInt16_gte_mask(uint16_t a, uint16_t b) { uint16_t x = a; uint16_t y = b; uint16_t x_xor_y = x ^ y; uint16_t x_sub_y = x - y; uint16_t x_sub_y_xor_y = x_sub_y ^ y; uint16_t q = x_xor_y | x_sub_y_xor_y; uint16_t x_xor_q = x ^ q; uint16_t x_xor_q_ = x_xor_q >> (uint32_t)15U; return x_xor_q_ - (uint16_t)1U; } uint8_t FStar_UInt8_eq_mask(uint8_t a, uint8_t b) { uint8_t x = a ^ b; uint8_t minus_x = ~x + (uint8_t)1U; uint8_t x_or_minus_x = x | minus_x; uint8_t xnx = x_or_minus_x >> (uint32_t)7U; return xnx - (uint8_t)1U; } uint8_t FStar_UInt8_gte_mask(uint8_t a, uint8_t b) { uint8_t x = a; uint8_t y = b; uint8_t x_xor_y = x ^ y; uint8_t x_sub_y = x - y; uint8_t x_sub_y_xor_y = x_sub_y ^ y; uint8_t q = x_xor_y | x_sub_y_xor_y; uint8_t x_xor_q = x ^ q; uint8_t x_xor_q_ = x_xor_q >> (uint32_t)7U; return x_xor_q_ - (uint8_t)1U; }
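The `*_eq_mask`/`*_gte_mask` family returns an all-ones word when the predicate holds and zero otherwise, so callers can select between values without branching on secrets, as the curve code's `Hacl_EC_Format_fcontract_trim` does earlier in this dump when conditionally subtracting the field prime. A sketch of the idiom (`ct_select` is an illustrative helper):

```C
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* FStar_UInt64_eq_mask-style predicate: all-ones when a == b, else zero. */
static uint64_t eq_mask(uint64_t a, uint64_t b)
{
    uint64_t x = a ^ b;
    uint64_t minus_x = ~x + 1U;
    uint64_t x_or_minus_x = x | minus_x; /* top bit set iff x != 0 */
    return (x_or_minus_x >> 63) - 1U;    /* 0xFFFF... iff a == b */
}

/* Constant-time select: yields t when a == b, else f, without branching
 * on secret data -- the pattern these masks exist to support. */
static uint64_t ct_select(uint64_t a, uint64_t b, uint64_t t, uint64_t f)
{
    uint64_t m = eq_mask(a, b);
    return (t & m) | (f & ~m);
}

int main(void)
{
    printf("%" PRIu64 "\n", ct_select(3, 3, 111, 222)); /* prints 111 */
    printf("%" PRIu64 "\n", ct_select(3, 4, 111, 222)); /* prints 222 */
    return 0;
}
```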
0
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library
repos/gyro/.gyro/zig-mbedtls-mattnite-github.com-a4f5357c/pkg/mbedtls/3rdparty/everest/library/legacy/Hacl_Curve25519.c
/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved.
   Licensed under the Apache 2.0 License. */

/* This file was generated by KreMLin <https://github.com/FStarLang/kremlin>
 * KreMLin invocation: /mnt/e/everest/verify/kremlin/krml -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -fc89 -fparentheses -fno-shadow -header /mnt/e/everest/verify/hdrcLh -minimal -I /mnt/e/everest/verify/hacl-star/code/lib/kremlin -I /mnt/e/everest/verify/kremlin/kremlib/compat -I /mnt/e/everest/verify/hacl-star/specs -I /mnt/e/everest/verify/hacl-star/specs/old -I . -ccopt -march=native -verbose -ldopt -flto -tmpdir x25519-c -I ../bignum -bundle Hacl.Curve25519=* -minimal -add-include "kremlib.h" -skip-compilation x25519-c/out.krml -o x25519-c/Hacl_Curve25519.c
 * F* version: 059db0c8
 * KreMLin version: 916c37ac
 */

#include "Hacl_Curve25519.h"

extern uint64_t FStar_UInt64_eq_mask(uint64_t x0, uint64_t x1);
extern uint64_t FStar_UInt64_gte_mask(uint64_t x0, uint64_t x1);
extern FStar_UInt128_uint128 FStar_UInt128_add(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128 FStar_UInt128_add_mod(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128 FStar_UInt128_logand(FStar_UInt128_uint128 x0, FStar_UInt128_uint128 x1);
extern FStar_UInt128_uint128 FStar_UInt128_shift_right(FStar_UInt128_uint128 x0, uint32_t x1);
extern FStar_UInt128_uint128 FStar_UInt128_uint64_to_uint128(uint64_t x0);
extern uint64_t FStar_UInt128_uint128_to_uint64(FStar_UInt128_uint128 x0);
extern FStar_UInt128_uint128 FStar_UInt128_mul_wide(uint64_t x0, uint64_t x1);

static void Hacl_Bignum_Modulo_carry_top(uint64_t *b)
{
  uint64_t b4 = b[4U];
  uint64_t b0 = b[0U];
  uint64_t b4_ = b4 & (uint64_t)0x7ffffffffffffU;
  uint64_t b0_ = b0 + (uint64_t)19U * (b4 >> (uint32_t)51U);
  b[4U] = b4_;
  b[0U] = b0_;
}

inline static void Hacl_Bignum_Fproduct_copy_from_wide_(uint64_t *output, FStar_UInt128_uint128 *input)
{
  uint32_t i;
  for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
  {
    FStar_UInt128_uint128 xi = input[i];
    output[i] = FStar_UInt128_uint128_to_uint64(xi);
  }
}

inline static void
Hacl_Bignum_Fproduct_sum_scalar_multiplication_(
  FStar_UInt128_uint128 *output,
  uint64_t *input,
  uint64_t s
)
{
  uint32_t i;
  for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
  {
    FStar_UInt128_uint128 xi = output[i];
    uint64_t yi = input[i];
    output[i] = FStar_UInt128_add_mod(xi, FStar_UInt128_mul_wide(yi, s));
  }
}

inline static void Hacl_Bignum_Fproduct_carry_wide_(FStar_UInt128_uint128 *tmp)
{
  uint32_t i;
  for (i = (uint32_t)0U; i < (uint32_t)4U; i = i + (uint32_t)1U)
  {
    uint32_t ctr = i;
    FStar_UInt128_uint128 tctr = tmp[ctr];
    FStar_UInt128_uint128 tctrp1 = tmp[ctr + (uint32_t)1U];
    uint64_t r0 = FStar_UInt128_uint128_to_uint64(tctr) & (uint64_t)0x7ffffffffffffU;
    FStar_UInt128_uint128 c = FStar_UInt128_shift_right(tctr, (uint32_t)51U);
    tmp[ctr] = FStar_UInt128_uint64_to_uint128(r0);
    tmp[ctr + (uint32_t)1U] = FStar_UInt128_add(tctrp1, c);
  }
}

inline static void Hacl_Bignum_Fmul_shift_reduce(uint64_t *output)
{
  uint64_t tmp = output[4U];
  uint64_t b0;
  {
    uint32_t i;
    for (i = (uint32_t)0U; i < (uint32_t)4U; i = i + (uint32_t)1U)
    {
      uint32_t ctr = (uint32_t)5U - i - (uint32_t)1U;
      uint64_t z = output[ctr - (uint32_t)1U];
      output[ctr] = z;
    }
  }
  output[0U] = tmp;
  b0 = output[0U];
  output[0U] = (uint64_t)19U * b0;
}

static void
Hacl_Bignum_Fmul_mul_shift_reduce_(
  FStar_UInt128_uint128 *output,
  uint64_t *input,
  uint64_t *input2
)
{
  uint32_t i;
  uint64_t input2i;
  {
    uint32_t i0;
    for (i0 = (uint32_t)0U; i0 < (uint32_t)4U; i0 = i0 + (uint32_t)1U)
    {
      uint64_t input2i0 = input2[i0];
      Hacl_Bignum_Fproduct_sum_scalar_multiplication_(output, input, input2i0);
      Hacl_Bignum_Fmul_shift_reduce(input);
    }
  }
  i = (uint32_t)4U;
  input2i = input2[i];
  Hacl_Bignum_Fproduct_sum_scalar_multiplication_(output, input, input2i);
}

inline static void Hacl_Bignum_Fmul_fmul(uint64_t *output, uint64_t *input, uint64_t *input2)
{
  uint64_t tmp[5U] = { 0U };
  memcpy(tmp, input, (uint32_t)5U * sizeof input[0U]);
  KRML_CHECK_SIZE(sizeof (FStar_UInt128_uint128), (uint32_t)5U);
  {
    FStar_UInt128_uint128 t[5U];
    {
      uint32_t _i;
      for (_i = 0U; _i < (uint32_t)5U; ++_i)
        t[_i] = FStar_UInt128_uint64_to_uint128((uint64_t)0U);
    }
    {
      FStar_UInt128_uint128 b4;
      FStar_UInt128_uint128 b0;
      FStar_UInt128_uint128 b4_;
      FStar_UInt128_uint128 b0_;
      uint64_t i0;
      uint64_t i1;
      uint64_t i0_;
      uint64_t i1_;
      Hacl_Bignum_Fmul_mul_shift_reduce_(t, tmp, input2);
      Hacl_Bignum_Fproduct_carry_wide_(t);
      b4 = t[4U];
      b0 = t[0U];
      b4_ = FStar_UInt128_logand(b4, FStar_UInt128_uint64_to_uint128((uint64_t)0x7ffffffffffffU));
      b0_ =
        FStar_UInt128_add(b0,
          FStar_UInt128_mul_wide((uint64_t)19U,
            FStar_UInt128_uint128_to_uint64(FStar_UInt128_shift_right(b4, (uint32_t)51U))));
      t[4U] = b4_;
      t[0U] = b0_;
      Hacl_Bignum_Fproduct_copy_from_wide_(output, t);
      i0 = output[0U];
      i1 = output[1U];
      i0_ = i0 & (uint64_t)0x7ffffffffffffU;
      i1_ = i1 + (i0 >> (uint32_t)51U);
      output[0U] = i0_;
      output[1U] = i1_;
    }
  }
}

inline static void Hacl_Bignum_Fsquare_fsquare__(FStar_UInt128_uint128 *tmp, uint64_t *output)
{
  uint64_t r0 = output[0U];
  uint64_t r1 = output[1U];
  uint64_t r2 = output[2U];
  uint64_t r3 = output[3U];
  uint64_t r4 = output[4U];
  uint64_t d0 = r0 * (uint64_t)2U;
  uint64_t d1 = r1 * (uint64_t)2U;
  uint64_t d2 = r2 * (uint64_t)2U * (uint64_t)19U;
  uint64_t d419 = r4 * (uint64_t)19U;
  uint64_t d4 = d419 * (uint64_t)2U;
  FStar_UInt128_uint128
  s0 =
    FStar_UInt128_add(FStar_UInt128_add(FStar_UInt128_mul_wide(r0, r0),
        FStar_UInt128_mul_wide(d4, r1)),
      FStar_UInt128_mul_wide(d2, r3));
  FStar_UInt128_uint128
  s1 =
    FStar_UInt128_add(FStar_UInt128_add(FStar_UInt128_mul_wide(d0, r1),
        FStar_UInt128_mul_wide(d4, r2)),
      FStar_UInt128_mul_wide(r3 * (uint64_t)19U, r3));
  FStar_UInt128_uint128
  s2 =
    FStar_UInt128_add(FStar_UInt128_add(FStar_UInt128_mul_wide(d0, r2),
        FStar_UInt128_mul_wide(r1, r1)),
      FStar_UInt128_mul_wide(d4, r3));
  FStar_UInt128_uint128
  s3 =
    FStar_UInt128_add(FStar_UInt128_add(FStar_UInt128_mul_wide(d0, r3),
        FStar_UInt128_mul_wide(d1, r2)),
      FStar_UInt128_mul_wide(r4, d419));
  FStar_UInt128_uint128
  s4 =
    FStar_UInt128_add(FStar_UInt128_add(FStar_UInt128_mul_wide(d0, r4),
        FStar_UInt128_mul_wide(d1, r3)),
      FStar_UInt128_mul_wide(r2, r2));
  tmp[0U] = s0;
  tmp[1U] = s1;
  tmp[2U] = s2;
  tmp[3U] = s3;
  tmp[4U] = s4;
}

inline static void Hacl_Bignum_Fsquare_fsquare_(FStar_UInt128_uint128 *tmp, uint64_t *output)
{
  FStar_UInt128_uint128 b4;
  FStar_UInt128_uint128 b0;
  FStar_UInt128_uint128 b4_;
  FStar_UInt128_uint128 b0_;
  uint64_t i0;
  uint64_t i1;
  uint64_t i0_;
  uint64_t i1_;
  Hacl_Bignum_Fsquare_fsquare__(tmp, output);
  Hacl_Bignum_Fproduct_carry_wide_(tmp);
  b4 = tmp[4U];
  b0 = tmp[0U];
  b4_ = FStar_UInt128_logand(b4, FStar_UInt128_uint64_to_uint128((uint64_t)0x7ffffffffffffU));
  b0_ =
    FStar_UInt128_add(b0,
      FStar_UInt128_mul_wide((uint64_t)19U,
        FStar_UInt128_uint128_to_uint64(FStar_UInt128_shift_right(b4, (uint32_t)51U))));
  tmp[4U] = b4_;
  tmp[0U] = b0_;
  Hacl_Bignum_Fproduct_copy_from_wide_(output, tmp);
  i0 = output[0U];
  i1 = output[1U];
  i0_ = i0 & (uint64_t)0x7ffffffffffffU;
  i1_ = i1 + (i0 >> (uint32_t)51U);
  output[0U] = i0_;
  output[1U] = i1_;
}

static void
Hacl_Bignum_Fsquare_fsquare_times_(
  uint64_t *input,
  FStar_UInt128_uint128 *tmp,
  uint32_t count1
)
{
  uint32_t i;
  Hacl_Bignum_Fsquare_fsquare_(tmp, input);
  for (i = (uint32_t)1U; i < count1; i = i + (uint32_t)1U)
    Hacl_Bignum_Fsquare_fsquare_(tmp, input);
}

inline static void Hacl_Bignum_Fsquare_fsquare_times(uint64_t *output, uint64_t *input, uint32_t count1)
{
  KRML_CHECK_SIZE(sizeof (FStar_UInt128_uint128), (uint32_t)5U);
  {
    FStar_UInt128_uint128 t[5U];
    {
      uint32_t _i;
      for (_i = 0U; _i < (uint32_t)5U; ++_i)
        t[_i] = FStar_UInt128_uint64_to_uint128((uint64_t)0U);
    }
    memcpy(output, input, (uint32_t)5U * sizeof input[0U]);
    Hacl_Bignum_Fsquare_fsquare_times_(output, t, count1);
  }
}

inline static void Hacl_Bignum_Fsquare_fsquare_times_inplace(uint64_t *output, uint32_t count1)
{
  KRML_CHECK_SIZE(sizeof (FStar_UInt128_uint128), (uint32_t)5U);
  {
    FStar_UInt128_uint128 t[5U];
    {
      uint32_t _i;
      for (_i = 0U; _i < (uint32_t)5U; ++_i)
        t[_i] = FStar_UInt128_uint64_to_uint128((uint64_t)0U);
    }
    Hacl_Bignum_Fsquare_fsquare_times_(output, t, count1);
  }
}

inline static void Hacl_Bignum_Crecip_crecip(uint64_t *out, uint64_t *z)
{
  uint64_t buf[20U] = { 0U };
  uint64_t *a0 = buf;
  uint64_t *t00 = buf + (uint32_t)5U;
  uint64_t *b0 = buf + (uint32_t)10U;
  uint64_t *t01;
  uint64_t *b1;
  uint64_t *c0;
  uint64_t *a;
  uint64_t *t0;
  uint64_t *b;
  uint64_t *c;
  Hacl_Bignum_Fsquare_fsquare_times(a0, z, (uint32_t)1U);
  Hacl_Bignum_Fsquare_fsquare_times(t00, a0, (uint32_t)2U);
  Hacl_Bignum_Fmul_fmul(b0, t00, z);
  Hacl_Bignum_Fmul_fmul(a0, b0, a0);
  Hacl_Bignum_Fsquare_fsquare_times(t00, a0, (uint32_t)1U);
  Hacl_Bignum_Fmul_fmul(b0, t00, b0);
  Hacl_Bignum_Fsquare_fsquare_times(t00, b0, (uint32_t)5U);
  t01 = buf + (uint32_t)5U;
  b1 = buf + (uint32_t)10U;
  c0 = buf + (uint32_t)15U;
  Hacl_Bignum_Fmul_fmul(b1, t01, b1);
  Hacl_Bignum_Fsquare_fsquare_times(t01, b1, (uint32_t)10U);
  Hacl_Bignum_Fmul_fmul(c0, t01, b1);
  Hacl_Bignum_Fsquare_fsquare_times(t01, c0, (uint32_t)20U);
  Hacl_Bignum_Fmul_fmul(t01, t01, c0);
  Hacl_Bignum_Fsquare_fsquare_times_inplace(t01, (uint32_t)10U);
  Hacl_Bignum_Fmul_fmul(b1, t01, b1);
  Hacl_Bignum_Fsquare_fsquare_times(t01, b1, (uint32_t)50U);
  a = buf;
  t0 = buf + (uint32_t)5U;
  b = buf + (uint32_t)10U;
  c = buf + (uint32_t)15U;
  Hacl_Bignum_Fmul_fmul(c, t0, b);
  Hacl_Bignum_Fsquare_fsquare_times(t0, c, (uint32_t)100U);
  Hacl_Bignum_Fmul_fmul(t0, t0, c);
  Hacl_Bignum_Fsquare_fsquare_times_inplace(t0, (uint32_t)50U);
  Hacl_Bignum_Fmul_fmul(t0, t0, b);
  Hacl_Bignum_Fsquare_fsquare_times_inplace(t0, (uint32_t)5U);
  Hacl_Bignum_Fmul_fmul(out, t0, a);
}

inline static void Hacl_Bignum_fsum(uint64_t *a, uint64_t *b)
{
  uint32_t i;
  for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
  {
    uint64_t xi = a[i];
    uint64_t yi = b[i];
    a[i] = xi + yi;
  }
}

inline static void Hacl_Bignum_fdifference(uint64_t *a, uint64_t *b)
{
  uint64_t tmp[5U] = { 0U };
  uint64_t b0;
  uint64_t b1;
  uint64_t b2;
  uint64_t b3;
  uint64_t b4;
  memcpy(tmp, b, (uint32_t)5U * sizeof b[0U]);
  b0 = tmp[0U];
  b1 = tmp[1U];
  b2 = tmp[2U];
  b3 = tmp[3U];
  b4 = tmp[4U];
  tmp[0U] = b0 + (uint64_t)0x3fffffffffff68U;
  tmp[1U] = b1 + (uint64_t)0x3ffffffffffff8U;
  tmp[2U] = b2 + (uint64_t)0x3ffffffffffff8U;
  tmp[3U] = b3 + (uint64_t)0x3ffffffffffff8U;
  tmp[4U] = b4 + (uint64_t)0x3ffffffffffff8U;
  {
    uint32_t i;
    for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
    {
      uint64_t xi = a[i];
      uint64_t yi = tmp[i];
      a[i] = yi - xi;
    }
  }
}

inline static void Hacl_Bignum_fscalar(uint64_t *output, uint64_t *b, uint64_t s)
{
  KRML_CHECK_SIZE(sizeof (FStar_UInt128_uint128), (uint32_t)5U);
  {
    FStar_UInt128_uint128 tmp[5U];
    {
      uint32_t _i;
      for (_i = 0U; _i < (uint32_t)5U; ++_i)
        tmp[_i] = FStar_UInt128_uint64_to_uint128((uint64_t)0U);
    }
    {
      FStar_UInt128_uint128 b4;
      FStar_UInt128_uint128 b0;
      FStar_UInt128_uint128 b4_;
      FStar_UInt128_uint128 b0_;
      {
        uint32_t i;
        for (i = (uint32_t)0U; i < (uint32_t)5U; i = i + (uint32_t)1U)
        {
          uint64_t xi = b[i];
          tmp[i] = FStar_UInt128_mul_wide(xi, s);
        }
      }
      Hacl_Bignum_Fproduct_carry_wide_(tmp);
      b4 = tmp[4U];
      b0 = tmp[0U];
      b4_ = FStar_UInt128_logand(b4, FStar_UInt128_uint64_to_uint128((uint64_t)0x7ffffffffffffU));
      b0_ =
        FStar_UInt128_add(b0,
          FStar_UInt128_mul_wide((uint64_t)19U,
            FStar_UInt128_uint128_to_uint64(FStar_UInt128_shift_right(b4, (uint32_t)51U))));
      tmp[4U] = b4_;
      tmp[0U] = b0_;
      Hacl_Bignum_Fproduct_copy_from_wide_(output, tmp);
    }
  }
}

inline static void Hacl_Bignum_fmul(uint64_t *output, uint64_t *a, uint64_t *b)
{
  Hacl_Bignum_Fmul_fmul(output, a, b);
}

inline static void Hacl_Bignum_crecip(uint64_t *output, uint64_t *input)
{
  Hacl_Bignum_Crecip_crecip(output, input);
}

static void
Hacl_EC_Point_swap_conditional_step(uint64_t *a, uint64_t *b, uint64_t swap1, uint32_t ctr)
{
  uint32_t i = ctr - (uint32_t)1U;
  uint64_t ai = a[i];
  uint64_t bi = b[i];
  uint64_t x = swap1 & (ai ^ bi);
  uint64_t ai1 = ai ^ x;
  uint64_t bi1 = bi ^ x;
  a[i] = ai1;
  b[i] = bi1;
}

static void
Hacl_EC_Point_swap_conditional_(uint64_t *a, uint64_t *b, uint64_t swap1, uint32_t ctr)
{
  if (!(ctr == (uint32_t)0U))
  {
    uint32_t i;
    Hacl_EC_Point_swap_conditional_step(a, b, swap1, ctr);
    i = ctr - (uint32_t)1U;
    Hacl_EC_Point_swap_conditional_(a, b, swap1, i);
  }
}

static void Hacl_EC_Point_swap_conditional(uint64_t *a, uint64_t *b, uint64_t iswap)
{
  uint64_t swap1 = (uint64_t)0U - iswap;
  Hacl_EC_Point_swap_conditional_(a, b, swap1, (uint32_t)5U);
  Hacl_EC_Point_swap_conditional_(a + (uint32_t)5U, b + (uint32_t)5U, swap1, (uint32_t)5U);
}

static void Hacl_EC_Point_copy(uint64_t *output, uint64_t *input)
{
  memcpy(output, input, (uint32_t)5U * sizeof input[0U]);
  memcpy(output + (uint32_t)5U,
    input + (uint32_t)5U,
    (uint32_t)5U * sizeof (input + (uint32_t)5U)[0U]);
}

static void Hacl_EC_Format_fexpand(uint64_t *output, uint8_t *input)
{
  uint64_t i0 = load64_le(input);
  uint8_t *x00 = input + (uint32_t)6U;
  uint64_t i1 = load64_le(x00);
  uint8_t *x01 = input + (uint32_t)12U;
  uint64_t i2 = load64_le(x01);
  uint8_t *x02 = input + (uint32_t)19U;
  uint64_t i3 = load64_le(x02);
  uint8_t *x0 = input + (uint32_t)24U;
  uint64_t i4 = load64_le(x0);
  uint64_t output0 = i0 & (uint64_t)0x7ffffffffffffU;
  uint64_t output1 = i1 >> (uint32_t)3U & (uint64_t)0x7ffffffffffffU;
  uint64_t output2 = i2 >> (uint32_t)6U & (uint64_t)0x7ffffffffffffU;
  uint64_t output3 = i3 >> (uint32_t)1U & (uint64_t)0x7ffffffffffffU;
  uint64_t output4 = i4 >> (uint32_t)12U & (uint64_t)0x7ffffffffffffU;
  output[0U] = output0;
  output[1U] = output1;
  output[2U] = output2;
  output[3U] = output3;
  output[4U] = output4;
}

static void Hacl_EC_Format_fcontract_first_carry_pass(uint64_t *input)
{
  uint64_t t0 = input[0U];
  uint64_t t1 = input[1U];
  uint64_t t2 = input[2U];
  uint64_t t3 = input[3U];
  uint64_t t4 = input[4U];
  uint64_t t1_ = t1 + (t0 >> (uint32_t)51U);
  uint64_t t0_ = t0 & (uint64_t)0x7ffffffffffffU;
  uint64_t t2_ = t2 + (t1_ >> (uint32_t)51U);
  uint64_t t1__ = t1_ & (uint64_t)0x7ffffffffffffU;
  uint64_t t3_ = t3 + (t2_ >> (uint32_t)51U);
  uint64_t t2__ = t2_ & (uint64_t)0x7ffffffffffffU;
  uint64_t t4_ = t4 + (t3_ >> (uint32_t)51U);
  uint64_t t3__ = t3_ & (uint64_t)0x7ffffffffffffU;
  input[0U] = t0_;
  input[1U] = t1__;
  input[2U] = t2__;
  input[3U] = t3__;
  input[4U] = t4_;
}

static void Hacl_EC_Format_fcontract_first_carry_full(uint64_t *input)
{
  Hacl_EC_Format_fcontract_first_carry_pass(input);
  Hacl_Bignum_Modulo_carry_top(input);
}

static void Hacl_EC_Format_fcontract_second_carry_pass(uint64_t *input)
{
  uint64_t t0 = input[0U];
  uint64_t t1 = input[1U];
  uint64_t t2 = input[2U];
  uint64_t t3 = input[3U];
  uint64_t t4 = input[4U];
  uint64_t t1_ = t1 + (t0 >> (uint32_t)51U);
  uint64_t t0_ = t0 & (uint64_t)0x7ffffffffffffU;
  uint64_t t2_ = t2 + (t1_ >> (uint32_t)51U);
  uint64_t t1__ = t1_ & (uint64_t)0x7ffffffffffffU;
  uint64_t t3_ = t3 + (t2_ >> (uint32_t)51U);
  uint64_t t2__ = t2_ & (uint64_t)0x7ffffffffffffU;
  uint64_t t4_ = t4 + (t3_ >> (uint32_t)51U);
  uint64_t t3__ = t3_ & (uint64_t)0x7ffffffffffffU;
  input[0U] = t0_;
  input[1U] = t1__;
  input[2U] = t2__;
  input[3U] = t3__;
  input[4U] = t4_;
}

static void Hacl_EC_Format_fcontract_second_carry_full(uint64_t *input)
{
  uint64_t i0;
  uint64_t i1;
  uint64_t i0_;
  uint64_t i1_;
  Hacl_EC_Format_fcontract_second_carry_pass(input);
  Hacl_Bignum_Modulo_carry_top(input);
  i0 = input[0U];
  i1 = input[1U];
  i0_ = i0 & (uint64_t)0x7ffffffffffffU;
  i1_ = i1 + (i0 >> (uint32_t)51U);
  input[0U] = i0_;
  input[1U] = i1_;
}

static void Hacl_EC_Format_fcontract_trim(uint64_t *input)
{
  uint64_t a0 = input[0U];
  uint64_t a1 = input[1U];
  uint64_t a2 = input[2U];
  uint64_t a3 = input[3U];
  uint64_t a4 = input[4U];
  uint64_t mask0 = FStar_UInt64_gte_mask(a0, (uint64_t)0x7ffffffffffedU);
  uint64_t mask1 = FStar_UInt64_eq_mask(a1, (uint64_t)0x7ffffffffffffU);
  uint64_t mask2 = FStar_UInt64_eq_mask(a2, (uint64_t)0x7ffffffffffffU);
  uint64_t mask3 = FStar_UInt64_eq_mask(a3, (uint64_t)0x7ffffffffffffU);
  uint64_t mask4 = FStar_UInt64_eq_mask(a4, (uint64_t)0x7ffffffffffffU);
  uint64_t mask = (((mask0 & mask1) & mask2) & mask3) & mask4;
  uint64_t a0_ = a0 - ((uint64_t)0x7ffffffffffedU & mask);
  uint64_t a1_ = a1 - ((uint64_t)0x7ffffffffffffU & mask);
  uint64_t a2_ = a2 - ((uint64_t)0x7ffffffffffffU & mask);
  uint64_t a3_ = a3 - ((uint64_t)0x7ffffffffffffU & mask);
  uint64_t a4_ = a4 - ((uint64_t)0x7ffffffffffffU & mask);
  input[0U] = a0_;
  input[1U] = a1_;
  input[2U] = a2_;
  input[3U] = a3_;
  input[4U] = a4_;
}

static void Hacl_EC_Format_fcontract_store(uint8_t *output, uint64_t *input)
{
  uint64_t t0 = input[0U];
  uint64_t t1 = input[1U];
  uint64_t t2 = input[2U];
  uint64_t t3 = input[3U];
  uint64_t t4 = input[4U];
  uint64_t o0 = t1 << (uint32_t)51U | t0;
  uint64_t o1 = t2 << (uint32_t)38U | t1 >> (uint32_t)13U;
  uint64_t o2 = t3 << (uint32_t)25U | t2 >> (uint32_t)26U;
  uint64_t o3 = t4 << (uint32_t)12U | t3 >> (uint32_t)39U;
  uint8_t *b0 = output;
  uint8_t *b1 = output + (uint32_t)8U;
  uint8_t *b2 = output + (uint32_t)16U;
  uint8_t *b3 = output + (uint32_t)24U;
  store64_le(b0, o0);
  store64_le(b1, o1);
  store64_le(b2, o2);
  store64_le(b3, o3);
}

static void Hacl_EC_Format_fcontract(uint8_t *output, uint64_t *input)
{
  Hacl_EC_Format_fcontract_first_carry_full(input);
  Hacl_EC_Format_fcontract_second_carry_full(input);
  Hacl_EC_Format_fcontract_trim(input);
  Hacl_EC_Format_fcontract_store(output, input);
}

static void Hacl_EC_Format_scalar_of_point(uint8_t *scalar, uint64_t *point)
{
  uint64_t *x = point;
  uint64_t *z = point + (uint32_t)5U;
  uint64_t buf[10U] = { 0U };
  uint64_t *zmone = buf;
  uint64_t *sc = buf + (uint32_t)5U;
  Hacl_Bignum_crecip(zmone, z);
  Hacl_Bignum_fmul(sc, x, zmone);
  Hacl_EC_Format_fcontract(scalar, sc);
}

static void
Hacl_EC_AddAndDouble_fmonty(
  uint64_t *pp,
  uint64_t *ppq,
  uint64_t *p,
  uint64_t *pq,
  uint64_t *qmqp
)
{
  uint64_t *qx = qmqp;
  uint64_t *x2 = pp;
  uint64_t *z2 = pp + (uint32_t)5U;
  uint64_t *x3 = ppq;
  uint64_t *z3 = ppq + (uint32_t)5U;
  uint64_t *x = p;
  uint64_t *z = p + (uint32_t)5U;
  uint64_t *xprime = pq;
  uint64_t *zprime = pq + (uint32_t)5U;
  uint64_t buf[40U] = { 0U };
  uint64_t *origx = buf;
  uint64_t *origxprime0 = buf + (uint32_t)5U;
  uint64_t *xxprime0 = buf + (uint32_t)25U;
  uint64_t *zzprime0 = buf + (uint32_t)30U;
  uint64_t *origxprime;
  uint64_t *xx0;
  uint64_t *zz0;
  uint64_t *xxprime;
  uint64_t *zzprime;
  uint64_t *zzzprime;
  uint64_t *zzz;
  uint64_t *xx;
  uint64_t *zz;
  uint64_t scalar;
  memcpy(origx, x, (uint32_t)5U * sizeof x[0U]);
  Hacl_Bignum_fsum(x, z);
  Hacl_Bignum_fdifference(z, origx);
  memcpy(origxprime0, xprime, (uint32_t)5U * sizeof xprime[0U]);
  Hacl_Bignum_fsum(xprime, zprime);
  Hacl_Bignum_fdifference(zprime, origxprime0);
  Hacl_Bignum_fmul(xxprime0, xprime, z);
  Hacl_Bignum_fmul(zzprime0, x, zprime);
  origxprime = buf + (uint32_t)5U;
  xx0 = buf + (uint32_t)15U;
  zz0 = buf + (uint32_t)20U;
  xxprime = buf + (uint32_t)25U;
  zzprime = buf + (uint32_t)30U;
  zzzprime = buf + (uint32_t)35U;
  memcpy(origxprime, xxprime, (uint32_t)5U * sizeof xxprime[0U]);
  Hacl_Bignum_fsum(xxprime, zzprime);
  Hacl_Bignum_fdifference(zzprime, origxprime);
  Hacl_Bignum_Fsquare_fsquare_times(x3, xxprime, (uint32_t)1U);
  Hacl_Bignum_Fsquare_fsquare_times(zzzprime, zzprime, (uint32_t)1U);
  Hacl_Bignum_fmul(z3, zzzprime, qx);
  Hacl_Bignum_Fsquare_fsquare_times(xx0, x, (uint32_t)1U);
  Hacl_Bignum_Fsquare_fsquare_times(zz0, z, (uint32_t)1U);
  zzz = buf + (uint32_t)10U;
  xx = buf + (uint32_t)15U;
  zz = buf + (uint32_t)20U;
  Hacl_Bignum_fmul(x2, xx, zz);
  Hacl_Bignum_fdifference(zz, xx);
  scalar = (uint64_t)121665U;
  Hacl_Bignum_fscalar(zzz, zz, scalar);
  Hacl_Bignum_fsum(zzz, xx);
  Hacl_Bignum_fmul(z2, zzz, zz);
}

static void
Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(
  uint64_t *nq,
  uint64_t *nqpq,
  uint64_t *nq2,
  uint64_t *nqpq2,
  uint64_t *q,
  uint8_t byt
)
{
  uint64_t bit0 = (uint64_t)(byt >> (uint32_t)7U);
  uint64_t bit;
  Hacl_EC_Point_swap_conditional(nq, nqpq, bit0);
  Hacl_EC_AddAndDouble_fmonty(nq2, nqpq2, nq, nqpq, q);
  bit = (uint64_t)(byt >> (uint32_t)7U);
  Hacl_EC_Point_swap_conditional(nq2, nqpq2, bit);
}

static void
Hacl_EC_Ladder_SmallLoop_cmult_small_loop_double_step(
  uint64_t *nq,
  uint64_t *nqpq,
  uint64_t *nq2,
  uint64_t *nqpq2,
  uint64_t *q,
  uint8_t byt
)
{
  uint8_t byt1;
  Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(nq, nqpq, nq2, nqpq2, q, byt);
  byt1 = byt << (uint32_t)1U;
  Hacl_EC_Ladder_SmallLoop_cmult_small_loop_step(nq2, nqpq2, nq, nqpq, q, byt1);
}

static void
Hacl_EC_Ladder_SmallLoop_cmult_small_loop(
  uint64_t *nq,
  uint64_t *nqpq,
  uint64_t *nq2,
  uint64_t *nqpq2,
  uint64_t *q,
  uint8_t byt,
  uint32_t i
)
{
  if (!(i == (uint32_t)0U))
  {
    uint32_t i_ = i - (uint32_t)1U;
    uint8_t byt_;
    Hacl_EC_Ladder_SmallLoop_cmult_small_loop_double_step(nq, nqpq, nq2, nqpq2, q, byt);
    byt_ = byt << (uint32_t)2U;
    Hacl_EC_Ladder_SmallLoop_cmult_small_loop(nq, nqpq, nq2, nqpq2, q, byt_, i_);
  }
}

static void
Hacl_EC_Ladder_BigLoop_cmult_big_loop(
  uint8_t *n1,
  uint64_t *nq,
  uint64_t *nqpq,
  uint64_t *nq2,
  uint64_t *nqpq2,
  uint64_t *q,
  uint32_t i
)
{
  if (!(i == (uint32_t)0U))
  {
    uint32_t i1 = i - (uint32_t)1U;
    uint8_t byte = n1[i1];
    Hacl_EC_Ladder_SmallLoop_cmult_small_loop(nq, nqpq, nq2, nqpq2, q, byte, (uint32_t)4U);
    Hacl_EC_Ladder_BigLoop_cmult_big_loop(n1, nq, nqpq, nq2, nqpq2, q, i1);
  }
}

static void Hacl_EC_Ladder_cmult(uint64_t *result, uint8_t *n1, uint64_t *q)
{
  uint64_t point_buf[40U] = { 0U };
  uint64_t *nq = point_buf;
  uint64_t *nqpq = point_buf + (uint32_t)10U;
  uint64_t *nq2 = point_buf + (uint32_t)20U;
  uint64_t *nqpq2 = point_buf + (uint32_t)30U;
  Hacl_EC_Point_copy(nqpq, q);
  nq[0U] = (uint64_t)1U;
  Hacl_EC_Ladder_BigLoop_cmult_big_loop(n1, nq, nqpq, nq2, nqpq2, q, (uint32_t)32U);
  Hacl_EC_Point_copy(result, nq);
}

void Hacl_Curve25519_crypto_scalarmult(uint8_t *mypublic, uint8_t *secret, uint8_t *basepoint)
{
  uint64_t buf0[10U] = { 0U };
  uint64_t *x0 = buf0;
  uint64_t *z = buf0 + (uint32_t)5U;
  uint64_t *q;
  Hacl_EC_Format_fexpand(x0, basepoint);
  z[0U] = (uint64_t)1U;
  q = buf0;
  {
    uint8_t e[32U] = { 0U };
    uint8_t e0;
    uint8_t e31;
    uint8_t e01;
    uint8_t e311;
    uint8_t e312;
    uint8_t *scalar;
    memcpy(e, secret, (uint32_t)32U * sizeof secret[0U]);
    e0 = e[0U];
    e31 = e[31U];
    e01 = e0 & (uint8_t)248U;
    e311 = e31 & (uint8_t)127U;
    e312 = e311 | (uint8_t)64U;
    e[0U] = e01;
    e[31U] = e312;
    scalar = e;
    {
      uint64_t buf[15U] = { 0U };
      uint64_t *nq = buf;
      uint64_t *x = nq;
      x[0U] = (uint64_t)1U;
      Hacl_EC_Ladder_cmult(nq, scalar, q);
      Hacl_EC_Format_scalar_of_point(mypublic, nq);
    }
  }
}
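The only exported symbol in this vendored file is `Hacl_Curve25519_crypto_scalarmult`: it expands the 32-byte base point into five 51-bit limbs, clamps the secret scalar, runs the Montgomery ladder, and contracts the result back into 32 bytes. For orientation only, here is a hedged Zig sketch (not code from this repository) of driving it through an `extern` declaration; it assumes the object compiled from this C file, along with its kremlib/FStar_UInt128 support code, is linked into the build:

```zig
const std = @import("std");

// Hypothetical extern binding mirroring the C definition above.
extern fn Hacl_Curve25519_crypto_scalarmult(
    mypublic: [*]u8,
    secret: [*]u8,
    basepoint: [*]u8,
) void;

pub fn main() void {
    var secret: [32]u8 = [_]u8{0x42} ** 32; // example scalar; the C code clamps it internally
    var basepoint: [32]u8 = [_]u8{0} ** 32;
    basepoint[0] = 9; // the standard X25519 base point, little-endian
    var public: [32]u8 = undefined;
    Hacl_Curve25519_crypto_scalarmult(&public, &secret, &basepoint);
    std.debug.print("{}\n", .{std.fmt.fmtSliceHexLower(&public)});
}
```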
0
repos/gyro
repos/gyro/src/Engine.zig
const std = @import("std");
const builtin = @import("builtin");
const Dependency = @import("Dependency.zig");
const Project = @import("Project.zig");
const utils = @import("utils.zig");
const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig");

const Engine = @This();
const Allocator = std.mem.Allocator;
const StructField = std.builtin.Type.StructField;
const UnionField = std.builtin.Type.UnionField;
const testing = std.testing;
const assert = std.debug.assert;

pub const DepTable = std.ArrayListUnmanaged(Dependency.Source);
pub const Sources = .{
    @import("pkg.zig"),
    @import("local.zig"),
    @import("url.zig"),
    @import("git.zig"),
};

comptime {
    inline for (Sources) |source| {
        const type_info = @typeInfo(@TypeOf(source.dedupeResolveAndFetch));
        if (type_info.Fn.return_type != void)
            @compileError("dedupeResolveAndFetch has to return void, not !void");
    }
}

pub const Edge = struct {
    const ParentIndex = union(enum) {
        root: enum {
            normal,
            build,
        },
        index: usize,
    };

    from: ParentIndex,
    to: usize,
    alias: []const u8,

    pub fn format(
        edge: Edge,
        comptime fmt: []const u8,
        options: std.fmt.FormatOptions,
        writer: anytype,
    ) @TypeOf(writer).Error!void {
        _ = fmt;
        _ = options;
        switch (edge.from) {
            .root => |which| switch (which) {
                .normal => try writer.print("Edge: deps -> {}: {s}", .{ edge.to, edge.alias }),
                .build => try writer.print("Edge: build_deps -> {}: {s}", .{ edge.to, edge.alias }),
            },
            .index => |idx| try writer.print("Edge: {} -> {}: {s}", .{ idx, edge.to, edge.alias }),
        }
    }
};

pub const Resolutions = blk: {
    var tables_fields: [Sources.len]StructField = undefined;
    var edges_fields: [Sources.len]StructField = undefined;
    inline for (Sources) |source, i| {
        const ResolutionTable = std.ArrayListUnmanaged(source.ResolutionEntry);
        tables_fields[i] = StructField{
            .name = source.name,
            .field_type = ResolutionTable,
            .alignment = @alignOf(ResolutionTable),
            .is_comptime = false,
            .default_value = null,
        };

        const EdgeTable = std.ArrayListUnmanaged(struct {
            dep_idx: usize,
            res_idx: usize,
        });
        edges_fields[i] = StructField{
            .name = source.name,
            .field_type = EdgeTable,
            .alignment = @alignOf(EdgeTable),
            .is_comptime = false,
            .default_value = null,
        };
    }

    const Tables = @Type(std.builtin.Type{
        .Struct = .{
            .layout = .Auto,
            .is_tuple = false,
            .fields = &tables_fields,
            .decls = &.{},
        },
    });
    const Edges = @Type(std.builtin.Type{
        .Struct = .{
            .layout = .Auto,
            .is_tuple = false,
            .fields = &edges_fields,
            .decls = &.{},
        },
    });

    break :blk struct {
        text: []const u8,
        tables: Tables,
        edges: Edges,

        const Self = @This();

        pub fn deinit(self: *Self, allocator: Allocator) void {
            inline for (Sources) |source| {
                @field(self.tables, source.name).deinit(allocator);
                @field(self.edges, source.name).deinit(allocator);
            }
            allocator.free(self.text);
        }

        pub fn fromReader(allocator: Allocator, reader: anytype) !Self {
            var ret = Self{
                .text = try reader.readAllAlloc(allocator, std.math.maxInt(usize)),
                .tables = undefined,
                .edges = undefined,
            };

            inline for (std.meta.fields(Tables)) |field|
                @field(ret.tables, field.name) = field.field_type{};

            inline for (std.meta.fields(Edges)) |field|
                @field(ret.edges, field.name) = field.field_type{};

            errdefer ret.deinit(allocator);

            var line_it = std.mem.tokenize(u8, ret.text, "\n");
            var count: usize = 0;
            iterate: while (line_it.next()) |line| : (count += 1) {
                var it = std.mem.tokenize(u8, line, " ");
                const first = it.next() orelse return error.EmptyLine;

                inline for (Sources) |source| {
                    if (std.mem.eql(u8, first, source.name)) {
                        source.deserializeLockfileEntry(
                            allocator,
                            &it,
                            &@field(ret.tables, source.name),
                        ) catch |err| {
                            std.log.warn(
                                "invalid lockfile entry on line {}, {s} -- ignoring and removing:\n{s}\n",
                                .{ count + 1, @errorName(err), line },
                            );
                            continue :iterate;
                        };
                        break;
                    }
                } else {
                    std.log.err("unsupported lockfile prefix: {s}", .{first});
                    return error.Explained;
                }
            }

            return ret;
        }
    };
};

pub fn MultiQueueImpl(comptime Resolution: type, comptime Error: type) type {
    return std.MultiArrayList(struct {
        edge: Edge,
        thread: ?std.Thread = null,
        result: union(enum) {
            replace_me: usize,
            fill_resolution: usize,
            copy_deps: usize,
            new_entry: Resolution,
            err: Error,
        } = undefined,
        path: ?[]const u8 = null,
        deps: std.ArrayListUnmanaged(Dependency),
    });
}

pub const FetchQueue = blk: {
    var fields: [Sources.len]StructField = undefined;
    var next_fields: [Sources.len]StructField = undefined;
    inline for (Sources) |source, i| {
        const MultiQueue = MultiQueueImpl(
            source.Resolution,
            source.FetchError,
        );
        fields[i] = StructField{
            .name = source.name,
            .field_type = MultiQueue,
            .alignment = @alignOf(MultiQueue),
            .is_comptime = false,
            .default_value = null,
        };
        next_fields[i] = StructField{
            .name = source.name,
            .field_type = std.ArrayListUnmanaged(Edge),
            .alignment = @alignOf(std.ArrayListUnmanaged(Edge)),
            .is_comptime = false,
            .default_value = null,
        };
    }

    const Tables = @Type(std.builtin.Type{
        .Struct = .{
            .layout = .Auto,
            .is_tuple = false,
            .fields = &fields,
            .decls = &.{},
        },
    });
    const NextType = @Type(std.builtin.Type{
        .Struct = .{
            .layout = .Auto,
            .is_tuple = false,
            .fields = &next_fields,
            .decls = &.{},
        },
    });

    break :blk struct {
        tables: Tables,

        const Self = @This();

        pub const Next = struct {
            tables: NextType,

            pub fn init() @This() {
                var ret: @This() = undefined;
                inline for (Sources) |source|
                    @field(ret.tables, source.name) = std.ArrayListUnmanaged(Edge){};

                return ret;
            }

            pub fn deinit(self: *@This(), allocator: Allocator) void {
                inline for (Sources) |source|
                    @field(self.tables, source.name).deinit(allocator);
            }

            pub fn append(
                self: *@This(),
                allocator: Allocator,
                src_type: Dependency.SourceType,
                edge: Edge,
            ) !void {
                inline for (Sources) |source| {
                    if (src_type == @field(Dependency.SourceType, source.name)) {
                        try @field(self.tables, source.name).append(allocator, edge);
                        break;
                    }
                } else {
                    std.log.err("unsupported dependency source type: {}", .{src_type});
                    assert(false);
                    return error.Explained;
                }
            }
        };

        pub fn init() Self {
            var ret: Self = undefined;
            inline for (std.meta.fields(Tables)) |field|
                @field(ret.tables, field.name) = field.field_type{};

            return ret;
        }

        pub fn deinit(self: *Self, allocator: Allocator) void {
            inline for (Sources) |source| {
                @field(self.tables, source.name).deinit(allocator);
            }
        }

        pub fn append(
            self: *Self,
            allocator: Allocator,
            src_type: Dependency.SourceType,
            edge: Edge,
        ) !void {
            inline for (Sources) |source| {
                if (src_type == @field(Dependency.SourceType, source.name)) {
                    try @field(self.tables, source.name).append(allocator, .{
                        .edge = edge,
                        .deps = std.ArrayListUnmanaged(Dependency){},
                    });
                    break;
                }
            } else {
                std.log.err("unsupported dependency source type: {}", .{src_type});
                assert(false);
                return error.Explained;
            }
        }

        pub fn empty(self: Self) bool {
            return inline for (Sources) |source| {
                if (@field(self.tables, source.name).len != 0)
                    break false;
            } else true;
        }

        pub fn clearAndLoad(self: *Self, allocator: Allocator, next: Next) !void {
            // clear current table
            inline for (Sources) |source| {
                @field(self.tables, source.name).shrinkRetainingCapacity(0);
                for (@field(next.tables, source.name).items) |edge| {
                    try @field(self.tables, source.name).append(allocator, .{
                        .edge = edge,
                        .deps = std.ArrayListUnmanaged(Dependency){},
                    });
                }
            }
        }

        pub fn parallelFetch(
            self: *Self,
            arena: *ThreadSafeArenaAllocator,
            dep_table: DepTable,
            resolutions: Resolutions,
        ) !void {
            errdefer inline for (Sources) |source|
                for (@field(self.tables, source.name).items(.thread)) |th|
                    if (th) |t|
                        t.join();

            inline for (Sources) |source| {
                for (@field(self.tables, source.name).items(.thread)) |*th, i| {
                    th.* = try std.Thread.spawn(
                        .{},
                        source.dedupeResolveAndFetch,
                        .{
                            arena,
                            dep_table.items,
                            @field(resolutions.tables, source.name).items,
                            &@field(self.tables, source.name),
                            i,
                        },
                    );
                }
            }

            inline for (Sources) |source|
                for (@field(self.tables, source.name).items(.thread)) |th|
                    th.?.join();
        }

        pub fn cleanupDeps(self: *Self, allocator: Allocator) void {
            _ = self;
            _ = allocator;
            //inline for (Sources) |source|
            //    for (@field(self.tables, source.name).items(.deps)) |*deps|
            //        deps.deinit(allocator);
        }
    };
};

allocator: Allocator,
arena: ThreadSafeArenaAllocator,
project: *Project,
dep_table: DepTable,
edges: std.ArrayListUnmanaged(Edge),
fetch_queue: FetchQueue,
resolutions: Resolutions,
paths: std.AutoHashMapUnmanaged(usize, []const u8),

pub fn init(
    allocator: Allocator,
    project: *Project,
    lockfile_reader: anytype,
) !Engine {
    const initial_deps = project.deps.items.len + project.build_deps.items.len;
    var dep_table = try DepTable.initCapacity(allocator, initial_deps);
    errdefer dep_table.deinit(allocator);

    var fetch_queue = FetchQueue.init();
    errdefer fetch_queue.deinit(allocator);

    for (project.deps.items) |dep| {
        try dep_table.append(allocator, dep.src);
        try fetch_queue.append(allocator, dep.src, .{
            .from = .{
                .root = .normal,
            },
            .to = dep_table.items.len - 1,
            .alias = dep.alias,
        });
    }

    for (project.build_deps.items) |dep| {
        try dep_table.append(allocator, dep.src);
        try fetch_queue.append(allocator, dep.src, .{
            .from = .{
                .root = .build,
            },
            .to = dep_table.items.len - 1,
            .alias = dep.alias,
        });
    }

    // `var` rather than `const`: the errdefer below needs a mutable
    // Resolutions to call deinit(self: *Self) on
    var resolutions = try Resolutions.fromReader(allocator, lockfile_reader);
    errdefer resolutions.deinit(allocator);

    return Engine{
        .allocator = allocator,
        .arena = ThreadSafeArenaAllocator.init(allocator),
        .project = project,
        .dep_table = dep_table,
        .edges = std.ArrayListUnmanaged(Edge){},
        .fetch_queue = fetch_queue,
        .resolutions = resolutions,
        .paths = std.AutoHashMapUnmanaged(usize, []const u8){},
    };
}

pub fn deinit(self: *Engine) void {
    self.dep_table.deinit(self.allocator);
    self.edges.deinit(self.allocator);
    self.fetch_queue.deinit(self.allocator);
    self.resolutions.deinit(self.allocator);
    self.paths.deinit(self.allocator);
    self.arena.deinit();
}

// look at root dependencies and clear the resolution associated with it.
// note: will update the same alias in both dep and build_deps
pub fn clearResolution(self: *Engine, alias: []const u8) !void {
    inline for (Sources) |source| {
        for (@field(self.fetch_queue.tables, source.name).items(.edge)) |edge|
            if (edge.from == .root) {
                if (std.mem.eql(u8, alias, edge.alias)) {
                    const dep = self.dep_table.items[edge.to];
                    if (dep == @field(Dependency.Source, source.name)) {
                        if (source.findResolution(dep, @field(self.resolutions.tables, source.name).items)) |res_idx| {
                            _ = @field(self.resolutions.tables, source.name).orderedRemove(res_idx);
                        }
                    }
                }
            };
    }
}

pub fn fetch(self: *Engine) !void {
    defer self.fetch_queue.cleanupDeps(self.allocator);
    while (!self.fetch_queue.empty()) {
        var next = FetchQueue.Next.init();
        defer next.deinit(self.allocator);

        {
            try self.fetch_queue.parallelFetch(&self.arena, self.dep_table, self.resolutions);

            // inline for workaround because the compiler wasn't generating the right code for this
            var explained = false;
            for (self.fetch_queue.tables.pkg.items(.result)) |_, i|
                Sources[0].updateResolution(self.allocator, &self.resolutions.tables.pkg, self.dep_table.items, &self.fetch_queue.tables.pkg, i) catch |err| {
                    if (err == error.Explained) explained = true else return err;
                };

            for (self.fetch_queue.tables.local.items(.result)) |_, i|
                Sources[1].updateResolution(self.allocator, &self.resolutions.tables.local, self.dep_table.items, &self.fetch_queue.tables.local, i) catch |err| {
                    if (err == error.Explained) explained = true else return err;
                };

            for (self.fetch_queue.tables.url.items(.result)) |_, i|
                Sources[2].updateResolution(self.allocator, &self.resolutions.tables.url, self.dep_table.items, &self.fetch_queue.tables.url, i) catch |err| {
                    if (err == error.Explained) explained = true else return err;
                };

            for (self.fetch_queue.tables.git.items(.result)) |_, i|
                Sources[3].updateResolution(self.allocator, &self.resolutions.tables.git, self.dep_table.items, &self.fetch_queue.tables.git, i) catch |err| {
                    if (err == error.Explained) explained = true else return err;
                };

            if (explained)
                return error.Explained;

            inline for (Sources) |source| {
                for (@field(self.fetch_queue.tables, source.name).items(.path)) |opt_path, i| {
                    if (opt_path) |path| {
                        try self.paths.putNoClobber(
                            self.allocator,
                            @field(self.fetch_queue.tables, source.name).items(.edge)[i].to,
                            path,
                        );
                    }
                }

                // set up next batch of deps to fetch
                for (@field(self.fetch_queue.tables, source.name).items(.deps)) |deps, i| {
                    const dep_index = @field(self.fetch_queue.tables, source.name).items(.edge)[i].to;
                    for (deps.items) |dep| {
                        try self.dep_table.append(self.allocator, dep.src);
                        const edge = Edge{
                            .from = .{
                                .index = dep_index,
                            },
                            .to = self.dep_table.items.len - 1,
                            .alias = dep.alias,
                        };

                        try next.append(self.allocator, dep.src, edge);
                    }
                }

                // copy edges
                try self.edges.appendSlice(
                    self.allocator,
                    @field(self.fetch_queue.tables, source.name).items(.edge),
                );
            }
        }

        try self.fetch_queue.clearAndLoad(self.allocator, next);
    }

    // TODO: check for circular dependencies

    // TODO: deleteTree doesn't work on windows with hidden or read-only files
    if (builtin.target.os.tag != .windows) {
        // clean up cache
        var paths = std.StringHashMap(void).init(self.allocator);
        defer paths.deinit();

        inline for (Sources) |source| {
            if (@hasDecl(source, "resolutionToCachePath")) {
                for (@field(self.resolutions.tables, source.name).items) |entry| {
                    if (entry.dep_idx != null) {
                        try paths.put(try source.resolutionToCachePath(self.arena.allocator(), entry), {});
                    }
                }
            }
        }

        var cache_dir = try std.fs.cwd().makeOpenPathIterable(".gyro", .{});
        defer cache_dir.close();

        var it = cache_dir.iterate();
        while (try it.next()) |entry| switch (entry.kind) {
            .Directory => if (!paths.contains(entry.name)) {
                try cache_dir.dir.deleteTree(entry.name);
            },
            else => {},
        };
    }
}

pub fn writeLockfile(self: Engine, writer: anytype) !void {
    inline for (Sources) |source|
        try source.serializeResolutions(@field(self.resolutions.tables, source.name).items, writer);
}

pub fn writeDepBeginRoot(self: *Engine, writer: anytype, indent: usize, edge: Edge) !void {
    const escaped = try utils.escape(self.allocator, edge.alias);
    defer self.allocator.free(escaped);

    try writer.writeByteNTimes(' ', 4 * indent);
    try writer.print("pub const {s} = Pkg{{\n", .{escaped});
    try writer.writeByteNTimes(' ', 4 * (indent + 1));
    try writer.print(".name = \"{s}\",\n", .{edge.alias});
    try writer.writeByteNTimes(' ', 4 * (indent + 1));
    try writer.print(".source = FileSource{{\n", .{});

    const path = if (builtin.target.os.tag == .windows)
        try std.mem.replaceOwned(u8, self.allocator, self.paths.get(edge.to).?, "\\", "\\\\")
    else
        self.paths.get(edge.to).?;
    defer if (builtin.target.os.tag == .windows) self.allocator.free(path);

    try writer.writeByteNTimes(' ', 4 * (indent + 2));
    try writer.print(".path = \"{s}\",\n", .{path});
    try writer.writeByteNTimes(' ', 4 * (indent + 1));
    try writer.print("}},\n", .{});
}

pub fn writeDepEndRoot(writer: anytype, indent: usize) !void {
    try writer.writeByteNTimes(' ', 4 * (1 + indent));
    try writer.print("}},\n", .{});
    try writer.writeByteNTimes(' ', 4 * indent);
    try writer.print("}};\n\n", .{});
}

pub fn writeDepBegin(self: Engine, writer: anytype, indent: usize, edge: Edge) !void {
    try writer.writeByteNTimes(' ', 4 * indent);
    try writer.print("Pkg{{\n", .{});
    try writer.writeByteNTimes(' ', 4 * (indent + 1));
    try writer.print(".name = \"{s}\",\n", .{edge.alias});
    try writer.writeByteNTimes(' ', 4 * (indent + 1));
    try writer.print(".source = FileSource{{\n", .{});

    const path = if (builtin.target.os.tag == .windows)
        try std.mem.replaceOwned(u8, self.allocator, self.paths.get(edge.to).?, "\\", "\\\\")
    else
        self.paths.get(edge.to).?;
    defer if (builtin.target.os.tag == .windows) self.allocator.free(path);

    try writer.writeByteNTimes(' ', 4 * (indent + 2));
    try writer.print(".path = \"{s}\",\n", .{path});
    try writer.writeByteNTimes(' ', 4 * (indent + 1));
    try writer.print("}},\n", .{});
}

pub fn writeDepEnd(writer: anytype, indent: usize) !void {
    try writer.writeByteNTimes(' ', 4 * (1 + indent));
    try writer.print("}},\n", .{});
}

pub fn writeDepsZig(self: *Engine, writer: anytype) !void {
    try writer.writeAll(
        \\const std = @import("std");
        \\const Pkg = std.build.Pkg;
        \\const FileSource = std.build.FileSource;
        \\
        \\
    );

    const has_build_pkgs = for (self.edges.items) |edge| {
        switch (edge.from) {
            .root => |root| if (root == .build) break true,
            else => {},
        }
    } else false;

    const has_pkgs = for (self.edges.items) |edge| {
        switch (edge.from) {
            .root => |root| if (root == .normal) break true,
            else => {},
        }
    } else false;

    if (has_build_pkgs) {
        try writer.writeAll(
            \\pub const build_pkgs = struct {
            \\
        );

        for (self.edges.items) |edge, i| {
            switch (edge.from) {
                .root => |root| if (root == .build) {
                    const has_deps = for (self.edges.items[i..]) |other| {
                        switch (other.from) {
                            .index => |parent_idx| if (parent_idx == i) break true,
                            else => {},
                        }
                    } else false;

                    if (has_deps) {
                        try writer.print(
                            \\    pub const {s} = @compileError("can't directly import because it has dependencies, you'll need to directly import: https://github.com/mattnite/gyro#build-dependencies");
                            \\
                        , .{std.zig.fmtId(edge.alias)});
                    } else {
                        const path = if (builtin.target.os.tag == .windows)
                            try std.mem.replaceOwned(u8, self.allocator, self.paths.get(edge.to).?, "\\", "\\\\")
                        else
                            self.paths.get(edge.to).?;

                        // we can't import it if it has dependencies, they'll have to use gyro build deps then
                        try writer.print(
                            \\    pub const {s} = @import("{s}");
                            \\
                        , .{ std.zig.fmtId(edge.alias), path });
                    }
                },
                else => {},
            }
        }

        try writer.writeAll(
            \\};
            \\
        );
    }

    if (has_pkgs) {
        if (has_build_pkgs) {
            try writer.writeByte('\n');
        }

        try writer.writeAll(
            \\pub const pkgs = struct {
            \\
        );

        for (self.edges.items) |edge| {
            switch (edge.from) {
                .root => |root| if (root == .normal) {
                    var stack = std.ArrayList(struct {
                        current: usize,
                        edge_idx: usize,
                        has_deps: bool,
                    }).init(self.allocator);
                    defer stack.deinit();

                    var current = edge.to;
                    var edge_idx = 1 + edge.to;
                    var has_deps = false;
                    try self.writeDepBeginRoot(writer, 1 + stack.items.len, edge);
                    while (true) {
                        while (edge_idx < self.edges.items.len) : (edge_idx += 1) {
                            const root_level = stack.items.len == 0;
                            switch (self.edges.items[edge_idx].from) {
                                .index => |idx| if (idx == current) {
                                    if (!has_deps) {
                                        const offset: usize = if (root_level) 2 else 3;
                                        try writer.writeByteNTimes(' ', 4 * (stack.items.len + offset));
                                        try writer.print(".dependencies = &[_]Pkg{{\n", .{});
                                        has_deps = true;
                                    }

                                    try stack.append(.{
                                        .current = current,
                                        .edge_idx = edge_idx,
                                        .has_deps = has_deps,
                                    });

                                    const offset: usize = if (root_level) 2 else 3;
                                    try self.writeDepBegin(writer, offset + stack.items.len, self.edges.items[edge_idx]);
                                    current = edge_idx;
                                    edge_idx += 1;
                                    has_deps = false;
                                    break;
                                },
                                else => {},
                            }
                        } else if (stack.items.len > 0) {
                            if (has_deps) {
                                try writer.writeByteNTimes(' ', 4 * (stack.items.len + 3));
                                try writer.print("}},\n", .{});
                            }

                            const offset: usize = if (stack.items.len == 1) 2 else 3;
                            try writer.writeByteNTimes(' ', 4 * (stack.items.len + offset));
                            try writer.print("}},\n", .{});

                            const pop = stack.pop();
                            current = pop.current;
                            edge_idx = 1 + pop.edge_idx;
                            has_deps = pop.has_deps;
                        } else {
                            if (has_deps) {
                                try writer.writeByteNTimes(' ', 8);
                                try writer.print("}},\n", .{});
                            }

                            break;
                        }
                    }

                    try writer.writeByteNTimes(' ', 4);
                    try writer.print("}};\n\n", .{});
                },
                else => {},
            }
        }

        try writer.print("    pub fn addAllTo(artifact: *std.build.LibExeObjStep) void {{\n", .{});
        for (self.edges.items) |edge| {
            switch (edge.from) {
                .root => |root| if (root == .normal) {
                    try writer.print("        artifact.addPackage(pkgs.{s});\n", .{
                        try utils.escape(self.arena.allocator(), edge.alias),
                    });
                },
                else => {},
            }
        }
        try writer.print("    }}\n", .{});
        try writer.print("}};\n", .{});
    }

    if (self.project.packages.count() == 0)
        return;

    try writer.print("\npub const exports = struct {{\n", .{});
    var it = self.project.packages.iterator();
    while (it.next()) |pkg| {
        const path: []const u8 = pkg.value_ptr.root orelse utils.default_root;
        try writer.print(
            \\    pub const {s} = Pkg{{
            \\        .name = "{s}",
            \\        .source = FileSource{{ .path = "{s}" }},
            \\
        , .{
            try utils.escape(self.arena.allocator(), pkg.value_ptr.name),
            pkg.value_ptr.name,
            path,
        });

        if (self.project.deps.items.len > 0) {
            try writer.print("        .dependencies = &[_]Pkg{{\n", .{});
            for (self.edges.items) |edge| {
                switch (edge.from) {
                    .root => |root| if (root == .normal) {
                        try writer.print("            pkgs.{s},\n", .{
                            try utils.escape(self.arena.allocator(), edge.alias),
                        });
                    },
                    else => {},
                }
            }
            try writer.print("        }},\n", .{});
        }
        try writer.print("    }};\n", .{});
    }

    try writer.print("}};\n", .{});
}

/// arena only stores the arraylists, not text, return slice is allocated in the arena
pub fn genBuildDeps(self: Engine, arena: *ThreadSafeArenaAllocator) !std.ArrayList(std.build.Pkg) {
    const allocator = arena.child_allocator;
    var ret = std.ArrayList(std.build.Pkg).init(allocator);
    errdefer ret.deinit();

    for (self.edges.items) |edge| {
        switch (edge.from) {
            .root => |root| if (root == .build) {
                var stack = std.ArrayList(struct {
                    current: usize,
                    edge_idx: usize,
                    deps: std.ArrayListUnmanaged(std.build.Pkg),
                }).init(allocator);
                defer stack.deinit();

                var current = edge.to;
                var edge_idx = 1 + edge.to;
                var deps = std.ArrayListUnmanaged(std.build.Pkg){};
                while (true) {
                    while (edge_idx < self.edges.items.len) : (edge_idx += 1) {
                        switch (self.edges.items[edge_idx].from) {
                            .index => |idx| if (idx == current) {
                                try deps.append(arena.allocator(), .{
                                    .name = self.edges.items[edge_idx].alias,
                                    .source = .{
                                        .path = self.paths.get(self.edges.items[edge_idx].to).?,
                                    },
                                });
                                try stack.append(.{
                                    .current = current,
                                    .edge_idx = edge_idx,
                                    .deps = deps,
                                });
                                current = edge_idx;
                                edge_idx += 1;
                                deps = std.ArrayListUnmanaged(std.build.Pkg){};
                                break;
                            },
                            else => {},
                        }
                    } else if (stack.items.len > 0) {
                        const pop = stack.pop();
                        if (deps.items.len > 0)
                            pop.deps.items[pop.deps.items.len - 1].dependencies = deps.items;

                        current = pop.current;
                        edge_idx = 1 + pop.edge_idx;
                        deps = pop.deps;
                    } else {
                        break;
                    }
                }

                try ret.append(.{
                    .name = edge.alias,
                    .source = .{ .path = self.paths.get(edge.to).? },
                    .dependencies = deps.items,
                });
                assert(stack.items.len == 0);
            },
            else => {},
        }
    }

    return ret;
}

test "Resolutions" {
    var text = "".*;
    var fb = std.io.fixedBufferStream(&text);
    var resolutions = try Resolutions.fromReader(testing.allocator, fb.reader());
    defer resolutions.deinit(testing.allocator);
}

test "FetchQueue" {
    var fetch_queue = FetchQueue.init();
    defer fetch_queue.deinit(testing.allocator);
}

test "fetch" {
    var text = "".*;
    var fb = std.io.fixedBufferStream(&text);
    var engine = Engine{
        .allocator = testing.allocator,
        .arena = ThreadSafeArenaAllocator.init(testing.allocator),
        // fetch() and deinit() never dereference the project, and an empty
        // paths map is safe to deinit
        .project = undefined,
        .paths = std.AutoHashMapUnmanaged(usize, []const u8){},
        .dep_table = DepTable{},
        .edges = std.ArrayListUnmanaged(Edge){},
        .fetch_queue = FetchQueue.init(),
        .resolutions = try Resolutions.fromReader(testing.allocator, fb.reader()),
    };
    defer engine.deinit();

    try engine.fetch();
}

test "writeLockfile" {
    var text = "".*;
    var fb = std.io.fixedBufferStream(&text);
    var engine = Engine{
        .allocator = testing.allocator,
        .arena = ThreadSafeArenaAllocator.init(testing.allocator),
        // writeLockfile() and deinit() never dereference the project, and an
        // empty paths map is safe to deinit
        .project = undefined,
        .paths = std.AutoHashMapUnmanaged(usize, []const u8){},
        .dep_table = DepTable{},
        .edges = std.ArrayListUnmanaged(Edge){},
        .fetch_queue = FetchQueue.init(),
        .resolutions = try Resolutions.fromReader(testing.allocator, fb.reader()),
    };
    defer engine.deinit();

    try engine.writeLockfile(fb.writer());
}
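For orientation, a hedged sketch (a hypothetical `update` helper, not code from the repository) of how this engine is meant to be driven: `init` parses an existing lockfile, `fetch` resolves and downloads the dependency graph, and `writeLockfile` serializes the resolutions back out. The `Project` value is assumed to come from the project-file parser in `Project.zig`:

```zig
const std = @import("std");
const Engine = @import("Engine.zig");
const Project = @import("Project.zig");

// Hypothetical driver: resolve a project's dependencies against gyro.lock
// and rewrite the lockfile with whatever was resolved.
pub fn update(allocator: std.mem.Allocator, project: *Project) !void {
    const file = try std.fs.cwd().createFile("gyro.lock", .{ .read = true, .truncate = false });
    defer file.close();

    var engine = try Engine.init(allocator, project, file.reader());
    defer engine.deinit();

    try engine.fetch();
    try file.seekTo(0);
    try engine.writeLockfile(file.writer());
}
```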
0
repos/gyro
repos/gyro/src/ThreadSafeArenaAllocator.zig
// SPDX-License-Identifier: MIT
// Copyright (c) 2015-2021 Zig Contributors
// This file is part of [zig](https://ziglang.org/), which is MIT licensed.
// The MIT license requires this copyright notice to be included in all copies
// and substantial portions of the software.
const std = @import("std");
const assert = std.debug.assert;
const mem = std.mem;
const Allocator = std.mem.Allocator;

// this is a modified version of zig's stdlib arena allocator but with mutexes
const Self = @This();

mtx: std.Thread.Mutex,
child_allocator: Allocator,
state: State,

/// Inner state of Self. Can be stored rather than the entire Self
/// as a memory-saving optimization.
pub const State = struct {
    buffer_list: std.SinglyLinkedList([]u8) = @as(std.SinglyLinkedList([]u8), .{}),
    end_index: usize = 0,

    pub fn promote(self: State, child_allocator: Allocator) Self {
        return .{
            .mtx = std.Thread.Mutex{},
            .child_allocator = child_allocator,
            .state = self,
        };
    }
};

pub fn allocator(self: *Self) Allocator {
    return Allocator.init(self, alloc, resize, free);
}

const BufNode = std.SinglyLinkedList([]u8).Node;

pub fn init(child_allocator: Allocator) Self {
    return (State{}).promote(child_allocator);
}

pub fn deinit(self: *Self) void {
    var it = self.state.buffer_list.first;
    while (it) |node| {
        // this has to occur before the free because the free frees node
        const next_it = node.next;
        self.child_allocator.free(node.data);
        it = next_it;
    }
}

fn createNode(self: *Self, prev_len: usize, minimum_size: usize) !*BufNode {
    const actual_min_size = minimum_size + (@sizeOf(BufNode) + 16);
    const big_enough_len = prev_len + actual_min_size;
    const len = big_enough_len + big_enough_len / 2;
    const buf = try self.child_allocator.rawAlloc(len, @alignOf(BufNode), 1, @returnAddress());
    const buf_node = @ptrCast(*BufNode, @alignCast(@alignOf(BufNode), buf.ptr));
    buf_node.* = BufNode{
        .data = buf,
        .next = null,
    };
    self.state.buffer_list.prepend(buf_node);
    self.state.end_index = 0;
    return buf_node;
}

fn alloc(self: *Self, n: usize, ptr_align: u29, len_align: u29, ra: usize) ![]u8 {
    _ = len_align;
    _ = ra;

    self.mtx.lock();
    defer self.mtx.unlock();

    var cur_node = if (self.state.buffer_list.first) |first_node|
        first_node
    else
        try self.createNode(0, n + ptr_align);
    while (true) {
        const cur_buf = cur_node.data[@sizeOf(BufNode)..];
        const addr = @ptrToInt(cur_buf.ptr) + self.state.end_index;
        const adjusted_addr = mem.alignForward(addr, ptr_align);
        const adjusted_index = self.state.end_index + (adjusted_addr - addr);
        const new_end_index = adjusted_index + n;

        if (new_end_index <= cur_buf.len) {
            const result = cur_buf[adjusted_index..new_end_index];
            self.state.end_index = new_end_index;

            return result;
        }

        const bigger_buf_size = @sizeOf(BufNode) + new_end_index;
        // Try to grow the buffer in-place
        cur_node.data = self.child_allocator.resize(cur_node.data, bigger_buf_size) orelse {
            // Allocate a new node if that's not possible
            cur_node = try self.createNode(cur_buf.len, n + ptr_align);
            continue;
        };
    }
}

fn resize(self: *Self, buf: []u8, buf_align: u29, new_len: usize, len_align: u29, ret_addr: usize) ?usize {
    _ = buf_align;
    _ = len_align;
    _ = ret_addr;

    self.mtx.lock();
    defer self.mtx.unlock();

    const cur_node = self.state.buffer_list.first orelse return null;
    const cur_buf = cur_node.data[@sizeOf(BufNode)..];
    if (@ptrToInt(cur_buf.ptr) + self.state.end_index != @ptrToInt(buf.ptr) + buf.len) {
        if (new_len > buf.len)
            return null;
        return new_len;
    }

    if (buf.len >= new_len) {
        self.state.end_index -= buf.len - new_len;
        return new_len;
    } else if (cur_buf.len - self.state.end_index >= new_len - buf.len) {
        self.state.end_index += new_len - buf.len;
        return new_len;
    } else {
        return null;
    }
}

fn free(self: *Self, buf: []u8, buf_align: u29, ret_addr: usize) void {
    _ = buf_align;
    _ = ret_addr;

    const cur_node = self.state.buffer_list.first orelse return;
    const cur_buf = cur_node.data[@sizeOf(BufNode)..];

    if (@ptrToInt(cur_buf.ptr) + self.state.end_index == @ptrToInt(buf.ptr) + buf.len) {
        self.state.end_index -= buf.len;
    }
}
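A minimal usage sketch (hypothetical, not from the repository): one arena is shared across worker threads, every allocation goes through the same mutex-guarded `Allocator` interface, and everything is released in a single `deinit`, which is exactly how `Engine.parallelFetch` uses this type:

```zig
const std = @import("std");
const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig");

fn worker(allocator: std.mem.Allocator) void {
    // per-thread scratch memory; freed in bulk when the arena is deinitialized
    const scratch = allocator.alloc(u8, 256) catch return;
    _ = scratch;
}

pub fn main() !void {
    var arena = ThreadSafeArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // frees every buffer at once

    const t1 = try std.Thread.spawn(.{}, worker, .{arena.allocator()});
    const t2 = try std.Thread.spawn(.{}, worker, .{arena.allocator()});
    t1.join();
    t2.join();
}
```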
0
repos/gyro
repos/gyro/src/Dependency.zig
const std = @import("std");
const version = @import("version");
const zzz = @import("zzz");
const uri = @import("uri");
const api = @import("api.zig");
const utils = @import("utils.zig");
const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig");

const Self = @This();
const Allocator = std.mem.Allocator;
const mem = std.mem;
const testing = std.testing;

pub const SourceType = std.meta.Tag(Source);

alias: []const u8,
src: Source,

pub const Source = union(enum) {
    pkg: struct {
        user: []const u8,
        name: []const u8,
        version: version.Range,
        repository: []const u8,
    },
    github: struct {
        user: []const u8,
        repo: []const u8,
        ref: []const u8,
        root: ?[]const u8,
    },
    url: struct {
        str: []const u8,
        root: ?[]const u8,
    },
    local: struct {
        path: []const u8,
        root: ?[]const u8,
    },
    git: struct {
        url: []const u8,
        ref: []const u8,
        root: ?[]const u8,
    },

    pub fn format(
        source: Source,
        comptime fmt: []const u8,
        options: std.fmt.FormatOptions,
        writer: anytype,
    ) @TypeOf(writer).Error!void {
        _ = fmt;
        _ = options;

        switch (source) {
            .pkg => |pkg| try writer.print("{s}/{s}/{s}: {}", .{ pkg.repository, pkg.user, pkg.name, pkg.version }),
            .github => |gh| {
                const root = gh.root orelse utils.default_root;
                try writer.print("github.com/{s}/{s}/{s}: {s}", .{ gh.user, gh.repo, root, gh.ref });
            },
            .url => |url| {
                const root = url.root orelse utils.default_root;
                try writer.print("{s}/{s}", .{ url.str, root });
            },
            .local => |local| {
                const root = local.root orelse utils.default_root;
                try writer.print("{s}/{s}", .{ local.path, root });
            },
            .git => |git| {
                const root = git.root orelse utils.default_root;
                try writer.print("{s}:{s}:{s}", .{ git.url, git.ref, root });
            },
        }
    }
};

/// There are four ways for a dependency to be declared in the project file:
///
/// A package from some other index:
/// ```
/// name:
///   pkg:
///     user: <user>
///     name: <name> # optional
///     version: <version string>
///     repository: <repository> # optional
/// ```
///
/// A github repo:
/// ```
/// name:
///   github:
///     user: <user>
///     repo: <repo>
///     ref: <ref>
///     root: <root file>
/// ```
///
/// A git repo:
/// ```
/// name:
///   git:
///     url: <url>
///     ref: <ref>
///     root: <root file>
/// ```
///
/// A raw url:
/// ```
/// name:
///   url: <url>
///   root: <root file>
/// ```
pub fn fromZNode(arena: *ThreadSafeArenaAllocator, node: *zzz.ZNode) !Self {
    if (node.*.child == null)
        return error.NoChildren;

    // check if only one child node and that it has no children
    if (node.*.child.?.value == .String and node.*.child.?.child == null) {
        if (node.*.child.?.sibling != null)
            return error.Unknown;

        const key = try utils.zGetString(node);
        const info = try utils.parseUserRepo(key);
        const ver_str = try utils.zGetString(node.*.child.?);
        return Self{
            .alias = info.repo,
            .src = .{
                .pkg = .{
                    .user = info.user,
                    .name = info.repo,
                    .version = try version.Range.parse(ver_str),
                    .repository = utils.default_repo,
                },
            },
        };
    }

    const alias = try utils.zGetString(node);
    var root: ?[]const u8 = null;
    {
        var it = node;
        var depth: isize = 0;
        while (it.nextUntil(node, &depth)) |child| : (it = child) {
            switch (child.value) {
                .String => |str| if (mem.eql(u8, str, "root")) {
                    if (root != null) {
                        std.log.err("multiple roots defined", .{});
                        return error.Explained;
                        // TODO: handle child.value not being string
                    } else {
                        root = try utils.zGetString(child.child.?);
                    }
                },
                else => continue,
            }
        }
    }

    // search for src node
    const src_node = blk: {
        var it = utils.ZChildIterator.init(node);
        while (it.next()) |child| {
            switch (child.value) {
                .String => |str| if (mem.eql(u8, str, "src")) break :blk child,
                else => continue,
            }
        } else break :blk node;
    };

    const src: Source = blk: {
        const child = src_node.child orelse return error.SrcNeedsChild;
        const src_str = try utils.zGetString(child);
        const src_type = inline for (std.meta.fields(SourceType)) |field| {
            if (mem.eql(u8, src_str, field.name))
                break @field(SourceType, field.name);
        } else return error.InvalidSrcTag;

        break :blk switch (src_type) {
            .pkg => .{
                .pkg = .{
                    .user = (try utils.zFindString(child, "user")) orelse return error.MissingUser,
                    .name = (try utils.zFindString(child, "name")) orelse alias,
                    .version = try version.Range.parse((try utils.zFindString(child, "version")) orelse return error.MissingVersion),
                    .repository = (try utils.zFindString(child, "repository")) orelse utils.default_repo,
                },
            },
            .github => gh: {
                const url = try std.fmt.allocPrint(arena.allocator(), "https://github.com/{s}/{s}.git", .{
                    (try utils.zFindString(child, "user")) orelse return error.GithubMissingUser,
                    (try utils.zFindString(child, "repo")) orelse return error.GithubMissingRepo,
                });

                break :gh .{
                    .git = .{
                        .url = url,
                        .ref = (try utils.zFindString(child, "ref")) orelse return error.GithubMissingRef,
                        .root = root,
                    },
                };
            },
            .url => .{
                .url = .{
                    .str = try utils.zGetString(child.child orelse return error.UrlMissingStr),
                    .root = root,
                },
            },
            .local => .{
                .local = .{
                    .path = try utils.zGetString(child.child orelse return error.UrlMissingStr),
                    .root = root,
                },
            },
            .git => .{
                .git = .{
                    .url = (try utils.zFindString(child, "url")) orelse return error.GitMissingUrl,
                    .ref = (try utils.zFindString(child, "ref")) orelse return error.GitMissingRef,
                    .root = root,
                },
            },
        };
    };

    if (src == .url) {
        _ = uri.parse(src.url.str) catch |err| {
            switch (err) {
                error.InvalidFormat => {
                    std.log.err(
                        "Failed to parse '{s}' as a url ({}), did you forget to wrap your url in double quotes?",
                        .{ src.url.str, err },
                    );
                },
                else => return err,
            }
            return error.Explained;
        };
    }

    // TODO: integrity

    return Self{ .alias = alias, .src = src };
}

/// for testing
fn fromString(arena: *ThreadSafeArenaAllocator, str: []const u8) !Self {
    var tree = zzz.ZTree(1, 1000){};
    const root = try tree.appendText(str);
    return Self.fromZNode(arena, root.*.child.?);
}

fn expectNullStrEqual(expected: ?[]const u8, actual: ?[]const u8) !void {
    if (expected) |e| if (actual) |a| {
        try testing.expectEqualStrings(e, a);
        return;
    };

    try testing.expectEqual(expected, actual);
}

fn expectDepEqual(expected: Self, actual: Self) !void {
    try testing.expectEqualStrings(expected.alias, actual.alias);
    try testing.expectEqual(@as(SourceType, expected.src), @as(SourceType, actual.src));

    return switch (expected.src) {
        .pkg => |pkg| {
            try testing.expectEqualStrings(pkg.user, actual.src.pkg.user);
            try testing.expectEqualStrings(pkg.name, actual.src.pkg.name);
            try testing.expectEqualStrings(pkg.repository, actual.src.pkg.repository);
            try testing.expectEqual(pkg.version, actual.src.pkg.version);
        },
        .github => |gh| {
            try testing.expectEqualStrings(gh.user, actual.src.github.user);
            try testing.expectEqualStrings(gh.repo, actual.src.github.repo);
            try testing.expectEqualStrings(gh.ref, actual.src.github.ref);
            try expectNullStrEqual(gh.root, actual.src.github.root);
        },
        .git => |git| {
            try testing.expectEqualStrings(git.url, actual.src.git.url);
            try testing.expectEqualStrings(git.ref, actual.src.git.ref);
            try expectNullStrEqual(git.root, actual.src.git.root);
        },
        .url => |url| {
            try testing.expectEqualStrings(url.str, actual.src.url.str);
            try expectNullStrEqual(url.root, actual.src.url.root);
        },
        .local => |local| {
            try testing.expectEqualStrings(local.path, actual.src.local.path);
            try expectNullStrEqual(local.root, actual.src.local.root);
        },
    };
}

test "default repo pkg" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    try expectDepEqual(Self{
        .alias = "something",
        .src = .{
            .pkg = .{
                .user = "matt",
                .name = "something",
                .version = try version.Range.parse("^0.1.0"),
                .repository = utils.default_repo,
            },
        },
    }, try fromString(&arena, "matt/something: ^0.1.0"));
}

test "legacy aliased, default repo pkg" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\something:
        \\  src:
        \\    pkg:
        \\      user: matt
        \\      name: blarg
        \\      version: ^0.1.0
    );
    const expected = Self{
        .alias = "something",
        .src = .{
            .pkg = .{
                .user = "matt",
                .name = "blarg",
                .version = try version.Range.parse("^0.1.0"),
                .repository = utils.default_repo,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "aliased, default repo pkg" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\something:
        \\  pkg:
        \\    user: matt
        \\    name: blarg
        \\    version: ^0.1.0
    );
    const expected = Self{
        .alias = "something",
        .src = .{
            .pkg = .{
                .user = "matt",
                .name = "blarg",
                .version = try version.Range.parse("^0.1.0"),
                .repository = utils.default_repo,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy non-default repo pkg" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\something:
        \\  src:
        \\    pkg:
        \\      user: matt
        \\      version: ^0.1.0
        \\      repository: example.com
    );
    const expected = Self{
        .alias = "something",
        .src = .{
            .pkg = .{
                .user = "matt",
                .name = "something",
                .version = try version.Range.parse("^0.1.0"),
                .repository = "example.com",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "non-default repo pkg" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\something:
        \\  pkg:
        \\    user: matt
        \\    version: ^0.1.0
        \\    repository: example.com
    );
    const expected = Self{
        .alias = "something",
        .src = .{
            .pkg = .{
                .user = "matt",
                .name = "something",
                .version = try version.Range.parse("^0.1.0"),
                .repository = "example.com",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy aliased, non-default repo pkg" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\something:
        \\  src:
        \\    pkg:
        \\      user: matt
        \\      name: real_name
        \\      version: ^0.1.0
        \\      repository: example.com
    );
    const expected = Self{
        .alias = "something",
        .src = .{
            .pkg = .{
                .user = "matt",
                .name = "real_name",
                .version = try version.Range.parse("^0.1.0"),
                .repository = "example.com",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "aliased, non-default repo pkg" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\something:
        \\  pkg:
        \\    user: matt
        \\    name: real_name
        \\    version: ^0.1.0
        \\    repository: example.com
    );
    const expected = Self{
        .alias = "something",
        .src = .{
            .pkg = .{
                .user = "matt",
                .name = "real_name",
                .version = try version.Range.parse("^0.1.0"),
                .repository = "example.com",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy github default root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    github:
        \\      user: test
        \\      repo: something
        \\      ref: main
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .git = .{
                .url = "https://github.com/test/something.git",
                .ref = "main",
                .root = null,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "github default root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  github:
        \\    user: test
        \\    repo: something
        \\    ref: main
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .git = .{
                .url = "https://github.com/test/something.git",
                .ref = "main",
                .root = null,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy github explicit root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    github:
        \\      user: test
        \\      repo: something
        \\      ref: main
        \\      root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .git = .{
                .url = "https://github.com/test/something.git",
                .ref = "main",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy github explicit root, incorrect indent" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    github:
        \\      user: test
        \\      repo: something
        \\      ref: main
        \\    root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .git = .{
                .url = "https://github.com/test/something.git",
                .ref = "main",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy github explicit root, mixed with newer root indent" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    github:
        \\      user: test
        \\      repo: something
        \\      ref: main
        \\  root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .git = .{
                .url = "https://github.com/test/something.git",
                .ref = "main",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "github explicit root, incorrect indent" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  github:
        \\    user: test
        \\    repo: something
        \\    ref: main
        \\  root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .git = .{
                .url = "https://github.com/test/something.git",
                .ref = "main",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "github explicit root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  github:
        \\    user: test
        \\    repo: something
        \\    ref: main
        \\    root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .git = .{
                .url = "https://github.com/test/something.git",
                .ref = "main",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy raw default root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    url: "https://astrolabe.pm"
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .url = .{
                .str = "https://astrolabe.pm",
                .root = null,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "raw default root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  url: "https://astrolabe.pm"
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .url = .{
                .str = "https://astrolabe.pm",
                .root = null,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy raw explicit root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    url: "https://astrolabe.pm"
        \\    root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .url = .{
                .str = "https://astrolabe.pm",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy raw explicit root, incorrect indent" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    url: "https://astrolabe.pm"
        \\  root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .url = .{
                .str = "https://astrolabe.pm",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "raw explicit root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  url: "https://astrolabe.pm"
        \\  root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .url = .{
                .str = "https://astrolabe.pm",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy local with default root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    local: "mypkgs/cool-project"
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .local = .{
                .path = "mypkgs/cool-project",
                .root = null,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "local with default root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  local: "mypkgs/cool-project"
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .local = .{
                .path = "mypkgs/cool-project",
                .root = null,
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy local with explicit root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    local: "mypkgs/cool-project"
        \\    root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .local = .{
                .path = "mypkgs/cool-project",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "legacy local with explicit root, incorrect indent" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  src:
        \\    local: "mypkgs/cool-project"
        \\  root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .local = .{
                .path = "mypkgs/cool-project",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "local with explicit root" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const actual = try fromString(&arena,
        \\foo:
        \\  local: "mypkgs/cool-project"
        \\  root: main.zig
    );
    const expected = Self{
        .alias = "foo",
        .src = .{
            .local = .{
                .path = "mypkgs/cool-project",
                .root = "main.zig",
            },
        },
    };

    try expectDepEqual(expected, actual);
}

test "pkg can't take a root" {
    // TODO
}

test "pkg can't take an integrity" {
    // TODO
}

test "github can't take an integrity " {
    // TODO
}

/// serializes dependency information back into zzz format
pub fn addToZNode(
    self: Self,
    arena: *ThreadSafeArenaAllocator,
    tree: *zzz.ZTree(1, 1000),
    parent: *zzz.ZNode,
    explicit: bool,
) !void {
    var alias = try tree.addNode(parent, .{ .String = self.alias });

    switch (self.src) {
        .pkg => |pkg| if (!explicit and
            std.mem.eql(u8, self.alias, pkg.name) and
            std.mem.eql(u8, pkg.repository, utils.default_repo))
        {
            var fifo = std.fifo.LinearFifo(u8, .{ .Dynamic = {} }).init(arena.allocator());
            try fifo.writer().print("{s}/{s}", .{ pkg.user, pkg.name });
            alias.value.String = fifo.readableSlice(0);

            const ver_str = try std.fmt.allocPrint(arena.allocator(), "{}", .{pkg.version});
            _ = try tree.addNode(alias, .{ .String = ver_str });
        } else {
            var node = try tree.addNode(alias, .{ .String = "pkg" });
            try utils.zPutKeyString(tree, node, "user", pkg.user);
            if (explicit or !std.mem.eql(u8, pkg.name, self.alias)) {
                try utils.zPutKeyString(tree, node, "name", pkg.name);
            }

            const ver_str = try std.fmt.allocPrint(arena.allocator(), "{}", .{pkg.version});
            try utils.zPutKeyString(tree, node, "version", ver_str);
            if (explicit or !std.mem.eql(u8, pkg.repository, utils.default_repo)) {
                try utils.zPutKeyString(tree, node, "repository", pkg.repository);
            }
        },
        .github => |gh| {
            var github = try tree.addNode(alias, .{ .String = "github" });
            try utils.zPutKeyString(tree, github, "user", gh.user);
            try utils.zPutKeyString(tree, github, "repo", gh.repo);
            try utils.zPutKeyString(tree, github, "ref", gh.ref);
            if (explicit or gh.root != null) {
                try utils.zPutKeyString(tree, github, "root", gh.root orelse utils.default_root);
            }
        },
        .git => |g| {
            var git = try tree.addNode(alias, .{ .String = "git" });
            try utils.zPutKeyString(tree, git, "url", g.url);
            try utils.zPutKeyString(tree, git, "ref", g.ref);
            if (explicit or g.root != null) {
                try utils.zPutKeyString(tree, git, "root", g.root orelse utils.default_root);
            }
        },
        .url => |url| {
            try utils.zPutKeyString(tree, alias, "url", url.str);
            if (explicit or url.root != null) {
                try utils.zPutKeyString(tree, alias, "root", url.root orelse utils.default_root);
            }
        },
        .local => |local| {
            try utils.zPutKeyString(tree, alias, "local", local.path);
            if (explicit or local.root != null) {
                try utils.zPutKeyString(tree, alias, "root", local.root orelse utils.default_root);
            }
        },
    }
}

fn expectZzzEqual(expected: *zzz.ZNode, actual: *zzz.ZNode) !void {
    var expected_it: *zzz.ZNode = expected;
    var actual_it: *zzz.ZNode = actual;

    var expected_depth: isize = 0;
    var actual_depth: isize = 0;

    while (expected_it.next(&expected_depth)) |exp| : (expected_it = exp) {
        if (actual_it.next(&actual_depth)) |act| {
            defer actual_it = act;

            try testing.expectEqual(expected_depth, actual_depth);
            switch (exp.value) {
                .String => |str| try testing.expectEqualStrings(str, act.value.String),
                .Int => |int| try testing.expectEqual(int, act.value.Int),
                .Float => |float| try testing.expectEqual(float, act.value.Float),
                .Bool => |b| try testing.expectEqual(b, act.value.Bool),
                else => {},
            }
        } else {
            try testing.expect(false);
        }
    }

    try testing.expectEqual(
        expected_it.next(&expected_depth),
        actual_it.next(&actual_depth),
    );
}

fn serializeTest(from: []const u8, to: []const u8, explicit: bool) !void {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const dep = try fromString(&arena, from);
    var actual = zzz.ZTree(1, 1000){};
    var actual_root = try actual.addNode(null, .{ .Null = {} });
    try dep.addToZNode(&arena, &actual, actual_root, explicit);

    var expected = zzz.ZTree(1, 1000){};
    const expected_root = try expected.appendText(to);

    try expectZzzEqual(expected_root, actual_root);
}

test "serialize pkg non-explicit" {
    const from =
        \\something:
        \\  pkg:
        \\    user: test
        \\    version: ^0.0.0
        \\
    ;
    const to = "test/something: ^0.0.0";

    try serializeTest(from, to, false);
}

test "serialize pkg explicit" {
    const from =
        \\something:
        \\  pkg:
        \\    user: test
        \\    version: ^0.0.0
\\ ; const to = \\something: \\ pkg: \\ user: test \\ name: something \\ version: ^0.0.0 \\ repository: astrolabe.pm \\ ; try serializeTest(from, to, true); } test "serialize github non-explicit" { const from = \\something: \\ github: \\ user: test \\ repo: my_repo \\ ref: master \\ root: main.zig \\ ; const to = \\something: \\ git: \\ url: "https://github.com/test/my_repo.git" \\ ref: master \\ root: main.zig \\ ; try serializeTest(from, to, false); } test "serialize github non-explicit, default root" { const from = \\something: \\ github: \\ user: test \\ repo: my_repo \\ ref: master \\ ; const to = \\something: \\ git: \\ url: "https://github.com/test/my_repo.git" \\ ref: master \\ ; try serializeTest(from, to, false); } test "serialize github explicit, default root" { const from = \\something: \\ github: \\ user: test \\ repo: my_repo \\ ref: master \\ root: src/main.zig \\ ; const to = \\something: \\ git: \\ url: "https://github.com/test/my_repo.git" \\ ref: master \\ root: src/main.zig \\ ; try serializeTest(from, to, true); } test "serialize github explicit" { const from = \\something: \\ github: \\ user: test \\ repo: my_repo \\ ref: master \\ ; const to = \\something: \\ git: \\ url: "https://github.com/test/my_repo.git" \\ ref: master \\ root: src/main.zig \\ ; try serializeTest(from, to, true); } test "serialize url non-explicit" { const str = \\something: \\ url: "https://github.com" \\ root: main.zig \\ ; try serializeTest(str, str, false); } test "serialize url explicit" { const from = \\something: \\ url: "https://github.com" \\ ; const to = \\something: \\ url: "https://github.com" \\ root: src/main.zig \\ ; try serializeTest(from, to, true); }
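// NOTE: the test below is an illustrative addition (not from the upstream
// file): a serialization round-trip for a `local` dependency using the
// serializeTest helper defined above, mirroring the url round-trip test.
test "serialize local non-explicit" {
    const str =
        \\something:
        \\  local: "mypkgs/cool-project"
        \\  root: main.zig
        \\
    ;

    try serializeTest(str, str, false);
}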
0
repos/gyro
repos/gyro/src/utils.zig
const std = @import("std"); const zzz = @import("zzz"); pub const default_repo = "astrolabe.pm"; pub const default_root = "src/main.zig"; pub const ZChildIterator = struct { val: ?*zzz.ZNode, pub fn next(self: *ZChildIterator) ?*zzz.ZNode { return if (self.val) |node| blk: { self.val = node.sibling; break :blk node; } else null; } pub fn init(node: *zzz.ZNode) ZChildIterator { return ZChildIterator{ .val = node.child, }; } }; pub fn zFindChild(node: *zzz.ZNode, key: []const u8) ?*zzz.ZNode { var it = ZChildIterator.init(node); return while (it.next()) |child| { switch (child.value) { .String => |str| if (std.mem.eql(u8, str, key)) break child, else => continue, } } else null; } pub fn zGetString(node: *const zzz.ZNode) ![]const u8 { return switch (node.value) { .String => |str| str, else => { return error.NotAString; }, }; } pub fn zFindString(parent: *zzz.ZNode, key: []const u8) !?[]const u8 { return if (zFindChild(parent, key)) |node| if (node.child) |child| try zGetString(child) else null else null; } pub fn zPutKeyString(tree: *zzz.ZTree(1, 1000), parent: *zzz.ZNode, key: []const u8, value: []const u8) !void { var node = try tree.addNode(parent, .{ .String = key }); _ = try tree.addNode(node, .{ .String = value }); } pub const UserRepoResult = struct { user: []const u8, repo: []const u8, }; pub fn parseUserRepo(str: []const u8) !UserRepoResult { if (std.mem.count(u8, str, "/") != 1) { std.log.err("need to have a single '/' in {s}", .{str}); return error.Explained; } var it = std.mem.tokenize(u8, str, "/"); return UserRepoResult{ .user = it.next().?, .repo = it.next().?, }; } /// trim 'zig-' prefix and '-zig' or '.zig' suffixes from a name pub fn normalizeName(name: []const u8) ![]const u8 { const prefix = "zig-"; const dot_suffix = ".zig"; const dash_suffix = "-zig"; const begin = if (std.mem.startsWith(u8, name, prefix)) prefix.len else 0; const end = if (std.mem.endsWith(u8, name, dot_suffix)) name.len - dot_suffix.len else if (std.mem.endsWith(u8, name, dash_suffix)) name.len - dash_suffix.len else name.len; if (begin > end) return error.Overlap else if (begin == end) return error.Empty; return name[begin..end]; } pub fn escape(allocator: std.mem.Allocator, str: []const u8) ![]const u8 { return for (str) |c| { if (!std.ascii.isAlNum(c) and c != '_') { var buf = try allocator.alloc(u8, str.len + 3); std.mem.copy(u8, buf, "@\""); std.mem.copy(u8, buf[2..], str); buf[buf.len - 1] = '"'; break buf; } } else try allocator.dupe(u8, str); } pub fn joinPathConvertSep(arena: *@import("ThreadSafeArenaAllocator.zig"), inputs: []const []const u8) ![]const u8 { const allocator = arena.child_allocator; var components = try std.ArrayList([]const u8).initCapacity(allocator, inputs.len); defer { for (components.items) |comp| allocator.free(comp); components.deinit(); } for (inputs) |input| try components.append(try std.mem.replaceOwned( u8, allocator, input, std.fs.path.sep_str_posix, std.fs.path.sep_str, )); return std.fs.path.join(arena.allocator(), components.items); } test "normalize zig-zig" { try std.testing.expectError(error.Overlap, normalizeName("zig-zig")); } test "normalize zig-.zig" { try std.testing.expectError(error.Empty, normalizeName("zig-.zig")); } test "normalize SDL.zig" { try std.testing.expectEqualStrings("SDL", try normalizeName("SDL.zig")); } test "normalize zgl" { try std.testing.expectEqualStrings("zgl", try normalizeName("zgl")); } test "normalize zig-args" { try std.testing.expectEqualStrings("args", try normalizeName("zig-args")); } test "normalize vulkan-zig" { try 
std.testing.expectEqualStrings("vulkan", try normalizeName("vulkan-zig")); } test "normalize known-folders" { try std.testing.expectEqualStrings("known-folders", try normalizeName("known-folders")); }
0
repos/gyro
repos/gyro/src/cache.zig
const std = @import("std"); pub fn getEntry(name: []const u8) !Entry { var cache_dir = try std.fs.cwd().makeOpenPath(".gyro", .{}); defer cache_dir.close(); return Entry{ .dir = try cache_dir.makeOpenPath(name, .{}), }; } pub const Entry = struct { dir: std.fs.Dir, const Self = @This(); pub fn deinit(self: *Self) void { self.dir.close(); } pub fn done(self: *Self) !void { const file = try self.dir.createFile("ok", .{}); defer file.close(); } pub fn isDone(self: *Self) !bool { return if (self.dir.access("ok", .{})) true else |err| switch (err) { error.FileNotFound => false, else => |e| e, }; } pub fn contentDir(self: *Self) !std.fs.Dir { return try self.dir.makeOpenPath("pkg", .{}); } };
0
repos/gyro
repos/gyro/src/commands.zig
const std = @import("std"); const builtin = @import("builtin"); const clap = @import("clap"); const version = @import("version"); const zzz = @import("zzz"); const known_folders = @import("known-folders"); const curl = @import("curl"); const Dependency = @import("Dependency.zig"); const Engine = @import("Engine.zig"); const Project = @import("Project.zig"); const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig"); const api = @import("api.zig"); const utils = @import("utils.zig"); const Allocator = std.mem.Allocator; fn assertFileExistsInCwd(subpath: []const u8) !void { std.fs.cwd().access(subpath, .{ .mode = .read_only }) catch |err| { return if (err == error.FileNotFound) blk: { std.log.err("no {s} in current working directory", .{subpath}); break :blk error.Explained; } else err; }; } // move to an explicit step later, for now make it automatic and slick fn migrateGithubLockfile(allocator: Allocator, file: std.fs.File) !void { var to_lines = std.ArrayList([]const u8).init(allocator); defer to_lines.deinit(); var github_lines = std.ArrayList([]const u8).init(allocator); defer github_lines.deinit(); const text = try file.readToEndAlloc(allocator, std.math.maxInt(usize)); defer allocator.free(text); // sort between good entries and github entries var it = std.mem.tokenize(u8, text, "\n"); while (it.next()) |line| if (std.mem.startsWith(u8, line, "github")) try github_lines.append(line) else try to_lines.append(line); var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); // convert each github entry to a git entry for (github_lines.items) |line| { var line_it = std.mem.tokenize(u8, line, " "); // github label _ = line_it.next() orelse unreachable; const new_line = try std.fmt.allocPrint( arena.allocator(), "git https://github.com/{s}/{s}.git {s} {s} {s}", .{ .user = line_it.next() orelse return error.NoUser, .repo = line_it.next() orelse return error.NoRepo, .ref = line_it.next() orelse return error.NoRef, .root = line_it.next() orelse return error.NoRoot, .commit = line_it.next() orelse return error.NoCommit, }, ); try to_lines.append(new_line); } // clear file and write all entries to it try file.setEndPos(0); try file.seekTo(0); const writer = file.writer(); for (to_lines.items) |line| { try writer.writeAll(line); try writer.writeByte('\n'); } // seek to beginning so that any future reading is from the beginning of the file try file.seekTo(0); } pub fn fetch(allocator: Allocator) !void { var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); const project = try Project.fromDirPath(&arena, "."); defer project.destroy(); const lockfile = try std.fs.cwd().createFile("gyro.lock", .{ .read = true, .truncate = false, }); defer lockfile.close(); try migrateGithubLockfile(allocator, lockfile); const deps_file = try std.fs.cwd().createFile("deps.zig", .{ .truncate = true, }); defer deps_file.close(); var engine = try Engine.init(allocator, project, lockfile.reader()); defer engine.deinit(); try engine.fetch(); try lockfile.setEndPos(0); try lockfile.seekTo(0); try engine.writeLockfile(lockfile.writer()); try engine.writeDepsZig(deps_file.writer()); const project_file = try std.fs.cwd().openFile("gyro.zzz", .{ .mode = .read_write }); defer project_file.close(); try project.toFile(project_file); } pub fn update( allocator: Allocator, targets: []const []const u8, ) !void { if (targets.len == 0) { try std.fs.cwd().deleteFile("gyro.lock"); try fetch(allocator); return; } var arena = ThreadSafeArenaAllocator.init(allocator); defer 
arena.deinit(); const project = try Project.fromDirPath(&arena, "."); defer project.destroy(); const lockfile = try std.fs.cwd().createFile("gyro.lock", .{ .read = true, .truncate = false, }); defer lockfile.close(); try migrateGithubLockfile(allocator, lockfile); const deps_file = try std.fs.cwd().createFile("deps.zig", .{ .truncate = true, }); defer deps_file.close(); var engine = try Engine.init(allocator, project, lockfile.reader()); defer engine.deinit(); for (targets) |target| try engine.clearResolution(target); try engine.fetch(); try lockfile.setEndPos(0); try lockfile.seekTo(0); try engine.writeLockfile(lockfile.writer()); try engine.writeDepsZig(deps_file.writer()); } const EnvInfo = struct { zig_exe: []const u8, lib_dir: []const u8, std_dir: []const u8, global_cache_dir: []const u8, version: []const u8, }; pub fn build(allocator: Allocator, args: *std.process.ArgIterator) !void { try assertFileExistsInCwd("build.zig"); var fifo = std.fifo.LinearFifo(u8, .{ .Dynamic = {} }).init(allocator); defer fifo.deinit(); const result = try std.ChildProcess.exec(.{ .allocator = allocator, .argv = &[_][]const u8{ "zig", "env" }, }); defer { allocator.free(result.stdout); allocator.free(result.stderr); } switch (result.term) { .Exited => |val| { if (val != 0) { std.log.err("zig compiler returned error code: {}", .{val}); return error.Explained; } }, .Signal => |sig| { std.log.err("zig compiler interrupted by signal: {}", .{sig}); return error.Explained; }, else => return error.UnknownTerm, } var token_stream = std.json.TokenStream.init(result.stdout); const parse_opts = std.json.ParseOptions{ .allocator = allocator, .ignore_unknown_fields = true }; const env = try std.json.parse(EnvInfo, &token_stream, parse_opts); defer std.json.parseFree(EnvInfo, env, parse_opts); var zig_lib_dir = try std.fs.openDirAbsolute( env.lib_dir, .{ .access_sub_paths = true }, ); defer zig_lib_dir.close(); try zig_lib_dir.copyFile( "build_runner.zig", std.fs.cwd(), "build_runner.zig", .{}, ); defer std.fs.cwd().deleteFile("build_runner.zig") catch {}; var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); const project = blk: { const project_file = std.fs.cwd().openFile("gyro.zzz", .{}) catch |err| switch (err) { error.FileNotFound => break :blk try Project.fromUnownedText(&arena, ".", ""), else => |e| return e, }; defer project_file.close(); break :blk try Project.fromFile(&arena, ".", project_file); }; defer project.destroy(); const lockfile = try std.fs.cwd().createFile("gyro.lock", .{ .read = true, .truncate = false, }); defer lockfile.close(); try migrateGithubLockfile(allocator, lockfile); const deps_file = try std.fs.cwd().createFile("deps.zig", .{ .truncate = true, }); defer deps_file.close(); var engine = try Engine.init(allocator, project, lockfile.reader()); defer engine.deinit(); try engine.fetch(); try lockfile.setEndPos(0); try lockfile.seekTo(0); try engine.writeLockfile(lockfile.writer()); try engine.writeDepsZig(deps_file.writer()); // TODO: configurable local cache const pkgs = try engine.genBuildDeps(&arena); defer pkgs.deinit(); const b = try std.build.Builder.create( arena.allocator(), env.zig_exe, ".", "zig-cache", env.global_cache_dir, ); defer b.destroy(); b.resolveInstallPrefix(null, .{}); const runner = b.addExecutable("build", "build_runner.zig"); runner.addPackage(std.build.Pkg{ .name = "@build", .source = .{ .path = "build.zig", }, .dependencies = pkgs.items, }); const run_cmd = runner.run(); run_cmd.addArgs(&[_][]const u8{ env.zig_exe, ".", "zig-cache", 
env.global_cache_dir, }); while (args.next()) |arg| run_cmd.addArg(arg); b.default_step.dependOn(&run_cmd.step); if (b.validateUserInputDidItFail()) { return error.UserInputFailed; } b.make(&[_][]const u8{"install"}) catch |err| { switch (err) { error.UncleanExit => { std.log.err("Compiler had an unclean exit", .{}); return error.Explained; }, error.UnexpectedExitCode => return error.Explained, else => return err, } }; const project_file = try std.fs.cwd().openFile("gyro.zzz", .{ .mode = .read_write }); defer project_file.close(); try project.toFile(project_file); } pub fn package( allocator: Allocator, output_dir: ?[]const u8, names: []const []const u8, ) !void { var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); const project = try Project.fromDirPath(&arena, "."); defer project.destroy(); if (project.packages.count() == 0) { std.log.err("there are no packages to package!", .{}); return error.Explained; } validateNoRedirects(allocator) catch |e| switch (e) { error.RedirectsExist => { std.log.err("you need to clear redirects before packaging with 'gyro redirect --clean'", .{}); return error.Explained; }, else => return e, }; var found_not_pkg = false; for (names) |name| if (!project.contains(name)) { std.log.err("{s} is not a package", .{name}); found_not_pkg = true; }; if (found_not_pkg) return error.Explained; var write_dir = try std.fs.cwd().openIterableDir( if (output_dir) |output| output else ".", .{ .access_sub_paths = true }, ); defer write_dir.close(); var read_dir = try std.fs.cwd().openIterableDir(".", .{}); defer read_dir.close(); if (names.len > 0) { for (names) |name| try project.get(name).?.bundle(read_dir.dir, write_dir.dir); } else { var it = project.iterator(); while (it.next()) |pkg| try pkg.bundle(read_dir.dir, write_dir.dir); } } fn maybePrintKey( json_key: []const u8, zzz_key: []const u8, root: anytype, writer: anytype, ) !void { if (root.get(json_key)) |val| { switch (val) { .String => |str| try writer.print(" {s}: \"{s}\"\n", .{ zzz_key, str }), else => {}, } } } pub fn init( allocator: Allocator, link: ?[]const u8, ) !void { const file = std.fs.cwd().createFile("gyro.zzz", .{ .exclusive = true }) catch |err| { return if (err == error.PathAlreadyExists) blk: { std.log.err("gyro.zzz already exists", .{}); break :blk error.Explained; } else err; }; errdefer std.fs.cwd().deleteFile("gyro.zzz") catch {}; defer file.close(); const info = try utils.parseUserRepo(link orelse return); var repo_tree = try api.getGithubRepo(allocator, info.user, info.repo); defer repo_tree.deinit(); var topics_tree = try api.getGithubTopics(allocator, info.user, info.repo); defer topics_tree.deinit(); if (repo_tree.root != .Object or topics_tree.root != .Object) { std.log.err("Invalid JSON response from Github", .{}); return error.Explained; } const repo_root = repo_tree.root.Object; const topics_root = topics_tree.root.Object; const writer = file.writer(); try writer.print( \\pkgs: \\ {s}: \\ version: 0.0.0 \\ , .{try utils.normalizeName(info.repo)}); try maybePrintKey("description", "description", repo_root, writer); // pretty gross ngl if (repo_root.get("license")) |license| { switch (license) { .Object => |obj| { if (obj.get("spdx_id")) |spdx| { switch (spdx) { .String => |id| { try writer.print(" license: {s}\n", .{id}); }, else => {}, } } }, else => {}, } } try maybePrintKey("html_url", "source_url", repo_root, writer); if (topics_root.get("names")) |topics| { switch (topics) { .Array => |arr| { if (arr.items.len > 0) { try writer.print(" tags:\n", .{}); for 
(arr.items) |topic| { switch (topic) { .String => |str| if (std.mem.indexOf(u8, str, "zig") == null) { try writer.print(" {s}\n", .{str}); }, else => {}, } } } }, else => {}, } } try writer.print( \\ \\ root: src/main.zig \\ files: \\ README.md \\ LICENSE \\ , .{}); } // check for alias collisions fn verifyUniqueAlias(alias: []const u8, deps: []const Dependency) !void { for (deps) |dep| { if (std.mem.eql(u8, alias, dep.alias)) { std.log.err("The alias '{s}' is already in use for this project", .{alias}); return error.Explained; } } } fn gitDependency( arena: *ThreadSafeArenaAllocator, url: []const u8, alias_opt: ?[]const u8, ref_opt: ?[]const u8, root_opt: ?[]const u8, ) !Dependency { const git = @import("git.zig"); const cache = @import("cache.zig"); const allocator = arena.child_allocator; const commit = if (ref_opt) |r| try git.getHeadCommitOfRef(allocator, url, r) else try git.getHEADCommit(allocator, url); const ref = if (ref_opt) |r| r else commit; const entry_name = try git.fmtCachePath(allocator, url, commit); defer allocator.free(entry_name); var entry = try cache.getEntry(entry_name); defer entry.deinit(); const base_path = try std.fs.path.join(allocator, &.{ ".gyro", entry_name, "pkg", }); defer allocator.free(base_path); if (!try entry.isDone()) { if (builtin.target.os.tag != .windows) { if (std.fs.cwd().access(base_path, .{})) { try std.fs.cwd().deleteTree(base_path); } else |_| {} // TODO: if base_path exists then deleteTree it } try git.clone( arena, url, commit, base_path, ); try entry.done(); } // if root is not specified then try to read the manifest and find a match // for the alias, if no alias is specified then it would have been // calculated from the url var alias = alias_opt; var root = root_opt; if (root) |r| { const root_path = try std.fs.path.join(allocator, &.{ base_path, r }); defer allocator.free(root_path); std.fs.cwd().access(root_path, .{}) catch |err| switch (err) { error.FileNotFound => { std.log.err("the path '{s}' does not exist in {s}", .{ r, url }); return error.Explained; }, else => return err, }; if (alias != null) return Dependency{ .alias = alias.?, .src = .{ .git = .{ .url = url, .ref = ref, .root = root.?, }, }, }; } var base_dir = try std.fs.cwd().openDir(base_path, .{}); defer base_dir.close(); const project_file = try base_dir.createFile("gyro.zzz", .{ .read = true, .truncate = false, .exclusive = false, }); defer project_file.close(); const text = try project_file.reader().readAllAlloc( arena.allocator(), std.math.maxInt(usize), ); const project = try Project.fromUnownedText(arena, base_path, text); defer project.destroy(); if (project.packages.count() == 1) { var it = project.packages.iterator(); const pkg_entry = it.next().?; if (alias == null) alias = pkg_entry.key_ptr.*; if (root == null) root = pkg_entry.value_ptr.root orelse utils.default_root; return Dependency{ .alias = alias.?, .src = .{ .git = .{ .url = url, .ref = ref, .root = root.?, }, }, }; } if (project.packages.count() == 0 and root == null) { std.log.err("this repo has no advertised packages, you need to manually specify the root path with '--root' or '-r'", .{}); return error.Explained; } // TODO: if no alias, print error about ambiguity if (alias == null and root == null) { std.log.err("this repo advertises multiple packages, you must choose an alias:", .{}); var it = project.packages.iterator(); while (it.next()) |pkgs_entry| { const pkg_root = pkgs_entry.value_ptr.root orelse utils.default_root; std.log.err(" {s}: {s}", .{ pkgs_entry.key_ptr.*, pkg_root }); } return 
error.Explained; } if (root == null) { var iterator = project.packages.iterator(); while (iterator.next()) |pkgs_entry| { if (std.mem.eql(u8, pkgs_entry.key_ptr.*, alias.?)) { root = pkgs_entry.value_ptr.root orelse utils.default_root; break; } } else { std.log.err("failed to find package that matched the alias '{s}', the advertised packages are:", .{alias.?}); var it = project.packages.iterator(); while (it.next()) |pkgs_entry| { const pkg_root = pkgs_entry.value_ptr.root orelse utils.default_root; std.log.err(" {s}: {s}", .{ pkgs_entry.key_ptr.*, pkg_root }); } return error.Explained; } } if (alias == null) { const url_z = try arena.allocator().dupeZ(u8, url); const curl_url = try curl.Url.init(); defer curl_url.cleanup(); try curl_url.set(url_z); alias = std.fs.path.basename(std.mem.span(try curl_url.getPath())); const ext = std.fs.path.extension(alias.?); alias = try utils.normalizeName(alias.?[0 .. alias.?.len - ext.len]); if (alias.?.len == 0) { std.log.err("failed to figure out an alias from the url, please manually specify it with '--alias' or '-a'", .{}); return error.Explained; } } return Dependency{ .alias = alias.?, .src = .{ .git = .{ .url = url, .ref = ref, .root = root.?, }, }, }; } pub fn add( allocator: Allocator, src_tag: Dependency.SourceType, alias: ?[]const u8, build_deps: bool, ref: ?[]const u8, root_path: ?[]const u8, repository_opt: ?[]const u8, target: []const u8, ) !void { switch (src_tag) { .pkg, .github, .local, .git => {}, else => return error.Todo, } const repository = repository_opt orelse utils.default_repo; var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); const file = try std.fs.cwd().createFile("gyro.zzz", .{ .truncate = false, .read = true, .exclusive = false, }); defer file.close(); var project = try Project.fromFile(&arena, ".", file); defer project.destroy(); const dep_list = if (build_deps) &project.build_deps else &project.deps; const dep = switch (src_tag) { .github => blk: { const info = try utils.parseUserRepo(target); const url = try std.fmt.allocPrint(arena.allocator(), "https://github.com/{s}/{s}.git", .{ info.user, info.repo, }); break :blk try gitDependency(&arena, url, alias, ref, root_path); }, .git => try gitDependency(&arena, target, alias, ref, root_path), .pkg => blk: { const info = try utils.parseUserRepo(target); const latest = try api.getLatest(arena.allocator(), repository, info.user, info.repo, null); var buf = try arena.allocator().alloc(u8, 80); var stream = std.io.fixedBufferStream(buf); try stream.writer().print("^{}", .{latest}); try verifyUniqueAlias(info.repo, dep_list.items); break :blk Dependency{ .alias = info.repo, .src = .{ .pkg = .{ .user = info.user, .name = info.repo, .version = version.Range{ .min = latest, .kind = .caret, }, .repository = repository, }, }, }; }, .local => blk: { const subproject = try Project.fromDirPath(&arena, target); defer subproject.destroy(); const name = alias orelse try utils.normalizeName(std.fs.path.basename(target)); try verifyUniqueAlias(name, dep_list.items); const root = root_path orelse if (try subproject.findBestMatchingPackage(name)) |pkg| pkg.root orelse utils.default_root else utils.default_root; break :blk Dependency{ .alias = name, .src = .{ .local = .{ .path = target, .root = root, }, }, }; }, else => return error.Todo, }; for (dep_list.items) |d| { if (std.mem.eql(u8, d.alias, dep.alias)) { std.log.err("alias '{s}' is already being used", .{dep.alias}); return error.Explained; } } try dep_list.append(dep); try project.toFile(file); } pub fn rm( 
allocator: Allocator, build_deps: bool, targets: []const []const u8, ) !void { var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); const file = try std.fs.cwd().createFile("gyro.zzz", .{ .truncate = false, .read = true, .exclusive = false, }); defer file.close(); var project = try Project.fromFile(&arena, ".", file); defer project.destroy(); const dep_list = if (build_deps) &project.build_deps else &project.deps; // make sure targets are unique for (targets) |_, i| { var j: usize = i + 1; while (j < targets.len) : (j += 1) { if (std.mem.eql(u8, targets[i], targets[j])) { std.log.err("duplicated target: {s}", .{targets[i]}); return error.Explained; } } } // ensure all targets are valid for (targets) |target| { for (dep_list.items) |dep| { if (std.mem.eql(u8, target, dep.alias)) break; } else { std.log.err("{s} is not a dependency", .{target}); return error.Explained; } } // remove targets for (targets) |target| { for (dep_list.items) |dep, i| { if (std.mem.eql(u8, target, dep.alias)) { _ = dep_list.swapRemove(i); break; } } } try project.toFile(file); } pub fn publish(allocator: Allocator, repository: ?[]const u8, pkg: ?[]const u8) anyerror!void { const client_id = "ea14bba19a49f4cba053"; const scope = "read:user user:email"; var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); const file = std.fs.cwd().openFile("gyro.zzz", .{}) catch |err| { if (err == error.FileNotFound) { std.log.err("missing gyro.zzz file", .{}); return error.Explained; } else return err; }; defer file.close(); var project = try Project.fromFile(&arena, ".", file); defer project.destroy(); if (project.packages.count() == 0) { std.log.err("there are no packages to publish!", .{}); return error.Explained; } validateNoRedirects(allocator) catch |e| switch (e) { error.RedirectsExist => { std.log.err("you need to clear redirects before publishing with 'gyro redirect --clean'", .{}); return error.Explained; }, else => return e, }; const name = if (pkg) |p| blk: { if (!project.contains(p)) { std.log.err("{s} is not a package", .{p}); return error.Explained; } break :blk p; } else if (project.packages.count() == 1) blk: { var it = project.iterator(); break :blk it.next().?.name; } else { std.log.err("there are multiple packages exported, choose one", .{}); return error.Explained; }; var access_token: ?[]const u8 = std.process.getEnvVarOwned(allocator, "GYRO_ACCESS_TOKEN") catch |err| blk: { if (err == error.EnvironmentVariableNotFound) break :blk null else return err; }; defer if (access_token) |at| allocator.free(at); const from_env = access_token != null; if (access_token == null) { access_token = blk: { var dir = if (try known_folders.open(allocator, .cache, .{ .access_sub_paths = true })) |d| d else break :blk null; defer dir.close(); const cache_file = dir.openFile("gyro-access-token", .{}) catch |err| { if (err == error.FileNotFound) break :blk null else return err; }; defer cache_file.close(); break :blk try cache_file.reader().readAllAlloc(allocator, std.math.maxInt(usize)); }; } if (access_token == null) { const open_program: []const u8 = switch (builtin.os.tag) { .windows => "explorer", .macos => "open", else => "xdg-open", }; var browser = std.ChildProcess.init(&.{ open_program, "https://github.com/login/device" }, allocator); _ = browser.spawnAndWait() catch { try std.io.getStdErr().writer().print("Failed to open your browser, please go to https://github.com/login/device", .{}); }; var device_code_resp = try api.postDeviceCode(allocator, client_id, scope); defer 
std.json.parseFree(api.DeviceCodeResponse, device_code_resp, .{ .allocator = allocator }); const stderr = std.io.getStdErr().writer(); try stderr.print("enter this code: {s}\nwaiting for github authentication...\n", .{device_code_resp.user_code}); const end_time = device_code_resp.expires_in + @intCast(u64, std.time.timestamp()); const interval_ns = device_code_resp.interval * std.time.ns_per_s; access_token = while (std.time.timestamp() < end_time) : (std.time.sleep(interval_ns)) { if (try api.pollDeviceCode(allocator, client_id, device_code_resp.device_code)) |resp| { if (try known_folders.open(allocator, .cache, .{ .access_sub_paths = true })) |d| { var dir = d; defer dir.close(); const cache_file = try dir.createFile("gyro-access-token", .{ .truncate = true }); defer cache_file.close(); try cache_file.writer().writeAll(resp); } break resp; } } else { std.log.err("timed out while polling for device authorization", .{}); return error.Explained; }; } if (access_token == null) { std.log.err("failed to get access token", .{}); return error.Explained; } api.postPublish(allocator, repository, access_token.?, project.get(name).?) catch |err| switch (err) { error.Unauthorized => { if (from_env) { std.log.err("the access token from the env var 'GYRO_ACCESS_TOKEN' is using an outdated format for github. You need to get a new one.", .{}); return error.Explained; } std.log.info("looks like you were using an old token, deleting your cached one.", .{}); if (try known_folders.open(allocator, .cache, .{ .access_sub_paths = true })) |d| { var dir = d; defer dir.close(); try dir.deleteFile("gyro-access-token"); } std.log.info("getting you a new token...", .{}); try publish(allocator, repository, pkg); }, else => return err, }; } fn validateDepsAliases(redirected_deps: []const Dependency, project_deps: []const Dependency) !void { for (redirected_deps) |redirected_dep| { for (project_deps) |project_dep| { if (std.mem.eql(u8, redirected_dep.alias, project_dep.alias)) break; } else { std.log.err("'{s}' redirect does not exist in project dependencies", .{redirected_dep.alias}); return error.Explained; } } } fn moveDeps(redirected_deps: []const Dependency, project_deps: []Dependency) !void { for (redirected_deps) |redirected_dep| { for (project_deps) |*project_dep| { if (std.mem.eql(u8, redirected_dep.alias, project_dep.alias)) { project_dep.* = redirected_dep; break; } } else unreachable; } } /// make sure there are no entries in the redirect file fn validateNoRedirects(allocator: Allocator) !void { var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); var gyro_dir = try std.fs.cwd().makeOpenPath(".gyro", .{}); defer gyro_dir.close(); const redirect_file = try gyro_dir.createFile("redirects", .{ .truncate = false, .read = true, }); defer redirect_file.close(); var redirects = try Project.fromFile(&arena, ".", redirect_file); defer redirects.destroy(); if (redirects.deps.items.len > 0 or redirects.build_deps.items.len > 0) { return error.RedirectsExist; } } pub fn redirect( allocator: Allocator, check: bool, clean: bool, build_dep: bool, alias_opt: ?[]const u8, path_opt: ?[]const u8, ) !void { const do_redirect = alias_opt != null or path_opt != null; if ((check and clean) or (check and do_redirect) or (clean and do_redirect)) { std.log.err("you can only do one at a time: clean, check, or redirect", .{}); return error.Explained; } var arena = ThreadSafeArenaAllocator.init(allocator); defer arena.deinit(); const project_file = try std.fs.cwd().openFile("gyro.zzz", .{ .mode = .read_write }); defer
project_file.close(); var gyro_dir = try std.fs.cwd().makeOpenPath(".gyro", .{}); defer gyro_dir.close(); const redirect_file = try gyro_dir.createFile("redirects", .{ .truncate = false, .read = true, }); defer redirect_file.close(); var project = try Project.fromFile(&arena, ".", project_file); defer project.destroy(); var redirects = try Project.fromFile(&arena, ".", redirect_file); defer redirects.destroy(); if (check) { if (redirects.deps.items.len > 0 or redirects.build_deps.items.len > 0) { std.log.err("there are gyro redirects", .{}); return error.Explained; } else return; } else if (clean) { try validateDepsAliases(redirects.deps.items, project.deps.items); try validateDepsAliases(redirects.build_deps.items, project.build_deps.items); try moveDeps(redirects.deps.items, project.deps.items); try moveDeps(redirects.build_deps.items, project.build_deps.items); redirects.deps.clearRetainingCapacity(); redirects.build_deps.clearRetainingCapacity(); } else { const alias = alias_opt orelse { std.log.err("missing alias argument", .{}); return error.Explained; }; const path = path_opt orelse { std.log.err("missing path argument", .{}); return error.Explained; }; const deps = if (build_dep) &project.build_deps else &project.deps; const dep = for (deps.items) |*d| { if (std.mem.eql(u8, d.alias, alias)) break d; } else { const deps_type = if (build_dep) "build dependencies" else "dependencies"; std.log.err("Failed to find '{s}' in {s}", .{ alias, deps_type }); return error.Explained; }; const redirect_deps = if (build_dep) &redirects.build_deps else &redirects.deps; for (redirect_deps.items) |d| if (std.mem.eql(u8, d.alias, alias)) { std.log.err("'{s}' is already redirected", .{alias}); return error.Explained; }; try redirect_deps.append(dep.*); const root = switch (dep.src) { .pkg => |pkg| blk: { var local_project = try Project.fromDirPath(&arena, path); defer local_project.destroy(); const result = local_project.packages.get(pkg.name) orelse { std.log.err("the project located in {s} doesn't export '{s}'", .{ path, alias, }); return error.Explained; }; // TODO: the orelse here should probably be an error break :blk try arena.allocator().dupe(u8, result.root orelse utils.default_root); }, .github => |github| github.root, .git => |git| git.root, .url => |url| url.root, .local => |local| local.root, }; dep.* = Dependency{ .alias = alias, .src = .{ .local = .{ .path = path, .root = root, }, }, }; } try redirects.toFile(redirect_file); try project.toFile(project_file); }
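// NOTE: illustrative test (not in the original source) for the lockfile
// migration above: a legacy "github <user> <repo> <ref> <root> <commit>"
// entry should be rewritten into the newer
// "git https://github.com/<user>/<repo>.git <ref> <root> <commit>" form.
// The user/repo/ref/commit values below are made up for the test.
test "migrateGithubLockfile converts legacy entries" {
    var tmp = std.testing.tmpDir(.{});
    defer tmp.cleanup();

    const file = try tmp.dir.createFile("gyro.lock", .{ .read = true });
    defer file.close();

    try file.writeAll("github someuser somerepo main src/main.zig abc123\n");
    try file.seekTo(0);

    try migrateGithubLockfile(std.testing.allocator, file);

    const text = try file.readToEndAlloc(std.testing.allocator, std.math.maxInt(usize));
    defer std.testing.allocator.free(text);
    try std.testing.expectEqualStrings(
        "git https://github.com/someuser/somerepo.git main src/main.zig abc123\n",
        text,
    );
}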
0
repos/gyro
repos/gyro/src/main.zig
const std = @import("std"); const builtin = @import("builtin"); const clap = @import("clap"); const curl = @import("curl"); const Dependency = @import("Dependency.zig"); const cmds = @import("commands.zig"); const loadSystemCerts = @import("certs.zig").loadSystemCerts; const Display = @import("Display.zig"); const utils = @import("utils.zig"); const c = @cImport({ @cInclude("git2.h"); @cInclude("mbedtls/debug.h"); }); export fn gai_strerrorA(err: c_int) [*c]u8 { _ = err; return null; } extern fn git_mbedtls__insecure() void; extern fn git_mbedtls__set_debug() void; pub const log_level: std.log.Level = if (builtin.mode == .Debug) .debug else .info; pub var display: Display = undefined; pub fn log( comptime level: std.log.Level, comptime scope: @TypeOf(.EnumLiteral), comptime format: []const u8, args: anytype, ) void { display.log(level, scope, format, args); } pub fn main() !void { var exit_val: u8 = 0; { const allocator = std.heap.c_allocator; try Display.init(&display, allocator); defer display.deinit(); try curl.globalInit(); defer curl.globalCleanup(); if (builtin.mode == .Debug) c.mbedtls_debug_set_threshold(1); const rc = c.git_libgit2_init(); if (rc < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.Libgit2Init; } defer _ = c.git_libgit2_shutdown(); try loadSystemCerts(allocator); if (!(builtin.target.os.tag == .linux) or std.process.hasEnvVarConstant("GYRO_INSECURE")) git_mbedtls__insecure(); runCommands(allocator) catch |err| { switch (err) { error.Explained => exit_val = 1, else => return err, } }; } std.process.exit(exit_val); } // prints gyro command usage to stderr fn usage() !void { const stderr = std.io.getStdErr().writer(); try stderr.writeAll("Usage: gyro [command] [options]\n\n"); try stderr.writeAll("Commands:\n\n"); inline for (@typeInfo(commands).Struct.decls) |decl| { try stderr.print(" {s: <10} {s}\n", .{ decl.name, @field(commands, decl.name).description }); } try stderr.writeAll("\nOptions:\n\n"); try stderr.print(" {s: <10} Print command-specific usage\n\n", .{"-h, --help"}); } // prints usage and help for a single command fn help(comptime name: []const u8, comptime command: type) !void { const stderr = std.io.getStdErr().writer(); try stderr.writeAll("Usage: gyro " ++ name ++ " "); try clap.usage(stderr, clap.Help, &command.params); try stderr.print("\n\n{s}\n", .{command.description}); try stderr.writeAll("\nOptions:\n\n"); try clap.help(stderr, clap.Help, &command.params, .{}); try stderr.writeAll("\n"); } fn runCommands(allocator: std.mem.Allocator) !void { const stderr = std.io.getStdErr().writer(); var iter = try std.process.ArgIterator.initWithAllocator(allocator); defer iter.deinit(); // skip process name _ = iter.next(); const command_name = (iter.next()) orelse { try usage(); std.log.err("expected command argument", .{}); return error.Explained; }; inline for (@typeInfo(commands).Struct.decls) |decl| { const cmd = @field(commands, decl.name); // special handling for build subcommand since it passes through // arguments to build runner const is_build = std.mem.eql(u8, "build", decl.name); if (std.mem.eql(u8, command_name, decl.name)) { var args = if (!is_build) blk: { var diag = clap.Diagnostic{}; var res = clap.parseEx(clap.Help, &cmd.params, clap.parsers.default, &iter, .{ .diagnostic = &diag, }) catch |err| { // Report useful error and exit diag.report(stderr, err) catch {}; try help(decl.name, cmd); return error.Explained; }; if (res.args.help) { try help(decl.name, cmd); return; } break :blk res; } 
else undefined; defer if (!is_build) args.deinit(); try cmd.run(allocator, &args, &iter); return; } } else { try usage(); std.log.err("{s} is not a valid command", .{command_name}); return error.Explained; } } pub const commands = struct { pub const init = struct { pub const description = "Initialize a gyro.zzz with a link to a github repo"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\<str> \\ ); pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void { const repo = if (res.positionals.len == 1) res.positionals[0] else { std.log.err("that's too many args, please just give me one in the form of a link to your github repo or just '<user>/<repo>'", .{}); return error.Explained; }; try cmds.init(allocator, repo); } }; pub const add = struct { pub const description = "Add dependencies to the project"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\-s, --src <str> Set type of dependency \\-a, --alias <str> Override what string the package is imported with \\-b, --build_dep Add this as a build dependency \\-r, --root <str> Set root path with respect to the project root, default is 'src/main.zig' \\ --ref <str> Commit, tag, or branch to reference for git or github source types \\ --repository <str> The package repository you want to add a package from, default is astrolabe.pm \\<str> \\ ); pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void { if (res.positionals.len != 1) { std.log.err("only specify one package at a time", .{}); return error.Explained; } const src_str = res.args.src orelse "pkg"; const src_tag = inline for (std.meta.fields(Dependency.SourceType)) |field| { if (std.mem.eql(u8, src_str, field.name)) break @field(Dependency.SourceType, field.name); } else { std.log.err("{s} is not a valid source type", .{src_str}); return error.Explained; }; try cmds.add( allocator, src_tag, res.args.alias, res.args.build_dep, res.args.ref, res.args.root, res.args.repository, res.positionals[0], ); } }; pub const rm = struct { pub const description = "Remove dependencies from the project"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\-b, --build_dep Remove this as a build dependency \\<str>... \\ ); pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void { try cmds.rm(allocator, res.args.build_dep, res.positionals); } }; pub const build = struct { pub const description = "Wrapper around 'zig build', automatically downloads dependencies"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\<str>...
\\ ); pub fn run(allocator: std.mem.Allocator, _: anytype, iterator: *std.process.ArgIterator) !void { try cmds.build(allocator, iterator); } }; pub const fetch = struct { pub const description = "Manually download dependencies and generate deps.zig file"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\ ); pub fn run(allocator: std.mem.Allocator, _: anytype, _: *std.process.ArgIterator) !void { try cmds.fetch(allocator); } }; pub const update = struct { pub const description = "Update project dependencies to latest"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\<str> ); pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void { try cmds.update(allocator, res.positionals); } }; pub const publish = struct { pub const description = "Publish package to astrolabe.pm, requires github account"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\-r, --repository <str> The package repository you want to publish to, default is astrolabe.pm \\<str> \\ ); pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void { try cmds.publish( allocator, res.args.repository, if (res.positionals.len > 0) res.positionals[0] else null, ); } }; pub const package = struct { pub const description = "Generate a tar file for publishing"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\-o, --output_dir <str> Set package output directory \\<str> \\ ); pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void { try cmds.package(allocator, res.args.output_dir, res.positionals); } }; pub const redirect = struct { pub const description = "Manage local development"; pub const params = clap.parseParamsComptime( \\-h, --help Display help \\-c, --clean Undo all local redirects \\-a, --alias <str> Package to redirect \\-p, --path <str> Project root directory \\-b, --build_dep Redirect a build dependency \\ --check Return successfully if there are no redirects (intended for git pre-commit hook) \\ ); pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void { try cmds.redirect(allocator, res.args.check, res.args.clean, res.args.build_dep, res.args.alias, res.args.path); } }; }; test "all" { std.testing.refAllDecls(@import("Dependency.zig")); }
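// NOTE: hedged sketch (not part of the original source). Subcommands are
// plain structs with `description`, `params`, and `run`; usage() and
// runCommands() discover them through @typeInfo(commands).Struct.decls,
// so registering a new one just means adding a decl like this inside
// `commands` above. The command below is hypothetical and, since it lives
// outside `commands`, is never dispatched.
pub const example_command = struct {
    pub const description = "Hypothetical subcommand shown for illustration only";
    pub const params = clap.parseParamsComptime(
        \\-h, --help Display help
        \\<str>
        \\
    );
    pub fn run(allocator: std.mem.Allocator, res: anytype, _: *std.process.ArgIterator) !void {
        _ = allocator;
        _ = res;
        return error.Explained;
    }
};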
0
repos/gyro
repos/gyro/src/pkg.zig
const std = @import("std"); const builtin = @import("builtin"); const main = @import("root"); const curl = @import("curl"); const version = @import("version"); const zzz = @import("zzz"); const Engine = @import("Engine.zig"); const Dependency = @import("Dependency.zig"); const api = @import("api.zig"); const cache = @import("cache.zig"); const utils = @import("utils.zig"); const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig"); const Allocator = std.mem.Allocator; const testing = std.testing; const assert = std.debug.assert; pub const name = "pkg"; pub const Resolution = version.Semver; pub const ResolutionEntry = struct { repository: []const u8, user: []const u8, name: []const u8, semver: version.Semver, dep_idx: ?usize, pub fn format( entry: ResolutionEntry, comptime fmt: []const u8, options: std.fmt.FormatOptions, writer: anytype, ) @TypeOf(writer).Error!void { _ = fmt; _ = options; try writer.print("{s}/{s}/{s}: {} -> {}", .{ entry.repository, entry.user, entry.name, entry.semver, entry.dep_idx, }); } }; pub const FetchError = @typeInfo(@typeInfo(@TypeOf(api.getLatest)).Fn.return_type.?).ErrorUnion.error_set || @typeInfo(@typeInfo(@TypeOf(fetch)).Fn.return_type.?).ErrorUnion.error_set; const FetchQueue = Engine.MultiQueueImpl(Resolution, FetchError); const ResolutionTable = std.ArrayListUnmanaged(ResolutionEntry); pub fn deserializeLockfileEntry( allocator: Allocator, it: *std.mem.TokenIterator(u8), resolutions: *ResolutionTable, ) !void { const repo = it.next() orelse return error.NoRepo; try resolutions.append(allocator, .{ .repository = if (std.mem.eql(u8, repo, "default")) "astrolabe.pm" else repo, .user = it.next() orelse return error.NoUser, .name = it.next() orelse return error.NoName, .semver = try version.Semver.parse(it.next() orelse return error.NoVersion), .dep_idx = null, }); } pub fn serializeResolutions( resolutions: []const ResolutionEntry, writer: anytype, ) !void { for (resolutions) |resolution| if (resolution.dep_idx != null) try writer.print("pkg {s} {s} {s} {}\n", .{ resolution.repository, resolution.user, resolution.name, resolution.semver, }); } pub fn findResolution(dep: Dependency.Source, resolutions: []const ResolutionEntry) ?usize { return for (resolutions) |entry, j| { if (std.mem.eql(u8, dep.pkg.repository, entry.repository) and std.mem.eql(u8, dep.pkg.user, entry.user) and std.mem.eql(u8, dep.pkg.name, entry.name) and dep.pkg.version.contains(entry.semver)) { break j; } } else null; } fn findMatch(dep_table: []const Dependency.Source, dep_idx: usize, edges: []const Engine.Edge) ?usize { // TODO: handle different version range kinds const dep = dep_table[dep_idx].pkg; return for (edges) |edge| { const other = dep_table[edge.to].pkg; if (std.mem.eql(u8, dep.repository, other.repository) and std.mem.eql(u8, dep.user, other.user) and std.mem.eql(u8, dep.name, other.name) and (dep.version.contains(other.version.min) or other.version.contains(dep.version.min))) { break edge.to; } } else null; } fn updateBasePaths( arena: *ThreadSafeArenaAllocator, base_path: []const u8, deps: *std.ArrayListUnmanaged(Dependency), ) !void { for (deps.items) |*dep| if (dep.src == .local) { const resolved = try std.fs.path.resolve( arena.child_allocator, &.{ base_path, dep.src.local.path }, ); defer arena.child_allocator.free(resolved); dep.src.local.path = try std.fs.path.relative(arena.allocator(), ".", resolved); }; } fn fmtCachePath( allocator: Allocator, pkg_name: []const u8, user: []const u8, semver: version.Semver, repository: []const u8, ) ![]const u8 { 
    return std.fmt.allocPrint(allocator, "{s}-{s}-{}-{s}", .{
        pkg_name,
        user,
        semver,
        repository,
    });
}

pub fn resolutionToCachePath(
    allocator: Allocator,
    res: ResolutionEntry,
) ![]const u8 {
    return fmtCachePath(
        allocator,
        res.name,
        res.user,
        res.semver,
        res.repository,
    );
}

fn progressCb(
    data: ?*anyopaque,
    dltotal: curl.Offset,
    dlnow: curl.Offset,
    ultotal: curl.Offset,
    ulnow: curl.Offset,
) callconv(.C) c_int {
    _ = ultotal;
    _ = ulnow;

    const handle = @ptrCast(*usize, @alignCast(@alignOf(*usize), data orelse return 0)).*;
    main.display.updateEntry(handle, .{
        .progress = .{
            .current = @intCast(usize, dlnow),
            .total = @intCast(usize, if (dltotal == 0) 1 else dltotal),
        },
    }) catch {};

    return 0;
}

fn fetch(
    arena: *ThreadSafeArenaAllocator,
    dep: Dependency.Source,
    semver: Resolution,
    deps: *std.ArrayListUnmanaged(Dependency),
    path: *?[]const u8,
) !void {
    const allocator = arena.child_allocator;
    const entry_name = try fmtCachePath(
        allocator,
        dep.pkg.name,
        dep.pkg.user,
        semver,
        dep.pkg.repository,
    );
    defer allocator.free(entry_name);

    var entry = try cache.getEntry(entry_name);
    defer entry.deinit();

    if (!try entry.isDone()) {
        var handle = try main.display.createEntry(.{
            .pkg = .{
                .repository = dep.pkg.repository,
                .user = dep.pkg.user,
                .name = dep.pkg.name,
                .semver = semver,
            },
        });
        errdefer main.display.updateEntry(handle, .{ .err = {} }) catch {};

        const xfer_ctx = api.XferCtx{
            .cb = progressCb,
            .data = &handle,
        };

        try api.getPkg(
            allocator,
            dep.pkg.repository,
            dep.pkg.user,
            dep.pkg.name,
            semver,
            entry.dir,
            xfer_ctx,
        );
        try entry.done();
    }

    const base_path = try std.fs.path.join(allocator, &.{ ".gyro", entry_name, "pkg" });
    defer allocator.free(base_path);

    const manifest = try entry.dir.openFile("manifest.zzz", .{});
    defer manifest.close();

    const text = try manifest.reader().readAllAlloc(arena.allocator(), std.math.maxInt(usize));
    var ztree = zzz.ZTree(1, 1000){};
    var root = try ztree.appendText(text);
    if (utils.zFindChild(root, "deps")) |deps_node| {
        var it = utils.ZChildIterator.init(deps_node);
        while (it.next()) |node|
            try deps.append(
                allocator,
                try Dependency.fromZNode(arena, node),
            );
    }

    try updateBasePaths(arena, base_path, deps);
    path.* = try utils.joinPathConvertSep(arena, &.{
        ".gyro",
        entry_name,
        "pkg",
        (try utils.zFindString(root, "root")) orelse {
            std.log.err("fatal: manifest missing pkg root path: {s}/{s}/{s} {}", .{
                dep.pkg.repository,
                dep.pkg.user,
                dep.pkg.name,
                semver,
            });
            return error.Explained;
        },
    });
}

pub fn dedupeResolveAndFetch(
    arena: *ThreadSafeArenaAllocator,
    dep_table: []const Dependency.Source,
    resolutions: []const ResolutionEntry,
    fetch_queue: *FetchQueue,
    i: usize,
) void {
    dedupeResolveAndFetchImpl(
        arena,
        dep_table,
        resolutions,
        fetch_queue,
        i,
    ) catch |err| {
        fetch_queue.items(.result)[i] = .{ .err = err };
    };
}

fn dedupeResolveAndFetchImpl(
    arena: *ThreadSafeArenaAllocator,
    dep_table: []const Dependency.Source,
    resolutions: []const ResolutionEntry,
    fetch_queue: *FetchQueue,
    i: usize,
) FetchError!void {
    const dep_idx = fetch_queue.items(.edge)[i].to;

    // first consult the lockfile, then look for duplicates earlier in the
    // batch, and only hit the network for a genuinely new dependency
    if (findResolution(dep_table[dep_idx], resolutions)) |res_idx| {
        if (resolutions[res_idx].dep_idx) |idx| {
            fetch_queue.items(.result)[i] = .{
                .replace_me = idx,
            };
            return;
        } else if (findMatch(dep_table, dep_idx, fetch_queue.items(.edge)[0..i])) |idx| {
            fetch_queue.items(.result)[i] = .{
                .replace_me = idx,
            };
            return;
        } else {
            fetch_queue.items(.result)[i] = .{
                .fill_resolution = res_idx,
            };
        }
    } else if (findMatch(dep_table, dep_idx, fetch_queue.items(.edge)[0..i])) |idx| {
        fetch_queue.items(.result)[i] = .{
            .replace_me = idx,
        };
        return;
    } else {
        fetch_queue.items(.result)[i] = .{
            .new_entry = try api.getLatest(
                arena.allocator(),
                dep_table[dep_idx].pkg.repository,
                dep_table[dep_idx].pkg.user,
                dep_table[dep_idx].pkg.name,
                dep_table[dep_idx].pkg.version,
            ),
        };
    }

    const resolution = switch (fetch_queue.items(.result)[i]) {
        .fill_resolution => |res_idx| resolutions[res_idx].semver,
        .new_entry => |semver| semver,
        else => unreachable,
    };

    try fetch(
        arena,
        dep_table[dep_idx],
        resolution,
        &fetch_queue.items(.deps)[i],
        &fetch_queue.items(.path)[i],
    );

    assert(fetch_queue.items(.path)[i] != null);
}

pub fn updateResolution(
    allocator: Allocator,
    resolutions: *ResolutionTable,
    dep_table: []const Dependency.Source,
    fetch_queue: *FetchQueue,
    i: usize,
) !void {
    switch (fetch_queue.items(.result)[i]) {
        .fill_resolution => |res_idx| {
            const dep_idx = fetch_queue.items(.edge)[i].to;
            assert(resolutions.items[res_idx].dep_idx == null);
            resolutions.items[res_idx].dep_idx = dep_idx;
        },
        .new_entry => |semver| {
            const dep_idx = fetch_queue.items(.edge)[i].to;
            const pkg = &dep_table[dep_idx].pkg;
            try resolutions.append(allocator, .{
                .repository = pkg.repository,
                .user = pkg.user,
                .name = pkg.name,
                .semver = semver,
                .dep_idx = dep_idx,
            });
        },
        .replace_me => |dep_idx| fetch_queue.items(.edge)[i].to = dep_idx,
        .err => |err| return err,
        else => unreachable,
    }
}

test "deserializeLockfileEntry" {
    const lines = [_][]const u8{
        "default matt something 0.1.0",
        "my_own_repository matt foo 0.2.0",
    };

    var expected = ResolutionTable{};
    defer expected.deinit(testing.allocator);
    try expected.append(testing.allocator, .{
        .repository = "astrolabe.pm",
        .user = "matt",
        .name = "something",
        .semver = .{ .major = 0, .minor = 1, .patch = 0 },
        .dep_idx = null,
    });
    try expected.append(testing.allocator, .{
        .repository = "my_own_repository",
        .user = "matt",
        .name = "foo",
        .semver = .{ .major = 0, .minor = 2, .patch = 0 },
        .dep_idx = null,
    });

    var resolutions = ResolutionTable{};
    defer resolutions.deinit(testing.allocator);
    for (lines) |line| {
        var it = std.mem.tokenize(u8, line, " ");
        try deserializeLockfileEntry(testing.allocator, &it, &resolutions);
    }

    for (resolutions.items) |resolution, i| {
        try testing.expectEqualStrings(expected.items[i].repository, resolution.repository);
        try testing.expectEqualStrings(expected.items[i].user, resolution.user);
        try testing.expectEqualStrings(expected.items[i].name, resolution.name);
        try testing.expectEqual(expected.items[i].semver, resolution.semver);
    }
}

test "serializeResolutions" {
    var resolutions = ResolutionTable{};
    defer resolutions.deinit(testing.allocator);
    try resolutions.append(testing.allocator, .{
        .repository = "astrolabe.pm",
        .user = "matt",
        .name = "something",
        .semver = .{ .major = 0, .minor = 1, .patch = 0 },
        .dep_idx = null,
    });
    try resolutions.append(testing.allocator, .{
        .repository = "my_own_repository",
        .user = "matt",
        .name = "foo",
        .semver = .{ .major = 0, .minor = 2, .patch = 0 },
        .dep_idx = null,
    });

    var buf: [4096]u8 = undefined;
    var fb = std.io.fixedBufferStream(&buf);
    const expected =
        \\pkg astrolabe.pm matt something 0.1.0
        \\pkg my_own_repository matt foo 0.2.0
        \\
    ;

    try serializeResolutions(resolutions.items, fb.writer());
    try testing.expectEqualStrings(expected, buf[0..expected.len]);
}

test "dedupeResolveAndFetch: existing resolution" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const dep = Dependency.Source{
        .pkg = .{
            .repository = "astrolabe.pm",
            .user = "matt",
            .name = "foo",
            .version = .{
                .min = .{ .major = 0, .minor = 2, .patch = 0 },
                .kind = .caret,
            },
        },
    };
    const resolution = ResolutionEntry{
        .repository = "astrolabe.pm",
        .user = "matt",
        .name = "foo",
        .semver = .{ .major = 0, .minor = 2, .patch = 0 },
        .dep_idx = 5,
    };

    var fetch_queue = FetchQueue{};
    defer fetch_queue.deinit(testing.allocator);
    try fetch_queue.append(testing.allocator, .{
        .edge = .{
            .from = .{ .root = .normal },
            .to = 0,
            .alias = "blarg",
        },
        .deps = std.ArrayListUnmanaged(Dependency){},
    });

    try dedupeResolveAndFetch(&arena, &.{dep}, &.{resolution}, &fetch_queue, 0);
    try testing.expectEqual(resolution.dep_idx, fetch_queue.items(.result)[0].replace_me);
}

test "dedupeResolveAndFetch: resolution without index" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const dep = Dependency.Source{
        .pkg = .{
            .repository = "astrolabe.pm",
            .user = "mattnite",
            .name = "version",
            .version = .{
                .min = .{ .major = 0, .minor = 1, .patch = 0 },
                .kind = .caret,
            },
        },
    };

    var resolutions = ResolutionTable{};
    defer resolutions.deinit(testing.allocator);
    try resolutions.append(testing.allocator, .{
        .repository = "astrolabe.pm",
        .user = "mattnite",
        .name = "glob",
        .semver = .{ .major = 0, .minor = 0, .patch = 0 },
        .dep_idx = null,
    });
    try resolutions.append(testing.allocator, .{
        .repository = "astrolabe.pm",
        .user = "mattnite",
        .name = "version",
        .semver = .{ .major = 0, .minor = 1, .patch = 0 },
        .dep_idx = null,
    });

    var fetch_queue = FetchQueue{};
    defer fetch_queue.deinit(testing.allocator);
    try fetch_queue.append(testing.allocator, .{
        .edge = .{
            .from = .{ .root = .normal },
            .to = 0,
            .alias = "blarg",
        },
        .deps = std.ArrayListUnmanaged(Dependency){},
    });

    try dedupeResolveAndFetch(&arena, &.{dep}, resolutions.items, &fetch_queue, 0);
    try testing.expectEqual(@as(usize, 1), fetch_queue.items(.result)[0].fill_resolution);

    try updateResolution(testing.allocator, &resolutions, &.{dep}, &fetch_queue, 0);
    try testing.expectEqual(@as(?usize, 0), resolutions.items[1].dep_idx);
}

test "dedupeResolveAndFetch: new entry" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const deps = &.{
        Dependency.Source{
            .pkg = .{
                .repository = "astrolabe.pm",
                .user = "mattnite",
                .name = "download",
                .version = .{
                    .min = .{ .major = 0, .minor = 1, .patch = 0 },
                    .kind = .caret,
                },
            },
        },
        Dependency.Source{
            .pkg = .{
                .repository = "astrolabe.pm",
                .user = "mattnite",
                .name = "download",
                .version = .{
                    .min = .{ .major = 0, .minor = 1, .patch = 2 },
                    .kind = .caret,
                },
            },
        },
    };

    var resolutions = ResolutionTable{};
    defer resolutions.deinit(testing.allocator);

    var fetch_queue = FetchQueue{};
    defer fetch_queue.deinit(testing.allocator);
    try fetch_queue.append(testing.allocator, .{
        .edge = .{
            .from = .{ .root = .normal },
            .to = 0,
            .alias = "foo",
        },
        .deps = std.ArrayListUnmanaged(Dependency){},
    });
    try fetch_queue.append(testing.allocator, .{
        .edge = .{
            .from = .{ .root = .normal },
            .to = 1,
            .alias = "blarg",
        },
        .deps = std.ArrayListUnmanaged(Dependency){},
    });

    for (fetch_queue.items(.edge)) |_, i|
        try dedupeResolveAndFetch(&arena, deps, resolutions.items, &fetch_queue, i);

    for (fetch_queue.items(.edge)) |_, i|
        try updateResolution(testing.allocator, &resolutions, deps, &fetch_queue, i);

    try testing.expect(fetch_queue.items(.result)[0] == .new_entry);
    try testing.expectEqual(@TypeOf(fetch_queue.items(.result)[0]){ .replace_me = 0 }, fetch_queue.items(.result)[1]);
    try testing.expectEqual(@as(usize, 0), fetch_queue.items(.result)[1].replace_me);
    try testing.expectEqual(@as(?usize, 0), resolutions.items[0].dep_idx);
}

test "dedupeResolveAndFetch: collision in batch" {
    var arena = ThreadSafeArenaAllocator.init(testing.allocator);
    defer arena.deinit();

    const deps = &.{
        Dependency.Source{
            .pkg = .{
                .repository = "astrolabe.pm",
                .user = "mattnite",
                .name = "download",
                .version = .{
                    .min = .{ .major = 0, .minor = 1, .patch = 0 },
                    .kind = .caret,
                },
            },
        },
        Dependency.Source{
            .pkg = .{
                .repository = "astrolabe.pm",
                .user = "mattnite",
                .name = "download",
                .version = .{
                    .min = .{ .major = 0, .minor = 1, .patch = 2 },
                    .kind = .caret,
                },
            },
        },
    };

    var resolutions = ResolutionTable{};
    defer resolutions.deinit(testing.allocator);
    try resolutions.append(testing.allocator, .{
        .repository = "astrolabe.pm",
        .user = "mattnite",
        .name = "download",
        .semver = .{ .major = 0, .minor = 1, .patch = 2 },
        .dep_idx = null,
    });

    var fetch_queue = FetchQueue{};
    defer fetch_queue.deinit(testing.allocator);
    try fetch_queue.append(testing.allocator, .{
        .edge = .{
            .from = .{ .root = .normal },
            .to = 0,
            .alias = "foo",
        },
        .deps = std.ArrayListUnmanaged(Dependency){},
    });
    try fetch_queue.append(testing.allocator, .{
        .edge = .{
            .from = .{ .root = .normal },
            .to = 1,
            .alias = "blarg",
        },
        .deps = std.ArrayListUnmanaged(Dependency){},
    });

    for (fetch_queue.items(.edge)) |_, i|
        try dedupeResolveAndFetch(&arena, deps, resolutions.items, &fetch_queue, i);

    for (fetch_queue.items(.edge)) |_, i|
        try updateResolution(testing.allocator, &resolutions, deps, &fetch_queue, i);

    try testing.expectEqual(@as(usize, 0), fetch_queue.items(.result)[0].fill_resolution);
    try testing.expectEqual(fetch_queue.items(.edge)[0].to, fetch_queue.items(.result)[1].replace_me);
    try testing.expectEqual(@as(?usize, 0), resolutions.items[0].dep_idx);
}
0
repos/gyro
repos/gyro/src/api.zig
const std = @import("std"); const version = @import("version"); const tar = @import("tar"); const zzz = @import("zzz"); const Dependency = @import("Dependency.zig"); const Package = @import("Package.zig"); const utils = @import("utils.zig"); const curl = @import("curl"); const Allocator = std.mem.Allocator; const Fifo = std.fifo.LinearFifo(u8, .{ .Dynamic = {} }); const userAgent = "user-agent: gyro/0.7.0"; pub fn getLatest( allocator: Allocator, repository: []const u8, user: []const u8, package: []const u8, range: ?version.Range, ) !version.Semver { const url = if (range) |r| try std.fmt.allocPrintZ(allocator, "https://{s}/pkgs/{s}/{s}/latest?v={u}", .{ repository, user, package, r, }) else try std.fmt.allocPrintZ(allocator, "https://{s}/pkgs/{s}/{s}/latest", .{ repository, user, package, }); defer allocator.free(url); var fifo = Fifo.init(allocator); defer fifo.deinit(); const easy = try curl.Easy.init(); defer easy.cleanup(); try easy.setUrl(url); try easy.setSslVerifyPeer(false); try easy.setWriteFn(curl.writeToFifo(Fifo)); try easy.setWriteData(&fifo); try easy.perform(); const status_code = try easy.getResponseCode(); switch (status_code) { 200 => {}, 404 => { if (range) |r| { std.log.err("failed to find {} for {s}/{s} on {s}", .{ r, user, package, repository, }); } else { std.log.err("failed to find latest for {s}/{s} on {s}", .{ user, package, repository, }); } return error.Explained; }, else => { const stderr = std.io.getStdErr().writer(); try stderr.print("got http status code for {s}: {}", .{ url, status_code }); try stderr.print("{s}\n", .{fifo.readableSlice(0)}); return error.Explained; }, } return version.Semver.parse(fifo.readableSlice(0)); } pub const XferCtx = struct { cb: curl.XferInfoFn, data: ?*anyopaque, }; pub fn getPkg( allocator: Allocator, repository: []const u8, user: []const u8, package: []const u8, semver: version.Semver, dir: std.fs.Dir, xfer_ctx: ?XferCtx, ) !void { const url = try std.fmt.allocPrintZ( allocator, "https://{s}/archive/{s}/{s}/{}", .{ repository, user, package, semver, }, ); defer allocator.free(url); try getTarGz(allocator, url, dir, xfer_ctx); } // not a super huge fan of allocating the entire response over streaming, but // it'll do for now, at least it's compressed lol fn getTarGzImpl( allocator: Allocator, url: [:0]const u8, dir: std.fs.Dir, skip_depth: usize, xfer: ?XferCtx, ) !void { var fifo = Fifo.init(allocator); defer fifo.deinit(); var headers = curl.HeaderList.init(); defer headers.freeAll(); try headers.append("Accept: */*"); const easy = try curl.Easy.init(); defer easy.cleanup(); try easy.setUrl(url); try easy.setHeaders(headers); try easy.setSslVerifyPeer(false); try easy.setWriteFn(curl.writeToFifo(Fifo)); try easy.setWriteData(&fifo); if (xfer) |x| { try easy.setXferInfoFn(x.cb); if (x.data) |data| try easy.setXferInfoData(data); try easy.setNoProgress(false); } try easy.perform(); const status_code = try easy.getResponseCode(); if (status_code != 200) { std.log.err("http status code: {}", .{status_code}); return error.HttpError; } var gzip = try std.compress.gzip.gzipStream(allocator, fifo.reader()); defer gzip.deinit(); try tar.instantiate(allocator, dir, gzip.reader(), skip_depth); } pub fn getTarGz( allocator: Allocator, url: [:0]const u8, dir: std.fs.Dir, xfer_ctx: ?XferCtx, ) !void { try getTarGzImpl(allocator, url, dir, 0, xfer_ctx); } pub fn getGithubRepo( allocator: Allocator, user: []const u8, repo: []const u8, ) !std.json.ValueTree { const url = try std.fmt.allocPrintZ( allocator, 
"https://api.github.com/repos/{s}/{s}", .{ user, repo }, ); defer allocator.free(url); var fifo = Fifo.init(allocator); defer fifo.deinit(); var headers = curl.HeaderList.init(); defer headers.freeAll(); try headers.append("Accept: application/vnd.github.v3+json"); try headers.append(userAgent); const easy = try curl.Easy.init(); defer easy.cleanup(); try easy.setUrl(url); try easy.setHeaders(headers); try easy.setSslVerifyPeer(false); try easy.setWriteFn(curl.writeToFifo(Fifo)); try easy.setWriteData(&fifo); try easy.perform(); const status_code = try easy.getResponseCode(); if (status_code != 200) { std.log.err("http status code: {}", .{status_code}); return error.HttpError; } var parser = std.json.Parser.init(allocator, true); defer parser.deinit(); return try parser.parse(fifo.readableSlice(0)); } pub fn getGithubTopics( allocator: Allocator, user: []const u8, repo: []const u8, ) !std.json.ValueTree { const url = try std.fmt.allocPrintZ(allocator, "https://api.github.com/repos/{s}/{s}/topics", .{ user, repo }); defer allocator.free(url); var fifo = Fifo.init(allocator); defer fifo.deinit(); var headers = curl.HeaderList.init(); defer headers.freeAll(); try headers.append("Accept: application/vnd.github.mercy-preview+json"); try headers.append(userAgent); const easy = try curl.Easy.init(); defer easy.cleanup(); try easy.setUrl(url); try easy.setHeaders(headers); try easy.setSslVerifyPeer(false); try easy.setWriteFn(curl.writeToFifo(Fifo)); try easy.setWriteData(&fifo); try easy.perform(); const status_code = try easy.getResponseCode(); if (status_code != 200) { std.log.err("http status code: {}", .{status_code}); std.log.err("{s}", .{fifo.readableSlice(0)}); return error.Explained; } var parser = std.json.Parser.init(allocator, true); defer parser.deinit(); return try parser.parse(fifo.readableSlice(0)); } pub const DeviceCodeResponse = struct { device_code: []const u8, user_code: []const u8, verification_uri: []const u8, expires_in: u64, interval: u64, }; pub fn postDeviceCode( allocator: Allocator, client_id: []const u8, scope: []const u8, ) !DeviceCodeResponse { const url = "https://github.com/login/device/code"; const payload = try std.fmt.allocPrint(allocator, "client_id={s}&scope={s}", .{ client_id, scope }); defer allocator.free(payload); var fifo = Fifo.init(allocator); defer fifo.deinit(); var headers = curl.HeaderList.init(); defer headers.freeAll(); try headers.append("Accept: application/json"); try headers.append(userAgent); // remove expect header try headers.append("Expect:"); const easy = try curl.Easy.init(); defer easy.cleanup(); try easy.setPost(); try easy.setUrl(url); try easy.setHeaders(headers); try easy.setSslVerifyPeer(false); try easy.setPostFields(payload.ptr); try easy.setPostFieldSize(payload.len); try easy.setWriteFn(curl.writeToFifo(Fifo)); try easy.setWriteData(&fifo); try easy.perform(); const status_code = try easy.getResponseCode(); if (status_code != 200) { std.log.err("http status code: {}", .{status_code}); std.log.err("{s}", .{fifo.readableSlice(0)}); return error.Explained; } std.log.debug("message: {s}", .{fifo.readableSlice(0)}); var token_stream = std.json.TokenStream.init(fifo.readableSlice(0)); return std.json.parse(DeviceCodeResponse, &token_stream, .{ .allocator = allocator, .ignore_unknown_fields = true, }); } const PollDeviceCodeResponse = struct { access_token: []const u8, token_type: []const u8, scope: []const u8, }; pub fn pollDeviceCode( allocator: Allocator, client_id: []const u8, device_code: []const u8, ) !?[]const u8 { const url = 
"https://github.com/login/oauth/access_token"; const payload = try std.fmt.allocPrint( allocator, "client_id={s}&device_code={s}&grant_type=urn:ietf:params:oauth:grant-type:device_code", .{ client_id, device_code }, ); defer allocator.free(payload); var fifo = Fifo.init(allocator); defer fifo.deinit(); var headers = curl.HeaderList.init(); defer headers.freeAll(); try headers.append("Accept: application/json"); try headers.append(userAgent); const easy = try curl.Easy.init(); defer easy.cleanup(); try easy.setPost(); try easy.setUrl(url); try easy.setHeaders(headers); try easy.setSslVerifyPeer(false); try easy.setPostFields(payload.ptr); try easy.setPostFieldSize(payload.len); try easy.setWriteFn(curl.writeToFifo(Fifo)); try easy.setWriteData(&fifo); try easy.perform(); const status_code = try easy.getResponseCode(); if (status_code != 200) { std.log.err("http status code: {}", .{status_code}); std.log.err("{s}", .{fifo.readableSlice(0)}); return error.Explained; } var parser = std.json.Parser.init(allocator, false); defer parser.deinit(); var value_tree = try parser.parse(fifo.readableSlice(0)); defer value_tree.deinit(); // TODO: error handling based on the json error codes return if (value_tree.root.Object.get("access_token")) |value| switch (value) { .String => |str| try allocator.dupe(u8, str), else => null, } else null; } pub fn postPublish( allocator: Allocator, repository_opt: ?[]const u8, access_token: []const u8, pkg: *Package, ) !void { try pkg.bundle(std.fs.cwd(), std.fs.cwd()); const filename = try pkg.filename(allocator); defer allocator.free(filename); const file = try std.fs.cwd().openFile(filename, .{}); defer { file.close(); std.fs.cwd().deleteFile(filename) catch {}; } const repository = repository_opt orelse utils.default_repo; const url = try std.fmt.allocPrintZ(allocator, "https://{s}/publish", .{repository}); defer allocator.free(url); const payload = try file.reader().readAllAlloc(allocator, std.math.maxInt(usize)); defer allocator.free(payload); var fifo = Fifo.init(allocator); defer fifo.deinit(); const authorization_header = try std.fmt.allocPrintZ(allocator, "Authorization: Bearer github {s}", .{access_token}); defer allocator.free(authorization_header); var headers = curl.HeaderList.init(); defer headers.freeAll(); try headers.append("Content-Type: application/octet-stream"); try headers.append("Accept: */*"); try headers.append(authorization_header); const easy = try curl.Easy.init(); defer easy.cleanup(); try easy.setPost(); try easy.setUrl(url); try easy.setHeaders(headers); try easy.setSslVerifyPeer(false); try easy.setPostFields(payload.ptr); try easy.setPostFieldSize(payload.len); try easy.setWriteFn(curl.writeToFifo(Fifo)); try easy.setWriteData(&fifo); try easy.perform(); const stderr = std.io.getStdErr().writer(); defer stderr.print("{s}\n", .{fifo.readableSlice(0)}) catch {}; switch (try easy.getResponseCode()) { 200 => {}, 401 => return error.Unauthorized, else => |code| { if (fifo.readableSlice(0).len > 0) { return error.Explained; } else { std.log.err("http status code: {}", .{code}); return error.HttpError; } }, } }
0
repos/gyro
repos/gyro/src/git.zig
const std = @import("std"); const builtin = @import("builtin"); const uri = @import("uri"); const api = @import("api.zig"); const cache = @import("cache.zig"); const Engine = @import("Engine.zig"); const Dependency = @import("Dependency.zig"); const Project = @import("Project.zig"); const utils = @import("utils.zig"); const local = @import("local.zig"); const main = @import("root"); const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig"); const c = @cImport({ @cInclude("git2.h"); }); const Allocator = std.mem.Allocator; const assert = std.debug.assert; pub const name = "git"; pub const Resolution = []const u8; pub const ResolutionEntry = struct { url: []const u8, ref: []const u8, commit: []const u8, root: []const u8, dep_idx: ?usize = null, pub fn format( entry: ResolutionEntry, comptime fmt: []const u8, options: std.fmt.FormatOptions, writer: anytype, ) @TypeOf(writer).Error!void { _ = fmt; _ = options; try writer.print("{s}:{s}/{s} -> {}", .{ entry.url, entry.commit, entry.root, entry.dep_idx, }); } }; pub const FetchError = @typeInfo(@typeInfo(@TypeOf(getHeadCommitOfRef)).Fn.return_type.?).ErrorUnion.error_set || @typeInfo(@typeInfo(@TypeOf(fetch)).Fn.return_type.?).ErrorUnion.error_set; const FetchQueue = Engine.MultiQueueImpl(Resolution, FetchError); const ResolutionTable = std.ArrayListUnmanaged(ResolutionEntry); pub fn deserializeLockfileEntry( allocator: Allocator, it: *std.mem.TokenIterator(u8), resolutions: *ResolutionTable, ) !void { try resolutions.append(allocator, .{ .url = it.next() orelse return error.Url, .ref = it.next() orelse return error.NoRef, .root = it.next() orelse return error.NoRoot, .commit = it.next() orelse return error.NoCommit, }); } pub fn serializeResolutions( resolutions: []const ResolutionEntry, writer: anytype, ) !void { for (resolutions) |entry| { if (entry.dep_idx != null) try writer.print("git {s} {s} {s} {s}\n", .{ entry.url, entry.ref, entry.root, entry.commit, }); } } pub fn findResolution( dep: Dependency.Source, resolutions: []const ResolutionEntry, ) ?usize { const root = dep.git.root orelse utils.default_root; return for (resolutions) |entry, j| { if (std.mem.eql(u8, dep.git.url, entry.url) and std.mem.eql(u8, dep.git.ref, entry.ref) and std.mem.eql(u8, root, entry.root)) { break j; } } else null; } fn findMatch( dep_table: []const Dependency.Source, dep_idx: usize, edges: []const Engine.Edge, ) ?usize { const dep = dep_table[dep_idx].git; const root = dep.root orelse utils.default_root; return for (edges) |edge| { const other = dep_table[edge.to].git; const other_root = other.root orelse utils.default_root; if (std.mem.eql(u8, dep.url, other.url) and std.mem.eql(u8, dep.ref, other.ref) and std.mem.eql(u8, root, other_root)) { break edge.to; } } else null; } const RemoteHeadEntry = struct { oid: [c.GIT_OID_HEXSZ]u8, name: []const u8, }; pub fn getHEADCommit( allocator: Allocator, url: []const u8, ) ![]const u8 { const url_z = try allocator.dupeZ(u8, url); defer allocator.free(url_z); var remote: ?*c.git_remote = null; var err = c.git_remote_create_anonymous(&remote, null, url_z.ptr); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteCreate; } defer c.git_remote_free(remote); var callbacks: c.git_remote_callbacks = undefined; err = c.git_remote_init_callbacks( &callbacks, c.GIT_REMOTE_CALLBACKS_VERSION, ); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteInitCallbacks; } err = 
c.git_remote_connect( remote, c.GIT_DIRECTION_FETCH, &callbacks, null, null, ); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteConnect; } var refs_ptr: [*c][*c]c.git_remote_head = undefined; var refs_len: usize = undefined; err = c.git_remote_ls(&refs_ptr, &refs_len, remote); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteLs; } var refs = std.ArrayList(RemoteHeadEntry).init(allocator); defer { for (refs.items) |entry| allocator.free(entry.name); refs.deinit(); } var i: usize = 0; while (i < refs_len) : (i += 1) { const len = std.mem.len(refs_ptr[i].*.name); try refs.append(.{ .oid = undefined, .name = try allocator.dupeZ(u8, refs_ptr[i].*.name[0..len]), }); _ = c.git_oid_fmt( &refs.items[refs.items.len - 1].oid, &refs_ptr[i].*.oid, ); } return for (refs.items) |entry| { if (std.mem.eql(u8, entry.name, "HEAD")) break allocator.dupe(u8, &entry.oid); } else error.RefNotFound; } pub fn getHeadCommitOfRef( allocator: Allocator, url: []const u8, ref: []const u8, ) ![]const u8 { // if ref is the same size as an OID and hex format then treat it as a // commit if (ref.len == c.GIT_OID_HEXSZ) { for (ref) |char| { if (!std.ascii.isXDigit(char)) break; } else return allocator.dupe(u8, ref); } const url_z = try allocator.dupeZ(u8, url); defer allocator.free(url_z); var remote: ?*c.git_remote = null; var err = c.git_remote_create_anonymous(&remote, null, url_z.ptr); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteCreate; } defer c.git_remote_free(remote); var callbacks: c.git_remote_callbacks = undefined; err = c.git_remote_init_callbacks( &callbacks, c.GIT_REMOTE_CALLBACKS_VERSION, ); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteInitCallbacks; } err = c.git_remote_connect( remote, c.GIT_DIRECTION_FETCH, &callbacks, null, null, ); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteConnect; } var refs_ptr: [*c][*c]c.git_remote_head = undefined; var refs_len: usize = undefined; err = c.git_remote_ls(&refs_ptr, &refs_len, remote); if (err < 0) { const last_error = c.git_error_last(); std.log.err("{s}", .{last_error.*.message}); return error.GitRemoteLs; } var refs = std.ArrayList(RemoteHeadEntry).init(allocator); defer { for (refs.items) |entry| allocator.free(entry.name); refs.deinit(); } var i: usize = 0; while (i < refs_len) : (i += 1) { const len = std.mem.len(refs_ptr[i].*.name); try refs.append(.{ .oid = undefined, .name = try allocator.dupeZ(u8, refs_ptr[i].*.name[0..len]), }); _ = c.git_oid_fmt( &refs.items[refs.items.len - 1].oid, &refs_ptr[i].*.oid, ); } inline for (&[_][]const u8{ "refs/tags/", "refs/heads/" }) |prefix| { for (refs.items) |entry| { if (std.mem.startsWith(u8, entry.name, prefix) and std.mem.eql(u8, entry.name[prefix.len..], ref)) { return allocator.dupe(u8, &entry.oid); } } } std.log.err("'{s}' ref not found", .{ref}); return error.RefNotFound; } const CloneState = struct { arena: *ThreadSafeArenaAllocator, base_path: []const u8, }; fn submoduleCb(sm: ?*c.git_submodule, sm_name: [*c]const u8, payload: ?*anyopaque) callconv(.C) c_int { return if (submoduleCbImpl(sm, sm_name, payload)) 0 else |err| blk: { std.log.err("got err: {s}", .{@errorName(err)}); break :blk -1; }; } fn indexerCb(stats_c: [*c]const 
c.git_indexer_progress, payload: ?*anyopaque) callconv(.C) c_int { const stats = @ptrCast(*const c.git_indexer_progress, stats_c); const handle = @ptrCast(*usize, @alignCast(@alignOf(*usize), payload)).*; main.display.updateEntry(handle, .{ .progress = .{ .current = stats.received_objects + stats.indexed_objects, .total = stats.total_objects * 2, }, }) catch {}; return 0; } fn submoduleCbImpl(sm: ?*c.git_submodule, sm_name: [*c]const u8, payload: ?*anyopaque) !void { const parent_state = @ptrCast(*CloneState, @alignCast(@alignOf(*CloneState), payload)); const arena = parent_state.arena; const allocator = arena.child_allocator; if (sm == null) return; const submodule_name = if (sm_name != null) std.mem.span(sm_name) else return; // git always uses posix path separators const sub_path = try std.mem.replaceOwned(u8, allocator, submodule_name, "/", std.fs.path.sep_str); defer allocator.free(sub_path); const base_path = try std.fs.path.join(allocator, &.{ parent_state.base_path, sub_path }); defer allocator.free(base_path); const oid = try arena.allocator().alloc(u8, c.GIT_OID_HEXSZ); _ = c.git_oid_fmt( oid.ptr, c.git_submodule_head_id(sm), ); const sm_url = c.git_submodule_url(sm); const url = if (sm_url != null) std.mem.sliceTo(sm_url, 0) else return; var handle = try main.display.createEntry(.{ .sub = .{ .url = try arena.allocator().dupe(u8, url), .commit = oid, }, }); errdefer main.display.updateEntry(handle, .{ .err = {} }) catch {}; var options: c.git_submodule_update_options = undefined; _ = c.git_submodule_update_options_init(&options, c.GIT_SUBMODULE_UPDATE_OPTIONS_VERSION); options.fetch_opts.callbacks.transfer_progress = indexerCb; options.fetch_opts.callbacks.payload = &handle; var err = c.git_submodule_update(sm, 1, &options); if (err != 0) { std.log.err("{s}", .{c.git_error_last().*.message}); return error.GitSubmoduleUpdate; } var repo: ?*c.git_repository = null; err = c.git_submodule_open(&repo, sm); if (err != 0) return error.GitSubmoduleOpen; defer c.git_repository_free(repo); var state = CloneState{ .arena = arena, .base_path = base_path, }; err = c.git_submodule_foreach(repo, submoduleCb, &state); if (err != 0) { std.log.err("{s}", .{c.git_error_last().*.message}); return error.GitSubmoduleForeach; } // TODO: deleteTree doesn't work on windows with hidden or read-only files if (builtin.target.os.tag != .windows) { const dot_git = try std.fs.path.join(allocator, &.{ base_path, ".git" }); defer allocator.free(dot_git); try std.fs.cwd().deleteTree(dot_git); } } pub fn clone( arena: *ThreadSafeArenaAllocator, url: []const u8, commit: []const u8, path: []const u8, ) !void { const allocator = arena.child_allocator; const url_z = try allocator.dupeZ(u8, url); defer allocator.free(url_z); const path_z = try allocator.dupeZ(u8, path); defer allocator.free(path_z); const commit_z = try allocator.dupeZ(u8, commit); defer allocator.free(commit_z); var handle = try main.display.createEntry(.{ .git = .{ .url = url, .commit = commit, }, }); errdefer main.display.updateEntry(handle, .{ .err = {} }) catch {}; var repo: ?*c.git_repository = null; var options: c.git_clone_options = undefined; _ = c.git_clone_options_init(&options, c.GIT_CLONE_OPTIONS_VERSION); options.fetch_opts.callbacks.transfer_progress = indexerCb; options.fetch_opts.callbacks.payload = &handle; var err = c.git_clone(&repo, url_z.ptr, path_z.ptr, &options); if (err != 0) { std.log.err("{s}", .{c.git_error_last().*.message}); return error.GitClone; } defer c.git_repository_free(repo); var oid: c.git_oid = undefined; err = 
c.git_oid_fromstr(&oid, commit_z.ptr); if (err != 0) { std.log.err("{s}", .{c.git_error_last().*.message}); return error.GitOidFromString; } var obj: ?*c.git_object = undefined; err = c.git_object_lookup(&obj, repo, &oid, c.GIT_OBJECT_COMMIT); if (err != 0) { std.log.err("{s}", .{c.git_error_last().*.message}); return error.GitObjectLookup; } var checkout_opts: c.git_checkout_options = undefined; _ = c.git_checkout_options_init(&checkout_opts, c.GIT_CHECKOUT_OPTIONS_VERSION); err = c.git_checkout_tree(repo, obj, &checkout_opts); if (err != 0) { std.log.err("{s}", .{c.git_error_last().*.message}); return error.GitCheckoutTree; } var state = CloneState{ .arena = arena, .base_path = path, }; err = c.git_submodule_foreach(repo, submoduleCb, &state); if (err != 0) { std.log.err("\n{s}", .{c.git_error_last().*.message}); return error.GitSubmoduleForeach; } // TODO: deleteTree doesn't work on windows with hidden or read-only files if (builtin.target.os.tag != .windows) { const dot_git = try std.fs.path.join(allocator, &.{ path, ".git" }); defer allocator.free(dot_git); try std.fs.cwd().deleteTree(dot_git); } } fn findPartialMatch( allocator: Allocator, dep_table: []const Dependency.Source, commit: []const u8, dep_idx: usize, edges: []const Engine.Edge, ) !?usize { const dep = dep_table[dep_idx].git; return for (edges) |edge| { const other = dep_table[edge.to].git; if (std.mem.eql(u8, dep.url, other.url)) { const other_commit = try getHeadCommitOfRef( allocator, other.url, other.ref, ); defer allocator.free(other_commit); if (std.mem.eql(u8, commit, other_commit)) { break edge.to; } } } else null; } pub fn fmtCachePath( allocator: Allocator, url: []const u8, commit: []const u8, ) ![]const u8 { var components = std.ArrayList([]const u8).init(allocator); defer components.deinit(); const link = try uri.parse(url); const scheme = link.scheme orelse return error.NoUriScheme; const end = url.len - if (std.mem.endsWith(u8, url, ".git")) ".git".len else 0; var it = std.mem.tokenize(u8, url[scheme.len + 1 .. 
end], "/"); while (it.next()) |comp| try components.insert(0, comp); try components.append(commit[0..8]); return std.mem.join(allocator, "-", components.items); } pub fn resolutionToCachePath( allocator: Allocator, res: ResolutionEntry, ) ![]const u8 { return fmtCachePath(allocator, res.url, res.commit); } fn fetch( arena: *ThreadSafeArenaAllocator, dep: Dependency.Source, done: bool, commit: Resolution, deps: *std.ArrayListUnmanaged(Dependency), path: *?[]const u8, ) !void { const allocator = arena.child_allocator; const entry_name = try fmtCachePath(allocator, dep.git.url, commit); defer allocator.free(entry_name); var entry = try cache.getEntry(entry_name); defer entry.deinit(); const base_path = try std.fs.path.join(allocator, &.{ ".gyro", entry_name, "pkg", }); defer allocator.free(base_path); if (!done and !try entry.isDone()) { if (builtin.target.os.tag != .windows) { if (std.fs.cwd().access(base_path, .{})) { try std.fs.cwd().deleteTree(base_path); } else |_| {} // TODO: if base_path exists then deleteTree it } try clone( arena, dep.git.url, commit, base_path, ); try entry.done(); } const root = dep.git.root orelse utils.default_root; path.* = try utils.joinPathConvertSep(arena, &.{ base_path, root }); if (!done) { var base_dir = try std.fs.cwd().openDir(base_path, .{}); defer base_dir.close(); const project_file = try base_dir.createFile("gyro.zzz", .{ .read = true, .truncate = false, .exclusive = false, }); defer project_file.close(); const text = try project_file.reader().readAllAlloc( arena.allocator(), std.math.maxInt(usize), ); const project = try Project.fromUnownedText(arena, base_path, text); defer project.destroy(); try deps.appendSlice(allocator, project.deps.items); } } pub fn dedupeResolveAndFetch( arena: *ThreadSafeArenaAllocator, dep_table: []const Dependency.Source, resolutions: []const ResolutionEntry, fetch_queue: *FetchQueue, i: usize, ) void { dedupeResolveAndFetchImpl( arena, dep_table, resolutions, fetch_queue, i, ) catch |err| { fetch_queue.items(.result)[i] = .{ .err = err }; }; } fn dedupeResolveAndFetchImpl( arena: *ThreadSafeArenaAllocator, dep_table: []const Dependency.Source, resolutions: []const ResolutionEntry, fetch_queue: *FetchQueue, i: usize, ) FetchError!void { const dep_idx = fetch_queue.items(.edge)[i].to; var commit: []const u8 = undefined; if (findResolution(dep_table[dep_idx], resolutions)) |res_idx| { if (resolutions[res_idx].dep_idx) |idx| { fetch_queue.items(.result)[i] = .{ .replace_me = idx, }; return; } else if (findMatch( dep_table, dep_idx, fetch_queue.items(.edge)[0..i], )) |idx| { fetch_queue.items(.result)[i] = .{ .replace_me = idx, }; return; } else { fetch_queue.items(.result)[i] = .{ .fill_resolution = res_idx, }; } } else if (findMatch( dep_table, dep_idx, fetch_queue.items(.edge)[0..i], )) |idx| { fetch_queue.items(.result)[i] = .{ .replace_me = idx, }; return; } else { commit = try getHeadCommitOfRef( arena.allocator(), dep_table[dep_idx].git.url, dep_table[dep_idx].git.ref, ); if (try findPartialMatch( arena.child_allocator, dep_table, commit, dep_idx, fetch_queue.items(.edge)[0..i], )) |idx| { fetch_queue.items(.result)[i] = .{ .copy_deps = idx, }; } else { fetch_queue.items(.result)[i] = .{ .new_entry = commit, }; } } var done = false; const resolution = switch (fetch_queue.items(.result)[i]) { .fill_resolution => |res_idx| resolutions[res_idx].commit, .new_entry => |entry_commit| entry_commit, .copy_deps => blk: { done = true; break :blk commit; }, else => unreachable, }; try fetch( arena, dep_table[dep_idx], done, 
resolution, &fetch_queue.items(.deps)[i], &fetch_queue.items(.path)[i], ); assert(fetch_queue.items(.path)[i] != null); } pub fn updateResolution( allocator: Allocator, resolutions: *ResolutionTable, dep_table: []const Dependency.Source, fetch_queue: *FetchQueue, i: usize, ) !void { switch (fetch_queue.items(.result)[i]) { .fill_resolution => |res_idx| { const dep_idx = fetch_queue.items(.edge)[i].to; assert(resolutions.items[res_idx].dep_idx == null); resolutions.items[res_idx].dep_idx = dep_idx; }, .new_entry => |commit| { const dep_idx = fetch_queue.items(.edge)[i].to; const git = &dep_table[dep_idx].git; const root = git.root orelse utils.default_root; try resolutions.append(allocator, .{ .url = git.url, .ref = git.ref, .root = root, .commit = commit, .dep_idx = dep_idx, }); }, .replace_me => |dep_idx| fetch_queue.items(.edge)[i].to = dep_idx, .err => |err| { std.log.err("recieved error: {s} while getting dep: {}", .{ @errorName(err), dep_table[fetch_queue.items(.edge)[i].to], }); return error.Explained; }, .copy_deps => |queue_idx| { const commit = resolutions.items[ findResolution( dep_table[fetch_queue.items(.edge)[queue_idx].to], resolutions.items, ).? ].commit; const dep_idx = fetch_queue.items(.edge)[i].to; const git = &dep_table[dep_idx].git; const root = git.root orelse utils.default_root; try resolutions.append(allocator, .{ .url = git.url, .ref = git.ref, .root = root, .commit = commit, .dep_idx = dep_idx, }); try fetch_queue.items(.deps)[i].appendSlice( allocator, fetch_queue.items(.deps)[queue_idx].items, ); }, } }
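One detail worth pulling out of `getHeadCommitOfRef` is its fast path: a ref that already looks like a full commit OID skips the remote round-trip entirely. A self-contained sketch of that check (with libgit2's `GIT_OID_HEXSZ` hardcoded to 40, since the constant isn't available without the C import):

```zig
const std = @import("std");

// mirrors the "ref is the same size as an OID and hex format" check above
fn isCommitHash(ref: []const u8) bool {
    if (ref.len != 40) return false;
    for (ref) |char| {
        if (!std.ascii.isXDigit(char)) return false;
    }
    return true;
}

pub fn main() void {
    // a full 40-char hex string is treated as a pinned commit
    std.debug.print("{}\n", .{isCommitHash("0123456789abcdef0123456789abcdef01234567")});
    // anything else (branch or tag name) is resolved via git_remote_ls
    std.debug.print("{}\n", .{isCommitHash("main")});
}
```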
0
repos/gyro
repos/gyro/src/Package.zig
const std = @import("std"); const version = @import("version"); const tar = @import("tar"); const glob = @import("glob"); const zzz = @import("zzz"); const Dependency = @import("Dependency.zig"); const Project = @import("Project.zig"); const utils = @import("utils.zig"); const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig"); const Self = @This(); const Allocator = std.mem.Allocator; arena: ThreadSafeArenaAllocator, name: []const u8, version: version.Semver, project: *Project, root: ?[]const u8, files: std.ArrayList([]const u8), // meta info description: ?[]const u8, license: ?[]const u8, homepage_url: ?[]const u8, source_url: ?[]const u8, tags: std.ArrayList([]const u8), pub fn init( allocator: Allocator, name: []const u8, ver: version.Semver, project: *Project, ) !Self { var ret = Self{ .arena = ThreadSafeArenaAllocator.init(allocator), .name = name, .version = ver, .project = project, .files = std.ArrayList([]const u8).init(allocator), .tags = std.ArrayList([]const u8).init(allocator), .root = null, .description = null, .license = null, .homepage_url = null, .source_url = null, }; return ret; } pub fn deinit(self: *Self) void { self.tags.deinit(); self.files.deinit(); self.arena.deinit(); } pub fn fillFromZNode( self: *Self, node: *zzz.ZNode, ) !void { if (utils.zFindChild(node, "files")) |files| { var it = utils.ZChildIterator.init(files); while (it.next()) |path| try self.files.append(try utils.zGetString(path)); } if (utils.zFindChild(node, "deps") != null) { std.log.warn("subdependencies are no longer supported", .{}); } if (utils.zFindChild(node, "tags")) |tags| { var it = utils.ZChildIterator.init(tags); while (it.next()) |tag| try self.tags.append(try utils.zGetString(tag)); } inline for (std.meta.fields(Self)) |field| { if (@TypeOf(@field(self, field.name)) == ?[]const u8) { @field(self, field.name) = try utils.zFindString(node, field.name); } } } fn createManifest(self: *Self, tree: *zzz.ZTree(1, 1000)) !void { var root = try tree.addNode(null, .Null); try utils.zPutKeyString(tree, root, "name", self.name); var ver_str = try std.fmt.allocPrint(self.arena.allocator(), "{}", .{self.version}); try utils.zPutKeyString(tree, root, "version", ver_str); inline for (std.meta.fields(Self)) |field| { if (@TypeOf(@field(self, field.name)) == ?[]const u8) { if (@field(self, field.name)) |value| { try utils.zPutKeyString(tree, root, field.name, value); } else if (std.mem.eql(u8, field.name, "root")) { try utils.zPutKeyString(tree, root, field.name, "src/main.zig"); } } } if (self.tags.items.len > 0) { var tags = try tree.addNode(root, .{ .String = "tags" }); for (self.tags.items) |tag| _ = try tree.addNode(tags, .{ .String = tag }); } // TODO: check for collisions between different deps sets if (self.project.deps.items.len > 0) { var deps = try tree.addNode(root, .{ .String = "deps" }); for (self.project.deps.items) |dep| try dep.addToZNode(&self.arena, tree, deps, true); } if (self.project.build_deps.items.len > 0) { var build_deps = try tree.addNode(root, .{ .String = "build_deps" }); for (self.project.build_deps.items) |dep| try dep.addToZNode(&self.arena, tree, build_deps, true); } } pub fn filename(self: Self, allocator: Allocator) ![]const u8 { return std.fmt.allocPrint(allocator, "{s}-{}.tar", .{ self.name, self.version }); } pub fn bundle(self: *Self, root: std.fs.Dir, output_dir: std.fs.Dir) !void { const fname = try self.filename(self.arena.allocator()); const file = try output_dir.createFile(fname, .{ .truncate = true, .read = true, }); errdefer 
output_dir.deleteFile(fname) catch {}; defer file.close(); var tarball = tar.builder(self.arena.child_allocator, file.writer()); defer { tarball.finish() catch {}; tarball.deinit(); } var fifo = std.fifo.LinearFifo(u8, .Dynamic).init(self.arena.child_allocator); defer fifo.deinit(); var manifest = zzz.ZTree(1, 1000){}; try self.createManifest(&manifest); try manifest.rootSlice()[0].stringify(fifo.writer()); try fifo.writer().writeByte('\n'); try tarball.addSlice(fifo.readableSlice(0), "manifest.zzz"); if (self.root) |root_file| { tarball.addFile(root, "pkg", root_file) catch |err| { if (err == error.FileNotFound) { std.log.err("{s}'s root is declared as {s}, but it does not exist", .{ self.name, root_file, }); return error.Explained; } else return err; }; } else { tarball.addFile(root, "pkg", "src/main.zig") catch |err| { if (err == error.FileNotFound) { std.log.err("there's no src/main.zig, did you forget to declare a {s}'s root file in gyro.zzz?", .{ self.name, }); return error.Explained; } else return err; }; } for (self.files.items) |pattern| { var dir = try root.openIterableDir(".", .{ .access_sub_paths = true }); defer dir.close(); var it = try glob.Iterator.init(self.arena.child_allocator, dir, pattern); defer it.deinit(); while (try it.next()) |subpath| { tarball.addFile(dir.dir, "pkg", subpath) catch |err| { return if (err == error.FileNotFound) blk: { std.log.err("file pattern '{s}' wants path '{s}', but it doesn't exist", .{ pattern, subpath, }); break :blk error.Explained; } else err; }; } } } pub fn addToZNode( self: Self, arena: *ThreadSafeArenaAllocator, tree: *zzz.ZTree(1, 1000), parent: *zzz.ZNode, ) !void { var node = try tree.addNode(parent, .{ .String = self.name }); var ver_str = try std.fmt.allocPrint(arena.allocator(), "{}", .{self.version}); try utils.zPutKeyString(tree, node, "version", ver_str); inline for (std.meta.fields(Self)) |field| { if (@TypeOf(@field(self, field.name)) == ?[]const u8) { if (!std.mem.eql(u8, field.name, "root")) { if (@field(self, field.name)) |value| { try utils.zPutKeyString(tree, node, field.name, value); } } } } if (self.tags.items.len > 0) { var tags = try tree.addNode(node, .{ .String = "tags" }); for (self.tags.items) |tag| _ = try tree.addNode(tags, .{ .String = tag }); } try utils.zPutKeyString(tree, node, "root", if (self.root) |path| path else utils.default_root); if (self.files.items.len > 0) { var files = try tree.addNode(node, .{ .String = "files" }); for (self.files.items) |file| _ = try tree.addNode(files, .{ .String = file }); } }
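`bundle` writes a `manifest.zzz` entry first, then the root file and any glob-matched files under a `pkg/` prefix inside the tarball. The archive name itself comes from `filename`; a tiny standalone sketch of that naming (the `foo`/`0.1.0` values are hypothetical):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // mirrors Package.filename: "<name>-<version>.tar"
    const fname = try std.fmt.allocPrint(allocator, "{s}-{s}.tar", .{ "foo", "0.1.0" });
    defer allocator.free(fname);

    std.debug.print("{s}\n", .{fname}); // foo-0.1.0.tar
}
```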
0
repos/gyro
repos/gyro/src/Display.zig
const std = @import("std"); const builtin = @import("builtin"); const version = @import("version"); const uri = @import("uri"); const assert = std.debug.assert; const is_windows = builtin.os.tag == .windows; const w32 = if (is_windows) struct { extern "kernel32" fn SetConsoleMode(console: ?*anyopaque, mode: std.os.windows.DWORD) callconv(std.os.windows.WINAPI) u32; } else undefined; var w32_mode: if (is_windows) std.os.windows.DWORD else void = undefined; pub const Size = struct { rows: usize, cols: usize, }; pub const Source = union(enum) { const Git = struct { url: []const u8, commit: []const u8, }; git: Git, sub: Git, pkg: struct { repository: []const u8, user: []const u8, name: []const u8, semver: version.Semver, }, url: []const u8, }; const EntryUpdate = union(enum) { progress: Progress, err: void, }; const Progress = struct { current: usize, total: usize, }; const Entry = struct { tag: []const u8, label: []const u8, version: []const u8, progress: Progress, err: bool, }; const UpdateState = struct { current_len: usize, entries: std.ArrayList(Entry), progress: std.AutoHashMap(usize, Progress), errors: std.AutoHashMap(usize, void), new_size: ?Size, fn init(allocator: std.mem.Allocator) UpdateState { return UpdateState{ .current_len = 0, .entries = std.ArrayList(Entry).init(allocator), .progress = std.AutoHashMap(usize, Progress).init(allocator), .errors = std.AutoHashMap(usize, void).init(allocator), .new_size = null, }; } fn deinit(self: *UpdateState) void { self.entries.deinit(); self.progress.deinit(); self.errors.deinit(); } fn hasChanges(self: UpdateState) bool { return self.new_size != null or self.entries.items.len > 0 or self.progress.count() > 0 or self.errors.count() > 0; } fn clear(self: *UpdateState) void { self.entries.clearRetainingCapacity(); self.progress.clearRetainingCapacity(); self.errors.clearRetainingCapacity(); self.new_size = null; } }; const Self = @This(); mode: union(enum) { direct_log: void, ansi: struct { allocator: std.mem.Allocator, arena: std.heap.ArenaAllocator, entries: std.ArrayList(Entry), logs: std.ArrayList([]const u8), depth: usize, size: Size, running: std.atomic.Atomic(bool), mtx: std.Thread.Mutex, logs_mtx: std.Thread.Mutex, render_thread: std.Thread, // state maps that get swapped collector: *UpdateState, scratchpad: *UpdateState, fifo: std.fifo.LinearFifo(u8, .{ .Dynamic = {} }), }, }, pub fn init(location: *Self, allocator: std.mem.Allocator) !void { var size = Size{ .rows = 24, .cols = 80, }; switch (builtin.target.os.tag) { .windows => { const c = @cImport({ @cInclude("windows.h"); }); var csbi: c.CONSOLE_SCREEN_BUFFER_INFO = undefined; if (0 == c.GetConsoleScreenBufferInfo(c.GetStdHandle(c.STD_OUTPUT_HANDLE), &csbi) or std.process.hasEnvVarConstant("GYRO_DIRECT_LOG")) { location.* = Self{ .mode = .{ .direct_log = {} } }; return; } size.rows = @intCast(usize, csbi.srWindow.Bottom - csbi.srWindow.Top + 1); size.cols = @intCast(usize, csbi.srWindow.Right - csbi.srWindow.Left + 1); const h = c.GetStdHandle(c.STD_OUTPUT_HANDLE); if (c.GetConsoleMode(h, &w32_mode) != 0) { w32_mode |= c.ENABLE_VIRTUAL_TERMINAL_PROCESSING; w32_mode = w32.SetConsoleMode(h, w32_mode); } }, else => { var winsize: std.os.linux.winsize = undefined; const rc = std.c.ioctl(0, std.os.linux.T.IOCGWINSZ, &winsize); if (rc != 0 or !std.os.isatty(std.io.getStdOut().handle) or std.process.hasEnvVarConstant("GYRO_DIRECT_LOG")) { location.* = Self{ .mode = .{ .direct_log = {} } }; return; } size.rows = winsize.ws_row; size.cols = winsize.ws_col; }, } const collector = try 
allocator.create(UpdateState); errdefer allocator.destroy(collector); const scratchpad = try allocator.create(UpdateState); errdefer allocator.destroy(scratchpad); collector.* = UpdateState.init(allocator); scratchpad.* = UpdateState.init(allocator); location.* = Self{ .mode = .{ .ansi = .{ .allocator = allocator, .arena = std.heap.ArenaAllocator.init(allocator), .running = std.atomic.Atomic(bool).init(true), .mtx = std.Thread.Mutex{}, .logs_mtx = std.Thread.Mutex{}, .render_thread = try std.Thread.spawn(.{}, renderTask, .{location}), .entries = std.ArrayList(Entry).init(allocator), .logs = std.ArrayList([]const u8).init(allocator), .size = size, .depth = 0, .collector = collector, .scratchpad = scratchpad, .fifo = std.fifo.LinearFifo(u8, .{ .Dynamic = {} }).init(allocator), }, }, }; } pub fn deinit(self: *Self) void { switch (self.mode) { .direct_log => {}, .ansi => |*ansi| { ansi.running.store(false, .SeqCst); ansi.render_thread.join(); const stderr = std.io.getStdErr().writer(); if (ansi.logs.items.len > 1) { stderr.writeByteNTimes('-', ansi.size.cols) catch {}; stderr.writeByte('\n') catch {}; stderr.writeAll("logs captured during fetch:\n") catch {}; } for (ansi.logs.items) |msg| stderr.writeAll(msg) catch continue; ansi.entries.deinit(); ansi.collector.deinit(); ansi.scratchpad.deinit(); ansi.allocator.destroy(ansi.collector); ansi.allocator.destroy(ansi.scratchpad); ansi.fifo.deinit(); ansi.arena.deinit(); }, } if (builtin.os.tag == .windows) { const c = @cImport({ @cInclude("windows.h"); }); const h = c.GetStdHandle(c.STD_OUTPUT_HANDLE); _ = w32.SetConsoleMode(h, w32_mode); } } pub fn log( self: *Self, comptime level: std.log.Level, comptime scope: @TypeOf(.EnumLiteral), comptime format: []const u8, args: anytype, ) void { switch (self.mode) { .direct_log => std.log.defaultLog(level, scope, format, args), .ansi => |*ansi| { const level_txt = comptime level.asText(); const prefix2 = if (scope == .default) ": " else "(" ++ @tagName(scope) ++ "): "; const message = std.fmt.allocPrint(ansi.allocator, level_txt ++ prefix2 ++ format ++ "\n", args) catch return; ansi.logs_mtx.lock(); defer ansi.logs_mtx.unlock(); ansi.logs.append(message) catch {}; }, } } fn entryFromGit( self: *Self, tag: []const u8, url: []const u8, commit: []const u8, ) !Entry { const link = try uri.parse(url); const begin = if (link.scheme) |scheme| scheme.len + 3 else 0; const end_offset: usize = if (std.mem.endsWith(u8, url, ".git")) 4 else 0; return Entry{ .tag = tag, .label = try self.mode.ansi.arena.allocator().dupe(u8, url[begin .. 
url.len - end_offset]), .version = try self.mode.ansi.arena.allocator().dupe(u8, commit[0..std.math.min(commit.len, 8)]), .progress = .{ .current = 0, .total = 1, }, .err = false, }; } pub fn createEntry(self: *Self, source: Source) !usize { switch (self.mode) { .direct_log => { switch (source) { .git => |git| std.log.info("cloning {s} {s}", .{ git.url, git.commit[0..std.math.min(git.commit.len, 8)], }), .sub => |sub| std.log.info("cloning submodule {s}", .{ sub.url, }), .pkg => |pkg| std.log.info("fetching package {s}/{s}/{s}", .{ pkg.repository, pkg.user, pkg.name, }), .url => |url| std.log.info("fetching tarball {s}", .{ url, }), } return 0; }, .ansi => |*ansi| { const allocator = ansi.arena.allocator(); const new_entry = switch (source) { .git => |git| try self.entryFromGit("git", git.url, git.commit), .sub => |sub| try self.entryFromGit("sub", sub.url, sub.commit), .pkg => |pkg| Entry{ .tag = "pkg", .label = try std.fmt.allocPrint(allocator, "{s}/{s}/{s}", .{ pkg.repository, pkg.user, pkg.name, }), .version = try std.fmt.allocPrint(allocator, "{}", .{pkg.semver}), .progress = .{ .current = 0, .total = 1, }, .err = false, }, .url => |url| Entry{ .tag = "url", .label = try ansi.allocator.dupe(u8, url), .version = "", .progress = .{ .current = 0, .total = 1, }, .err = false, }, }; ansi.mtx.lock(); defer ansi.mtx.unlock(); try ansi.collector.entries.append(new_entry); return ansi.collector.current_len + ansi.collector.entries.items.len - 1; }, } } pub fn updateEntry(self: *Self, handle: usize, update: EntryUpdate) !void { switch (self.mode) { .direct_log => {}, .ansi => |*ansi| { ansi.mtx.lock(); defer ansi.mtx.unlock(); switch (update) { .progress => |p| try ansi.collector.progress.put(handle, p), .err => try ansi.collector.errors.put(handle, {}), } }, } } pub fn updateSize(self: *Self, new_size: Size) void { switch (self.mod) { .direct_log => {}, .ansi => |ansi| { ansi.mtx.lock(); defer ansi.mtx.unlock(); ansi.collector.new_size = new_size; }, } } fn updateState(self: *Self) !void { switch (self.mode) { .direct_log => unreachable, .ansi => |*ansi| { try ansi.entries.appendSlice(ansi.scratchpad.entries.items); if (ansi.scratchpad.new_size) |new_size| ansi.size = new_size; { var it = ansi.scratchpad.progress.iterator(); while (it.next()) |entry| { const idx = entry.key_ptr.*; assert(idx <= ansi.entries.items.len); ansi.entries.items[idx].progress = entry.value_ptr.*; } } { var it = ansi.scratchpad.errors.iterator(); while (it.next()) |entry| { const idx = entry.key_ptr.*; ansi.entries.items[idx].err = true; } } }, } } fn renderTask(self: *Self) !void { const stdout = std.io.getStdOut().writer(); var done = false; while (!done) : (std.time.sleep(std.time.ns_per_s * 0.1)) { if (!self.mode.ansi.running.load(.SeqCst)) done = true; { self.mode.ansi.mtx.lock(); defer self.mode.ansi.mtx.unlock(); self.mode.ansi.scratchpad.current_len = self.mode.ansi.collector.current_len + self.mode.ansi.collector.entries.items.len; std.mem.swap(UpdateState, self.mode.ansi.collector, self.mode.ansi.scratchpad); } try self.updateState(); if (self.mode.ansi.entries.items.len > 0 and (self.mode.ansi.scratchpad.hasChanges() or self.mode.ansi.depth != self.mode.ansi.entries.items.len)) { try self.render(stdout); } self.mode.ansi.scratchpad.clear(); } } fn drawBar(writer: anytype, width: usize, percent: usize) !void { if (width < 3) { try writer.writeByteNTimes(' ', width); return; } const bar_width = width - 2; const cells = std.math.min(percent * bar_width / 100, bar_width); try writer.writeByte('['); try 
writer.writeByteNTimes('#', cells); try writer.writeByteNTimes(' ', bar_width - cells); try writer.writeByte(']'); } fn render(self: *Self, stdout: anytype) !void { switch (self.mode) { .direct_log => unreachable, .ansi => |*ansi| { const writer = ansi.fifo.writer(); defer { ansi.fifo.count = 0; ansi.fifo.head = 0; } const spacing = 20; const short_mode = ansi.size.cols < 50; // calculations const version_width = 8; const variable = ansi.size.cols -| 26; const label_width = if (variable < spacing) variable else spacing + ((variable - spacing) / 2); const bar_width = if (variable < spacing) 0 else ((variable - spacing) / 2) + if (variable % 2 == 1) @as(usize, 1) else 0; if (ansi.depth < ansi.entries.items.len) { try writer.writeByteNTimes('\n', ansi.entries.items.len - ansi.depth); ansi.depth = ansi.entries.items.len; } // up n lines at beginning try writer.print("\x1b[{}F", .{ansi.depth}); for (ansi.entries.items) |entry| { if (short_mode) { if (entry.err) try writer.writeAll("\x1b[31m"); try writer.writeAll(entry.label[0..std.math.min(entry.label.len, ansi.size.cols)]); if (entry.err) { try writer.writeAll("\x1b[0m"); } try writer.writeAll("\x1b[1B\x0d"); continue; } const percent = std.math.min(entry.progress.current * 100 / entry.progress.total, 100); if (entry.err) try writer.writeAll("\x1b[31m"); try writer.print("{s} ", .{entry.tag}); if (entry.label.len > label_width) { try writer.writeAll(entry.label[0 .. label_width - 3]); try writer.writeAll("..."); } else { try writer.writeAll(entry.label); try writer.writeByteNTimes(' ', label_width - entry.label.len); } try writer.writeByte(' '); try writer.writeAll(entry.version[0..std.math.min(version_width, entry.version.len)]); if (entry.version.len < label_width) try writer.writeByteNTimes(' ', version_width - entry.version.len); try writer.writeByte(' '); try drawBar(writer, bar_width, percent); try writer.print(" {: >3}%", .{percent}); if (entry.err) { try writer.writeAll(" ERROR \x1b[0m"); } else { try writer.writeByteNTimes(' ', 7); } try writer.writeAll("\x1b[1E"); } try stdout.writeAll(ansi.fifo.readableSlice(0)); }, } }
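The redraw trick in `render` is worth a standalone illustration: reserve the status lines once, then on every frame jump the cursor up with CSI `F` and rewrite them in place. A minimal sketch, assuming an ANSI-capable terminal (the label text is hypothetical):

```zig
const std = @import("std");

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();
    try stdout.writeAll("\n"); // reserve one status line

    var percent: usize = 0;
    while (percent <= 100) : (percent += 25) {
        try stdout.writeAll("\x1b[1F"); // cursor up one line, column 0
        // rewrite the line, then move down one line (same codes render uses)
        try stdout.print("fetching... {: >3}%\x1b[1E", .{percent});
        std.time.sleep(100 * std.time.ns_per_ms);
    }
}
```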
0
repos/gyro
repos/gyro/src/certs.zig
const std = @import("std"); const builtin = @import("builtin"); extern fn git_mbedtls__set_cert_location(path: ?[*:0]const u8, file: ?[*:0]const u8) c_int; extern fn git_mbedtls__set_cert_buf(buf: [*]const u8, len: usize) c_int; /// based off of golang's system cert finding code: https://golang.org/src/crypto/x509/ pub fn loadSystemCerts(allocator: std.mem.Allocator) !void { switch (builtin.target.os.tag) { .windows => { //const c = @cImport({ // @cInclude("wincrypt.h"); //}); //const store = c.CertOpenSystemStoreA(null, "ROOT"); //if (store == null) { // std.log.err("failed to open system cert store", .{}); // return error.Explained; //} //defer _ = c.CertCloseStore(store, 0); //var cert: ?*c.PCCERT_CONTEXT = null; //while (true) { // cert = c.CertEnumCertificatesInStore(store, cert); // if (cert_context == null) { // // TODO: handle errors and end of certs // } // // TODO: check for X509_ASN_ENCODING // mbedtls_x509_crt_parse(ca_chain, cert.pbCertEncoded, cert.cbCertEncoded); //} //mbedtls_ssl_conf_ca_chain(); }, .ios => @compileError("TODO: ios certs"), .macos => {}, .linux, .aix, .dragonfly, .netbsd, .freebsd, .openbsd, .plan9, .solaris, => try loadUnixCerts(allocator), else => std.log.warn("don't know how to load system certs for this os", .{}), } } fn loadUnixCerts(allocator: std.mem.Allocator) !void { // TODO: env var overload const has_env_var = try std.process.hasEnvVar(allocator, "SSL_CERT_FILE"); const files: []const [:0]const u8 = if (has_env_var) blk: { const file_path = try std.process.getEnvVarOwned(allocator, "SSL_CERT_FILE"); defer allocator.free(file_path); break :blk &.{try allocator.dupeZ(u8, file_path)}; } else switch (builtin.target.os.tag) { .linux => &.{ // Debian/Ubuntu/Gentoo etc. "/etc/ssl/certs/ca-certificates.crt", // Fedora/RHEL 6 "/etc/pki/tls/certs/ca-bundle.crt", // OpenSUSE "/etc/ssl/ca-bundle.pem", // OpenELEC "/etc/pki/tls/cacert.pem", // CentOS/RHEL 7 "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // Alpine Linux "/etc/ssl/cert.pem", }, .aix => &.{"/var/ssl/certs/ca-bundle.crt"}, .dragonfly => &.{"/usr/local/share/certs/ca-root-nss.crt"}, .netbsd => &.{"/etc/openssl/certs/ca-certificates.crt"}, .freebsd => &.{"/usr/local/etc/ssl/cert.pem"}, .openbsd => &.{"/etc/ssl/cert.pem"}, .plan9 => &.{"/sys/lib/tls/ca.pem"}, .solaris => &.{ // Solaris 11.2+ "/etc/certs/ca-certificates.crt", // Joyent SmartOS "/etc/ssl/certs/ca-certificates.crt", // OmniOS "/etc/ssl/cacert.pem", }, else => @compileError("Don't know how to load system certs for this unix os"), }; defer if (has_env_var) allocator.free(files[0]); for (files) |path| { const rc = git_mbedtls__set_cert_location(path, null); if (rc == 0) { return; } } }
0
repos/gyro
repos/gyro/src/url.zig
const std = @import("std"); const uri = @import("uri"); const curl = @import("curl"); const Engine = @import("Engine.zig"); const Dependency = @import("Dependency.zig"); const Project = @import("Project.zig"); const api = @import("api.zig"); const cache = @import("cache.zig"); const utils = @import("utils.zig"); const local = @import("local.zig"); const main = @import("root"); const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig"); const Allocator = std.mem.Allocator; const assert = std.debug.assert; pub const name = "url"; pub const Resolution = void; pub const ResolutionEntry = struct { root: []const u8, str: []const u8, dep_idx: ?usize = null, }; pub const FetchError = @typeInfo(@typeInfo(@TypeOf(fetch)).Fn.return_type.?).ErrorUnion.error_set; const FetchQueue = Engine.MultiQueueImpl(Resolution, FetchError); const ResolutionTable = std.ArrayListUnmanaged(ResolutionEntry); pub fn deserializeLockfileEntry( allocator: Allocator, it: *std.mem.TokenIterator(u8), resolutions: *ResolutionTable, ) !void { const entry = ResolutionEntry{ .root = it.next() orelse return error.NoRoot, .str = it.next() orelse return error.NoUrl, }; if (std.mem.startsWith(u8, entry.str, "file://")) return error.OldLocalFormat; try resolutions.append(allocator, entry); } pub fn serializeResolutions( resolutions: []const ResolutionEntry, writer: anytype, ) !void { for (resolutions) |entry| if (entry.dep_idx != null) try writer.print("url {s} {s}\n", .{ entry.root, entry.str, }); } pub fn findResolution(dep: Dependency.Source, resolutions: []const ResolutionEntry) ?usize { const root = dep.url.root orelse utils.default_root; return for (resolutions) |entry, i| { if (std.mem.eql(u8, dep.url.str, entry.str) and std.mem.eql(u8, root, entry.root)) { break i; } } else null; } fn findMatch(dep_table: []const Dependency.Source, dep_idx: usize, edges: []const Engine.Edge) ?usize { const dep = dep_table[dep_idx].url; const root = dep.root orelse utils.default_root; return for (edges) |edge| { const other = dep_table[edge.to].url; const other_root = other.root orelse utils.default_root; if (std.mem.eql(u8, dep.str, other.str) and std.mem.eql(u8, root, other_root)) { break edge.to; } } else null; } fn findPartialMatch(dep_table: []const Dependency.Source, dep_idx: usize, edges: []const Engine.Edge) ?usize { const dep = dep_table[dep_idx].url; return for (edges) |edge| { const other = dep_table[edge.to].url; if (std.mem.eql(u8, dep.str, other.str)) { break edge.to; } } else null; } fn fmtCachePath(allocator: Allocator, url: []const u8) ![]const u8 { const link = try uri.parse(url); return std.mem.replaceOwned( u8, allocator, url[link.scheme.?.len + 3 ..], "/", "-", ); } pub fn resolutionToCachePath( allocator: Allocator, res: ResolutionEntry, ) ![]const u8 { return fmtCachePath(allocator, res.str); } fn progressCb( data: ?*anyopaque, dltotal: curl.Offset, dlnow: curl.Offset, ultotal: curl.Offset, ulnow: curl.Offset, ) callconv(.C) c_int { _ = ultotal; _ = ulnow; const handle = @ptrCast(*usize, @alignCast(@alignOf(*usize), data orelse return 0)).*; main.display.updateEntry(handle, .{ .progress = .{ .current = @intCast(usize, dlnow), .total = @intCast(usize, if (dltotal == 0) 1 else dltotal), }, }) catch {}; return 0; } fn fetch( arena: *ThreadSafeArenaAllocator, dep: Dependency.Source, deps: *std.ArrayListUnmanaged(Dependency), path: *?[]const u8, ) !void { const allocator = arena.child_allocator; const entry_name = try fmtCachePath(allocator, dep.url.str); defer allocator.free(entry_name); var entry = try 
cache.getEntry(entry_name); defer entry.deinit(); if (!try entry.isDone()) { var content_dir = try entry.contentDir(); defer content_dir.close(); // TODO: allow user to strip directories from a tarball var handle = try main.display.createEntry(.{ .url = dep.url.str }); errdefer main.display.updateEntry(handle, .{ .err = {} }) catch {}; const url_z = try allocator.dupeZ(u8, dep.url.str); defer allocator.free(url_z); const xfer_ctx = api.XferCtx{ .cb = progressCb, .data = &handle, }; try api.getTarGz(allocator, url_z, content_dir, xfer_ctx); try entry.done(); } const base_path = try std.fs.path.join(allocator, &.{ ".gyro", entry_name, "pkg", }); defer allocator.free(base_path); const root = dep.url.root orelse utils.default_root; path.* = try utils.joinPathConvertSep(arena, &.{ base_path, root }); var base_dir = try std.fs.cwd().openDir(base_path, .{}); defer base_dir.close(); const project_file = try base_dir.createFile("gyro.zzz", .{ .read = true, .truncate = false, .exclusive = false, }); defer project_file.close(); const text = try project_file.reader().readAllAlloc(arena.allocator(), std.math.maxInt(usize)); const project = try Project.fromUnownedText(arena, base_path, text); defer project.destroy(); try deps.appendSlice(allocator, project.deps.items); } pub fn dedupeResolveAndFetch( arena: *ThreadSafeArenaAllocator, dep_table: []const Dependency.Source, resolutions: []const ResolutionEntry, fetch_queue: *FetchQueue, i: usize, ) void { dedupeResolveAndFetchImpl( arena, dep_table, resolutions, fetch_queue, i, ) catch |err| { fetch_queue.items(.result)[i] = .{ .err = err }; }; } fn dedupeResolveAndFetchImpl( arena: *ThreadSafeArenaAllocator, dep_table: []const Dependency.Source, resolutions: []const ResolutionEntry, fetch_queue: *FetchQueue, i: usize, ) FetchError!void { const dep_idx = fetch_queue.items(.edge)[i].to; // check lockfile for entry if (findResolution(dep_table[dep_idx], resolutions)) |res_idx| { if (resolutions[res_idx].dep_idx) |idx| { fetch_queue.items(.result)[i] = .{ .replace_me = idx, }; return; } else if (findMatch(dep_table, dep_idx, fetch_queue.items(.edge)[0..i])) |idx| { fetch_queue.items(.result)[i] = .{ .replace_me = idx, }; return; } else { fetch_queue.items(.result)[i] = .{ .fill_resolution = res_idx, }; } } else if (findMatch(dep_table, dep_idx, fetch_queue.items(.edge)[0..i])) |idx| { fetch_queue.items(.result)[i] = .{ .replace_me = idx, }; return; } else if (findPartialMatch(dep_table, dep_idx, fetch_queue.items(.edge)[0..i])) |idx| { fetch_queue.items(.result)[i] = .{ .copy_deps = idx, }; return; } else { fetch_queue.items(.result)[i] = .{ .new_entry = {}, }; } try fetch( arena, dep_table[dep_idx], &fetch_queue.items(.deps)[i], &fetch_queue.items(.path)[i], ); } pub fn updateResolution( allocator: Allocator, resolutions: *ResolutionTable, dep_table: []const Dependency.Source, fetch_queue: *FetchQueue, i: usize, ) !void { switch (fetch_queue.items(.result)[i]) { .fill_resolution => |res_idx| { const dep_idx = fetch_queue.items(.edge)[i].to; assert(resolutions.items[res_idx].dep_idx == null); resolutions.items[res_idx].dep_idx = dep_idx; }, .new_entry => { const dep_idx = fetch_queue.items(.edge)[i].to; const url = &dep_table[dep_idx].url; const root = url.root orelse utils.default_root; try resolutions.append(allocator, .{ .str = url.str, .root = root, .dep_idx = dep_idx, }); }, .replace_me => |dep_idx| fetch_queue.items(.edge)[i].to = dep_idx, .err => |err| return err, // TODO: update resolution table .copy_deps => |queue_idx| try 
fetch_queue.items(.deps)[i].appendSlice( allocator, fetch_queue.items(.deps)[queue_idx].items, ), } }
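A note on the cache layout driven by `fmtCachePath` above: the URL's scheme is stripped and every `/` in the remainder becomes `-`, so each URL maps to a single flat entry name under `.gyro`. A minimal sketch of that mangling as a standalone test (the URL is hypothetical, and the scheme length is hard-coded where the real code uses `uri.parse`):

```zig
const std = @import("std");

test "url to cache entry name (fmtCachePath sketch)" {
    // Hypothetical URL; the real code finds the scheme with uri.parse,
    // here its length ("https") plus "://" is hard-coded for brevity.
    const url = "https://example.com/pkg/foo.tar.gz";
    const name = try std.mem.replaceOwned(
        u8,
        std.testing.allocator,
        url["https".len + 3 ..],
        "/",
        "-",
    );
    defer std.testing.allocator.free(name);
    try std.testing.expectEqualStrings("example.com-pkg-foo.tar.gz", name);
}
```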
0
repos/gyro
repos/gyro/src/local.zig
const std = @import("std"); const Engine = @import("Engine.zig"); const Dependency = @import("Dependency.zig"); const Project = @import("Project.zig"); const utils = @import("utils.zig"); const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig"); const Allocator = std.mem.Allocator; const ArenaAllocator = std.heap.ArenaAllocator; pub const name = "local"; pub const Resolution = []const u8; pub const ResolutionEntry = struct { path: []const u8, root: []const u8, dep_idx: ?usize = null, }; pub const FetchError = error{Todo} || @typeInfo(@typeInfo(@TypeOf(std.fs.Dir.openDir)).Fn.return_type.?).ErrorUnion.error_set || @typeInfo(@typeInfo(@TypeOf(std.fs.path.join)).Fn.return_type.?).ErrorUnion.error_set || @typeInfo(@typeInfo(@TypeOf(Project.fromDirPath)).Fn.return_type.?).ErrorUnion.error_set; const FetchQueue = Engine.MultiQueueImpl(Resolution, FetchError); const ResolutionTable = std.ArrayListUnmanaged(ResolutionEntry); /// local source types should never be in the lockfile pub fn deserializeLockfileEntry( allocator: Allocator, it: *std.mem.TokenIterator(u8), resolutions: *ResolutionTable, ) !void { // TODO: warn but continue processing lockfile _ = allocator; _ = it; _ = resolutions; return error.LocalsDontLock; } /// does nothing because we don't lock local source types pub fn serializeResolutions( resolutions: []const ResolutionEntry, writer: anytype, ) !void { _ = resolutions; _ = writer; } pub fn findResolution( dep: Dependency.Source, resolutions: []const ResolutionEntry, ) ?usize { _ = dep; _ = resolutions; return null; } pub fn dedupeResolveAndFetch( arena: *ThreadSafeArenaAllocator, dep_table: []const Dependency.Source, resolutions: []const ResolutionEntry, fetch_queue: *FetchQueue, i: usize, ) void { dedupeResolveAndFetchImpl( arena, dep_table, resolutions, fetch_queue, i, ) catch |err| { fetch_queue.items(.result)[i] = .{ .err = err }; }; } fn dedupeResolveAndFetchImpl( arena: *ThreadSafeArenaAllocator, dep_table: []const Dependency.Source, resolutions: []const ResolutionEntry, fetch_queue: *FetchQueue, i: usize, ) FetchError!void { _ = resolutions; const edge = fetch_queue.items(.edge)[i]; const dep = &dep_table[edge.to].local; var base_dir = try std.fs.cwd().openDir(dep.path, .{}); defer base_dir.close(); const project_file = try base_dir.createFile("gyro.zzz", .{ .read = true, .truncate = false, .exclusive = false, }); defer project_file.close(); const text = try project_file.reader().readAllAlloc(arena.allocator(), std.math.maxInt(usize)); const project = try Project.fromUnownedText(arena, dep.path, text); defer project.destroy(); const root = dep.root orelse blk: { const pkg = (try project.findBestMatchingPackage(edge.alias)) orelse break :blk utils.default_root; break :blk pkg.root orelse utils.default_root; }; fetch_queue.items(.path)[i] = try utils.joinPathConvertSep(arena, &.{ dep.path, root }); try fetch_queue.items(.deps)[i].appendSlice(arena.child_allocator, project.deps.items); } pub fn updateResolution( allocator: Allocator, resolutions: *ResolutionTable, dep_table: []const Dependency.Source, fetch_queue: *FetchQueue, i: usize, ) !void { _ = allocator; _ = resolutions; _ = dep_table; _ = fetch_queue; _ = i; }
0
repos/gyro
repos/gyro/src/Project.zig
const std = @import("std"); const zzz = @import("zzz"); const version = @import("version"); const Package = @import("Package.zig"); const Dependency = @import("Dependency.zig"); const utils = @import("utils.zig"); const ThreadSafeArenaAllocator = @import("ThreadSafeArenaAllocator.zig"); const Allocator = std.mem.Allocator; const Self = @This(); allocator: Allocator, arena: *ThreadSafeArenaAllocator, base_dir: []const u8, text: []const u8, owns_text: bool, packages: std.StringHashMap(Package), deps: std.ArrayList(Dependency), build_deps: std.ArrayList(Dependency), pub const Iterator = struct { inner: std.StringHashMapUnmanaged(Package).Iterator, pub fn next(self: *Iterator) ?*Package { return if (self.inner.next()) |entry| &entry.value_ptr.* else null; } }; fn create( arena: *ThreadSafeArenaAllocator, base_dir: []const u8, text: []const u8, owns_text: bool, ) !*Self { const allocator = arena.child_allocator; const ret = try allocator.create(Self); errdefer allocator.destroy(ret); ret.* = Self{ .allocator = arena.child_allocator, .arena = arena, .base_dir = base_dir, .text = text, .owns_text = owns_text, .packages = std.StringHashMap(Package).init(allocator), .deps = std.ArrayList(Dependency).init(allocator), .build_deps = std.ArrayList(Dependency).init(allocator), }; errdefer ret.deinit(); if (std.mem.indexOf(u8, ret.text, "\r\n") != null) { std.log.err("gyro.zzz requires LF line endings, not CRLF", .{}); return error.Explained; } var tree = zzz.ZTree(1, 1000){}; var root = try tree.appendText(ret.text); if (utils.zFindChild(root, "pkgs")) |pkgs| { var it = utils.ZChildIterator.init(pkgs); while (it.next()) |node| { const name = try utils.zGetString(node); const ver_str = (try utils.zFindString(node, "version")) orelse { std.log.err("missing version string in package", .{}); return error.Explained; }; const ver = version.Semver.parse(ver_str) catch |err| { std.log.err("failed to parse version string '{s}', must be <major>.<minor>.<patch>: {}", .{ ver_str, err }); return error.Explained; }; const res = try ret.packages.getOrPut(name); if (res.found_existing) { std.log.err("duplicate exported packages {s}", .{name}); return error.Explained; } res.value_ptr.* = try Package.init( allocator, name, ver, ret, ); try res.value_ptr.fillFromZNode(node); } } inline for (.{ "deps", "build_deps" }) |deps_field| { if (utils.zFindChild(root, deps_field)) |deps| { var it = utils.ZChildIterator.init(deps); while (it.next()) |dep_node| { var dep = try Dependency.fromZNode(ret.arena, dep_node); for (ret.deps.items) |other| { if (std.mem.eql(u8, dep.alias, other.alias)) { std.log.err("'{s}' alias in 'deps' is declared multiple times", .{dep.alias}); return error.Explained; } } else { if (dep.src == .local) { const resolved = try std.fs.path.resolve( allocator, &.{ base_dir, dep.src.local.path }, ); defer allocator.free(resolved); dep.src.local.path = try std.fs.path.relative(ret.arena.allocator(), ".", resolved); } try @field(ret, deps_field).append(dep); } } } } return ret; } fn deinit(self: *Self) void { var it = self.packages.iterator(); while (it.next()) |entry| { entry.value_ptr.deinit(); _ = self.packages.remove(entry.key_ptr.*); } self.deps.deinit(); self.build_deps.deinit(); self.packages.deinit(); if (self.owns_text) self.allocator.free(self.text); } pub fn destroy(self: *Self) void { self.deinit(); self.allocator.destroy(self); } pub fn fromUnownedText(arena: *ThreadSafeArenaAllocator, base_dir: []const u8, text: []const u8) !*Self { return try Self.create(arena, base_dir, text, false); } pub fn 
fromFile(arena: *ThreadSafeArenaAllocator, base_dir: []const u8, file: std.fs.File) !*Self { return Self.create( arena, base_dir, try file.reader().readAllAlloc(arena.child_allocator, std.math.maxInt(usize)), true, ); } pub fn fromDirPath( arena: *ThreadSafeArenaAllocator, base_dir: []const u8, ) !*Self { var dir = try std.fs.cwd().openDir(base_dir, .{}); defer dir.close(); const file = try dir.openFile("gyro.zzz", .{}); defer file.close(); return Self.fromFile(arena, base_dir, file); } pub fn write(self: Self, writer: anytype) !void { var tree = zzz.ZTree(1, 1000){}; var root = try tree.addNode(null, .Null); var arena = ThreadSafeArenaAllocator.init(self.allocator); defer arena.deinit(); if (self.packages.count() > 0) { var pkgs = try tree.addNode(root, .{ .String = "pkgs" }); var it = self.packages.iterator(); while (it.next()) |entry| _ = try entry.value_ptr.addToZNode(&arena, &tree, pkgs); } if (self.deps.items.len > 0) { var deps = try tree.addNode(root, .{ .String = "deps" }); for (self.deps.items) |dep| try dep.addToZNode(&arena, &tree, deps, false); } if (self.build_deps.items.len > 0) { var build_deps = try tree.addNode(root, .{ .String = "build_deps" }); for (self.build_deps.items) |dep| try dep.addToZNode(&arena, &tree, build_deps, false); } try root.stringifyPretty(writer); } pub fn toFile(self: *Self, file: std.fs.File) !void { try file.setEndPos(0); try file.seekTo(0); try self.write(file.writer()); } pub fn contains(self: Self, name: []const u8) bool { return self.packages.contains(name); } pub fn get(self: Self, name: []const u8) ?*Package { return if (self.packages.getEntry(name)) |entry| &entry.value_ptr.* else null; } pub fn iterator(self: Self) Iterator { return Iterator{ .inner = self.packages.iterator() }; } pub fn findBestMatchingPackage(self: Self, alias: []const u8) !?Package { // I would use a switch but it's not deducing types correctly. const count = self.packages.count(); if (0 == count) return null else if (1 == count) { var it = self.packages.iterator(); return it.next().?.value_ptr.*; } else if (self.packages.get(alias)) |pkg| return pkg else { std.log.err("ambiguous package selection, dependency has alias of '{s}', options are:", .{alias}); var it = self.packages.iterator(); while (it.next()) |entry| std.log.err(" {s}, root: {s}", .{ entry.key_ptr.*, entry.value_ptr.root.? }); return error.Explained; } }
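The selection policy in `findBestMatchingPackage` above is worth spelling out: zero exported packages yields null, a single export always wins regardless of alias, and with multiple exports the alias must match a package name exactly or the selection is ambiguous. A reduced sketch of that policy over plain strings (names are hypothetical):

```zig
const std = @import("std");

// Zero packages -> null, one package -> that package, otherwise the alias
// must name an exported package exactly.
fn bestMatch(pkgs: std.StringHashMap([]const u8), alias: []const u8) error{Ambiguous}!?[]const u8 {
    const count = pkgs.count();
    if (count == 0) return null;
    if (count == 1) {
        var it = pkgs.iterator();
        return it.next().?.value_ptr.*;
    }
    return pkgs.get(alias) orelse error.Ambiguous;
}

test "alias selection policy" {
    var pkgs = std.StringHashMap([]const u8).init(std.testing.allocator);
    defer pkgs.deinit();
    try std.testing.expectEqual(@as(?[]const u8, null), try bestMatch(pkgs, "foo"));

    try pkgs.put("a", "src/a.zig");
    try pkgs.put("b", "src/b.zig");
    try std.testing.expectError(error.Ambiguous, bestMatch(pkgs, "nope"));
}
```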
0
repos/gyro
repos/gyro/src/completion.zig
const std = @import("std"); const clap = @import("clap"); const assert = std.debug.assert; pub const Param = struct { pub const Repository = struct {}; pub const Directory = struct {}; pub const Package = struct {}; pub const AnyFile = struct {}; pub const File = struct {}; short_name: ?u8 = null, long_name: ?[]const u8 = null, description: []const u8, value_name: ?[]const u8 = null, size: clap.Values = .none, data: type, }; const ClapParam = clap.Param(clap.Help); pub const Command = struct { name: []const u8, summary: []const u8, params: []const Param = &[_]Param{}, clap_params: []const ClapParam = &[_]ClapParam{}, parent: type, passthrough: bool = false, pub fn init(comptime name: []const u8, summary: []const u8, parent: type) Command { return .{ .name = name, .summary = summary, .parent = parent, }; } pub fn addFlag(comptime self: *Command, comptime short: ?u8, comptime long: ?[]const u8, comptime description: []const u8) void { assert(short != null or long != null); self.params = self.params ++ [_]Param{.{ .short_name = short, .long_name = long, .description = description, .data = void, }}; } pub fn addOption(comptime self: *Command, comptime short: ?u8, comptime long: ?[]const u8, comptime value_name: []const u8, data: type, comptime description: []const u8) void { assert(short != null or long != null); self.params = self.params ++ [_]Param{.{ .short_name = short, .long_name = long, .description = description, .value_name = value_name, .data = data, .size = .one, }}; } pub fn addPositional(comptime self: *Command, comptime value_name: []const u8, data: type, comptime size: clap.Values, comptime description: []const u8) void { self.params = self.params ++ [_]Param{.{ .description = description, .value_name = value_name, .data = data, .size = size, }}; } pub fn done(comptime self: *Command) void { self.clap_params = &[_]ClapParam{}; for (self.params) |p| { self.clap_params = self.clap_params ++ [_]ClapParam{.{ .id = .{ .desc = p.description, .val = p.value_name orelse "", }, .names = .{ .short = p.short_name, .long = p.long_name, }, .takes_value = p.size, }}; } } pub fn parseParamsComptime(comptime self: *const Command) type { return clap.parseParamsComptime(clap.Help, self.clap_params); } }; pub const shells = struct { pub const List = enum { zsh }; pub const zsh = struct { pub fn writeAll(writer: anytype, comptime commands: []const Command) !void { try writer.writeAll( \\#compdef gyro \\ \\function _gyro { \\ local -a __subcommands \\ local line state \\ \\ __subcommands=( \\ ); inline for (commands) |cmd| { try writer.print(" \"{s}:{}\"\n", .{ cmd.name, std.zig.fmtEscapes(cmd.summary) }); } try writer.writeAll( \\ ) \\ \\ _arguments -C \ \\ "1: :->subcommand" \ \\ "*::arg:->args" \\ \\ case $state in \\ subcommand) \\ _describe 'command' __subcommands \\ ;; \\ args) \\ __subcommand="__gyro_cmd_${line[1]}" \\ if type $__subcommand >/dev/null; then \\ $__subcommand \\ fi \\ ;; \\ esac \\} \\ \\ ); inline for (commands) |cmd| { try writer.print("function __gyro_cmd_{s} {{\n", .{cmd.name}); try writer.writeAll(" _arguments \\\n"); inline for (cmd.params) |param, i| { try writer.writeAll(" "); if (param.short_name == null and param.long_name == null) { // positional try writer.writeAll("\""); } else { // flag or option if (param.short_name == null) { try writer.print("--{s}", .{param.long_name}); } else if (param.long_name == null) { try writer.print("-{c}", .{param.short_name}); } else { try writer.print("{{-{c},--{s}}}", .{ param.short_name, param.long_name }); } try 
writer.print("\"[{}]", .{std.zig.fmtEscapes(param.description)}); } try writeType(writer, param.data); try writer.writeAll("\""); if (i < cmd.params.len - 1) { try writer.writeAll(" \\\n"); } } try writer.writeAll("\n}\n\n"); } try writer.writeAll("_gyro\n"); } fn writeType(writer: anytype, comptime T: type) @TypeOf(writer).Error!void { switch (T) { void => return, Param.Directory => { try writer.writeAll(": :_files -/"); }, Param.AnyFile => { try writer.writeAll(": :_files"); }, Param.File => { try writer.writeAll(": :_files -g '*.zig'"); }, Param.Package, Param.Repository => { try writer.writeAll(": :_nothing"); }, else => { switch (@typeInfo(T)) { .Optional => |info| { try writer.writeAll(":"); try writeType(writer, info.child); }, .Enum => |info| { try writer.writeAll(": :("); inline for (info.fields) |field| { try writer.print("'{s}' ", .{field.name}); } try writer.writeAll(")"); }, else => @compileError("not implemented"), } }, } } }; };
0
repos
repos/libflightplan/shell.nix
(import ( let flake-compat = (builtins.fromJSON (builtins.readFile ./flake.lock)).nodes.flake-compat; in fetchTarball { url = "https://github.com/edolstra/flake-compat/archive/${flake-compat.locked.rev}.tar.gz"; sha256 = flake-compat.locked.narHash; } ) { src = ./.; }).shellNix
0
repos
repos/libflightplan/README.md
# libflightplan (Zig and C) libflightplan is a library for reading and writing flight plans in various formats. Flight plans are used in aviation to save properties of one or more flights such as route (waypoints), altitude, departure and destination airports, etc. This library is written primarily in Zig but exports a C ABI compatible shared and static library so that any programming language that can interface with C can interface with this library. **Warning!** If you use this library with the intention of using the flight plan for actual flight, be very careful to verify the plan in your avionics or EFB. Never trust the output of this library for actual flight. **Library status: Unstable.** This library is _brand new_ and was built for hobby purposes. It only supports a handful of formats, with limitations. My primary interest at the time of writing this is ForeFlight flight plans and being able to use them to build supporting tools, but I'm interested in supporting more formats over time. ## Formats | Name | Ext | Read | Write | | :--- | :---: | :---: | :---: | | ForeFlight | FPL | ✅ | ✅* | | Garmin | FPL | ✅ | ✅* | | X-Plane FMS 11 | FMS | ❌ | ✅* | \*: The C API doesn't support creating flight plans from scratch or modifying existing flight plans. But you can read in one format and encode in another. The Zig API supports full creation and modification. ## Usage libflightplan can be used from C and [Zig](https://ziglang.org/). Examples for each are shown below. ### C The C API is documented as [man pages](https://github.com/mitchellh/libflightplan/tree/main/doc) as well as the [flightplan.h header file](https://github.com/mitchellh/libflightplan/blob/main/include/flightplan.h). An example program is available in [`examples/basic.c`](https://github.com/mitchellh/libflightplan/blob/main/examples/basic.c), and a simplified version is reproduced below. This example shows how to read and extract information from a ForeFlight flight plan. The C API is available as both a static and shared library. To build them, install [Zig](https://ziglang.org/) and run `zig build install`. This also installs `pkg-config` files so the header and libraries can be easily found and integrated with other build systems. ```c #include <stddef.h> #include <stdio.h> #include <flightplan.h> int main() { // Parse our flight plan from an FPL file out of ForeFlight. flightplan *fpl = fpl_garmin_parse_file("./test/basic.fpl"); if (fpl == NULL) { // We can get a more detailed error. flightplan_error *err = fpl_last_error(); printf("error: %s\n", fpl_error_message(err)); fpl_cleanup(); return 1; } // Iterate and output the full ordered route. int max = fpl_route_points_count(fpl); printf("\nroute: \"%s\" (points: %d)\n", fpl_route_name(fpl), max); for (int i = 0; i < max; i++) { flightplan_route_point *point = fpl_route_points_get(fpl, i); printf(" %s\n", fpl_route_point_identifier(point)); } // Convert this to an X-Plane 11 flight plan.
fpl_xplane11_write_to_file(fpl, "./copy.fms"); fpl_free(fpl); fpl_cleanup(); return 0; } ``` ### Zig ```zig const std = @import("std"); const flightplan = @import("flightplan"); fn main() !void { defer flightplan.deinit(); var alloc = std.heap.ArenaAllocator.init(std.heap.page_allocator); defer alloc.deinit(); var fpl = try flightplan.Format.Garmin.initFromFile(alloc, "./test/basic.fpl"); defer fpl.deinit(); std.debug.print("route: \"{s}\" (points: {d})\n", .{ fpl.route.name.?, fpl.route.points.items.len, }); for (fpl.route.points.items) |point| { std.debug.print(" {s}\n", .{point}); } // Convert to an X-Plane 11 flight plan format flightplan.Format.XPlaneFMS11.Format.writeToFile("./copy.fms", fpl); } ``` ## Build To build libflightplan, you need to have the following installed: * [Zig](https://ziglang.org/) * [Libxml2](http://www.xmlsoft.org/) With the dependencies installed, you can run `zig build` to make a local build of the libraries. You can run `zig build install` to build and install the libraries and headers to your standard prefix. And you can run `zig build test` to run all the tests. A [Nix](https://nixos.org/) flake is also provided. If you are a Nix user, you can easily build this library, depend on it, etc. You know who you are and you know what to do.
0
repos
repos/libflightplan/build.zig
const std = @import("std"); const Builder = std.build.Builder; const libxml2 = @import("vendor/zig-libxml2/libxml2.zig"); const ScdocStep = @import("src-build/ScdocStep.zig"); // Zig packages in use const pkgs = struct { const flightplan = pkg("src/main.zig"); }; /// pkg can be called to get the Pkg for this library. Downstream users /// can use this to add the package to the import paths. pub fn pkg(path: []const u8) std.build.Pkg { return std.build.Pkg{ .name = "flightplan", .path = .{ .path = path }, }; } pub fn build(b: *Builder) !void { const mode = b.standardReleaseOptions(); const target = b.standardTargetOptions(.{}); // Options const man_pages = b.option( bool, "man-pages", "Set to true to build man pages. Requires scdoc. Defaults to true if scdoc is found.", ) orelse scdoc_found: { _ = b.findProgram(&[_][]const u8{"scdoc"}, &[_][]const u8{}) catch |err| switch (err) { error.FileNotFound => break :scdoc_found false, else => return err, }; break :scdoc_found true; }; // Steps const test_step = b.step("test", "Run all tests"); const test_unit_step = b.step("test-unit", "Run unit tests only"); // Build libxml2 for static builds only const xml2 = try libxml2.create(b, target, mode, .{ .iconv = false, .lzma = false, .zlib = false, }); // Native Zig tests const lib_tests = b.addTestSource(pkgs.flightplan.path); addSharedSettings(lib_tests, mode, target); xml2.link(lib_tests); test_unit_step.dependOn(&lib_tests.step); test_step.dependOn(&lib_tests.step); // Static C lib { const static_lib = b.addStaticLibrary("flightplan", "src/binding.zig"); addSharedSettings(static_lib, mode, target); xml2.addIncludeDirs(static_lib); static_lib.install(); b.default_step.dependOn(&static_lib.step); const static_binding_test = b.addExecutable("static-binding", null); static_binding_test.setBuildMode(mode); static_binding_test.setTarget(target); static_binding_test.linkLibC(); static_binding_test.addIncludeDir("include"); static_binding_test.addCSourceFile("examples/basic.c", &[_][]const u8{ "-Wall", "-Wextra", "-pedantic", "-std=c99" }); static_binding_test.linkLibrary(static_lib); xml2.link(static_binding_test); const static_binding_test_run = static_binding_test.run(); test_step.dependOn(&static_binding_test_run.step); } // Dynamic C lib. We only build this if this is the native target so we // can link to libxml2 on our native system. 
if (target.isNative()) { const dynamic_lib_name = if (target.isWindows()) "flightplan.dll" else "flightplan"; const dynamic_lib = b.addSharedLibrary(dynamic_lib_name, "src/binding.zig", .unversioned); addSharedSettings(dynamic_lib, mode, target); dynamic_lib.linkSystemLibrary("libxml-2.0"); dynamic_lib.install(); b.default_step.dependOn(&dynamic_lib.step); const dynamic_binding_test = b.addExecutable("dynamic-binding", null); dynamic_binding_test.setBuildMode(mode); dynamic_binding_test.setTarget(target); dynamic_binding_test.linkLibC(); dynamic_binding_test.addIncludeDir("include"); dynamic_binding_test.addCSourceFile("examples/basic.c", &[_][]const u8{ "-Wall", "-Wextra", "-pedantic", "-std=c99" }); dynamic_binding_test.linkLibrary(dynamic_lib); const dynamic_binding_test_run = dynamic_binding_test.run(); test_step.dependOn(&dynamic_binding_test_run.step); } // Headers const install_header = b.addInstallFileWithDir( .{ .path = "include/flightplan.h" }, .header, "flightplan.h", ); b.getInstallStep().dependOn(&install_header.step); // pkg-config { const file = try std.fs.path.join( b.allocator, &[_][]const u8{ b.cache_root, "libflightplan.pc" }, ); const pkgconfig_file = try std.fs.cwd().createFile(file, .{}); const writer = pkgconfig_file.writer(); try writer.print( \\prefix={s} \\includedir=${{prefix}}/include \\libdir=${{prefix}}/lib \\ \\Name: libflightplan \\URL: https://github.com/mitchellh/libflightplan \\Description: Library for reading and writing aviation flight plans. \\Version: 0.1.0 \\Cflags: -I${{includedir}} \\Libs: -L${{libdir}} -lflightplan , .{b.install_prefix}); defer pkgconfig_file.close(); b.installFile(file, "share/pkgconfig/libflightplan.pc"); } if (man_pages) { const scdoc_step = ScdocStep.create(b); try scdoc_step.install(); } } /// The shared settings that we need to apply when building a library or /// executable using libflightplan. fn addSharedSettings( lib: *std.build.LibExeObjStep, mode: std.builtin.Mode, target: std.zig.CrossTarget, ) void { lib.setBuildMode(mode); lib.setTarget(target); lib.addPackage(pkgs.flightplan); lib.addIncludeDir("src/include"); lib.addIncludeDir("include"); lib.linkLibC(); }
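As the doc comment on `pkg()` notes, downstream users call it to wire the package into their own import paths. A hypothetical consumer `build.zig` might look like the sketch below (the vendor paths are made up, and a real consumer would still need to provide libxml2 just as the library's own build does):

```zig
const std = @import("std");
// Hypothetical vendored location of this repository.
const libflightplan = @import("vendor/libflightplan/build.zig");

pub fn build(b: *std.build.Builder) void {
    const mode = b.standardReleaseOptions();
    const exe = b.addExecutable("app", "src/main.zig");
    exe.setBuildMode(mode);
    // pkg() points the "flightplan" package at the vendored source tree.
    exe.addPackage(libflightplan.pkg("vendor/libflightplan/src/main.zig"));
    exe.install();
}
```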
0
repos
repos/libflightplan/flake.nix
{ description = "libflightplan"; inputs = { nixpkgs.url = "github:nixos/nixpkgs/nixpkgs-unstable"; flake-utils.url = "github:numtide/flake-utils"; zig.url = "github:roarkanize/zig-overlay"; # Used for shell.nix flake-compat = { url = github:edolstra/flake-compat; flake = false; }; # Dependencies we track using flake.lock zig-libxml2-src = { url = "https://github.com/mitchellh/zig-libxml2.git"; flake = false; submodules = true; type = "git"; ref = "main"; }; }; outputs = { self, nixpkgs, flake-utils, ... }@inputs: let overlays = [ # Our repo overlay (import ./nix/overlay.nix) # Other overlays (final: prev: { zigpkgs = inputs.zig.packages.${prev.system}; zig-libxml2-src = inputs.zig-libxml2-src; }) ]; # Our supported systems are the same supported systems as the Zig binaries systems = builtins.attrNames inputs.zig.packages; in flake-utils.lib.eachSystem systems (system: let pkgs = import nixpkgs { inherit overlays system; }; in rec { devShell = pkgs.devShell; packages.libflightplan = pkgs.libflightplan; defaultPackage = packages.libflightplan; } ); }
0
repos
repos/libflightplan/flake.lock
{ "nodes": { "flake-compat": { "flake": false, "locked": { "lastModified": 1641205782, "narHash": "sha256-4jY7RCWUoZ9cKD8co0/4tFARpWB+57+r1bLLvXNJliY=", "owner": "edolstra", "repo": "flake-compat", "rev": "b7547d3eed6f32d06102ead8991ec52ab0a4f1a7", "type": "github" }, "original": { "owner": "edolstra", "repo": "flake-compat", "type": "github" } }, "flake-utils": { "locked": { "lastModified": 1638122382, "narHash": "sha256-sQzZzAbvKEqN9s0bzWuYmRaA03v40gaJ4+iL1LXjaeI=", "owner": "numtide", "repo": "flake-utils", "rev": "74f7e4319258e287b0f9cb95426c9853b282730b", "type": "github" }, "original": { "owner": "numtide", "repo": "flake-utils", "type": "github" } }, "flake-utils_2": { "locked": { "lastModified": 1629481132, "narHash": "sha256-JHgasjPR0/J1J3DRm4KxM4zTyAj4IOJY8vIl75v/kPI=", "owner": "numtide", "repo": "flake-utils", "rev": "997f7efcb746a9c140ce1f13c72263189225f482", "type": "github" }, "original": { "owner": "numtide", "repo": "flake-utils", "type": "github" } }, "nixpkgs": { "locked": { "lastModified": 1642069818, "narHash": "sha256-666w6j8wl/bojfgpp0k58/UJ5rbrdYFbI2RFT2BXbSQ=", "owner": "nixos", "repo": "nixpkgs", "rev": "46821ea01c8f54d2a20f5a503809abfc605269d7", "type": "github" }, "original": { "owner": "nixos", "ref": "nixpkgs-unstable", "repo": "nixpkgs", "type": "github" } }, "nixpkgs_2": { "locked": { "lastModified": 1631288242, "narHash": "sha256-sXm4KiKs7qSIf5oTAmrlsEvBW193sFj+tKYVirBaXz0=", "owner": "NixOS", "repo": "nixpkgs", "rev": "0e24c87754430cb6ad2f8c8c8021b29834a8845e", "type": "github" }, "original": { "owner": "NixOS", "ref": "nixpkgs-unstable", "repo": "nixpkgs", "type": "github" } }, "root": { "inputs": { "flake-compat": "flake-compat", "flake-utils": "flake-utils", "nixpkgs": "nixpkgs", "zig": "zig", "zig-libxml2-src": "zig-libxml2-src" } }, "zig": { "inputs": { "flake-utils": "flake-utils_2", "nixpkgs": "nixpkgs_2" }, "locked": { "lastModified": 1642206480, "narHash": "sha256-aS5zbhz+KXmDYBINbEabnVEhp7OQ0yR/O7pfFZqKRPw=", "owner": "roarkanize", "repo": "zig-overlay", "rev": "78e7670a7e1d57a60819dc1bfa026670bf09c48c", "type": "github" }, "original": { "owner": "roarkanize", "repo": "zig-overlay", "type": "github" } }, "zig-libxml2-src": { "flake": false, "locked": { "lastModified": 1642210805, "narHash": "sha256-zQh4yqCOetocb8fV/0Rgbq3JcMhaJQKGgvmsSrdl/h4=", "ref": "main", "rev": "c2cf5ec294d08adfa0fc7aea7245a83871ed19f2", "revCount": 10, "submodules": true, "type": "git", "url": "https://github.com/mitchellh/zig-libxml2.git" }, "original": { "ref": "main", "submodules": true, "type": "git", "url": "https://github.com/mitchellh/zig-libxml2.git" } } }, "root": "root", "version": 7 }
0
repos/libflightplan
repos/libflightplan/src-build/ScdocStep.zig
const std = @import("std"); const mem = std.mem; const fs = std.fs; const Step = std.build.Step; const Builder = std.build.Builder; /// ScdocStep generates man pages using scdoc(1). /// /// It reads all the raw pages from src_path and writes them to out_path. /// src_path is typically "doc/" relative to the build root and out_path is /// the build cache. /// /// The man pages can be installed by calling install() on the step. const ScdocStep = @This(); step: Step, builder: *Builder, /// path to read man page sources from, defaults to the "doc/" subdirectory /// from the build.zig file. This must be an absolute path. src_path: []const u8, /// path where the generated man pages will be written (NOT installed). This /// defaults to build cache root. out_path: []const u8, pub fn create(builder: *Builder) *ScdocStep { const self = builder.allocator.create(ScdocStep) catch unreachable; self.* = init(builder); return self; } pub fn init(builder: *Builder) ScdocStep { return ScdocStep{ .builder = builder, .step = Step.init(.custom, "generate man pages", builder.allocator, make), .src_path = builder.pathFromRoot("doc/"), .out_path = fs.path.join(builder.allocator, &[_][]const u8{ builder.cache_root, "man", }) catch unreachable, }; } fn make(step: *std.build.Step) !void { const self = @fieldParentPtr(ScdocStep, "step", step); // Create our cache path // TODO(mitchellh): ideally this would be pure zig { const command = try std.fmt.allocPrint( self.builder.allocator, "rm -f {[path]s}/* && mkdir -p {[path]s}", .{ .path = self.out_path }, ); _ = try self.builder.exec(&[_][]const u8{ "sh", "-c", command }); } // Find all our man pages which are in our src path ending with ".scd". var dir = try fs.openDirAbsolute(self.src_path, .{ .iterate = true, }); defer dir.close(); var iter = dir.iterate(); while (try iter.next()) |*entry| { // We only want "scd" files to generate. if (!mem.eql(u8, fs.path.extension(entry.name), ".scd")) { continue; } const src = try fs.path.join( self.builder.allocator, &[_][]const u8{ self.src_path, entry.name }, ); const dst = try fs.path.join( self.builder.allocator, &[_][]const u8{ self.out_path, entry.name[0..(entry.name.len - 4)] }, ); const command = try std.fmt.allocPrint( self.builder.allocator, "scdoc < {s} > {s}", .{ src, dst }, ); _ = try self.builder.exec(&[_][]const u8{ "sh", "-c", command }); } } pub fn install(self: *ScdocStep) !void { // Ensure that `zig build install` depends on our generation step first. self.builder.getInstallStep().dependOn(&self.step); // Then run our install step which looks at what we made out of our // generation and moves it to the install prefix. const install_step = InstallStep.create(self.builder, self); self.builder.getInstallStep().dependOn(&install_step.step); } /// Install man pages, create using install() on ScdocStep. 
const InstallStep = struct { step: Step, builder: *Builder, scdoc: *ScdocStep, pub fn create(builder: *Builder, scdoc: *ScdocStep) *InstallStep { const self = builder.allocator.create(InstallStep) catch unreachable; self.* = InstallStep.init(builder, scdoc); return self; } pub fn init(builder: *Builder, scdoc: *ScdocStep) InstallStep { return InstallStep{ .builder = builder, .step = Step.init(.custom, "generate man pages", builder.allocator, InstallStep.make), .scdoc = scdoc, }; } fn make(step: *Step) !void { const self = @fieldParentPtr(InstallStep, "step", step); // Get our absolute output path var path = self.scdoc.out_path; if (!fs.path.isAbsolute(path)) { path = self.builder.pathFromRoot(path); } // Find all our man pages which are in our src path ending with ".scd". var dir = try fs.openDirAbsolute(path, .{ .iterate = true }); defer dir.close(); var iter = dir.iterate(); while (try iter.next()) |*entry| { // We expect filenames to be "foo.3" and this gets us "3" const section = entry.name[(entry.name.len - 1)..]; const src = try fs.path.join( self.builder.allocator, &[_][]const u8{ path, entry.name }, ); const output = try std.fmt.allocPrint( self.builder.allocator, "share/man/man{s}/{s}", .{ section, entry.name }, ); const fileStep = self.builder.addInstallFile( .{ .path = src }, output, ); try fileStep.step.make(); } } };
0
repos/libflightplan
repos/libflightplan/nix/overlay.nix
final: prev: rec { # Notes: # # When determining a SHA256, use this to set a fake one until we know # the real value: # # vendorSha256 = nixpkgs.lib.fakeSha256; # devShell = prev.callPackage ./devshell.nix { }; libflightplan = prev.callPackage ./package.nix { }; # zig we want to be the latest nightly since 0.9.0 is not released yet. zig = final.zigpkgs.master.latest; }
0
repos/libflightplan
repos/libflightplan/nix/devshell.nix
{ mkShell , pkg-config , libxml2 , scdoc , zig }: mkShell rec { name = "libflightplan"; nativeBuildInputs = [ pkg-config scdoc zig ]; buildInputs = [ libxml2 ]; }
0
repos/libflightplan
repos/libflightplan/nix/package.nix
{ stdenv , lib , zig , pkg-config , scdoc , libxml2 , zig-libxml2-src }: stdenv.mkDerivation rec { pname = "libflightplan"; version = "0.1.0"; src = ./..; nativeBuildInputs = [ zig scdoc pkg-config ]; buildInputs = [ libxml2 ]; dontConfigure = true; preBuild = '' export HOME=$TMPDIR mkdir -p ./vendor/zig-libxml2 cp -r ${zig-libxml2-src}/* ./vendor/zig-libxml2 ''; installPhase = '' runHook preInstall zig build -Drelease-safe -Dman-pages --prefix $out install runHook postInstall ''; outputs = [ "out" "dev" "man" ]; meta = with lib; { description = "A library for reading and writing flight plans in various formats"; homepage = "https://github.com/mitchellh/libflightplan"; license = licenses.mit; platforms = [ "x86_64-linux" "aarch64-linux" "x86_64-darwin" "aarch64-darwin" ]; }; }
0
repos/libflightplan
repos/libflightplan/include/flightplan.h
#ifndef LIBFLIGHTPLAN_H_GUARD #define LIBFLIGHTPLAN_H_GUARD #ifdef __cplusplus extern "C" { #endif /* * NAME fpl_cleanup() * * DESCRIPTION * * This should be called when the process is done using this library * to perform any global level memory cleanup (really just any errors). * This is safe to call multiple times. * */ void fpl_cleanup(); // A flightplan represents the primary flightplan data structure. typedef void flightplan; /* * NAME fpl_new() * * DESCRIPTION * * Create a new empty flight plan. */ flightplan *fpl_new(); /* * NAME fpl_free() * * DESCRIPTION * * Free resources associated with a flight plan. The flight plan can no longer * be used after this is called. This must be called for any flight plan that * is returned. */ void fpl_free(flightplan *); /* * NAME fpl_created() * * DESCRIPTION * * Returns the timestamp when the flight plan was created. * * NOTE(mitchellh): This raw string is not what I want long term. I want to * convert this to a UTC unix timestamp, so this function will probably change * to a time_t result at some point. */ char *fpl_created(flightplan *); /************************************************************************** * Errors *************************************************************************/ typedef void flightplan_error; /* * NAME fpl_last_error() * * DESCRIPTION * * Returns the last error (if any). An error can be set in any situation * where a function returns NULL or otherwise noted by the documentation. * The error doesn't need to be freed; any memory associated with error storage * is freed when fpl_cleanup is called. * * This error is only valid until another error occurs. * */ flightplan_error *fpl_last_error(); /* * NAME fpl_error_message() * * DESCRIPTION * * Returns a human-friendly error message for this error. * */ char *fpl_error_message(flightplan_error *); /************************************************************************** * Import/Export *************************************************************************/ /* * NAME fpl_garmin_parse_file() * * DESCRIPTION * * Parse a Garmin FPL file. This is also compatible with ForeFlight. */ flightplan *fpl_garmin_parse_file(char *); /* * NAME fpl_garmin_write_to_file() * * DESCRIPTION * * Write a flight plan in Garmin FPL format to the given file. */ int fpl_garmin_write_to_file(flightplan *, char *); /* * NAME fpl_xplane11_write_to_file() * * DESCRIPTION * * Write a flight plan in X-Plane 11 FMS format to the given file. */ int fpl_xplane11_write_to_file(flightplan *, char *); /************************************************************************** * Waypoints *************************************************************************/ // A waypoint that the flight plan may or may not use but knows about. typedef void flightplan_waypoint; typedef void flightplan_waypoint_iter; // Types of waypoints. typedef enum { FLIGHTPLAN_INVALID, FLIGHTPLAN_USER_WAYPOINT, FLIGHTPLAN_AIRPORT, FLIGHTPLAN_NDB, FLIGHTPLAN_VOR, FLIGHTPLAN_INT, FLIGHTPLAN_INT_VRP, } flightplan_waypoint_type; /* * NAME fpl_waypoints_count() * * DESCRIPTION * * Returns the total number of waypoints that are in this flight plan. */ int fpl_waypoints_count(flightplan *); /* * NAME fpl_waypoints_iter() * * DESCRIPTION * * Returns an iterator that can be used to read each of the waypoints. * The iterator is only valid so long as zero modifications are made * to the waypoint list. * * The iterator must be freed with fpl_waypoint_iter_free.
*/ flightplan_waypoint_iter *fpl_waypoints_iter(flightplan *); /* * NAME fpl_waypoint_iter_free() * * DESCRIPTION * * Free resources associated with an iterator. */ void fpl_waypoint_iter_free(flightplan_waypoint_iter *); /* * NAME fpl_waypoints_next() * * DESCRIPTION * * Get the next waypoint for the iterator. This returns NULL when there are * no more waypoints available. The values returned should NOT be manually * freed, they are owned by the flight plan. */ flightplan_waypoint *fpl_waypoints_next(flightplan_waypoint_iter *); // TODO flightplan_waypoint *fpl_waypoint_new(); void fpl_waypoint_free(flightplan_waypoint *); /* * NAME fpl_waypoint_identifier() * * DESCRIPTION * * Return the unique identifier for this waypoint. */ char *fpl_waypoint_identifier(flightplan_waypoint *); /* * NAME fpl_waypoint_lat() * * DESCRIPTION * * Return the latitude for this waypoint as a decimal value. */ float fpl_waypoint_lat(flightplan_waypoint *); /* * NAME fpl_waypoint_lon() * * DESCRIPTION * * Return the longitude for this waypoint as a decimal value. */ float fpl_waypoint_lon(flightplan_waypoint *); /* * NAME fpl_waypoint_type() * * DESCRIPTION * * Returns the type of this waypoint. */ flightplan_waypoint_type fpl_waypoint_type(flightplan_waypoint *); /* * NAME fpl_waypoint_type_str() * * DESCRIPTION * * Convert a waypoint type to a string value. */ char *fpl_waypoint_type_str(flightplan_waypoint_type); /************************************************************************** * Route *************************************************************************/ typedef void flightplan_route_point; typedef void flightplan_route_point_iter; /* * NAME fpl_route_name() * * DESCRIPTION * * The name of the route. */ char *fpl_route_name(flightplan *); /* * NAME fpl_route_points_count() * * DESCRIPTION * * Returns the total number of route points that are in this flight plan. */ int fpl_route_points_count(flightplan *); /* * NAME fpl_route_points_get() * * DESCRIPTION * * Returns the route point at the given index in the route. index must be * greater than or equal to 0 and less than fpl_route_points_count(). */ flightplan_route_point *fpl_route_points_get(flightplan *, int); /* * NAME fpl_route_point_identifier() * * DESCRIPTION * * Returns the identifier of this route point. This should match a waypoint * in the flight plan if it is validly formed. */ char *fpl_route_point_identifier(flightplan_route_point *); #ifdef __cplusplus } #endif #endif
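The waypoint iterator documented above is the one part of the C API with explicit ownership rules: the iterator must be freed, the waypoints it yields must not be. A sketch of that call sequence, here driven from Zig through `@cImport` (the path is hypothetical and error handling is elided):

```zig
const std = @import("std");
const c = @cImport(@cInclude("flightplan.h"));

pub fn main() void {
    // The header declares `char *` (non-const) parameters, so hand it a
    // mutable copy of the (hypothetical) path.
    var path = "./test/basic.fpl".*;
    const fpl = c.fpl_garmin_parse_file(&path) orelse return;
    defer c.fpl_free(fpl);
    defer c.fpl_cleanup();

    const it = c.fpl_waypoints_iter(fpl) orelse return;
    defer c.fpl_waypoint_iter_free(it);

    // NULL from fpl_waypoints_next marks the end; the waypoints stay
    // owned by the flight plan, only the iterator itself is freed.
    var count: usize = 0;
    while (c.fpl_waypoints_next(it)) |_| count += 1;
    std.debug.print("{d} waypoints\n", .{count});
}
```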
0
repos/libflightplan
repos/libflightplan/src/Runway.zig
const Self = @This(); const std = @import("std"); const testing = std.testing; // Number is the runway number such as "25" number: u16, // Position is the relative position of the runway, if any, such as // L, R, C. This is a byte so that it can be any ASCII character, but position: ?Position = null, /// Position is the potential position of runways with matching numbers. pub const Position = enum { L, R, C, }; /// Runway string such as "15L", "15", etc. The buffer must be at least /// 3 characters large. If the buffer isn't large enough you'll get an error. pub fn toString(self: Self, buf: []u8) ![:0]u8 { var posString: [:0]const u8 = ""; if (self.position) |pos| { posString = @tagName(pos); } return try std.fmt.bufPrintZ(buf, "{d:0>2}{s}", .{ self.number, posString }); } test "string" { var buf: [6]u8 = undefined; { const rwy = Self{ .number = 25 }; try testing.expectEqualStrings(try rwy.toString(&buf), "25"); } { const rwy = Self{ .number = 25, .position = .L }; try testing.expectEqualStrings(try rwy.toString(&buf), "25L"); } { const rwy = Self{ .number = 1, .position = .C }; try testing.expectEqualStrings(try rwy.toString(&buf), "01C"); } // Stupid but should work { const rwy = Self{ .number = 679 }; try testing.expectEqualStrings(try rwy.toString(&buf), "679"); } }
0
repos/libflightplan
repos/libflightplan/src/Departure.zig
/// Departure represents information about the departure portion of a flight /// plan, such as the departing airport, runway, procedure, transition, etc. /// /// This is just the departure procedure metadata. The route of the DP is /// expected to still be added manually to the FlightPlan's route field. const Self = @This(); const std = @import("std"); const Allocator = std.mem.Allocator; const Runway = @import("Runway.zig"); /// Departure waypoint ID. This waypoint must be present in the waypoints map /// on a flight plan for more information such as lat/lon. This doesn't have to /// be an airport, this can be a VOR or another NAVAID. identifier: [:0]const u8, /// Departure runway. While this can be set for any identifier, note that /// a runway is non-sensical for a non-airport identifier. runway: ?Runway = null, // Name of the SID used for departure (if any) sid: ?[:0]const u8 = null, // Name of the departure transition (if any). This may be set when sid // is null but that makes no sense. transition: ?[:0]const u8 = null, pub fn deinit(self: *Self, alloc: Allocator) void { alloc.free(self.identifier); if (self.sid) |v| alloc.free(v); if (self.transition) |v| alloc.free(v); self.* = undefined; }
0
repos/libflightplan
repos/libflightplan/src/Route.zig
/// Route structure represents an ordered list of waypoints (and other /// potential metadata) for a route in a flight plan. const Self = @This(); const std = @import("std"); const Allocator = std.mem.Allocator; const PointsList = std.ArrayListUnmanaged(Point); /// Name of the route, human-friendly. name: ?[:0]const u8 = null, /// Ordered list of points in the route. Each point references a waypoint /// in the flight plan by identifier and may carry extra metadata such as /// the airway it is via and a desired altitude. points: PointsList = .{}, /// Point is a point in a route. pub const Point = struct { /// Identifier of this route point, MUST correspond to a matching /// waypoint in the flight plan or most encoding will fail. identifier: [:0]const u8, /// The route that this point is via, such as an airway. This is used /// by certain formats and ignored by most. via: ?Via = null, /// Altitude in feet (MSL, AGL, whatever you'd like for your flight /// plan and format). This is used by some formats to note the desired /// altitude at a given point. This can be zero to note cruising altitude /// or field elevation. altitude: u16 = 0, pub const Via = union(enum) { airport_departure: void, airport_destination: void, direct: void, airway: [:0]const u8, }; pub fn deinit(self: *Point, alloc: Allocator) void { alloc.free(self.identifier); self.* = undefined; } }; pub fn deinit(self: *Self, alloc: Allocator) void { if (self.name) |v| alloc.free(v); while (self.points.popOrNull()) |*v| v.deinit(alloc); self.points.deinit(alloc); self.* = undefined; }
0
repos/libflightplan
repos/libflightplan/src/main.zig
const std = @import("std"); pub const FlightPlan = @import("FlightPlan.zig"); pub const Waypoint = @import("Waypoint.zig"); pub const Route = @import("Route.zig"); pub const Departure = @import("Departure.zig"); pub const Destination = @import("Destination.zig"); pub const Runway = @import("Runway.zig"); pub const Error = @import("Error.zig"); pub const Format = struct { pub const Garmin = @import("format/garmin.zig"); pub const XPlaneFMS11 = @import("format/xplane_fms_11.zig"); }; /// deinit should be called when the process is done with this library /// to perform process-level cleanup. This frees memory associated with /// some global error values. pub fn deinit() void { Error.setLastError(null); } test { _ = Error; _ = Departure; _ = Destination; _ = FlightPlan; _ = Route; _ = Runway; _ = Format.Garmin; _ = Format.XPlaneFMS11; }
0
repos/libflightplan
repos/libflightplan/src/test.zig
const std = @import("std"); /// Get the path to a test fixture file. This should only be used in tests /// since it depends on a predictable source path. This returns a null-terminated /// string slice so that it can be used directly with C APIs (libxml), but /// the cost is then it must be freed. pub fn testFile(comptime path: []const u8) ![:0]const u8 { comptime { const sepSlice = &[_]u8{std.fs.path.sep}; // Build our path which has our relative test directory. var path2 = "/../test/" ++ path; // The path is expected to always use / so we replace it if its // a different value. If sep is / we technically don't have to do this // but we always do just so we can ensure this code path works var buf: [path2.len]u8 = undefined; _ = std.mem.replace( u8, path2, &[_]u8{'/'}, sepSlice, buf[0..], ); const finalPath = buf[0..]; // Get the directory of this source file. const srcDir = std.fs.path.dirname(@src().file) orelse unreachable; // Add our path const final: []const u8 = srcDir ++ finalPath ++ &[_]u8{0}; return final[0 .. final.len - 1 :0]; } }
0
repos/libflightplan
repos/libflightplan/src/xml.zig
pub const c = @cImport({ @cDefine("LIBXML_WRITER_ENABLED", {}); @cInclude("libxml/xmlreader.h"); @cInclude("libxml/xmlwriter.h"); }); // free calls xmlFree pub fn free(ptr: ?*anyopaque) void { if (ptr) |v| { c.xmlFree.?(v); } } /// Find a node with the given element name and return it. This searches /// the given node and its siblings. pub fn findNode(node: ?*c.xmlNode, name: []const u8) ?*c.xmlNode { var cur = node; while (cur) |n| : (cur = n.next) { if (n.type != c.XML_ELEMENT_NODE) { continue; } if (c.xmlStrcmp(n.name, name.ptr) == 0) { return n; } } return null; }
0
repos/libflightplan
repos/libflightplan/src/Destination.zig
/// Destination represents information about the destination portion of a flight /// plan, such as the destination airport, arrival, approach, etc. const Self = @This(); const std = @import("std"); const Allocator = std.mem.Allocator; const Runway = @import("Runway.zig"); /// Destination waypoint ID. This waypoint must be present in the waypoints map /// on a flight plan for more information such as lat/lon. This doesn't have to /// be an airport, this can be a VOR or another NAVAID. identifier: [:0]const u8, /// Destination runway. While this can be set for any identifier, note that /// a runway is non-sensical for a non-airport identifier. runway: ?Runway = null, // Name of the STAR used for arrival (if any). star: ?[:0]const u8 = null, // Name of the STAR transition (if any). star_transition: ?[:0]const u8 = null, // Name of the approach used for arrival (if any). The recommended format // is the ARINC 424-18 format, such as LOCD, I26L, etc. approach: ?[:0]const u8 = null, // Name of the approach transition (if any). approach_transition: ?[:0]const u8 = null, pub fn deinit(self: *Self, alloc: Allocator) void { alloc.free(self.identifier); if (self.star) |v| alloc.free(v); if (self.star_transition) |v| alloc.free(v); if (self.approach) |v| alloc.free(v); if (self.approach_transition) |v| alloc.free(v); self.* = undefined; }
0
repos/libflightplan
repos/libflightplan/src/Waypoint.zig
/// Waypoint structure is a single potential waypoint in a route. This /// contains all the metadata about the waypoint. const Self = @This(); const std = @import("std"); const Allocator = std.mem.Allocator; const mem = std.mem; /// Name of the waypoint. This is a key that is used by the route to lookup /// the waypoint. identifier: [:0]const u8, /// Type of the waypoint, such as VOR, NDB, etc. type: Type, /// Latitude and longitude of this waypoint. lat: f32 = 0, lon: f32 = 0, pub const Type = enum { user_waypoint, airport, ndb, vor, int, int_vrp, pub fn fromString(v: []const u8) Type { if (mem.eql(u8, v, "AIRPORT")) { return .airport; } else if (mem.eql(u8, v, "NDB")) { return .ndb; } else if (mem.eql(u8, v, "USER WAYPOINT")) { return .user_waypoint; } else if (mem.eql(u8, v, "VOR")) { return .vor; } else if (mem.eql(u8, v, "INT")) { return .int; } else if (mem.eql(u8, v, "INT-VRP")) { return .int_vrp; } @panic("invalid waypoint type"); } pub fn toString(self: Type) [:0]const u8 { return switch (self) { .user_waypoint => "USER WAYPOINT", .airport => "AIRPORT", .ndb => "NDB", .vor => "VOR", .int => "INT", .int_vrp => "INT-VRP", }; } }; pub fn deinit(self: *Self, alloc: Allocator) void { alloc.free(self.identifier); self.* = undefined; }
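Since `fromString` panics on unknown input, the two functions above form a closed round trip over the enum. A small sanity-check sketch:

```zig
const std = @import("std");
const Waypoint = @import("Waypoint.zig");

test "waypoint type round-trips through its string form" {
    try std.testing.expectEqual(Waypoint.Type.vor, Waypoint.Type.fromString("VOR"));
    try std.testing.expectEqualStrings("INT-VRP", Waypoint.Type.int_vrp.toString());

    // Every tag survives a toString/fromString round trip.
    inline for (std.meta.fields(Waypoint.Type)) |field| {
        const t = @field(Waypoint.Type, field.name);
        try std.testing.expectEqual(t, Waypoint.Type.fromString(t.toString()));
    }
}
```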
0
repos/libflightplan
repos/libflightplan/src/format.zig
const std = @import("std"); const fs = std.fs; const mem = std.mem; const testing = std.testing; const Allocator = std.mem.Allocator; const FlightPlan = @import("FlightPlan.zig"); /// Format returns a typed format for the given underlying implementation. /// /// Users do NOT need to use this type; most formats have direct reader/writer /// functions you can use directly. This generic type is here just to be useful /// as a way to guide format implementors to a common format and to add higher /// level operations in the future. /// /// Implementations must support the following fields: /// /// * Binding: type - C bindings to expose /// * Reader: type - Reader implementation for reading flight plans. /// * Writer: type - Writer implementatino for encoding flight plans. /// /// TODO: more docs pub fn Format( comptime Impl: type, ) type { return struct { /// Initialize a flight plan from a file path. pub fn initFromFile(alloc: Allocator, path: [:0]const u8) !FlightPlan { return Impl.Reader.initFromFile(alloc, path); } /// Write the flightplan to the given writer. writer is expected /// to implement std.io.writer. pub fn writeTo(writer: anytype, fpl: *const FlightPlan) !void { return Impl.Writer.writeTo(writer, fpl); } /// Write the flightplan to the given filepath. pub fn writeToFile(path: [:0]const u8, fpl: *const FlightPlan) !void { // Create our file const flags = fs.File.CreateFlags{ .truncate = true }; const file = if (fs.path.isAbsolute(path)) try fs.createFileAbsolute(path, flags) else try fs.cwd().createFile(path, flags); defer file.close(); // Write as a writer try writeTo(file.writer(), fpl); } }; }
0
repos/libflightplan
repos/libflightplan/src/time.zig
// This file exports a singular source for time.h from libc. pub const c = @cImport({ @cInclude("time.h"); });
0
repos/libflightplan
repos/libflightplan/src/FlightPlan.zig
/// The primary abstract flight plan structure. This is the structure that /// various formats decode to and encode from. /// /// Note that not all features of this structure are supported by all formats. /// For example, the flight rules field (IFR or VFR) is not used at all by /// the Garmin or ForeFlight FPL formats, but is used by MSFS 2020 PLN. /// Formats just ignore information they don't use. const Self = @This(); const std = @import("std"); const hash_map = std.hash_map; const Allocator = std.mem.Allocator; const Waypoint = @import("Waypoint.zig"); const Departure = @import("Departure.zig"); const Destination = @import("Destination.zig"); const Route = @import("Route.zig"); /// Allocator associated with this FlightPlan. This allocator must be /// used for all the memory owned by this structure for deinit to work. alloc: Allocator, // The type of flight rules, assumes IFR. rules: Rules = .ifr, /// The AIRAC cycle used to create this flight plan, i.e. 2201. /// See: https://en.wikipedia.org/wiki/Aeronautical_Information_Publication /// This is expected to be heap-allocated and will be freed on deinit. airac: ?[:0]const u8 = null, /// The timestamp when this flight plan was created. This is expected to /// be heap-allocated and will be freed on deinit. /// TODO: some well known format created: ?[:0]const u8 = null, /// Departure information departure: ?Departure = null, /// Destination information destination: ?Destination = null, /// Waypoints that are part of the route. These are unordered, they are /// just the full list of possible waypoints that the route may contain. waypoints: hash_map.StringHashMapUnmanaged(Waypoint) = .{}, /// The flight plan route. This route may only contain waypoints in the /// waypoints map. route: Route = .{}, /// Flight rules types pub const Rules = enum { vfr, ifr, }; /// Clean up resources associated with the flight plan. This should /// always be called for any created flight plan when it is no longer in use. pub fn deinit(self: *Self) void { if (self.airac) |v| self.alloc.free(v); if (self.created) |v| self.alloc.free(v); if (self.departure) |*dep| dep.deinit(self.alloc); if (self.destination) |*des| des.deinit(self.alloc); self.route.deinit(self.alloc); var it = self.waypoints.iterator(); while (it.next()) |kv| { kv.value_ptr.deinit(self.alloc); } self.waypoints.deinit(self.alloc); self.* = undefined; } test { _ = Waypoint; _ = @import("binding.zig"); }
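Because `deinit` frees every string reachable from the structure, anything assembled by hand has to be allocated with `alloc`. A sketch of building a minimal plan in memory (the identifiers are hypothetical):

```zig
const std = @import("std");
const FlightPlan = @import("FlightPlan.zig");

fn buildExample(alloc: std.mem.Allocator) !FlightPlan {
    var fpl = FlightPlan{ .alloc = alloc };
    errdefer fpl.deinit();

    // The waypoint key and its identifier share one allocation, matching
    // how deinit() frees them (the identifier only).
    const id = try alloc.dupeZ(u8, "KHND");
    try fpl.waypoints.put(alloc, id, .{ .identifier = id, .type = .airport });

    // Route points reference waypoints by identifier and own their copy.
    try fpl.route.points.append(alloc, .{
        .identifier = try alloc.dupeZ(u8, "KHND"),
        .via = .{ .airport_departure = {} },
    });
    return fpl;
}
```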
0
repos/libflightplan
repos/libflightplan/src/binding.zig
// This file contains the C bindings that are exported when building // the system libraries. // // WHERE IS THE DOCUMENTATION? Note that all the documentation for the C // interface is in the header file flightplan.h. The implementation for // these various functions may have some comments, but those are aimed at // maintainers. const std = @import("std"); const Allocator = std.mem.Allocator; const c_allocator = std.heap.c_allocator; const lib = @import("main.zig"); const Error = lib.Error; const FlightPlan = lib.FlightPlan; const Waypoint = lib.Waypoint; const Route = lib.Route; const testutil = @import("test.zig"); const c = @cImport({ @cInclude("flightplan.h"); }); //------------------------------------------------------------------- // Formats pub usingnamespace @import("format/garmin.zig").Binding; pub usingnamespace @import("format/xplane_fms_11.zig").Binding; //------------------------------------------------------------------- // General functions export fn fpl_cleanup() void { lib.deinit(); } export fn fpl_new() ?*FlightPlan { return cflightplan(.{ .alloc = c_allocator }); } export fn fpl_set_created(raw: ?*FlightPlan, str: [*:0]const u8) u8 { const fpl = raw orelse return 1; const copy = std.mem.span(str); fpl.created = Allocator.dupeZ(c_allocator, u8, copy) catch return 1; return 0; } export fn fpl_created(raw: ?*FlightPlan) ?[*:0]const u8 { if (raw) |fpl| { if (fpl.created) |v| { return v.ptr; } } return null; } export fn fpl_free(raw: ?*FlightPlan) void { if (raw) |v| { v.deinit(); c_allocator.destroy(v); } } pub fn cflightplan(fpl: FlightPlan) ?*FlightPlan { const result = c_allocator.create(FlightPlan) catch return null; result.* = fpl; return result; } //------------------------------------------------------------------- // Errors export fn fpl_last_error() ?*Error { return Error.lastError(); } export fn fpl_error_message(raw: ?*Error) ?[*:0]const u8 { const err = raw orelse return null; return err.message().ptr; } //------------------------------------------------------------------- // Waypoints const WPIterator = std.meta.fieldInfo(FlightPlan, .waypoints).field_type.ValueIterator; export fn fpl_waypoints_count(raw: ?*FlightPlan) c_int { if (raw) |fpl| { return @intCast(c_int, fpl.waypoints.count()); } return 0; } export fn fpl_waypoints_iter(raw: ?*FlightPlan) ?*WPIterator { const fpl = raw orelse return null; const iter = fpl.waypoints.valueIterator(); const result = c_allocator.create(@TypeOf(iter)) catch return null; result.* = iter; return result; } export fn fpl_waypoint_iter_free(raw: ?*WPIterator) void { if (raw) |iter| { c_allocator.destroy(iter); } } export fn fpl_waypoints_next(raw: ?*WPIterator) ?*Waypoint { const iter = raw orelse return null; return iter.next(); } export fn fpl_waypoint_identifier(raw: ?*Waypoint) ?[*:0]const u8 { const wp = raw orelse return null; return wp.identifier.ptr; } export fn fpl_waypoint_lat(raw: ?*Waypoint) f32 { const wp = raw orelse return -1; return wp.lat; } export fn fpl_waypoint_lon(raw: ?*Waypoint) f32 { const wp = raw orelse return -1; return wp.lon; } export fn fpl_waypoint_type(raw: ?*Waypoint) c.flightplan_waypoint_type { const wp = raw orelse return c.FLIGHTPLAN_INVALID; return @enumToInt(wp.type) + 1; // must add 1 due to _INVALID } export fn fpl_waypoint_type_str(raw: c.flightplan_waypoint_type) [*:0]const u8 { // subtraction here due to _INVALID return @intToEnum(Waypoint.Type, raw - 1).toString().ptr; } //------------------------------------------------------------------- // Route export fn fpl_route_name(raw:
?*FlightPlan) ?[*:0]const u8 { const fpl = raw orelse return null; if (fpl.route.name) |v| { return v.ptr; } return null; } export fn fpl_route_points_count(raw: ?*FlightPlan) c_int { const fpl = raw orelse return 0; return @intCast(c_int, fpl.route.points.items.len); } export fn fpl_route_points_get(raw: ?*FlightPlan, idx: c_int) ?*Route.Point { const fpl = raw orelse return null; return &fpl.route.points.items[@intCast(usize, idx)]; } export fn fpl_route_point_identifier(raw: ?*Route.Point) ?[*:0]const u8 { const ptr = raw orelse return null; return ptr.identifier; }
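The `+1`/`-1` in the last two waypoint functions exist because the C enum reserves value 0 for `FLIGHTPLAN_INVALID`, while the Zig `Waypoint.Type` enum starts at 0. A minimal sketch of that invariant expressed as a test (the test itself is illustrative, not part of the library):

```zig
test "waypoint type survives the C enum offset" {
    const t: Waypoint.Type = .airport;
    const c_value = @enumToInt(t) + 1; // as in fpl_waypoint_type
    const back = @intToEnum(Waypoint.Type, c_value - 1); // as in fpl_waypoint_type_str
    try std.testing.expectEqual(t, back);
}
```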
0
repos/libflightplan
repos/libflightplan/src/Error.zig
const Self = @This();

const std = @import("std");
const Allocator = std.mem.Allocator;
const c = @import("xml.zig").c;

/// Possible errors that can be returned by many of the functions
/// in this library. See the function doc comments for details on
/// exactly which of these can be returned.
pub const Set = error{
    OutOfMemory,
    Unimplemented,
    ReadFailed,
    WriteFailed,
    NodeExpected,
    InvalidElement,
    RequiredValueMissing,
    RouteMissingWaypoint,
};

/// last error that occurred, this MIGHT be set if an error code is returned.
/// This is thread local so using this library in a threaded environment will
/// store errors separately.
threadlocal var _lastError: ?Self = null;

/// Error code for this error
code: Set,

/// Additional details for this error. Whether this is set depends on what
/// triggered the error. The type of this is dependent on the context in which
/// the error was triggered.
detail: ?Detail = null,

/// Extra details for an error. What is set is dependent on what raised the error.
pub const Detail = union(enum) {
    /// message is a basic string message.
    string: String,

    /// xml-specific error message (typically a parse error)
    xml: XMLDetail,

    /// Gets a human-friendly message regardless of type.
    pub fn message(self: *Detail) [:0]const u8 {
        switch (self.*) {
            .string => |*v| return v.message,
            .xml => |*v| return v.message(),
        }
    }

    pub fn deinit(self: Detail) void {
        switch (self) {
            .string => |v| v.deinit(),
            .xml => |v| v.deinit(),
        }
    }

    /// XMLDetail when an XML-related error occurs for formats that use XML.
    pub const XMLDetail = struct {
        pub const Context = union(enum) {
            global: void,
            parser: c.xmlParserCtxtPtr,
            writer: c.xmlTextWriterPtr,
        };

        ctx: Context,

        /// Return the raw xmlError structure.
        pub fn err(self: *XMLDetail) ?*c.xmlError {
            return switch (self.ctx) {
                .global => c.xmlGetLastError(),
                .parser => |ptr| c.xmlCtxtGetLastError(ptr),
                .writer => |ptr| c.xmlCtxtGetLastError(ptr),
            };
        }

        pub fn message(self: *XMLDetail) [:0]const u8 {
            const v = self.err() orelse return "no error";
            return std.mem.span(v.message);
        }

        pub fn deinit(self: XMLDetail) void {
            switch (self.ctx) {
                .global => {},
                .parser => |ptr| c.xmlFreeParserCtxt(ptr),
                .writer => |ptr| c.xmlFreeTextWriter(ptr),
            }
        }
    };

    pub const String = struct {
        alloc: Allocator,
        message: [:0]const u8,

        pub fn init(alloc: Allocator, comptime fmt: []const u8, args: anytype) !String {
            const msg = try std.fmt.allocPrintZ(alloc, fmt, args);
            return String{ .alloc = alloc, .message = msg };
        }

        pub fn deinit(self: String) void {
            self.alloc.free(self.message);
        }
    };
};

/// Helper to easily initialize an error with a message.
pub fn initMessage(alloc: Allocator, code: Set, comptime fmt: []const u8, args: anytype) !Self {
    const detail = Detail{
        .string = try Detail.String.init(alloc, fmt, args),
    };

    return Self{
        .code = code,
        .detail = detail,
    };
}

/// Returns a human-friendly message about the error.
pub fn message(self: *Self) [:0]const u8 {
    if (self.detail) |*detail| {
        return detail.message();
    }

    return "no error message";
}

/// Release resources associated with an error.
pub fn deinit(self: Self) void {
    if (self.detail) |detail| {
        detail.deinit();
    }
}

/// Return the last error (if any).
pub inline fn lastError() ?*Self {
    if (_lastError) |*err| {
        return err;
    }

    return null;
}

// Set a new last error.
pub fn setLastError(err: ?Self) void {
    // Unset previous error if there is one.
    if (_lastError) |last| {
        last.deinit();
    }

    _lastError = err;
}

/// Set a new last error that was an XML error.
pub fn setLastErrorXML(code: Set, ctx: Detail.XMLDetail.Context) Set {
    // Can't nest it all due to: https://github.com/ziglang/zig/issues/6043
    const detail = Detail{ .xml = .{ .ctx = ctx } };
    setLastError(Self{
        .code = code,
        .detail = detail,
    });
    return code;
}

test "set last error" {
    // Setting it while null does nothing
    setLastError(null);
    setLastError(null);

    // Can set and retrieve
    setLastError(Self{ .code = Set.ReadFailed });
    const err = lastError().?;
    try std.testing.expectEqual(err.code, Set.ReadFailed);

    // Can set to null
    setLastError(null);
    try std.testing.expect(lastError() == null);
}
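Since `setLastError` frees whatever detail was previously stored, a call site records an error once and lets a later `setLastError(null)` release it. A minimal usage sketch as a test, assuming the testing allocator is acceptable (the message text is illustrative):

```zig
test "record and clear a last error (illustrative)" {
    const err = try initMessage(
        std.testing.allocator,
        Set.InvalidElement,
        "bad element: {s}",
        .{"route"},
    );
    setLastError(err);
    if (lastError()) |e| {
        try std.testing.expectEqualStrings("bad element: route", e.message());
    }
    setLastError(null); // releases the stored String detail
}
```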
0
repos/libflightplan/src
repos/libflightplan/src/include/bridge.h
#include <libxml/xmlreader.h>

// Zig can't "call" the macro properly so this is a bridge function we can
// call in order to initialize libxml.
void _zig_LIBXML_TEST_VERSION() {
    LIBXML_TEST_VERSION
}
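On the Zig side this shim can be imported and invoked before any libxml parsing starts. A minimal sketch, assuming the header is on the include path seen by `@cImport`:

```zig
const c = @cImport({
    @cInclude("bridge.h");
});

pub fn initLibxml() void {
    // LIBXML_TEST_VERSION expands to statements, which translate-c cannot
    // call as a function, hence the C shim.
    c._zig_LIBXML_TEST_VERSION();
}
```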
0
repos/libflightplan/src
repos/libflightplan/src/format/garmin.zig
/// This file contains the reading/writing logic for the Garmin FPL
/// format. This format is also used by ForeFlight with slight modifications.
/// The reader/writer handle both formats.
///
/// The FPL format does not support departure/arrival procedures. The
/// data it uses is:
///
///   * Waypoints
///   * Route: only the identifier for each point
///
/// Reference: https://www8.garmin.com/xmlschemas/FlightPlanv1.xsd
const Garmin = @This();

const std = @import("std");
const mem = std.mem;
const testing = std.testing;
const Allocator = std.mem.Allocator;

const FlightPlan = @import("../FlightPlan.zig");
const Waypoint = @import("../Waypoint.zig");
const format = @import("../format.zig");
const testutil = @import("../test.zig");
const xml = @import("../xml.zig");
const c = xml.c;
const Error = @import("../Error.zig");
const ErrorSet = Error.Set;
const Route = @import("../Route.zig");

test {
    _ = Binding;
    _ = Reader;
    _ = Writer;
}

/// The Format type that can be used with the generic functions on FlightPlan.
/// You can also call the direct functions in this file.
pub const Format = format.Format(Garmin);

/// Initialize a flightplan from a file.
pub fn initFromFile(alloc: Allocator, path: [:0]const u8) !FlightPlan {
    return Reader.initFromFile(alloc, path);
}

/// Encode a flightplan to this format to the given writer. writer should
/// be a std.io.Writer-like implementation.
pub fn writeTo(writer: anytype, fpl: *const FlightPlan) !void {
    return Writer.writeTo(writer, fpl);
}

/// Binding are the C bindings for this format.
pub const Binding = struct {
    const binding = @import("../binding.zig");
    const c_allocator = std.heap.c_allocator;

    export fn fpl_garmin_parse_file(path: [*:0]const u8) ?*FlightPlan {
        var fpl = Reader.initFromFile(c_allocator, mem.sliceTo(path, 0)) catch return null;
        return binding.cflightplan(fpl);
    }

    export fn fpl_garmin_write_to_file(raw: ?*FlightPlan, path: [*:0]const u8) c_int {
        const fpl = raw orelse return -1;
        Format.writeToFile(mem.sliceTo(path, 0), fpl) catch return -1;
        return 0;
    }
};

/// Reader implementation (see format.zig)
pub const Reader = struct {
    pub fn initFromFile(alloc: Allocator, path: [:0]const u8) !FlightPlan {
        // Create a parser context. We use the context form rather than the global
        // xmlReadFile form so that we can be a little more thread safe.
        const ctx = c.xmlNewParserCtxt();
        if (ctx == null) {
            Error.setLastError(null);
            return ErrorSet.ReadFailed;
        }
        // NOTE: we do not defer freeing the context cause we want to preserve
        // the context if there are any errors.

        // Read the file
        const doc = c.xmlCtxtReadFile(ctx, path.ptr, null, 0);
        if (doc == null) {
            return Error.setLastErrorXML(ErrorSet.ReadFailed, .{ .parser = ctx });
        }
        defer c.xmlFreeParserCtxt(ctx);
        defer c.xmlFreeDoc(doc);

        // Get the root elem
        const root = c.xmlDocGetRootElement(doc);
        return initFromXMLNode(alloc, root);
    }

    pub fn initFromReader(alloc: Allocator, reader: anytype) !FlightPlan {
        // Read the full contents.
        var buf = try reader.readAllAlloc(
            alloc,
            1024 * 1024 * 50, // 50 MB for now
        );
        defer alloc.free(buf);

        const ctx = c.xmlNewParserCtxt();
        if (ctx == null) {
            Error.setLastError(null);
            return ErrorSet.ReadFailed;
        }
        // NOTE: we do not defer freeing the context cause we want to preserve
        // the context if there are any errors.

        // Read the document from memory
        const doc = c.xmlCtxtReadMemory(
            ctx,
            buf.ptr,
            @intCast(c_int, buf.len),
            null,
            null,
            0,
        );
        if (doc == null) {
            return Error.setLastErrorXML(ErrorSet.ReadFailed, .{ .parser = ctx });
        }
        defer c.xmlFreeParserCtxt(ctx);
        defer c.xmlFreeDoc(doc);

        // Get the root elem
        const root = c.xmlDocGetRootElement(doc);
        return initFromXMLNode(alloc, root);
    }

    fn initFromXMLNode(alloc: Allocator, node: *c.xmlNode) !FlightPlan {
        // Should be an opening node
        if (node.type != c.XML_ELEMENT_NODE) {
            return ErrorSet.NodeExpected;
        }

        // Should be a "flight-plan" node.
        if (c.xmlStrcmp(node.name, "flight-plan") != 0) {
            Error.setLastError(try Error.initMessage(
                alloc,
                ErrorSet.InvalidElement,
                "flight-plan element not found",
                .{},
            ));
            return ErrorSet.InvalidElement;
        }

        const WPType = comptime std.meta.fieldInfo(FlightPlan, .waypoints).field_type;
        var self = FlightPlan{
            .alloc = alloc,
            .created = undefined,
            .waypoints = WPType{},
            .route = undefined,
        };

        try parseFlightPlan(&self, node);
        return self;
    }

    fn parseFlightPlan(self: *FlightPlan, node: *c.xmlNode) !void {
        var cur: ?*c.xmlNode = node.children;
        while (cur) |n| : (cur = n.next) {
            if (n.type != c.XML_ELEMENT_NODE) {
                continue;
            }

            if (c.xmlStrcmp(n.name, "created") == 0) {
                const copy = c.xmlNodeListGetString(node.doc, n.children, 1);
                defer xml.free(copy);
                self.created = try Allocator.dupeZ(self.alloc, u8, mem.sliceTo(copy, 0));
            } else if (c.xmlStrcmp(n.name, "waypoint-table") == 0) {
                try parseWaypointTable(self, n);
            } else if (c.xmlStrcmp(n.name, "route") == 0) {
                self.route = try parseRoute(self.alloc, n);
            }
        }
    }

    fn parseWaypointTable(self: *FlightPlan, node: *c.xmlNode) !void {
        var cur: ?*c.xmlNode = node.children;
        while (cur) |n| : (cur = n.next) {
            if (n.type != c.XML_ELEMENT_NODE) {
                continue;
            }

            if (c.xmlStrcmp(n.name, "waypoint") == 0) {
                const wp = try parseWaypoint(self.alloc, n);
                try self.waypoints.put(self.alloc, wp.identifier, wp);
            }
        }
    }

    fn parseRoute(alloc: Allocator, node: *c.xmlNode) !Route {
        var self = Route{
            .name = undefined,
            .points = .{},
        };

        var cur: ?*c.xmlNode = node.children;
        while (cur) |n| : (cur = n.next) {
            if (n.type != c.XML_ELEMENT_NODE) {
                continue;
            }

            if (c.xmlStrcmp(n.name, "route-name") == 0) {
                const copy = c.xmlNodeListGetString(node.doc, n.children, 1);
                defer xml.free(copy);
                self.name = try Allocator.dupeZ(alloc, u8, mem.sliceTo(copy, 0));
            } else if (c.xmlStrcmp(n.name, "route-point") == 0) {
                try parseRoutePoint(&self, alloc, n);
            }
        }

        return self;
    }

    fn parseRoutePoint(self: *Route, alloc: Allocator, node: *c.xmlNode) !void {
        var cur: ?*c.xmlNode = node.children;
        while (cur) |n| : (cur = n.next) {
            if (n.type != c.XML_ELEMENT_NODE) {
                continue;
            }

            if (c.xmlStrcmp(n.name, "waypoint-identifier") == 0) {
                const copy = c.xmlNodeListGetString(node.doc, n.children, 1);
                defer xml.free(copy);
                const zcopy = try Allocator.dupeZ(alloc, u8, mem.sliceTo(copy, 0));
                try self.points.append(alloc, Route.Point{
                    .identifier = zcopy,
                });
            }
        }
    }

    fn parseWaypoint(alloc: Allocator, node: *c.xmlNode) !Waypoint {
        var self = Waypoint{
            .identifier = undefined,
            .type = undefined,
        };

        var cur: ?*c.xmlNode = node.children;
        while (cur) |n| : (cur = n.next) {
            if (n.type != c.XML_ELEMENT_NODE) {
                continue;
            }

            if (c.xmlStrcmp(n.name, "identifier") == 0) {
                const copy = c.xmlNodeListGetString(node.doc, n.children, 1);
                defer xml.free(copy);
                self.identifier = try Allocator.dupeZ(alloc, u8, mem.sliceTo(copy, 0));
            } else if (c.xmlStrcmp(n.name, "lat") == 0) {
                const copy = c.xmlNodeListGetString(node.doc, n.children, 1);
                defer xml.free(copy);
                self.lat = try std.fmt.parseFloat(f32, mem.sliceTo(copy, 0));
            } else if (c.xmlStrcmp(n.name, "lon") == 0) {
                const copy = c.xmlNodeListGetString(node.doc, n.children, 1);
                defer xml.free(copy);
                self.lon = try std.fmt.parseFloat(f32, mem.sliceTo(copy, 0));
            } else if (c.xmlStrcmp(n.name, "type") == 0) {
                const copy = c.xmlNodeListGetString(node.doc, n.children, 1);
                defer xml.free(copy);
                self.type = Waypoint.Type.fromString(mem.sliceTo(copy, 0));
            }
        }

        return self;
    }

    test "basic reading" {
        const testPath = try testutil.testFile("basic.fpl");
        var plan = try Format.initFromFile(testing.allocator, testPath);
        defer plan.deinit();

        try testing.expectEqualStrings(plan.created.?, "20211230T22:07:20Z");
        try testing.expectEqual(plan.waypoints.count(), 20);

        // Test route
        try testing.expectEqualStrings(plan.route.name.?, "KHHR TO KHTH");
        try testing.expectEqual(plan.route.points.items.len, 20);

        // Test a waypoint
        {
            const wp = plan.waypoints.get("KHHR").?;
            try testing.expectEqualStrings(wp.identifier, "KHHR");
            try testing.expect(wp.lat > 33.91 and wp.lat < 33.93);
            try testing.expect(wp.lon > -118.336 and wp.lon < -118.334);
            try testing.expectEqual(wp.type, .airport);
            try testing.expectEqualStrings(wp.type.toString(), "AIRPORT");
        }
    }

    test "parse error" {
        const testPath = try testutil.testFile("error_syntax.fpl");
        try testing.expectError(ErrorSet.ReadFailed, Format.initFromFile(testing.allocator, testPath));

        var lastErr = Error.lastError().?;
        defer Error.setLastError(null);
        try testing.expectEqual(lastErr.code, ErrorSet.ReadFailed);

        const xmlErr = lastErr.detail.?.xml.err();
        const message = mem.span(xmlErr.?.message);
        try testing.expect(message.len > 0);
    }

    test "error: no flight-plan" {
        const testPath = try testutil.testFile("error_no_flightplan.fpl");
        try testing.expectError(ErrorSet.InvalidElement, Format.initFromFile(testing.allocator, testPath));

        var lastErr = Error.lastError().?;
        defer Error.setLastError(null);
        try testing.expectEqual(lastErr.code, ErrorSet.InvalidElement);
    }
};

/// Writer implementation (see format.zig)
pub const Writer = struct {
    pub fn writeTo(writer: anytype, fpl: *const FlightPlan) !void {
        // Initialize an in-memory buffer. We have to do all writes to a buffer
        // first. We know that our flight plans can't be _that_ big (for a
        // reasonable user) so this is fine.
        var buf = c.xmlBufferCreate();
        if (buf == null) {
            return Error.setLastErrorXML(ErrorSet.OutOfMemory, .{ .global = {} });
        }
        defer c.xmlBufferFree(buf);

        var xmlwriter = c.xmlNewTextWriterMemory(buf, 0);
        if (xmlwriter == null) {
            return Error.setLastErrorXML(ErrorSet.OutOfMemory, .{ .global = {} });
        }

        // Make the output human-friendly
        var rc = c.xmlTextWriterSetIndent(xmlwriter, 1);
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }
        rc = c.xmlTextWriterSetIndentString(xmlwriter, "\t");
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }

        rc = c.xmlTextWriterStartDocument(xmlwriter, "1.0", "utf-8", null);
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }

        // Start <flight-plan>
        const ns = "http://www8.garmin.com/xmlschemas/FlightPlan/v1";
        rc = c.xmlTextWriterStartElementNS(xmlwriter, null, "flight-plan", ns);
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }

        // <created>
        if (fpl.created) |created| {
            rc = c.xmlTextWriterWriteElement(xmlwriter, "created", created);
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }
        }

        // Encode our waypoints
        try writeWaypoints(xmlwriter, fpl);

        // Encode our route
        try writeRoute(xmlwriter, fpl);

        // End <flight-plan>
        rc = c.xmlTextWriterEndElement(xmlwriter);
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }

        // End doc
        rc = c.xmlTextWriterEndDocument(xmlwriter);
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }

        // Free our text writer. We defer this now because errors below no longer
        // need this reference.
        defer c.xmlFreeTextWriter(xmlwriter);

        // Success, lets copy our buffer to the writer.
        try writer.writeAll(mem.span(buf.*.content));
    }

    fn writeWaypoints(xmlwriter: c.xmlTextWriterPtr, fpl: *const FlightPlan) !void {
        // Do nothing if we have no waypoints
        if (fpl.waypoints.count() == 0) {
            return;
        }

        // Buffer for writing
        var buf: [128]u8 = undefined;

        // Start <waypoint-table>
        var rc = c.xmlTextWriterStartElement(xmlwriter, "waypoint-table");
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }

        // Iterate over each waypoint and write it
        var iter = fpl.waypoints.valueIterator();
        while (iter.next()) |wp| {
            // Start <waypoint>
            rc = c.xmlTextWriterStartElement(xmlwriter, "waypoint");
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            rc = c.xmlTextWriterWriteElement(xmlwriter, "identifier", wp.identifier);
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            rc = c.xmlTextWriterWriteElement(xmlwriter, "type", wp.type.toString());
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            rc = c.xmlTextWriterWriteElement(
                xmlwriter,
                "lat",
                try std.fmt.bufPrintZ(&buf, "{d}", .{wp.lat}),
            );
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            rc = c.xmlTextWriterWriteElement(
                xmlwriter,
                "lon",
                try std.fmt.bufPrintZ(&buf, "{d}", .{wp.lon}),
            );
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            // End <waypoint>
            rc = c.xmlTextWriterEndElement(xmlwriter);
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }
        }

        // End <waypoint-table>
        rc = c.xmlTextWriterEndElement(xmlwriter);
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }
    }

    fn writeRoute(xmlwriter: c.xmlTextWriterPtr, fpl: *const FlightPlan) !void {
        // Start <route>
        var rc = c.xmlTextWriterStartElement(xmlwriter, "route");
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }

        if (fpl.route.name) |name| {
            rc = c.xmlTextWriterWriteElement(xmlwriter, "route-name", name);
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }
        }

        for (fpl.route.points.items) |point| {
            // Find the waypoint for this point
            const wp = fpl.waypoints.get(point.identifier) orelse
                return ErrorSet.RouteMissingWaypoint;

            // Start <route-point>
            rc = c.xmlTextWriterStartElement(xmlwriter, "route-point");
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            rc = c.xmlTextWriterWriteElement(xmlwriter, "waypoint-identifier", point.identifier);
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            rc = c.xmlTextWriterWriteElement(xmlwriter, "waypoint-type", wp.type.toString());
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }

            // End <route-point>
            rc = c.xmlTextWriterEndElement(xmlwriter);
            if (rc < 0) {
                return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
            }
        }

        // End <route>
        rc = c.xmlTextWriterEndElement(xmlwriter);
        if (rc < 0) {
            return Error.setLastErrorXML(ErrorSet.WriteFailed, .{ .writer = xmlwriter });
        }
    }

    test "basic writing" {
        const testPath = try testutil.testFile("basic.fpl");
        var plan = try Format.initFromFile(testing.allocator, testPath);
        defer plan.deinit();

        // Write the plan and compare
        var output = std.ArrayList(u8).init(testing.allocator);
        defer output.deinit();

        // Write
        try Writer.writeTo(output.writer(), &plan);

        // Debug, write output to compare
        //std.debug.print("write:\n\n{s}\n", .{output.items});

        // re-read to verify it parses
        const reader = std.io.fixedBufferStream(output.items).reader();
        var plan2 = try Reader.initFromReader(testing.allocator, reader);
        defer plan2.deinit();
    }
};
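For callers outside the test suite, the typical entry points are the `Format` helpers above. A small round-trip sketch, assuming `format.Format(Garmin)` exposes `writeToFile` the same way the C `Binding` uses it (paths are illustrative):

```zig
const std = @import("std");
const Garmin = @import("format/garmin.zig");

// Hypothetical: parse a ForeFlight/Garmin FPL and write a normalized copy.
pub fn normalize(alloc: std.mem.Allocator, in: [:0]const u8, out: [:0]const u8) !void {
    var plan = try Garmin.Format.initFromFile(alloc, in);
    defer plan.deinit();
    try Garmin.Format.writeToFile(out, &plan);
}
```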
0
repos/libflightplan/src
repos/libflightplan/src/format/xplane_fms_11.zig
/// This file contains the format implementation for the X-Plane FMS v11 format
/// used by X-Plane 11.10 and later.
///
/// Reference: https://developer.x-plane.com/article/flightplan-files-v11-fms-file-format/
const FMS = @This();

const std = @import("std");
const mem = std.mem;
const testing = std.testing;
const Allocator = std.mem.Allocator;

const format = @import("../format.zig");
const testutil = @import("../test.zig");
const time = @import("../time.zig");
const FlightPlan = @import("../FlightPlan.zig");
const Route = @import("../Route.zig");
const Waypoint = @import("../Waypoint.zig");
const Error = @import("../Error.zig");
const ErrorSet = Error.Set;

test {
    _ = Binding;
    _ = Reader;
    _ = Writer;
}

/// The Format type that can be used with the generic functions on FlightPlan.
/// You can also call the direct functions in this file.
pub const Format = format.Format(FMS);

/// Binding are the C bindings for this format.
pub const Binding = struct {
    const binding = @import("../binding.zig");
    const c_allocator = std.heap.c_allocator;

    export fn fpl_xplane11_write_to_file(raw: ?*FlightPlan, path: [*:0]const u8) c_int {
        const fpl = raw orelse return -1;
        Format.writeToFile(mem.sliceTo(path, 0), fpl) catch return -1;
        return 0;
    }
};

/// Reader implementation (see format.zig)
/// TODO
pub const Reader = struct {
    pub fn initFromFile(alloc: Allocator, path: [:0]const u8) !FlightPlan {
        _ = alloc;
        _ = path;
        return ErrorSet.Unimplemented;
    }
};

/// Writer implementation (see format.zig)
pub const Writer = struct {
    pub fn writeTo(writer: anytype, fpl: *const FlightPlan) !void {
        // Buffer that might be used for string operations.
        // Ensure this is always big enough.
        var buf: [8]u8 = undefined;

        // Header
        try writer.writeAll("I\n");
        try writer.writeAll("1100 Version\n");

        // Determine our AIRAC cycle. We try to use the AIRAC cycle on the
        // flight plan. If that's not set, we just make one up based on the
        // current year. Waypoints don't change often and flightplan
        // validation will find this error, so if the user got here they are
        // okay with defaults.
        if (fpl.airac) |v| {
            try writer.print("CYCLE {s}\n", .{v});
        } else {
            const t = time.c.time(null);
            const tm = time.c.localtime(&t).*;
            const v = try std.fmt.bufPrintZ(&buf, "{d}01", .{
                // we want years since 2000
                tm.tm_year - 100,
            });
            try writer.print("CYCLE {s}\n", .{v});
        }

        // Departure
        if (fpl.departure) |dep| {
            // Departure airport. If we have departure info set then we use that.
            try writeDeparture(writer, fpl, dep.identifier);

            // Write additional departure info
            try writeDepartureProc(writer, fpl);
        } else if (fpl.route.points.items.len > 0) {
            // No departure info set, we just use the first route point.
            const point = &fpl.route.points.items[0];
            try writeDeparture(writer, fpl, point.identifier);
        } else {
            // No route
            return ErrorSet.RequiredValueMissing;
        }

        // Destination
        if (fpl.destination) |des| {
            // Destination airport. If we have destination info set then we use that.
            try writeDestination(writer, fpl, des.identifier);

            // Write additional destination info
            try writeDestinationProc(writer, fpl);
        } else if (fpl.route.points.items.len > 0) {
            // No destination info set, we just use the last route point.
            const point = &fpl.route.points.items[fpl.route.points.items.len - 1];
            try writeDestination(writer, fpl, point.identifier);
        } else {
            // No route
            return ErrorSet.RequiredValueMissing;
        }

        // Route
        try writeRoute(writer, fpl);
    }

    fn writeDeparture(writer: anytype, fpl: *const FlightPlan, id: []const u8) !void {
        // Get the waypoint associated with the departure ID so we can
        // determine the type.
        const wp = fpl.waypoints.get(id) orelse
            return ErrorSet.RouteMissingWaypoint;

        // Prefix we use depends if departure is an airport or not.
        const prefix = switch (wp.type) {
            .airport => "ADEP",
            else => "DEP",
        };

        try writer.print("{s} {s}\n", .{ prefix, wp.identifier });
    }

    fn writeDepartureProc(writer: anytype, fpl: *const FlightPlan) !void {
        var buf: [8]u8 = undefined;
        const dep = fpl.departure.?;

        if (dep.runway) |rwy|
            try writer.print("DEPRWY RW{s}\n", .{
                try rwy.toString(&buf),
            });

        if (dep.sid) |v| {
            try writer.print("SID {s}\n", .{v});

            if (dep.transition) |transition|
                try writer.print("SIDTRANS {s}\n", .{transition});
        }
    }

    fn writeDestination(writer: anytype, fpl: *const FlightPlan, id: []const u8) !void {
        // Get the waypoint associated with the ID so we can determine the type.
        const wp = fpl.waypoints.get(id) orelse
            return ErrorSet.RouteMissingWaypoint;

        // Prefix we use depends if destination is an airport or not.
        const prefix = switch (wp.type) {
            .airport => "ADES",
            else => "DES",
        };

        try writer.print("{s} {s}\n", .{ prefix, wp.identifier });
    }

    fn writeDestinationProc(writer: anytype, fpl: *const FlightPlan) !void {
        var buf: [8]u8 = undefined;
        const des = fpl.destination.?;

        if (des.runway) |rwy|
            try writer.print("DESRWY RW{s}\n", .{
                try rwy.toString(&buf),
            });

        if (des.star) |v| {
            try writer.print("STAR {s}\n", .{v});

            if (des.star_transition) |transition|
                try writer.print("STARTRANS {s}\n", .{transition});
        }

        if (des.approach) |v| {
            try writer.print("APP {s}\n", .{v});

            if (des.approach_transition) |transition|
                try writer.print("APPTRANS {s}\n", .{transition});
        }
    }

    fn writeRoute(writer: anytype, fpl: *const FlightPlan) !void {
        try writer.print("NUMENR {d}\n", .{fpl.route.points.items.len});

        for (fpl.route.points.items) |point, i| {
            const wp = fpl.waypoints.get(point.identifier) orelse
                return ErrorSet.RouteMissingWaypoint;

            const typeCode: u8 = switch (wp.type) {
                .airport => 1,
                .ndb => 2,
                .vor => 3,
                .int => 11,
                .int_vrp => 11,
                .user_waypoint => 28,
            };

            // Get our "via" value for X-Plane. If this isn't set, we try to
            // determine it based on what kind of route point this is.
            const via = point.via orelse blk: {
                if (i == 0 and wp.type == .airport) {
                    // First route point, airport => departure airport
                    break :blk Route.Point.Via{ .airport_departure = {} };
                } else if (i == fpl.route.points.items.len - 1 and wp.type == .airport) {
                    // Last route point, airport => destination airport
                    break :blk Route.Point.Via{ .airport_destination = {} };
                } else {
                    // Anything else, we go direct
                    break :blk Route.Point.Via{ .direct = {} };
                }
            };

            // Convert the Via tagged union to the string value X-Plane expects
            const viaString = switch (via) {
                .airport_departure => "ADEP",
                .airport_destination => "ADES",
                .direct => "DRCT",
                .airway => |v| v,
            };

            try writer.print("{d} {s} {s} {d} {d} {d}\n", .{
                typeCode,
                wp.identifier,
                viaString,
                point.altitude,
                wp.lat,
                wp.lon,
            });
        }
    }

    test "read Garmin FPL, write X-Plane" {
        const Garmin = @import("garmin.zig");
        const testPath = try testutil.testFile("basic.fpl");
        var plan = try Garmin.Format.initFromFile(testing.allocator, testPath);
        defer plan.deinit();

        // Write the plan and compare
        var output = std.ArrayList(u8).init(testing.allocator);
        defer output.deinit();

        // Write
        try Writer.writeTo(output.writer(), &plan);

        // Debug, write output to compare
        // std.debug.print("write:\n\n{s}\n", .{output.items});

        // TODO: re-read to verify it parses
    }
};
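Because `Reader` is unimplemented, the practical use of this format today is one-way conversion. A sketch that mirrors the test above but writes to a file instead of an in-memory buffer, assuming the same generic `writeToFile` helper the C `Binding` uses (paths are illustrative):

```zig
const std = @import("std");
const Garmin = @import("garmin.zig");

// Hypothetical: Garmin FPL in, X-Plane FMS v11 out.
pub fn convert(alloc: std.mem.Allocator, in: [:0]const u8, out: [:0]const u8) !void {
    var plan = try Garmin.Format.initFromFile(alloc, in);
    defer plan.deinit();
    try Format.writeToFile(out, &plan);
}
```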
0
repos/libflightplan
repos/libflightplan/examples/basic.c
#include <stddef.h>
#include <stdio.h>
#include <flightplan.h>

int main() {
    // Parse our flight plan from an FPL file out of ForeFlight.
    flightplan *fpl = fpl_garmin_parse_file("./test/basic.fpl");
    if (fpl == NULL) {
        return 1;
    }

    // Extract information from our flight plan easily
    printf("created at: %s\n\n", fpl_created(fpl));

    // Iterate through the available waypoints in the flightplan
    printf("waypoints: %d\n", fpl_waypoints_count(fpl));
    flightplan_waypoint_iter *iter = fpl_waypoints_iter(fpl);
    while (1) {
        flightplan_waypoint *wp = fpl_waypoints_next(iter);
        if (wp == NULL) {
            break;
        }

        printf(" %s\t(type: %s,\tlat/lon: %f/%f)\n",
            fpl_waypoint_identifier(wp),
            fpl_waypoint_type_str(fpl_waypoint_type(wp)),
            fpl_waypoint_lat(wp),
            fpl_waypoint_lon(wp)
        );
    }
    fpl_waypoint_iter_free(iter);

    // Iterate through the ordered route
    int max = fpl_route_points_count(fpl);
    printf("\nroute: \"%s\" (points: %d)\n", fpl_route_name(fpl), max);
    for (int i = 0; i < max; i++) {
        flightplan_route_point *point = fpl_route_points_get(fpl, i);
        printf(" %s\n", fpl_route_point_identifier(point));
    }

    fpl_free(fpl);
    fpl_cleanup();
    return 0;
}
0
repos/libflightplan
repos/libflightplan/doc/libflightplan.3.scd
libflightplan(3) "github.com/mitchellh/libflightplan" "Library Functions Manual"

# NAME

libflightplan - library used to read and write aviation flight plans

# DESCRIPTION

*libflightplan* is a library for reading and writing flight plans in
various formats. Flight plans are used in aviation to save properties of
one or more flights such as route (waypoints), altitude, departure and
destination airports, etc.

This library is available as a native C library as well as a Zig package.
The man pages currently focus on the C API.

# API NOTES

- fpl_cleanup(3) should be called when all users of the library are done.
  This cleans up any global state associated with the library.
- The library may allocate global state on the heap to store error
  information (accessible via fpl_last_error(3)).
- The library is not threadsafe. Global error state is stored in thread
  local variables.

# EXAMPLE

The example below shows how the C API can be used to parse a ForeFlight
flight plan and read route information about it.

```
#include <stddef.h>
#include <stdio.h>
#include <flightplan.h>

int main() {
    // Parse our flight plan from an FPL file out of ForeFlight.
    flightplan *fpl = fpl_garmin_parse_file("./test/basic.fpl");
    if (fpl == NULL) {
        // We can get a more detailed error.
        flightplan_error *err = fpl_last_error();
        printf("error: %s\n", fpl_error_message(err));
        fpl_cleanup();
        return 1;
    }

    // Iterate and output the full ordered route.
    int max = fpl_route_points_count(fpl);
    printf("\nroute: \"%s\" (points: %d)\n", fpl_route_name(fpl), max);
    for (int i = 0; i < max; i++) {
        flightplan_route_point *point = fpl_route_points_get(fpl, i);
        printf(" %s\n", fpl_route_point_identifier(point));
    }

    fpl_free(fpl);
    fpl_cleanup();
    return 0;
}
```

# AUTHORS

Mitchell Hashimoto ([email protected]) and any open source contributors.
See <https://github.com/mitchellh/libflightplan>.
0
repos
repos/zig-rocksdb/build.zig.zon
.{ .name = "zig-rocksdb", .version = "0.0.0", .minimum_zig_version = "0.12.0", .dependencies = .{}, .paths = .{ "build.zig", "build.zig.zon", "src", "LICENSE", "README.org", }, }
0
repos
repos/zig-rocksdb/build.zig
const std = @import("std"); pub fn build(b: *std.Build) void { const target = b.standardTargetOptions(.{}); const optimize = b.standardOptimizeOption(.{}); const module = b.createModule(.{ .root_source_file = b.path("src/root.zig"), .target = target, .optimize = optimize, .link_libc = true, .link_libcpp = true, }); module.linkSystemLibrary("rocksdb", .{}); const lib_unit_tests = b.addTest(.{ .root_source_file = b.path("src/root.zig"), .target = target, .optimize = optimize, }); const run_lib_unit_tests = b.addRunArtifact(lib_unit_tests); const test_step = b.step("test", "Run unit tests"); const run_step = b.step("run", "Run all examples"); test_step.dependOn(&run_lib_unit_tests.step); buildExample(b, "basic", run_step, target, optimize, module); buildExample(b, "cf", run_step, target, optimize, module); } fn buildExample( b: *std.Build, comptime name: []const u8, run_all: *std.Build.Step, target: std.Build.ResolvedTarget, optimize: std.builtin.OptimizeMode, module: *std.Build.Module, ) void { const exe = b.addExecutable(.{ .name = name, .root_source_file = b.path(std.fmt.comptimePrint("examples/{s}.zig", .{name})), .target = target, .optimize = optimize, }); exe.root_module.addImport("rocksdb", module); b.installArtifact(exe); const run_cmd = b.addRunArtifact(exe); if (b.args) |args| { run_cmd.addArgs(args); } const run_step = b.step("run-" ++ name, "Run the app"); run_step.dependOn(&run_cmd.step); run_all.dependOn(&run_cmd.step); }
0
repos/zig-rocksdb
repos/zig-rocksdb/scripts/valgrind.sh
#!/usr/bin/env bash

set -Eeuo pipefail
trap cleanup SIGINT SIGTERM ERR EXIT

cleanup() {
  trap - SIGINT SIGTERM ERR EXIT
}

script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd -P)
cd "${script_dir}/.."

BINS=("./zig-out/bin/basic" "./zig-out/bin/cf")

for bin in "${BINS[@]}"; do
  valgrind --leak-check=full --tool=memcheck \
    --show-leak-kinds=definite,possible --error-exitcode=1 "${bin}"
done
0
repos/zig-rocksdb
repos/zig-rocksdb/src/root.zig
const std = @import("std"); const Options = @import("options.zig").Options; const ReadOptions = @import("options.zig").ReadOptions; const WriteOptions = @import("options.zig").WriteOptions; const ColumnFamily = @import("ColumnFamily.zig"); const mem = std.mem; const Allocator = mem.Allocator; const testing = std.testing; pub const c = @cImport({ @cInclude("rocksdb/c.h"); }); /// Free slice generated by the RocksDB C API. pub fn free(v: []const u8) void { c.rocksdb_free(@constCast(@ptrCast(v.ptr))); } pub const ThreadMode = enum { Single, Multiple, }; /// A RocksDB database, wrapper around `c.rocksdb_t`. /// /// `ThreadMode` controls how column families are managed. pub fn Database(comptime tm: ThreadMode) type { return struct { c_handle: *c.rocksdb_t, allocator: Allocator, cfs: std.StringHashMap(ColumnFamily), cfs_lock: switch (tm) { .Multiple => std.Thread.Mutex, .Single => void, }, const Self = @This(); pub fn open(allocator: Allocator, path: [:0]const u8, opts: Options) !Self { const c_opts = opts.toC(); defer c.rocksdb_options_destroy(c_opts); return Self.openRaw(allocator, path, c_opts); } pub fn openColumnFamilies(allocator: Allocator, path: [:0]const u8, db_opts: Options, cf_opts: Options) !Self { const c_db_opts = db_opts.toC(); defer c.rocksdb_options_destroy(c_db_opts); const cf_names = try Self.listColumnFamilyRaw(path, c_db_opts) orelse return Self.openRaw(allocator, path, c_db_opts); defer c.rocksdb_list_column_families_destroy(cf_names.ptr, cf_names.len); const c_cf_opt = cf_opts.toC(); defer c.rocksdb_options_destroy(c_cf_opt); var c_cf_opts = std.ArrayList(*c.rocksdb_options_t).init(allocator); defer c_cf_opts.deinit(); for (0..cf_names.len) |_| { try c_cf_opts.append(c_cf_opt); } var cf_handles = std.ArrayList(?*c.rocksdb_column_family_handle_t).init(allocator); for (0..cf_names.len) |_| { try cf_handles.append(null); } var err: ?[*:0]u8 = null; const c_handle = c.rocksdb_open_column_families( c_db_opts, path, @intCast(cf_names.len), cf_names.ptr, c_cf_opts.items.ptr, cf_handles.items.ptr, &err, ); if (err) |e| { std.log.err("Error open column families: {s}", .{e}); c.rocksdb_free(err); return error.OpenDatabase; } var cfs = std.StringHashMap(ColumnFamily).init(allocator); for (cf_names, cf_handles.items) |name, handle| { if (handle) |h| { const n = try allocator.dupe(u8, std.mem.span(name)); try cfs.put(n, ColumnFamily.init(h)); } else { return error.ColumnFamilyNull; } } return Self{ .allocator = allocator, .c_handle = c_handle.?, .cfs = cfs, .cfs_lock = switch (tm) { .Single => {}, .Multiple => std.Thread.Mutex{}, }, }; } pub fn openRaw(allocator: Allocator, path: [:0]const u8, c_opts: *c.rocksdb_options_t) !Self { var err: ?[*:0]u8 = null; const c_handle = c.rocksdb_open( c_opts, path.ptr, &err, ); if (err) |e| { std.log.err("Error opening database: {s}", .{e}); c.rocksdb_free(err); return error.OpenDatabase; } return Self{ .c_handle = c_handle.?, .allocator = allocator, .cfs = std.StringHashMap(ColumnFamily).init(allocator), .cfs_lock = switch (tm) { .Single => {}, .Multiple => std.Thread.Mutex{}, }, }; } pub fn deinit(self: *Self) void { var it = self.cfs.iterator(); while (it.next()) |entry| { self.allocator.free(entry.key_ptr.*); entry.value_ptr.*.deinit(); } self.cfs.deinit(); c.rocksdb_close(self.c_handle); } pub fn put(self: Self, key: []const u8, value: []const u8, opts: WriteOptions) !void { const c_opts = opts.toC(); defer c.rocksdb_writeoptions_destroy(c_opts); try self.ffi(c.rocksdb_put, .{ c_opts, key.ptr, key.len, value.ptr, value.len, }); } pub fn 
putCf(self: Self, cf_name: []const u8, key: []const u8, value: []const u8, opts: WriteOptions) !void { const cf = self.cfs.get(cf_name) orelse return error.NoSuchColumnFamily; const c_opts = opts.toC(); defer c.rocksdb_writeoptions_destroy(c_opts); try self.ffi(c.rocksdb_put_cf, .{ c_opts, cf.c_handle, key.ptr, key.len, value.ptr, value.len, }); } pub fn get(self: Self, key: []const u8, opts: ReadOptions) !?[]const u8 { var value_len: usize = 0; const c_opts = opts.toC(); defer c.rocksdb_readoptions_destroy(c_opts); const value = try self.ffi(c.rocksdb_get, .{ c_opts, key.ptr, key.len, &value_len, }); return if (value) |v| v[0..value_len] else null; } pub fn getCf(self: Self, cf_name: []const u8, key: []const u8, opts: ReadOptions) !?[]const u8 { const cf = self.cfs.get(cf_name) orelse return error.NoSuchColumnFamily; var value_len: usize = 0; const c_opts = opts.toC(); defer c.rocksdb_readoptions_destroy(c_opts); const value = try self.ffi(c.rocksdb_get_cf, .{ c_opts, cf.c_handle, key.ptr, key.len, &value_len, }); return if (value) |v| v[0..value_len] else null; } pub fn listColumnFamilyRaw(path: [:0]const u8, c_opts: *c.rocksdb_options_t) !?[][*c]u8 { var err: ?[*:0]u8 = null; var len: usize = 0; const cf_list = c.rocksdb_list_column_families(c_opts, path.ptr, &len, &err); if (err) |e| { const err_msg = std.mem.span(e); if (std.mem.containsAtLeast(u8, err_msg, 1, "No such file or directory")) { return null; } std.log.err("Error list column families: {s}", .{e}); c.rocksdb_free(err); return error.ListColumnFamilies; } return cf_list[0..len]; } pub fn createColumnFamily( self: *Self, name: [:0]const u8, opts: Options, ) !ColumnFamily { if (comptime @TypeOf(self.cfs_lock) != void) self.cfs_lock.lock(); defer if (comptime @TypeOf(self.cfs_lock) != void) self.cfs_lock.unlock(); if (self.cfs.contains(name)) { return error.CFAlreadyExists; } const c_opts = opts.toC(); defer c.rocksdb_options_destroy(c_opts); const c_cf = try self.ffi(c.rocksdb_create_column_family, .{ c_opts, name.ptr, }); errdefer c.rocksdb_column_family_handle_destroy(c_cf); const cf = ColumnFamily{ .c_handle = c_cf.? }; try self.cfs.put(try self.allocator.dupe(u8, name), cf); return cf; } pub fn dropColumnFamily( self: *Self, name: [:0]const u8, ) !void { if (comptime @TypeOf(self.cfs_lock) != void) self.cfs_lock.lock(); defer if (comptime @TypeOf(self.cfs_lock) != void) self.cfs_lock.unlock(); const cf = self.cfs.get(name) orelse return error.CFNotExists; try self.ffi(c.rocksdb_drop_column_family, .{ cf.c_handle, }); cf.deinit(); std.debug.assert(self.cfs.remove(name)); } /// Call RocksDB c API, automatically fill follow params: /// - The first, `?*c.rocksdb_t` /// - The last, `[*c][*c]errptr` fn ffi(self: Self, c_func: anytype, args: anytype) !FFIReturnType(@TypeOf(c_func)) { var ffi_args: std.meta.ArgsTuple(@TypeOf(c_func)) = undefined; ffi_args[0] = self.c_handle; inline for (args, 1..) |arg, i| { ffi_args[i] = arg; } var err: ?[*:0]u8 = null; ffi_args[ffi_args.len - 1] = &err; const v = @call(.auto, c_func, ffi_args); if (err) |e| { std.log.err("Error when call rocksdb, msg:{s}", .{e}); c.rocksdb_free(err); return error.DBError; } return v; } }; } fn FFIReturnType(Func: type) type { const info = @typeInfo(Func); const fn_info = switch (info) { .Fn => |fn_info| fn_info, else => @compileError("expecting a function"), }; return fn_info.return_type.?; }
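For illustration, here is roughly what `try self.ffi(c.rocksdb_put, .{ c_opts, key.ptr, key.len, value.ptr, value.len })` expands to when written by hand; `db` stands in for a `Database(...)` value and this is a sketch only, not part of the library:

```zig
fn putByHand(db: anytype, c_opts: *c.rocksdb_writeoptions_t, key: []const u8, value: []const u8) !void {
    var err: ?[*:0]u8 = null;
    // ffi prepends the database handle and appends the error out-parameter.
    c.rocksdb_put(db.c_handle, c_opts, key.ptr, key.len, value.ptr, value.len, &err);
    if (err) |e| {
        // RocksDB heap-allocates the message; release it with rocksdb_free.
        std.log.err("rocksdb error: {s}", .{e});
        c.rocksdb_free(err);
        return error.DBError;
    }
}
```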
0
repos/zig-rocksdb
repos/zig-rocksdb/src/options.zig
const std = @import("std"); pub const c = @cImport({ @cInclude("rocksdb/c.h"); }); /// The default values mentioned here, describe the values of the C++ library only. /// This wrapper does not set any default value itself. So as soon as the rocksdb /// developers change a default value this document could be outdated. So if you /// really depend on a default value, double check it with the according version of the C++ library. /// Most recent default values should be here /// https://github.com/facebook/rocksdb/blob/v9.1.1/include/rocksdb/options.h#L489 pub const Options = struct { /// If true, the database will be created if it is missing. /// Default: false create_if_missing: ?bool = null, /// If true, missing column families will be automatically created on `DB::Open()` /// Default: false create_missing_column_families: ?bool = null, /// If true, an error is raised if the database already exists. /// Default: false error_if_exists: ?bool = null, /// If true, RocksDB will aggressively check consistency of the data. /// Also, if any of the writes to the database fails (Put, Delete, Merge, /// Write), the database will switch to read-only mode and fail all other /// Write operations. /// In most cases you want this to be set to true. /// Default: true paranoid_checks: ?bool = null, /// Default: -1 max_open_files: ?i32 = null, /// Default: 16 max_file_opening_threads: ?i32 = null, /// Once write-ahead logs exceed this size, we will start forcing the flush of /// column families whose memtables are backed by the oldest live WAL file /// (i.e. the ones that are causing all the space amplification). If set to 0 /// (default), we will dynamically choose the WAL size limit to be /// [sum of all write_buffer_size * max_write_buffer_number] * 4 /// /// For example, with 15 column families, each with /// write_buffer_size = 128 MB /// max_write_buffer_number = 6 /// max_total_wal_size will be calculated to be [15 * 128MB * 6] * 4 = 45GB /// /// Default: 0 /// /// Dynamically changeable through SetDBOptions() API. max_total_wal_size: ?u64 = null, /// Maximum number of concurrent background jobs (compactions and flushes). /// /// Default: 2 /// /// Dynamically changeable through SetDBOptions() API. max_background_jobs: ?i32 = null, /// Default: false use_adaptive_mutex: ?bool = null, /// Default: false enable_pipelined_write: ?bool = null, /// Convert this options to `*c.rocksdb_options_t`. pub fn toC(self: Options) *c.rocksdb_options_t { const opts = c.rocksdb_options_create(); errdefer comptime unreachable; // For option `create_if_missing`, its setter function name // is `rocksdb_options_set_create_if_missing`. // All options follow this pattern, so we can generate those at comptime. inline for (std.meta.fields(Options)) |fld| { if (@field(self, fld.name)) |value| { const v = if (fld.type == ?bool) @intFromBool(value) else value; const setter = std.fmt.comptimePrint("rocksdb_options_set_{s}", .{fld.name}); @call(.auto, @field(c, setter), .{ opts, v }); } } return opts.?; } }; /// Options that control read operations /// https://github.com/facebook/rocksdb/blob/v9.1.1/include/rocksdb/options.h#L1550 pub const ReadOptions = struct { /// If true, all data read from underlying storage will be /// verified against corresponding checksums. 
/// Default: true verify_checksums: ?bool = null, /// Defaut: true fill_cache: ?bool = null, /// Default: false ignore_range_deletions: ?bool = null, /// Default: false total_order_seek: ?bool = null, /// Default: false prefix_same_as_start: ?bool = null, /// Default: false pin_data: ?bool = null, /// Default: false background_purge_on_iterator_cleanup: ?bool = null, pub fn toC(self: ReadOptions) *c.rocksdb_readoptions_t { const opts = c.rocksdb_readoptions_create(); errdefer comptime unreachable; inline for (std.meta.fields(ReadOptions)) |fld| { if (@field(self, fld.name)) |value| { const v = if (fld.type == ?bool) @intFromBool(value) else value; const setter = std.fmt.comptimePrint("rocksdb_readoptions_set_{s}", .{fld.name}); @call(.auto, @field(c, setter), .{ opts, v }); } } return opts.?; } }; /// Options that control write operations /// https://github.com/facebook/rocksdb/blob/v9.1.1/include/rocksdb/options.h#L1800 pub const WriteOptions = struct { /// Default: false sync: ?bool = null, /// Default: false disable_wal: ?bool = null, /// Default: false ignore_missing_column_families: ?bool = null, /// Default: false no_slowdown: ?bool = null, /// Default: false low_pri: ?bool = null, /// Default: false memtable_insert_hint_per_batch: ?bool = null, pub fn toC(self: WriteOptions) *c.rocksdb_writeoptions_t { const opts = c.rocksdb_writeoptions_create(); errdefer comptime unreachable; inline for (std.meta.fields(WriteOptions)) |fld| { if (@field(self, fld.name)) |value| { const v = if (fld.type == ?bool) @intFromBool(value) else value; if (comptime std.mem.eql(u8, "disable_wal", fld.name)) { c.rocksdb_writeoptions_disable_WAL(opts, v); } else { const setter = std.fmt.comptimePrint("rocksdb_writeoptions_set_{s}", .{fld.name}); @call(.auto, @field(c, setter), .{ opts, v }); } } } return opts.?; } };
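All three `toC` helpers follow the same pattern: create the C options object, apply only the fields that were explicitly set, and leave destruction to the caller. A short usage sketch:

```zig
// Hypothetical: only the two set fields generate setter calls; everything
// else keeps RocksDB's C++ defaults.
const opts = Options{
    .create_if_missing = true,
    .max_background_jobs = 4,
};
const c_opts = opts.toC();
defer c.rocksdb_options_destroy(c_opts);
```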
0
repos/zig-rocksdb
repos/zig-rocksdb/src/ColumnFamily.zig
pub const c = @cImport({
    @cInclude("rocksdb/c.h");
});

const Self = @This();

c_handle: *c.rocksdb_column_family_handle_t,

pub fn init(c_handle: *c.rocksdb_column_family_handle_t) Self {
    return Self{
        .c_handle = c_handle,
    };
}

pub fn deinit(self: Self) void {
    c.rocksdb_column_family_handle_destroy(self.c_handle);
}
0
repos/zig-rocksdb
repos/zig-rocksdb/examples/basic.zig
const std = @import("std"); const rocksdb = @import("rocksdb"); pub fn main() !void { const allocator = std.heap.page_allocator; var db = try rocksdb.Database(.Single).open( allocator, "/tmp/zig-rocksdb-basic", .{ .create_if_missing = true, }, ); defer db.deinit(); for (0..10) |i| { const key = try std.fmt.allocPrint(allocator, "key-{d}", .{i}); defer allocator.free(key); const value = try std.fmt.allocPrint(allocator, "{d}", .{i * i}); defer allocator.free(value); try db.put(key, value, .{}); } for (0..10) |i| { const key = try std.fmt.allocPrint(allocator, "key-{d}", .{i}); defer allocator.free(key); const value = try db.get(key, .{}); if (value) |v| { defer rocksdb.free(v); std.debug.print("{s} = {s}\n", .{ key, v }); } } }
0
repos/zig-rocksdb
repos/zig-rocksdb/examples/cf.zig
const std = @import("std"); const rocksdb = @import("rocksdb"); pub fn main() !void { const allocator = std.heap.page_allocator; var db = try rocksdb.Database(.Multiple).openColumnFamilies( allocator, "/tmp/zig-rocksdb-cf", .{ .create_if_missing = true }, .{}, ); defer db.deinit(); const cf_name = "metadata"; if (!db.cfs.contains(cf_name)) { _ = try db.createColumnFamily(cf_name, .{}); } try db.putCf(cf_name, "key", "value", .{}); inline for ([_][:0]const u8{ "key", "key2" }) |key| { const value = try db.getCf(cf_name, key, .{}); if (value) |v| { defer rocksdb.free(v); std.debug.print("{s} is {s}\n", .{ key, v }); } else { std.debug.print("{s} not found\n", .{key}); } } _ = db.createColumnFamily(cf_name, .{}) catch |e| { std.log.err("err:{any}", .{e}); return; }; try db.dropColumnFamily(cf_name); }
0
repos
repos/vite-plugin-zig/index.cjs
var __create = Object.create;
var __defProp = Object.defineProperty;
var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
var __getOwnPropNames = Object.getOwnPropertyNames;
var __getProtoOf = Object.getPrototypeOf;
var __hasOwnProp = Object.prototype.hasOwnProperty;
var __export = (target, all) => {
  for (var name in all)
    __defProp(target, name, { get: all[name], enumerable: true });
};
var __copyProps = (to, from, except, desc) => {
  if (from && typeof from === "object" || typeof from === "function") {
    for (let key of __getOwnPropNames(from))
      if (!__hasOwnProp.call(to, key) && key !== except)
        __defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable });
  }
  return to;
};
var __toESM = (mod, isNodeMode, target) => (target = mod != null ? __create(__getProtoOf(mod)) : {}, __copyProps(
  // If the importer is in node compatibility mode or this is not an ESM
  // file that has been converted to a CommonJS file using a Babel-
  // compatible transform (i.e. "__esModule" has not been set), then set
  // "default" to the CommonJS "module.exports" for node compatibility.
  isNodeMode || !mod || !mod.__esModule ? __defProp(target, "default", { value: mod, enumerable: true }) : target,
  mod
));
var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod);

// index.js
var vite_plugin_zig_exports = {};
__export(vite_plugin_zig_exports, {
  default: () => zig
});
module.exports = __toCommonJS(vite_plugin_zig_exports);
var import_child_process = require("child_process");
var fs = __toESM(require("fs/promises"), 1);
var path = __toESM(require("path"), 1);
var os = __toESM(require("os"), 1);
var ext = ".zig";
var run = (p) => new Promise((resolve, reject) => {
  p.on(
    "close",
    (code) => code === 0 ? resolve() : reject(new Error(`Command ${p.spawnargs.join(" ")} failed with error code: ${code}`))
  );
  p.on("error", reject);
});
function zig({ outDir = "wasm", tmpDir = os.tmpdir() } = {}) {
  let config;
  const map = /* @__PURE__ */ new Map();
  return {
    name: "vite-plugin-zig",
    // resolveId(source, importer, options) {
    //   console.log({ source, importer, options });
    // },
    // load(id, options) {
    //   console.log({ id, options });
    //   if (id.endsWith(ext)) {
    //     console.log(`load ${id}`);
    //   }
    // },
    async transform(code, id, options) {
      const [filename, raw_query] = id.split(`?`, 2);
      if (filename.endsWith(ext)) {
        const name = path.basename(filename).slice(0, -ext.length);
        const wasm_file = `${name}.wasm`;
        const temp_file = path.posix.join(tmpDir, wasm_file);
        const mode = "ReleaseSmall";
        const command = `zig build-exe ${filename} -femit-bin=${temp_file} -fno-entry -rdynamic -target wasm32-freestanding -O ${mode}`;
        const [cmd, ...args] = command.split(" ");
        const zig2 = (0, import_child_process.spawn)(cmd, args, { stdio: "inherit" });
        await run(zig2);
        const wasm = await fs.readFile(temp_file);
        const dir = path.posix.join(config.build.assetsDir, outDir);
        const output_file = path.posix.join(dir, wasm_file);
        const output_url = path.posix.join(config.base, output_file);
        map.set(output_file, wasm);
        const query = new URLSearchParams(raw_query);
        const instantiate = query.get("instantiate") !== null;
        const code2 = config.build.target === "esnext" ? instantiate ? `
const importObject = { env: { print(result) { console.log(result); } } };
export const { module, instance } = await WebAssembly.instantiateStreaming(fetch("${output_url}"), importObject);
export const { exports } = instance;
` : `
export const module = await WebAssembly.compileStreaming(fetch('${output_url}'));
export const instantiate = importObject => WebAssembly.instantiate(module, importObject).then(instance => {
  const { exports } = instance;
  return { instance, exports };
});
` : instantiate ? `
const importObject = { env: { print(result) { console.log(result); } } };
export let module, instance, exports;
export const instantiated = WebAssembly.instantiateStreaming(fetch("${output_url}"), importObject).then(result => {
  ({ module, instance } = result);
  ({ exports } = instance);
})
` : `
const importObject = { env: { print(result) { console.log(result); } } };
export let module;
export const compiled = WebAssembly.compileStreaming(fetch("${output_url}")).then(result => {
  module = result;
  return module;
})
export const instantiate = importObject => compiled.then(module => WebAssembly.instantiate(module, importObject).then(instance => {
  const { exports } = instance;
  return { instance, exports };
}));
`;
        return {
          code: code2,
          map: { mappings: "" }
          // moduleSideEffects: false,
        };
      }
    },
    // adapted from vite-plugin-wasm-pack
    buildEnd() {
      map.forEach((wasm, output_file) => {
        this.emitFile({
          type: "asset",
          fileName: output_file,
          // name: path.basename(output_file),
          source: wasm
        });
      });
    },
    // alternative approach used in vite-plugin-wasm-go
    // closeBundle() {
    //   map.forEach((value, output_file) => {
    //     const buildFilename = path.posix.join(buildConfig.build.outDir, output_file);
    //     await fs.mkdirs(path.dirname(buildFilename));
    //     await fs.writeFile(buildFilename, value);
    //   });
    // },
    configResolved(resolvedConfig) {
      config = resolvedConfig;
    },
    // adapted from vite-plugin-wasm-go
    configureServer(server) {
      server.middlewares.use((req, res, next) => {
        const url = req.url?.replace(/^\//, "") || "";
        if (map.get(url)) {
          res.writeHead(200, { "Content-Type": "application/wasm" });
          res.end(map.get(url));
          return;
        }
        next();
      });
    }
  };
}
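The plugin compiles each imported `.zig` file with `zig build-exe … -fno-entry -rdynamic -target wasm32-freestanding`, so every `export fn` becomes a wasm export, and the generated loader wires an `env.print` import to `console.log`. A minimal module sketch that matches that contract (the file name and function are illustrative):

```zig
// add.zig — compiled to add.wasm by the plugin when imported from JS.
extern "env" fn print(result: i32) void;

export fn add(a: i32, b: i32) i32 {
    const sum = a + b;
    print(sum); // routed to console.log by the default importObject
    return sum;
}
```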
0
repos
repos/vite-plugin-zig/package.json
{ "name": "vite-plugin-zig", "version": "0.0.13", "files": [ "index.js", "index.cjs", "index.d.ts" ], "keywords": [ "rollup", "rollup-plugin", "vite", "vite-plugin", "wasm", "zig" ], "license": "MIT", "repository": { "url": "https://github.com/pluvial/vite-plugin-zig" }, "exports": { ".": { "import": "./index.js", "require": "./index.cjs" } }, "main": "index.js", "type": "module", "scripts": { "build-cjs": "esbuild index.js --bundle --platform=node --outfile=index.cjs", "prepublish": "pnpm build-cjs" }, "devDependencies": { "esbuild": "^0.21.5", "vite": "^5.3.1" }, "packageManager": "[email protected]+sha512.ee7b93e0c2bd11409c6424f92b866f31d3ea1bef5fbe47d3c7500cdc3c9668833d2e55681ad66df5b640c61fa9dc25d546efa54d76d7f8bf54b13614ac293631" }
0
repos
repos/vite-plugin-zig/pnpm-workspace.yaml
packages:
  - 'examples/*'
0
repos
repos/vite-plugin-zig/pnpm-lock.yaml
lockfileVersion: '9.0'

settings:
  autoInstallPeers: true
  excludeLinksFromLockfile: false

importers:

  .:
    devDependencies:
      esbuild:
        specifier: ^0.21.5
        version: 0.21.5
      vite:
        specifier: ^5.3.1
        version: 5.3.1

  examples/svelte-kit-zig-project:
    devDependencies:
      '@sveltejs/adapter-static':
        specifier: ^3.0.2
        version: 3.0.2(@sveltejs/[email protected](@sveltejs/[email protected]([email protected])([email protected]))([email protected])([email protected]))
      '@sveltejs/kit':
        specifier: ^2.0.0
        version: 2.5.16(@sveltejs/[email protected]([email protected])([email protected]))([email protected])([email protected])
      '@sveltejs/vite-plugin-svelte':
        specifier: ^3.0.0
        version: 3.1.1([email protected])([email protected])
      svelte:
        specifier: ^4.2.7
        version: 4.2.18
      vite:
        specifier: ^5.0.3
        version: 5.3.1
      vite-plugin-zig:
        specifier: workspace:*
        version: link:../..

  examples/vite-zig-project:
    devDependencies:
      vite:
        specifier: ^5.2.0
        version: 5.3.1
      vite-plugin-inspect:
        specifier: ^0.8.4
        version: 0.8.4([email protected])([email protected])
      vite-plugin-zig:
        specifier: workspace:*
        version: link:../..

packages:

  '@ampproject/[email protected]':
    resolution: {integrity: sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw==}
    engines: {node: '>=6.0.0'}

  '@antfu/[email protected]':
    resolution: {integrity: sha512-rWQkqXRESdjXtc+7NRfK9lASQjpXJu1ayp7qi1d23zZorY+wBHVLHHoVcMsEnkqEBWTFqbztO7/QdJFzyEcLTg==}

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-1SDgH6ZSPTlggy1yI6+Dbkiz8xzpHJEVAlF/AM1tHPLsf5STom9rwtjE4hKAF20FfXXNTFqEYXyJNWh1GiZedQ==}
    engines: {node: '>=12'}
    cpu: [ppc64]
    os: [aix]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-c0uX9VAUBQ7dTDCjq+wdyGLowMdtR/GoC2U5IYk/7D1H1JYC0qseD7+11iMP2mRLN9RcCMRcjC4YMclCzGwS/A==}
    engines: {node: '>=12'}
    cpu: [arm64]
    os: [android]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-vCPvzSjpPHEi1siZdlvAlsPxXl7WbOVUBBAowWug4rJHb68Ox8KualB+1ocNvT5fjv6wpkX6o/iEpbDrf68zcg==}
    engines: {node: '>=12'}
    cpu: [arm]
    os: [android]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-D7aPRUUNHRBwHxzxRvp856rjUHRFW1SdQATKXH2hqA0kAZb1hKmi02OpYRacl0TxIGz/ZmXWlbZgjwWYaCakTA==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [android]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-DwqXqZyuk5AiWWf3UfLiRDJ5EDd49zg6O9wclZ7kUMv2WRFr4HKjXp/5t8JZ11QbQfUS6/cRCKGwYhtNAY88kQ==}
    engines: {node: '>=12'}
    cpu: [arm64]
    os: [darwin]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-se/JjF8NlmKVG4kNIuyWMV/22ZaerB+qaSi5MdrXtd6R08kvs2qCN4C09miupktDitvh8jRFflwGFBQcxZRjbw==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [darwin]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-5JcRxxRDUJLX8JXp/wcBCy3pENnCgBR9bN6JsY4OmhfUtIHe3ZW0mawA7+RDAcMLrMIZaf03NlQiX9DGyB8h4g==}
    engines: {node: '>=12'}
    cpu: [arm64]
    os: [freebsd]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-J95kNBj1zkbMXtHVH29bBriQygMXqoVQOQYA+ISs0/2l3T9/kj42ow2mpqerRBxDJnmkUDCaQT/dfNXWX/ZZCQ==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [freebsd]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-ibKvmyYzKsBeX8d8I7MH/TMfWDXBF3db4qM6sy+7re0YXya+K1cem3on9XgdT2EQGMu4hQyZhan7TeQ8XkGp4Q==}
    engines: {node: '>=12'}
    cpu: [arm64]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-bPb5AHZtbeNGjCKVZ9UGqGwo8EUu4cLq68E95A53KlxAPRmUyYv2D6F0uUI65XisGOL1hBP5mTronbgo+0bFcA==}
    engines: {node: '>=12'}
    cpu: [arm]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-YvjXDqLRqPDl2dvRODYmmhz4rPeVKYvppfGYKSNGdyZkA01046pLWyRKKI3ax8fbJoK5QbxblURkwK/MWY18Tg==}
    engines: {node: '>=12'}
    cpu: [ia32]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-uHf1BmMG8qEvzdrzAqg2SIG/02+4/DHB6a9Kbya0XDvwDEKCoC8ZRWI5JJvNdUjtciBGFQ5PuBlpEOXQj+JQSg==}
    engines: {node: '>=12'}
    cpu: [loong64]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-IajOmO+KJK23bj52dFSNCMsz1QP1DqM6cwLUv3W1QwyxkyIWecfafnI555fvSGqEKwjMXVLokcV5ygHW5b3Jbg==}
    engines: {node: '>=12'}
    cpu: [mips64el]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-1hHV/Z4OEfMwpLO8rp7CvlhBDnjsC3CttJXIhBi+5Aj5r+MBvy4egg7wCbe//hSsT+RvDAG7s81tAvpL2XAE4w==}
    engines: {node: '>=12'}
    cpu: [ppc64]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-2HdXDMd9GMgTGrPWnJzP2ALSokE/0O5HhTUvWIbD3YdjME8JwvSCnNGBnTThKGEB91OZhzrJ4qIIxk/SBmyDDA==}
    engines: {node: '>=12'}
    cpu: [riscv64]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-zus5sxzqBJD3eXxwvjN1yQkRepANgxE9lgOW2qLnmr8ikMTphkjgXu1HR01K4FJg8h1kEEDAqDcZQtbrRnB41A==}
    engines: {node: '>=12'}
    cpu: [s390x]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-1rYdTpyv03iycF1+BhzrzQJCdOuAOtaqHTWJZCWvijKD2N5Xu0TtVC8/+1faWqcP9iBCWOmjmhoH94dH82BxPQ==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [linux]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-Woi2MXzXjMULccIwMnLciyZH4nCIMpWQAs049KEeMvOcNADVxo0UBIQPfSmxB3CWKedngg7sWZdLvLczpe0tLg==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [netbsd]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-HLNNw99xsvx12lFBUwoT8EVCsSvRNDVxNpjZ7bPn947b8gJPzeHWyNVhFsaerc0n3TsbOINvRP2byTZ5LKezow==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [openbsd]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-6+gjmFpfy0BHU5Tpptkuh8+uw3mnrvgs+dSPQXQOv3ekbordwnzTVEb4qnIvQcYXq6gzkyTnoZ9dZG+D4garKg==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [sunos]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-Z0gOTd75VvXqyq7nsl93zwahcTROgqvuAcYDUr+vOv8uHhNSKROyU961kgtCD1e95IqPKSQKH7tBTslnS3tA8A==}
    engines: {node: '>=12'}
    cpu: [arm64]
    os: [win32]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-SWXFF1CL2RVNMaVs+BBClwtfZSvDgtL//G/smwAc5oVK/UPu2Gu9tIaRgFmYFFKrmg3SyAjSrElf0TiJ1v8fYA==}
    engines: {node: '>=12'}
    cpu: [ia32]
    os: [win32]

  '@esbuild/[email protected]':
    resolution: {integrity: sha512-tQd/1efJuzPC6rCFwEvLtci/xNFcTZknmXs98FYDfGE4wP9ClFV98nyKrzJKVPMhdDnjzLhdUyMX4PsQAPjwIw==}
    engines: {node: '>=12'}
    cpu: [x64]
    os: [win32]

  '@jridgewell/[email protected]':
    resolution: {integrity: sha512-IzL8ZoEDIBRWEzlCcRhOaCupYyN5gdIK+Q6fbFdPDg6HqX6jpkItn7DFIpW9LQzXG6Df9sA7+OKnq0qlz/GaQg==}
    engines: {node: '>=6.0.0'}

  '@jridgewell/[email protected]':
    resolution: {integrity: sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==}
    engines: {node: '>=6.0.0'}

  '@jridgewell/[email protected]':
    resolution: {integrity: sha512-R8gLRTZeyp03ymzP/6Lil/28tGeGEzhx1q2k703KGWRAI1VdvPIXdG70VJc2pAMw3NA6JKL5hhFu1sJX0Mnn/A==}
    engines: {node: '>=6.0.0'}

  '@jridgewell/[email protected]':
    resolution: {integrity: sha512-eF2rxCRulEKXHTRiDrDy6erMYWqNw4LPdQ8UQA4huuxaQsVeRPFl2oM8oDGxMFhJUWZf9McpLtJasDDZb/Bpeg==}

  '@jridgewell/[email protected]':
    resolution: {integrity: sha512-vNk6aEwybGtawWmy/PzwnGDOjCkLWSD2wqvjGGAgOAwCGWySYXfYoxt00IJkTF+8Lb57DwOb3Aa0o9CApepiYQ==}

  '@polka/[email protected]':
    resolution: {integrity: sha512-j7P6Rgr3mmtdkeDGTe0E/aYyWEWVtc5yFXtHCRHs28/jptDEWfaVOc5T7cblqy1XKPPfCxJc/8DwQ5YgLOZOVQ==}

  '@rollup/[email protected]':
    resolution: {integrity: sha512-XTIWOPPcpvyKI6L1NHo0lFlCyznUEyPmPY1mc3KpPVDYulHSTvyeLNVW00QTLIAFNhR3kYnJTQHeGqU4M3n09g==}
    engines: {node: '>=14.0.0'}
    peerDependencies:
      rollup: ^1.20.0||^2.0.0||^3.0.0||^4.0.0
    peerDependenciesMeta:
      rollup:
        optional: true

  '@rollup/[email protected]':
    resolution: {integrity: sha512-Tya6xypR10giZV1XzxmH5wr25VcZSncG0pZIjfePT0OVBvqNEurzValetGNarVrGiq66EBVAFn15iYX4w6FKgQ==}
    cpu: [arm]
    os: [android]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-avCea0RAP03lTsDhEyfy+hpfr85KfyTctMADqHVhLAF3MlIkq83CP8UfAHUssgXTYd+6er6PaAhx/QGv4L1EiA==}
    cpu: [arm64]
    os: [android]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-IWfdwU7KDSm07Ty0PuA/W2JYoZ4iTj3TUQjkVsO/6U+4I1jN5lcR71ZEvRh52sDOERdnNhhHU57UITXz5jC1/w==}
    cpu: [arm64]
    os: [darwin]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-n2LMsUz7Ynu7DoQrSQkBf8iNrjOGyPLrdSg802vk6XT3FtsgX6JbE8IHRvposskFm9SNxzkLYGSq9QdpLYpRNA==}
    cpu: [x64]
    os: [darwin]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-C/zbRYRXFjWvz9Z4haRxcTdnkPt1BtCkz+7RtBSuNmKzMzp3ZxdM28Mpccn6pt28/UWUCTXa+b0Mx1k3g6NOMA==}
    cpu: [arm]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-l3m9ewPgjQSXrUMHg93vt0hYCGnrMOcUpTz6FLtbwljo2HluS4zTXFy2571YQbisTnfTKPZ01u/ukJdQTLGh9A==}
    cpu: [arm]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-rJ5D47d8WD7J+7STKdCUAgmQk49xuFrRi9pZkWoRD1UeSMakbcepWXPF8ycChBoAqs1pb2wzvbY6Q33WmN2ftw==}
    cpu: [arm64]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-be6Yx37b24ZwxQ+wOQXXLZqpq4jTckJhtGlWGZs68TgdKXJgw54lUUoFYrg6Zs/kjzAQwEwYbp8JxZVzZLRepQ==}
    cpu: [arm64]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-hNVMQK+qrA9Todu9+wqrXOHxFiD5YmdEi3paj6vP02Kx1hjd2LLYR2eaN7DsEshg09+9uzWi2W18MJDlG0cxJA==}
    cpu: [ppc64]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-ROCM7i+m1NfdrsmvwSzoxp9HFtmKGHEqu5NNDiZWQtXLA8S5HBCkVvKAxJ8U+CVctHwV2Gb5VUaK7UAkzhDjlg==}
    cpu: [riscv64]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-0UyyRHyDN42QL+NbqevXIIUnKA47A+45WyasO+y2bGJ1mhQrfrtXUpTxCOrfxCR4esV3/RLYyucGVPiUsO8xjg==}
    cpu: [s390x]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-xuglR2rBVHA5UsI8h8UbX4VJ470PtGCf5Vpswh7p2ukaqBGFTnsfzxUBetoWBWymHMxbIG0Cmx7Y9qDZzr648w==}
    cpu: [x64]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-LKaqQL9osY/ir2geuLVvRRs+utWUNilzdE90TpyoX0eNqPzWjRm14oMEE+YLve4k/NAqCdPkGYDaDF5Sw+xBfg==}
    cpu: [x64]
    os: [linux]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-7J6TkZQFGo9qBKH0pk2cEVSRhJbL6MtfWxth7Y5YmZs57Pi+4x6c2dStAUvaQkHQLnEQv1jzBUW43GvZW8OFqA==}
    cpu: [arm64]
    os: [win32]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-Txjh+IxBPbkUB9+SXZMpv+b/vnTEtFyfWZgJ6iyCmt2tdx0OF5WhFowLmnh8ENGNpfUlUZkdI//4IEmhwPieNg==}
    cpu: [ia32]
    os: [win32]

  '@rollup/[email protected]':
    resolution: {integrity: sha512-UOo5FdvOL0+eIVTgS4tIdbW+TtnBLWg1YBCcU2KWM7nuNwRz9bksDX1bekJJCpu25N1DVWaCwnT39dVQxzqS8g==}
    cpu: [x64]
    os: [win32]

  '@sveltejs/[email protected]':
    resolution: {integrity: sha512-/EBFydZDwfwFfFEuF1vzUseBoRziwKP7AoHAwv+Ot3M084sE/HTVBHf9mCmXfdM9ijprY5YEugZjleflncX5fQ==}
    peerDependencies:
      '@sveltejs/kit': ^2.0.0

  '@sveltejs/[email protected]':
    resolution: {integrity: sha512-09Ypy+ibuhTCTpRFRnR+cDI3VARiu16o7vVSjETAA43ZCLtqvrNrVxUkJ/fKHrAjx2peKWilcHE8+SbW2Z/AsQ==}
    engines: {node: '>=18.13'}
    hasBin: true
    peerDependencies:
      '@sveltejs/vite-plugin-svelte': ^3.0.0
      svelte: ^4.0.0 || ^5.0.0-next.0
      vite:
^5.0.3 '@sveltejs/[email protected]': resolution: {integrity: sha512-9QX28IymvBlSCqsCll5t0kQVxipsfhFFL+L2t3nTWfXnddYwxBuAEtTtlaVQpRz9c37BhJjltSeY4AJSC03SSg==} engines: {node: ^18.0.0 || >=20} peerDependencies: '@sveltejs/vite-plugin-svelte': ^3.0.0 svelte: ^4.0.0 || ^5.0.0-next.0 vite: ^5.0.0 '@sveltejs/[email protected]': resolution: {integrity: sha512-rimpFEAboBBHIlzISibg94iP09k/KYdHgVhJlcsTfn7KMBhc70jFX/GRWkRdFCc2fdnk+4+Bdfej23cMDnJS6A==} engines: {node: ^18.0.0 || >=20} peerDependencies: svelte: ^4.0.0 || ^5.0.0-next.0 vite: ^5.0.0 '@types/[email protected]': resolution: {integrity: sha512-4Kh9a6B2bQciAhf7FSuMRRkUWecJgJu9nPnx3yzpsfXX/c50REIqpHY4C82bXP90qrLtXtkDxTZosYO3UpOwlA==} '@types/[email protected]': resolution: {integrity: sha512-/kYRxGDLWzHOB7q+wtSUQlFrtcdUccpfy+X+9iMBpHK8QLLhx2wIPYuS5DYtR9Wa/YlZAbIovy7qVdB1Aq6Lyw==} [email protected]: resolution: {integrity: sha512-RTvkC4w+KNXrM39/lWCUaG0IbRkWdCv7W/IOW9oU6SawyxulvkQy5HQPVTKxEjczcUvapcrw3cFx/60VN/NRNw==} engines: {node: '>=0.4.0'} hasBin: true [email protected]: resolution: {integrity: sha512-b0P0sZPKtyu8HkeRAfCq0IfURZK+SuwMjY1UXGBU27wpAiTwQAIlq56IbIO+ytk/JjS1fMR14ee5WBBfKi5J6A==} [email protected]: resolution: {integrity: sha512-+60uv1hiVFhHZeO+Lz0RYzsVHy5Wr1ayX0mwda9KPDVLNJgZ1T9Ny7VmFbLDzxsH0D87I86vgj3gFrjTJUYznw==} [email protected]: resolution: {integrity: sha512-tjwM5exMg6BGRI+kNmTntNsvdZS1X8BFYS6tnJ2hdH0kVxM6/eVZ2xy+FqStSWvYmtfFMDLIxurorHwDKfDz5Q==} engines: {node: '>=18'} [email protected]: resolution: {integrity: sha512-7qJWqItLA8/VPVlKJlFXU+NBlo/qyfs39aJcuMT/2ere32ZqvF5OSxgdM5xOfJJ7O429gg2HM47y8v9P+9wrNw==} [email protected]: resolution: {integrity: sha512-U71cyTamuh1CRNCfpGY6to28lxvNwPG4Guz/EVjgf3Jmzv0vlDp1atT9eS5dDjMYHucpHbWns6Lwf3BKz6svdw==} engines: {node: '>= 0.6'} [email protected]: resolution: {integrity: sha512-6Fv1DV/TYw//QF5IzQdqsNDjx/wc8TrMBZsqjL9eW01tWb7R7k/mq+/VXfJCl7SoD5emsJop9cOByJZfs8hYIw==} engines: {node: ^10 || ^12.20.0 || ^14.13.0 || >=15.0.0} [email protected]: resolution: {integrity: sha512-pt0bNEmneDIvdL1Xsd9oDQ/wrQRkXDT4AUWlNZNPKvW5x/jyO9VFXkJUP07vQ2upmw5PlaITaPKc31jK13V+jg==} engines: {node: '>=6.0'} peerDependencies: supports-color: '*' peerDependenciesMeta: supports-color: optional: true [email protected]: resolution: {integrity: sha512-3sUqbMEc77XqpdNO7FRyRog+eW3ph+GYCbj+rK+uYyRMuwsVy0rMiVtPn+QJlKFvWP/1PYpapqYn0Me2knFn+A==} engines: {node: '>=0.10.0'} [email protected]: resolution: {integrity: sha512-A6p/pu/6fyBcA1TRz/GqWYPViplrftcW2gZC9q79ngNCKAeR/X3gcEdXQHl4KNXV+3wgIJ1CPkJQ3IHM6lcsyA==} engines: {node: '>=18'} [email protected]: resolution: {integrity: sha512-WY/3TUME0x3KPYdRRxEJJvXRHV4PyPoUsxtZa78lwItwRQRHhd2U9xOscaT/YTf8uCXIAjeJOFBVEh/7FtD8Xg==} engines: {node: '>=18'} [email protected]: resolution: {integrity: sha512-N+MeXYoqr3pOgn8xfyRPREN7gHakLYjhsHhWGT3fWAiL4IkAt0iDw14QiiEm2bE30c5XX5q0FtAA3CK5f9/BUg==} engines: {node: '>=12'} [email protected]: resolution: {integrity: sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==} engines: {node: '>=6'} [email protected]: resolution: {integrity: sha512-gO+/OMXF7488D+u3ue+G7Y4AA3ZmUnB3eHJXmBTgNHvr4ZNzl36A0ZtG+XCRNYCkYx/bFmw4qtkoFLa+wSrwAA==} [email protected]: resolution: {integrity: sha512-l0uy0kAoo6toCgVOYaAayqtPa2a1L15efxUMEnQebKwLQX2X0OpS6wMMQdc4juJXmxd9i40DuaUHq+mjIya9TQ==} [email protected]: resolution: {integrity: sha512-mg3OPMV4hXywwpoDxu3Qda5xCKQi+vCTZq8S9J/EpkhB2HzKXq4SNFZE3+NK93JYxc8VMSep+lOUSC/RVKaBqw==} engines: {node: '>=12'} hasBin: true [email protected]: resolution: {integrity: 
sha512-Cf6VksWPsTuW01vU9Mk/3vRue91Zevka5SjyNf3nEpokFRuqt/KjUQoGAwq9qMmhpLTHmXzSIrFRw8zxWzmFBA==} [email protected]: resolution: {integrity: sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==} [email protected]: resolution: {integrity: sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==} [email protected]: resolution: {integrity: sha512-PmDi3uwK5nFuXh7XDTlVnS17xJS7vW36is2+w3xcv8SVxiB4NyATf4ctkVY5bkSjX0Y4nbvZCq1/EjtEyr9ktw==} engines: {node: '>=14.14'} [email protected]: resolution: {integrity: sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==} engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0} os: [darwin] [email protected]: resolution: {integrity: sha512-40oNTM9UfG6aBmuKxk/giHn5nQ8RVz/SS4Ir6zgzOv9/qC3kKZ9v4etGTcJbEl/NyVQH7FGU7d+X1egr57Md2Q==} [email protected]: resolution: {integrity: sha512-uHJgbwAMwNFf5mLst7IWLNg14x1CkeqglJb/K3doi4dw6q2IvAAmM/Y81kevy83wP+Sst+nutFTYOGg3d1lsxg==} [email protected]: resolution: {integrity: sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==} [email protected]: resolution: {integrity: sha512-I6fiaX09Xivtk+THaMfAwnA3MVA5Big1WHF1Dfx9hFuvNIWpXnorlkzhcQf6ehrqQiiZECRt1poOAkPmer3ruw==} [email protected]: resolution: {integrity: sha512-eljcgEDlEns/7AXFosB5K/2nCM4P7FQPkGc/DWLy5rmFEWvZayGrik1d9/QIY5nJ4f9YsVvBkA6kJpHn9rISdQ==} engines: {node: ^12.20.0 || ^14.13.1 || >=16.0.0} hasBin: true [email protected]: resolution: {integrity: sha512-KIYLCCJghfHZxqjYBE7rEy0OBuTd5xCHS7tHVgvCLkx7StIoaxwNW3hCALgEUjFfeRk+MG/Qxmp/vtETEF3tRA==} engines: {node: '>=14.16'} hasBin: true [email protected]: resolution: {integrity: sha512-v3rht/LgVcsdZa3O2Nqs+NMowLOxeOm7Ay9+/ARQ2F+qEoANRcqrjAZKGN0v8ymUetZGgkp26LTnGT7H0Qo9Pg==} [email protected]: resolution: {integrity: sha512-UcVfVfaK4Sc4m7X3dUSoHoozQGBEFeDC+zVo06t98xe8CzHSZZBekNXH+tu0NalHolcJ/QAGqS46Hef7QXBIMw==} engines: {node: '>=16'} [email protected]: resolution: {integrity: sha512-5dgndWOriYSm5cnYaJNhalLNDKOqFwyDB/rr1E9ZsGciGvKPs8R2xYGCacuf3z6K1YKDz182fd+fY3cn3pMqXQ==} [email protected]: resolution: {integrity: sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ==} engines: {node: '>=6'} [email protected]: resolution: {integrity: sha512-SW13ws7BjaeJ6p7Q6CO2nchbYEc3X3J6WrmTTDto7yMPqVSZTUyY5Tjbid+Ab8gLnATtygYtiDIJGQRRn2ZOiA==} [email protected]: resolution: {integrity: sha512-iIRwTIf0QKV3UAnYK4PU8uiEc4SRh5jX0mwpIwETPpHdhVM4f53RSwS/vXvN1JhGX+Cs7B8qIq3d6AH49O5fAQ==} [email protected]: resolution: {integrity: sha512-GaqWWShW4kv/G9IEucWScBx9G1/vsFZZJUO+tD26M8J8z3Kw5RDQjaoZe03YAClgeS/SWPOcb4nkFBTEi5DUEA==} [email protected]: resolution: {integrity: sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA==} engines: {node: '>=4'} [email protected]: resolution: {integrity: sha512-eu38+hdgojoyq63s+yTpN4XMBdt5l8HhMhc4VKLO9KM5caLIBvUm4thi7fFaxyTmCKeNnXZ5pAlBwCUnhA09uw==} engines: {node: '>=10'} [email protected]: resolution: {integrity: sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==} [email protected]: resolution: {integrity: sha512-eSRppjcPIatRIMC1U6UngP8XFcz8MQWGQdt1MTBQ7NaAmvXDfvNxbvWV3x2y6CdEUciCSsDHDQZbhYaB8QEo2g==} engines: {node: ^10 || ^12 || ^13.7 || ^14 || >=15.0.1} hasBin: true [email protected]: resolution: {integrity: sha512-mnkeQ1qP5Ue2wd+aivTD3NHd/lZ96Lu0jgf0pwktLPtx6cTZiH7tyeGRRHs0zX0rbrahXPnXlUnbeXyaBBuIaw==} engines: {node: '>=18'} 
[email protected]: resolution: {integrity: sha512-xCy9V055GLEqoFaHoC1SoLIaLmWctgCUaBaWxDZ7/Zx4CTyX7cJQLJOok/orfjZAh9kEYpjJa4d0KcJmCbctZA==} [email protected]: resolution: {integrity: sha512-vKiQ8RRtkl9P+r/+oefh25C3fhybptkHKCZSPlcXiJux2tJF55GnEj3BVn4A5gKfq9NWWXXrxkHBwVPUfH0opw==} [email protected]: resolution: {integrity: sha512-anP1Z8qwhkbmu7MFP5iTt+wQKXgwzf7zTyGlcdzabySa9vd0Xt392U0rVmz9poOaBj0uHJKyyo9/upk0HrEQew==} [email protected]: resolution: {integrity: sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==} engines: {node: '>=8.6'} [email protected]: resolution: {integrity: sha512-Wglpdk03BSfXkHoQa3b/oulrotAkwrlLDRSOb9D0bN86FdRyE9lppSp33aHNPgBa0JKCoB+drFLZkQoRRYae5A==} engines: {node: ^10 || ^12 || >=14} [email protected]: resolution: {integrity: sha512-QmJz14PX3rzbJCN1SG4Xe/bAAX2a6NpCP8ab2vfu2GiUr8AQcr2nCV/oEO3yneFarB67zk8ShlIyWb2LGTb3Sg==} engines: {node: '>=18.0.0', npm: '>=8.0.0'} hasBin: true [email protected]: resolution: {integrity: sha512-9by4Ij99JUr/MCFBUkDKLWK3G9HVXmabKz9U5MlIAIuvuzkiOicRYs8XJLxX+xahD+mLiiCYDqF9dKAgtzKP1A==} engines: {node: '>=18'} [email protected]: resolution: {integrity: sha512-xal3CZX1Xlo/k4ApwCFrHVACi9fBqJ7V+mwhBsuf/1IOKbBy098Fex+Wa/5QMubw09pSZ/u8EY8PWgevJsXp1A==} engines: {node: '>=6'} [email protected]: resolution: {integrity: sha512-RVnVQxTXuerk653XfuliOxBP81Sf0+qfQE73LIYKcyMYHG94AuH0kgrQpRDuTZnSmjpysHmzxJXKNfa6PjFhyQ==} [email protected]: resolution: {integrity: sha512-94Bdh3cC2PKrbgSOUqTiGPWVZeSiXfKOVZNJniWoqrWrRkB1CJzBU3NEbiTsPcYy1lDsANA/THzS+9WBiy5nfQ==} engines: {node: '>= 10'} [email protected]: resolution: {integrity: sha512-itJW8lvSA0TXEphiRoawsCksnlf8SyvmFzIhltqAHluXd88pkCd+cXJVHTDwdCr0IzwptSm035IHQktUu1QUMg==} engines: {node: '>=0.10.0'} [email protected]: resolution: {integrity: sha512-Gyc7cOS3VJzLlfj7wKS0ZnzDVdv3Pn2IuVeJPk9m2skfhcu5bq3wtIZyQGggr7/Iim5rH5cncyQft/kRLupcnA==} engines: {node: ^12.20 || ^14.13.1 || >= 16} peerDependencies: svelte: ^3.19.0 || ^4.0.0 [email protected]: resolution: {integrity: sha512-d0FdzYIiAePqRJEb90WlJDkjUEx42xhivxN8muUBmfZnP+tzUgz12DJ2hRJi8sIHCME7jeK1PTMgKPSfTd8JrA==} engines: {node: '>=16'} [email protected]: resolution: {integrity: sha512-g/55ssRPUjShh+xkfx9UPDXqhckHEsHr4Vd9zX55oSdGZc/MD0m3sferOkwWtp98bv+kcVfEHtRJgBVJzelrzg==} [email protected]: resolution: {integrity: sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ==} engines: {node: '>=6'} [email protected]: resolution: {integrity: sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==} engines: {node: '>= 10.0.0'} [email protected]: resolution: {integrity: sha512-G0N3rjfw+AiiwnGw50KlObIHYWfulVwaCBUBLh2xTW9G1eM9ocE5olXkEYUbwyTmX+azM8duubi+9w5awdCz+g==} engines: {node: '>=14'} peerDependencies: '@nuxt/kit': '*' vite: ^3.1.0 || ^4.0.0 || ^5.0.0-0 peerDependenciesMeta: '@nuxt/kit': optional: true [email protected]: resolution: {integrity: sha512-XBmSKRLXLxiaPYamLv3/hnP/KXDai1NDexN0FpkTaZXTfycHvkRHoenpgl/fvuK/kPbB6xAgoyiryAhQNxYmAQ==} engines: {node: ^18.0.0 || >=20.0.0} hasBin: true peerDependencies: '@types/node': ^18.0.0 || >=20.0.0 less: '*' lightningcss: ^1.21.0 sass: '*' stylus: '*' sugarss: '*' terser: ^5.4.0 peerDependenciesMeta: '@types/node': optional: true less: optional: true lightningcss: optional: true sass: optional: true stylus: optional: true sugarss: optional: true terser: optional: true [email protected]: resolution: {integrity: 
sha512-SgHtMLoqaeeGnd2evZ849ZbACbnwQCIwRH57t18FxcXoZop0uQu0uzlIhJBlF/eWVzuce0sHeqPcDo+evVcg8Q==} peerDependencies: vite: ^3.0.0 || ^4.0.0 || ^5.0.0 peerDependenciesMeta: vite: optional: true snapshots: '@ampproject/[email protected]': dependencies: '@jridgewell/gen-mapping': 0.3.5 '@jridgewell/trace-mapping': 0.3.25 '@antfu/[email protected]': {} '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@esbuild/[email protected]': optional: true '@jridgewell/[email protected]': dependencies: '@jridgewell/set-array': 1.2.1 '@jridgewell/sourcemap-codec': 1.4.15 '@jridgewell/trace-mapping': 0.3.25 '@jridgewell/[email protected]': {} '@jridgewell/[email protected]': {} '@jridgewell/[email protected]': {} '@jridgewell/[email protected]': dependencies: '@jridgewell/resolve-uri': 3.1.2 '@jridgewell/sourcemap-codec': 1.4.15 '@polka/[email protected]': {} '@rollup/[email protected]([email protected])': dependencies: '@types/estree': 1.0.5 estree-walker: 2.0.2 picomatch: 2.3.1 optionalDependencies: rollup: 4.18.0 '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@rollup/[email protected]': optional: true '@sveltejs/[email protected](@sveltejs/[email protected](@sveltejs/[email protected]([email protected])([email protected]))([email protected])([email protected]))': dependencies: '@sveltejs/kit': 2.5.16(@sveltejs/[email protected]([email protected])([email protected]))([email protected])([email protected]) '@sveltejs/[email protected](@sveltejs/[email protected]([email protected])([email protected]))([email protected])([email protected])': dependencies: '@sveltejs/vite-plugin-svelte': 3.1.1([email protected])([email protected]) '@types/cookie': 0.6.0 cookie: 0.6.0 devalue: 5.0.0 esm-env: 1.0.0 import-meta-resolve: 4.1.0 kleur: 4.1.5 magic-string: 0.30.10 mrmime: 2.0.0 sade: 1.8.1 set-cookie-parser: 2.6.0 sirv: 2.0.4 svelte: 4.2.18 tiny-glob: 0.2.9 vite: 5.3.1 '@sveltejs/[email protected](@sveltejs/[email protected]([email protected])([email protected]))([email protected])([email 
protected])': dependencies: '@sveltejs/vite-plugin-svelte': 3.1.1([email protected])([email protected]) debug: 4.3.5 svelte: 4.2.18 vite: 5.3.1 transitivePeerDependencies: - supports-color '@sveltejs/[email protected]([email protected])([email protected])': dependencies: '@sveltejs/vite-plugin-svelte-inspector': 2.1.0(@sveltejs/[email protected]([email protected])([email protected]))([email protected])([email protected]) debug: 4.3.5 deepmerge: 4.3.1 kleur: 4.1.5 magic-string: 0.30.10 svelte: 4.2.18 svelte-hmr: 0.16.0([email protected]) vite: 5.3.1 vitefu: 0.2.5([email protected]) transitivePeerDependencies: - supports-color '@types/[email protected]': {} '@types/[email protected]': {} [email protected]: {} [email protected]: dependencies: dequal: 2.0.3 [email protected]: dependencies: dequal: 2.0.3 [email protected]: dependencies: run-applescript: 7.0.0 [email protected]: dependencies: '@jridgewell/sourcemap-codec': 1.4.15 '@types/estree': 1.0.5 acorn: 8.12.0 estree-walker: 3.0.3 periscopic: 3.1.0 [email protected]: {} [email protected]: dependencies: mdn-data: 2.0.30 source-map-js: 1.2.0 [email protected]: dependencies: ms: 2.1.2 [email protected]: {} [email protected]: {} [email protected]: dependencies: bundle-name: 4.1.0 default-browser-id: 5.0.0 [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: optionalDependencies: '@esbuild/aix-ppc64': 0.21.5 '@esbuild/android-arm': 0.21.5 '@esbuild/android-arm64': 0.21.5 '@esbuild/android-x64': 0.21.5 '@esbuild/darwin-arm64': 0.21.5 '@esbuild/darwin-x64': 0.21.5 '@esbuild/freebsd-arm64': 0.21.5 '@esbuild/freebsd-x64': 0.21.5 '@esbuild/linux-arm': 0.21.5 '@esbuild/linux-arm64': 0.21.5 '@esbuild/linux-ia32': 0.21.5 '@esbuild/linux-loong64': 0.21.5 '@esbuild/linux-mips64el': 0.21.5 '@esbuild/linux-ppc64': 0.21.5 '@esbuild/linux-riscv64': 0.21.5 '@esbuild/linux-s390x': 0.21.5 '@esbuild/linux-x64': 0.21.5 '@esbuild/netbsd-x64': 0.21.5 '@esbuild/openbsd-x64': 0.21.5 '@esbuild/sunos-x64': 0.21.5 '@esbuild/win32-arm64': 0.21.5 '@esbuild/win32-ia32': 0.21.5 '@esbuild/win32-x64': 0.21.5 [email protected]: {} [email protected]: {} [email protected]: dependencies: '@types/estree': 1.0.5 [email protected]: dependencies: graceful-fs: 4.2.11 jsonfile: 6.1.0 universalify: 2.0.1 [email protected]: optional: true [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: dependencies: is-docker: 3.0.0 [email protected]: dependencies: '@types/estree': 1.0.5 [email protected]: dependencies: is-inside-container: 1.0.0 [email protected]: dependencies: universalify: 2.0.1 optionalDependencies: graceful-fs: 4.2.11 [email protected]: {} [email protected]: {} [email protected]: dependencies: '@jridgewell/sourcemap-codec': 1.4.15 [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: {} [email protected]: dependencies: default-browser: 5.2.1 define-lazy-prop: 3.0.0 is-inside-container: 1.0.0 is-wsl: 3.1.0 [email protected]: {} [email protected]: dependencies: '@types/estree': 1.0.5 estree-walker: 3.0.3 is-reference: 3.0.2 [email protected]: {} [email protected]: {} [email protected]: dependencies: nanoid: 3.3.7 picocolors: 1.0.1 source-map-js: 1.2.0 [email protected]: dependencies: '@types/estree': 1.0.5 optionalDependencies: '@rollup/rollup-android-arm-eabi': 4.18.0 '@rollup/rollup-android-arm64': 4.18.0 '@rollup/rollup-darwin-arm64': 4.18.0 '@rollup/rollup-darwin-x64': 4.18.0 
'@rollup/rollup-linux-arm-gnueabihf': 4.18.0 '@rollup/rollup-linux-arm-musleabihf': 4.18.0 '@rollup/rollup-linux-arm64-gnu': 4.18.0 '@rollup/rollup-linux-arm64-musl': 4.18.0 '@rollup/rollup-linux-powerpc64le-gnu': 4.18.0 '@rollup/rollup-linux-riscv64-gnu': 4.18.0 '@rollup/rollup-linux-s390x-gnu': 4.18.0 '@rollup/rollup-linux-x64-gnu': 4.18.0 '@rollup/rollup-linux-x64-musl': 4.18.0 '@rollup/rollup-win32-arm64-msvc': 4.18.0 '@rollup/rollup-win32-ia32-msvc': 4.18.0 '@rollup/rollup-win32-x64-msvc': 4.18.0 fsevents: 2.3.3 [email protected]: {} [email protected]: dependencies: mri: 1.2.0 [email protected]: {} [email protected]: dependencies: '@polka/url': 1.0.0-next.25 mrmime: 2.0.0 totalist: 3.0.1 [email protected]: {} [email protected]([email protected]): dependencies: svelte: 4.2.18 [email protected]: dependencies: '@ampproject/remapping': 2.3.0 '@jridgewell/sourcemap-codec': 1.4.15 '@jridgewell/trace-mapping': 0.3.25 '@types/estree': 1.0.5 acorn: 8.12.0 aria-query: 5.3.0 axobject-query: 4.0.0 code-red: 1.0.4 css-tree: 2.3.1 estree-walker: 3.0.3 is-reference: 3.0.2 locate-character: 3.0.0 magic-string: 0.30.10 periscopic: 3.1.0 [email protected]: dependencies: globalyzer: 0.1.0 globrex: 0.1.2 [email protected]: {} [email protected]: {} [email protected]([email protected])([email protected]): dependencies: '@antfu/utils': 0.7.8 '@rollup/pluginutils': 5.1.0([email protected]) debug: 4.3.5 error-stack-parser-es: 0.1.4 fs-extra: 11.2.0 open: 10.1.0 perfect-debounce: 1.0.0 picocolors: 1.0.1 sirv: 2.0.4 vite: 5.3.1 transitivePeerDependencies: - rollup - supports-color [email protected]: dependencies: esbuild: 0.21.5 postcss: 8.4.38 rollup: 4.18.0 optionalDependencies: fsevents: 2.3.3 [email protected]([email protected]): optionalDependencies: vite: 5.3.1
0
repos
repos/vite-plugin-zig/index.d.ts
import { Plugin } from 'vite';

/**
 * Compiles imported `.zig` files to WebAssembly modules.
 * Declared as a default export with an optional options object, matching the
 * implementation in `index.js` (`outDir` defaults to `'wasm'`, `tmpDir` to
 * `os.tmpdir()`).
 */
export default function zig(options?: { outDir?: string; tmpDir?: string }): Plugin;
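// Usage sketch (assumes the defaults in index.js: outDir 'wasm', tmpDir os.tmpdir()):
//
//   import zig from 'vite-plugin-zig';
//   export default { plugins: [zig({ outDir: 'wasm' })], build: { target: 'esnext' } };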
0
repos
repos/vite-plugin-zig/README.md
# vite-plugin-zig

Import WebAssembly modules compiled from Zig files.

## Prerequisites

- Install the [Zig compiler](https://ziglang.org): the binary can be downloaded from the [downloads page](https://ziglang.org/download), built from source by following the [GitHub Wiki instructions](https://github.com/ziglang/zig/wiki/Building-Zig-From-Source), or built using the [zig-bootstrap](https://github.com/ziglang/zig-bootstrap) scripts. As an alternative, the [`@ziglang/cli`](https://github.com/pluvial/node-zig/tree/main/packages/cli) npm package can be added as a dependency, which is useful in a CI environment, for instance.

## Usage

Install with `npm i -D vite-plugin-zig` (or `pnpm add -D` or `yarn add -D`), then add the plugin to your `vite.config.js`:

```js
// vite.config.js
import zig from 'vite-plugin-zig';

/** @type {import('vite').UserConfig} */
export default {
  plugins: [zig()],
  build: { target: 'esnext' },
};
```

Write your Zig code and `export` any symbol to be used in JS code:

```zig
// src/main.zig
export fn add(a: i32, b: i32) i32 {
    return a + b;
}
```

If available, top-level await can be used so that importing the module feels similar to importing a regular JS module:

```js
// example.js
import { instantiate } from './src/main.zig';

// pass any custom importObject here; functions should be declared
// as extern in the Zig file
const importObject = {
  // ...
};

// instantiate the compiled WebAssembly module; this can also be moved
// to a Worker for instantiation in another thread
const { exports, instance } = await instantiate(importObject);

// call exported functions from the exports object
console.log(exports.add(5, 37)); // 42
```
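To make the `importObject` above concrete: host functions passed under the `env` namespace can be declared as `extern` in the Zig file and called like regular functions (the glue generated for `?instantiate` wires up an `env.print` import in the same way, see `index.js`). A minimal sketch, where `print` is a hypothetical host function:

```js
// imports-example.js: a sketch; `print` is a hypothetical host function,
// declared in the Zig file as `extern fn print(result: i32) void;`
import { instantiate } from './src/main.zig';

const importObject = {
  env: {
    // called from the wasm module through the extern declaration
    print: result => console.log('from wasm:', result),
  },
};

const { exports } = await instantiate(importObject);
console.log(exports.add(5, 37)); // 42
```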
As a shorthand to avoid having to manually call `await instantiate()`, the `?instantiate` query parameter can be specified in the module import to both compile and instantiate the module at import time, allowing access to `instance` and `exports`:

```js
import { exports, instance, module } from './src/main.zig?instantiate';

// call exported functions from the exports object
console.log(exports.add(5, 37)); // 42
```

If your Vite config cannot enable top-level await by setting `build: { target: 'esnext' }` (e.g. because the framework you're using enforces a specific target value), an alternative API is provided which instead exposes Promises (`compiled` and `instantiated`, respectively, depending on whether `?instantiate` is used) that resolve when the compilation or instantiation of the module is complete:

```js
// example.js
import { compiled, instantiate, module } from './src/main.zig';

(async () => {
  // `await compiled` can be used to populate the `module` import
  // manually before instantiation if necessary

  // pass any custom importObject here; functions should be declared
  // as extern in the Zig file
  const importObject = {
    // ...
  };

  // instantiate the compiled WebAssembly module; this can also be moved
  // to a Worker for instantiation in another thread
  const { exports, instance } = await instantiate(importObject);

  // call exported functions from the exports object
  console.log(exports.add(5, 37)); // 42
})();
```

```js
// example.js
import {
  exports,
  instance,
  instantiated,
  module,
} from './src/main.zig?instantiate';

(async () => {
  // manually await to populate the imports
  await instantiated;

  // call exported functions from the exports object
  console.log(exports.add(5, 37)); // 42
})();
```

To integrate with SSR frameworks such as SvelteKit, use a dynamic import:

```svelte
<script>
  import { onMount } from 'svelte';

  onMount(async () => {
    const wasm = await import('$lib/main.zig?instantiate');
    await wasm.instantiated;
    console.log(wasm.exports.add(5, 37)); // 42
  });
</script>
```

## Notes and TODOs

- It would be great to have something similar to Rust's `wasm-bindgen` to generate JS glue code and type definitions

## License

[MIT](LICENSE)
0
repos
repos/vite-plugin-zig/index.js
import { spawn } from 'child_process';
import * as fs from 'fs/promises';
import * as path from 'path';
import * as os from 'os';

const ext = '.zig';

// Promisify a spawned child process: resolve on exit code 0, reject otherwise.
const run = p =>
  new Promise((resolve, reject) => {
    p.on('close', code =>
      code === 0
        ? resolve()
        : reject(
            new Error(`Command ${p.spawnargs.join(' ')} failed with error code: ${code}`),
          ),
    );
    p.on('error', reject);
  });

/**
 * @param {object} [options]
 * @param {string} [options.outDir] directory under Vite's assets dir for the emitted .wasm files
 * @param {string} [options.tmpDir] scratch directory for the compiler output
 * @returns {import('vite').Plugin}
 */
export default function zig({ outDir = 'wasm', tmpDir = os.tmpdir() } = {}) {
  /** @type {import('vite').ResolvedConfig} */
  let config;
  /** @type {Map<string, Buffer>} output file path -> compiled wasm bytes */
  const map = new Map();
  return {
    name: 'vite-plugin-zig',
    // resolveId(source, importer, options) {
    //   console.log({ source, importer, options });
    // },
    // load(id, options) {
    //   console.log({ id, options });
    //   if (id.endsWith(ext)) {
    //     console.log(`load ${id}`);
    //   }
    // },
    async transform(code, id, options) {
      // console.log({ code, id, options });
      const [filename, raw_query] = id.split(`?`, 2);
      if (filename.endsWith(ext)) {
        const name = path.basename(filename).slice(0, -ext.length);
        const wasm_file = `${name}.wasm`;
        const temp_file = path.posix.join(tmpDir, wasm_file);
        // TODO: check for dev/prod here
        const mode = 'ReleaseSmall'; // | 'Debug' | 'ReleaseFast' | 'ReleaseSafe'
        const command = `zig build-exe ${filename} -femit-bin=${temp_file} -fno-entry -rdynamic -target wasm32-freestanding -O ${mode}`;
        const [cmd, ...args] = command.split(' ');
        const child = spawn(cmd, args, { stdio: 'inherit' });
        await run(child);
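        // For illustration: with the default options and `src/main.zig`, the
        // command above expands to roughly the following (the temp path is
        // OS-dependent, shown here as /tmp):
        //
        //   zig build-exe src/main.zig -femit-bin=/tmp/main.wasm -fno-entry \
        //     -rdynamic -target wasm32-freestanding -O ReleaseSmall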
        const wasm = await fs.readFile(temp_file);
        // const wasm = await fs.readFile(output_file);
        const dir = path.posix.join(config.build.assetsDir, outDir);
        const output_file = path.posix.join(dir, wasm_file);
        const output_url = path.posix.join(config.base, output_file);
        map.set(output_file, wasm);
        // TODO: was previously using this.emitFile() to have Rollup emit the
        // file with the hashed filename and then referencing it in the exported
        // module, need to find an alternative as currently the wasm filename
        // has no hash
        // const wasm = await fs.readFile(output_file);
        // const referenceId = this.emitFile({
        //   type: 'asset',
        //   source: wasm,
        //   name: wasm_file,
        // });
        // const output_url = `import.meta.ROLLUP_FILE_URL_${referenceId}`;
        const query = new URLSearchParams(raw_query);
        const instantiate = query.get('instantiate') !== null;
        // Generate the JS glue module; four variants depending on whether
        // top-level await is available (build.target === 'esnext') and whether
        // the module was imported with `?instantiate`.
        const glue =
          config.build.target === 'esnext'
            ? instantiate
              ? `
const importObject = { env: { print(result) { console.log(result); } } };
export const { module, instance } = await WebAssembly.instantiateStreaming(fetch("${output_url}"), importObject);
export const { exports } = instance;
`
              : `
export const module = await WebAssembly.compileStreaming(fetch("${output_url}"));
export const instantiate = importObject =>
  WebAssembly.instantiate(module, importObject).then(instance => {
    const { exports } = instance;
    return { instance, exports };
  });
`
            : instantiate
              ? `
const importObject = { env: { print(result) { console.log(result); } } };
export let module, instance, exports;
export const instantiated = WebAssembly.instantiateStreaming(fetch("${output_url}"), importObject).then(result => {
  ({ module, instance } = result);
  ({ exports } = instance);
});
`
              : `
const importObject = { env: { print(result) { console.log(result); } } };
export let module;
export const compiled = WebAssembly.compileStreaming(fetch("${output_url}")).then(result => {
  module = result;
  return module;
});
export const instantiate = importObject =>
  compiled.then(module =>
    WebAssembly.instantiate(module, importObject).then(instance => {
      const { exports } = instance;
      return { instance, exports };
    }),
  );
`;
        return {
          code: glue,
          map: { mappings: '' },
          // moduleSideEffects: false,
        };
      }
    },
    // adapted from vite-plugin-wasm-pack
    buildEnd() {
      // copy xxx.wasm files to /assets/xxx.wasm
      map.forEach((wasm, output_file) => {
        this.emitFile({
          type: 'asset',
          fileName: output_file,
          // name: path.basename(output_file),
          source: wasm,
        });
      });
    },
    // alternative approach used in vite-plugin-wasm-go
    // async closeBundle() {
    //   for (const [output_file, wasm] of map) {
    //     const buildFilename = path.posix.join(config.build.outDir, output_file);
    //     await fs.mkdir(path.dirname(buildFilename), { recursive: true });
    //     await fs.writeFile(buildFilename, wasm);
    //   }
    // },
    configResolved(resolvedConfig) {
      config = resolvedConfig;
    },
    // adapted from vite-plugin-wasm-go
    configureServer(server) {
      // serve the compiled wasm from memory during dev
      server.middlewares.use((req, res, next) => {
        const url = req.url?.replace(/^\//, '') || '';
        if (map.get(url)) {
          res.writeHead(200, { 'Content-Type': 'application/wasm' });
          res.end(map.get(url));
          return;
        }
        next();
      });
    },
  };
}