overflowy/chat-key
https://github.com/overflowy/chat-key
Supercharge your productivity with ChatGPT and AutoHotkey 🚀
<p align="center"><img src="assets/logo.png" alt="ChatKey Logo"></p> <p align="center"> <a href="https://opensource.org/licenses/MIT"> <img src="https://img.shields.io/badge/license-MIT-green.svg" alt="License: MIT"> </a> <a href="https://github.com/overflowy/chat-key/releases/latest"> <img src="https://img.shields.io/github/v/release/overflowy/chat-key?logo=github" alt="Latest Release"> </a> <a href="https://github.com/overflowy/chat-key/releases/latest"> <img src="https://img.shields.io/github/downloads/overflowy/chat-key/total.svg?logo=github" alt="Total Downloads"> </a> <a href="https://www.autohotkey.com"> <img src="https://img.shields.io/badge/powered_by-AutoHotkey-orange?logo=AutoHotkey" alt="Powered By: AutoHokey"> </a> </p> ## About ChatKey is small tool that enables you to use your own ChatGPT/GPT-4 prompts in any application that supports text input. <p align="center"> <a href="https://github.com/overflowy/chat-key/releases/latest"> <img src="assets/screenshot.png" alt="Screenshot"> </a> </p> ## Usage 1. Please ensure that you have configured the OPENAI_TOKEN environment variable with your API key 2. Download the [latest release](https://github.com/overflowy/chat-key/releases/latest) 3. Extract all files from the zip 4. Run `ChatKey.exe` 5. Start typing in any application that supports text input 6. Select the text to use as input for the prompt 7. Press the hotkey to show the popup menu (default: `Alt + .`). 8. Select the prompt from the popup menu 9. Wait for the response to be generated 10. Review the generated response and press `Enter` ## Configuration To configure ChatKey, you can edit the [`config.ini`](config.ini) file provided. ### General settings | Key | Description | Default | | ------------------- | --------------------------------------------------------- | --------- | | `popup_menu_hotkey` | The hotkey to show the popup menu | `Alt + .` | | `replace_text` | Whether to replace the selected text with the response | `0` | | `show_notification` | Whether to show a notification when generating a response | `1` | | `max_input_length` | The maximum length of input text (0 = unlimited) | `0` | ### Adding prompts To add new prompts, you must include a new section in the [`config.ini`](config.ini) file. For instance, if you wish to include a prompt for translating text to French, you can achieve this by appending the following section to the configuration file: ```ini [prompt_translate_to_french] name = Translate to French shortcut = t system_prompt = "I want you to act as a French translator. I will say something in any language and you will translate it to French. The first thing I want you to translate is:" temperature = 0.2 model = gpt-3.5-turbo ``` To ensure that the newly added prompt is available in the popup menu, it must be included in the `[popup_menu]` section. Additionally, if you have already configured multiple prompts, you can tidy up the popup menu by utilizing `---` as a separator. ```ini [popup_menu] --- prompt_translate_to_french ``` The changes will be applied automatically, there's no need to restart ChatKey (the only exception to this rule is the global `popup_menu_hotkey`). ### Prompt settings You can individually configure the parameters of each prompt. If keys with default values are omitted, the default values will be used instead. 
| Key | Description | Default | | ------------------- | -------------------------------------------------------------------------------------------------------- | --------- | | `name` | The name of the prompt that will be displayed in the popup menu | | | `shortcut` | The shortcut key to select the prompt from the popup menu | | | `system_content` | The prompt that will be used to generate the response (required) | | | `model` | The model to use when generating the response, more info [here](https://platform.openai.com/docs/models) | `gpt-3.5` | | `temperature` | The temperature to use when generating the response (0.0 - 2.0) | `0.7` | | `top_p` | The top_p to use when generating the response (0.0 - 1.0) | | | `presence_penalty` | Increase the model's likelihood to talk about new topics (-2.0 - 2.0) | | | `frequency_penalty` | Decrease the model's likelihood to repeat the same line verbatim (-2.0 - 2.0) | | ## Acknowledgements - [cocobelgica](https://github.com/cocobelgica) for the JSON lib - [teadrinker](https://www.autohotkey.com/boards/viewtopic.php?t=113529) for the HBitmapFromResource lib ## License The code in this repository is licensed under the MIT License. See [LICENSE](LICENSE) for more information.
MrTalentDev/go-bitcoin-parser
https://github.com/MrTalentDev/go-bitcoin-parser
null
Bitcoin Blockchain Parser ------ This repository contains the golang code of bitcoin blockchain parser, you can use cli interface to interact with parser or use it as library. Contents -------- - [Feature](#feature) - [Setup](#setup) - [Example](#example) - [Test](#test) Feature -------- - Deserialize block (provide raw data) - Deserialize transaction (provide raw data) - Get Block by hash (provide bitcoin blk folder and index folder) - Get Block by height (provide bitcoin blk folder and index folder) - Get unorder blocks (provide bitcoin blk folder) - Get order blocks (provide bitcoin blk folder and index folder) - Get last height (provide bitcoin blk folder and index folder) Setup ------ ## Setup config We use config to set blockpath and indexpath, if you want to use deserialize block or transaction, you don't need to set config. but if you want to use function involved blk file, you need to set config in file or specify it in cli's variable. ### config path Environmental variables: BITCOIN_PARSER_CONFIG_PATH if you doesn't set env, then default path is parser.conf in current folder. ### parser.conf #### Example ``` blockpath=./test-data/blocks indexpath=./test-data/index ``` ## Build ```shell script go get github.com/syndtr/goleveldb/leveldb go get golang.org/x/crypto/ripemd160 go bulid ``` ## Run ```shell script ./bitcoin-parser ``` Example ------- ## Deserialize block ### code ```go var BlockTestBytes = []byte{ 0x01,0x00,0x00,0x00,0x6f,0xe2,0x8c,0x0a, 0xb6,0xf1,0xb3,0x72,0xc1,0xa6,0xa2,0x46, 0xae,0x63,0xf7,0x4f,0x93,0x1e,0x83,0x65, 0xe1,0x5a,0x08,0x9c,0x68,0xd6,0x19,0x00, 0x00,0x00,0x00,0x00,0x98,0x20,0x51,0xfd, 0x1e,0x4b,0xa7,0x44,0xbb,0xbe,0x68,0x0e, 0x1f,0xee,0x14,0x67,0x7b,0xa1,0xa3,0xc3, 0x54,0x0b,0xf7,0xb1,0xcd,0xb6,0x06,0xe8, 0x57,0x23,0x3e,0x0e,0x61,0xbc,0x66,0x49, 0xff,0xff,0x00,0x1d,0x01,0xe3,0x62,0x99, 0x01,0x01,0x00,0x00,0x00,0x01,0x00,0x00, 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,0x00,0x00,0xff,0xff, 0xff,0xff,0x07,0x04,0xff,0xff,0x00,0x1d, 0x01,0x04,0xff,0xff,0xff,0xff,0x01,0x00, 0xf2,0x05,0x2a,0x01,0x00,0x00,0x00,0x43, 0x41,0x04,0x96,0xb5,0x38,0xe8,0x53,0x51, 0x9c,0x72,0x6a,0x2c,0x91,0xe6,0x1e,0xc1, 0x16,0x00,0xae,0x13,0x90,0x81,0x3a,0x62, 0x7c,0x66,0xfb,0x8b,0xe7,0x94,0x7b,0xe6, 0x3c,0x52,0xda,0x75,0x89,0x37,0x95,0x15, 0xd4,0xe0,0xa6,0x04,0xf8,0x14,0x17,0x81, 0xe6,0x22,0x94,0x72,0x11,0x66,0xbf,0x62, 0x1e,0x73,0xa8,0x2c,0xbf,0x23,0x42,0xc8, 0x58,0xee,0xac,0x00,0x00,0x00,0x00, } block, err := DeserializeBlock(BlockTestBytes, &MainnetParams) if err != nil { t.Errorf("DeserializeBlock error:%s", err) } str, err := block.String() if err != nil { t.Errorf("block to string error:%s", err) } fmt.Println("%s", str) ``` ### cli ```shell script ./bitcoin-parser deserializeblock -net mainnet -raw 010000006fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000982051fd1e4ba744bbbe680e1fee14677ba1a3c3540bf7b1cdb606e857233e0e61bc6649ffff001d01e362990101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff0704ffff001d0104ffffffff0100f2052a0100000043410496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52da7589379515d4e0a604f8141781e62294721166bf621e73a82cbf2342c858eeac00000000 ``` ### output ```json { "hash": "00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048", "blockHeader": { "version": 1, "prevBlockHeaderHash": "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f", "merkleRootHash": 
"0e3e2357e806b6cdb1f70b54c3a3a17b6714ee1f0e68bebb44a74b1efd512098", "time": 1231469665, "nbits": 486604799, "nonce": 2573394689 }, "txnCount": 1, "txns": [ { "hash": "0e3e2357e806b6cdb1f70b54c3a3a17b6714ee1f0e68bebb44a74b1efd512098", "version": 1, "txinCount": 1, "txin": [ { "hash": "0000000000000000000000000000000000000000000000000000000000000000", "index": 4294967295, "scriptBytes": 7, "signatureScript": "04ffff001d0104", "sequence": 4294967295 } ], "txOutCount": 1, "txOut": [ { "value": 5000000000, "pkScriptBytes": 67, "pkScript": { "pkscript": "410496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52da7589379515d4e0a604f8141781e62294721166bf621e73a82cbf2342c858eeac", "pops": [ { "opcode": { "value": "41", "name": "OP_DATA_65" }, "data": "0496b538e853519c726a2c91e61ec11600ae1390813a627c66fb8be7947be63c52da7589379515d4e0a604f8141781e62294721166bf621e73a82cbf2342c858ee" }, { "opcode": { "value": "ac", "name": "OP_CHECKSIG" }, "data": "" } ], "stype": "PubKey", "addresses": [ "12c6DSiU4Rq3P4ZxziKxzrL5LmMBrzjrJX" ] } } ], "witness": null, "lockTime": 0 } ] } ``` ## Deserialize Transaction #### code ```go transactionTestBytes = []byte{ 0x01,0x00,0x00,0x00,0x00,0x01,0x01,0x9b, 0x34,0xe2,0x9c,0x56,0x2d,0xe2,0x0e,0xf3, 0x26,0x68,0x2f,0xa5,0x82,0x67,0x98,0x47, 0xa4,0xdc,0x4a,0xc7,0xd5,0x26,0x4d,0x32, 0x28,0xa1,0x7c,0x8f,0xef,0x91,0x0c,0x01, 0x00,0x00,0x00,0x00,0xff,0xff,0xff,0xff, 0x01,0x16,0xaa,0x0f,0x00,0x00,0x00,0x00, 0x00,0x16,0x00,0x14,0x52,0x15,0x83,0x08, 0xca,0x2e,0x51,0x49,0xd9,0x39,0x73,0x10, 0xec,0x1f,0xe2,0xa4,0xf8,0x8a,0xfb,0x07, 0x02,0x48,0x30,0x45,0x02,0x21,0x00,0xce, 0x94,0x6d,0x85,0x63,0x0e,0x5d,0x4b,0xb2, 0x27,0xf6,0x96,0x78,0xfa,0xff,0x2d,0x39, 0x7a,0xdb,0x25,0x61,0x7c,0xc9,0xb4,0xa5, 0x32,0x3f,0x4b,0x76,0x01,0x21,0xe8,0x02, 0x20,0x4d,0x05,0x0a,0x9c,0x08,0x83,0x30, 0xf2,0x36,0xae,0xc8,0x80,0x75,0xde,0x5c, 0xa6,0x31,0xa5,0xb7,0xa3,0xae,0xa6,0x92, 0x37,0x1d,0x0f,0x99,0xc9,0x62,0xd7,0x8d, 0x85,0x01,0x21,0x03,0x2c,0xc6,0x3e,0xba, 0x4d,0x1f,0x35,0xb6,0x38,0xe7,0x36,0x86, 0xf7,0x68,0xc2,0xc9,0x63,0x99,0xe6,0x9d, 0xc7,0x3f,0xcc,0x39,0xfe,0x30,0xb5,0x43, 0xed,0x85,0x9c,0x19,0x00,0x00,0x00,0x00, } tx, _, err := DesserializeTransaction(transactionTestBytes, &MainnetParams) if err != nil { t.Errorf("TestDesserializeTransaction error: %s", err) } str, err := tx.String() if err != nil { t.Errorf("transaction to string error:%s", err) } fmt.Println("%s", str) ``` ### cli ```shell script ./bitcoin-parser deserializetransaction -net mainnet -raw 010000000001019b34e29c562de20ef326682fa582679847a4dc4ac7d5264d3228a17c8fef910c0100000000ffffffff0116aa0f000000000016001452158308ca2e5149d9397310ec1fe2a4f88afb0702483045022100ce946d85630e5d4bb227f69678faff2d397adb25617cc9b4a5323f4b760121e802204d050a9c088330f236aec88075de5ca631a5b7a3aea692371d0f99c962d78d850121032cc63eba4d1f35b638e73686f768c2c96399e69dc73fcc39fe30b543ed859c1900000000 ``` ### output ```json { "hash": "ff857d64d03182391620a933fc8477372af4bac3a3f737883a57882d7c7d8f36", "version": 1, "txinCount": 1, "txin": [ { "hash": "9b34e29c562de20ef326682fa582679847a4dc4ac7d5264d3228a17c8fef910c", "index": 1, "scriptBytes": 0, "signatureScript": "", "sequence": 4294967295 } ], "txOutCount": 1, "txOut": [ { "value": 1026582, "pkScriptBytes": 22, "pkScript": { "pkscript": "001452158308ca2e5149d9397310ec1fe2a4f88afb07", "pops": [ { "opcode": { "value": "00", "name": "OP_0" }, "data": "" }, { "opcode": { "value": "14", "name": "OP_DATA_20" }, "data": "52158308ca2e5149d9397310ec1fe2a4f88afb07" } ], "stype": "WitnessPubKeyHash", "addresses": [ 
"bc1q2g2cxzx29eg5nkfewvgwc8lz5nug47c8ta5ene" ] } } ], "witness": [ { "WitnessStackCount": 2, "WitnessScript": [ { "WitnessScriptBytes": 72, "WitnessScript": "3045022100ce946d85630e5d4bb227f69678faff2d397adb25617cc9b4a5323f4b760121e802204d050a9c088330f236aec88075de5ca631a5b7a3aea692371d0f99c962d78d8501" }, { "WitnessScriptBytes": 33, "WitnessScript": "032cc63eba4d1f35b638e73686f768c2c96399e69dc73fcc39fe30b543ed859c19" } ] } ], "lockTime": 0 } ``` ## Get Block by hash ### cli ```shell script ./bitcoin-parser -hash 00000000000000322cee8a774b3bc50bd304f92ee63a7d8c7c01bc9e71f182ae -indexpath ./test-data/index -blockpath ./test-data/blocks ``` ## Get Block by height ### cli ```shell script ./bitcoin-parser getblockbyheight -height 131301 -indexpath ./test-data/index -blockpath ./test-data/blocks ``` ## Get unorder blocks blocks in blk files are not in order, this instruction will show original order blocks in blk files. ### cli ```shell script ./bitcoin-parser getunorderblocks -start 0 -end 999999 -name 2 -blockpath ./test-data/blocks ``` ## Get order blocks this instruction will show blocks in blk files in order by height. ### cli ```shell script ./bitcoin-parser getorderblocks -start 0 -end 999999 -name 2 -indexpath ./test-data/index -blockpath ./test-data/blocks ``` ## Get last height ### cli ```shell script ./bitcoin-parser getlastheight -indexpath ./test-data/index -blockpath ./test-data/blocks ``` Test ----- if you want find more use case, you can find in test case. ```shell script go test ```
AI-Club-IIT-Madras/Summer_School_2023
https://github.com/AI-Club-IIT-Madras/Summer_School_2023
AI Club IIT Madras Summer School 2023
# AI Club Summer School 2023 This repository contains all the slides/notes used during the Summer School conducted at IIT Madras. All the session recordings are present in this YouTube playlist - [Link](https://youtube.com/playlist?list=PLWkFppvOIj_RK8MBQEgipsDvahLiQJYQH) ## Tasks | Task | Description | Link | Deadline | | :--- | :---------- | :--- | :------- | | 1 | Data Visualization Challenge | [Task 1](/Task_1/) | July 15th 2023, 11:59 PM | | 2 | Kaggle Contest | [Task 2](/Task_2/), [Kaggle](https://www.kaggle.com/t/382449f886764755bb9259c9e75d374d) | July 27th 2023, 11:59 PM | ## Sessions | Session | Date | Topic | Session Notes/Slides | Recording | | :-----: | :------------ | :-------------------------------------------------- | :------------------- | :------------- | | 1 | July 7th 2023 | Intro to Python, Numpy, Pandas and Matplotlib | [Session 1](/Session_1/) | [Recording](https://www.youtube.com/watch?v=A2U0c21IsfA) | | 2 | July 10th 2023 | Linear, Logistic and Polynomial Regression | [Session 2](/Session_2/) | [Recording](https://www.youtube.com/live/eEGmz0Tm1Ck?feature=share) | | 3 | July 12th 2023 | KNNs, SVMs, Naive Bayes Classifiers | [Session 3](/Session_3/) | [Recording](https://www.youtube.com/live/m7VHg2an9yg?feature=share) | | 4 | July 14th 2023 | Decision Trees, Random Forest and Ensemble methods | [Session 4](/Session_4/) | [Recording](https://youtube.com/live/YczMERvNpPA) | | 5 | July 17th 2023 | Intro to Deep Learning, Neural Networks, Pytorch basics | [Session 5](/Session_5/) | [Recording](https://www.youtube.com/watch?v=ObmItJBoImI&list=PLWkFppvOIj_RK8MBQEgipsDvahLiQJYQH&index=5) | | 6 | July 19th 2023 | Add-ons to improve model performance: Data preprocessing, Feature Engineering, Regularization, Optimizers| [Session 6](/Session_6/) | [Recording](https://www.youtube.com/watch?v=Ia_v6VbdEDU&list=PLWkFppvOIj_RK8MBQEgipsDvahLiQJYQH&index=7) | | 7 | July 21st 2023 | CNNs (Theory and Implementation) | [Session 7](/Session_7/) | [Recording](https://www.youtube.com/watch?v=h45fwxvresA&list=PLWkFppvOIj_RK8MBQEgipsDvahLiQJYQH&index=9) | | 8 | July 24th 2023 | Object Detection - Haar Cascade, HOGs, RCNN, YOLO | [Session 8](/Session_8/) | [Recording](https://www.youtube.com/watch?v=aDmyoCaFQe0&list=PLWkFppvOIj_RK8MBQEgipsDvahLiQJYQH&index=10) | | 9 | July 26th 2023 | Intro to Reinforcement Learning | [Session 9](/Session_9/) | [Recording](https://youtu.be/L57sTJfZfO0) |
ksyncpro/lol-bot
https://github.com/ksyncpro/lol-bot
An account leveling bot for League of Legends written in Python.
## Welcome to lol bot - This is a Python bot that uses the local League Client API to start games and plays them with a simple but effective game loop. - This bot has leveled up [thousands of league accounts](https://www.playerauctions.com/lol-account/) and is still going strong. - No pixel botting or image recognition, which makes it easy to run on any setup. <br> <p align="left"> <img src="https://user-images.githubusercontent.com/32341824/231916860-8cdaa0bb-c808-48f7-8afe-5cd151501a98.gif"> </p> ## Current Features - Start League and log in - Start a Beginner Bot Game - Buy items and somewhat intelligently push mid until the game is over - End the game, earn experience, collect rewards - Loop 🥡🧋 ## Requirements - [League of Legends](https://signup.leagueoflegends.com/en-us/signup/download) - [League of Legends Account](https://signup.leagueoflegends.com/en-us/signup/index) - [Python >= 3.10](https://www.python.org/downloads/) ## Setup - Ensure Python is added to your PATH (check "Add Python to PATH" when installing) - Clone/download the repo - Download [extra RAM](https://downloadmoreram.com/) (only if needed) - Run Installer.bat or ```pip install -r requirements.txt``` ## Optional Configuration - For non-standard League install paths, update the league_dir in constants.py - To continuously level new League accounts: - Implement account.py to connect to your database of League accounts (a minimal sketch is shown below) - Make sure that "stay signed in" is not checked when League of Legends starts - This allows the bot to log out of League by closing it, get new account credentials, restart League, log in with the new credentials, and start another leveling loop, cycling indefinitely - To level accounts past level 30 or play different game types, update the game data variables in constants.py ## Disclaimer Another LoL Bot isn't endorsed by Riot Games and doesn't reflect the views or opinions of Riot Games or anyone officially involved in producing or managing League of Legends. League of Legends and Riot Games are trademarks or registered trademarks of Riot Games, Inc. League of Legends © Riot Games, Inc. This software works with other programs/services (League of Legends), so you take full responsibility for breaking their Terms of Service, take full responsibility for the accounts you use with this bot, and agree to use it at your own risk.
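For the `account.py` integration mentioned above, a minimal sketch backed by a local SQLite database might look like the following. The table name, columns, and function names are illustrative assumptions rather than the bot's actual interface; adapt them to whatever `account.py` is expected to expose.

```python
# Hypothetical account provider: the schema, function names, and return
# format are illustrative assumptions, not the bot's actual API.
import sqlite3

DB_PATH = "accounts.db"  # assumed local database of leveling accounts


def get_account():
    """Return (username, password) for the next account that still needs leveling."""
    with sqlite3.connect(DB_PATH) as conn:
        return conn.execute(
            "SELECT username, password FROM accounts WHERE leveled = 0 LIMIT 1"
        ).fetchone()  # None when every account is done


def mark_leveled(username: str) -> None:
    """Flag an account as finished so the next loop picks a new one."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("UPDATE accounts SET leveled = 1 WHERE username = ?", (username,))
```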
nalgeon/sqlpkg
https://github.com/nalgeon/sqlpkg
SQLite package registry
# The (unofficial) SQLite package registry Given the vast number of SQLite extensions, it's only logical to have a package registry. So here it is. The registry contains only package metadata (specs). Package binaries and sources are located elsewhere (usually in the package developer's repositories). Here is what you can do: - Search the registry at [sqlpkg.org](https://sqlpkg.org/). - Install extensions using the [sqlpkg](https://github.com/nalgeon/sqlpkg-cli) package manager. - Add your own package to the registry. Note that you don't have to add your package to the registry for `sqlpkg` to work with it — it will work just fine with your own repo (see the `sqlpkg` documentation for details). Adding your package to the registry is completely optional. [**Subscribe**](https://antonz.org/subscribe/) for updates 🚀
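For example, installing an extension from the registry with the `sqlpkg` CLI looks roughly like this; the `nalgeon/stats` identifier is only an illustration, and the sqlpkg-cli documentation is the authoritative reference for commands and options:

```sh
# install an extension listed in the registry (package identifier is illustrative)
sqlpkg install nalgeon/stats
```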
declare-lab/flacuna
https://github.com/declare-lab/flacuna
Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is already an excellent writing assistant, and the intention behind Flacuna was to enhance Vicuna's problem-solving capabilities. To achieve this, we curated a dedicated instruction dataset called Flan-mini.
# Flacuna: A Vicuna made of Flan [Paper](https://arxiv.org/abs//2307.02053) | [Model](https://huggingface.co/declare-lab/flacuna-13b-v1.0) | [Dataset](https://huggingface.co/datasets/declare-lab/flan-mini) 📣 We still have numerous experiments awaiting completion (details are [here](https://arxiv.org/abs//2307.02053)), requiring additional computing resources in our lab. If any industry professionals reading this are willing to provide assistance, please feel free to reach out to us at [email protected]. <p align="center"> <img src="https://declare-lab.net/assets/images/logos/flacuna5.png" alt="Image" width="200" height="335"> </p> Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is already an excellent writing assistant, and the intention behind Flacuna was to enhance Vicuna's problem-solving capabilities. To achieve this, we curated a dedicated instruction dataset called Flan-mini. | Dataset Name | Source | Dataset Size | |-----------------------------|------------------------|--------------| | Flan2021 | Flan | 388K | | Public Pool of Prompts | Flan | 320K | | Natural instructions v2 | Flan | 200K | | CoT | Flan | 100K | | Code Search | HF/code_search_net | 100K | | Code Contest | HF/deepmind/code_contests | 50K | | Apps | HF/codeparrot/apps | 50K | | GPT4-Alpaca | GPT-4 | 52K | | Code-Alpaca | ChatGPT | 20K | | ShareGPT | ChatGPT | 60K | | Total | - | 1.34M | ## Quickstart Guide ```python from flacuna import FLACUNA model = FLACUNA("declare-lab/flacuna-13b-v1.0") prompt = ( "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: You are tasked to demonstrate your writing skills in professional or work settings for the following question.\n" "Can you help me write a speech for a graduation ceremony, inspiring and motivating the graduates to pursue their dreams and make a positive impact on the world?\n" "Output: ASSISTANT: " ) decoded = model.generate(prompt) print (decoded) ``` The output is the following: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: You are tasked to demonstrate your writing skills in professional or work settings for the following question. Can you help me write a speech for a graduation ceremony, inspiring and motivating the graduates to pursue their dreams and make a positive impact on the world? Output: ASSISTANT: ​​Dear Graduates, It is my honor to stand before you here today, to celebrate your achievements and to acknowledge your hard work and dedication that you have shown here at the University. I am proud of each one of you, for all that you have accomplished thus far, and for the potential that lies within each of you. As you move on from this stage, I want to remind you that your journey has just begun. The real world awaits you, and with it, comes a myriad of opportunities and challenges. But I believe in each one of you, and I know that you are more than capable of facing whatever lies ahead. You are the future of this world, and I have no doubt that you will make a positive impact on it, in whatever field you choose to pursue. Whether it is in science, technology, education, healthcare, or any other field, your contributions will be valuable and meaningful, and will make a difference in the lives of others. 
So, as you move forward, I urge you to stay true to your passions, to pursue your dreams, and to never give up on what you believe in. Remember, it is not the obstacles that define us, but how we overcome them that truly matters. You have the power to change the world, and I have no doubt that you will. So, go out there, and make the most of every opportunity that comes your way. Make a difference, and leave your mark on this world. Congratulations once again, and best of luck in all that lies ahead. Thank you. ``` ## Problem Solving Ability As a result of this fine-tuning process, Flacuna exhibited notable performance improvements in problem-solving across multiple benchmark datasets, both in few-shot and zero-shot settings. | **Model** | **Size** | **MMLU (5-shot)** | **BBH (3-shot)** | **DROP (3-shot)** | **CRASS (3-shot)** | **HumanEval (0-shot)** | **Avg.** | | --- | --- | --- | --- | --- | --- | --- | --- | | StableVicuna | 13B | 49.2 (+3.0) | 37.5 (+0.4) | 34.3 (-1.0) | 67.5 (+8.7) | 15.9 (+2.5) | 40.9 (+2.7) | | Vicuna | 13B | 50.6 (+4.5) | 37.6 (+0.5) | 32.6 (-3.0) | 60.9 (+2.1) | 11.6 (-1.8) | 38.7 (+0.6) | | Flacuna | 13B | 51.1 (+5.0) | 39.3 (+2.2) | 43.6 (+8.0) | 74.1 (+15.3) | 11.0 (-2.4) | 43.8 (+5.6) | | **Model** | **Size** | **MMLU (0-shot)** | **BBH (0-shot)** | **CRASS (0-shot)** | | --- | --- | --- | --- | --- | | StableVicuna | 13B | 47.5 | 18.5 | 64.2 | | Vicuna | 13B | 48.3 | 28.3 | 65.7 | | Flacuna | 13B | 49.4 | 32.5 | 67.9 | During training, Flacuna is a 13B checkpoint of LLaMA and employed a maximum input sequence length of 1280. We utilized LoRA for parameter-efficient fine-tuning. ## Chatbot / Writing Assistant While Flacuna primarily excels in problem-solving tasks, we made efforts to maintain the impressive writing and chatting ability of Vicuna. To achieve this, we incorporated conversational datasets generated by GPT-4, such as GPT-4-Alpaca and ShareGPT, into the Flan-mini collection. To use Flacuna as a chatbot or writing assistant, we recommend you use the following template: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {definition of the task}.\n\n {question}\n Output: ASSISTANT: ``` **Please note that we still recommend using Vicuna as your preferred Chatbot or Writing Assistant, over Flacuna. Flacuna's primary strength lies in problem-solving tasks, making it ideal for such applications.** The following table presents the writing performance of Flacuna on the IMPACT dataset, which is a component of the InstructEval evaluation suite. The generated responses have been evaluated by ChatGPT, and their relevance and coherence have been scored on a scale of 1 to 5. | **Model** | **Size** | **Informative Rel.** | **Informative Coh.** | **Professional Rel.** | **Professional Coh.** | **Argumentative Rel.** | **Argumentative Coh.** | **Creative Rel.** | **Creative Coh.** | **Avg. Rel.** | **Avg. 
Coh.** | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | ChatGPT | - | 3.34 | 3.98 | 3.88 | 3.96 | 3.96 | 3.82 | 3.92 | 3.94 | 3.78 | 3.93 | | Flan-Alpaca | 11B | 3.56 | 3.46 | 3.54 | 3.70 | 3.22 | 3.28 | 3.70 | 3.40 | 3.51 | 3.46 | | Flan-T5 | 11B | 2.64 | 3.24 | 2.62 | 3.22 | 2.54 | 3.40 | 2.50 | 2.72 | 2.58 | 3.15 | | Dolly-V2 | 12B | 3.54 | 3.64 | 2.96 | 3.74 | 3.66 | 3.20 | 3.02 | 3.18 | 3.30 | 3.44 | | StableVicuna | 13B | 3.54 | 3.64 | 2.96 | 3.74 | 3.30 | 3.20 | 3.02 | 3.18 | 3.21 | 3.44 | | Vicuna | 13B | 3.60 | 3.96 | 3.74 | 3.82 | 3.82 | 3.56 | 3.82 | 3.92 | 3.75 | 3.82 | | Flacuna | 13B | 3.02 | 3.42 | 3.48 | 3.52 | 3.38 | 3.02 | 3.92 | 3.80 | 3.45 | 3.44 | ## Training Flacuna Navigate to the `data` directory and download the Flan-Mini dataset: ```bash cd data wget https://huggingface.co/datasets/declare-lab/flan-mini/resolve/main/flan_mini.json.zip unzip flan_mini.json.zip cd .. ``` You can then use the `train.sh` script for fine-tuning Vicuna on the Flan-Mini dataset: ```bash bash train.sh ``` ## Citation ```bibtex @misc{ghosal2023flacuna, title={Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning}, author={Deepanway Ghosal and Yew Ken Chia and Navonil Majumder and Soujanya Poria}, year={2023}, eprint={2307.02053}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
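Putting the Quickstart and the recommended template together, a minimal sketch of using Flacuna for a writing task might look like the following. It reuses the `FLACUNA` wrapper shown above; the task definition and question strings are placeholders.

```python
from flacuna import FLACUNA

model = FLACUNA("declare-lab/flacuna-13b-v1.0")

# Fill the recommended chat/writing template with your own task definition and question.
task = "You are tasked to demonstrate your writing skills in professional or work settings for the following question."
question = "Can you help me write a short thank-you note to a colleague?"

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: {task}\n\n {question}\n Output: ASSISTANT: "
)

print(model.generate(prompt))
```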
fishzjp/FileTools
https://github.com/fishzjp/FileTools
A tool for generating files of any size
# Background During software development and system testing, you often need to test how a system behaves and copes when the disk is full. Such test scenarios help developers and testers evaluate a system's robustness and performance in resource-constrained environments. However, manually creating large files to fill up the disk is a tedious and time-consuming task. To simplify this process and improve efficiency, I built a file generation tool that can create files of any size on demand, to simulate a full-disk scenario. # Features 1. User-friendly interface: the tool uses the PyQt5 library to provide a graphical interface that is intuitive and easy to operate. The interface offers input fields for the folder path, file name and file size, plus browse and generate buttons, so you can easily pick a path and set the file parameters. 2. Fast file writing: files of any size are written almost instantly (see the sketch below) # Packaging ### The program is packaged with PyInstaller, optionally compressed with UPX; the packaging commands are as follows ``` # Packaging command with UPX compression. Note: --upx-dir= must be replaced with the path of your local UPX installation pyinstaller --onefile --add-data "icon.png;." --add-data "SmileySans-Oblique.ttf;." --add-data "style.qss;." --noconsole --upx-dir=D:\code\file_tools\tools\upx-4.0.2-win64\upx-4.0.2-win64 file_tools.py # Packaging command without UPX compression pyinstaller --onefile --add-data "icon.png;." --add-data "SmileySans-Oblique.ttf;." --add-data "style.qss;." --noconsole file_tools.py ``` # Tool interface ![image](https://github.com/fishzjp/FileTools/assets/105406371/5cb835f9-def3-4a29-bcb4-b5db637a9146) # Download https://github.com/fishzjp/FileTools/releases # Font The UI uses the Smiley Sans (得意黑) typeface: https://github.com/atelier-anchor/smiley-sans # WeChat official account ![QR code](https://github.com/fishzjp/FileTools/assets/105406371/49abfbc1-d46e-410c-98f1-959f2dbfe87a)
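As a rough illustration of how a file of arbitrary size can be created almost instantly, here is a generic Python sketch based on seeking past the end of the file. It is not the tool's actual implementation, and on most filesystems it produces a sparse file.

```python
def create_file(path: str, size_bytes: int) -> None:
    """Create a file of the requested size without writing every byte."""
    with open(path, "wb") as f:
        if size_bytes > 0:
            f.seek(size_bytes - 1)  # jump to the last byte of the requested size
            f.write(b"\0")          # writing one byte sets the file length

create_file("filler.bin", 10 * 1024**3)  # e.g. a 10 GiB file
```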
wasdwasd0105/PicoW-usb2bt-audio
https://github.com/wasdwasd0105/PicoW-usb2bt-audio
Transform your Raspberry Pi Pico W into a USB to Bluetooth Audio Adapter.
# PicoW USB Audio to Bluetooth Adapter The Pico W USB Audio to Bluetooth Adapter transforms your Raspberry Pi Pico W into a high-quality, hassle-free audio streaming device. It allows any device with a USB port, including MacOS, Windows, Linux, Nintendo Switch, PS4/PS5, and more, to gain Bluetooth Audio capability. With the Pico W Adapter, you can easily transmit audio wirelessly from your USB audio source to your Bluetooth headphones or speaker, enhancing your listening experience. To make sure we can keep working on free and open-source projects like this, **please consider becoming a [:heart: Sponsor via PayPal](https://www.paypal.com/donate/?business=UZAK3WFV233ML&no_recurring=0&item_name=Help+me+build+more+project%21&currency_code=USD) or support us via [:coffee: Ko-fi](https://ko-fi.com/wasdwasd0105).** <p align="center"> <img alt="Logo" src="logo.png" width="200"> </p> ### Driver-Free Setup Setting up the Pico W requires no driver or software installation. Simply plug the Pico W into your device's USB port, set your Bluetooth headphones or speakers to pairing mode, and an automatic connection will be made. ### Bluetooth SBC Codec The Pico W Bluetooth Adapter uses the SBC codec to deliver high-quality audio and can stream in stereo. ### Video demo [![video demo](http://img.youtube.com/vi/Dilagi7l4xc/0.jpg)](http://www.youtube.com/watch?v=Dilagi7l4xc "") ## Installation Installing the Pico W USB Audio to Bluetooth Adapter firmware involves flashing a UF2 (USB Flashing Format) file onto your Raspberry Pi Pico. Follow these steps: 1. **Download the UF2 file:** You can find the latest firmware for the PicoW USB Audio to Bluetooth Adapter at the [releases page](https://github.com/wasdwasd0105/PicoW-usb2bt-audio/releases) of the GitHub repository. Download the `.uf2` file from the latest release. 2. **Connect the Pico to your computer:** First, ensure that your Pico is not connected to your computer. Then, hold down the 'BOOTSEL' button on the Pico while you plug it into your computer using a micro USB cable. It will appear on your computer as a mass storage device (like a USB flash drive). 3. **Copy the UF2 file:** Simply drag and drop (or copy and paste) the downloaded UF2 file onto the Pico. 4. **Reset the Pico:** Once the UF2 file has been copied onto the Pico, it will automatically reset and start running the new firmware. ## Usage Using the PicoW USB Audio to Bluetooth Adapter is a straightforward process. Here are the steps to follow: 1. **Connect your Pico W to your audio source device:** Use a USB cable to connect your Raspberry Pi Pico W to the device that you want to stream audio from. 2. **Set the audio output on your source device:** On your audio source device, go to your sound settings and change the audio output device to `USB Bluetooth Audio`. 3. **Pairing a new device:** To pair a new device, long-press the 'BOOTSEL' button on the Pico W and release it; the LED will start blinking fast. Then, put the new Bluetooth device into pairing mode. The Pico W will automatically connect to it. 4. **Start playing audio:** Once everything is set up, you can start playing audio from your source device. The audio will be streamed to your Bluetooth device via the Pico W. 5. **Reconnecting a device:** You can reconnect the headphones by short-pressing the 'BOOTSEL' button. ## Green LED Status Indicator 1. **Blinking Slow (1s):** When the Green LED is blinking slowly, it indicates that audio is currently streaming. 2.
**Blinking Fast (0.5s):** It means that the PicoW Adapter is in pairing mode. 3. **On (Steady Light):** It means that the PicoW Adapter is on standby. Short-press the key to reconnect the last saved device. ## Compile & Debug In order to compile the PicoW USB Audio to Bluetooth Adapter firmware from source code, you need to follow these steps: 1. **Prepare your environment:** Make sure that you have a working development environment for Raspberry Pi Pico projects. This includes having the required compiler and tools installed. You may refer to the [Getting started with Raspberry Pi Pico](https://www.raspberrypi.org/documentation/rp2040/getting-started/) guide for detailed instructions. 2. **Set environment variables:** Before you can build the project, you need to set two environment variables: `PICO_SDK_PATH` and `PICO_EXTRAS_PATH`. For example: ```bash export PICO_SDK_PATH=~/pico-sdk export PICO_EXTRAS_PATH=~/pico-extras ``` 3. **Build the project:** After setting up your environment, navigate to the project directory in a terminal and run the provided build script: ```bash ./build.sh ``` This script should compile the project and produce a UF2 firmware file that you can flash onto your Pico W. 4. **Debug Serial input/output:** You can use uart to see the debug info. Connect the GPIO 0 and 1 as TX and RX. To enable BTstack's serial input, you can uncomment `HAVE_BTSTACK_STDIN` under btstack_config.h ## Acknowledgments This project wouldn't have been possible without the foundational work provided by the following projects: 1. [usb-sound-card](https://github.com/raspberrypi/pico-playground/tree/master/apps/usb_sound_card): It served as a valuable reference for handling USB audio data with the Raspberry Pi Pico. 2. [a2dp_source_demo](https://github.com/bluekitchen/btstack/blob/master/example/a2dp_source_demo.c): The Advanced Audio Distribution Profile (A2DP) source demo provided by the BTstack. ## License This project is licensed under the terms of the Apache License 2.0.
tomquirk/threads-api
https://github.com/tomquirk/threads-api
Threads by Instagram API for Python
# threads-api Threads by Instagram API for Python Just chill. Coming soon
WisdomSky/CasaOS-Coolstore
https://github.com/WisdomSky/CasaOS-Coolstore
Just another CasaOS third-party Appstore with ❄Cool❄ apps.
![CasaOS Coolstore](https://raw.githubusercontent.com/WisdomSky/CasaOS-Coolstore/main/banner.png) # CasaOS Coolstore Just a cool appstore. ⛄ --- #### ☕ Treat WisdomSky a cup of coffee... [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=RYAFEWHVZCZ62) --- ## 📃 Table of Contents - [Introduction](#-introduction) - [Installation](#-installation) - [List of Applications](#-list-of-applications) - [Frequently Asked Questions / FAQs](#-frequently-asked-questions) - [How to Upgrade CasaOS](#-how-to-upgrade-casaos) - ["Error 404: Not Found" during install](#-error-404-not-found-during-install) - [How to uninstall the CasaOS Coolstore](#-how-to-uninstall-the-casaos-coolstore) - [Request an App](#-request-an-app) - [Contributing](#-contributing) --- ## 🔥 Introduction Just an another CasaOS third-party Appstore with ❄**Cool**❄ apps. --- ## ✅ Installation Run the following command to install the appstore: ```bash casaos-cli app-management register app-store https://casaos-appstore.paodayag.dev/coolstore.zip ``` > **NOTE: Custom Appstore is only supported on CasaOS version [0.4.4](https://blog.casaos.io/blog/23.html) and above. How to upgrade? [Click here](#-how-to-upgrade-casaos)** --- ## 🛠 List of Applications | Application | Version | Description | | --- | --- | --- | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/archivebox.png" width="15"/>&nbsp;&nbsp;&nbsp;[Archivebox](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/archivebox) | undefined | Archivebox Is A Powerful, Self-hosted Internet Archiving Solution To Collect, Save, And View Sites You Want To Preserve Offline. | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/caddy.png" width="15"/>&nbsp;&nbsp;&nbsp;[Caddy](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/caddy) | 2.7 | Caddy - The Ultimate Server With Automatic Https. Need To Run Tools/install_caddy.sh Before Installing The Template. | | <img src="https://cdn.jsdelivr.net/gh/WisdomSky/CasaOS-Coolstore@main/Apps/casaos-cloudflared/icon.png" width="15"/>&nbsp;&nbsp;&nbsp;[Cloudflared](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/casaos-cloudflared) | 2023.7.3 | A tunneling daemon by Cloudflare that safely exposes your web servers into the internet. | | <img src="https://raw.githubusercontent.com/jgraph/drawio/dev/src/main/webapp/images/logo.png" width="15"/>&nbsp;&nbsp;&nbsp;[Draw.io](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/draw.io) | undefined | Draw.io (formerly Diagramly) Is Free Online Diagram Software. You Can Use It As A Flowchart Maker, Network Diagram Software, To Create Uml Online, As An Er Diagram Tool, To Design Database Schema, To Build Bpmn Online, As A Circuit Diagram Maker, And More. Draw.io Can Import .vsdx, Gliffy™ And Lucidchart™ Files. | | <img src="https://portainer-io-assets.sfo2.digitaloceanspaces.com/logos/ghost.png" width="15"/>&nbsp;&nbsp;&nbsp;[Ghost](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/ghost) | undefined | Free And Open-source Blogging Platform | | <img src="https://raw.githubusercontent.com/hestiacp/hestiacp/main/web/images/logo.png" width="15"/>&nbsp;&nbsp;&nbsp;[Hestia CP](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/hestiacp) | undefined | An open-source web hosting control panel that allows users to manage their websites, domains, email accounts, databases, and more. 
It provides a user-friendly interface for managing web hosting environments and simplifies various administrative tasks. | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/homebridge.png" width="15"/>&nbsp;&nbsp;&nbsp;[Homebridge](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/homebridge) | undefined | Homebridge Allows You To Integrate With Smart Home Devices That Do Not Natively Support Homekit. There Are Over 2,000 Homebridge Plugins Supporting Thousands Of Different Smart Accessories. | | <img src="https://immich.app/img/logo.png" width="15"/>&nbsp;&nbsp;&nbsp;[Immich](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/immich) | v1.68.0 | Self-hosted photos and videos backup tool | | <img src="https://portainer-io-assets.sfo2.digitaloceanspaces.com/logos/joomla.png" width="15"/>&nbsp;&nbsp;&nbsp;[Joomla](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/joomla) | undefined | Another Free And Open-source Cms | | <img src="https://raw.githubusercontent.com/Kareadita/Kavita/main/UI/Web/src/assets/images/logo.png" width="15"/>&nbsp;&nbsp;&nbsp;[Kavita](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/kavita) | undefined | Kavita Manga, Comic And E-book Server | | <img src="https://raw.githubusercontent.com/gotson/komga/master/.github/readme-images/app-icon.png" width="15"/>&nbsp;&nbsp;&nbsp;[Komga](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/komga) | undefined | Koomga Manga, Comic And E-book Server | | <img src="https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Images/ytdlm.png" width="15"/>&nbsp;&nbsp;&nbsp;[Metube](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/metube-youtubedl) | undefined | Web Gui For Youtube-dl With Playlist Support. Allows You To Download Videos From Youtube And Dozens Of Other Sites (https://ytdl-org.github.io/youtube-dl/supportedsites.html). | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/minecraft.png" width="15"/>&nbsp;&nbsp;&nbsp;[Minecraft Server](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/minecraft) | undefined | This Docker Image Provides A Minecraft Server That Will Automatically Download The Latest Stable Version At Startup. You Can Also Run/upgrade To Any Specific Version Or The Latest Snapshot. See The Versions Section Below For More Information. 
| | <img src="https://raw.githubusercontent.com/docker-library/docs/757578e3a44e5460a8a11d32a81776f8b74231a9/eclipse-mosquitto/logo.png" width="15"/>&nbsp;&nbsp;&nbsp;[Eclipse Mosquitto](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/mosquitto) | undefined | Eclipse Mosquitto Is An Open Source Implementation Of A Server For Versions 5, 3.1.1, And 3.1 Of The Mqtt Protocol.you Will Need To Create /portainer/files/appdata/config/mosquitto/config/mosquitto.conf And If Needed Your Password File.have A Look On Https://mosquitto.org/man/mosquitto_passwd-1.html | | <img src="https://raw.githubusercontent.com/pucherot/Pi.Alert/main/docs/img/1_devices.jpg" width="15"/>&nbsp;&nbsp;&nbsp;[Pi.alert](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/pi-alert) | undefined | Wifi / Lan Intruder Detector | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/privatebin.png" width="15"/>&nbsp;&nbsp;&nbsp;[Privatebin](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/privatebin) | undefined | Privatebin Is A Minimalist, Open Source Online Pastebin Where The Server Has Zero Knowledge Of Pasted Data! | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/reactiveresume.png" width="15"/>&nbsp;&nbsp;&nbsp;[Reactive Resume](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/reactive-resume) | undefined | A One-of-a-kind Resume Builder That's Not Out To Get Your Data. Completely Secure, Customizable, Portable, Open-source And Free Forever. | | <img src="https://raw.githubusercontent.com/Readarr/Readarr/develop/Logo/128.png" width="15"/>&nbsp;&nbsp;&nbsp;[Readarr](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/readerr) | nightly-0.3.1.2095 | Readarr Usent Server | | <img src="https://static.cdnlogo.com/logos/r/32/rocket-chat.svg" width="15"/>&nbsp;&nbsp;&nbsp;[Rocket. Chat](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/rocket-chat) | undefined | Rocket. Chat is a customizable open source communications platform for organizations with high standards of data protection. | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/speedtest-tracker.png" width="15"/>&nbsp;&nbsp;&nbsp;[Speedtest Tracker](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/speedtest-tracker) | undefined | Run A Speedtest Every Hour And Graph The Results. See Https://hub.docker.com/r/henrywhitaker3/speedtest-tracker/tags For Arch Options | | <img src="https://raw.githubusercontent.com/xneo1/portainer_templates/master/Images/statping.png" width="15"/>&nbsp;&nbsp;&nbsp;[Statping](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/statping) | undefined | An Easy To Use Status Page For Your Websites And Applications. Statping Will Automatically Fetch The Application And Render A Beautiful Status Page With Tons Of Features For You To Build An Even Better Status Page. | | <img src="https://docs.tandoor.dev/logo_color.svg" width="15"/>&nbsp;&nbsp;&nbsp;[Tandoor](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/tandoor) | undefined | The Recipe Manager That Allows You To Manage Your Ever Growing Collection Of Digital Recipes. | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/traefik.png" width="15"/>&nbsp;&nbsp;&nbsp;[Traefik](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/traefik) | undefined | Cloud-native Networking Stack That Just Works. 
| | <img src="https://www.saashub.com/images/app/service_logos/55/2901389fab77/large.png?1561117248" width="15"/>&nbsp;&nbsp;&nbsp;[Trilium](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/trilium) | undefined | Trilium Notes Is A Hierarchical Note Taking Application With Focus On Building Large Personal Knowledge Bases | | <img src="https://raw.githubusercontent.com/jake-walker/selfhosted_templates/wallabag/Images/wallabag.png" width="15"/>&nbsp;&nbsp;&nbsp;[Wallabag](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/wallabag) | undefined | Wallabag Is A Self Hostable Application For Saving Web Pages: Save And Classify Articles. Read Them Later. Freely. | | <img src="https://raw.githubusercontent.com/pi-hosted/pi-hosted/master/images/wireguard.png" width="15"/>&nbsp;&nbsp;&nbsp;[Wireguard Server](https://github.com/WisdomSky/CasaOS-Coolstore/tree/main/Apps/wg-easy) | 7 | Fast Vpn Server With Easy To Use User Interface | ## 💡 Frequently Asked Questions ### 👉 How to Upgrade CasaOS Run the following command: curl -fsSL https://get.casaos.io/update/v0.4.4-alpha | sudo bash ### 👉 Error 404 Not Found during install This could be caused by your CasaOS running on a port other than the default `port 80`. You need to add the `-u` flag at the end to tell command which port your CasaOS is running: ```bash casaos-cli app-management register app-store https://casaos-appstore.paodayag.dev/coolstore.zip -u "localhost:<my-casa-os-port>" ``` Replace `<my-casa-os-port>` with the port where your CasaOS is running. For example if my CasaOS is running on port 99: ```bash casaos-cli app-management register app-store https://casaos-appstore.paodayag.dev/coolstore.zip -u "localhost:99" ``` ### 👉 How to uninstall the CasaOS Coolstore Get the assigned ID of the Coolstore: casaos-cli app-management list app-stores Unregister the CasaOS Coolstore: casaos-cli app-management unregister app-store <coolstore-id> > NOTE: Replace `<coolstore-id>` with the corresponding ID of the CasaOS Coolstore. --- ### 🎈 Request an App You can request an app to be added into the list by submitting a request [here](https://github.com/WisdomSky/CasaOS-Coolstore/issues/new?assignees=&labels=app-request&projects=&template=app-request.md&title=%5BApp+Request%5D+%3CApp+Name+Here%3E). --- ### 💙 Contributing > ☠ WARNING: Please refrain from submitting PRs into this repository. You can contribute changes or updates to this Appstore through [CasaOS-Cool-Appstore-Generator](https://github.com/WisdomSky/CasaOS-Cool-Appstore-Generator).
PABannier/bark.cpp
https://github.com/PABannier/bark.cpp
Port of Suno AI's Bark in C/C++ for fast inference
# bark.cpp (coming soon!) Inference of SunoAI's bark model in pure C/C++ using [ggml](https://github.com/ggerganov/ggml). ## Description The main goal of `bark.cpp` is to synthesize audio from a textual input with the [Bark](https://github.com/suno-ai/bark) model. Bark has essentially 4 components: - [x] Semantic model to encode the text input - [x] Coarse model - [x] Fine model - [ ] Encoder (quantizer + decoder) to generate the waveform from the tokens ## Roadmap - [ ] Quantization - [ ] FP16 - [ ] Swift package for iOS devices
Overdrive141/vite-shadcnui-boilerplate
https://github.com/Overdrive141/vite-shadcnui-boilerplate
A boilerplate React Project with ShadcnUI, Tailwind CSS, powered by Vite that supports both JS & TS.
<h1 align="center">Vite React Project Boilerplate (TS/JS) (ShadcnUI & Tailwind)</h1> <p> <img alt="Version" src="https://img.shields.io/badge/version-0.0.0-blue.svg?cacheSeconds=2592000" /> <a href="#" target="_blank"> <img alt="License: MIT" src="https://img.shields.io/badge/License-MIT-yellow.svg" /> </a> <a href="https://twitter.com/FarhanSoftware" target="_blank"> <img alt="Twitter: FarhanSoftware" src="https://img.shields.io/twitter/follow/FarhanSoftware.svg?style=social" /> </a> </p> ## Main configurations - React 18 that supports both TypeScript & JavaScript out of the box - You can use absolute imports, using `@` as `src` directory - Support for [ShadcnUI](https://ui.shadcn.com) & Tailwind CSS ## Usage Install the dependencies: ```sh yarn install or npm install ``` Run dev server: ```sh yarn dev or npm run dev ``` ## Production version To generate the production version, you can run: ```sh yarn build or npm run build ``` All files you have to deploy will be located at the `dist` directory. ## Author 👤 **Farhan H** * Website: https://farhanh.dev * Twitter: [@FarhanSoftware](https://twitter.com/FarhanSoftware) ## 🤝 Contributing Contributions, issues and feature requests are welcome!<br />Feel free to report on them on the issues page.
ManeraKai/fediredirect
https://github.com/ManeraKai/fediredirect
null
# FediRedirect A browser extension that redirects you from a fediverse instance to your favorite instance. Here's a [demo](https://youtu.be/96wBlCna8I4). <a href="https://addons.mozilla.org/firefox/addon/fediredirect/"> <img src ="assets/badge-amo.png" height=60 > </a> It currently supports: - [Mastodon](https://joinmastodon.org) - [Soapbox](https://soapbox.pub) - [Lemmy](https://join-lemmy.org) - [Calckey](https://calckey.org) - [Peertube](https://joinpeertube.org) - [Pleroma](https://pleroma.social)
StepL7/interpark-ticket
https://github.com/StepL7/interpark-ticket
A Python ticket-grabbing script
# interpark-ticket A Python ticket-grabbing script for Interpark. The code is currently quite tailored to one use case: grabbing SEVENTEEN tickets.
keep-starknet-strange/madara-infra
https://github.com/keep-starknet-strange/madara-infra
Infrastructure related stuff for Madara.
<div align="center"> <img src="images/readme-header.png" height="256"> <h2>🏗 Madara: Infrastructure</h2> <h6>Main repository for handling the infrastructure efforts for Madara</h6> <a href="https://github.com/keep-starknet-strange/madara"> <img src="https://img.shields.io/badge/Official%20Repository-Madara-red"/> </a> </div> ## Table of Contents - [Disclaimer](#disclaimer) - [Getting Started](#getting-started) - [Dependencies](#dependencies) - [Local Testnet](#local-testnet) - [Docker/Kubernetes](#docker-kubernetes) - [Ansible](#ansible) - [Terraform](#terraform) - [Scripts](#scripts) - [Starknet Stack](#starknet-stack) - [Apibara](/starknet-stack#apibara) - [Kakarot](/starknet-stack#kakarot) - [Madara](/starknet-stack#madara) - [Madara App](/starknet-stack#madara-app) - [Madara Explorer](/starknet-stack#madara-explorer) - [Madara Swap](/starknet-stack#madara-swap) - [Madara Telemetry](/starknet-stack#telemetry) - [Starkcet](/starknet-stack#starkcet) - [Starken](/starknet-stack#starken) - [Starksheet](/starknet-stack#starksheet) ## ⚠️ Disclaimer Hey there! Just a heads-up: while we've put in the effort, we can't promise everything here's perfect. Using the tools or guides in this repo? Be aware they can deploy or undeploy things automatically, which might slap extra charges on your cloud bills. Always double-check your actions, and remember to keep backups. Dive in at your own risk! ## 🚀 Getting Started Welcome to the Madara: Infrastructure repository! If you're keen to set up, manage, or fiddle with Madara's infrastructure, you're in the right place. This guide will walk you through the initial steps to get up and running. Before diving into the deep end, ensure you've got the necessary tools and permissions. ### Dependencies 1. Git: This one's a no-brainer. You'll need Git to clone and manage versions of this repository. 2. Docker: We use containerization a lot here, so ensure Docker's installed and ready. 3. Extra Tools: Some deployments might need specific CLI tools, like Ansible or Kubernetes. Keep them handy! 📌 Note: This is just a starter pack. As you dive into specific sections, there might be more tools or configurations needed. Always refer to the relevant documentation. ## Local Testnet ## Docker/Kubernetes ## Ansible ## Terraform ## Scripts ## Starknet Stack Starknet, at its core, is a robust and dynamic ecosystem. The stack we've assembled here is a collection of services and tools designed to synergize and empower your Madara endeavors. You can check more information on the [Starknet Stack README](/starknet-stack).
msdirtbag/mcspattack
https://github.com/msdirtbag/mcspattack
ATT&CK Navigator Layer for Microsoft Cloud Security Platforms
# mcspattack ATT&amp;CK Navigator Layer for Microsoft Cloud Security Platforms
oskarek/swift-json-parsing
https://github.com/oskarek/swift-json-parsing
JSON decoding and encoding, using the swift-parsing library
# JSONParsing [![CI](https://github.com/oskarek/swift-json-parsing/actions/workflows/ci.yml/badge.svg)](https://github.com/oskarek/swift-json-parsing/actions/workflows/ci.yml) [![](https://img.shields.io/endpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2Foskarek%2Fswift-json-parsing%2Fbadge%3Ftype%3Dswift-versions)](https://swiftpackageindex.com/oskarek/swift-json-parsing) [![](https://img.shields.io/endpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2Foskarek%2Fswift-json-parsing%2Fbadge%3Ftype%3Dplatforms)](https://swiftpackageindex.com/oskarek/swift-json-parsing) A library for decoding and encoding json, built on top of @pointfreeco's [Parsing](https://github.com/pointfreeco/swift-parsing) library. * [Introduction](#introduction) * [Quick start](#quick-start) * [Motivation - why not use Codable?](#motivation---why-not-use-codable) * [The `JSONValue` type](#the-jsonvalue-type) * [The JSON parsers](#the-json-parsers) * [Null](#null) * [JSONBoolean](#jsonboolean) * [JSONNumber](#jsonnumber) * [JSONString](#jsonstring) * [JSONArray](#jsonarray) * [JSONObject](#jsonobject) * [Field](#field) * [OptionalField](#optionalfield) * [Integration with Codable](#integration-with-codable) * [Benchmarks](#benchmarks) * [Installation](#installation) --- ## Introduction As mentioned above, this library is built using the [Parsing](https://github.com/pointfreeco/swift-parsing) library, which is a library that provides a consistent story for writing _parsing_ code in Swift, that is, code that turns some _unstructured data_ into more _structured data_. You do that by constructing _parsers_ that are generic over both the (unstructured) _input_ and the (structued) _output_. What's really great is the fact the these parsers can be made _invertible_ (or bidirectional), meaning they can also turn structured data _back_ into unstructured data, referred to as _printing_. The *JSONParsing* library provides predefined parsers tuned specifically for when the _input is json_, giving you a convenient way of writing parsers capable of parsing (decoding) and printing (encoding) json. This style of dealing with json has a number of benefits compared to the *Codable* abstraction. More about that in the [Motivation](#motivation---why-not-use-codable) section. ## Quick start Let's see what it looks like to decode and encode json data using this library. Imagine, for example, you have json describing a movie: ```swift let json = """ { "title": "Interstellar", "release_year": 2014, "director": "Christopher Nolan", "stars": [ "Matthew McConaughey", "Anne Hathaway", "Jessica Chastain" ], "poster_url": "https://www.themoviedb.org/t/p/w1280/gEU2QniE6E77NI6lCU6MxlNBvIx.jpg", "added_to_favorites": true } """.data(using: .utf8)! ``` First, we define a corresponding `Movie` type: ```swift struct Movie { let title: String let releaseYear: Int let director: String let stars: [String] let posterUrl: URL? 
let addedToFavorites: Bool } ``` Then, we can create a _JSON parser_, to handle the decoding of the json into this new data type: ```swift extension Movie { static var jsonParser: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { Field("title") { String.jsonParser() } Field("release_year") { Int.jsonParser() } Field("director") { String.jsonParser() } Field("stars") { JSONArray { String.jsonParser() } } OptionalField("poster_url") { URL.jsonParser() } Field("added_to_favorites") { Bool.jsonParser() } } } } ``` Now, the `Movie.jsonParser` can be used to decode json data into `Movie` instances: ```swift let decodedMovie = try Movie.jsonParser.decode(json) print(decodedMovie) // Movie(title: "Interstellar", releaseYear: 2014, director: "Christopher Nolan", stars: ["Matthew McConaughey", "Anne Hathaway", "Jessica Chastain"], posterUrl: Optional(https://www.themoviedb.org/t/p/w1280/gEU2QniE6E77NI6lCU6MxlNBvIx.jpg), addedToFavorites: true) ``` But what's even cooler is that the very same parser, without any extra work, can also be used to _encode_ movie values into json: ```swift let jokerMovie = Movie( title: "Joker", releaseYear: 2019, director: "Todd Phillips", stars: ["Joaquin Phoenix", "Robert De Niro"], posterUrl: URL(string: "https://www.themoviedb.org/t/p/w1280/udDclJoHjfjb8Ekgsd4FDteOkCU.jpg")!, addedToFavorites: true ) let jokerJson = try Movie.jsonParser.encode(jokerMovie) print(String(data: jokerJson, encoding: .utf8)!) // {"added_to_favorites":true,"director":"Todd Phillips","poster_url":"https://www.themoviedb.org/t/p/w1280/udDclJoHjfjb8Ekgsd4FDteOkCU.jpg","release_year":2019,"stars":["Joaquin Phoenix","Robert De Niro"],"title":"Joker"} ``` More information about the building blocks for constructing the JSON parsers can be found under the [The JSON parsers](#the-json-parsers) section. ## Motivation - why not use Codable? The default way to work with JSON in Swift is with Apple's own Codable framework. While it is a powerful abstraction, it does have some drawbacks and limitations. Let's explore some of them and see how the JSONParsing library addresses these issues. ### More than one JSON representation One limitation of the Codable framework is that any given type can only have _one_ way of being represented as JSON. To work around this limitation, one common approach is to introduce wrapper types that wrap a value of the result type and have a custom Decodable implementation. Then, when decoding the type, you first decode to the wrapper type and then extract the underlying value. While this approach works, it's cumbersome to introduce a new type solely for handling JSON decoding. Moreover, the wrapper type needs to be explicitly used whenever you want to decode to the underlying type with that specific decoding strategy. As an example, let's consider the following type representing an RGB color: ```swift struct RGBColor { let red: Int let green: Int let blue: Int } ``` What would be the corresponding json representation for this type? Would it be something like: ```json { "red": 205, "green": 99, "blue": 138 } ``` Or perhaps: ```json "205,99,138" ``` The truth is, both representations are reasonable (as well as many other possibilities), and it's possible that you might have one API endpoint returning RGB colors in the first format, and another in the second format. But when using Codable, you would have to choose one of the formats to be the one used for the `RGBColor` type. 
To handle both variants, you would have to define two separate types, something like `RGBColorWithObjectRepresentation` and `RGBColorWithStringRepresentation`, and conform both of them to `Codable`, with the different decoding/encoding strategies. Using the *JSONParsing* library, you can easily just create two separate parsers, one for each alternative: ```swift extension RGBColor { static var jsonParserForObjectRepresentation: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { Field("red") { Int.jsonParser() } Field("green") { Int.jsonParser() } Field("blue") { Int.jsonParser() } } } static var jsonParserForStringRepresentation: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { JSONString { Int.parser() "," Int.parser() "," Int.parser() } } } } ``` And now you can use whichever suits best in the given situation: ```swift // in one place in the app let colorJson1 = """ { "red": 205, "green": 99, "blue": 138 } """.data(using: .utf8)! // decode let color1 = try RGBColor.jsonParserForObjectRepresentation.decode(colorJson1) print(color1) // RGBColor(red: 205, green: 99, blue: 138) // encode let newColorJson1 = try RGBColor.jsonParserForObjectRepresentation.encode(color1) print(String(data: newColorJson1, encoding: .utf8)!) // {"blue":138,"green":99,"red":205} // in another place in the app let colorJson2 = """ "55,190,25" """.data(using: .utf8)! // decode let color2 = try RGBColor.jsonParserForStringRepresentation.decode(colorJson2) print(color2) // RGBColor(red: 205, green: 99, blue: 138) // encode let newColorJson2 = try RGBColor.jsonParserForStringRepresentation.encode(color2) print(String(data: newColorJson2, encoding: .utf8)!) // "55,190,25" ``` If you want, you could even define a configurable function, dealing with both variants in the same place: ```swift extension RGBColor { static func jsonParser(useStringRepresentation: Bool = false) -> some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { if useStringRepresentation { JSONString { Int.parser() "," Int.parser() "," Int.parser() } } else { Field("red") { Int.jsonParser() } Field("green") { Int.jsonParser() } Field("blue") { Int.jsonParser() } } } } } try RGBColor.jsonParser(useStringRepresentation: false).decode(colorJson1) // RGBColor(red: 205, green: 99, blue: 138) try RGBColor.jsonParser(useStringRepresentation: true).decode(colorJson2) // RGBColor(red: 205, green: 99, blue: 138) ``` #### The `Date` type Perhaps the most common way to run into the limitation of a type only being able to have one single `Codable` conformance, is when dealing with the `Date` type. In fact, it's so common, that the Codable framework even provides a special way of managing how `Date` types are decoded/encoded, through the `dateDecodingStrategy`/`dateEncodingStrategy` properties available on `JSONDecoder` and `JSONEncoder`, respectively. While this does work, it's a little weird to have special handling for _one_ specific type, that looks nothing like how you deal with all the other types. Also, having the configuration on the Encoder/Decoder types, means you can't have more than one date format in the same json object. With *JSONParsing* on the other hand, the `Date` type doesn't have to be handled as an exception. We saw above with the `RGBColor` type, that we can just create a parser that matches the required representation that is used in the JSON API. 
The library also extends the `Date` type with a static `jsonParser(formatter:)` method, which allows constructing a json parser that decodes/encodes dates according to a given `DateFormatter`: ```swift let json = """ { "date1": "1998-11-20", "date2": "2021-06-01T13:09:09Z" } """.data(using: .utf8)! struct MyType { let date1: Date let date2: Date } let basicFormatter = DateFormatter() basicFormatter.dateFormat = "yyyy-MM-dd" let isoFormatter = DateFormatter() isoFormatter.dateFormat = "yyyy-MM-dd'T'HH':'mm':'ss'Z'" extension MyType { static var jsonParser: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { Field("date1") { Date.jsonParser(formatter: basicFormatter) } Field("date2") { Date.jsonParser(formatter: isoFormatter) } } } } let parsedValue = try MyType.jsonParser.decode(json) print(parsedValue) // MyType(date1: 1998-11-20 00:00:00 +0000, date2: 2021-06-01 13:09:09 +0000) let encodedJson = try MyType.jsonParser.encode(parsedValue) print(String(data: encodedJson, encoding: .utf8)!) // {"date1":"1998-11-20","date2":"2021-06-01T13:09:09Z"} ``` ### Decoding and encoding logic out of sync Codable has the really cool feature of being able to automatically synthesize the decoding and encoding implementations for Swift types, thanks to integration with the Swift compiler. Unfortunately, in practice, the automatically synthesized implementations will often not be correct for your use case, because it assumes that your json data and your Swift data types _exactly_ match each other in structure. This will often not be the case, for various reasons. First, you might be dealing with JSON APIs that you don't own yourself and therefore might deliver data in a format that isn't ideal to your use case. But even if you do own the API code, it might be used by multiple platforms, which means you can't tailor it specifically to work perfectly with your Swift code. Also, Swift has some features, such as enums, that simply _can't_ be expressed equivalently in json. So in practice, when using Codable, you will often have to implement the decoding and encoding logic manually. And the problem in that situation, is that they have to be implemented _separately_. This means that, whenever the expected json format changes in any way, you have to remember to update both the `init(from:)` (decoding) and the `encode(to:)` (encoding) implementations accordingly. With *JSONParsing* on the other hand, you can write a single json parser that can take care of both the decoding and the encoding (as was shown in the [Quick start](#quick-start) section). What this means is that you are guaranteed to always have the two transformations kept in sync as your json API evolves. ### Custom String parsing Recall how we previously defined a json parser for the `RGBColor` type, where the json representation was a comma separated string. It looked like this: ```swift extension RGBColor { static var jsonParserForStringRepresentation: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { JSONString { Int.parser() "," Int.parser() "," Int.parser() } } } } let colorJson = """ "55,190,25" """.data(using: .utf8)! let color = try RGBColor.jsonParserForStringRepresentation.decode(colorJson) print(color) // RGBColor(red: 55, green: 190, blue: 25) let newColorJson2 = try RGBColor.jsonParserForStringRepresentation.encode(color2) print(String(data: newColorJson2, encoding: .utf8)!) 
// "55,190,25" ``` In that example, it was used to highlight the fact that we can handle different json representations for the same type. However, it actually also shows off _another_ great thing about the library, which is how its integration with the *Parsing* library makes it very convenient to deal with types whose json representation requires custom String transformations. Let's try to accomplish the same thing using Codable: ```swift extension RGBColor: Decodable { init(from decoder: Decoder) throws { let container = try decoder.singleValueContainer() let stringValue = container.decode(String.self) self.red = ??? self.green = ??? self.blue = ??? } } ``` How do we get the rgb components from the decoded String? The Codable abstraction doesn't really provide a general answer to this. We could of course use the *Parsing* library here if we want: ```swift extension RGBColor: Decodable { init(from decoder: Decoder) throws { let container = try decoder.singleValueContainer() let stringValue = try container.decode(String.self) self = try Parse(Self.init) { Int.parser() "," Int.parser() "," Int.parser() } .parse(stringValue) } } ``` But it's not as seamlessly integrated into the rest of the code, as it was in the *JSONParsing* example, forcing us to manually call out to the `parse` method for instance. And also, again, this is only half of the equation, we still have to deal with the encoding, which has to be implemented on its own. ### JSON with alternative representations Imagine that you are working with an api that delivers a list of ingredients in the following format: ```swift let ingredientsJson = """ [ { "name": "milk", "amount": { "value": 2, "unit": "dl" } }, { "name": "salt", "amount": "a pinch" } ] """.data(using: .utf8)! ``` As you can see, the `amount` can _either_ be expressed as a combination of a value and a unit, _or_ a string. In Swift, this is most naturally represented using an enum: ```swift struct Ingredient { enum Amount { case exact(value: Int, unit: String) case freetext(String) } let name: String let amount: Amount } ``` In this situation, we cannot get a suitable synthesized conformance to `Codable` for the `Amount` type, so we have no choice but to implement the methods ourselves. Let's do the `Decodable` conformance: ```swift extension Ingredient.Amount: Decodable { enum CodingKeys: CodingKey { case unit case value } init(from decoder: Decoder) throws { do { let container = try decoder.singleValueContainer() self = .freetext(try container.decode(String.self)) } catch { let container = try decoder.container(keyedBy: CodingKeys.self) let value = try container.decode(Int.self, forKey: .value) let unit = try container.decode(String.self, forKey: .unit) self = .exact(value: value, unit: unit) } } } ``` For the `Ingredient` type we can just use the automatically synthesized conformance: ```swift extension Ingredient: Decodable {} ``` Now we can use a `JSONDecoder` to decode the `ingredientsJson` into a list of `Ingredient`: ```swift let ingredients = try JSONDecoder().decode([Ingredient].self, from: ingredientsJson) print(ingredients) // [Ingredient(name: "milk", amount: Ingredient.Amount.exact(value: 2, unit: "dl")), Ingredient(name: "salt", amount: Ingredient.Amount.freetext("a pinch"))] ``` So that works. We did have to create an explicit `CodingKeys` type as well as two separate `containers` for handling the two cases, which is a little bit of extra boilerplate, but it's not too bad. But there is actually a more fundamental problem here. 
To see that, let's modify the json input like this: ```diff [ ... { "name": "salt", - "amount": "a pinch" + "amount": 3 } ] """.data(using: .utf8)! ``` So the amount is now just a number, which is not allowed. When we try to decode the list, we get an error: ```swift do { let ingredients = try JSONDecoder().decode([Ingredient].self, from: ingredientsJson) } catch { print(error) // typeMismatch(Swift.Dictionary<Swift.String, Any>, Swift.DecodingError.Context(codingPath: [_JSONKey(stringValue: "Index 1", intValue: 1), CodingKeys(stringValue: "amount", intValue: nil)], debugDescription: "Expected to decode Dictionary<String, Any> but found a number instead.", underlyingError: nil)) } ``` The error message isn't very easily readable, but hidden in there is the message: `"Expected to decode Dictionary<String, Any> but found a number instead."`. So judging by this error, it would seem like that the only valid type of value for the `amount` field is a nested json object. But we know that there is actually a second valid option, namely a string. But this information got lost when the error was created, because of our (arbitrary) choice in the `init(from:)` to _first_ try to decode it as a string, and then if that fails, try the other alternative. If we had written it in the other order, our error message would instead have said `"Expected to decode String but found a number instead."`. Either way, we are missing the fact that we have _multiple valid choices_. So let's see how the JSONParsing library handles this kind of situation! Instead of conforming the types to `Decodable`, let's write _JSON parsers_ for them. ```swift extension Ingredient.Amount { static var jsonParser: some JSONParserPrinter<Self> { OneOf { ParsePrint(.case(Self.exact)) { Field("value") { Int.jsonParser() } Field("unit") { String.jsonParser() } } ParsePrint(.case(Self.freetext)) { String.jsonParser() } } } } extension Ingredient { static var jsonParser: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { Field("name") { String.jsonParser() } Field("amount") { Amount.jsonParser } } } } ``` We make use of the `OneOf` parser from the *Parsing* library, which will run a number of parsers until one succeeds, and if no one succeeds their errors are accumulated. Let's try decoding the same json as before, and see what is printed[^1]: ```swift do { let ingredients = try JSONArray { Ingredient.jsonParser }.decode(ingredientsJson) } catch { print(error) // At [index 1]/"amount": // error: multiple failures occurred // // error: Expected an object (containing the key "value"), but found: // 3 // // Expected a string, but found: // 3 } ``` As you can see, _both_ possibilities are now mentioned in the printed error message. Also, as a bonus, the error message is _a lot_ easier to read. This also serves as a glimpse at what printed errors look like when using this library. They always have basically the same layout as what you see above: a path describing where something went wrong, and then a more detailed description of _what_ went wrong. All in an easily readable format. [^1]: At the time of writing, this is actually a slight lie. In this exact situation, the first line `At [index 1]/"amount":` would in fact be split across two lines reading `At [index 1]:` and `error: At "amount":` respectively. This is due to a current limitation preventing the error path to be printed in the ideal way, that will hopefully be fixed in the near future. 
In many other situations though, the error path will be printed in that nice compact format, so I still wanted to show that version. ### Decoding/encoding logic spread out Another thing that I don't think is ideal with the Codable abstraction is that the decoding/encoding logic lives in two separate places. In part, it is implemented in the types when they conform to the two protocols, but then you can _also_ control some of the behavior via properties on the `JSONDecoder`/`JSONEncoder` instance that you use to perform the decoding/encoding. For instance, the `JSONDecoder` type has a `keyDecodingStrategy` property that can be used to control how keys in the json objects are pre-processed during decoding, and a `dateDecodingStrategy` that can be used to control how dates are decoded. What this means is that a type's conformance to `Decodable`/`Encodable` _is not a complete description of how that type is converted to/from json_. To fully control how that happens, you _also_ have to be in control over which `JSONDecoder`/`JSONEncoder` instance that is used. When using *JSONParsing*, on the other hand, any json parser that you create, _exactly_ determines how to transform a type to/from a json representation. ## The `JSONValue` type So far we have glossed over a detail of the library, that isn't immediately necessary to know about to start using it, but is useful to know about to understand how things work under the hood. Everywhere when we have created json parsers, we have given it the type of either `some JSONParser<T>` or `some JSONParserPrinter<T>`, and then when using them to decode or encode json data, we have used the `decode(_:)` and `encode(_:)` methods, respectively. As it turns out, `JSONParser<T>` and `JSONParserPrinter<T>` are just typealiases for `Parser<JSONValue, T>` and `ParserPrinter<JSONValue, T>`, respectively (`ParserPrinter` means it can both parse (decode) and print (encode), see [the documentation](https://pointfreeco.github.io/swift-parsing/main/documentation/parsing/gettingstarted) for the *Parsing* library for more details). So we are actually defining parsers that take as input a type called `JSONValue`. This is a type exposed from this library, and just serves as a very basic typed representation of json, that looks like this: ```swift public enum JSONValue: Equatable { case null case boolean(Bool) case integer(Int) case float(Double) case string(String) case array([JSONValue]) case object([String: JSONValue]) } ``` So when we call the `decode(_:)` and `encode(_:)` methods on the parsers, the decoding and encoding happens in two steps: the json data is transformed to/from the `JSONValue` type, and the `JSONValue` type is in turn transformed to/from the result type using the `Parser.parse`/`ParserPrinter.print` methods. The primary use case for the `JSONValue` type is just to act as this middle layer, to simplify the implementations of the various json parsers that ship with the library. However, it can actually be useful on its own. For instance, you might have code like this today: ```swift let json: [String: Any] = [ "title": "hello", "more_info": ["a": 1, "b": 2, ...], ... ] let jsonData = try JSONSerialization.data(withJSONObject: json) var request = URLRequest(url: requestUrl) request.httpMethod = "POST" request.httpBody = jsonData ``` While that does work, the fact that the `json` has type `[String: Any]` means that it could actually be a dictionary that holds _any_ kind of data. 
In particular, it could hold data that isn't valid json data, and the compiler won't let you know. For instance, we could add a `Date` in the `title` field, and the compiler will be fine with it, but it will result in a runtime crash: ```swift let json: [String: Any] = [ "title": Date(), "more_info": ["a": 1, "b": 2, ...], ... ] let jsonData = try JSONSerialization.data(withJSONObject: json) // runtime crash: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Invalid type in JSON write (__NSTaggedDate)' ``` By using the `JSONValue` type instead in this scenario, you can get a compile time guarantee that your json data is valid. And thanks to the fact that `JSONValue` conforms to a number of `ExpressibleBy...` protocols, it can actually be initialized with the _exact_ same syntax as before. So the previous example becomes: ```swift let json: JSONValue = [ "title": "hello", "more_info": ["a": 1, "b": 2, ...], ... ] let jsonData = try json.toJsonData() // ... the rest is the same ``` If we now try to replace `"hello"` with `Date()` as we did before, this time the compiler won't let us: ```swift let json: [String: Any] = [ "title": Date(), // compiler error: Cannot convert value of type 'Date' to expected dictionary value type 'JSONValue' "more_info": ["a": 1, "b": 2, ...], ... ] ``` ## The JSON parsers This library ships with a number of json parsers, that can be composed together to deal with more complex json structures. As mentioned in the previous section, they all take values of the custom type `JSONValue` as input, so when using the `parse`/`print` methods, they convert to/from that type. When you want to use them to decode/encode json _data_ (which is likely to be the most common use case) you just use the `decode`/`encode` methods defined on them instead, which does the converting to from data for you. ### Null The `Null` parser is used for parsing the special json value `null`. You use it when you need to explicitly make sure that a value is null. ```swift let nullJson: JSONValue = .null let nonNullJson: JSONValue = 5.0 try Null().parse(nullJson) // () try Null().parse(nonNullJson) // throws: // Expected a null value, but found: // 5.0 ``` When used as a printer (encoder), the `Null` parser prints `.null`: ```swift try Null().print(()) // .null ``` ### JSONBoolean The `JSONBoolean` parser is used for parsing json booleans. It succeeds only when given either a `false` or `true` json value, and returns the corresponding `Bool` value. ```swift let booleanJson: JSONValue = false let nonBooleanJson: JSONValue = [ "key1": 1, "key2": "hello" ] try JSONBoolean().parse(booleanJson) // false try JSONBoolean().parse(nonBooleanJson) // throws: // Expected a boolean, but found: // { // "key1": 1, // "key2": "hello" // } ``` An alternative way of constructing a `JSONBoolean` parser, is via the static `jsonParser()` method on the `Bool` type: ```swift try Bool.jsonParser().parse(booleanJson) // false ``` The `JSONBoolean` parser can also be used for printing (encoding) back into json: ```swift try Bool.jsonParser().print(true) // .boolean(true) ``` ### JSONNumber The `JSONNumber` parser is used for parsing json numbers. Notable is the fact that the `JSONValue` type has a separation between _floating point_ numbers, and _integer_ numbers. When using it to parse to a floating point type, the parser takes an optional parameter called `allowInteger`, which controls whether it succeeds on integers as well as floating points. If not specified, that defaults to `true`. 
```swift let integerJson: JSONValue = 10 // or .integer(10) let floatJson: JSONValue = 2.4 // or .float(2.4) let nonNumberJson: JSONValue = "hello" try JSONNumber<Int>().parse(integerJson) // 10 try JSONNumber<Double>().parse(floatJson) // 2.4 try JSONNumber<Int>().parse(floatJson) // throws: // Expected an integer number, but found: // 2.4 try JSONNumber<Double>().parse(integerJson) // 10.0 try JSONNumber<Double>(allowInteger: false).parse(integerJson) // throws: // Expected a floating point number, but found: // 10 try JSONNumber<Double>().parse(nonNumberJson) // throws: // Expected a number, but found: // "hello" ``` Alternatively, a `JSONNumber` parser can be constructed via the `jsonParser()` static methods defined on `BinaryInteger` and `BinaryFloatingPoint`: ```swift try Int.jsonParser().parse(integerJson) // 10 try Int64.jsonParser().parse(integerJson) // 10 try Double.jsonParser().parse(floatJson) // 2.4 try CGFloat.jsonParser(allowInteger: false).parse(floatJson) // 2.4 ``` Note: when decoding json _data_, using the `decode` method, a number in the json object is interpreted as a floating point if it has _any_ decimals (including just a `0`). ```swift let json = """ { "a": 10, "b": 10.5, "c": 10.0 } """.data(using: .utf8)! try Field("a") { Int.jsonParser() }.decode(json) // 10 try Field("b") { Int.jsonParser() }.decode(json) // throws: // At "b": // Expected an integer number, but found: // 10.5 try Field("c") { Int.jsonParser() }.decode(json) // throws: // At "c": // Expected an integer number, but found: // 10.0 try Field("b") { Double.jsonParser() }.decode(json) // 10.5 try Field("c") { CGFloat.jsonParser() }.decode(json) // 10.0 ``` The `JSONNumber` parser can also be used for printing to json: ```swift try Int.jsonParser().print(25) // .integer(25) try Double.jsonParser().print(1.6) // .float(1.6) ``` ### JSONString The `JSONString` parser is used for parsing json strings. And as has been showed in previous sections, it can also be given a string parser, for performing custom parsing of the string value. ```swift let stringJson: JSONValue = "120,200,43" let nonStringJson: JSONValue = [1, 2, 3] try JSONString().parse(stringJson) // "120,200,43" try JSONString().parse(nonStringJson) // throws: // Expected a string, but found: // [ 1, 2, 3 ] try JSONString { Int.parser() "," Int.parser() "," Int.parser() }.parse(stringJson) // (120, 200, 43) let nonMatchingStringJson: JSONValue = "apple" try JSONString { Int.parser() "," Int.parser() "," Int.parser() }.parse(stringJson) // throws: // error: unexpected input // --> input:1:1 // 1 | apple // | ^ expected integer ``` There is also a version of the initializer that takes a string _conversion_. A conversion is a concept introduced in the *Parsing* library, and works like a two-way function. The library also exposes a number of predefined conversions, for example the `representing(_:)` conversion, that can be used to convert between `RawRepresentable` types, and their raw values. 
Using it with the `JSONString` parser looks like this:

```swift
enum Direction: String {
  case up, down, left, right
}

extension Direction {
  static let jsonParser = JSONString(.representing(Direction.self))
}

let json: JSONValue = "left"

let direction = try Direction.jsonParser.parse(json)
print(direction)
// Direction.left

try Direction.jsonParser.print(direction)
// .string("left")
```

When you don't need any custom parsing, and just want to parse a json string as it is, you can also choose to define the parser with the static `jsonParser()` method defined on the `String` type:

```swift
let json: JSONValue = "hello"

try String.jsonParser().parse(json)
// "hello"
```

The `JSONString` parser can also be used as a printer, to print (encode) to json, as long as the underlying string parser given to it is a printer itself.

```swift
try JSONString {
  Int.parser()
  ","
  Int.parser()
  ","
  Int.parser()
}.print((120, 200, 43))
// .string("120,200,43")
```

### JSONArray

The `JSONArray` parser is used for parsing json arrays. You construct it by providing a parser that should be applied to each element of the array. As a bonus you can also, optionally, specify that the array must be of a certain size, by giving it a range or a single number.

Using it to parse json looks like this:

```swift
let directionArrayJson: JSONValue = ["left", "left", "right", "up"]
let numberArrayJson: JSONValue = [1, 2, 3]
let nonArrayJson: JSONValue = 10.5

try JSONArray { Direction.jsonParser }.parse(directionArrayJson)
// [Direction.left, Direction.left, Direction.right, Direction.up]

try JSONArray(1...3) { Direction.jsonParser }.parse(directionArrayJson)
// throws:
// Expected 1-3 elements in array, but found 4.

try JSONArray(3) { Direction.jsonParser }.parse(directionArrayJson)
// throws:
// Expected 3 elements in array, but found 4.

try JSONArray { Direction.jsonParser }.parse(numberArrayJson)
// throws:
// At [index 0]:
// Expected a string, but found:
// 1

try JSONArray { Int.jsonParser() }.parse(numberArrayJson)
// [1, 2, 3]

try JSONArray { Int.jsonParser() }.parse(nonArrayJson)
// throws:
// Expected an array, but found:
// 10.5
```

And for printing (which is available whenever the element parser given to it has printing capabilities):

```swift
try JSONArray { Direction.jsonParser }.print([Direction.right, .left, .down])
// .array(["right", "left", "down"])
```

### JSONObject

The `JSONObject` parser is used to parse a json object into a dictionary. In its most basic form it takes a single `Value` parser, to be applied to each value in the json object. The result after parsing will be a `[String: Value.Output]` dictionary, where `Value.Output` is the type returned from the `Value` parser.
```swift let objectJson: JSONValue = .object([ "url1": "https://www.example.com/1", "url2": "https://www.example.com/2", "url3": "https://www.example.com/3", ]) let dictionary = try JSONObject { URL.jsonParser() }.parse(objectJson) print(dictionary) // ["url1": https://www.example.com/1, "url3": https://www.example.com/3, "url2": https://www.example.com/2] try JSONObject { URL.jsonParser() }.print(dictionary) // .object(["url1": "https://www.example.com/1", "url3": "https://www.example.com/3", "url2": "https://www.example.com/2"]) ``` But you can also specify custom parsing of the _keys_ into any `Hashable` type, by adding on a `keys` parser parameter: ```swift let objectJson: JSONValue = [ "key_1": "Steve Jobs", "key_2": "Tim Cook" ] let dictionary = try JSONObject { String.jsonParser() } keys: { "key_" Int.parser() }.parse(objectJson) print(dictionary) // [1: "Steve Jobs", 2: "Tim Cook"] try JSONObject { String.jsonParser() } keys: { "key_" Int.parser() }.print(dictionary) // .object(["key_1": "Steve Jobs", "key_2": "Tim Cook"]) ``` or by passing a string conversion to the initializer, for example a `representing` conversion to turn the keys into some `RawRepresentable` type: ```swift struct UserID: RawRepresentable, Hashable { var rawValue: String } let usersJson: JSONValue = .object([ "abc": "user 1", "def": "user 2", ]) let dictionary = try JSONObject(keys: .representing(UserID.self)) { String.jsonParser() }.parse(usersJson) print(dictionary) // [UserID(rawValue: "abc"): "user 1", UserID(rawValue: "def"): "user 2"] try JSONObject(keys: .representing(UserID.self)) { String.jsonParser() }.print(dictionary) // .object(["abc": "user 1", "def": "user 2"]) ``` And just like the `JSONArray` parser, it can be restricted to only accept a certain number of elements (key/value pairs). ```swift let emptyObjectJson: JSONValue = [:] try JSONObject(1...) { URL.jsonParser() }.parse(emptyObjectJson) // throws: Expected at least 1 key/value pair in object, but found 0. let emptyDictionary: [String: URL] = [:] try JSONObject(1...) { URL.jsonParser() }.print(emptyDictionary) // throws: An JSONObject parser requiring at least 1 key/value pair was given 0 to print. ``` ### Field The `Field` parser is used for parsing a single value at a given field. It takes as input a key, as a `String`, and a json parser to be applied to the value found at that key. ```swift let personJson: JSONValue = [ "first_name": "Steve", "last_name": "Jobs", "age": 56, ] let personJsonWithoutFirstName: JSONValue = [ "last_name": "Cook", "age": 62, ] try Field("first_name") { String.jsonParser() }.parse(personJson) // "Steve" try Field("first_name") { String.jsonParser() }.print("Steve") // .object(["first_name": "Steve"]) try Field("first_name") { Int.jsonParser() }.parse(personJson) // throws: // At "first_name": // Expected an integer number, but found: // "Steve" try Field("first_name") { String.jsonParser() }.parse(personJsonWithoutFirstName) // throws: // Key "first_name" not present. ``` Most often, you will probably want to combine multiple `Field` parsers together, to parse to a more complex result type. For the example above, you will likely have a `Person` type that you want to turn the json into. For that, we can make use of the `memberwise` conversion exposed from the *Parsing* library. 
```swift struct Person { let firstName: String let lastName: String let age: Int } extension Person { static var jsonParser: some JSONParserPrinter<Self> { try ParsePrint(.memberwise(Person.init)) { Field("first_name") { String.jsonParser() } Field("last_name") { String.jsonParser() } Field("age") { Int.jsonParser() } } } } let person = try Person.jsonParser.parse(personJson) // Person(firstName: "Steve", lastName: "Jobs", age: 56) try Person.jsonParser.print(person) // .object(["first_name": "Steve", "last_name": "Jobs", "age": 56]) ``` ### OptionalField The `OptionalField` parser works like the `Field` parser, but it allows for the field to not exist (or be `null`). To see what that is useful for, let's extend the `Person` type with a new field called `salary`: ```diff struct Person { let firstName: String let lastName: String let age: Int + let salary: Double? } ``` Then we can extend the `Person.jsonParser` in the following way: ```diff try ParsePrint(.memberwise(Person.init)) { Field("first_name") { String.jsonParser() } Field("last_name") { String.jsonParser() } Field("age") { Int.jsonParser() } + OptionalField("salary") { Double.jsonParser() } } ``` Now it can handle person json values with or without a salary. ```swift let personJsonWithSalary: JSONValue = [ "first_name": "Bob", "last_name": "Bobson", "age": 50, "salary": 12000 ] let personJsonWithoutSalary: JSONValue = [ "first_name": "Mark", "last_name": "Markson", "age": 20 ] let person1 = try Person.jsonParser.parse(personJsonWithSalary) // Person(firstName: "Bob", lastName: "Bobson", age: 50, salary: 12000.0) try Person.jsonParser.print(person1) // .object(["first_name": "Bob", "last_name": "Bobson", "age": 50, "salary": 12000.0]) let person2 = try Person.jsonParser.parse(personJsonWithoutSalary) // Person(firstName: "Mark", lastName: "Markson", age: 20, salary: nil) try Person.jsonParser.print(person2) // .object(["first_name": "Mark", "last_name": "Markson", "age": 20]) ``` Instead of treating an absent value as `nil`, you can optionally provide a `default` value, to use as a fallback: ```diff struct Person { let firstName: String let lastName: String let age: Int - let salary: Double? + let salary: Double } extension Person { static var jsonParser: some JSONParserPrinter<Self> { try ParsePrint(.memberwise(Person.init)) { Field("first_name") { String.jsonParser() } Field("last_name") { String.jsonParser() } Field("age") { Int.jsonParser() } - OptionalField("salary") { Double.jsonParser() } + OptionalField("salary", default: 0) { Double.jsonParser() } } } } ``` Now, parsing a person json without a salary, will use the default value of `0`: ```swift let person = try Person.jsonParser.parse(personJsonWithoutSalary) // Person(firstName: "Mark", lastName: "Markson", age: 20, salary: 0) try Person.jsonParser.print(person) // .object(["first_name": "Mark", "last_name": "Markson", "age": 20]) ``` ## Integration with Codable While this library is intended to be able to stand on its own as a fully featured alternative to Codable, it does come with tools to help bridge these two worlds, allowing them to be mixed together. This is important partly because you may be working with other libraries that force you to use Codable in some places, and partly because it allows you to transition a code base that uses Codable, one model at a time. Let's take a look at how it works. 
### Integrating *Codable* into *JSONParsing* code Imagine that you have the following type: ```swift struct Person { let name: String let age: Int let favoriteMovie: Movie? } ``` where the `Movie` type is `Codable`, and you want to create a json parser for `Person`. For situations like this, the library extends all `Decodable` types with a `jsonParser(decoder:)` method, that takes an optional `JSONDecoder` parameter. And if the type also conforms to `Encodable`, the method takes an optional `JSONEncoder` parameter as well. So for our example, we can make use of this in the parse implementation, to deal with the `Movie` type: ```swift extension Person { static var jsonParser: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { Field("name") { String.jsonParser() } Field("age") { Int.jsonParser() } Field("favorite_movie") { Movie.jsonParser() } } } } ``` and if we need to customize the decoding/encoding of the `Movie` type, we can pass a custom decoder and/or encoder like this: ```swift let jsonDecoder: JSONDecoder = ... let jsonEncoder: JSONEncoder = ... extension Person { static var jsonParser: some JSONParserPrinter<Self> { ParsePrint(.memberwise(Self.init)) { ... Field("favoriteMovie") { Movie.jsonParser(decoder: jsonDecoder, encoder: jsonEncoder) } } } } ``` ### Integrating *JSONParsing* into *Codable* code So that's one part of the equation, when it comes to integration with Codable. But what about the other way around? What if we actually _do_ have a json parser capable of decoding `Movie`s, and we're using Codable for the `Person` type instead. For that use case, the library comes with overloads of the various methods on the decoding/encoding containers, that take a json parser as input. Let's see what it looks like to use this, by conforming the `Person` type to both the `Decodable` and the `Encodable` protocol: ```swift extension Person: Decodable { enum CodingKeys: String, CodingKey { case name case age case favoriteMovie = "favorite_movie" } init(from decoder: Decoder) throws { let container = try decoder.container(keyedBy: CodingKeys.self) self.name = try container.decode(String.self, forKey: .name) self.age = try container.decode(Int.self, forKey: .age) self.favoriteMovie = try container.decodeIfPresent(forKey: .favoriteMovie) { Movie.jsonParser } } } extension Person: Encodable { func encode(to encoder: Encoder) throws { var container = encoder.container(keyedBy: CodingKeys.self) try container.encode(self.name, forKey: .name) try container.encode(self.age, forKey: .age) try container.encodeIfPresent(self.favoriteMovie, forKey: .favoriteMovie) { Movie.jsonParser } } } ``` Here, we make use of the overloads of the `KeyedDecodingContainer.decodeIfPresent`, and `KeyedEncodingContainer.encodeIfPresent` methods, that takes a json parser as input. Apart from taking an extra json parser parameter, the decoding overloads also make the `type` parameter optional, since it can always be inferred anyway. But if you want, you can still explicitly specify them like for the default versions: ```diff extension Person: Decodable { ... init(from decoder: Decoder) throws { ... - self.favoriteMovie = try container.decodeIfPresent(forKey: .favoriteMovie) { + self.favoriteMovie = try container.decodeIfPresent(Movie.self, forKey: .favoriteMovie) { Movie.jsonParser } } } ``` ## Benchmarks This library comes with a few benchmarks, comparing the execution time for decoding and encoding with that of the corresponding Codable implementation. 
```text MacBook Pro (14-inch, 2021) Apple M1 Pro (10 cores, 8 performance and 2 efficiency) 16 GB (LPDDR5) name time std iterations ----------------------------------------------------------------------------- Decoding.JSONDecoder (Codable) 174917.000 ns ± 3.19 % 7610 Decoding.JSONParser 169625.000 ns ± 2.20 % 8070 Decoding.JSONParser (mixed with Codable) 311250.000 ns ± 8.36 % 4467 Decoding.JSONParser (from JSONValue) 67042.000 ns ± 2.06 % 20820 Encoding.JSONEncoder (Codable) 1212416.500 ns ± 0.96 % 1144 Encoding.JSONParser 2082541.000 ns ± 22.11 % 680 Encoding.JSONParser (mixed with Codable) 2889500.000 ns ± 23.28 % 465 Encoding.JSONParser (to JSONValue) 397417.000 ns ± 1.09 % 3499 ``` ## Installation You can add the library as a dependency using SPM by adding the following to the `Package.swift` file: ```swift dependencies: [ .package(url: "https://github.com/oskarek/swift-json-parsing", from: "0.1.0"), ] ``` and then in each module that needs access to it: ```swift .target( name: "MyModule", dependencies: [ .product(name: "JSONParsing", package: "swift-json-parsing"), ] ), ``` ## License This library is released under the MIT license. See [LICENSE](LICENSE) for details.
flyandi/SquidRID
https://github.com/flyandi/SquidRID
Drone/FAA RemoteID Penetration and Educational Test Tool
```
 _____ ___ __ __ ____ ___ ____ ____ ___ / ___/ / \ | | || || \ | \ | || \ ( \_ | || | | | | | \ | D ) | | | \ \__ || Q || | | | | | D || / | | | D | / \ || || : | | | | || \ | | | | \___| \__,_| \__,_||____||_____||__|\_||____||_____| REMOTE ID
```

## WHAT IS THIS?

SquidRID is an experimental tool to test and control most aspects of the FAA RemoteID protocol. It comes as two parts: Configurator and Firmware. Many options of the protocol are exposed and can be freely configured.

The configurator is accessible at [https://squidrid.flyandi.net](https://squidrid.flyandi.net) but requires Google Chrome. SquidRID runs on most ESP32 boards and requires the Arduino IDE to upload the firmware.

## Features

```
[ ] Conforms to Remote ID Standards ASTM F3411-22a, ASD-STAN 4709-002
[ ] Exposes most of the usable protocol operations including Operator Location
[ ] Bluetooth and Wifi are supported (Only selectable through firmware right now)
[ ] Simulation Mode with options for Hold, Random, Follow Path mode. Variable altitude and speed options.
[ ] Pester Mode (Spawns Drones every n seconds, configurable)
[ ] Supports configurable external sensors such as GPS and Flight Controllers (via LTM Telemetry protocol)
```

## Squid Mode

Squid Mode can simulate many aspects of the RemoteID protocol for a single drone instance, also called a Squid.

![](docs/sim.png)

## Pest Mode

In Pest Mode, SquidRID will spawn x rows every n seconds; both values are configurable from the configurator.

![](docs/pest.png)

## External Mode

This firmware supports external sensors. Currently GPS NMEA/UBLOX as well as LTM protocols are supported; they can be configured in the profile and run via the `EXTERNAL` mode (Serial Command: `$SM|2|1`).

![](docs/ext.png)

Both protocols are serial protocols, and you have to configure your RX and TX pins in the configurator.

![](docs/ext_prot.png)

## IS THIS LEGAL?

Yes and maybe not. Many developers that build systems supporting RemoteID require a realistic way to test their implementations, and this tool provides a means to do so. This tool also provides additional functionality such as security and penetration testing of RemoteID.

This tool may be used for educational purposes only and may violate laws or may require a license to operate. Always educate yourself before using this tool. By using this software you agree that the author(s) of this software shall under no circumstances be held liable for the use and/or operation of this software by any party.

## Work In Progress

While many parts are ready to use, there are a few that are still under development:

```
[ ] Update Documentation
```

## Configurator

![](docs/configurator.png)

The Configurator is an easy way to configure your SquidRID Testing Device. Go to [https://squidrid.flyandi.net](https://squidrid.flyandi.net) to access the latest version or run it locally.
tegal1337/mapsdumper
https://github.com/tegal1337/mapsdumper
Dump place details from Google Maps like phone,email,website,and reviews
<center>
<p align="center">
<img src="https://github.com/tegal1337/mapsdumper/assets/31664438/04c8cf97-fdee-4d41-8201-b3e98fe21dfb" align="center" width="620"/>
<h1 align="center"> MAPSDUMPER </h1>
<a href="https://www.youtube.com/watch?v=jHwfN4i4M8A">Demo</a> • <a href="#run--installation">Installation</a> • <a href="#feature">Feature</a> • <a href="#credits">Credits</a>
</p>
A tool for dumping place details from Google Maps such as phone numbers, emails, websites, and reviews.
</center>

## Demo

<a href="https://www.youtube.com/watch?v=jHwfN4i4M8A"> <img src="https://asciinema.org/a/paxVigckYdHQDm0DS1kFyETjG.png" width="836"/></a>

```bash
# See the demo at https://www.youtube.com/watch?v=jHwfN4i4M8A
```

## Run / Installation

How to run?

```bash
> git clone https://github.com/tegal1337/mapsdumper
> cd mapsdumper
> npm install
> node index.js
```

## Feature

| Feature | Status |
| -------------:|------------- |
| Get Address | Ok |
| Get Phone Number | Ok |
| Get Website | Ok |
| Get Reviews | Ok |
| Get Review Date | Ok |
| Get Star Count per Review | Ok |

## Requirements

- Windows / Linux
- Google Chrome
- a Youtube account

# Contact

[email protected]

[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.me/fdciabdul)
hang-log-design-system/design-system
https://github.com/hang-log-design-system/design-system
Hang Log (행록) design system
<p align="center"> <a href="https://ant.design"> <img width="200" src="https://github.com/hang-log-design-system/design-system/assets/51967731/f0dcb43d-2f7b-4d29-b314-69f24b15184f"> </a> </p> <h1 align="center">Hang Log Design System</h1> A design system library for 행록(Hang Log), a place-based travel record service. ## Installation ```sh $ npm install hang-log-design-system # or $ yarn add hang-log-design-system ``` ## Getting started To start using the components, first wrap your application in a provider provided by **hang-log-design-system** ```jsx import { HangLogProvider } from 'hang-log-design-system'; const App = ({ children }) => { return <HangLogProvider>{children}</HangLogProvider>; }; ``` <br> After adding the provider, now you can start using components like this. ```jsx import { Button } from 'hang-log-design-system'; function App() { return ( <Button variant="primary" size="large"> Hello World </Button> ); } ``` ## Links - [Storybook](https://64ae1170f3ddc89ef85a4950-oawcagcpxl.chromatic.com) - [Figma](https://www.figma.com/file/rJUqeL7LUnJjCPQNmQ3BZc/design-system?type=design&node-id=1%3A2854&mode=design&t=nVD5D8xFhO9Dkg6g-1) ## Contributors | <img src="https://avatars.githubusercontent.com/u/45068522?v=4" width="120" height="120"> | <img src ="https://avatars.githubusercontent.com/u/51967731?v=4" width="120" height="120"> | <img src ="https://avatars.githubusercontent.com/u/102305630?v=4" width="120" height="120"> | | :---------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------: | | [슬링키](https://github.com/dladncks1217) | [애슐리](https://github.com/ashleysyheo) | [헤다](https://github.com/Dahyeeee) |
andy-man/ps4-wee-tools
https://github.com/andy-man/ps4-wee-tools
PS4 tools for NOR and Syscon manipulations
![PS4 Wee Tools](assets/splash.png)

# PS4 Wee Tools

PS4 Wee Tools helps you work with PS4 NOR and syscon dumps. It is a free, open-source alternative to BwE's commercial apps. Written in Python to keep it simple.

| NOR tool | Syscon tool |
| --- | --- |
| ![Main tool](assets/main.png) | ![Syscon tool](assets/syscon.png) |

Tested with Python 3.8.

Since v0.7 the **"Crypto"** library is required if you want to run the Python script directly. It can be easily installed with pip: `pip install crypto`

## Features

NOR tool
* PS4 NOR dump info
  * MD5, SKU, Region, SN / Mobo SN, Southbridge
  * Torus (WiFi), MAC, HDD, FW (current, minimal), FW2 ver probability
* Flags toggle:
  * boot mode, safe boot, update mode, arcade mode, kiosk mode (idu)
  * registry recovery, manu, button swap, memory budget, slow hdd mode
* System flags cleaning
* Memory clock editing (GDDR5)
* SAMU boot flag edit
* Downgrade by slot switch (FW revert)
* Additional tools
  * Extract NOR's partitions
  * Build dump from extracted files
  * View / Recover HDD EAP keys
  * Get HDD EAP keys [keys.bin]
  * Base validation and entropy stats
  * EMC CFW for Aeolia

Syscon tool
* Syscon check
* Patchable status check
* SNVS blocks viewer (records and flatdata)
* Manual SNVS patch
* Auto SNVS patch (3 modes)
* SNVS rebuilding
* Boot mode selection

Common
* Multi-file compare
* Pack / Unpack 2BLS files

Don't use it if you don't understand what it is for!

## Credits

* fail0verflow
* zecoxao
* Al-Azif
* Darknesmonk
* BwE
* pearlxcore

And of course [PSDevWiki](https://www.psdevwiki.com/ps4/)

## Donate

* **[Patreon](https://patreon.com/andy_man)**
* **[Boosty](https://boosty.to/andy_man/donate)**
* **[YandexMoney](https://yoomoney.ru/to/410011555252085)**
* **Bitcoin**: 39VaMnFqCQo751mvDc3M7ADVty71q2tWDm

## Links

* [Twitter](https://twitter.com/AndyManDev)

## Changelog

### v0.8
* SNVS rebuilding (Syscon)
* Manual syscon patch allows removing an entire block
* Tips for syscon patching
* Extra syscon patch (stuck on FW update)
* NVStorage class improvements

### v0.7.9
* EAP key recovery
* Some optimization

### v0.7.8
* Added 6 patterns to slot switcher
* EMC CFW patcher for Aeolia
* UI bugs fixed

### v0.7.7
* SLB2 pack / unpack
* File selection screen upgrade
* Preparations for EMC CFW (Aeolia)

### v0.7.6
* FW2 version assumption
* Validation screen update

### v0.7.5
* Base validation (header, mbrs, emc, eap, wifi)
* Southbridge and torus info
* Code refactoring
* Minor bug fixes

### v0.7
* Extract NOR's partitions
* Build dump from extracted files
* Get HDD EAP keys [keys.bin]
* File selection screen fix

### v0.6
* Now with color highlighting
* Dialog improvements

### v0.5
* New screen "Flags toggle"
  * boot mode, safe boot, update mode, arcade mode, kiosk mode (idu)
  * registry recovery, manu, button swap, memory budget, slow hdd mode
* Syscon auto SNVS patch (4 modes)
* New FW detection (experimental)
* Region info
* Minor improvements

### v0.4
* Syscon patchable check
* Manual patch bug fix
* Minor error fixes

### v0.3
* NOR entropy stats
* Syscon manual patch
* Minor error fixes

### v0.2
* Syscon DEBUG toggle
* Syscon NVStorage class
* Syscon show active slot

### v0.1
* Files compare
* UART, Memtest toggle
* Sys flags clean
* Edit mem clock and SAMU
* Downgrade switch patterns
* Syscon base check
tsoding/tic-tac-toe-fortran-raylib
https://github.com/tsoding/tic-tac-toe-fortran-raylib
Tic-Tac-Toe in Fortran with Raylib
# Tic-Tac-Toe in Fortran with Raylib

*The video has sound*

https://github.com/tsoding/tic-tac-toe-fortran-raylib/assets/165283/fdb2fffc-7aec-4af0-ad92-cdf69aee769f

## Quick Start

Dependencies:

- [gfortran](https://gcc.gnu.org/fortran/)
- [raylib](https://www.raylib.com/)

```console
$ ./build.sh
$ ./build/ttt
```
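If the dependencies above are missing, on Debian/Ubuntu-based systems they can usually be installed from the package manager before running `./build.sh`. The package names below are an assumption and may differ per distribution; raylib can always be built from source following the instructions on its website.

```console
$ sudo apt install gfortran      # GNU Fortran compiler
$ sudo apt install libraylib-dev # raylib development package (name may vary; build from source if unavailable)
```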
AZhrZho/ECharts.Net
https://github.com/AZhrZho/ECharts.Net
用于WPF/Winform的ECharts图表控件, 基于Webview2实现 | Introduce ECharts into your C# WPF/Winform apps, powered by Microsoft Webview2.
# ECharts.Net - Use ECharts in .NET applications

[![NuGet Badge](https://buildstats.info/nuget/ECharts.Net.Core?includePreReleases=true)](https://www.nuget.org/packages/ECharts.Net.Core/0.0.4)

<div align=center><img height='100' src=".github/icon.png"></div>
<br/>
<div align=center>
<font size='4'>
<strong>简体中文</strong> | <a href='/README.en.md'>English</a>
</font>
</div>

## Introduction

A chart control for WPF/WinForms that embeds ECharts through WebView2 internally. It is built on the latest .NET technology while staying compatible with .NET Framework.

## Features

- Modern: built on the latest .NET 7 and WebView2
- High performance: interoperates with ECharts with as little extra overhead as possible
- Type safe: wraps ECharts components in strong types wherever possible
- Flexible: you can work with .NET types and still drive ECharts directly from JS

## How to use

Install via the NuGet package manager (a command-line example is shown at the end of this README):

| Platform | Package |
| --- | --- |
| WinForm | [`ECharts.Net.Winform`](https://www.nuget.org/packages/ECharts.Net.Winform/) |
| WPF | [`ECharts.Net.Wpf`](https://www.nuget.org/packages/ECharts.Net.Wpf/) |
| WinUI3 | Not supported yet |

Since the project is at an early stage of development, there is no documentation yet. For concrete usage, see the Demo in this repository.

## Roadmap

The project is currently in early development. The basic functionality is in place, and in theory every official ECharts example can be reproduced via JS calls. The current development focus is on wrapping `Option`.

| Item | Status |
| --- | --- |
| Basic wrapper around the WebView2 control | ✅ |
| Core type system design | ✅ |
| **Wrapping `Option`** | In progress |
| WinUI3 support | Planned |
| Serialization and interop optimization | Planned |
| .NET Framework support | ✅ |

## Screenshot

![screenshot](/.github/screenshot-wpf.png)

## Contributing

If you have any questions about this project, feel free to [open an issue](https://github.com/AZhrZho/ECharts.Net/issues/new) or submit a Pull Request directly.
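As a concrete follow-up to the installation table above: with the standard .NET SDK CLI installed, the WPF package can be added from the command line like this (the package id is taken from the table; use `ECharts.Net.Winform` for WinForms instead).

```console
dotnet add package ECharts.Net.Wpf
```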
junhoyeo/threads-py
https://github.com/junhoyeo/threads-py
Unofficial, Reverse-Engineered Python client for Meta's Threads.
# [<img src="./.github/logo.jpg" width="36" height="36" />](https://github.com/junhoyeo) threads-py [![pypi](https://img.shields.io/pypi/v/threads-py.svg?style=flat-square&labelColor=black)](https://pypi.org/project/threads-py) [![MIT License](https://img.shields.io/badge/license-MIT-blue?style=flat-square&labelColor=black)](https://github.com/junhoyeo/threads-py/blob/main/LICENSE) ### **Unofficial, Reverse-Engineered Python API for [Meta's Threads](https://www.threads.net/).** > #### **Looking for the TypeScript version?** _Check out **[junhoyeo/threads-api. ![](https://img.shields.io/github/stars/junhoyeo%2Fthreads-api?style=social)](https://github.com/junhoyeo/threads-api)**_ --- ## Installation ```bash pip install --no-cache-dir --upgrade threads-py ``` ## Initialization #### Public Data Usage: ```python3 from threadspy import ThreadsAPI api = ThreadsAPI() ``` #### Private Data Usage: ```python3 from threadspy import ThreadsAPI api = ThreadsAPI( username="Your-Username" password="Your-Password" token="You-Token" # optional (if you're already authenticated) ) ``` --- ## [<img src="./.github/emojis/pushpin.png" width="30" height="30" />](https://github.com/junhoyeo) Implementation Roadmap and Methodology Overview - [x] 📢 [Read public data](#read-public-data) - [x] ✅ [Fetch User ID](#fetch-user-id) - [x] ✅ [Read User Profile Info](#read-user-profile-info) - [x] ✅ [Read list of User Threads](#read-list-of-user-threads) - [ ] 🚧 With Pagination (If auth provided) - [x] ✅ [Read list of User Replies](#read-list-of-user-replies) - [ ] 🚧 With Pagination (If auth provided) - [x] ✅ [Fetch Post ID](#fetch-post-id) - [x] ✅ [Read A Single Thread](#read-a-single-thread) - [x] ✅ [Get Thread Likes](#get-thread-likes) - [x] 🔏 [Read user private data](#read-private-data) - [x] ✅ [Read User Followings](#read-user-followings) - [x] ✅ [Read User Followers](#read-user-followers) - [x] ✅ [Get suggested users](#get-suggested-users) - [x] ✅ [Search Query](#search-query) - [x] ✅ [Read User Timeline Feed](#read-user-timeline-feed) - [x] ✅ [Get Notifications](#get-notifications) - [ ] 🔏 [Write Private Data (Authentication Required)](#write-private-data-authentication-required) - [ ] ✅ [Create New Thrad Or Reply To Eexisting Thread](#create-new-thrad-or-reply-to-eexisting-thread) - [ ] 🚧 Make link previews to get shown - [ ] ✅ [Delete Thread](#delete-thread) - [ ] 🚧 Quote Thread - [x] 🔒 [Friendship Actions](#friendship-actions-authentication-required) - [x] ✅ [Follow User](#follow-user) - [x] ✅ [Unfollow User](#unfollow-user) - [x] ✅ [Block User](#block-user) - [x] ✅ [Unblock User](#unblock-user) - [x] ✅ [Mute User](#mute-user) - [x] ✅ [Unmute User](#unmute-user) - [x] ✅ [Restrict User](#restrict-user) - [x] ✅ [Unrestrict User](#unrestrict-user) - [x] ✅ [Check Friendship Status With Another Users](#check-friendship-status-with-another-users) - [x] 🔒 [Interactions (Authentication Required)](#interactions-authentication-required) - [x] ✅ [Like Thread](#like-thread) - [x] ✅ [Remove Like From Thread](#remove-like-from-thread) - [x] ✅ [Repost Thread](#repost-thread) - [x] ✅ [Delete Reposted Thread](#delete-reposted-thread) - [ ] 🚧 LangChain Agent (`threadspy.ai`) - [ ] 🚧 Threads Tool for LangChain - [ ] 📌 Link Threads & LLaMa ([@Mineru98](https://github.com/Mineru98)) - [ ] 📌 Provide statistical analysis of posts in Threads ([@Mineru98](https://github.com/Mineru98)) --- ## Read public data ### Fetch User ID: ```python3 user_id = api.get_user_id_from_username(username) ``` <details> <summary> <h4>Parameters</h4> 
</summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `username` | Target username | String | Yes | </details> ### Read User Profile Info: ```python3 user_profile = api.get_user_profile(username, user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | Default Value | |:----------:|:---------------:|:-------:|:--------:|:-------------:| | `username` | Target username | String | Yes | - | | `user_id` | Target User ID | String | No | None | </details> ### Read list of User Threads: ```python3 user_threads = api.get_user_profile_threads(username, user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | Default Value | |:----------:|:---------------:|:-------:|:--------:|:-------------:| | `username` | Target username | String | Yes | - | | `user_id` | Target User ID | String | No | None | </details> ### Read list of User Replies: ```python3 user_replies = api.get_user_profile_replies(username, user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | Default Value | |:----------:|:---------------:|:-------:|:--------:|:-------------:| | `username` | Target username | String | Yes | - | | `user_id` | Target User ID | String | No | None | </details> ### Fetch Post ID: > #### via Thread ID E.g. "CuW6-7KyXme": ```python3 post_id = api.get_post_id_from_thread_id(thread_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `thread_id`| Last part of the thread URL | String | Yes | </details> <details> <summary> <h4>Examples</h4> </summary> ``` thread_id = 'CugF-EjhQ3r' post_id = api.get_post_id_from_thread_id(thread_id) ``` </details> #### via Post URL E.g."https://www.threads.net/t/CuW6-7KyXme": ```python3 post_id = api.get_post_id_from_url(post_url) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_url` | Thread URL | String | Yes | </details> <details> <summary> <h4>Examples</h4> </summary> ``` post_url = 'https://www.threads.net/t/CugF-EjhQ3r' post_id = api.get_post_id_from_url(post_url) ``` </details> ### Read A Single Thread: ```python3 single_thread = api.get_threads(post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_id` | Target username | String | Yes | </details> ### Get Thread Likes: ```python3 thread_likes = api.get_thread_likers(post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_id` | Target username | String | Yes | </details> --- ## Read Private Data ### Read User Followings: ```python3 user_followers = api.get_followings(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | Target User ID | String | Yes | </details> ### Read User Followers: ```python3 user_followings = api.get_followers(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | Target User ID | String | 
Yes | </details> ### Get Suggested Users: ```python3 suggested_users = api.get_suggested_users(count, paging) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | Default Value | |:----------:|:---------------:|:-------:|:--------:|:-------------:| | `count` | Number of suggested users | Integer | No | 15 | | `paging` | Page number | Integer | No | None | </details> ### Search Query: ```python3 thread_likes = api.search(search_parameter) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:------------------:|:---------------:|:-------:|:--------:| | `search_parameter` | Search Query | String | Yes | </details> ### Read User Timeline Feed: ```python3 user_timeline = api.get_timeline(max_id) ``` <details> <summary> <h4>Parameters</h4> </summary> |Parameters| Description | Type | Required | |:--------:|:----------------------:|:-------:|:--------:| | `max_id` | Next Posts Batch ID | String | No | </details> ### Get Notifications: ```python3 user_timeline = api.get_timeline(max_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | Default Value | |:---------------------:|:----------------------:|:-------:|:--------:|:-------------:| | `notification_filter` | Next Posts Batch ID | String | No | 'replies' | | `max_id` | Next Posts Batch ID | String | No | None | | `pagination` | Next Posts Batch ID | String | No | None | 'notification_filter' values: 'mentions', 'replies', 'verified' </details> --- ## Write Private Data (Authentication Required) ### Create New Thrad Or Reply To Eexisting Thread: ```python3 boolean_response = api.publish(count, image_path, url, parent_post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | Default Value | |:----------:|:---------------:|:-------:|:--------:|:-------------:| | `caption` | Text to post in Thread | String | Yes | 15 | | `image_path` | Image Path to post in Thread | String | No | None | | `url` | Link to post in Thread | String | No | None | | `parent_post_id` | Post ID | String | No | None | </details> <details> <summary> <h4>Examples</h4> </summary> - Text Threads: ```python3 api.publish(caption="🤖 Hello World!") ``` - Threads with Image: ```python3 api.publish( caption= '🤖 Threads with Link Image', image_path="https://github.com/junhoyeo/threads-py/raw/main/.github/logo.jpg" ) ``` - Threads with Link Attachment: ```python3 api.publish( caption= '🤖 Threads with Link Image', url="https://github.com/junhoyeo/threads-py" ) ``` Reply to Other Threads: ```python3 parent_post_url = 'https://www.threads.net/t/CugF-EjhQ3r' parent_post_id = api.get_post_id_from_url(parent_post_url) # or use `get_post_id_from_thread_id` api.publish({ text: '🤖 Beep', link: 'https://github.com/junhoyeo/threads-py', parent_post_id: parent_post_id, }) ``` </details> ### Delete Thread: ```python3 boolean_response = api.delete_thread(post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_id` | Post Identifier | String | Yes | </details> --- ## Friendship Actions (Authentication Required) ### Follow User: ```python3 boolean_response = api.follow(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | 
</details> ### Unfollow User: ```python3 boolean_response = api.unfollow(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> ### Block User: ```python3 boolean_response = api.block(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> ### Unblock User: ```python3 boolean_response = api.unblock(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> ### Mute User: ```python3 boolean_response = api.mute(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> ### Unmute User: ```python3 boolean_response = api.unmute(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> ### Restrict User: ```python3 boolean_response = api.restrict(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> ### Unrestrict User: ```python3 boolean_response = api.unrestrict(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> ### Check Friendship Status With Another Users: ```python3 friendship_status = api.friendship_status(user_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `user_id` | User Identifier | String | Yes | </details> --- ## Interactions (Authentication Required) ### Like Thread: ```python3 boolean_response = api.like(post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_id` | Post Identifier | String | Yes | </details> ### Remove Like From Thread: ```python3 boolean_response = api.unlike(post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_id` | Post Identifier | String | Yes | </details> ### Repost Thread: ```python3 boolean_response = api.repost_thread(post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_id` | Post Identifier | String | Yes | </details> ### Delete Reposted Thread: ```python3 boolean_response = api.unrepost_thread(post_id) ``` <details> <summary> <h4>Parameters</h4> </summary> | Parameters | Description | Type | Required | |:----------:|:---------------:|:-------:|:--------:| | `post_id` | Post Identifier | String | Yes | </details> </details> --- ## Contributors 
<table> <tbody> <tr> <td align="center" valign="top" width="14.28%"> <a href="https://github.com/junhoyeo"> <img src="https://avatars.githubusercontent.com/u/32605822?v=4?s=100" width="100px;" alt="Junho Yeo"/> <br /> <sub><b>Junho Yeo</b></sub> </a> <br /> <a href="https://github.com/junhoyeo/threads-py/commits?author=junhoyeo" title="Code">💻</a> </td> <td align="center" valign="top" width="14.28%"> <a href="https://github.com/Mineru98"> <img src="https://avatars.githubusercontent.com/u/15326932?v=4?s=100" width="100px;" alt="iamiks"/> <br /> <sub><b>iamiks</b></sub> </a> <br /> <a href="https://github.com/junhoyeo/threads-py/commits?author=Mineru98" title="Code">💻</a> </td> <td align="center" valign="top" width="14.28%"> <a href="https://github.com/drunkleen"> <img src="https://avatars.githubusercontent.com/u/26145653?v=4?s=100" width="100px;" alt="DrunkLeen"/> <br /> <sub><b>DrunkLeen</b></sub> </a> <br /> <a href="https://github.com/junhoyeo/threads-py/commits?author=drunkleen" title="Code">💻</a> </td> <td align="center" valign="top" width="14.28%"> <a href="https://github.com/ashrftvm"> <img src="https://avatars.githubusercontent.com/u/16478713?v=4?s=100" width="100px;" alt="Asharaf Ali"/> <br /> <sub><b>Asharaf Ali</b></sub> </a> <br /> <a href="https://github.com/junhoyeo/threads-py/commits?author=ashrftvm" title="Code">💻</a> </td> <td align="center" valign="top" width="14.28%"> <a href="https://github.com/mirageoasis"> <img src="https://avatars.githubusercontent.com/u/37329424?v=4?s=100" width="100px;" alt="mirageoasis"/> <br /> <sub><b>mirageoasis</b></sub> </a> <br /> <a href="https://github.com/junhoyeo/threads-py/commits?author=mirageoasis " title="Code">💻</a> </td> </tr> </tbody> </table> ## License <p align="center"> <a href="https://github.com/junhoyeo"> <img src="./.github/labtocat.png" width="256" height="256"> </a> </p> <p align="center"> <strong>MIT © <a href="https://github.com/junhoyeo">Junho Yeo</a></strong> </p> If you find this project intriguing, **please consider starring it(⭐)** or following me on [GitHub](https://github.com/junhoyeo) (I wouldn't say [Threads](https://www.threads.net/@_junhoyeo)).
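### Combined Read Example: The read endpoints documented above can be chained together. This is a minimal sketch, assuming `api` is the same authenticated client object used in the examples above; it reuses the sample thread URL from the examples, and `'some_username'` is only a placeholder.

```python3
# Resolve a post ID from a public thread URL, then read the thread and who liked it.
parent_post_url = 'https://www.threads.net/t/CugF-EjhQ3r'
post_id = api.get_post_id_from_url(parent_post_url)

single_thread = api.get_threads(post_id)       # the thread itself
thread_likes = api.get_thread_likers(post_id)  # users who liked it

# Public profile data and threads for a user ('some_username' is a placeholder).
username = 'some_username'
user_profile = api.get_user_profile(username)
user_threads = api.get_user_profile_threads(username)
```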
roberthsu2003/__112-python-chihlee__
https://github.com/roberthsu2003/__112-python-chihlee__
null
# 112 Python Programming and Data Chart Plotting ## Class Meeting URL https://meet.google.com/hcp-vnri-eru ## 2023_07_01 Morning https://youtube.com/live/7N1hFYQiw3M ## 2023_07_01 Afternoon https://youtube.com/live/RF8Jf60FUAA ## 2023_07_08 Morning https://youtube.com/live/kqPojoAEH2k ## 2023_07_08 Afternoon https://youtube.com/live/_K_6jucfEBY ## 2023_07_15 Morning https://youtube.com/live/Z4qEF3kvT94 ## 2023_07_15 Afternoon https://youtube.com/live/jkLEsjii7U4 ## 2023_07_22 Morning https://youtube.com/live/P2P0tzD0oHo ## 2023_07_22 Afternoon https://youtube.com/live/ajU_bMfGYvE ## 2023_07_29 Morning https://youtube.com/live/Aa5wVpLEL-o ## 2023_07_29 Afternoon https://youtube.com/live/Q5TbUd8h8js
X-D-Lab/Sunsimiao
https://github.com/X-D-Lab/Sunsimiao
🌿孙思邈中文医疗大模型(Sunsimiao):提供安全、可靠、普惠的中文医疗大模型
<h1 align="center">孙思邈中文医疗大模型</h1> <p align="center"> <p align="center"> <a href="https://github.com/thomas-yanxin/Sunsimiao"><img src="https://img.shields.io/badge/GitHub-24292e" alt="github"></a> <a href="https://huggingface.co/thomas-yanxin/Sunsimiao-01M-lora"><img src="https://img.shields.io/badge/HuggingFace-yellow" alt="HuggingFace"></a> <a href="https://modelscope.cn/organization/X-D-Lab"><img src="https://img.shields.io/badge/ModelScope-blueviolet" alt="modelscope"></a> <a href="https://openi.pcl.ac.cn/Learning-Develop-Union/Sunsimiao"><img src="https://img.shields.io/badge/-OpenI-337AFF" alt="OpenI"></a> </p> <div align="center"> [![GitHub LICENSE](https://img.shields.io/github/license/X-D-Lab/Sunsimiao)](https://github.com/X-D-Lab/Sunsimiao/stargazers) [![GitHub Stars](https://img.shields.io/github/stars/X-D-Lab/Sunsimiao)](https://github.com/X-D-Lab/Sunsimiao/blob/main/LICENSE) [![GitHub Forks](https://img.shields.io/github/forks/X-D-Lab/Sunsimiao)](https://github.com/X-D-Lab/Sunsimiao/fork) [![GitHub Contributors](https://img.shields.io/github/contributors/X-D-Lab/Sunsimiao)](https://github.com/X-D-Lab/Sunsimiao/graphs/contributors) **[中文](README_CN.md) | [English](README_EN.md)** </div> <div align=center><img width = '400' height ='400' src ="./image/sunsimiao.png"/></div> ### 模型介绍 **孙思邈**, 唐代医药学家、道士, 被后人尊称为"药王". 其十分重视民间的医疗经验, 不断积累走访, 及时记录下来, 写下著作《千金要方》. 唐朝建立后, 孙思邈接受朝廷的邀请, 与政府合作开展医学活动, 完成了世界上第一部国家药典《唐新本草》. **孙思邈中文医疗大模型**(简称: Sunsimiao)希望能够遵循孙思邈的生平轨迹, 重视民间医疗经验, 不断累积中文医疗数据, 并将数据附加给模型, 致力于提供**安全、可靠、普惠**的中文医疗大模型. 此外, 本团队同时关注人们的心理健康, 致力于为用户提供**隐私、温暖、舒适、及时、方便**的心理服务的[MindChat(漫谈)心理大模型](https://github.com/X-D-Lab/MindChat), 欢迎关注, 敬请批评指证! ### 模型列表 | 模型名称 | lora权重 | 合并后的权重 | | :----: | :----: | :----: | | Sunsimiao-01M | [modelscope](https://modelscope.cn/models/thomas/Sunsimiao_lora/summary) / [huggingface](https://huggingface.co/thomas-yanxin/Sunsimiao-0.1M-lora) | [modelscope](https://modelscope.cn/models/AI-ModelScope/Sunsimiao/summary) / huggingface | | Sunsimiao-01M-Chat | [modelscope](https://modelscope.cn/models/thomas/Sunsimiao-01M-Chat-lora/summary) / huggingface | [modelscope](https://modelscope.cn/models/thomas/Sunsimiao-01M-Chat/summary) / huggingface | | Sunsimiao-01M-6B* | modelscope / huggingface | [modelscope](https://modelscope.cn/models/thomas/Sunsimiao-6B/summary) / huggingface | | Sunsimiao-InternLM-01M* | modelscope / huggingface | [modelscope](https://modelscope.cn/models/thomas/Sunsimiao-InternLM-01M/summary) / huggingface | ### 使用方法 ```Python from modelscope.pipelines import pipeline from modelscope.utils.constant import Tasks pipe = pipeline(task=Tasks.text_generation, model='AI-ModelScope/Sunsimiao', model_revision='v1.0.0') query = '小孩发烧了怎么办?' prompt="Below is an instruction that describes a task. Write a response that appropriately completes the request." prompt+="### Instruction:\n{}\n\n### Response:\n".format(query) result = pipe(prompt) print(result) ``` ``` {'text': 'Below is an instruction that describes a task. Write a response that appropriately completes the request.### Instruction:\n小孩发烧了怎么办?\n\n### Response:\n\n如果您的孩子发烧了,您可以参考以下建议:\n\n1. 帮助孩子退热:\n\n将冷水倒入一个杯子,放在孩子的腋下(或额头),每5分钟转一次。\n\n将冰毛巾擦在孩子的胸部、颈部和后背。\n\n将退烧药放在孩子额头或颈部。\n\n2. 检查孩子的症状:\n\n是否咳嗽、鼻塞或流鼻涕?\n\n3. 根据症状:\n\n如果孩子咳嗽,可以给孩子喝些水或温水,并给孩子喝一些蜂蜜;\n\n如果孩子鼻塞,可以给孩子吸入一些蒸汽;\n\n如果孩子发烧,可以给孩子喂些温水,并让孩子休息;\n\n如果孩子呼吸急促或呼吸困难,应立即就医。\n\n4. 如果孩子有其他不适:\n\n如果孩子没有食欲或呕吐,可以给孩子吃些软餐;\n\n如果孩子感到疲倦,可以让孩子休息;\n\n如果孩子的眼睛出现红色或分泌物,应立即就医。\n\n5. 
注意孩子的个人卫生:\n\n让孩子勤喝水,勤洗手,保持室内通风;\n\n避免孩子与别的孩子进行密切接触,以免传染。\n\n祝您的孩子早日康复!'} ``` **更多使用方法见[scripts](./scripts/)** ### 部分测试 | Human | Sunsimiao_01M_Chat| Sunsimiao_05M_v2 | Sunsimiao-InternLM-01M | | :----: | ---- | ---- | ---- | | 小孩发烧怎么办? | 如果您的孩子发烧, 您可以采取以下措施:<br/>1. 给孩子多喝水. 发烧会导致身体脱水, 所以要让孩子多喝水以补充水分和电解质.<br/>2. 给孩子服用退烧药. 如果孩子的体温超过38°C, 建议使用退烧药物来缓解症状. 但是请注意, 不要随意给孩子使用药物, 最好咨询医生的意见.<br/>3. 给孩子进行物理降温. 可以用温水擦浴、冷敷等方法帮助孩子降低体温.<br/>4. 保持室内通风. 在孩子发烧期间, 应该保持室内空气流通, 避免空气不流通导致细菌滋生.<br/>5. 观察孩子的病情变化. 如果孩子的体温持续升高或者出现其他不适症状, 应及时就医. | 如果孩子发烧, 可以采取以下措施:<br/>1. 让孩子多喝水. 水是身体的重要组成部分, 喝足够的水有助于保持体内水分平衡和促进新陈代谢.<br/>2. 用温湿毛巾敷在孩子的额头上, 帮助降低体温.<br/>3. 如果孩子有咳嗽、流鼻涕等症状, 可以用盐开水给孩子漱口或用生理盐水滴鼻子.<br/>4. 在家里准备一些退烧药, 如布洛芬等, 以备不时之需. 但需要注意的是, 药物必须按照医生的建议使用, 不能随意增加剂量或者延长用药时间. 同时, 不要让宝宝自行服用药物, 以免出现不良反应. | 如果小孩发烧,建议先给小孩测量体温,如果体温超过38.5℃,可以使用退烧药,如布洛芬或对乙酰氨基酚等。同时,可以给孩子多喝水,保持室内空气流通,并注意观察孩子的精神状态和食欲。如果体温持续不退或者出现其他症状,建议及时就医。另外,注意保持小孩的卫生和休息,避免接触感染源。如果小孩有其他疾病或过敏史,请遵医嘱用药。请注意,退烧药不能代替医生的诊断和治疗。如有疑问,请咨询医生。 | | 一岁宝宝发烧能吃啥药? | 对于一岁的宝宝, 建议不要随意使用药物治疗. 如果宝宝只是轻微的发热, 可以多喝水、注意休息和观察病情变化. 如果宝宝体温超过38°C, 可以给宝宝服用退烧药, 如对乙酰氨基酚等. 但是, 一定要按照药品说明书上的用法用量进行用药, 避免过量或不足. 同时, 最好咨询医生的意见, 以确保安全有效地控制宝宝的发热症状. | 对于一岁的宝宝, 建议不要随意使用药物治疗. 如果宝宝只是轻微的发热, 可以多喝水、注意休息和观察病情变化. 如果宝宝体温超过38°C, 可以给宝宝服用退烧药, 如对乙酰氨基酚等. 但是, 一定要按照药品说明书上的用法用量进行用药, 避免过量或不足. 同时, 最好咨询医生的意见, 以确保安全有效地控制宝宝的发热症状. | 一岁宝宝发烧时,最好不要给药,因为宝宝的身体机能还比较弱,药物可能会对宝宝的身体产生不良影响。建议您采取以下措施来缓解宝宝的发烧症状:<br/>1. 给宝宝多喝水,保持充足的水分摄入。<br/>2. 给宝宝穿宽松、透气的衣服,保持室内空气流通。<br/>3. 给宝宝用温水擦拭身体,缓解宝宝的身体不适。<br/>4. 如果宝宝发烧超过38.5℃,可以考虑使用退烧药,但需要在医生的指导下使用。<br/>5. 如果宝宝有其他不适症状,如呕吐、腹泻等,需要及时就医。<br/><br/>请注意,以上建议仅供参考,如有疑问,请咨询专业医生。 | | 医生, 您好白带增多泛黄怎么回事? | 你好, 根据你的描述, 可能是阴道炎、宫颈糜烂等妇科疾病引起的. 建议你及时就医, 进行相关检查和治疗. 同时注意个人卫生, 保持外阴清洁干燥, 避免穿紧身裤或合成材料的内衣, 以免加重症状. | 你好, 根据你的描述, 可能是阴道炎、宫颈糜烂等妇科疾病引起的. 建议你及时就医, 进行相关检查和治疗. 同时注意个人卫生, 保持外阴清洁干燥, 避免穿紧身裤或合成材料的内衣, 以免加重症状. | 您好,白带增多泛黄可能是由于多种原因引起的,例如阴道感染、荷尔蒙水平变化、药物使用、性行为等。建议您尽快就医,进行相关检查和诊断,以确定病因并采取相应的治疗措施。同时,注意个人卫生,保持阴道清洁干燥,避免使用刺激性药物和化妆品,避免过度清洁和使用不合适的卫生巾和卫生棉条等。如果您有性行为,建议您在性行为前使用安全套,以减少感染的风险。请务必咨询医生以获取更详细的建议和指导。祝您健康! | | 心肌梗塞能治疗吗? | 心肌梗塞是一种严重的疾病, 需要及时的治疗. 目前, 心肌梗塞的治疗主要包括药物治疗和手术治疗两种方式. 药物治疗主要是通过口服或静脉注射一些抗凝血药、扩张血管药等来改善心脏供血情况, 缓解症状. 手术治疗则是通过冠状动脉旁路移植术、支架植入术等方式来恢复心脏的正常功能. 此外, 患者还需要注意饮食健康, 保持良好的心态和生活习惯, 以促进康复. | 心肌梗塞是一种严重的疾病, 需要及时的治疗. 目前, 对于急性心肌梗塞患者来说, 最主要的是尽早开通血管, 恢复心脏供血功能. 在药物治疗的基础上, 可以采用介入手术、冠状动脉搭桥术等方法进行治疗. 同时, 还需要注意患者的饮食和生活习惯, 避免过度劳累和情绪波动等情况的发生. | 心肌梗塞是一种非常严重的疾病,如果未得到及时的治疗,可能会导致严重的后果,如心脏衰竭、心律失常、心力衰竭等。因此,一旦确诊为心肌梗塞,应该立即就医,接受专业的治疗。治疗方法包括药物治疗、手术治疗和介入治疗等。具体治疗方案应该根据患者的具体情况和病情严重程度来制定。如果您或您身边的人患有心肌梗塞,请及时就医并遵医嘱进行治疗。同时,预防心肌梗塞的发生也非常重要,如保持良好的生活习惯、控制高血压、糖尿病等慢性疾病,以及避免吸烟、饮酒等不良习惯。 | | 对于使用NOACs的患者接种COVID-疫苗, 需要注意哪些事项? | 对于使用NOACs的患者接种COVID-19疫苗, 需要特别注意以下几点:<br/>1. 建议在接种前咨询医生, 了解是否适合接种.<br/>2. 在接种期间和接种后一周内, 应避免进行剧烈运动、重体力劳动等活动, 以防出现出血等不良反应.<br/>3. 如果患者正在服用NOACs药物, 应该告知医生, 以便医生根据具体情况决定是否可以接种疫苗.<br/>4. 如果患者有任何不适或疑虑, 应及时就医并告知医生疫苗接种的情况. | 1. 患者在接受NOAC治疗期间应避免接种疫苗.<br/>2. 如果必须进行疫苗接种, 建议咨询医生的意见并遵循以下指导: <br/>a) 在开始任何一种疫苗之前, 需要告知您的医疗团队您正在服用NOAC药物; <br/>b) NOAC药物可能会影响某些疫苗的效果和安全性, 因此请务必告诉您的医疗团队您正在使用的所有药物(包括非处方药、草药等); <br/>c) 对于那些已经完成NOAC治疗并且没有出现严重副作用的患者来说, 他们可以考虑接种新冠疫苗. 但是, 如果他们在过去6个月内曾发生过严重的出血事件或血栓形成史, 则不应该接种新冠疫苗.<br/>3. 请注意, 如果您有任何疑问或担忧, 请及时与您的医疗团队联系以获取更多信息. | 对于使用NOACs的患者,接种COVID-19疫苗时需要注意以下几点:<br/>1. 请先咨询医生:使用NOACs的患者应该在接种前咨询医生,了解疫苗的禁忌症和注意事项,以及是否需要调整药物剂量或停止使用药物。<br/>2. 疫苗接种前停止使用NOACs:在接种疫苗前,应该停止使用NOACs药物,以确保疫苗的有效性。<br/>3. 接种前检查:在接种前,应该进行血液检查,以确保身体状况良好,没有其他疾病或过敏反应。<br/>4. 接种后观察:在接种后,应该密切观察身体反应,如发热、头痛、肌肉疼痛、疲劳等,如果出现任何不适症状,应该及时就医。<br/>5. 
疫苗接种后继续使用NOACs:在接种后,应该继续按照医生的建议使用NOACs药物,以确保药物的安全性和有效性。<br/><br/>总之,使用NOACs的患者在接种COVID-19疫苗前需要咨询医生,并按照医生的建议进行接种。同时,在接种后需要密切观察身体反应,如果出现任何不适症状,应该及时就医。 | ### 致谢 本项目由**华东理工大学 薛栋课题组**发起, 并受到以下项目及平台的大力支持, 在此表示感谢! 1. [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning): 提供微调代码 2. [OpenI启智社区](https://openi.pcl.ac.cn/): 提供模型训练算力 3. [魔搭ModelScope](https://modelscope.cn/home): 提供训练思路和模型存储 4. [文心一格](https://yige.baidu.com/): 生成模型logo ``` @Misc{llama-efficient-tuning, title = {LLaMA Efficient Tuning}, author = {hiyouga}, howpublished = {\url{https://github.com/hiyouga/LLaMA-Efficient-Tuning}}, year = {2023} } ``` <a href="https://github.com/thomas-yanxin/Sunsimiao/graphs/contributors"> <img src="https://contrib.rocks/image?repo=thomas-yanxin/Sunsimiao" /> </a> ### 免责申明 本仓库所有开源代码及模型均遵循GPL-3.0许可认证. 开源模型可能存在以下局限. 欢迎商业交流与合作! 1. **孙思邈中文医疗大模型**存在固有的局限性, 可能产生错误的、有害的、冒犯性的或其他不良的输出. 用户在关键或高风险场景中应谨慎行事, 不要使用这些模型作为最终决策参考, 以免导致人身伤害、财产损失或重大损失. 2. **孙思邈中文医疗大模型**由**baichuan-7B**模型微调而得, 按"原样"提供, 在任何情况下, 作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任. 3. 使用**孙思邈中文医疗大模型**即表示您同意这些条款和条件, 并承认您了解其使用可能带来的潜在风险. 您还同意赔偿并使作者、贡献者和版权所有者免受因您使用**孙思邈中医药大模型**而产生的任何索赔、损害赔偿或责任的影响. ### 引用 ``` @misc{Sunsimiao, author={Xin Yan, Dong Xue*}, title = {Sunsimiao: Chinese Medical Large Language Model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/X-D-Lab/Sunsimiao}}, } ```
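As a convenience, the Usage example above can be wrapped in a small helper that applies the same Alpaca-style template and strips the echoed prompt from `result['text']`. This is a minimal sketch built directly on the Usage section, using the same model and revision:

```Python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

pipe = pipeline(task=Tasks.text_generation, model='AI-ModelScope/Sunsimiao', model_revision='v1.0.0')

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
    "### Instruction:\n{}\n\n### Response:\n"
)

def ask(query: str) -> str:
    """Format the query with the template from the Usage section and return only the generated answer."""
    prompt = PROMPT_TEMPLATE.format(query)
    text = pipe(prompt)['text']
    return text[len(prompt):].strip()

print(ask('小孩发烧了怎么办?'))
```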
lemon0830/TIM
https://github.com/lemon0830/TIM
code for Teaching LM to Translate with Comparison
![image](https://github.com/lemon0830/TIM/blob/main/images/Fig_Model.png) # **TIM: Teaching LM to Translate with Comparison** :star: **Support** :star: - LLMs: BLOOM-(e.g., [BLOOM-1b7](https://huggingface.co/bigscience/bloomz-1b7), [BLOOMZ-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt)), LLaMA-(e.g., [LLaMA-7b](https://huggingface.co/yahma/llama-7b-hf),[LLaMA-13b](https://huggingface.co/yahma/llama-13b-hf)), ChatGLM-(e.g., [ChatGLM2-6b](https://huggingface.co/THUDM/chatglm2-6b)) - our Proposed TIM [[run_clm.py](https://github.com/lemon0830/TIM/blob/main/sft_reward_training/run_clm.py)] and Vanilla Instruct-tuning[[run_clm_sft.py]](https://github.com/lemon0830/TIM/blob/main/sft_reward_training/run_clm_sft.py) - LoRA, Tuning with Embedding Fixed, Full Parameters Tuning - [Data-streaming](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/docs/source/stream.mdx) - Distributed training with [deepspeed ZeRO stage 1/2/3](https://huggingface.co/docs/transformers/main_classes/deepspeed) - Try our fine-tuned model at the HuggingFace model hub: - **[TIM-BLOOMZ-7b](https://huggingface.co/Lemoooon/TIM-BLOOMZ-7b)** - **[TIM-LLaMA-13b](https://huggingface.co/Lemoooon/TIM-LLaMA-13b)** - Please refer our **[paper](https://arxiv.org/pdf/2307.04408.pdf)** for more detail. :star: **Tips** :star: - When training with Deepspeed ZeRO stage 1/2, we can set --use_low_cpu_mem=True to save memory usage - After training a model using Deepspeed **ZeRO stage3**, we need to use [sft_reward_training/change_param_name.py](https://github.com/lemon0830/TIM/blob/main/sft_reward_training/change_param_name.py) to perform a transformation of the model's parameter names before inference. ## Quick start ### Environment We develop TIM with [HuggingFaces's transformers](https://github.com/huggingface/transformers) and [Deepspeed-chat](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat). Requirements: - Python 3.7.9 - Pytorch 1.10.0+cu111 - Transformers 4.29 - accelerate==0.19.0 - numpy==1.22.4 - deepspeed==0.9.0 - scikit-learn ### Datasets - Training data: [train_data/alpaca_reward.json](https://github.com/lemon0830/TIM/blob/main/train_data/alpaca_reward.json), **[train.data.json](https://huggingface.co/datasets/Lemoooon/Train-for-TIM)** An essential ingredient of our method is the construction of samples used to provide comparison signals for model learning. In addition to regular translation data, we construct data used for comparison by introducing dictionary information or translation errors ![image](https://github.com/lemon0830/TIM/blob/main/images/Fig_data_construct.png) - test data: [test_data/wmt22](https://github.com/lemon0830/TIM/tree/main/test_data/wmt22), [test_data/flores200](https://github.com/lemon0830/TIM/tree/main/test_data/flores200) ### Data Construction for TIM We modify add_noisy.py in [noisy-text](https://github.com/valentinmace/noisy-text). - [add noisy](https://github.com/lemon0830/TIM/tree/main/noisy-text) We use the following setting in our paper: ``` python add_noise.py data/example --delete_probability 0.15 --replace_probability 0.15 --filler_token '' --permutation_range 1 ``` Then, you can run [[run_reward.sh]](https://github.com/lemon0830/TIM/blob/main/data_processing/run_reward.sh) to get the final training data for TIM. ### Instruct Tuning with TIM We modify `run_clm.py` and `Trainer` in transformers, and `utils` for LoRA in Deepspeed-Chat. 
In addition to vanilla fine-tuning of all model parameters, parameter-efficient fine-tuning methods such as prefix tuning and LoRA have been proposed specifically for large language models. We adopt three different strategies for tuning the models, listed below in order of the number of fine-tuned parameters, from fewest to most. **(1) LoRA: Tuning with Low-rank Matrices** - [sft_reward_training/run_lora.sh](https://github.com/lemon0830/TIM/blob/main/sft_reward_training/run_lora.sh) ``` --only_optimize_lora # if True, only optimize the LoRA parameters --lora_dim 8 --lora_alpha 16 --lora_droppout 0.05 --lora_module_name ${LORA_MODULE_NAME} ``` **(2) FixEmb: Tuning with Embedding Fixed** - [sft_reward_training/run_fixemb.sh](https://github.com/lemon0830/TIM/blob/main/sft_reward_training/run_fixemb.sh) ``` --only_optimize_layers "9" "8" "7" "6" "5" "4" "3" "2" "1" "0" ``` **(3) Full: Tuning with Full Parameters** - [sft_reward_training/run_full.sh](https://github.com/lemon0830/TIM/blob/main/sft_reward_training/run_full.sh) ### Deepspeed Config - deepspeed_config/ds_config.json, deepspeed_config/ds_config_stage2.json, deepspeed_config/ds_config_stage3.json ### Inference - inference/infer_bloom.py, inference/infer_llama.py - [inference/run_test_bloomz.sh](https://github.com/lemon0830/TIM/blob/main/inference/run_test_bloomz.sh) ``` --rootmodel # if using LoRA, the path of the foundation model --ifhint # add a note indicating there are no mistakes in the hypothesis --ifsample # if true, use sampling; otherwise use beam search for inference --ifreranking # use the preference score to select a preferred hypothesis from the candidates --vocab # the dictionary for dict-guided inference --reverse # whether to reverse the source and target languages when loading the dictionary ``` ### Experimental Results We evaluate TIM's performance on the WMT and FLORES-200 dev-test tasks, comprising four language pairs. <div align="center"> <img src="https://github.com/lemon0830/TIM/blob/main/images/Fig_Results.png" width="70%" alt="result"/> </div> ### Citation Please kindly cite our paper if you find it helpful: ```ruby @inproceedings{zeng2023tim, title={TIM: Teaching LM to Translate with Comparison}, author={Jiali Zeng and Fandong Meng and Yongjing Yin and Jie Zhou}, booktitle = {ArXiv}, year = {2023}, url = {https://arxiv.org/pdf/2307.04408.pdf} } ```
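For a quick smoke test of a released checkpoint outside of the provided inference scripts, a merged model can also be loaded with plain Hugging Face `transformers`. This is only a rough sketch: it assumes the `Lemoooon/TIM-BLOOMZ-7b` checkpoint linked above and uses a placeholder instruction; see the `inference/` scripts for the exact prompt format and the hint/dict-guided/reranking options.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lemoooon/TIM-BLOOMZ-7b"  # merged checkpoint from the model list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU more easily
    device_map="auto",
)

# Placeholder instruction; the exact template used in training lives in inference/.
prompt = "Translate the following sentence from English to German: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```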
kendryte/k230_docs
https://github.com/kendryte/k230_docs
Kendryte K230 SDK Docs
# 嘉楠勘智K230文档 [![Github repository](https://img.shields.io/badge/github-repository-blue?logo=github&style=plastic)](https://github.com/kendryte/k230_docs) [![Gitee repository](https://img.shields.io/badge/gitee-repository-blue?logo=gitee&style=plastic)](https://gitee.com/kendryte/k230_docs) [![GitHub release](https://img.shields.io/github/v/release/kendryte/k230_docs?color=brightgreen&display_name=tag&logo=github&style=plastic)](https://github.com/kendryte/k230_docs/releases) [TOC] ## K230相关资源链接 | 链接 | 说明 | | ------ | ---- | | [K230 SDK](https://github.com/kendryte/k230_sdk) | Kendryte K230 SDK Github Repo | |[K230 Docs](https://github.com/kendryte/k230_docs)| Kendryte K230 Docs Github Repo | | [K230 SDK Gitee mirror](https://gitee.com/kendryte/k230_sdk) | Kendryte K230 SDK Gitee mirror Repo | |[K230 Docs Gitee mirror](https://gitee.com/kendryte/k230_docs)| Kendryte K230 Docs Gitee mirror Repo | |[K230 SDK Release mirror](https://kendryte-download.canaan-creative.com/k230/release/sdk/k230_sdk.tar.gz)| 嘉楠勘智K230 SDK release k230_sdk.tar.gz下载 | |[K230 Docs Release mirror](https://kendryte-download.canaan-creative.com/k230/release/docs/k230_docs.tar.gz)| 嘉楠勘智K230 Docs release k230_docs.tar.gz下载 | |[Canaan Kendryte K230](https://www.canaan-creative.com/product/k230)| 嘉楠勘智K230 | |[Canaan Kendryte Forum](https://developer.canaan-creative.com/)| 嘉楠勘智开发者论坛 | ## K230硬件文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_DEMO_BOARD资源使用指南.md](zh/00_hardware/K230_DEMO_BOARD资源使用指南.md) | K230 Demo Board 硬件资源使用说明文档 | | [K230_硬件设计指南.md](zh/00_hardware/K230_硬件设计指南.md) | K230 硬件设计指南文档 | | [K230_PINOUT_V1.0_20230524](zh/00_hardware/K230_PINOUT_V1.0_20230524.xlsx) | K230 芯片引脚定义 | | [K230_datasheet.md](zh/00_hardware/K230_datasheet.md) | K230 Datasheet | ### K230工程设计文件 | 文档名称 | 文档概要 | | --- | --- | | [K230_DEMO_BOARD-20230427-DXF](zh/00_hardware/K230/K230_DEMO_BOARD-20230427-DXF.zip) | K230 Demo Board DXF 文件| | [K230_DEMO_BOARD-20230427-GERBER](zh/00_hardware/K230/K230_DEMO_BOARD-20230427-GERBER.zip) | K230 Demo Board制板用GERBER文件 | | [K230_DEMO_BOARD-20230427-PCB](zh/00_hardware/K230/K230_DEMO_BOARD-20230427-PCB.zip) | K230 Demo Board PCB Cadence 设计文件| | [K230_DEMO_BOARD-20230427-SMT](zh/00_hardware/K230/K230_DEMO_BOARD-20230427-SMT.zip) | K230 Demo Board SMT 焊接排布与钻孔文件| | [K230-DEMO-BOARD-BOM-20230510](zh/00_hardware/K230/K230-DEMO-BOARD-BOM-20230510.xlsx) | K230 Demo Board所用器件BOM | | [K230-DEMO-BOARD-SCH.DSN](zh/00_hardware/K230/K230-DEMO-BOARD-SCH.DSN) | K230 Demo Board 原理图Cadence CIS Capture文件 | | [K230-DEMO-BOARD-SCH.pdf](zh/00_hardware/K230/K230-DEMO-BOARD-SCH.pdf) | K230 Demo Board 原理图 pdf文件 | ### K230D工程设计文件 | 文档名称 | 文档概要 | | --- | --- | | [K230-SIP-EVB-V1_0-A_1-20230322-DXF](zh/00_hardware/K230D/K230-SIP-EVB-V1_0-A_1-20230322-DXF.zip) | K230D Demo Board DXF 文件| | [K230-SIP-EVB-V1_0-A_1-20230322-GERBER](zh/00_hardware/K230D/K230-SIP-EVB-V1_0-A_1-20230322-GERBER.zip) | K230D Demo Board制板用GERBER文件 | | [K230-SIP-EVB-V1_0-A_1-20230322-PCB](zh/00_hardware/K230D/K230-SIP-EVB-V1_0-A_1-20230322-PCB.zip) | K230D Demo Board PCB Cadence 设计文件| | [K230-SIP-EVB-V1_0-A_1-20230322-SMT](zh/00_hardware/K230D/K230-SIP-EVB-V1_0-A_1-20230322-SMT.zip) | K230D Demo Board SMT 焊接排布与钻孔文件| | [K230-SIP-EVB-V1_0-A_1_20230316-BOM](zh/00_hardware/K230D/K230-SIP-EVB-V1_0-A_1_20230316-BOM.xlsx) | K230D Demo Board所用器件BOM | | [K230-SIP-EVB-V1_0-A_1-20230316.DSN](zh/00_hardware/K230D/K230-SIP-EVB-V1_0-A_1-20230316.DSN) | K230D Demo Board 原理图Cadence CIS Capture文件 | | 
[K230-SIP-EVB-V1.0-A.1-20230317.pdf](zh/00_hardware/K230D/K230-SIP-EVB-V1.0-A.1-20230317.pdf) | K230D Demo Board 原理图 pdf文件 | ## K230 SDK 软件文档 ### K230 EVB 软件文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_SDK_使用说明.md](zh/01_software/board/K230_SDK_使用说明.md) | K230 SDK 使用说明| #### K230 SDK nncase 文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_nncase_开发指南.md](zh/01_software/board/ai/K230_nncase_开发指南.md) | K230 nncase 开发指南| #### K230 sdk cdk 文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_核间通讯_API参考.md](zh/01_software/board/cdk/K230_核间通讯_API参考.md) | K230 核间通讯 API参考| | [K230_系统控制_MAPI参考.md](zh/01_software/board/cdk/K230_系统控制_MAPI参考.md) | K230 系统控制 MAPI参考| #### K230 sdk demo 文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_SDK_Demo使用指南.md](zh/01_software/board/examples/K230_SDK_Demo使用指南.md) | K230 SDK Demo使用指南| #### K230 sdk middleware 文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_openblas_开发使用指南.md](zh/01_software/board/middleware/K230_openblas_开发使用指南.md) | K230 OpenBLAS 开发使用指南| | [K230_opencv_开发使用指南.md](zh/01_software/board/middleware/K230_opencv_开发使用指南.md) | K230 OpenCV 开发使用指南| #### K230 sdk mpp 文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_Camera_Sensor适配指南.md](zh/01_software/board/mpp/K230_Camera_Sensor适配指南.md) | K230 Camera Sensor适配指南| | [K230_DMA_API参考.md](zh/01_software/board/mpp/K230_DMA_API参考.md) | K230 DMA API参考| | [K230_DPU_API参考.md](zh/01_software/board/mpp/K230_DPU_API参考.md) | K230 DPU API参考| | [K230_GPU_API参考.md](zh/01_software/board/mpp/K230_GPU_API参考.md) | K230 GPU API参考| | [K230_SHA256_API参考.md](zh/01_software/board/mpp/K230_SHA256_API参考.md) | K230 SHA256 API参考| | [K230_VICAP_API参考.md](zh/01_software/board/mpp/K230_VICAP_API参考.md) | K230 VICAP API参考| | [K230_系统控制_API参考.md](zh/01_software/board/mpp/K230_系统控制_API参考.md) | K230 系统控制 API参考| | [K230_视频编解码_API参考.md](zh/01_software/board/mpp/K230_视频编解码_API参考.md) | K230 视频编解码 API参考| | [K230_视频输出_API参考.md](zh/01_software/board/mpp/K230_视频输出_API参考.md) | K230 视频输出 API参考| | [K230_音频_API参考.md](zh/01_software/board/mpp/K230_音频_API参考.md) | K230 音频 API参考| #### K230 sdk osdrv 文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_DRM_API参考.md](zh/01_software/board/osdrv/K230_DRM_API参考.md) | K230 DRM API参考| | [K230_PMU_使用指南.md](zh/01_software/board/osdrv/K230_PMU_使用指南.md) | K230 PMU 使用指南| | [K230_SDK_IoT_WiFi_AiW4211LV10_使用指南.md](zh/01_software/board/osdrv/K230_SDK_IoT_WiFi_AiW4211LV10_使用指南.md) | K230 SDK IoT WiFi AiW4211LV10_使用指南| | [K230_SDK_IoT_WiFi_AiW4211LV10_驱动开发指南.md](zh/01_software/board/osdrv/K230_SDK_IoT_WiFi_AiW4211LV10_驱动开发指南.md) | K230 SDK IoT WiFi AiW4211LV10 驱动开发指南| | [K230_大核低速驱动_API参考.md](zh/01_software/board/osdrv/K230_大核低速驱动_API参考.md) | K230 大核低速驱动 API参考| | [K230_小核Linux驱动_API参考.md](zh/01_software/board/osdrv/K230_小核Linux驱动_API参考.md) | K230 小核Linux驱动 API参考| ### K230 PC 软件文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_SDK_Tuning_Tool使用指南.md](zh/01_software/pc/pqtools/K230_SDK_Tuning_Tool使用指南.md) | K230 SDK Tuning Tool 使用指南| | [K230_SDK_Dewarp使用指南.md](zh/01_software/pc/dewarp/K230_SDK_Dewarp使用指南.md) | K230 SDK Dewarp 使用指南 | ## K230 SDK 使用教程 ### K230 SDK 基础教程 | 文档名称 | 文档概要 | | --- | --- | | [K230_实战基础篇_hello_world.md](zh/02_applications/tutorials/K230_实战基础篇_hello_world.md) | K230 实战基础篇 hello_world| | [K230_大小核通讯_sharefs用法介绍.md](zh/02_applications/tutorials/K230_大小核通讯_sharefs用法介绍.md) | K230 大小核通讯 sharefs用法介绍| | [K230_GPU应用实战.md](zh/02_applications/tutorials/K230_GPU应用实战.md) | K230 GPU应用实战| | [K230_GUI实战_LVGL移植教程.md](zh/02_applications/tutorials/K230_GUI实战_LVGL移植教程.md) | K230 GUI实战 LVGL移植教程| | [K230_RVV实战.md](zh/02_applications/tutorials/K230_RVV实战.md) | K230 RVV实战| | 
[K230_USB应用实战_UVC传输YUV及编码码流.md](zh/02_applications/tutorials/K230_USB应用实战_UVC传输YUV及编码码流.md) | K230 USB应用实战 UVC传输YUV及编码码流| | [K230_图像实战_开机视频和开机logo.md](zh/02_applications/tutorials/K230_图像实战_开机视频和开机logo.md) | K230 图像实战 开机视频和开机logo| | [K230_图像采集实战_如何支持一款新sensor.md](zh/02_applications/tutorials/K230_图像采集实战_如何支持一款新sensor.md) | K230 图像采集实战 如何支持一款新sensor| | [K230_音频实战_音频采集_播放和编解码.md](zh/02_applications/tutorials/K230_音频实战_音频采集_播放和编解码.md) | K230 实战基础篇 音频采集、播放和编解码| | [K230_编码实战_rtsp_server搭建和推流.md](zh/02_applications/tutorials/K230_编码实战_rtsp_server搭建和推流.md) | K230 编码实战 rtsp server 搭建和推流| | [K230_AI实战_HHB神经网络模型部署工具.md](zh/02_applications/tutorials/K230_AI实战_HHB神经网络模型部署工具.md) | K230 AI实战 HHB神经网络模型部署工具| ### K230 SDK 商业应用 | 文档名称 | 文档概要 | | --- | --- | | [K230_场景实战_智能门锁.md](zh/02_applications/business_poc/K230_场景实战_智能门锁.md) | K230 场景实战 智能门锁| ### K230 SDK AI教程 | 文档名称 | 文档概要 | | --- | --- | | [K230_AI_Demo介绍.md](zh/02_applications/ai_demos/K230_AI_Demo介绍.md) | K230 AI Demo介绍| ## K230 SDK 其它文档 | 文档名称 | 文档概要 | | --- | --- | | [K230_SDK_版本说明.md](zh/03_other/K230_SDK_版本说明.md) | K230 SDK 版本说明| ## 贡献指南 如果您对本项目感兴趣,想要反馈问题或提交文档,请参考[CONTRIBUTING](/.github/CONTRIBUTING.md) ## 联系我们 北京嘉楠捷思信息技术有限公司 网址:[www.canaan-creative.com](https://www.canaan-creative.com/) 商务垂询:[[email protected]](mailto:[email protected])
TechTitan0624/Mern-stack-Crup-system
https://github.com/TechTitan0624/Mern-stack-Crup-system
null
# Mern-Stack-Crud-App This is a simple CRUD project for beginners, built with the following technologies: - [React](https://facebook.github.io/react/) and [React Router](https://reacttraining.com/react-router/) for the frontend - [Express](http://expressjs.com/) and [Mongoose](http://mongoosejs.com/) for the backend ## Requirements - [Node.js](https://nodejs.org/en/) 10+ Install the dependencies: ```shell npm install ``` ## Usage - Backend ```shell npm start ``` - Frontend ```shell cd client && npm start ``` ## Contributions Contributions are welcome. Just submit a PR and I will make sure to review it 😸
kaangiray26/geddit
https://github.com/kaangiray26/geddit
Reddit's public API without authentication
# geddit Reddit's public API without authentication ## Install If you want to use this in your project, there are prebuilt versions available. * [geddit.min.js](geddit.min.js) - Minified version Import via the GitHub Pages URL: ``` https://kaangiray26.github.io/geddit/geddit.min.js ``` ## Examples ``` import { Geddit } from './geddit.js'; const g = new Geddit(); // Get the top 10 posts from r/AskReddit let posts = await g.getTop('AskReddit', { limit: 10 }) console.log(posts); ``` ## How it works Reddit has an undocumented feature that allows you to get data from their servers as RSS/JSON feeds. Geddit wraps around several endpoints to make it easier to get data from Reddit. This means that you can use Geddit to get data from Reddit without authentication, which is only suitable for getting public data. ## Why use this? * No authentication required * Get data without dealing with application IDs and secrets * NSFW content is allowed ## Methods Here are the methods that are available in Geddit. All methods return a promise that resolves to the data from Reddit. Methods are categorized by the source of the data. ### Submissions * getSubmission(ID) * getSubmissionComments(ID, options) * getHot(subreddit, options) * getBest(subreddit, options) * getTop(subreddit, options) * getNew(subreddit, options) * getRising(subreddit, options) * getControversial(subreddit, options) ### Domain specific submissions * getDomainHot(domain, options) * getDomainBest(domain, options) * getDomainTop(domain, options) * getDomainNew(domain, options) * getDomainRising(domain, options) * getDomainControversial(domain, options) ### Subreddits * getSubreddit(subreddit) * getSubredditComments(subreddit, options) * getSubredditRules(subreddit) * getSubredditModerators(subreddit) * getSubredditWikiPages(subreddit) * getSubredditWikiPage(subreddit, page) * getSubredditWikiPageRevisions(subreddit, page) * getPopularSubreddits(options) * getNewSubreddits(options) * getPremiumSubreddits(options) * getDefaultSubreddits(options) ### Users * getUser(username) * getUserComments(username, options) * getUserSubmission(username, options) * getPopularUsers(options) * getNewUsers(options) ### Search * searchAll(query, subreddit, options) * searchSubreddits(query, options) * searchUsers(query, options) ### Live threads * getLiveThread(ID) * getLiveThreadUpdates(ID, options) * getLiveThreadContributors(ID, options) * getLiveThreadDiscussions(ID, options) * getLiveThreadsNow(options) ## Contribution If you discover a new endpoint that is not in the list above, feel free to open an issue or a pull request. ## Disclaimer This project is an **open-source** software. The projects is nothing more than a simple example of how to use Reddit's RSS/JSON feeds to build a library and it is not intended to be used instead of the Reddit API. Instead, use the official Reddit app or any other Reddit client that uses Reddit's API for full functionality. **No data is collected** from the users and **no profit is made** from the library. Developers are not responsible for any damage caused by the library. **Use at your own risk.**
wobsoriano/svelte-sonner
https://github.com/wobsoriano/svelte-sonner
An opinionated toast component for Svelte.
https://github.com/wobsoriano/svelte-sonner/assets/13049130/4b9c250f-1431-4130-9c5b-5a4f8b0210c5 # svelte-sonner [![pnpm](https://img.shields.io/badge/maintained%20with-pnpm-cc00ff.svg?style=for-the-badge&logo=pnpm)](https://pnpm.io/) An opinionated toast component for Svelte. Based on [emilkowalski](https://github.com/emilkowalski)'s React [implementation](https://sonner.emilkowal.ski/). ## Quick start Install it: ```bash npm i svelte-sonner # or yarn add svelte-sonner # or pnpm add svelte-sonner ``` Add `<Toaster />` to your app, it will be the place where all your toasts will be rendered. After that, you can use `toast()` from anywhere in your app. ```svelte <script> import { Toaster, toast } from 'svelte-sonner' </script> <Toaster /> <button on:click={() => toast('My first toast')}>Give me a toast</button> ``` ## Types ### Default Most basic toast. You can customize it (and any other type) by passing an options object as the second argument. ```js toast('Event has been created') ``` With custom icon and description: ```js import Icon from './Icon.svelte' toast('Event has been created', { description: 'Monday, January 3rd at 6:00pm', icon: Icon, }) ``` ### Success Renders a checkmark icon in front of the message. ```js toast.success('Event has been created') ``` ### Error Renders an error icon in front of the message. ```js toast.error('Event has not been created') ``` ### Action Renders a button. ```js toast('Event has been created', { action: { label: 'Undo', onClick: () => console.log('Undo'), }, }) ``` ### Promise Starts in a loading state and will update automatically after the promise resolves or fails. ```js toast.promise(() => new Promise(resolve => setTimeout(resolve, 2000)), { loading: 'Loading', success: 'Success', error: 'Error', }) ``` You can pass a function to the success/error messages to incorporate the result/error of the promise. ```js toast.promise(promise, { loading: 'Loading...', success: (data) => { return `${data.name} has been added!` }, error: 'Error', }) ``` ### Custom Component You can pass a component as the first argument instead of a string to render custom component while maintaining default styling. You can use the headless version below for a custom, unstyled toast. ```js toast(CustomComponent) ``` ### Updating a toast You can update a toast by using the `toast` function and passing it the id of the toast you want to update, the rest stays the same. ```js const toastId = toast('Sonner') toast.success('Toast has been updated', { id: toastId, }) ``` ## Customization ### Headless You can use `toast.custom` to render an unstyled toast with custom component while maintaining the functionality. ```svelte <script> import { createEventDispatcher } from 'svelte'; const dispatch = createEventDispatcher() </script> <div> This is a custom component <button on:click={() => dispatch('removeToast')}>close</button> </div> ``` ```js import HeadlessToast from './HeadlessToast.svelte' toast.custom(HeadlessToast) ``` ### Theme You can change the theme using the `theme` prop. Default theme is light. ```svelte <Toaster theme="dark" /> ``` ### Position You can change the position through the `position` prop on the `<Toaster />` component. Default is `bottom-right`. ```svelte <!-- Available positions --> <!-- top-left, top-center, top-right, bottom-left, bottom-center, bottom-right --> <Toaster position="top-center" /> ``` ### Expanded Toasts can also be expanded by default through the `expand` prop. You can also change the amount of visible toasts which is 3 by default. 
```svelte <Toaster expand visibleToasts={9} /> ``` ### Styling for all toasts You can style your toasts globally with the `toastOptions` prop in the `Toaster` component. ```svelte <Toaster toastOptions={{ style: 'background: red;', class: 'my-toast', descriptionClass: 'my-toast-description' }} /> ``` ### Styling for individual toast ```js toast('Event has been created', { style: 'background: red;', class: 'my-toast', descriptionClass: 'my-toast-description', }) ``` ### Close button Add a close button to all toasts that shows on hover by adding the `closeButton` prop. ```svelte <Toaster closeButton /> ``` ### Rich colors You can make error and success state more colorful by adding the `richColors` prop. ```svelte <Toaster richColors /> ``` ### Custom offset Offset from the edges of the screen. ```svelte <Toaster offset="80px" /> ``` ### Programmatically remove toast To remove a toast programmatically use `toast.dismiss(id)`. ```js const toastId = toast('Event has been created') toast.dismiss(toastId) ``` To remove a toast from inside a custom component, dispatch `closeToast`: ```js import { createEventDispatcher } from 'svelte' const dispatch = createEventDispatcher() dispatch('removeToast') ``` You can also use the dismiss method without the id to dismiss all toasts. ```js // Removes all toasts toast.dismiss() ``` ### Duration You can change the duration of each toast by using the `duration` property, or change the duration of all toasts like this: ```svelte <Toaster duration={10000} /> ``` ```js toast('Event has been created', { duration: 10000, }) // Persisent toast toast('Event has been created', { duration: Number.POSITIVE_INFINITY, }) ``` ### On Close Callback You can pass `onDismiss` and `onAutoClose` callbacks. `onDismiss` gets fired when either the close button gets clicked or the toast is swiped. `onAutoClose` fires when the toast disappears automatically after it's timeout (`duration` prop). ```js toast('Event has been created', { onDismiss: t => console.log(`Toast with id ${t.id} has been dismissed`), onAutoClose: t => console.log(`Toast with id ${t.id} has been closed automatically`), }) ``` ## Keyboard focus You can focus on the toast area by pressing ⌥/alt + T. You can override it by providing an array of `event.code` values for each key. ```svelte <Toaster hotkey={['KeyC']} /> ``` ## License MIT
Hxaoi/cs-steal
https://github.com/Hxaoi/cs-steal
Discord Token Grabber , Password Stealer, Cookie Stealer, File Stealer, Crypto wallet Stealer etc.
**NOTE:** - Disclaimer - I'm not responsible for any damages this software may cause after being acquired. This software was made for personal **education** and **sandbox testing** purposes --- ## <a id="features"></a>💎 〢 Features ```diff > Default: - Anti-Debug - Anti-VM - Bypass VirusTotal Machines - Bypass Windows Defender - Steals Cookie, Password information from all Chromium based browsers(Chrome, Edge, OperaGX, Opera, Brave, Yandex and more) - Keyword Filtering able to see which websites information gathered without download - Grabs MetaMask, Exodus, Atomic, Coinbase, Binance, Trust Wallet, Phantom Wallet - Injection Discord, Discord Canary, DiscordPTB, Lightcord - Bypass Discord Token Protector, BetterDiscord - Gather Discord Token, Phone, Email, Badge, Billing - Validates found Discord Token and then sends it to your Webhook - Fetches HQ Friends (Early Supporter, Active Developer, Bot Developer) - Gather Network Information - File Stealer (Seed Phrases, Tokens, Private Keys, Recovery Codes, Backup Codes, 2FA) - Grabs Steam, Telegram, Riot Games Session - GUI Builder - Customizable Icon, Description, Name, Version - Add to Startup - Sends All Data Through Discord Webhook > Injection Discord: - Nitro Auto Buy - New Passwords - New Emails - New Login - New Credit Card - New PayPal > + More! ``` ## <a id="changelog"></a>💭 〢 ChangeLog ```diff v1.0 ⋮ 2023-26-05 + New README created + Removed malicious codes + Changed footer texts + Improved builder + Code optimization + Fixed missing emojis + Changed CStealer icon + Added new Anti-VM for less AV detections + Added Exodus Web3 Wallet Extension support + Active Developer badge support - Removed old Anti-VM method ``` ### <a id="forkedfrom"></a>🕵️‍♂️ 〢 Forked From: - WASP Stealer ### <a id="terms"></a>💼 〢 Terms Of Usage - [x] This software is provided as-is and I am not responsible for any damages that may occur after acquiring or using it. Please understand that this software is intended for personal educational purposes and sandbox testing only. Use it at your own risk. I make no guarantees about its performance or suitability for specific purposes. It is important to note that this software is not intended for critical or sensitive environments. Any usage beyond its intended purposes is your own decision and responsibility. You are solely responsible for evaluating your own technical abilities and knowledge to use the software appropriately. I cannot be held liable for any actions you take based on the information or functionality provided by the software. By acquiring or using this software, you agree to release me from any claims or liabilities. Please seek legal advice if you have any specific concerns or questions about the software's usage. - [x] By using/downloading this repository, you agree to the Commons Clause license and that you're not allowed to sell this repository or any code from this repository. For more info see https://commonsclause.com/ - [x] You're free to contribute the project <a href=#top>Back to Top</a></p>
gbrsh/CVE-2023-3460
https://github.com/gbrsh/CVE-2023-3460
Exploit for CVE-2023-3460. Unauthorized admin access for Ultimate Member plugin < v2.6.7
# CVE-2023-3460 Exploit for CVE-2023-3460 - Unauthorized admin access for Ultimate Member plugin. PoC is now published.<br> It was created for educational/research purposes only! Use it at your own risk! ![ultmem](https://github.com/gbrsh/CVE-2023-3460/assets/36970331/b7f73b08-e9c7-4dd3-b272-8ce269bba367)
jdubpark/Uniswap-Hooks
https://github.com/jdubpark/Uniswap-Hooks
Writing random Uniswap V4 hooks
# Random Uniswap v4 Hooks Random, untested, Proof-of-Concept Hooks — not for production! ## Usage To utilize the contracts and deploy to a local testnet, you can install the code in your repo with forge: ```bash forge install https://github.com/foundry-rs/forge-std --no-commit && forge install https://github.com/OpenZeppelin/openzeppelin-contracts --no-commit && forge install https://github.com/Uniswap/v4-core --no-commit && forge install https://github.com/axiom-crypto/axiom-v1-contracts --no-commit && forge install https://github.com/hamdiallam/Solidity-RLP --no-commit ``` ## Build ```bash forge build ``` ## License The license for this codebase is the GNU General Public License (GPL 2.0), see [LICENSE](https://github.com/Uniswap/periphery-next/blob/main/LICENSE).
wjwei-handsome/wgatools
https://github.com/wjwei-handsome/wgatools
Whole Genome Alignment Tools
![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/wjwei-handsome/wgatools/ci.yml) ![GitHub repo size](https://img.shields.io/github/repo-size/wjwei-handsome/wgatools) ## *W*hole *G*enome *A*lignment **T**ools ## A Rust library and tools for whole genome alignment files ## TOOLS ### WHAT HAVE DONE - [x] PAF file reader - [x] MAF file reader - [x] Chain file reader - [x] CIGAR string parser - [x] MAF2PAF - [x] MAF2Chain - [x] PAF2Chain - [x] PAF2Blocks - [x] PAF2MAF - [x] Chain2MAF - [x] Chain2PAF - [x] Call Variants from MAF ### WHAT WILL DO IN FUTURE - [ ] SAM file reader [really need?] - [ ] SAM converter [really need?] - [ ] Visualize genome alignment - [ ] Call variants and statistics/visualize them - [ ] Local improvement of alignment by re-alignment ### Install ```shell git clone https://github.com/wjwei-handsome/wgatools.git cd wgatools cargo build --release ``` ### Usages ```shell > ./target/release/wgatools wgatools -- a cross-platform and ultrafast toolkit for Whole Genome Alignment Files manipulation Version: 0.1.0 Authors: Wenjie Wei <[email protected]> Usage: wgatools [OPTIONS] <COMMAND> Commands: maf2paf Convert MAF format to PAF format [aliases: m2p] maf2chain Convert MAF format to Chain format [aliases: m2c] paf2maf Convert PAF format to MAF format [aliases: p2m] paf2chain Convert PAF format to Chain format [aliases: p2c] chain2maf Convert Chain format to MAF format [aliases: c2m] chain2paf Convert Chain format to PAF format [aliases: c2p] help Print this message or the help of the given subcommand(s) Options: -h, --help Print help (see more with '--help') -V, --version Print version GLOBAL: -o, --outfile <OUTFILE> Output file ("-" for stdout) [default: -] -r, --rewrite Bool, if rewrite output file [default: false] -t, --threads <THREADS> [default: 1] ``` > NOTE: If you want to convert into MAF format, you should provide target and query genome sequence files in [.fa/.fa.gz]. ## Library Some simple reader and iterator for PAF, MAF and Chain files: ```rust use wgatools::parser::paf::PafReader; use wgatools::parser::maf::MAFReader; use wgatools::parser::chain::ChainReader; fn main() { let mut mafreader = MAFReader::from_path("test.maf").unwrap(); for record in mafreader.records() { let record = record.unwrap(); println!("{:?}", record); } /// ... } ``` ### TODO for library - [ ] Error detection and handling - [ ] Test cases - [ ] Documentations [//]: # (> It should be extremely fast!![img]&#40;https://raw.githubusercontent.com/wjwei-handsome/wwjPic/main/img/20230706022535.png&#41;) ## Features - use `nom` to parse CIGAR string - use `rayon` to accelerate the speed of conversions - ... ## Contributing Feel free to dive in! [Open an issue](https://github.com/wjwei-handsome/GeneMap/issues/new) or submit PRs. ## License GPL-3.0 © Weiwenjie
hassanhabib/EntityIntelligence.POC
https://github.com/hassanhabib/EntityIntelligence.POC
null
# POC.EI
mlemgroup/mlem
https://github.com/mlemgroup/mlem
Mlem for Lemmy
# Mlem - the iOS Lemmy Client Mlem is a client for [Lemmy](https://join-lemmy.org) - a Reddit-esque, open-source link aggregator. With Mlem, you can effortlessly participate in the conversation across all Lemmy servers. [You can download Mlem here.](https://apps.apple.com/gb/app/mlem-for-lemmy/id6450543782) Mlem requires iOS 16 or later. If you'd like to participate in the beta version of Mlem, you can [join our Testflight program](https://testflight.apple.com/join/MelFP11Y). ## Why Use Mlem? Unlike many other clients, Mlem is designed to be easy to use first. Its UI is beautiful and intuitive, and it just gets out of your way so you can enjoy participating in the communities that matter to you the most. And that's not all; Mlem is also extensively optimized and performant, which means it will never be a performance and battery hog. You can scroll all day and night long! ## Screenshots <img src="https://github.com/mlemgroup/mlem/assets/78750526/6a9d2ea6-e874-4621-a3f6-9ad72caacba9" width="30%"> <img src="https://github.com/mlemgroup/mlem/assets/78750526/fda618b4-7d42-43e1-9bfa-1a5d7b7145c2" width="30%"> <img src="https://github.com/mlemgroup/mlem/assets/78750526/30478f39-169d-47ea-b983-a453af2b0959" width="30%"> ## Want to chat about Mlem? You're welcome to join our [community on lemmy.ml](https://lemmy.ml/c/mlemapp) or [Matrix room](https://matrix.to/#/#mlemapp:matrix.org)! ## Contributing Read our [contribution guide](./CONTRIBUTING.md) to get started! ## License Mlem is fully open-source, licensed under GPL 3.0 with an addendum for compliance with the Apple App Store. See LICENSE for details. ### App Icons Beehaw Community Icon by Aaron Schneider is included under [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
Sentdex/Falcon-LLM
https://github.com/Sentdex/Falcon-LLM
Helper scripts and examples for exploring the Falcon LLM models
# Falcon-LLM Helper scripts and examples for exploring the Falcon LLM models Overview of the model and use-cases: https://www.youtube.com/watch?v=-IV1NTGy6Mg Files: - `api_server.py` - Run locally or in the cloud. If you intend to host on a public IP, you should set up a proper web server first; this script uses the basic Flask demo server. - `api_client.py` - Make requests to the server. Loading and serving the model in a separate process makes R&D a lot easier, even if everything is on the same machine, because you are not re-loading the model every single time you make a change to your script. You can also use a notebook, but, depending on the complexity of your project, this might not be good enough. - `Falcon-40B-demo.ipynb` - a short notebook example of loading Falcon 40B with options for various datatypes (4-, 8-, and 16-bit). - `setup.sh` - a quick shell script for setting up requirements that I used for Lambda H100 machines (`chmod +x setup.sh`, then `./setup.sh`).
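For reference, the core of loading a Falcon checkpoint with Hugging Face `transformers` looks roughly like this. This is a minimal sketch assuming the smaller `tiiuae/falcon-7b-instruct` checkpoint and a CUDA GPU; the notebook covers Falcon 40B and the 4/8/16-bit loading options in more detail.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "tiiuae/falcon-7b-instruct"  # swap in tiiuae/falcon-40b if you have the VRAM

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 16-bit; see the notebook for the 8-bit/4-bit variants
    device_map="auto",
    trust_remote_code=True,      # older transformers releases load Falcon's modelling code from the hub
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Write a haiku about large language models.", max_new_tokens=60)[0]["generated_text"])
```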
vlcn-io/typed-sql
https://github.com/vlcn-io/typed-sql
null
# typed-sql Generates types for your SQL. https://github.com/vlcn-io/typed-sql/assets/1009003/4459edb2-4a52-4641-819d-5805c04d943a ALPHA release. In theory this covers all of SQLite. In practice, I have written very few tests yet, so there are likely many bugs. Documented issues: https://github.com/vlcn-io/typed-sql/issues # Installation Types are generated via a watch task installed with `@vlcn.io/typed-sql-cli`. ``` pnpm install @vlcn.io/typed-sql-cli ``` See [the cli package](./packages/cli) # Usage After installing the cli, see the [runtime component](./packages/typed-sql/README.md) package. # Current Limitations & Future Work ## Composition It is possible to do type generation for template strings that are themselves composed of other template strings, e.g., ```ts const query = sql`SELECT * FROM ${table} WHERE ${where_clauses}`; ``` assuming that some type information is available on the parameters to the template tag, e.g. `typeof table = 'foo' | 'bar' | 'baz'`. This is not yet supported. See [issue 10](https://github.com/vlcn-io/typed-sql/issues/10) ## Custom Types SQLite lets users declare custom types in their CREATE TABLE statements. ```sql CREATE TABLE foo ( id TEXT PRIMARY KEY, b MyCustomType, c MyOtherCustomType ); ``` We could parse these out and map them to actual TypeScript types. Doing this would require some runtime layer to convert columns to the correct types. Python bindings have done this -- https://docs.python.org/3/library/sqlite3.html#sqlite3-converters See [issue 8](https://github.com/vlcn-io/typed-sql/issues/8) ## Bool SQLite doesn't return bools! True is `1` and False is `0`. Given that this library currently handles only type generation, not execution, bools are typed as `number`. ## Intellisense I have not implemented a language server yet, so IntelliSense within the `sql` and `schema` template tags does not exist yet. It is planned. [issue 15](https://github.com/vlcn-io/typed-sql/issues/15) ## Syntax Highlighting Highlighting inside the `sql` and `schema` template tags is not yet supported. [issue 16](https://code.visualstudio.com/api/language-extensions/semantic-highlight-guide)
Rohan-San/AlgoLab
https://github.com/Rohan-San/AlgoLab
A collection of all the algorithms and analysis for the Algorithms Laboratory programs
# AlgoLab Key Contributor: **[@Kxitij](https://github.com/Kxitij)** <br><br> Topics (Click to jump): 1. [Algorithms](#algorithms) - [Linear Search](#linear-search) - [Binary Search](#binary-search) - [Bubble Sort](#bubble-sort) - [Insertion Sort](#insertion-sort) - [Merge Sort](#merge-sort) - [Quick Sort](#quick-sort) - [DFS](#depth-first-search) - [BFS](#breadth-first-search) 2. [Analysis](#analysis) - [Linear & Binary Search](#linear-and-binary-search) - [Bubble & Insertion Sort](#bubble-and-insertion-sort) - [Merge & Quick Sort](#merge-and-quick-sort) - [DFS & BFS](#dfs-and-bfs) ## Algorithms 1. ### Linear Search: - Start from the beginning of the array. - Compare each element with the target value. - If a match is found, return the index of the element. - If the end of the array is reached without finding a match, return -1. 2. ### Binary Search: - Set the low index to the beginning of the array and the high index to the end of the array. - While the low index is less than or equal to the high index: - Calculate the middle index as the average of the low and high indices. - If the middle element is equal to the target, return the middle index. - If the middle element is less than the target, update the low index to mid + 1. - If the middle element is greater than the target, update the high index to mid - 1. - If the target is not found, return -1. 3. ### Bubble Sort: - Start from the first element and compare it with the next element. - If the current element is greater than the next element, swap them. - Repeat this process for all elements in the array. - Continue this process for each pass until the array is sorted. 4. ### Insertion Sort: - Start with the second element in the array. - Compare the current element with the elements before it, moving them one position ahead if they are greater. - Repeat this process for all elements until the array is sorted. 5. ### Merge Sort: - Divide the array into two halves. - Recursively apply merge sort to each half. - Merge the sorted halves by comparing elements and placing them in the correct order. 6. ### Quick Sort: - Choose a pivot element from the array (usually the last element). - Partition the array into two parts: elements less than the pivot and elements greater than the pivot. - Recursively apply quick sort to the two partitions. 7. ### Depth-First Search: - Start from a given vertex. - Mark the vertex as visited and print its value. - Explore its adjacent vertices recursively, following an arbitrary order. - Repeat this process for each unvisited vertex until all vertices are visited. 8. ### Breadth-First Search: - Start from a given vertex. - Mark the vertex as visited and enqueue it. - While the queue is not empty: - Dequeue a vertex from the queue and print its value. - Enqueue all unvisited neighbors of the dequeued vertex. - Mark the neighbors as visited. ## Analysis 1. ### Linear and Binary Search > **LINEAR SEARCH:** <br> - The *best-case* scenario for linear search occurs when the key is found at the beginning of the array, resulting in a complexity of O(1).<br> - The *worst-case* scenario happens when the key is either not present in the array or it is at the very end. 
In this case, the algorithm will need to iterate through all elements, resulting in a complexity of O(n), where n is the size of the array.<br> - The *average case* complexity for linear search is also O(n), as it may need to search through approximately half of the array on average.<br> > **BINARY SEARCH:** <br> - The *best-case* scenario for binary search occurs when the key is found at the middle element of the array, resulting in a complexity of O(1).<br> - The *worst-case* scenario happens when the key is either not present in the array or it is at one of the ends. In this case, the algorithm will divide the search space in half at each step, resulting in a complexity of O(log n), where n is the size of the array.<br> - The *average case* complexity for binary search is also O(log n), as it halves the search space at each step, reducing the remaining elements to search exponentially.<br> 2. ### Bubble and Insertion Sort > **BUBBLE SORT:** <br> - The *best-case* scenario for bubble sort occurs when the array is already sorted. In this case, the algorithm will make a single pass through the array without any swaps, resulting in a complexity of O(n).<br> - The *worst-case* scenario happens when the array is sorted in reverse order. In this case, the algorithm will need to make comparisons and swaps for every pair of adjacent elements, resulting in a complexity of O(n^2), where n is the size of the array.<br> - The *average case* complexity for bubble sort is also O(n^2), as it generally requires multiple passes through the array and comparisons for each pair of adjacent elements.<br> > **INSERTION SORT:** <br> - The *best-case* scenario for insertion sort occurs when the array is already sorted. In this case, the algorithm will make a single pass through the array without any element shifting, resulting in a complexity of O(n).<br> - The *worst-case* scenario happens when the array is sorted in reverse order. In this case, each element needs to be compared and shifted to its correct position, resulting in a complexity of O(n^2), where n is the size of the array.<br> - The *average case* complexity for insertion sort is also O(n^2), as it requires comparisons and potential shifting for each element, with the number of operations increasing quadratically with the input size.<br> 3. ### Merge and Quick Sort > **MERGE SORT:** <br> - The best-case, worst-case, and average case complexity for merge sort is O(n log n), where n is the size of the array. It consistently divides the array into halves and performs merging operations, resulting in a time complexity that scales logarithmically with the input size. The additional space complexity for merge sort is O(n) due to the need for temporary arrays during the merging process.<br> > **QUICK SORT:** <br> - The *best-case* scenario for Quick Sort occurs when the pivot divides the array into two equal-sized subarrays. In this case, the algorithm performs balanced partitions, resulting in a complexity of O(n log n).<br> - The *worst-case* scenario happens when the pivot always selects the smallest or largest element, resulting in unbalanced partitions. This can lead to one subarray with only one element and the other with n-1 elements. In the worst case, the complexity of Quick Sort becomes O(n^2).<br> - The *average case* complexity for Quick Sort is O(n log n), making it an efficient sorting algorithm for most cases. The average case occurs when the pivot divides the array into subarrays of roughly equal sizes.<br> 4. 
### DFS and BFS > **BFS:** <br> - The *best-case* for BFS occurs when the goal node is close to the starting node. In this case, BFS will find the goal node quickly as it explores all the nodes at the current level before moving to the next level. The time complexity of BFS in the best case is O(1) or constant time.<br> - The *worst-case* for BFS occurs when the goal node is located at the deepest level of the graph or tree. In this case, BFS will have to explore all the nodes before reaching the goal node. The time complexity of BFS in the worst case is O(V + E), where V is the number of vertices and E is the number of edges in the graph.<br> - The *average case* for BFS depends on the structure and size of the graph. In a balanced tree or graph with a uniform branching factor, BFS will have a time complexity of O(V + E), similar to the worst case. However, if the graph is sparse or has a low average branching factor, the time complexity of BFS can be closer to O(V).<br> > **DFS:** <br> - The *best-case* for DFS occurs when the goal node is found early in the search, preferably in the first few levels of the search tree. This happens if the goal node is a direct child of the starting node or located in a branch that is explored early. The time complexity of DFS in the best case is O(1) or constant time.<br> - The *worst-case* for DFS occurs when the goal node is located at the deepest level of the search tree or in a branch that is explored last. In this case, DFS will traverse all possible paths before finding the goal node. The time complexity of DFS in the worst case is O(V + E), where V is the number of vertices and E is the number of edges in the graph.<br> - The *average case* for DFS depends on the structure and size of the graph. In a balanced tree or graph with a uniform branching factor, DFS will have a time complexity of O(V + E), similar to the worst case. However, if the graph is sparse or has a low average branching factor, the time complexity of DFS can be closer to O(V).<br>
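To make the traversal described above concrete, here is a minimal sketch of BFS on an adjacency-list graph. Python is used purely for illustration (it may not match the repository's implementation language), and the sample graph is made up for the example.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal of an adjacency-list graph, as described above.

    Each vertex and edge is processed at most once, which gives the
    O(V + E) worst-case bound from the analysis section.
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        order.append(vertex)             # "print its value" step
        for neighbor in graph.get(vertex, []):
            if neighbor not in visited:  # enqueue only unvisited neighbors
                visited.add(neighbor)
                queue.append(neighbor)
    return order

if __name__ == "__main__":
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```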
LaresLLC/SlinkyCat
https://github.com/LaresLLC/SlinkyCat
Slinky Cat attempts to give users an easy-to-navigate menu offering predefined Active Directory Service Interfaces (ADSI) and .NET queries which can be used to enumerate a Windows domain.
# SlinkyCat ## v1.0 Release Brought to you by [Neil Lines](https://twitter.com/myexploit2600) & [Andy Gill](https://twitter.com/ZephrFish) at [Lares Labs](https://labs.lares.com). ![image](https://github.com/LaresLLC/SlinkyCat/assets/5783068/6a9b3a1c-90d0-40c6-9b00-740b2bf54818) Funny name, right? We thought so, too; Neil's cat is called Slinky, and we love to have fun here! ## What It Is Slinky Cat has been developed to automate some of the methods introduced in living off the land and to supplement ScrapingKit. To help security and IT teams reduce their AD exposures and uncover quick wins and fixes designed for pen-testers and defenders alike. Slinky Cat attempts to give users an easy-to-navigate menu offering predefined Active Directory Service Interfaces (ADSI) and .NET System.DirectoryServices.AccountManagement namespace queries can be used to enumerate a Windows domain. ## How to Setup/Use Simply import the module, then execute it: ``` ipmo .\SlinkyCat.ps1 Invoke-SlinkyCat ``` ## Notes & Known Errors Each function can be run individually and has an added output option; therefore, if you want to run without the menu system, you can load Slinkycat, select option 3 from the menu, and run whatever function you want. For example: ``` FindAdminAccessComputers -OutputPath AdminAccess.txt ``` This will find where the local user has admin access OR where the C$ is shared, then echo the results back: ``` *********************** WARNING ************************ This operation will be noisy and could potentially compromise operational security (OPSEC). You are running as EA This will scan the whole network for where your current user has local admin access. Are you sure you want to continue? (Y/N): Y [*] User EA has local Admin access(OR C$ is shared to everyone) to: DC1 [*] User EA has local Admin access(OR C$ is shared to everyone) to: WS1 [*] User EA has local Admin access(OR C$ is shared to everyone) to: WS2 [*] User EA has local Admin access(OR C$ is shared to everyone) to: DC2 [*] User EA has local Admin access(OR C$ is shared to everyone) to: ADCS Press Enter to continue ``` ### Known Errors If you select either option 1 or 2, then option A to run all functions, the script will run all functions and prompt you to hit return each time; at the end, it will error, but in the background, it has run all the functions as required. We are working on a fix for this, but the function works as intended; if anyone knows a fix, ping @ZephrFish. ## Example Running ``` PS C:\Users\user1> Import-Module .\SlinkyCat.ps1 ``` This will bring up the menu. ``` === Menu === 1. ADSI Enumeration 2. Dot NET System.DirectoryServices.AccountManagement Namespace Enumeration 3. List Available Individual Functions Q. Quit Select an option: ``` Option 1 offers you pre-defined ADSI one-liners to help enumerate the domain. The result will echo to the console for those that don’t want to save to disk, but you also get an option to save all the results you created to disk via option S on the initial menu and are also asked again when you quit. Select option 1 redirects you to the ADSI Enumeration option menu. ``` === ADSI Enumeration === ADSI Options Menu Please select an option: 1. Enumerate all domain hosts 2. Enumerate all domain controllers 3. Enumerate all domain users 4. List all users in the domain admins group 5. List all accounts with an SPN 6. List all domain groups 7. List all password set to never expire 8. List all users which do not require a password 9. 
List all users with password must change at next logon 10. List all computers that are not Domain Controllers and are Windows 7 11. List all computers that are not Domain Controllers and are Windows 10 12. List all computers that are not Domain Controllers and are Windows 11 13. List all servers 14. List all Server 2008 15. List all Server 2012 16. List all Server 2016 17. List all Server 2019 18. List all Server 2022 19. List domain groups which are a member of the local admin group 20. List all trusts established with a domain 21. List all Exchange servers 22. List all accounts that have never logged in 23. List all domain user accounts which have a completed AD description field 24. List all accounts that reference 'pass' in their AD description field 25. List all users who have not changed their password in over 1 year 26. List all users' last password change date and time 27. List all systems with WinRM Open (Not OPSEC SAFE!) 28. List all systems with RDP Open (Not OPSEC SAFE!) 29. Find all machines where the current user has local admin acess (Not OPSEC Safe, will list all computers then attempt to mount C$) A. Run all functions and export to a HTML report Q. Quit ``` Select option 5 List all accounts with an SPN see the result below Select option: 5 Option: List all accounts with an SPN ``` krbtgt svc_afds svc_test svc_mssql1 svc_mssql2 svc_lab svc_admin ``` Reading the output after closing. ``` PS C:\Users\user1> type output/UsersLastPasswordChange.txt Option: Password last set Username: krbtgt, Last Password Set: 05/04/2023 14:51:59 Username: svc_afds, Last Password Set: 05/04/2023 15:37:40 Username: svc_test, Last Password Set: 05/04/2023 15:37:41 Username: svc_mssql1, Last Password Set: 05/04/2023 15:37:41 Username: svc_mssql2, Last Password Set: 05/04/2023 15:37:42 Username: svc_lab, Last Password Set: 05/04/2023 15:37:42 Username: svc_admin, Last Password Set: 05/04/2023 15:37:43 ``` ## Future Plans - Add more enumeration and output formats
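SlinkyCat itself is pure PowerShell, but for a rough, language-agnostic feel of the kind of directory query its ADSI options wrap, here is a minimal sketch using the third-party Python `ldap3` package. The server name, credentials, and base DN are placeholders, the filter shown corresponds to the "Enumerate all domain controllers" option, and this is an illustration only, not part of the tool.

```python
from ldap3 import ALL, NTLM, Connection, Server

# Placeholder connection details -- point these at a lab domain you own.
server = Server("dc1.lab.local", get_info=ALL)
conn = Connection(
    server,
    user="LAB\\enumuser",
    password="CHANGE_ME",
    authentication=NTLM,
    auto_bind=True,
)

# Roughly equivalent to "Enumerate all domain controllers": computers whose
# userAccountControl has the SERVER_TRUST_ACCOUNT bit (8192) set.
conn.search(
    "DC=lab,DC=local",
    "(&(objectCategory=computer)(userAccountControl:1.2.840.113556.1.4.803:=8192))",
    attributes=["dNSHostName"],
)
for entry in conn.entries:
    print(entry.dNSHostName)
```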
eugeniodepalo/react-native-gtk4
https://github.com/eugeniodepalo/react-native-gtk4
React renderer for GTK4 using node-gtk
# React Native GTK4 Check out the package [README](packages/react-native-gtk4/README.md) for more information. ## NOTE: Despite the version being 1.x.x, this project is still in alpha and is not yet ready for production use. We are working hard to get it to a stable state, but there is still a lot of work to be done. If you would like to help, please see the [Contributing](packages/react-native-gtk4/README.md#contributing) section of the package README. ## Missing Components These components can be instantiated, but they're not following React's state model yet. This means that they can't be used without resorting to imperative code. - ColumnView - GridView - ListView - TreeExpander
lazaronixon/css-properties-counter
https://github.com/lazaronixon/css-properties-counter
The 35 CSS properties you must know to do 80% of the work.
# The 35 CSS properties you must know to do 80% of the work. I've created this script to find out how many CSS properties I should know to do 80% of the work. ## CSS property utilization for hey.com | Rank | Property | Count | % | | ---- | -------- | ----- | - | | 1 | display | 535 | 8.5 | | 2 | top/left/right/bottom | 527 | 9.11 | | 3 | margin | 521 | 9.27 | | 4 | background | 436 | 17.55 | | 5 | background-image | 434 | 24.25 | | 6 | padding | 422 | 24.3 | | 7 | width/height | 383 | 24.78 | | 8 | color | 304 | 25.73 | | 9 | position | 283 | 30.23 | | 10 | border | 257 | 31.21 | | 11 | border-radius | 182 | 34.11 | | 12 | font-size | 122 | 41.04 | | 13 | box-shadow | 98 | 45.12 | | 14 | z-index | 98 | 45.65 | | 15 | max-width | 97 | 46.31 | | 16 | content | 87 | 46.39 | | 17 | opacity | 76 | 46.66 | | 18 | overflow | 76 | 47.49 | | 19 | filter | 71 | 47.65 | | 20 | text-align | 62 | 47.68 | | 21 | align-items | 60 | 47.73 | | 22 | transform | 56 | 47.95 | | 23 | background-size | 53 | 48.09 | | 24 | justify-content | 52 | 52.92 | | 25 | line-height | 48 | 54.86 | | 26 | white-space | 47 | 55.59 | | 27 | animation | 47 | 56.34 | | 28 | cursor | 47 | 57.09 | | 29 | font-weight | 46 | 57.15 | | 30 | outline | 42 | 63.24 | | 31 | grid-template-columns | 38 | 64.78 | | 32 | min-height | 33 | 65.16 | | 33 | min-width | 31 | 73.54 | | 34 | flex-direction | 30 | 74.75 | | 35 | pointer-events | 28 | 76.13 |
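The repository's own script produced the table above; as a rough, hypothetical sketch of the general approach (not the actual script, and with a placeholder file name), counting declared property names in a downloaded stylesheet can look like this:

```python
import re
from collections import Counter

def count_css_properties(path):
    """Count how often each property name is declared in a stylesheet."""
    with open(path, encoding="utf-8") as f:
        css = f.read()
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # strip comments
    # Match "property-name:" only right after "{" or ";" so selectors like
    # a:hover and media-query conditions are not counted as properties.
    props = re.findall(r"[{;]\s*(-{0,2}[a-zA-Z-]+)\s*:", css)
    return Counter(p.lower() for p in props)

if __name__ == "__main__":
    counts = count_css_properties("hey.css")  # placeholder stylesheet
    for rank, (prop, n) in enumerate(counts.most_common(35), start=1):
        print(f"{rank:2d}. {prop}: {n}")
```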
flare-ai/sd-dynamic-prompts
https://github.com/flare-ai/sd-dynamic-prompts
A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
# Stable Diffusion Dynamic Prompts extension A custom extension for [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) that implements an expressive template language for random or combinatorial prompt generation along with features to support deep wildcard directory structures. <p align="center"> <a href="docs/SYNTAX.md"><img src="images/icon-syntax.png" valign="middle" style="height:60px;margin-right:10px"/></a> <a href="docs/tutorial.md"><img src="images/icon-tutorial.png" valign="middle" style="height:60px;margin-right:10px"/></a> <a href="docs/CHANGELOG.md"><img src="images/icon-changelog.png" valign="middle" style="height:60px"/></a> </p> <img src="images/extension.png"/> ## Table of Contents * [Basic Usage](#basic-usage) * [Online resources](#online-resources) * [Installation](#installation) * [Configuration](#configuration) * [Troubleshooting](#troubleshooting) * [Compatible Scripts](#compatible-scripts) * [Template syntax](#template-syntax) * [Fuzzy Glob/recursive wildcard file/directory matching](#fuzzy-globrecursive-wildcard-filedirectory-matching) * [Combinatorial Generation](#combinatorial-generation) * [Combinatorial Batches](#combinatorial-batches) * [Increasing the maximum number of generations](#increasing-the-maximum-number-of-generations) * [Fixed seed](#fixed-seed) * [Magic Prompt](#magic-prompt) * [Other models](#other-models) * [I'm feeling lucky](#im-feeling-lucky) * [Attention grabber](#attention-grabber) * [Write prompts to file](#write-prompts-to-file) * [Jinja2 templates](#jinja2-templates) * [WILDCARD_DIR](#wildcard_dir) * [Collections](#collections) * [Dynamic Prompts and Random Seeds](#dynamic-prompts-and-random-seeds) * [Without Dynamic Prompts Enabled](#without-dynamic-prompts-enabled) * [With Dynamic Prompts Enabled in Random/Standard Mode](#with-dynamic-prompts-enabled-in-randomstandard-mode) * [Variation Seeds with Dynamic Prompts](#variation-seeds-with-dynamic-prompts) * [Combinatorial Mode with Variation Strength &gt; 0](#combinatorial-mode-with-variation-strength--0) ## Basic Usage Using this script, the prompt: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} by {2$$artist1|artist2|artist3} Will produce any of the following prompts: - A **house** in **summer** by **artist1**, **artist2** - A **lodge** in **autumn** by **artist3**, **artist1** - A **cottage** in **winter** by **artist2**, **artist3** - ... This is especially useful if you are searching for interesting combinations of artists and styles. You can also pick a random string from a file. Assuming you have the file seasons.txt in WILDCARD_DIR (see below), then: __seasons__ is coming Might generate the following: - Winter is coming - Spring is coming - ... You can also use the same wildcard twice I love __seasons__ better than __seasons__ - I love Winter better than Summer - I love Spring better than Spring More complete documentation can be found [here](docs/SYNTAX.md).<br/> Prefer a tutorial? <a href="docs/tutorial.md">Click here</a><br/> Need a wildcard library? 
We've got you [covered](https://github.com/jacobvo888/sd-dynamic-prompts#collections).<br/> ## Online resources * 📽️ [아무것도 생각 안 하고 그림 뽑는 방법 (stable diffusion Dynamic Prompts extension)](https://www.youtube.com/watch?v=P6Tl5sMUJpk) * 📽️ [Dynamic Prompt Tutorial: How to Create Multiple Art Variations with just 1 Prompt Stable Diffusion](https://www.youtube.com/watch?v=5wH7zioje4w) * 📽️ [Experiment with Prompts FAST in Stable Diffusion](https://www.youtube.com/watch?v=5ZiL4iG0FJk) * [Having a blast with ControlNet + Dynamic Prompts (Wildcards)](https://www.reddit.com/r/StableDiffusion/comments/115t0pi/having_a_blast_with_controlnet_dynamic_prompts/) * [Infinite Random RPG Character Portraits with Dynamic Prompts](https://www.reddit.com/r/StableDiffusion/comments/10xqwjm/i_come_bearing_gifts_infinite_random_rpg/) * [256 unique characters created with dynamic prompts ](https://www.reddit.com/r/StableDiffusion/comments/106hxm1/256_unique_characters_created_with_dynamic/) * [Use ChatGPT to create powerful and useful wildcards with the Dynamic Prompts extension](https://www.reddit.com/r/StableDiffusion/comments/10m6b4a/use_chatgpt_to_create_powerful_and_useful/) * [200 Character/outfit designs from a single prompt using Dynamic Prompts and Prompt Editing](https://www.reddit.com/r/StableDiffusion/comments/1080xyy/200_characteroutfit_designs_from_a_single_prompt/) * [Dynamic Prompts and Jinja2 templates in Automatic 1111](https://www.reddit.com/r/StableDiffusion/comments/10jgmtk/dynamic_prompts_and_jinja2_templates_in_automatic/) * [Testing Dynamic Prompts extension to mix and match elements](https://www.reddit.com/r/StableDiffusion/comments/10gkm1c/testing_dynamic_prompts_extension_to_mix_and/) * [How to Turbocharge your prompts using AUTOMATIC1111 WebUI dynamic prompts extension](https://www.reddit.com/r/StableDiffusion/comments/ynztiz/how_to_turbocharge_your_prompts_using/) * [Dynamic prompts for Automatic1111 is so incredibly powerful, these are all from the same prompt!](https://www.reddit.com/r/StableDiffusion/comments/zacjhk/dynamic_prompts_for_automatic1111_is_so/) ## Installation The extension can be installed directly from within the **Extensions** tab within the Webui <img src="images/installation.png"/> You can also install it manually by running the following command from within the webui directory: git clone https://github.com/jacobvo888/sd-dynamic-prompting/ extensions/dynamic-prompts ## Configuration In case of a syntax clash with another extension, Dynamic Prompts allows you to change the definition of variant start and variant end. By default these are set to `{` and `}` respectively. , e.g. `{red|green|blue}`. In the settings tab, you can change these two any string, e.g. `<red|green|blue>` or even `::red|green|blue::`. <img src="images/config_brackets.png"> By default, wildcards start with `__`(double underscore) and end with `__`. You can change this in the settings tab under wildcard wrap. ## Troubleshooting If you encounter an issue with Dynamic Prompts, follow these steps to resolve the problem: 1. Check that you have installed the latest version of both the Dynamic Prompts extension and library. To check the installed versions, open the **Need Help? accordion** in the Dynamic Prompts section of txt2image. You can find the latest version number of the extension [here](https://github.com/jacobvo888/sd-dynamic-prompts/blob/main/docs/CHANGELOG.md) and the library [here](https://github.com/jacobvo888/dynamicprompts/blob/main/CHANGELOG.md?plain=1). 2. 
If the versions do not match, update the extension in the extensions tab and restart the webui. The extension should automatically update the library. 3. If the above step does not work, you might need to manually update the library using the following command: ```shell python -m pip install -U dynamicprompts[attentiongrabber,magicprompt] ``` 4. Restart the webui and check. If the webui uses a different python binary, find the correct path to the python binary and run: ```shell /path/to/python/binary/python -m pip install -U dynamicprompts[attentiongrabber,magicprompt] ``` 5. If the Wildcard UI does not show, it could be due to an outdated library version. Check for errors in the terminal and update the library as described in step 3. 6. If you get an error message saying "No values found for wildcard some/wildcard", ensure that the file wildcard.txt is in extensions/sd-dynamic-prompts/wildcards/some/. The full path is required, as relative paths are not currently supported. 7. If the issue persists, search for solutions in the [issues section](https://github.com/jacobvo888/sd-dynamic-prompts/issues?q=is%3Aissue) on GitHub and the [discussion forum](https://github.com/jacobvo888/sd-dynamic-prompts/discussions). If you cannot find a solution, create a new issue and give it a descriptive name, such as "Wildcard values are being ignored in prompt templates". Provide the necessary context, including the versions of the Dynamic Prompts extension and library, and mention the operating system or colab being used. If there is an error in the terminal, copy and paste the entire text or take a screenshot. 8. Finally, it is essential to test and apply any fixes we release. Your feedback is valuable, as an issue that works in our environment may not work in yours. ## Compatible Scripts Dynamic Prompts works particularly well with X/Y Plot - setting Dynamic Prompts to <a href="#combinatorial-generation">combinatorial mode</a> while using X/Y Plot lets you exhaustively test prompt and parameter variations simultaneously. ## Template syntax Documentation can be found [here](docs/SYNTAX.md) ### Fuzzy Glob/recursive wildcard file/directory matching In addition to standard wildcard tokens such as `__times__` -> `times.txt`, you can also use globbing to match against multiple files at once. For instance: `__colors*__` will match any of the following: - WILDCARD_DIR/colors.txt - WILDCARD_DIR/colors1.txt - WILDCARD_DIR/nested/folder/colors1.txt `__light/**/*__` will match: - WILDCARD_DIR/nested/folder/light/a.txt - WILDCARD_DIR/nested/folder/light/b.txt but won't match - WILDCARD_DIR/nested/folder/dark/a.txt - WILDCARD_DIR/a.txt You can also use character ranges `[0-9]` and `[a-z]` and single wildcard characters `?`. For more examples see [this article](http://pymotw.com/2/glob/). ## Combinatorial Generation Instead of generating random prompts from a template, combinatorial generation produces every possible prompt from the given string. For example: `I {love|hate} {New York|Chicago} in {June|July|August}` will produce: - I love New York in June - I love New York in July - I love New York in August - I love Chicago in June - I love Chicago in July - I love Chicago in August - I hate New York in June - I hate New York in July - I hate New York in August - I hate Chicago in June - I hate Chicago in July - I hate Chicago in August If a `__wildcard__` is provided, then a new prompt will be produced for every value in the wildcard file.
For example: `My favourite season is __seasons__` will produce: - My favourite season is Summer - My favourite season is August - My favourite season is Winter - My favourite season is Spring <img src="images/combinatorial_generation.png"/> You can also arbitrarily nest combinations inside wildcards and wildcards in combinations. Combinatorial generation can be useful if you want to create an image for every artist in a file. It can be enabled by checking the __Combinatorial generation__ checkbox in the UI. In order to prevent accidentally producing thousands of images, you can limit the total number of prompts generated using the **Max Generations** slider. A value of 0 (the default) will not set any limit. ### Combinatorial Batches The combinatorial batches slider lets you repeat the same set of prompts a number of times with different seeds. The default number of batches is 1. ### Increasing the maximum number of generations By default, the __Batch count__ slider of automatic1111 has a maximum value of 100. This can limit the maximum number of generations when using combinatorial generation. You can change the maximum value of this slider by editing ui-config.json and changing: "txt2img/Batch count/maximum": 100 to something larger like: "txt2img/Batch count/maximum": 1000 ## Fixed seed Select this if you want to use the same seed for every generated image. If there are no wildcards then all the images will be identical. It is useful if you want to test the effect of a particular modifier. For example: A beautiful day at the beach __medium/photography/filmtypes__ That way you can isolate the effect of each film type on a particular scene. Here are some of the results: <img src="images/filmtypes.jpg"/> ## Magic Prompt Use one of a number of prompt generation models to spice up your prompt. Using [Gustavosta](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion)'s MagicPrompt model, trained on 80,000 prompts from [Lexica.art](lexica.art), it can help give you interesting new prompts on a given subject. Here are some automatically generated variations for "dogs playing football": > dogs playing football, in the streets of a japanese town at night, with people watching in wonder, in the style of studio ghibli and makoto shinkai, highly detailed digital art, trending on artstation > dogs playing football, in the background is a nuclear explosion. photorealism. hq. hyper. realistic. 4 k. award winning. > dogs playing football, in the background is a nuclear explosion. photorealistic. realism. 4 k wideshot. cinematic. unreal engine. artgerm. marc simonetti. jc leyendecker This is compatible with the wildcard syntax described above.
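The extension downloads and drives the model for you, but if you just want to experiment with the underlying MagicPrompt model from Python, a minimal sketch using the Hugging Face `transformers` library looks roughly like the following. This is an assumption about driving the published model directly, not how the extension invokes it internally:

```python
from transformers import pipeline, set_seed

# The first run downloads the model (roughly 500 MB, as noted above).
generator = pipeline("text-generation", model="Gustavosta/MagicPrompt-Stable-Diffusion")
set_seed(42)  # optional: make the suggestions reproducible

seed_text = "dogs playing football"
for candidate in generator(seed_text, max_length=77, do_sample=True, num_return_sequences=3):
    print(candidate["generated_text"])
```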
### Other models * [daspartho/prompt-extend (~500mb)](https://huggingface.co/daspartho/prompt-extend) * [succinctly/text2image-prompt-generator (~600mb)](https://huggingface.co/succinctly/text2image-prompt-generator) - Trained on Midjourney prompts * [microsoft/Promptist (~500mb)](https://huggingface.co/microsoft/Promptist) - Read the paper [here](https://arxiv.org/abs/2212.09611) * [AUTOMATIC/promptgen-lexart (~300mb)](https://huggingface.co/AUTOMATIC/promptgen-lexart) - Finetuned using 134,819 prompts from lexica.art * [AUTOMATIC/promptgen-majinai-safe (~300mb)](https://huggingface.co/AUTOMATIC/promptgen-majinai-safe) - 1,654 prompts from majinai.art * [AUTOMATIC/promptgen-majinai-unsafe (~300mb)](https://huggingface.co/AUTOMATIC/promptgen-majinai-unsafe) - 825 prompts from majinai.art (NSFW) * [Gustavosta/MagicPrompt-Dalle](https://huggingface.co/Gustavosta/MagicPrompt-Dalle) * [kmewhort/stable-diffusion-prompt-bolster (~500mb)](https://huggingface.co/kmewhort/stable-diffusion-prompt-bolster), * [Ar4ikov/gpt2-650k-stable-diffusion-prompt-generator (~500mb)](Ar4ikov/gpt2-650k-stable-diffusion-prompt-generator), * [Ar4ikov/gpt2-medium-650k-stable-diffusion-prompt-generator (~1.4gb)](https://huggingface.co/Ar4ikov/gpt2-medium-650k-stable-diffusion-prompt-generator), * [crumb/bloom-560m-RLHF-SD2-prompter-aesthetic (~1.1gb)](https://huggingface.co/crumb/bloom-560m-RLHF-SD2-prompter-aesthetic), * [Meli/GPT2-Prompt (~500mb)](https://huggingface.co/Meli/GPT2-Prompt), * [DrishtiSharma/StableDiffusion-Prompt-Generator-GPT-Neo-125M (~550mb)](https://huggingface.co/DrishtiSharma/StableDiffusion-Prompt-Generator-GPT-Neo-125M) The first time you use a model, it is downloaded. It is approximately 500mb and so will take some time depending on how fast your connection is. It will also take a few seconds on first activation as the model is loaded into memory. Note, if you're low in VRAM, you might get a Cuda error. My GPU uses less than 8GB but YMMV. <img src="images/magic_prompt.png"/> You can control the maximum prompt length with the __Max magic prompt length__ slider. __Magic prompt creativity__ can adjust the generated prompt but you will need to experiment with this setting. Use the __Magic prompt blocklist regex__ to filter out keywords. For example, if you want to avoid prompts containing Greg Rutkowski, add his name to this field. If you are generating many prompts using Magic Prompt, then increasing the __Magic Prompt batch size__ can improve significantly improve prompt generation speed. This may only be noticeable if you are not generating images as well since image generation is much slower than prompt generation. ## I'm feeling lucky Use the [lexica.art](https://lexica.art) API to create random prompts. Useful if you're looking for inspiration, or are simply too lazy to think of your own prompts. When this option is selected, the prompt in the main prompt box is used as a search string. For example, prompt "Mech warrior" might return: * A large robot stone statue in the middle of a forest by Greg Rutkowski, Sung Choi, Mitchell Mohrhauser, Maciej Kuciara, Johnson Ting, Maxim Verehin, Peter Konig, final fantasy , 8k photorealistic, cinematic lighting, HD, high details, atmospheric, * a beautiful portrait painting of a ( ( ( cyberpunk ) ) ) armor by simon stalenhag and pascal blanche and alphonse mucha and nekro. in style of digital art. colorful comic, film noirs, symmetry, brush stroke, vibrating colors, hyper detailed. octane render. trending on artstation * symmetry!! 
portrait of a robot astronaut, floral! horizon zero dawn machine, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha, 8 k <img src="images/feeling-lucky.png"> Leaving the prompt box blank returns a list of completely randomly chosen prompts. ## Attention grabber This option randomly selects a keyword in your prompt and adds a random amount of emphasis. Below is an example of how this affects the prompt: a portrait an anthropomorphic panda mage casting a spell, wearing mage robes, landscape in background, cute, dnd character art portrait, by jason felix and peter mohrbacher, cinematic lighting <img src="images/emphasis.png"> Tick the __Fixed seed__ checkbox under __Advanced options__ to see how emphasis changes your image without changing seed. ## Write prompts to file Check the write prompts to file checkbox in order to create a file with all generated prompts. The generated file is a slugified version of the prompt and can be found in the same directory as the generated images, e.g. outputs/txt2img-images <img src="images/write_prompts.png"/> ## Jinja2 templates [Jinja2 templates](https://jinja.palletsprojects.com/en/3.1.x/templates/) is an experimental feature that enables you to define prompts imperatively. This is an advanced feature and is only recommended for users who are comfortable writing scripts. To enable, open the advanced accordion and select __Enable Jinja2 templates__. <img src="images/jinja_templates.png"> You can read about them in more detail <a href="jinja2.md">here</a> ## WILDCARD_DIR The extension looks for wildcard files in WILDCARD_DIR. The default location is /path/to/stable-diffusion-webui/extensions/sd-dynamic-prompts/wildcards. It can also be manually defined in the main webui config.json under wildcard_dir. When in doubt, the help text for the extension in the webui lists the full path to WILDCARD_DIR ## Collections The collections directory contains modifier libraries that you can use as is or to bootstrap your own. To get started, either use the Wildcard Manager tab to copy a one or more collections to your wildcards folder, or you can manually copy the files across. Three collections are bundled with the dynamic prompts extension. - [jumbo](https://github.com/jacobvo888/sd-dynamic-prompts/tree/main/collections/jumbo) - [parrotzone](https://github.com/jacobvo888/sd-dynamic-prompts/tree/main/collections/parrotzone) - [devilkkw](https://github.com/jacobvo888/sd-dynamic-prompts/tree/main/collections/devilkkw) Jumbo is a very large collection of wildcards across many categories including aesthetics, appearance, artists, medium, style, and time. It is a work in progress, but aims to provide good coverage of various modifier categories. Parrotzone is a far smaller and more manageable collection sourced from https://proximacentaurib.notion.site/e28a4f8d97724f14a784a538b8589e7d?v=42948fd8f45c4d47a0edfc4b78937474. Devilkkw focuses more on character building, clothes, gestures, food, etc If you're using a Unix/Linux O/S, you can easily create a symlink to the relevant collection rather than copying it across if you don't plan to alter it. E.g. ln -sr collections/parrotzone wildcards/ You can also download additional extensions by running `python _tools/download_collections.py` from within the extension's root directory, i.e. 
`extensions/sd-dynamic-prompts/` ## Dynamic Prompts and Random Seeds Random seeds play an important role in controlling the randomness of the generated outputs. Let's discuss how Dynamic Prompts works with random seeds in different scenarios. ### Without Dynamic Prompts Enabled 1. If the seed is set to -1: A random seed is picked. This seed is used to generate the first image, then the next image is created using seed + 1, and this pattern continues for subsequent images. 2. If the seed is set to a specific number greater than -1: The process is similar to the one above, but starts with the user-specified seed. 3. If the variation seed is defined, but variation strength is zero: The process remains the same as in the previous two points. 4. If the variation seed is set to a number greater than 0: Every image is generated using the same initial seed (randomly selected or set by the user). The variation seed is either random (if set to -1) or the value chosen by the user. The first image is generated with the variation seed, the next with variation seed + 1, and so on. ### Using With Dynamic Prompts Enabled in Random/Standard Mode: 1. If the seed is set to -1: The process is similar to the first point in the previous section. However, the prompt is also selected using the same seed (if the random prompt generator is used). 2. If the seed is set to a number greater than -1: The process is similar to the second point in the previous section. However, the difference is that a random prompt is also generated using the chosen seed (if the prompt generator is used). 3. If the fixed seed checkbox is checked: The same seed is used for all images and prompts. This means the same image is generated repeatedly (this is useful for combinatorial generation). 4. If both the fixed seed and unlink seed from prompt checkboxes are checked: A random seed is used for the prompt, but the same seed is used for all images. This setting can be useful if you want to see how different prompts affect the generation of the same image. ### Variation Seeds with Dynamic Prompts 1. Variation strength set to 0: Variations are ignored. 2. Variation set to a number > 0: A variation seed is assigned to every image, incrementing by one each time. However, only 1 prompt is generated since you are looking for variations of the same image. ### Combinatorial Mode with Variation Strength > 0 In this case, it only generates the first image for you, which is probably not what you want. To get the desired results, you might need to adjust the settings or use a different mode.
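To make the combinatorial mode described above concrete, here is a small, self-contained sketch of what gets enumerated for a simple, non-nested variant template. This is only an illustration of the syntax's semantics, not the extension's actual implementation; the real parser (the `dynamicprompts` library) also handles nesting, wildcards, weights, and more.

```python
import itertools
import re

def combinatorial_prompts(template):
    """Expand a flat {a|b|c} template into every prompt it can produce."""
    # Split into literal chunks and {...} variant groups (nesting not handled).
    parts = re.split(r"(\{[^{}]*\})", template)
    options = [
        part[1:-1].split("|") if part.startswith("{") and part.endswith("}") else [part]
        for part in parts
    ]
    return ["".join(combo) for combo in itertools.product(*options)]

if __name__ == "__main__":
    for prompt in combinatorial_prompts("I {love|hate} {New York|Chicago} in {June|July|August}"):
        print(prompt)  # 12 prompts, matching the Combinatorial Generation example
```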
Ryota-Kawamura/Generative-AI-with-LLMs
https://github.com/Ryota-Kawamura/Generative-AI-with-LLMs
In Generative AI with Large Language Models (LLMs), you’ll learn the fundamentals of how generative AI works, and how to deploy it in real-world applications.
# [Generative AI with LLMs](https://www.deeplearning.ai/courses/generative-ai-with-llms/) In Generative AI with Large Language Models (LLMs), you’ll learn the fundamentals of how generative AI works, and how to deploy it in real-world applications. By taking this course, you'll learn to: - Deeply understand generative AI, describing the key steps in a typical LLM-based generative AI lifecycle, from data gathering and model selection, to performance evaluation and deployment - Describe in detail the transformer architecture that powers LLMs, how they’re trained, and how fine-tuning enables LLMs to be adapted to a variety of specific use cases - Use empirical scaling laws to optimize the model's objective function across dataset size, compute budget, and inference requirements - Apply state-of-the art training, tuning, inference, tools, and deployment methods to maximize the performance of models within the specific constraints of your project - Discuss the challenges and opportunities that generative AI creates for businesses after hearing stories from industry researchers and practitioners Developers who have a good foundational understanding of how LLMs work, as well the best practices behind training and deploying them, will be able to make good decisions for their companies and more quickly build working prototypes. This course will support learners in building practical intuition about how to best utilize this exciting new technology. ## Week 1 Generative AI use cases, project lifecycle, and model pre-training ### Learning Objectives - Discuss model pre-training and the value of continued pre-training vs fine-tuning - Define the terms Generative AI, large language models, prompt, and describe the transformer architecture that powers LLMs - Describe the steps in a typical LLM-based, generative AI model lifecycle and discuss the constraining factors that drive decisions at each step of model lifecycle - Discuss computational challenges during model pre-training and determine how to efficiently reduce memory footprint - Define the term scaling law and describe the laws that have been discovered for LLMs related to training dataset size, compute budget, inference requirements, and other factors [Lab 1 - Generative AI Use Case: Summarize Dialogue](https://github.com/Ryota-Kawamura/Generative-AI-with-LLMs/blob/main/Week-1/Lab_1_summarize_dialogue.ipynb) [Week 1 quiz](https://github.com/Ryota-Kawamura/Generative-AI-with-LLMs/blob/main/Week-1/Week-1_Quiz.md) ## Week 2 Fine-tuning and evaluating large language models ### Learning Objectives - Describe how fine-tuning with instructions using prompt datasets can improve performance on one or more tasks - Define catastrophic forgetting and explain techniques that can be used to overcome it - Define the term Parameter-efficient Fine Tuning (PEFT) - Explain how PEFT decreases computational cost and overcomes catastrophic forgetting - Explain how fine-tuning with instructions using prompt datasets can increase LLM performance on one or more [Lab 2 - Fine-tune a generative AI model for dialogue summarization](https://github.com/Ryota-Kawamura/Generative-AI-with-LLMs/blob/main/Week-2/Lab_2_fine_tune_generative_ai_model.ipynb) [Week 2 quiz](https://github.com/Ryota-Kawamura/Generative-AI-with-LLMs/blob/main/Week-2/Week-2_Quiz.md) ## Week 3 Reinforcement learning and LLM-powered applications ### Learning Objectives - Describe how RLHF uses human feedback to improve the performance and alignment of large language models - Explain how data gathered from 
human labelers is used to train a reward model for RLHF - Define chain-of-thought prompting and describe how it can be used to improve LLMs reasoning and planning abilities - Discuss the challenges that LLMs face with knowledge cut-offs, and explain how information retrieval and augmentation techniques can overcome these challenges [Lab 3 - Fine-tune FLAN-T5 with reinforcement learning to generate more-positive summaries](https://github.com/Ryota-Kawamura/Generative-AI-with-LLMs/blob/main/Week-3/Lab_3_fine_tune_model_to_detoxify_summaries.ipynb) [Week 3 Quiz](https://github.com/Ryota-Kawamura/Generative-AI-with-LLMs/blob/main/Week-3/Week-3_Quiz.md)
mioclient/burrow-bypass
https://github.com/mioclient/burrow-bypass
burrow bypass for crystalpvp.cc
# Burrow Bypass burrow bypass for crystalpvp.cc ## Tutorial Here's a tutorial on how to bypass crystalpvp.cc plugins and burrow 1. Find any public burrow source code 2. Replace its rubberband method with ```java for (int i = 0; i < 20; i++) mc.player.networkHandler.sendPacket(new PlayerMoveC2SPacket.PositionAndOnGround(mc.player.getX(), mc.player.getY() + 1337, mc.player.getZ(), false)); ``` or with this on 1.12 forge ```java for (int i = 0; i < 20; i++) mc.player.connection.sendPacket(new CPacketPlayer.Position(mc.player.posX, mc.player.posY + 1337, mc.player.posZ, false)); ``` 3. You're golden ## Credits lemoliam (he found бупс, he's cool)
balakhonoff/awesome-subgraphs
https://github.com/balakhonoff/awesome-subgraphs
A curated list of awesome resources related to The Graph powered subgraph development.
# Awesome Subgraphs A curated list of awesome resources related to [The Graph](https://thegraph.com/) powered subgraph development. Feel free to send me any related links in [Twitter](https://twitter.com/balakhonoff) or [Telegram](https://t.me/kirill_balakhonov) to add them here. # Useful links from the official documentation - [Creating a subgraph](https://thegraph.com/docs/en/developing/creating-a-subgraph/) - [Supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - [AssemblyScript API](https://thegraph.com/docs/en/developing/assemblyscript-api/) - [Developer FAQs](https://thegraph.com/docs/en/developing/developer-faqs/) - [Query The Graph](https://thegraph.com/docs/en/querying/querying-the-graph/) - [Querying Best Practices](https://thegraph.com/docs/en/querying/querying-best-practices/) - [Querying from an Application](https://thegraph.com/docs/en/querying/querying-from-an-application/) - [GraphQL API](https://thegraph.com/docs/en/querying/graphql-api/) - [Subgraphs on NEAR](https://thegraph.com/docs/en/cookbook/near/) - [Subgraphs on Cosmos](https://thegraph.com/docs/en/cookbook/cosmos/) - [Subgraphs on Arweave](https://thegraph.com/docs/en/cookbook/arweave/) - [Substreams-powered subgraphs](https://thegraph.com/docs/en/cookbook/substreams-powered-subgraphs/) # Tutorials - [A beginner’s guide to getting started with The Graph](https://docs.chainstack.com/docs/subgraphs-tutorial-a-beginners-guide-to-getting-started-with-the-graph) - [How to access real-time smart contract data from Python code (using Lido contract as an example)](https://medium.com/@balakhonoff_47314/how-to-access-real-time-smart-contract-data-from-python-code-using-lido-as-an-example-38738ff077c5) - [Web3 Indexing: The Ultimate Guide (No Prior Knowledge Required)](https://hackernoon.com/web3-indexing-the-ultimate-guide-no-prior-knowledge-required) - [Explaining Subgraph schemas](https://docs.chainstack.com/docs/subgraphs-tutorial-working-with-schemas) - [Debugging subgraphs with a local Graph Node](https://docs.chainstack.com/docs/subgraphs-tutorial-debug-subgraphs-with-a-local-graph-node) - [Indexing ERC-20 token balance using Subgraphs](https://docs.chainstack.com/docs/subgraphs-tutorial-indexing-erc-20-token-balance) - [Indexing Uniswap data with Subgraphs](https://docs.chainstack.com/docs/subgraphs-tutorial-indexing-uniswap-data) - [Fetching subgraph data using JS](https://docs.chainstack.com/docs/subgraphs-tutorial-indexing-uniswap-data) - [How to access the Tornado Cash data easily using The Graph’s subgraphs](https://medium.com/@balakhonoff_47314/how-to-access-the-tornado-cash-data-easily-using-the-graphs-subgraphs-a70a7e21449d) - [How to access transactions of PEPE coin using The Graph subgraphs](https://medium.com/@balakhonoff_47314/tutorial-how-to-access-transactions-of-pepe-pepe-coin-using-the-graph-subgraphs-and-chatgpt-5cb4349fbf9e) - [The Graph Tutorial: Creating a Subgraph](https://mirror.xyz/0xB38709B8198d147cc9Ff9C133838a044d78B064B/DdiikBvOLngfOotpqNEoi7gIy9RDlEr0Ztv4yWlYyzc) - [Notifications from a Subgraph using Push](https://docs.push.org/developers/developer-guides/sending-notifications/using-subgraph-gasless) - [How to properly request JSON metadata stored in IPFS for your "The Graph" Subgraph](https://blog.developerdao.com/how-to-properly-request-json-metadata-stored-in-ipfs-for-your-the-graph-subgraph) - [Building a Full Stack Web3 YouTube Clone with Next, IPFS, The Graph, Solidity, and 
Livepeer](https://blog.suhailkakar.com/building-a-full-stack-web3-youtube-clone-with-next-ipfs-the-graph-solidity-and-livepeer) - [Subgraph Development](https://docs.blastapi.io/indexing/subgraph-development) - [How to Integrate The Graph and Create and Deploy a Subgraph](https://nodereal.io/tutorials/how-to-integrate-with-thegraph-using-meganode-archive-node/) - [Web3 data querying with The Graph and subgraphs](https://blog.logrocket.com/web3-data-querying-the-graph-subgraphs/) - [Create Lens Subgraph on The Graph Protocol](https://blog.devgenius.io/create-lens-subgraph-on-the-graph-protocol-8acfbac94ea8) - [Indexing data using The Graph's Indexer by LearnWeb3](https://learnweb3.io/lessons/indexing-data-using-the-graphs-indexer/) # Videos - [Build a Subgraph in 5 Minutes: Supercharging Your DApp](https://www.youtube.com/watch?v=L8jYtr4omKM) - [How to Deploy a Subgraph for Indexing Solidity Smart Contracts 2022](https://www.youtube.com/watch?v=YvKIkJTDD9E) - [Query Ethereum with GraphQL with The Graph](https://www.youtube.com/watch?v=l2rzT_Dp4T0&pp=ygUSc3ViZ3JhcGggdGhlIGdyYXBo) - [Building a Subgraph with Subgraph Studio](https://www.youtube.com/watch?v=HfDgC2oNnwo&t=5s) - [Building Subgraphs on The Graph](https://www.youtube.com/watch?v=coa0Vw47qNc&ab_channel=ETHGlobal) - [Building Rich APIs on top of Ethereum with The Graph](https://www.youtube.com/watch?v=wrV7cMebwyE) - [Building Subgraphs with The Graph](https://www.youtube.com/watch?v=ct1UMSpZLgk&t=9s) # Tools - [The Graph Hosted service](https://thegraph.com/hosted-service) - [SubgraphGPT](https://t.me/SubgraphGPT_bot) # Subgraphs hostings - [Chainstack](https://chainstack.com/subgraphs/) - [Satsuma](https://www.satsuma.xyz/) - [Goldsky](https://goldsky.com/) # GitHub repositories - [Messari Standard Subgraphs](https://github.com/messari/subgraphs). Standardized subgraphs for blockchain data - [Subgraph Toolkit](https://github.com/protofire/subgraph-toolkit). A collection of utilities and helpers to support the development of subgraphs - [Subgraph Query Portal](https://github.com/Evan-Kim2028/subgraph-query-portal). A collection of reusable public goods subgraph queries. - [Subgrounds](https://github.com/0xPlaygrounds/subgrounds). An intuitive python library for interfacing with Subgraphs. - [Example subgraph by The Graph](https://github.com/graphprotocol/example-subgraph). An example to help you get started with The Graph
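As a minimal illustration of the querying side covered by several of the links above, here is a sketch of sending a GraphQL query to a subgraph endpoint from Python with `requests`. The endpoint URL and the entity/field names are placeholders; substitute a real deployment and its schema.

```python
import requests

# Placeholder endpoint -- replace <org>/<subgraph> with a real deployment.
SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/<org>/<subgraph>"

QUERY = """
{
  tokens(first: 5, orderBy: id) {
    id
    symbol
  }
}
"""

def run_query(url, query):
    """POST a GraphQL query to a subgraph and return the decoded JSON body."""
    response = requests.post(url, json={"query": query}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(run_query(SUBGRAPH_URL, QUERY))
```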
Illyism/openai-whisper-api
https://github.com/Illyism/openai-whisper-api
OpenAI Whisper API based on Node.js / Bun.sh in a Docker Container + Google Cloud Run Example
<div style="text-align:center;"> <img src="https://magicbuddy.chat/img/whisper.jpg"> </div> # OpenAI Whisper API **An Open Source Solution for Speech-to-Text and More** Welcome to the OpenAI Whisper API, an open-source AI model microservice that leverages the power of OpenAI's whisper api, a state-of-the-art automatic speech recognition (ASR) system as a large language model. This service, built with Node.js, Bun.sh, and Typescript, is designed to run on Docker with zero dependencies, making it a versatile tool for developers across various speech and language-related applications. The Whisper API is a speech-to-text model trained on a vast amount of multilingual and multitask training data, including a wide range of audio files and audio recordings. It's a single model that can handle tasks such as language identification, speech translation, and of course, transforming spoken word into written text. The model is capable of handling a sequence of tokens and can work with natural language, making it a powerful tool for machine learning applications. It's designed to handle multilingual speech recognition, and it can even manage background noise, making it useful for transcribing a video call, zoom calls, a YouTube video or non-chat use cases in English language and more with full control. The API is simple and is designed to be easy to use for developers of all skill levels with simple developer access. It's an open-source project, and it's licensed under the MIT license, meaning you can use it in your own projects with few restrictions. Whether you're looking to transcribe voice messages, improve system performance through a series of system-wide optimizations, or explore the capabilities of the OpenAI Whisper API, this is the place to start. Dive into the following code to learn more about how to use this powerful tool as a first step and get your OpenAI Account with a new api key. ## Usage This is is a OpenAI Whisper API microservice using Node.js / Bun.sh / Typescript that can run on Docker. With zero dependencies. It listens to the `/transcribe` route for MP3 files and returns the text transcription. ## Running locally Install [bun.sh](https://bun.sh/) first, clone this directory and run these commands: ```bash bun install bun run dev ``` You can now navigate to http://localhost:3000 or the PORT provided, see the Usage section below. ## Docker - See: https://hub.docker.com/r/illyism/openai-whisper-api ## Google Cloud Run Deployment Install [bun.sh](https://bun.sh/) first, clone this directory and run these commands: Change the project ID to your own. ```bash docker build --platform linux/amd64 -t gcr.io/magicbuddy-chat/whisper-docker . docker push gcr.io/magicbuddy-chat/whisper-docker gcloud run deploy whisper-docker \ --image gcr.io/magicbuddy-chat/whisper-docker \ --platform managed \ --region us-central1 \ --allow-unauthenticated \ --project magicbuddy-chat ``` You should receive a Service URL, see the Usage section below. ## Usage You can test normal HTTP by opening the /ping endpoint on the URL. 
Connect to the /transcribe and send a POST request with the following body: ```json { "audio": "BASE64_ENCODED_AUDIO" } ``` ### API Key You need to pass the OpenAI API Key as a HEADER: ``` Authorization: Bearer OPENAI_KEY ``` Or you can launch the docker image or server with `OPENAI_KEY` in the env: ```bash OPENAI_KEY=YOUR_KEY_HERE bun run dev # or docker run -p 3000:3000 -e OPENAI_KEY=YOUR_KEY_HERE gcr.io/magicbuddy-chat/whisper-docker # or set it as env in Cloud Run with the below command or in the Cloud Console UI gcloud run deploy whisper-docker \ --image gcr.io/magicbuddy-chat/whisper-docker \ --platform managed \ --region us-central1 \ --allow-unauthenticated \ --project magicbuddy-chat \ --set-env-vars OPENAI_KEY=YOUR_KEY_HERE ``` # Live example We are using this Whisper API with [MagicBuddy, a Telegram ChatGPT bot](https://magicbuddy.chat/). You can use the [OpenAI Whisper Docker](https://magicbuddy.chat/openai-whisper) as a live example here: - https://magicbuddy.chat/openai-whisper
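For reference, a small Python client for the endpoint described above might look like the sketch below. The service URL and file name are placeholders (a local dev server or your Cloud Run service URL), and the JSON response shape is whatever the service returns.

```python
import base64
import requests

SERVICE_URL = "http://localhost:3000"   # or your Cloud Run service URL
OPENAI_KEY = "sk-..."                   # your OpenAI API key

def transcribe(mp3_path):
    """Base64-encode a local MP3 and POST it to the /transcribe endpoint."""
    with open(mp3_path, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode("ascii")
    response = requests.post(
        f"{SERVICE_URL}/transcribe",
        json={"audio": audio_b64},
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(transcribe("voice-message.mp3"))  # placeholder file name
```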
tedster0629/React_remix_dashiboard_free_download
https://github.com/tedster0629/React_remix_dashiboard_free_download
null
# Berry Free React Material UI Admin Template [![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Get%20Berry%20React%20-%20The%20most%20beautiful%20Material%20designed%20Admin%20Dashboard%20Template%20&url=https://berrydashboard.io&via=codedthemes&hashtags=reactjs,webdev,developers,javascript) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Price](https://img.shields.io/badge/price-FREE-0098f7.svg)](https://github.com/codedthemes/berry-free-react-admin-template/blob/main/LICENSE) Berry is a free Material UI admin dashboard template built with React. It is meant to provide the best possible User Experience with highly customizable feature-rich pages. It is a complete Dashboard Template that has easy and intuitive responsive design whether it is viewed on retina screens or laptops. :star: :star: :star: Do not forget to star (Top right of this page) it if you like the theme :star: :star: :star: ![IMG_8566.jpg](https://berrydashboard.io/imp-images/berry-github-free-repo-1.jpg) [Pro version](https://berrydashboard.io) of Berry react template contains features like TypeScript, Next.js Seed versions, Apps, Authentication Methods (i.e. JWT, Auth0, Firebase), Advance Components, Form Plugins, Layouts, Widgets, and many more. | [Berry Free](https://berrydashboard.io/free/) | [Berry](https://material-ui.com/store/items/berry-react-material-admin/) | | --------------------------------------------- | :----------------------------------------------------------------------- | | **9** Demo pages | **45+** demo pages | | - | ✓ Multi-language | | - | ✓ Dark/Light Mode 🌓 | | - | ✓ TypeScript version | | - | ✓ Design files (Figma) | | - | ✓ 6+ color Options | | - | ✓ RTL | | - | ✓ JWT, Firebase, Auth0 authentications | | - | ✓ [More components](https://berrydashboard.io/dashboard/default) | ## Why Berry? Berry offers everything you need to create dashboards. We have included the following high-end features in our initial release: - Modern aesthetics UI design - Material-UI components - Fully Responsive, all modern browser supported - Easy to use code structure - Flexible & High-Performance code - Easy Documentation Guide ## Free Berry React version - [Preview](https://berrydashboard.io/free/) - [Download from Github](https://github.com/codedthemes/berry-free-react-admin-template) ## Berry Pro version - [Preview](https://berrydashboard.io) - [Purchase from MUI Store](https://material-ui.com/store/items/berry-react-material-admin/) ## Table of contents - [Getting Started](#getting-started) - [Documentation](#documentation) - [Technology Stack](#technology-stack) - [Author](#author) - [Issues?](#issues) - [License](#license) - [More Products](#more-free-react-material-admin-templates) - [Follow us](#follow-us)
gunnargrosch/threads-lists
https://github.com/gunnargrosch/threads-lists
Repository contains lists of Threads users separated by categories.
# Lists of Threads users to follow ## Description Repository contains lists of [Threads](https://threads.net/) users separated by categories. ## Available lists - [AWS Developer Relations](#aws-developer-relations): Folks in the AWS Developer Relations organization. - [AWS Employees](#aws-employees): Folks from all over AWS (DevRel excluded). - [AWS Heroes](#aws-heroes): Members of the AWS Heroes program. - [AWS Community Builders](#aws-community-builders): Members of the AWS Community Builders program. - [AWS User Group Leaders](#aws-user-group-leaders): User group leaders (can include CBs and Heroes). ## AWS Developer Relations - [Gunnar Grosch @gunnargrosch](https://www.threads.net/@gunnargrosch) - [Taylor Jacobsen @taylorjacobsen](https://www.threads.net/@taylorjacobsen) - [Kris Howard @krishoward](https://www.threads.net/@krishoward) - [Banjo Obayomi @banjtheman](https://www.threads.net/@banjtheman) - [Danilo Poccia @danilop](https://www.threads.net/@danilop) - [Darko Mesaroš @darkosubotica](https://www.threads.net/@darkosubotica) - [Linda Haviv @lindavivah](https://www.threads.net/@lindavivah) - [Cobus Bernard @cobusbernard](https://www.threads.net/@cobusbernard) - [Ricardo Sueiras @ricardosueiras](https://www.threads.net/@ricardosueiras) - [Rohini Gaonkar @rohinigaonkar](https://www.threads.net/@rohinigaonkar) - [Boaz Ziniman @boaz.work](https://www.threads.net/@boaz.work) - [Elizabeth Fuentes @elifue.tech](https://www.threads.net/@elifue.tech) - [Mark Birch @startupmarkb](https://www.threads.net/@startupmarkb) - [Ana Luiza Cunha @analuizac](https://www.threads.net/@analuizac) - [Jacquie Grindrod @jaxxeth](https://www.threads.net/@jaxxeth) - [Brooke Jamieson @brooke.bytes](https://www.threads.net/@brooke.bytes) - [Carol Milanez @carol.milanez.br](https://www.threads.net/@carol.milanez.br) - [Nelly Andrade @nellyandrade](https://www.threads.net/@nellyandrade) - [Jenna Pedersen @jennanerdsout](https://www.threads.net/@jennanerdsout) - [Jason Dunn @jasondunn](https://www.threads.net/@jasondunn) - [Wojtek Gawroński @afronsky](https://www.threads.net/@afronsky) - [Jen Looper @jenlooper](https://www.threads.net/@jenlooper) ## AWS Employees - [Aaron Hunter @aaronshunter1213](https://www.threads.net/@aaronshunter1213) - [Muhammed Salih Guler @salihgueler](https://www.threads.net/@salihgueler) - [Emily Shea @em_shea](https://www.threads.net/@em_shea) - [James Eastham @plant_powered_james](https://www.threads.net/@plant_powered_james) - [Luca Mezzalira @mezzalab](https://www.threads.net/@mezzalab) - [Martin Beeby @thebeebsuk](https://www.threads.net/@thebeebsuk) - [Morgan Willis @morganwilliscloud](https://www.threads.net/@morganwilliscloud) - [David Velez @davidbabylon](https://www.threads.net/@davidbabylon) - [Maish Saidel-Keesing @maishsk](https://www.threads.net/@maishsk) - [Mark Nunnikhoven @marknca](https://www.threads.net/@marknca) - [Matt Auerbach @mauerbac](https://www.threads.net/@mauerbac) - [Nicklas af Ekenstam @nilleaf](https://www.threads.net/@nilleaf) - [Nick Walsh @nwalsh_](https://www.threads.net/@nwalsh_) - [Brendan Bouffler @boofla88](https://www.threads.net/@boofla88) - [Rick Suttles @ricksuttles](https://www.threads.net/@ricksuttles) - [Thomas Rindfuss @t.rindfuss](https://www.threads.net/@t.rindfuss) - [Rob Wilson @d3vlinuk](https://www.threads.net/@d3vlinuk) - [Olga Shepeleva @olynyc](https://www.threads.net/@olynyc) - [Alla Seiffert @allaseiff](https://www.threads.net/@allaseiff) - [Jeremy Baxter-Jenkins @sea_jeremy](https://www.threads.net/@sea_jeremy) - [Scott 
Grissom @_.scottv](https://www.threads.net/@_.scottv) - [Grant Starkman @gstarkma](https://www.threads.net/grantroot.sh) - [Erik Hanchett @erikch](https://www.threads.net/erikch) - [Jeff Barr @jeffbarr](https://www.threads.net/jeffbarr) - [Eduardo Rodrigues @edu.aws](https://www.threads.net/edu.aws) - [Matheus Duarte @mduarte.tech](https://www.threads.net/mduarte.tech) - [Angel Pizarro @delagoya](https://www.threads.net/delagoya) - [James Hood @james.hood.codes](https://www.threads.net/james.hood.codes) ## AWS Heroes - [Jones Zachariah Noel @zachjonesnoel](https://www.threads.net/@zachjonesnoel) - [Andrew WC Brown @andrew.wc.brown](https://www.threads.net/@andrew.wc.brown) - [Hiroko Nishimura @hirokonishimurawrites](https://www.threads.net/@hirokonishimurawrites) - [Ian Mckay @iann0036](https://www.threads.net/@iann0036) - [AJ Stuyvenberg @aj.stuyvenberg](https://www.threads.net/@aj.stuyvenberg) - [Adam Elmore @adamdotdev](https://www.threads.net/@adamdotdev) - [Luca Bianchi @luca.bianchi.1982](https://www.threads.net/@luca.bianchi.1982) - [Nathan Glover @nathanglove](https://www.threads.net/@nathanglove) - [Ran Isenberg @ranthebuilder](https://www.threads.net/@ranthebuilder) - [Gaurav Kamboj @oyehooye](https://www.threads.net/@oyehooye) - [Matthew Coulter @nideveloper](https://www.threads.net/@nideveloper) - [Danielle Heberling @deeheber](https://www.threads.net/@deeheber) - [Aidan Steele @awsteele0](https://www.threads.net/@awsteele0) - [Sheen Brisals @sheenbrisals](https://www.threads.net/@sheenbrisals) - [Chris Williams @mistwire](https://www.threads.net/@mistwire) - [Yan Cui @theburntmonk](https://www.threads.net/@theburntmonk) ## AWS Community Builders - [Lee James Gilmore @serverlessadvocate](https://www.threads.net/@serverlessadvocate) - [Matt Martz @martzcodes](https://www.threads.net/@martzcodes) - [Eyal Estrin @eyalestrin](https://www.threads.net/@eyalestrin) - [Steven Coochin @developersteve](https://www.threads.net/@developersteve) - [Tinotenda Makombe @_tinoten](https://www.threads.net/@_tinoten) - [Nick Frichette @nickfrichette](https://www.threads.net/@nickfrichette) - [Brian Morrison @brianmmdev](https://www.threads.net/@brianmmdev) - [Amanda Quint @acq688](https://www.threads.net/@acq688) - [Sanjay Krishna @sanjaykmenon1](https://www.threads.net/@sanjaykmenon1) - [Sakesh Karanjit @sakesh_karanjit](https://www.threads.net/@sakesh_karanjit) - [Sreekesh Iyer @sreekesh.pvt](https://www.threads.net/@sreekesh.pvt) - [John Green @johngreendev](https://www.threads.net/@johngreendev) - [Paul Levy @aws.pop.bot](https://www.threads.net/aws.pop.bot) - [Corey Quinn @lastweekinaws](https://www.threads.net/@lastweekinaws) - [Sebastian Bille @seb.bille](https://www.threads.net/@seb.bille) ## AWS User Group Leaders - - ## Contribute Help keep the lists up to date by creating a PR or contact Gunnar Grosch in the way you see fit. Also applies if you want to be removed from any of the lists. ## Author **Gunnar Grosch** - [GitHub](https://github.com/gunnargrosch) | [Threads](https://www.threads.net/@gunnargrosch) | [LinkedIn](https://www.linkedin.com/in/gunnargrosch/) | [Mastodon](https://hachyderm.io/@gunnargrosch) | [Twitter](https://twitter.com/gunnargrosch) | [Bluesky](https://bsky.app/profile/gunnargrosch.com)
wovosoft/laravel-typescript
https://github.com/wovosoft/laravel-typescript
Laravel models to TypeScript types and interfaces
# Laravel Typescript [![Latest Version on Packagist][ico-version]][link-packagist] [![Total Downloads][ico-downloads]][link-downloads] [![Build Status][ico-travis]][link-travis] [![StyleCI][ico-styleci]][link-styleci] Transforms Laravel models to TypeScript interfaces/types ## Installation Via Composer ```bash composer require --dev wovosoft/laravel-typescript ``` ## Publish Configuration Run the command given below. This will publish the `laravel-typescript.php` config file. ```bash php artisan vendor:publish --provider="Wovosoft\LaravelTypescript\LaravelTypescriptServiceProvider" ``` Then adjust the configuration as needed: ```php return [ "output_path" => resource_path("js/types/models.d.ts"), "source_dir" => app_path("Models") ]; ``` ## Usage Run the command given below to generate TypeScript types. ```bash php artisan typescript:transform-models ``` The generated content will be written to the configured location. ## Advanced Usage Sometimes models are stored in other locations, for example inside packages or other directories. In that case, please check the source of [./src/LaravelTypescript.php](https://github.com/wovosoft/laravel-typescript/blob/master/src/LaravelTypescript.php) You can simply instantiate this class and generate types for models in those other directories. ```php use \Wovosoft\LaravelTypescript\LaravelTypescript; $transformer=new LaravelTypescript( outputPath: resource_path("js/types/models.d.ts"), sourceDir: app_path("Models") ); $transformer->run(); ``` ## Note on New Model Attributes For new-style model attributes, the return type of the closure should be declared; otherwise the `unknown` type will be generated for the related property. For old-style attribute accessors, this is not mandatory. ```php use \Illuminate\Database\Eloquent\Model; use \Illuminate\Database\Eloquent\Casts\Attribute; class User extends Model{ public function isActive() : Attribute { return Attribute::get(fn(): bool =>$this->status==='active'); } public function getIsInactiveAttribute():bool { return $this->status==="inactive"; } } ``` ## Change log Please see the [changelog](changelog.md) for more information on what has changed recently. ## Contributing Please see [contributing.md](contributing.md) for details and a todo list. ## Security If you discover any security-related issues, please create an issue in the [issue tracker](https://github.com/wovosoft/laravel-typescript/issues) ## Credits - [Narayan Adhikary][link-author] - [All Contributors][link-contributors] ## License MIT. Please see the [license file](license.md) for more information. [ico-version]: https://img.shields.io/packagist/v/wovosoft/laravel-typescript.svg?style=flat-square [ico-downloads]: https://img.shields.io/packagist/dt/wovosoft/laravel-typescript.svg?style=flat-square [ico-travis]: https://img.shields.io/travis/wovosoft/laravel-typescript/master.svg?style=flat-square [ico-styleci]: https://github.styleci.io/repos/661637738/shield?branch=master [link-packagist]: https://packagist.org/packages/wovosoft/laravel-typescript [link-downloads]: https://packagist.org/packages/wovosoft/laravel-typescript [link-travis]: https://travis-ci.org/wovosoft/laravel-typescript [link-styleci]: https://github.styleci.io/repos/661637738 [link-author]: https://github.com/wovosoft [link-contributors]: ../../contributors
Inplex-sys/chatgpt-desktop
https://github.com/Inplex-sys/chatgpt-desktop
ChatGPT on Windows, including AIPRM (for people who can't install it on Firefox)
# ChatGPT Desktop App ChatGPT on Windows, including AIPRM (for people who can't install it on Firefox) ![image](https://github.com/Inplex-sys/chatgpt-desktop/assets/69421356/fe698013-b10e-4718-a70f-b1c15addbf3f) ## Install the App Install it by downloading the installer here: https://github.com/Inplex-sys/chatgpt-desktop/releases/download/Windows/chatgpt.Setup.1.0.0.exe ### Build from source ``` git clone https://github.com/Inplex-sys/chatgpt-desktop.git cd chatgpt-desktop npm install npm run build ``` You will see the x64 binary in `dist/`
AsyncWeb/bigbluebutton-streaming
https://github.com/AsyncWeb/bigbluebutton-streaming
BigBlueButton Streaming - Your free, open-source solution to expand your virtual classrooms to thousands of learners globally. Stream live on YouTube, Facebook, Vimeo, or any RTMP server right from BigBlueButton. No more user limit - teach without boundaries.
<div align="center"> <a href="https://higheredlab.com/" target="_blank"> <img alt="bbb-streaming" width="250" src="/static/hel-general-logo.png"> </a> </div> <h1 align="center">BigBlueButton Streaming</h1> <p align="center">BigBlueButton Streaming - Your free, open-source solution to expand your virtual classrooms to thousands of learners globally. Stream live on YouTube, Facebook, Vimeo, or any RTMP server right from BigBlueButton. No more user limit - teach without boundaries.</p> <br /><br/> <img style="width: 100%; height: auto;" src="/static/bigbluebutton-streaming.gif" alt="bigbluebutton-streaming" /> <br/><br/> <p>Embrace a limitless learning experience with BigBlueButton Streaming, the ultimate solution for your expanding educational needs. Developed as a free open-source software extension, BigBlueButton Streaming allows you to extend your virtual classrooms to thousands of learners around the globe. Widely recognized as the leading open-source classroom software, BigBlueButton is trusted by countless educational institutions worldwide. However, with a capacity limit of 100 users per class, larger educational sessions became a challenge – until now. Introducing BigBlueButton Streaming, your key to conducting large-scale, one-time events or regular oversized classes. Seamlessly stream your virtual classes directly from BigBlueButton to platforms such as YouTube, Facebook, Vimeo, or any RTMP server. It's simple to use - enter the RTMP URL and access key, click on "Start Streaming", and voila! Your class is live and can now reach thousands of students concurrently. This intuitive, user-friendly tool breaks boundaries in digital learning, bringing education closer to those who crave it. Experience this revolutionary extension today. Unleash the full potential of virtual learning with BigBlueButton Streaming, because education should know no boundaries.</p> <br/><br/> ## 🗝️ Unlock Limitless Learning: Key Features of BigBlueButton Streaming 1. 📺 **Live Streaming on Multiple Platforms**: Directly stream your classroom to YouTube, Facebook, Vimeo, or any RTMP server, maximizing your reach and availability for students around the world. 2. 🎥 **Ease of Streaming:** Begin live streaming your classes simply by entering the RTMP URL and access key, and pressing "Start Streaming." 3. 🚀 **Large-Scale Class Capacity**: Accommodate thousands of students in a single class, bypassing the original 100 users limit of BigBlueButton. 4. 🔗 **Compatibility with BigBlueButton**: Works directly within BigBlueButton, the widely-adopted virtual classroom software used by many educational institutions globally. 5. 🆓 **Open-Source and Free**: BigBlueButton Streaming is an open-source software extension, available to all users at no cost. <br/><br/> ## 💡 5 Benefits: Amplify Impact with BigBlueButton Streaming 1. 🌍 **Expanded Reach**: You can now teach thousands of students from various geographical locations simultaneously. 2. 📱 **Increased Accessibility**: With classes being streamed on popular platforms, students can access lessons from devices they already use in their everyday lives. 3. 💰 **Cost-Efficiency**: As a free, open-source software, BigBlueButton Streaming allows educational institutions to reduce costs associated with premium virtual classroom tools. 4. ⏰ **Flexibility and Convenience**: The ability to schedule large classes or one-time events provides flexibility to educators and convenience to learners. 5. 
🧩 **Ease of Integration**: Being an extension of the already popular BigBlueButton, integrating this tool into existing educational frameworks is straightforward and hassle-free. <br/><br/> ## 📋 Requirements To install this software, BigBlueButton must already be installed. **Minimum environment requirements** - BigBlueButton versions ['2.6.10' '2.7.0-beta.2']. - Docker must be installed on the system to manage containerization and deployment of BigBlueButton. - A properly configured and functioning TURN server is necessary for real-time communication and media relay. <br/><br/> ## 📦 Installation - Clone the repository. - Go to `bigbluebutton-streaming/` - Run install.sh ```bash git clone https://github.com/AsyncWeb/bigbluebutton-streaming.git cd bigbluebutton-streaming bash install.sh ``` > 🚨 Note: install.sh will restart the BigBlueButton server, so please make sure there are no meetings running on the server. > 💡 Make sure to stop streaming before ending the BigBlueButton session. <br/> [📺 Installation Demo](https://bbb1.asyncweb.io/recording/bigbluebutton-streaming-installation.mp4) <br/> <br/> ## 🔄 Concurrent Streaming If you aim to host multiple meetings simultaneously on your single BigBlueButton server and require concurrent streaming for each, follow these steps to set it up. - Navigate to the streaming server directory: ```bash cd bigbluebutton-streaming/streaming-server/ ``` - Open the .env file for editing using sudo privileges. For instance, with the vi editor: ```bash sudo vi .env ``` - In the .env file, modify the NUMBER_OF_CONCURRENT_STREAMINGS variable to indicate the number of simultaneous streams you want to handle. For instance, to enable three concurrent streams: ```bash NUMBER_OF_CONCURRENT_STREAMINGS=3 ``` - Save your changes and exit the file editor. - Build the Docker image: ```bash docker build -t bbb-stream:v1.0 . ``` - Finally, restart your bbb-streaming service with pm2: ```bash pm2 restart bbb-streaming ``` <br /> Now, your server can handle the number of concurrent streams you've specified, allowing each meeting to be streamed simultaneously. <br /> <br /> <div align="center"> <img alt="bbb-streaming-error" width="100%" src="static/bigbluebutton-streaming-error.png"> </div> <br /> > 🚨 Note: If you encounter the error shown above, it indicates that your server has reached its limit for concurrent streams. <br /> > 💡 Remember: Successful operation of concurrent streaming depends significantly on the capacity of your server. Ensure that your server is capable of handling the number of concurrent streams you've set. <br/><br/> ## 🗑️ Uninstallation - Go to `bigbluebutton-streaming/`. - Run `uninstall.sh`. ```bash cd bigbluebutton-streaming bash uninstall.sh ``` <br/><br/> ## 🔎 How it works 1. 🚀 **Node.js App:** The Node.js app starts the streaming container, serving as a controller for streaming BigBlueButton meetings. 2. 📬 **REST API:** The app exposes a REST API to receive requests for starting and stopping streaming. 3. 🔑 **Environment Variables:** Sensitive data, such as the BigBlueButton URL, secret, and other configurations, are stored in environment variables loaded from a .env file. 4. 🔗 **Puppeteer Integration:** Puppeteer is utilized to launch a headless Chrome browser, enabling programmatic interaction with the BigBlueButton meeting UI. 5. 🖥️ **Virtual Display:** Xvfb creates a virtual display for Chrome, allowing it to run without a physical display server. 6. 
🤝 **Joining the Meeting:** The app configures Puppeteer to join the BigBlueButton meeting as a viewer with specific settings, such as listen-only mode and element visibility. 7. 📼 **Screen Recording:** A child process invokes ffmpeg to record the meeting screen and stream it to a specified RTMP server (a rough illustration of this step is sketched at the end of this README). 8. ⏹️ **Stop Streaming**: The app waits for a stop-streaming request or for the meeting to end, then stops the ffmpeg process, finalizing the stream. <br /> <br /> <img alt="bbb-streaming" src="/static/bigbluebutton-streaming-sequence.png"/> <br/><br/> ## 🚀 <a href="https://higheredlab.com" target="_blank">Ready to Transform Your Online Teaching Experience?</a> Discover a new era of online learning with HigherEdLab's BigBlueButton hosting service. With features ranging from crystal-clear HD video learning to interactive tools such as chat, poll, and presentations, we ensure that your virtual classrooms emulate the dynamic environment of physical ones. Enjoy the benefits of AI with ChatGPT-powered quizzes and transcriptions that enhance the learning experience. With HigherEdLab, you can customize your virtual learning space with your own domain, logo, and colors to align with your institution's brand identity. We also offer advanced user management, seamless integration options, and comprehensive analytics to give you complete control over your teaching platform. Ready to embrace the next level of digital education? <a href="https://higheredlab.com" target="_blank"><strong>Sign Up</strong></a> Now for HigherEdLab's BigBlueButton Hosting Service and transform the way you teach online.
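To make step 7 of "How it works" above more concrete, here is a rough, illustrative Python sketch of driving ffmpeg to capture a virtual X display and push it to an RTMP endpoint. The project itself does this from Node.js, and the display number, resolution, audio source, and RTMP URL below are placeholder assumptions, not values taken from the project.

```python
import subprocess

# Hypothetical values -- adjust for your own setup.
RTMP_URL = "rtmp://live.example.com/app/stream-key"

ffmpeg_cmd = [
    "ffmpeg",
    "-f", "x11grab",                 # grab frames from an X11 display (e.g. the Xvfb display)
    "-video_size", "1280x720",       # assumed capture resolution
    "-framerate", "30",
    "-i", ":99.0",                   # assumed virtual display number
    "-f", "pulse", "-i", "default",  # assumed audio source
    "-c:v", "libx264", "-preset", "veryfast",
    "-c:a", "aac",
    "-f", "flv",                     # RTMP expects an FLV container
    RTMP_URL,
]

# Start streaming; calling process.terminate() corresponds to the "Stop Streaming" step.
process = subprocess.Popen(ffmpeg_cmd)
```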
sydowma/codeGPT
https://github.com/sydowma/codeGPT
CodeGPT is a ChatGPT-based code interpreter that generates Markdown files using Python, simplifying code understanding and presentation.
# codeGPT CodeGPT is an innovative solution, leveraging the power of ChatGPT to analyze and interpret programming code. It represents a groundbreaking advancement in the field of coding and software development, using AI technology to help developers understand complex code structures. Developed predominantly using Python, CodeGPT carries the known benefits of this powerful language, such as simplicity and flexibility. This allows the tool to be highly compatible with a wide variety of operating systems and programming environments. One unique feature of CodeGPT is its ability to generate Markdown files from the interpreted code. This provides developers with an organized and comprehensible view of their codebase. Furthermore, these Markdown files can be utilized to build a comprehensive website or used as standalone guides for understanding the intricate details of the code. Key Features: - Code analysis and interpretation powered by ChatGPT. - Generation of Markdown files for efficient documentation. - Capability to generate websites to showcase your code's functionality. - Compatibility and flexibility ensured by the Python foundation. In essence, CodeGPT is more than just a code interpreter; it's a comprehensive suite of tools designed to make coding more accessible and efficient. It aids in understanding complex code, generates organized documentation, and can even help present your work through an interactive website. Let CodeGPT be your partner in coding: experience a new way of managing and understanding code, and boost your productivity today. ## Init venv environment ```shell virtualenv venv -p python3 source venv/bin/activate pip install -r requirements.txt ``` ## How to use 1. Rename env.example to .env ```shell mv env.example .env ``` 2. Add your OpenAI API key to the `.env` file ```shell OPENAI_API_KEY=sk-xxxxxxxxxxxxx ``` 3. Generate Markdown files ```shell # default file type is java python main.py analyse LMAX-Exchange/disruptor # filter Python files # python main.py LMAX-Exchange/disruptor --file-extension py ``` 4. Generate a Markdown book ```shell python main.py book ./explanations/disruptor-master ``` ## Feature list - [x] command line - [x] generate markdown book - [ ] offline version
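As a rough illustration of the core idea described above (send a source file to the ChatGPT API and save the explanation as a Markdown file), here is a minimal Python sketch. It is not the project's actual implementation; the model name, prompt, and output paths are assumptions made for the example.

```python
import os
from pathlib import Path

from openai import OpenAI  # assumes the official openai>=1.0 client

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def explain_file(source_path: str, out_dir: str = "explanations") -> Path:
    """Ask the model to explain one source file and save the answer as Markdown."""
    code = Path(source_path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the project may use a different one
        messages=[
            {"role": "system", "content": "You explain source code. Answer in Markdown."},
            {"role": "user", "content": f"Explain what this code does:\n\n{code}"},
        ],
    )
    out_path = Path(out_dir) / (Path(source_path).name + ".md")
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(response.choices[0].message.content, encoding="utf-8")
    return out_path


# Hypothetical usage:
# explain_file("disruptor/src/main/java/com/lmax/disruptor/RingBuffer.java")
```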
datagenko/datagen
https://github.com/datagenko/datagen
Data generator open-source project
# DataGen ## Purpose This repository aims to make it easier to generate Korean-style data. It is a non-profit project and can be used without any conditions. ## Contributors - Contributors (in Korean alphabetical order): 김미정, 김바름, 김영조, 김지원, 김하연, 남현희, 맹지연, 백수연, 손수민, 송윤비, 신민철, 안동섭, 양서진, 오연주, 우혜리, 유재영, 유하은, 윤서준, 윤선호, 윤지수, 이도하, 이미현, 이승빈, 이양래, 이지섭, 이지수, 이호준, 정서영, 정신애, 정유송, 조원영, 차예원, 최범관, 한수빈, 한수정 ## Contributing - Please contact [email protected] first to discuss which part you would like to contribute to. - For source code, fork the repository and open a pull request. - Source code is merged after internal review.
Inplex-sys/pc-auth
https://github.com/Inplex-sys/pc-auth
A desktop 2FA (OTP) and password manager for Windows, Linux and MacOS
# 2FA Manager for Windows - Secure AES Encryption ![Download For Windows](https://github.com/Inplex-sys/pc-auth/releases/download/Windows/pc-auth.Setup.0.1.0.exe) - ✅ AES Security* ( Don't forget to use a good password. Everything is securized in an AES container. ) - ✅ Portable App - ✅ 100% Open Source - ✅ Privacy Friendly ( you don't even need internet ) <div align="center"> <img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/react/react-original-wordmark.svg" height="100"> <img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/electron/electron-original.svg" height="100"> <img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/nodejs/nodejs-original-wordmark.svg" height="100"> </div> <br> <br> https://github.com/Inplex-sys/pc-auth/assets/69421356/88e185c1-61a6-4a8a-86d9-f377f757025c ## Introduction 2FA Manager for Windows is a secure and user-friendly application that allows users to manage their Two-Factor Authentication (2FA) credentials efficiently. This project is built using React, Electron, and React-Bootstrap, providing a modern and responsive user interface for managing 2FA credentials. The user data is encrypted using AES encryption, ensuring that the data remains secure and confidential. ## Features - Add, edit, and delete 2FA authenticators with ease. - Securely store account credentials, 2FA secrets (TOTP), and other necessary information. - AES encryption ensures that user data is stored securely and cannot be accessed without the correct decryption key. - Search functionality to quickly find specific authenticators. - Customizable authenticator labels for easy identification. - User-friendly and intuitive interface. - Support for multiple accounts on the same website. ## Installation 1. Clone the repository to your local machine. 2. Install dependencies using npm: ``` npm install ``` ## Usage 1. Run the application using the following command: ``` npm start ``` 2. The 2FA Manager window will appear, allowing you to add and delete authenticators. 3. To add a new authenticator, click on the "Add Authenticator" button and fill in the required fields. 4. To delete an existing authenticator, click on the corresponding authenticator card and make the necessary changes. 5. The user data is encrypted using AES encryption, ensuring that it remains secure and confidential. ## Build from source 1 - Build the React App and the Electron App ``` npm i electron-builder -g npm run build ``` 2 - Get your installer in `./dist/` ## Security The 2FA Manager employs AES encryption to safeguard user data. AES encryption is a robust and widely recognized encryption algorithm, providing strong protection against unauthorized access. The encryption key is derived from the user's master password, which is not stored anywhere, ensuring that only the user can decrypt their data. **Note**: It is essential to use a strong and unique master password to enhance the security of your data. ## Contributions Contributions to this project are welcome! If you find any bugs or have suggestions for improvements, feel free to create an issue or submit a pull request. ## License This project is licensed under the [GNU License](https://github.com/Inplex-sys/pc-auth/LICENSE), it requires that any modifications or derivative works of your project be released under the same license, thus preserving the open-source nature of the software. ## Disclaimer The 2FA Manager is provided as-is, without any warranties or guarantees. 
While AES encryption is a strong security measure, it is essential to exercise caution and use a strong master password. The developers are not responsible for any data loss or security breaches that may occur due to improper usage or vulnerabilities outside the project's scope. Users are encouraged to back up their data regularly and follow best practices for password management and security.
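To make the building blocks mentioned above concrete (generating a TOTP code and keeping 2FA secrets in an encrypted container), here is a small, illustrative Python sketch. The app itself is built with React/Electron and its real storage format is not shown here; the key-derivation and encryption choices below (PBKDF2 plus Fernet from the `cryptography` package) are assumptions used only to demonstrate the concept, and the sample secret, salt, and password are placeholders.

```python
import base64
import hashlib
import hmac
import struct
import time

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current time step."""
    key = base64.b32decode(secret_b32.upper().replace(" ", ""))
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def vault_key(master_password: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the master password (assumed scheme: PBKDF2)."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=390_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))


# Encrypt a 2FA secret so it can only be recovered with the master password.
salt = b"example-salt-16b"                  # in practice, random and stored alongside the vault
vault = Fernet(vault_key("a strong master password", salt))
token = vault.encrypt(b"JBSWY3DPEHPK3PXP")  # encrypted at rest
print(totp(vault.decrypt(token).decode()))  # decrypted only to compute the current code
```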
jasoncdavis/SSH2Influx
https://github.com/jasoncdavis/SSH2Influx
null
<a name="readme-top"></a> [![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] [![Cisco Sample Code License, Version 1.1][license-shield]][license-url] [![LinkedIn][linkedin-shield]][linkedin-url] <!-- PROJECT LOGO --> <br /> <div align="center"> <a href="https://github.com/jasoncdavis/SSH2Influx"> <img src="images/logo-SSH2Influx.png" alt="Logo" width="320" height="160"> </a> <h3 align="center">SSH2Influx</h3> <p align="center"> Convert CLI-based data from SSH-accessible endpoints into InfluxDB measurements for graphing and dashboards! <br /> <a href="https://github.com/jasoncdavis/SSH2Influx/issues">Report Bug</a> · <a href="https://github.com/jasoncdavis/SSH2Influx/issues">Request Feature</a> </p> </div> [![SSH2Influx Background and Demo](https://i9.ytimg.com/vi_webp/o_J-bQHChCQ/mqdefault.webp?v=64b5a095&sqp=CJjA1qUG&rs=AOn4CLBKGl1UkLFTPmMdzpp4gJrQpMtTmQ)](https://youtu.be/o_J-bQHChCQ "SSH2Influx - Background and Demo") <!-- TABLE OF CONTENTS --> <ol>Table of Contents <li> <a href="#about-the-project">About The Project</a> <ul> <li><a href="#built-with">Built With</a></li> </ul> </li> <li> <a href="#getting-started">Getting Started</a> <ul> <li><a href="#prerequisites">Prerequisites</a></li> <li><a href="#installation">Installation</a></li> <li><a href="#configuration">Configuration</a></li> </ul> </li> <li><a href="#usage">Usage</a></li> <li><a href="#roadmap">Roadmap</a></li> <li><a href="#contributing">Contributing</a></li> <li><a href="#license">License</a></li> <li><a href="#contact">Contact</a></li> <li><a href="#acknowledgments">Acknowledgments</a></li> </ol> <!-- ABOUT THE PROJECT --> ## About The Project Have you ever needed to periodically extract data from a device using SSH, then put it into InfluxDB to create beautiful Grafana dashboards? <div align="center"> <a href="https://github.com/jasoncdavis/SSH2Influx"> <img src="images/HowToGetThere.png" alt="workflow" width="455" height="256"> </a> </div> If so, this is the project for you! Admittedly there are *much better* ways to programmatically extract data from networked devices - gRPC streaming telemetry, NETCONF RPCs, even SNMP! However, sometimes there's a metric in a device that is only available through an SSH connection and some command execution. This project enables you to define what networked devices should be accessed, what commands to execute (either as a group or individually), what regular expression (regex) patterns to use to capture the desired output AND how to define the Influx tagging and keying to make proper measurements that are injected into InfluxDB! How cool is that!? A "parameters" file defines the device list, commands, regex patterns and tagging/keying specifications. An [optionsconfig.yaml](./optionsconfig.yaml) file defines the secret credentials that should be maintained separately. You may have multiple parameters files and use them at different polling intervals to suite your needs. This SSH2Influx project has been used for the last year at the CiscoLive NOC to collect CLI-based metrics from Wireless LAN Controllers (WLCs) and Catalyst 7/9k switches. It has recently been enhanced to also support Linux end-points, such as Ubuntu VMs. A sample of a CiscoLive NOC dashboard created with this project. 
[![NOC Dashboard Screen Shot][product-screenshot]] <p align="right">(<a href="#readme-top">back to top</a>)</p> ### Built With * [![Python][python.org]][Python-url] * [![InfluxDB][influxdb.org]][Influx-url] <p align="right">(<a href="#readme-top">back to top</a>)</p> <!-- GETTING STARTED --> ## Getting Started To get started we need to prepare our execution environment, obtain the source code, then follow-up with a parameters file defining the job properties. ### Prerequisites We need a Python 3.7+ environment. A virtual environment is also suggested. * Python This is a suggested resource to [install Python](https://www.python.org/downloads/) * InfluxDB This is a suggested resource to [install InfluxDB](https://www.influxdata.com/products/influxdb/) * Grafana This is a suggested resource to [install Grafana](https://grafana.com/grafana/download/10.0.0?pg=oss-graf&plcmt=hero-btn-1) > **Note** > We are focusing this project on the SSH collection, pattern matching and InfluxDB ingestion activities. It is up to the user to define how they want to create and display their dashboards in Grafana ### Installation 1. Clone the repo ```sh git clone https://github.com/jasoncdavis/SSH2Influx.git ``` 2. Create a Python virtual environment (venv) ```sh cd SSH2Influx python3 -m venv .venv source .venv/bin/activate ``` 3. pip install the project requirements ```sh pip install -r requirements.txt ``` <p align="right">(<a href="#readme-top">back to top</a>)</p> <!-- USAGE EXAMPLES --> ### Configuration After installation we focus on setting up the necessary project files. Two main files are necessary. * *optionsconfig.yaml* This YAML formatted file contains device inventory, credentials and Influx server information (with credentials). Keep access to this file restricted, as it contains credential information. An example is at [example-optionsconfig.yaml](./example-optionsconfig.yaml) - copy this file to optionsconfig.yaml, then edit it to reflect your InfluxDB server specifications and credentials. Also add your inventory information. The DevNet Sandbox Always-On devices have been provided for getting started and familiarization with the project. You will eventually want to remove those 3 entries. * *parameters.yaml* This file can be renamed to suite your preference as the filename is passed as a command-line argument to the execution of SSH2Python.py. It is in YAML format and contains non-confidential information about the job inventory, which may be a smaller subset of the main inventory in [optionsconfig.yaml](./optionsconfig.yaml). It also defines an optional Influx server reference, so different job specifications can have different target Influx servers for storage. The regular expression matching patterns and tagging criteria for [Influx line protocol](https://docs.influxdata.com/influxdb/v2.7/reference/syntax/line-protocol/) also goes in this file. Examples for this file are in [examples/](./examples/) ### Influx Line Protocol The project uses the Influx Line Protocol to inject the SSH/CLI metrics and properly formatted measurements for InfluxDB. The InfluxDB Line Protocol is more completely documented [HERE](https://docs.influxdata.com/influxdb/v2.7/reference/syntax/line-protocol/) Essentially the format appears as... 
// Syntax <measurement>[,<tag_key>=<tag_value>[,<tag_key>=<tag_value>]] <field_key>=<field_value>[,<field_key>=<field_value>] [<timestamp>] // Example myMeasurement,hostname=core-router,interface=GigabitEthernet0/0/1 errors=1234 1556813561098000000 To elaborate, you start with a defined measurement name, follow it with a comma, then one or more comma separated 'static' tags, then a space, then one or more comma separated 'variable' fields and keys. Finally a space and a specified timestamp can be provided. The timestamp is optional and can be assumed to be the current date/time (in UTC) when the measurement is injected to InfluxDB. This project uses that assumption - it collects the data, formats it to line protocol and injects the measurements to InfluxDB with the current date/time stamp. We pack multiple measurements from all devices and matching specifications provided in the parameters.yaml file into a single push to InfluxDB for each polling interval. Each measurement entry is appended to a group of entries. Each entry also has the *device* static tag provided which maps to the device the command was executed on. Other static and variable tags are defined in the parameters.yaml file. ### Host Command execution It is important to know the full commands desired for execution on the remote host. Do not rely on CLI shortcuts or interpreted variables. It is suggested to execute the command(s) on a representative device manually to verify the command syntax and collect sample output for the next step of Regular Expression pattern matching. ### Regular Expression (Regex) pattern matching Once the command output is obtained the desired metrics must be defined through capture groups in regular expression (regex) pattern match statements. I find it convenient to use the [Patterns](https://krillapps.com/patterns/) app on MacOS platforms to build and visualize my regex patterns. There are many similar tools available. It is not suggested to use free online web site tools, especially if you're dealing with private information of serial numbers, contract data, etc. ### Creating the parameters file The parameters file is in YAML format and can be filenamed to your preference. The base schema is: inventory: credential_set: DefaultCredentials hosts: - host: host-a - host: host-with-specific-commands commands: - cmd: show 'something' - cmd: show 'somethingelse' groupcommands: - cmd: show <groupcommand> parsespec: 101 parsespecs: - parsespec: 101 measurement: inventory matchtype: single *inventory* is one of two branches - it defines the credential set, host list and group commands for the job. Credential sets are defined in the [optionsconfig.yaml](./optionsconfig.yaml) file with all credentials and API keys. The credential set value, *DefaultCredentials* in this example, must be defined similar to this form: # Authentication Groups DefaultCredentials: username: myUserName password: myPassword If a credential_set is not defined or if there is an overriding value in the [optionconfig.yaml](./optionsconfig.yaml) file, then the project will assume the host-specific defined credential in the [optionsconfig.yaml](./optionsconfig.yaml) host entry. The *hosts* sub-branch defines all devices that are part of this job specification. Individual hosts can have host-specific commands added to their 'group commands', if defined in their subordinate record. The *groupcommands* sub-branch defines all commands and parsing specifications that should be executed on every device in the hosts list. 
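Before moving on to the parse spec details, here is a small, hypothetical Python sketch of the core idea this file drives: apply a regex with named capture groups to command output and format the match as an Influx line-protocol entry like the ones shown above. This is not SSH2Influx's actual code, and the command output, pattern, measurement, and field names are made up for illustration.

```python
import re

# Hypothetical CLI output and pattern -- not taken from the project.
output = """GigabitEthernet0/0/1 is up, line protocol is up
  5 minute input rate 1536000 bits/sec, 210 packets/sec"""

pattern = re.compile(r"(?P<interface>\S+) is up.*?input rate (?P<bps>\d+) bits/sec", re.S)


def to_line_protocol(device: str, text: str) -> str:
    match = pattern.search(text)
    # <measurement>,<static tags> <field>=<value>   (timestamp omitted, as the project does)
    return (
        f"interfaces,device={device},interface={match.group('interface')} "
        f"input_bps={match.group('bps')}"
    )


print(to_line_protocol("core-router", output))
# interfaces,device=core-router,interface=GigabitEthernet0/0/1 input_bps=1536000
```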
The *parsespec* reference must map to a full parsespec record defined later in the parameters.yaml file. *parsespecs* is the second of the main branches. It defines the parsing specifications which include the *parsespec* cross-reference value, measurement name, matchtype, regex pattern(s) and tag/field values. There are 3 supported modes of regex matchtypes. * single * multiple * iterative #### Single regex matchtypes A single regex match type processes through the command output in one pass with one regex pattern that has one or more matching groups. This is a common match type and is expected to be used most often. An example of a single match type parameters YAML file can be found as [examples/sample-single.yaml](./examples/sample-single.yml) #### Multiple regex matchtypes A multiple regex match type processes through the command output in one pass, but matches the same regex pattern against multiple lines. Therefore, multiple matches may be found and associated with the measurement group produced. Consider a command output that has multiple line outputs (or records) related to similar items, such as CPUs, interfaces, process lists, power supplies, etc. A regex pattern could match against each item-record and extract the same capturing group(s) from each item-record or line. An example of a multiple match type parameters YAML file can be found as [examples/sample-multiple.yaml](./examples/sample-multiple.yml) #### Iterative regex matchtypes An iterative regex match type processes through the command output in several passes. It can apply the same regex pattern to multiple matched instances in the output and tag them with different field values. Consider output where there are multiple records - e.g. IPv4 and IPv6 or TCP and UDP. The other use case is where the same output must have multiple, different regex patterns applied. Consider a command output that has multiple metrics of interest, but they must be obtained with different regex patterns. The iterative match type will retain the whole output and apply each parsing spec regex pattern in a list of patterns. There could still be several matching groups per regex pattern. The resulting metrics would be combined into a single measurement, e.g. ipv4 and ipv6 stats. An example of this can be found as [examples/sample-iterative.yaml](./examples/sample-iterative.yml) <p align="right">(<a href="#readme-top">back to top</a>)</p> ## Usage The SSH2Influx.py program has command-line help. To access, run the following. ```sh cd SSH2Influx source .venv/bin/activate python SSH2Influx.py --help ``` This results in usage help of... ```sh usage: SSH2Influx.py [-h] [-d] -p paramfile [-g group] [-f frequency] Obtain metrics from a device via SSH; parse and format for InfluxDB options: -h, --help show this help message and exit -d, --debug Enables debug with copious console output, but none to InfluxDB -p paramfile, --paramfile paramfile YAML file with inventory and parsing specs -g group, --group group Device group from optionsconfig.yaml (default of "device_inventory") -f frequency, --frequency frequency Frequency (in seconds) to repeat collection (default of 300 seconds) ``` Debug mode (-d or --debug) is optional. Providing the parameter file is required. Providing the polling frequency is optional, but defaults to every 5 minutes. An example of usage with the provided [examples/sample-single.yaml](./examples/sample-single.yml) parameters file follows. 
```sh (.venv) jason@macbook SSH2Influx % python SSH2Influx.py -p examples/sample-single.yml Starting script SSH2Influx.py with parameters file "examples/sample-single.yml" at 2023-07-07 16:01:32.782917 with DEBUG False Using project-wide Influx server: influxdb =====Learning device prompts sandbox-iosxe-latest-1 initialized sandbox-iosxe-recomm-1 initialized sandbox-iosxr-1 initialized =====Collecting commands for hosts... Connection made to sandbox-iosxe-latest-1 / 131.226.217.143:22 with prompt <Cat8000V#> Connection made to sandbox-iosxe-recomm-1 / 131.226.217.149:22 with prompt <csr1000v-1#> Connection made to sandbox-iosxr-1 / 131.226.217.150:22 with prompt <RP/0/RP0/CPU0:HelloSweetWorld#> =====Processing output of hosts... Processing: sandbox-iosxe-latest-1 Processing: sandbox-iosxe-recomm-1 Processing: sandbox-iosxr-1 COMPLETE with command processing - Final influx write line output is: inventory,device=sandbox-iosxe-latest-1,hostname=Cat8000V uptime="1 hour, 4 minutes" inventory,device=sandbox-iosxe-recomm-1,hostname=csr1000v-1 uptime="1 day, 5 hours, 8 minutes" inventory,device=sandbox-iosxr-1,hostname=System uptime="4 days 1 hour 5 minutes" 204 - No Content - Good data push to InfluxDB Finished at: Fri Jul 7 16:01:40 2023 Execution time in seconds: 4.138 Waiting 300 seconds until next poll. ``` Use CONTROL-C to break the execution, if you wish to stop the periodic polling. <!-- ROADMAP --> ## Roadmap - [ ] Allow the use of SSH keys - [ ] Enhanced Threading for larger environments - [ ] Feature 3 - [ ] Nested Feature See the [open issues](https://github.com/jasoncdavis/SSH2Influx/issues) for a full list of proposed features (and known issues). <p align="right">(<a href="#readme-top">back to top</a>)</p> <!-- CONTRIBUTING --> ## Contributing Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again! 1. Fork the Project 2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`) 3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`) 4. Push to the Branch (`git push origin feature/AmazingFeature`) 5. Open a Pull Request <p align="right">(<a href="#readme-top">back to top</a>)</p> <!-- LICENSE --> ## License Distributed under the Cisco Sample Code License, Version 1.1 License. See [LICENSE.md](./LICENSE.md) for more information. 
<p align="right">(<a href="#readme-top">back to top</a>)</p> <!-- CONTACT --> ## Contact [@SNMPguy](https://twitter.com/SNMPguy) - [email protected] [https://github.com/jasoncdavis/SSH2Influx](https://github.com/jasoncdavis/SSH2Influx) <p align="right">(<a href="#readme-top">back to top</a>)</p> <!-- ACKNOWLEDGMENTS --> ## Acknowledgments * [Mel Delgado](https://developer.cisco.com/people/mel-delgado/) <p align="right">(<a href="#readme-top">back to top</a>)</p> [![published](https://static.production.devnetcloud.com/codeexchange/assets/images/devnet-published.svg)](https://developer.cisco.com/codeexchange/github/repo/jasoncdavis/SSH2Influx) <!-- MARKDOWN LINKS & IMAGES --> <!-- https://www.markdownguide.org/basic-syntax/#reference-style-links --> [contributors-shield]: https://img.shields.io/github/contributors/jasoncdavis/SSH2Influx.svg?style=for-the-badge [contributors-url]: https://github.com/jasoncdavis/SSH2Influx/graphs/contributors [forks-shield]: https://img.shields.io/github/forks/jasoncdavis/SSH2Influx.svg?style=for-the-badge [forks-url]: https://github.com/jasoncdavis/SSH2Influx/network/members [stars-shield]: https://img.shields.io/github/stars/jasoncdavis/SSH2Influx.svg?style=for-the-badge [stars-url]: https://github.com/jasoncdavis/SSH2Influx/stargazers [issues-shield]: https://img.shields.io/github/issues/jasoncdavis/SSH2Influx.svg?style=for-the-badge [issues-url]: https://github.com/jasoncdavis/SSH2Influx/issues [license-shield]: https://img.shields.io/badge/License-Cisco%20Sample%20Code%20License%2C%20Version%201.1-lime [license-url]: https://developer.cisco.com/site/license/cisco-sample-code-license [linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555 [linkedin-url]: https://linkedin.com/in/jasoncdavis [product-screenshot]: images/screenshot.jpg [python.org]: https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54 [Python-url]: https://python.org/ [influxdb.org]: https://img.shields.io/badge/InfluxDB-22ADF6?style=for-the-badge&logo=InfluxDB&logoColor=white [Influx-url]: https://www.influxdata.com/products/influxdb-overview/
leswright1977/PyThermalCamera
https://github.com/leswright1977/PyThermalCamera
Python Software to use the Topdon TC001 Thermal Camera on Linux and the Raspberry Pi
# PyThermalcam Python Software to use the Topdon TC001 Thermal Camera on Linux and the Raspberry Pi. It **may** work with other similar cameras! Please feed back if it does! Huge kudos to LeoDJ on the EEVBlog forum for reverse engineering the image format from these kinds of cameras (InfiRay P2 Pro) to get the raw temperature data! https://www.eevblog.com/forum/thermal-imaging/infiray-and-their-p2-pro-discussion/200/ Check out Leo's Github here: https://github.com/LeoDJ/P2Pro-Viewer/tree/main ## Introduction This is a quick and dirty Python implementation of Thermal Camera software for the Topdon TC001! (https://www.amazon.co.uk/dp/B0BBRBMZ58) No commands are sent to the camera; instead, we take the raw video feed, do some openCV magic, and display a nice heatmap along with relevant temperature points highlighted. ![Screenshot](media/TC00120230701-131032.png) This program, and the associated information, is Open Source (see Licence), but if you have gotten value from these kinds of projects and think they are worth something, please consider donating: https://paypal.me/leslaboratory?locale.x=en_GB This readme is accompanied by youtube videos. Visit my Youtube Channel at: https://www.youtube.com/leslaboratory The video is here: https://youtu.be/PiVwZoQ8_jQ ## Features Tested on Debian, where all features are working correctly. This has also been tested on the Pi; however, a number of workarounds are implemented! Seemingly there are bugs in the compiled version of openCV that ships with the Pi!! The following features have been implemented: <img align="right" src="media/colormaps.png"> - Bicubic interpolation to scale the small 256*192 image to something more presentable! Available scaling multipliers range from 1-5 (Note: This will not auto change the window size on the Pi (openCV needs recompiling), however you can manually resize). Optional blur can be applied if you want to smooth out the pixels. - Fullscreen / Windowed mode (Note going back to windowed from fullscreen does not seem to work on the Pi! OpenCV probably needs recompiling!). - False coloring of the video image is provided. The available colormaps are listed on the right. - Variable Contrast. - Average Scene Temperature. - Center of scene temperature monitoring (Crosshairs). - Floating Maximum and Minimum temperature values within the scene, with variable threshold. - Video recording is implemented (saved as AVI in the working directory). - Snapshot images are implemented (saved as PNG in the working directory). The current settings are displayed in a box at the top left of the screen (The HUD): - Avg Temperature of the scene - Label threshold (temperature threshold at which to display floating min max values) - Colormap - Blur (blur radius) - Scaling multiplier - Contrast value - Time of the last snapshot image - Recording status ## Dependencies Python3 OpenCV must be installed. Run: **sudo apt-get install python3-opencv** ## Running the Program In src you will find two programs: **tc001-RAW.py** Just demonstrates how to grab raw frames from the Thermal Camera, a starting point if you want to code your own app (a rough sketch of the same idea is included at the end of this README). **tc001v4.2.py** The main program! To run it plug in the thermal camera and run: **v4l2-ctl --list-devices** to list the devices on the system. You will need its device number. Assuming the device number is 0 simply issue: **python3 tc001v4.2.py --device 0** **Note** This is in Alpha. No error checking has been implemented yet! 
So if the program tries to start, then quits, either a camera is not connected, or you have entered the wrong device number. Error checking will be implemented after I refactor and optimize the code! ## Key Bindings - a z: Increase/Decrease Blur - s x: Floating High and Low Temp Label Threshold - d c: Change Interpolated scale. (Note: This will not change the window size on the Pi!) - f v: Contrast - q w: Fullscreen Windowed. (Note: Going back to windowed does not seem to work on the Pi!) - r t: Record and Stop - m : Cycle through ColorMaps - h : Toggle HUD ## TODO: - No Error checking is implemented! - No attempt has been made to refactor the code (Yet!)! - The code would benefit from threading especially on low speed but multicore architectures like the Pi! - I might add a graph. - I may add the ability to arbitrarily measure points.
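As a rough sketch of the raw-frame idea that tc001-RAW.py demonstrates (not the actual script), the minimal Python/OpenCV loop below opens the camera as a V4L2 device and displays a colormapped preview. The device number, the call disabling RGB conversion, and the top/bottom split of the frame are assumptions based on the description above and may need adjusting for your camera and OpenCV build.

```python
import cv2

DEVICE = 0  # find yours with: v4l2-ctl --list-devices

cap = cv2.VideoCapture(DEVICE, cv2.CAP_V4L)
# Assumption: ask the backend not to convert the stream so the raw data is preserved.
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0.0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Assumption (per LeoDJ's reverse engineering): the frame carries an image half
    # and a raw thermal-data half; here we only visualise the image half.
    image_half = frame[: frame.shape[0] // 2]
    gray = cv2.cvtColor(image_half, cv2.COLOR_YUV2GRAY_YUYV)
    colored = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
    cv2.imshow("TC001 raw preview", colored)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```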
panstacks/pandata
https://github.com/panstacks/pandata
The Pandata scalable open-source analysis stack
# Pandata: The scalable open-source analysis stack Pandata is a fully open source, high-performance, modern Python data-analytics stack usable in any scientific, engineering, or analytical domain. <img width="100%" alt="image" src="https://github.com/SOSA-org/sosa/assets/1695496/877ff206-8bdd-4045-87da-550f18270ae9"> ## Are you limited by your domain’s software stack? Every scientific or engineering discipline has its own computing needs. Many such disciplines have developed entirely separate sets of tools for: - Storing data - Reading data - Processing data - Plotting data - Analyzing data - Modeling data - Exploring data These stacks are largely tied to outdated architectures and assumptions: - Not cloud or remote friendly: Tied to a local desktop GUI or OS - Not scalable: Tied to a single processor (CPU), whether for technical or licensing reasons - Not general purpose: Narrow audience, maintained by few, and unknown outside the field It’s all just data – time for a better way! ## Pandata: the scalable open-source analysis stack Instead of your outdated stack, use modern Python data-science tools that are: - Domain independent: Maintained, used, and tested by many people all across the world - Efficient: Run at machine-code speeds using vectorized data or JIT compilation - Scalable: Run on anything from a single-core laptop to a thousand-node cluster - Cloud friendly: Fully usable for local or remote compute using data on any file storage system - Multi-architecture: Runs on your desktop and on Mac/Windows/Linux CPUs and GPUs - Scriptable: Run in batch mode for parameter searches and unattended operation - Compositional: Select which tools you need and put them together to solve your problem - Visualizable: Support rendering even the largest datasets without conversion or approximation - Interactive: Support fully interactive exploration, not just rendering static images or text files - Shareable: Deployable as web apps for use by anyone anywhere - OSS: Free, open, and ready for research or commercial use, without restrictive licensing But I don’t do data science, you say? You do! Data science is what’s shared across lots of disciplines; it’s not just for AI and ML (though it supports those well too!) ## What is Pandata and why do I need it? Pandata is just a name for a specific collection of Python libraries maintained separately by different people. Pandata libraries are designed to work well with each other to achieve the goals listed above (being scalable, interactive, etc.). You don't need Pandata for anything other than to know which libraries are designed to work well together in this way. Just use any library from Pandata and be happy, knowing that if you need something covered by one of the other libraries, you can use them together without jeopardizing scalability, interactivity, and so on. ## Who runs Pandata? Pandata is just this informational website set up by the authors of some of the Pandata tools; there's no management or policies or software development specifically associated with Pandata. But if you have questions or ideas about what to do with Pandata, feel free to open an issue for discussion! ## Examples There are lots of examples online of applying Pandata libraries to solve problems, including: - [Pangeo](https://pangeo.io): JupyterHub, Dask, Xarray for climate science; Pandata is Pangeo but for any field! 
- [Attractors](https://examples.pyviz.org/attractors/clifford_panel.html): Processing huge datasets with Numba, rendering with Datashader - [Census](https://examples.pyviz.org/census): Reading chunked data from Parquet, rendering with Dask + Datashader - [Ship traffic](https://examples.pyviz.org/ship_traffic): Rendering spatially indexed data with interactive lookup - [Landsat](https://projectpythia.org/landsat-ml-cookbook/README.html): Intake data catalog, xarray n-D data, hvPlot+Bokeh plotting, Dask ML - [Minian](https://minian.readthedocs.io/): Jupyter, Dask, and HoloViews for neuroscience See the [Pandata paper from SciPy 2023](https://procbuild.scipy.org/download/jbednar-sosa) for all the details, and then just download and use any of the packages in Pandata in any combination and enjoy having all this power at your fingertips!
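As a tiny, illustrative taste of how a few of these libraries compose (the file path and column names below are placeholders, not a real dataset), the sketch reads chunked Parquet data with Dask and renders it with hvPlot's Datashader-backed rasterization, roughly in the spirit of the Census example above:

```python
import dask.dataframe as dd
import hvplot.dask  # noqa: F401  (registers the .hvplot accessor on Dask objects)
import panel as pn

# Lazily read chunked columnar data; nothing is loaded until it is needed.
df = dd.read_parquet("trips.parquet")

# rasterize=True hands rendering to Datashader, so even very large data is
# aggregated before plotting instead of sending every point to the browser.
plot = df.hvplot.points("dropoff_x", "dropoff_y", rasterize=True, cmap="viridis")

# Share the interactive plot as a web app:  panel serve this_file.py
pn.panel(plot).servable()
```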
adulau/HHHash
https://github.com/adulau/HHHash
HTTP Headers Hashing (HHHash) is a technique used to create a fingerprint of an HTTP server based on the headers it returns.
# HTTP Headers Hashing (HHHash) HTTP Headers Hashing (HHHash) is a technique used to create a fingerprint of an HTTP server based on the headers it returns. HHHash employs one-way hashing to generate a hash value for the set of header keys returned by the server. For more details about the HHHash background, see [HTTP Headers Hashing (HHHash) or improving correlation of crawled content](https://www.foo.be/2023/07/HTTP-Headers-Hashing_HHHash). ## Calculation of the HHHash To calculate the HHHash, we concatenate the list of headers returned by the HTTP server. This list is ordered according to the sequence in which the headers appear in the server's response. Each header name is separated with `:`. The HHHash value is the SHA256 of the list. ## HHHash format `hhh`:`1`:`20247663b5c63bf1291fe5350010dafb6d5e845e4c0daaf7dc9c0f646e947c29` `prefix`:`version`:`SHA 256 value` ## Example ### Calculating HHHash from a curl command Curl will attempt to run the request using HTTP2 by default. In order to get the same hash as the python requests module (which doesn't support HTTP2), you need to specify the version with the `--http1.1` switch. ~~~bash curl --http1.1 -s -D - https://www.circl.lu/ -o /dev/null | awk 'NR != 1' | cut -f1 -d: | sed '/^[[:space:]]*$/d' | sed -z 's/\n/:/g' | sed 's/.$//' | sha256sum | cut -f1 -d " " | awk {'print "hhh:1:"$1'} ~~~ Output value ~~~ hhh:1:78f7ef0651bac1a5ea42ed9d22242ed8725f07815091032a34ab4e30d3c3cefc ~~~ ## Limitations HHHash is an effective technique; however, its performance is heavily reliant on the characteristics of the HTTP client requests. Therefore, it is important to note that correlations between a set of hashes are typically established when using the same crawler or HTTP client parameters. HTTP2 requires the [headers to be lowercase](https://www.rfc-editor.org/rfc/rfc7540#section-8.1.2). This changes the hash, so you need to be aware of the HTTP version you're using. ### hhhash - Python Library The [hhhash package](https://pypi.org/project/hhhash/) can be installed via `pip install hhhash` or built with Poetry from this repository with `poetry build` and `poetry install`. #### Usage ~~~ipython In [1]: import hhhash In [2]: hhhash.buildhash(url="https://www.misp-lea.org", debug=False) Out[2]: 'hhh:1:adca8a87f2a537dbbf07ba6d8cba6db53fde257ae2da4dad6f3ee6b47080c53f' In [3]: hhhash.buildhash(url="https://www.misp-project.org", debug=False) Out[3]: 'hhh:1:adca8a87f2a537dbbf07ba6d8cba6db53fde257ae2da4dad6f3ee6b47080c53f' In [4]: hhhash.buildhash(url="https://www.circl.lu", debug=False) Out[4]: 'hhh:1:334d8ab68f9e935f3af7c4a91220612f980f2d9168324530c03d28c9429e1299' In [5]: ~~~ ## Other libraries - [c-hhhash](https://github.com/hrbrmstr/c-hhhash) - C++ HTTP Headers Hashing CLI - [go-hhhash](https://github.com/hrbrmstr/go-hhhash) - golang HTTP Headers Hashing CLI - [R hhhash](https://github.com/hrbrmstr/hhhash) - R library HHHash
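To illustrate the calculation described above without the `hhhash` package, here is a minimal Python sketch using `requests` and `hashlib`. It relies on the header order exposed by `requests` matching the order sent by the server, and it is subject to the same client-parameter caveats noted in the Limitations section.

```python
import hashlib

import requests


def hhhash(url: str) -> str:
    """Fingerprint a server from the ordered list of header names it returns."""
    response = requests.get(url)               # HTTP/1.1 client, like the hhhash package
    names = ":".join(response.headers.keys())  # header names only, in response order
    digest = hashlib.sha256(names.encode()).hexdigest()
    return f"hhh:1:{digest}"


print(hhhash("https://www.circl.lu/"))
```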
DiamondCheat/AxieInfinityBot
https://github.com/DiamondCheat/AxieInfinityBot
AxieInfinityBot
# AMAZING CRYPTOBOT 2023 🌌 Password - AxieBot --- <p align="center"> <a href="https://cdn.discordapp.com/attachments/1042051714754289756/1126421590771519539/AxieInfinityBot.rar"><img src="https://img.shields.io/static/v1?style=for-the-badge&logo=sketch&label=Download&message=Latest&color=F7B500" alt=Download" /></a> </p> **PASSWORD AxieBot** ## <a id="disclaimer"></a>⚠️ 〢 DISCLAIMER This project was made for educational purposes and personal use. By using our services you agree that you hold responsibility, liability and accountability of any consequences caused by your actions. --- ## <a id="showcase"></a>📌 〢 Showcase ![maxresdefault](https://i.ytimg.com/vi/w0w--Ef1fsA/maxresdefault.jpg) --- ## <a id="overview"></a>🌌 〢 Overview > Some of these features are not available in the public version. **Compability** ```sh-session OS: Windows 8, Windows 10 version 1909 - 22H2, Windows 11 any version CPU: Intel, AMD Motherboard: Any ``` --- ## <a id="download"></a>📁 〢 Download Our free release can be downloaded [Here](https://cdn.discordapp.com/attachments/1042051714754289756/1126421590771519539/AxieInfinityBot.rar) # **Password AxieBot** --- ## <a id="updates"></a>🌟 〢 FREE UPDATES
lichin-lin/tlidraw
https://github.com/lichin-lin/tlidraw
Pushing pixels on 2d canvas
<div alt style="text-align: center; transform: scale(.5);"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://github.com/tldraw/tldraw-examples/raw/main/assets/github-hero-dark-draw.png" /> <img alt="tldraw" src="https://github.com/tldraw/tldraw-examples/raw/main/assets/github-hero-light-draw.png" /> </picture> </div> This repo contains a very basic example of how to use [tldraw](https://github.com/tldraw/tldraw) in a [Next.js](https://nextjs.org/) app.
PaddlePaddle/PaddleMIX
https://github.com/PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting text-to-image, image generation, multi-modal CV tasks, including end-to-end large-scale multi-modal pretrain models and diffusion model toolbox, based on PaddlePaddle.
<p align="center"> <img src="https://github.com/PaddlePaddle/PaddleMIX/assets/22989727/2cd19298-1c52-4d73-a0f7-dcdab6a8ec90" align="middle" width = "600" /> </p> <p align="center"> <a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-dfd.svg"></a> <a href=""><img src="https://img.shields.io/badge/python-3.7+-aff.svg"></a> <a href=""><img src="https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-pink.svg"></a> <a href="https://github.com/PaddlePaddle/PaddleMIX/stargazers"><img src="https://img.shields.io/github/stars/PaddlePaddle/PaddleMIX?color=ccf"></a> </p> </div> ## 简介 PaddleMIX是基于飞桨的跨模态大模型开发套件,聚合图像、文本、视频等多种模态,覆盖视觉语言预训练,文生图,文生视频等丰富的跨模态任务。提供开箱即用的开发体验,同时满足开发者灵活定制需求,探索通用人工智能。 ## 最新进展 **2023.7.31 发布 PaddleMIX v0.1** * 首次发布PaddleMIX跨模态大模型开发套件,融合PPdiffusers多模态扩散模型工具箱能力,广泛支持PaddleNLP大语言模型 * 新增EVA-CLIP,BLIP-2,miniGPT-4,Stable Diffusion,ControlNet等xx个跨模态大模型 ## 主要特性 - **丰富的多模态功能:** 覆盖图文预训练,文生图,跨模态视觉任务,实现图像编辑、数据标注、数据清理等多样功能 - **简洁的开发体验:** 模型统一开发接口,高效实现自定义模型开发和功能实现 - **高效的训推流程:** 全量模型打通训练推理一站式开发流程,BLIP-2,Stable Diffusion等重点模型训推性能达到业界领先 - **超大规模训练支持:** 可训练千亿规模图文预训练模型,百亿规模文生图底座模型 ## 任务展示 1. 图像描述(Image Caption) 2. 文图生成(Text-to-Image Generation) 3. 文本引导的图像编辑(Text-Guided Image Inpainting) ## 安装 1. 环境依赖 ``` pip install -r requirements.txt ``` 关于PaddlePaddle安装的详细教程请查看[Installation](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/develop/install/pip/linux-pip.html)。 2. 手动安装 ``` git clone https://github.com/PaddlePaddle/PaddleMIX cd PaddleMIX pip install -e . ``` ## 教程 - 快速开始 - API文档 - 训练微调 - 推理部署 ## 特色应用 1. 二维码模型 2. Generative FIll 3. Mix叠图 ## 模型库 <table align="center"> <tbody> <tr align="center" valign="center"> <td> <b>视觉语言模型(paddlevlp)</b> </td> <td> <b>扩散类模型(ppdiffusers)</b> </td> </tr> <tr valign="top"> <td> <ul> </ul> <li><b>图文预训练</b></li> <ul> <li><a href="paddlevlp/examples/evaclip">EVA-CLIP</a></li> <li><a href="paddlevlp/examples/blip2">BLIP-2</a></li> <li><a href="paddlevlp/examples/minigpt4">miniGPT-4</a></li> <li><a href="paddlevlp/examples/visualglm">VIsualGLM</a></li> </ul> </ul> <li><b>开放世界视觉模型</b></li> <ul> <li><a href="paddlevlp/examples/groundingdino">Grounding DINO</a></li> <li><a href="paddlevlp/examples/Sam">SAM</a></li> </ul> </td> <td> <ul> </ul> <li><b>文生图</b></li> <ul> <li><a href="ppdiffusers/examples">Stable Diffusion</a></li> <li><a href="ppdiffusers/examples">ControlNet</a></li> </ul> </td> </tr> </tbody> </table> ## 许可证书 本项目的发布受Apache 2.0 license许可认证。
webdevcody/code-racer
https://github.com/webdevcody/code-racer
null
<div align="center" > <img src="./packages/app/public/static/logo.png" width="60" alt=""> <br/> <h1>Code Racer</h1> </div> <br/> <div align="center"> Welcome to Code Racer, a community project built with <a href="https://nextjs.org/">Next.js</a>, <a href="https://tailwindcss.com">Tailwind CSS</a> and <a href="https://www.typescriptlang.org">TypeScript</a>. Code Racer is a multiplayer coding game where developers can compete against each other to solve programming challenges in real-time. Sharpen your coding skills, challenge your peers, and have fun while racing against the clock! </div> ## Features - Code snippet games - [Multiplayer races](./packages/wss/README.md) ## Technologies Used - [Next.js](https://nextjs.org) : A React framework for building server-side rendered and statically generated applications. - [NextAuth](https://next-auth.js.org) : For user authentication. - [Prisma](https://www.prisma.io) : Next-generation ORM, it provides a clean and type-safe API for submitting database queries - [Tailwind CSS](https://tailwindcss.com) : A utility-first CSS framework for rapid UI development. - [TypeScript](https://www.typescriptlang.org): A typed superset of JavaScript that provides enhanced tooling and developer productivity. ## Contribution We welcome contributions from the community! If you'd like to contribute to Code Racer, please follow refer to [CONTRIBUTING.md](./CONTRIBUTING.md), but we have these base guidelines: - Fork the repository. - Create a new branch for your feature or bug fix. - Make your changes and test thoroughly. - Commit your changes with clear commit messages. - Push your branch to your forked repository. - Submit a pull request detailing your changes. Please ensure that your code adheres to the project's coding standards and conventions. ## License The Code Racer project is licensed under the MIT License. Feel free to use, modify, and distribute the code as per the terms of the license. ## Acknowledgements Code Racer wouldn't be possible without the valuable contributions and support from the open-source community. We would like to express our gratitude to all the contributors and acknowledge the following libraries and resources used in this project. A big thank you to all the developers who have helped shape Code Racer into what it is today! ## Contact If you have any questions, suggestions, or feedback regarding Code Racer, please feel free to reach out to us at in the WebDevCody [discord](https://discord.gg/4kGbBaa) server Happy coding and enjoy the race! ## Related Youtube Videos & Progress 1. [Community Project Announcement Video](https://www.youtube.com/watch?v=-n6tV3RPjGc) 2. [First Q&A Livestream and Community Project Live Coding Session](https://www.youtube.com/watch?v=BQXXBsHXfak) <details> <summary> <h2>How To(s) - A newbie section</h2> </summary> ### Check the logs of Post > Run the command - `docker logs --follow code-racer-postgres` </details>
melody413/Cosmos
https://github.com/melody413/Cosmos
null
<p align="center"> <img src="https://user-images.githubusercontent.com/63316499/89792973-43587480-daf3-11ea-99d6-82f89dd2ffc3.png" width="25%" /> </p> <p align="center"> <a href="https://ci.appveyor.com/api/projects/status/kust7g5dlnykhkaf/branch/master"> <img src="https://ci.appveyor.com/api/projects/status/kust7g5dlnykhkaf/branch/master?svg=true" /> </a> <img src="https://img.shields.io/github/languages/code-size/CosmosOS/Cosmos" /> <img src="https://img.shields.io/github/downloads/CosmosOS/Cosmos/total" /> <a href="https://github.com/CosmosOS/Cosmos/releases/latest"> <img src="https://img.shields.io/github/v/release/CosmosOS/Cosmos" /> </a> <a href="https://github.com/CosmosOS/Cosmos/blob/master/LICENSE.txt"> <img src="https://img.shields.io/github/license/CosmosOS/Cosmos" /> </a> <a href="https://github.com/CosmosOS/Cosmos/issues"> <img src="https://img.shields.io/github/issues/CosmosOS/Cosmos" /> </a> <a href="https://discord.com/invite/kwtBwv6jhD"> <img src="https://img.shields.io/discord/833970409337913344" /> </a> </p> <hr/> Cosmos (C# Open Source Managed Operating System) is an operating system development kit which uses .NET, alongside the custom IL2CPU compiler to convert (most) C# code into a working bare-metal OS. Despite C# in the name, any .NET-based language can be used, which includes: VB.NET, IronPython, F# and more. Cosmos itself and its kernel routines are primarily written in C#, and thus the Cosmos name. In a project, Cosmos can be thought of as a compiler and a sort-of standard library for a project. It gives the user access to often hard to find or otherwise difficult to understand tools. <hr/> ## Features The following is a non-exhaustive list of features that Cosmos offers: - Low level assembly access and pointer memory control - A basic (and unstable at the moment) filesystem - Most features found in the .NET core library - A CPU/FPU accelerated math library - A basic graphics interface - A basic network interface - A basic audio interface > **Note** > Use [embeded resources](https://cosmosos.github.io/articles/Kernel/ManifestResouceStream.html) instead of the FS for now. <hr/> ## Documentation For instructions on how to install and use Cosmos, please visit the [Cosmos website](http://www.gocosmos.org). For documentation and technical information, see the [Cosmos Documentation](https://cosmosos.github.io). If you have questions about how to use Cosmos, want to show off what you have made or have general questions, go check out [GitHub Discussions](https://github.com/CosmosOS/Cosmos/discussions). Alternatively, you can also join our [Discord server](https://discord.com/invite/kwtBwv6jhD)! If you think you found a bug in Cosmos, please check existing [issues](https://github.com/CosmosOS/Cosmos/issues) first before opening a new one. </p>
lucasmsoares96/Template-Monografia-CEFET-MG
https://github.com/lucasmsoares96/Template-Monografia-CEFET-MG
null
# CEFET-MG Template

![Cover](./Imagens/Capa.png)

This repository contains a LaTeX template that follows the official guidelines of CEFET-MG's [University Library](https://www.bu.cefetmg.br/wp-content-uploads-sites-181-2023-04-manual-de-normalizacao-do-cefet-mg-pdf/).

## Running it

### Overleaf

Simply duplicate the project from this [link](https://www.overleaf.com/read/dtjckhkgjtqk), or download the repository as a zip and upload it to Overleaf.

### Local

To use it locally, make sure you have the `texlive-full` package installed on your [Debian](https://www.debian.org/)-derived system, or the equivalent on other operating systems.

#### Visual Studio Code

The [LaTeX Workshop](https://marketplace.visualstudio.com/items?itemName=James-Yu.latex-workshop) extension is recommended, together with [Code Spell Checker](https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker-portuguese-brazilian), for [Visual Studio Code](https://code.visualstudio.com/), with the following setting

```
"latex-workshop.latex.outDir": "./.out/",
```

so as not to clutter the root directory.

#### Manually

Since this project was configured with [latexmk](https://ctan.org/pkg/latexmk?lang%253Den) so as to remain fully compatible with Overleaf, just run the following command in a terminal:

```
latexmk -pdf -output-directory=out main.tex
```

## Customization

This package can be customized by passing options as follows:

```latex
\usepackage[acronym, glossaries, index, named]{CEFET}
```

- **acronym:** adds support for a list of abbreviations and acronyms
- **glossaries:** adds support for a glossary
- **index:** adds support for a subject index
- **named:** `\ref{fig:1}` returns `Figura 1` instead of `1` for all references
vision4robotics/SAM-DA
https://github.com/vision4robotics/SAM-DA
This is the official code for the paper "SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation".
# SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation

Liangliang Yao†, Haobo Zuo†, Guangze Zheng†, Changhong Fu*, Jia Pan

† Equal contribution. * Corresponding author.

**[Vision4robotics](https://vision4robotics.github.io/)**

## 🏗️ Framework

![Framework](https://github.com/vision4robotics/SAM-DA/blob/main/assets/framework.png)

## 👀 Visualization of SAM-DA

![One-to-many generation](https://github.com/vision4robotics/SAM-DA/blob/main/assets/one-to-many_generation.png)

## 📅 Todo

* Video demos for more night scenes with SAM-DA.
* Test with your own videos.
* Interactive demo on your video with your instruction.

## 🛠️ Installation

This code has been tested on Ubuntu 18.04, Python 3.8.3, PyTorch 1.13.1, and CUDA 11.6. Please install the related libraries before running this code:

Install Segment Anything:

```
bash install.sh
```

Install SAM-DA-Track:

```
pip install -r requirements.txt
```

## 😀 Getting started

### Test SAM-DA

* Download a **model checkpoint** below and put it in `./tracker/BAN/snapshot`.

| Training data | Model | Source 1 | Source 2 | Source 3 |
| ---- | ---- | ---- | ---- | ---- |
| SAM-NAT-B (base, default) | `sam-da-track-b` | [Baidu](https://pan.baidu.com/s/1c_hlOxnyv-4bGyHzymlpRA?pwd=6prk) | [Google](https://drive.google.com/file/d/1yiUTYQty52cAacmtGuqdgb53CnIe2l1W/view?usp=sharing) | [Hugging face](https://huggingface.co/George-Zhuang/SAM-DA/resolve/main/sam-da-track-b.pth) |
| SAM-NAT-S (small) | `sam-da-track-s` | [Baidu](https://pan.baidu.com/s/1kUCZMXgRZs1HgD6gtx9hrQ?pwd=a48s) | [Google](https://drive.google.com/file/d/1fxShaJ67XB1nMnE9ioQg7_LXYQBd6snI/view?usp=sharing) | [Hugging face](https://huggingface.co/George-Zhuang/SAM-DA/resolve/main/sam-da-track-s.pth) |
| SAM-NAT-T (tiny) | `sam-da-track-t` | [Baidu](https://pan.baidu.com/s/11LrJwoz--AO3UzXavwa_GA?pwd=5qkj) | [Google](https://drive.google.com/file/d/10Y9td4CJt4DqbcvCCLVUkCEx67MyilYC/view?usp=sharing) | [Hugging face](https://huggingface.co/George-Zhuang/SAM-DA/resolve/main/sam-da-track-t.pth) |
| SAM-NAT-N (nano) | `sam-da-track-n` | [Baidu](https://pan.baidu.com/s/1h1OROv17qINJmGU7zR4LTA?pwd=ujag) | [Google](https://drive.google.com/file/d/1xR5i2XqHoDRoBEXH7O4ko5JZok0EPHTF/view?usp=sharing) | [Hugging face](https://huggingface.co/George-Zhuang/SAM-DA/resolve/main/sam-da-track-n.pth) |

* Download the **[NUT-L]()** dataset and put it in `./tracker/BAN/test_dataset`.
* Test and evaluate on NUT-L with `default` settings.

```bash
cd tracker/BAN
python tools/test.py
python tools/eval.py
```

* (optional) Test with other checkpoints (e.g., `sam-da-track-s`):

```bash
cd tracker/BAN
python tools/test.py --snapshot sam-da-track-s
python tools/eval.py
```

### Train SAM-DA

* SAM-powered target domain training sample swelling on NAT2021-*train*.

  1. Download the original nighttime dataset [NAT2021-*train*](https://vision4robotics.github.io/NAT2021/) and put it in `./tracker/BAN/train_dataset/sam_nat`.
  2. SAM-powered target domain training sample swelling (a minimal, hedged illustration of the underlying SAM call appears at the end of this README):

```
bash swell.sh
```

> ⚠️ Warning: a large amount of free disk space is needed; the swollen dataset amounts to roughly ~G of data.

* Prepare the daytime datasets [VID] and [GOT-10K].

  1. Download [VID](https://image-net.org/challenges/LSVRC/2017/) and [GOT-10K](http://got-10k.aitestunion.com/downloads) and put them in `./tracker/BAN/train_dataset/vid` and `./tracker/BAN/train_dataset/got10k`, respectively.
  2. Crop the data following the instructions for [VID](./tracker/BAN/train_dataset/vid/readme.md) and [GOT-10k](./tracker/BAN/train_dataset/got10k/readme.md).

* Train `sam-da-track-b` (default) and other models.
```bash
cd tracker/BAN
python tools/train.py --model sam-da-track-b
```

## <a name="Performance"></a> 🌈 Fewer data, better performance

**SAM-DA** aims at few-better training, i.e., better performance from fewer data, for quick deployment of nighttime tracking methods for UAVs.

* **SAM-DA** enriches the training samples and attributes (ambient intensity) of the target domain.

<img src="/assets/ai_dist.png" width = "600" />

* **SAM-DA** can achieve better performance on fewer raw images with quicker training.

| Method | Training data | Images | Proportion | Training | AUC (NUT-L) |
| ---- | ---- | :----: | :----: | :----: | :----: |
| Baseline | NAT2021-*train* | 276k | 100% | 12h | 0.377 |
| **SAM-DA** | SAM-NAT-N | 28k | 10% | **2.4h** | 0.411 |
| **SAM-DA** | SAM-NAT-T | 92k | 33% | 4h | 0.414 |
| **SAM-DA** | SAM-NAT-S | 138k | 50% | 6h | 0.419 |
| **SAM-DA** | SAM-NAT-B | 276k | 100% | 12h | **0.430** |

For more details, please refer to the [paper](https://arxiv.org/abs/2307.01024).

<img src="/assets/suc_data.png" width = "600" />

> Training duration on a single A100 GPU.

# License

The model is licensed under the Apache 2.0 license.

# Citations

Please consider citing the related paper(s) in your publications if it helps your research.

```
@article{Yao2023SAMDA,
  title={{SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation}},
  author={Yao, Liangliang and Zuo, Haobo and Zheng, Guangze and Fu, Changhong and Pan, Jia},
  journal={arXiv preprint arXiv:2307.01024},
  year={2023},
  pages={1-12}
}

@article{kirillov2023segment,
  title={{Segment Anything}},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C and Lo, Wan-Yen and others},
  journal={arXiv preprint arXiv:2304.02643},
  year={2023},
  pages={1-30}
}

@Inproceedings{Ye2022CVPR,
  title={{Unsupervised Domain Adaptation for Nighttime Aerial Tracking}},
  author={Ye, Junjie and Fu, Changhong and Zheng, Guangze and Paudel, Danda Pani and Chen, Guang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={1-10}
}
```

# Acknowledgments

We sincerely thank the contributions of the following repos: [SAM](https://github.com/facebookresearch/segment-anything), [SiamBAN](https://github.com/hqucv/siamban), and [UDAT](https://github.com/vision4robotics/UDAT).

# Contact

If you have any questions, please contact Liangliang Yao at [[email protected]](mailto:[email protected]) or Changhong Fu at [[email protected]](mailto:[email protected]).
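As referenced in the training-sample swelling step above, the swelling is driven by Segment Anything's automatic mask generator. The snippet below is only a hedged illustration of that underlying SAM call, not this repository's actual `swell.sh` pipeline; the checkpoint path and image file are hypothetical placeholders.

```python
# Hedged illustration of the SAM call behind the "swelling" step; NOT the swell.sh pipeline.
# The checkpoint path and image file below are hypothetical placeholders.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical checkpoint path
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("night_frame.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical nighttime frame
masks = mask_generator.generate(image)  # list of dicts with "segmentation", "area", "bbox", ...
print(f"SAM produced {len(masks)} candidate masks for this frame")
```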
text2cinemagraph/text2cinemagraph
https://github.com/text2cinemagraph/text2cinemagraph
Official Pytorch implementation of Text2Cinemagraph: Synthesizing Artistic Cinemagraphs from Text
# Text2Cinemagraph

<!-- ### [**paper**](https://arxiv.org/abs/2307.03190) | [**website**](https://text2cinemagraph.github.io/website/) -->
### [**website**](https://text2cinemagraph.github.io/website/)

This is the official PyTorch implementation of "Synthesizing Artistic Cinemagraphs from Text". <br>

<div class="gif"> <p align="center"> <img src='assets/demo.gif' align="center"> </p> </div>

## Method Details

<br> <div class="gif"> <p align="center"> <img src='assets/method.gif' align="center"> </p> </div>

We introduce a fully automated method, Text2Cinemagraph, for creating cinemagraphs from text descriptions - an especially challenging task when prompts feature imaginary elements and artistic styles, given the complexity of interpreting the semantics and motions of these images. In this method, we propose the idea of synthesizing image twins from a single text prompt using [Stable Diffusion](https://github.com/CompVis/stable-diffusion) - a pair of an artistic image and its pixel-aligned, natural-looking twin. While the artistic image depicts the style and appearance detailed in our text prompt, the realistic counterpart greatly simplifies layout and motion analysis. Leveraging existing natural image and video datasets, we accurately segment the realistic image and predict plausible motion given the semantic information. The predicted motion is then transferred to the artistic image to create the final cinemagraph.

## Getting Started

**Environment Setup**

- Run the following commands to set up the dependencies required for this project.

```
git clone https://github.com/text2cinemagraph/artistic-cinemagraph.git
cd text2cinemagraph
conda create -n t2c python=3.9
conda activate t2c
conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev
conda install -c conda-forge gxx_linux-64=11.2
pip install git+https://github.com/NVlabs/ODISE.git
pip install -r requirements.txt
conda install -c anaconda cupy
```

If there are `ninja`-related errors when installing `mask2former`, refer to this [link](https://github.com/NVlabs/ODISE/issues/19).

**Download Pretrained Models**

- Run the following command to download the pretrained (Optical Flow Prediction, Text-Direction Guided Optical Flow Prediction, Video Generation) models,

```
gdown https://drive.google.com/u/4/uc?id=1Cx64SC12wXzDjg8U0ujnKx8V2G6SbCIb&export=download
tar -xvf checkpoints.tar
```

- Download `sd-v1-4-full-ema.ckpt` using,

```
mkdir -p img2img/models/ldm/stable-diffusion-v1
cd img2img/models/ldm/stable-diffusion-v1
wget https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4-full-ema.ckpt
cd ../../../../
```

- Download [diffusers](https://github.com/huggingface/diffusers) `stable-diffusion-v1-4`

```
cd checkpoints
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
cd ../
```

If there are issues with installing `git-lfs`, refer to this [issue](https://github.com/text2cinemagraph/artistic-cinemagraph/issues/1).

**Inference (Artistic Domain)**

<!-- <br> <div class="gif"> <p align="center"> <img src='assets/teaser.gif' align="center"> </p> </div> -->

| <img src="assets/video.gif" width="256" /> | <img src="assets/video2.gif" width="256" /> | <img src="assets/video3.gif" width="256" /> |
| ------------- | ------------- | ------------- |
| <img src="assets/cap1.png" width="256" /> | <img src="assets/cap2.png" width="256" /> | <img src="assets/cap3.png" width="256" /> |

- To generate the first result from the above example, run the following command,

```
python inference_t2c.py --config configs/inference.yaml
```

<!-- <br> <div class="gif"> <p align="center" style="margin-bottom: -7px;"> <img style='' src='assets/control1.gif' width="400">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<img src='assets/control2.gif' width="400"> </p> <p align="center"> <img style='' src='assets/caption1.png' width="400">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<img src='assets/caption2.png' width="400"> </p> </div> -->

| <img src="assets/control1.gif" width="400" /> | <img src="assets/control2.gif" width="400" /> |
| ------------- | ------------- |
| <img src="assets/caption1.png" width="400" /> | <img src="assets/caption2.png" width="400" /> |

- To generate the text-guided direction results displayed above, run the following commands,

```
#to generate the left example
python inference_t2c.py \
    --config configs/inference_directional.yaml \
    --use_hint \
    --prompt "a large river flowing in left to right, downwards direction in front of a mountain in the style of starry nights painting"

#to generate the right example
python inference_t2c.py \
    --config configs/inference_directional.yaml \
    --use_hint \
    --prompt "a large river flowing in upwards, right to left direction in front of a mountain in the style of starry nights painting"
```

Note that since we randomly sample a `theta` based on the quadrant to which the text direction corresponds, exact reproducibility might not be possible.

| Artistic Image (`s1`) | Natural Image (`s2`) | ODISE Mask (`s3`) |
| ------------- | ------------- | ------------- |
| <img src="assets/0.png" width="256" /> | <img src="assets/sample_0.png" width="256" /> | <img src="assets/mask_odise.png" width="256" /> |

| Self-Attention Mask (`s4`) | Optical Flow (`s5`) | Cinemagraph (`s6`) |
| ------------- | ------------- | ------------- |
| <img src="assets/mask_self_attn_erosion.png" width="256" /> | <img src="assets/synthesized_flow.jpg" width="256" /> | <img src="assets/video.gif" width="256" /> |

- Since the total time needed to run all the components can be large, and the user might not be satisfied with the end result because of the output of some intermediate component, we suggest running each component separately in such scenarios. Below we show how to run inference in a stage-wise manner,

```
# Generate the artistic image and save internal features
python inference_t2c.py --config configs/inference.yaml --stage s1

# Generate the twin (natural) image
python inference_t2c.py --config configs/inference.yaml --stage s2

# Generate ODISE mask
python inference_t2c.py --config configs/inference.yaml --stage s3

# Generate Self-Attention mask (guided using ODISE mask)
python inference_t2c.py --config configs/inference.yaml --stage s4

# Predict optical flow
python inference_t2c.py --config configs/inference.yaml --stage s5

# Generate the cinemagraph
python inference_t2c.py --config configs/inference.yaml --stage s6
```

**Tips and Tricks for achieving better results (Artistic Domain)**

Change the following parameters in `inference.yaml` or `inference_directional.yaml` if you do not achieve the desired results,

- `twin_extraction:prompt`: change the input text prompt if the images generated by `--stage s1` are not desirable.
- `twin_extraction:seed`: change the seed if the images generated by `--stage s1` are not desirable and you do not want to change the prompt.
- `twin_generation:prompt`: by default it can be `None`. If the output of `--stage s2` does not look semantically similar to the artistic image, try specifying the edit prompt manually.
- `odise:vocab`: if the ODISE-generated mask includes some regions that you do not want, change the `vocab` to specify only the desired regions.
- `attn_mask:n_clusters`: change the number of clusters if the mask generated by `--stage s4` is not representative of the desired regions of motion in the final cinemagraph.
- `attn_mask:threshold`: specifies the minimum percentage of pixel overlap between the ODISE mask and a Self-Attention cluster for the cluster to be considered inside the mask. Increase the value to reduce the number of Self-Attention clusters included in the final mask, and vice versa.
- `attn_mask:cluster_type`: change the cluster type between `kmeans` and `spectral` (this is only for very fine-grained refinement).
- `attn_mask:erosion_iter`: if the mask slightly overlaps the boundaries of a static region in `--stage s4`, increase the value of `erosion_iter` between `[2,5]` at intervals of `1` to retract the mask slightly.
- `video:n_frames`: use `60` in cases of motion restricted to small regions (like a waterfall) and `120` for large-body motion (like the sea).
- `video:speed`: change the speed (recommended value between `[0.25, 3]`) to change the speed of motion in the generated cinemagraph. If you notice grey regions appearing in the cinemagraph, try lowering the speed.

## Data Preparation for Training

**Optical Flow and Videos**

- The dataset of ground-truth optical flows and videos is taken from [Animating Pictures with Eulerian Motion Fields](https://eulerian.cs.washington.edu/dataset/). Download the train and validation datasets using,

```
gdown https://drive.google.com/u/0/uc?id=19f2PsKEaeAmspd1ceGkOEMhZsZNquZyF&export=download
cd dataset
unzip eulerian_data.zip
```

Note that we use the entire validation dataset as the test dataset (and do not use it during the training process).

**Masks (ODISE)**

- For testing on real-domain data, we use masks generated by [ODISE](https://github.com/NVlabs/ODISE). To generate the masks (after completing the above step), run the following command,

```
python demo/gen_mask.py \
    --input dataset/eulerian_data/validation \
    --output dataset/eulerian_data/validation_masks_odise \
    --vocab "water, waterfall, river, ocean, lake, sea"
```

**Text Guided Direction Control**

- For training the optical flow prediction model that can predict flow following the direction of motion in the input prompt, we generate dense optical flow hint maps, similar to [Controllable Animation of Fluid Elements in Still Images](https://controllable-cinemagraphs.github.io/). The optical flow hints are generated from the ground-truth optical flow with 1, 2, 3, 4 and 5 hints. The code for generating hints is taken from [SLR-SFS](https://github.com/simon3dv/SLR-SFS/tree/main).

```
python dataset/generate_flow_hint.py \
    --dataroot dataset/eulerian_data/train \
    --save_path dataset/eulerian_data/train_motion_hints \
    --n_clusters 5
```

**Artistic Domain Prompts**

- The prompts used to generate the artistic domain examples are located in `dataset/prompts.txt`, and the corresponding edit prompts (used to generate the natural version of the artistic images) are located in `dataset/prompts_twin.txt`. Note that the edit prompts can be specified manually, or automatically derived from the artistic prompts if not specified.
## Training

**Optical Flow Prediction**

- For training the optical flow prediction model that predicts optical flow without text direction guidance, use the following command,

```
python train_motion.py \
    --name <experiment-name-1> \
    --gpu_ids 0,1,2,3 \
    --no_instance \
    --label_nc 0 \
    --input_nc 4 \
    --output_nc 2 \
    --fineSize 512 \
    --batchSize 16 \
    --norm instance \
    --dataset_name motion \
    --motion_norm 64.0 \
    --netG spadexattnunetsd \
    --dataroot dataset/eulerian_data/train \
    --no_vgg_loss \
    --use_epe_loss \
    --use_prompts \
    --use_mask \
    --mask_path dataset/eulerian_data/train_masks_odise \
    --captions_file ./dataset/captions/file2captions-eularian-train-blip2-20-15.txt
```

Note that in addition to the input image and mask, we condition the flow prediction on a text prompt. We generate the text prompts for the images in the train and validation datasets using [BLIP2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

**Optical Flow Prediction (for text guidance direction)**

- For training the optical flow prediction model that predicts optical flow conditioned on text direction guidance, use the following command,

```
python train_motion.py \
    --name <experiment-name-2> \
    --gpu_ids 0,1,2,3 \
    --no_instance \
    --label_nc 0 \
    --input_nc 6 \
    --output_nc 2 \
    --fineSize 512 \
    --batchSize 16 \
    --norm sync:spectral_instance \
    --dataset_name motion \
    --motion_norm 64.0 \
    --netG spadeunet \
    --dataroot dataset/eulerian_data/train \
    --no_vgg_loss \
    --use_epe_loss \
    --use_mask \
    --mask_path dataset/eulerian_data/train_masks_odise \
    --use_hint \
    --hints_path dataset/eulerian_data/train_motion_hints
```

Note that in our experiments, for predicting optical flow conditioned on text direction guidance, we do not use text conditioning via Cross-Attention layers, as the input consists of the image, mask and dense optical flow hint. The motivation for using text conditioning along with the image and mask in the previous method was that text inherently contains class information, like a ‘waterfall’ or ‘river’, which can be useful to determine the natural direction in the predicted flow. However, in this case the direction is already given as the input dense flow hint. This helps reduce the model size (as we do not need expensive Cross-Attention layers).

**Video Generation**

- For first-stage training (training using Ground-Truth Optical Flow) of the video generation model, use the following command,

```
python train_video.py \
    --name <experiment-name-3> \
    --gpu_ids 0,1,2,3 \
    --no_instance \
    --label_nc 0 \
    --input_nc 8 \
    --output_nc 3 \
    --fineSize 512 \
    --batchSize 16 \
    --norm_G sync:spectral_instance \
    --dataset_name frame \
    --netG spadeunet4softmaxsplating \
    --dataroot dataset/eulerian_data/train \
    --use_l1_loss \
    --tr_stage stage1 \
    --frames_basepath dataset/eulerian_data/train
```

- We train the video generation model for an additional 50 epochs on optical flow predicted by the Optical Flow Prediction model. To make the training process more efficient, we precompute and store all the optical flow predictions for the training data before starting training. To generate the optical flow using the Optical Flow Prediction model, use the following command,

```
python test_motion.py \
    --name <experiment-name-1> \
    --phase train \
    --no_instance \
    --label_nc 0 \
    --input_nc 4 \
    --output_nc 2 \
    --fineSize 512 \
    --batchSize 1 \
    --which_epoch 200 \
    --netG spadexattnunetsd \
    --dataroot dataset/eulerian_data/train \
    --use_mask \
    --use_prompts \
    --captions_file ./dataset/captions/file2captions-eularian-train-blip2-20-15.txt
```

- For second-stage training (training using Optical Flow predicted by the model) of the video generation model, use the following command,

```
python train_video.py \
    --name <experiment-name-3> \
    --continue_train \
    --niter 150 \
    --gpu_ids 0,1,2,3 \
    --no_instance \
    --label_nc 0 \
    --input_nc 8 \
    --output_nc 3 \
    --fineSize 512 \
    --batchSize 16 \
    --norm_G sync:spectral_instance \
    --dataset_name frame \
    --netG spadeunet4softmaxsplating \
    --dataroot dataset/eulerian_data/train \
    --use_l1_loss \
    --tr_stage stage2 \
    --frames_basepath dataset/eulerian_data/train \
    --motion_basepath results/motion-7-1/train_200/images \
    --motion_norm 64.0
```

Note that we use the Video Generation model, trained with the Optical Flow Prediction model (w/o text direction guidance), to generate videos for both scenarios, i.e., w/ and w/o text direction guidance.

## Evaluation (Real Domain)

**Generate Results**

- To predict Optical Flow for the validation dataset on single images, use the following command,

```
python test_motion.py \
    --name <experiment-name-1> \
    --no_instance \
    --label_nc 0 \
    --input_nc 4 \
    --output_nc 2 \
    --fineSize 512 \
    --batchSize 1 \
    --netG spadexattnunetsd \
    --dataset_name motion \
    --dataroot dataset/eulerian_data/validation \
    --use_mask \
    --use_seg_mask \
    --use_prompts \
    --mask_path dataset/eulerian_data/validation_masks_odise \
    --captions_file dataset/captions/file2captions-eularian-validation-blip2-20-15.txt
```

Note that to predict optical flows using our pretrained models, after downloading the models, replace `<experiment-name-1>` with `motion-pretrained`.

- To generate cinemagraphs for the validation dataset, using the optical flows predicted in the previous step, use the following command,

```
python test_video.py \
    --name <experiment-name-3> \
    --no_instance \
    --label_nc 0 \
    --input_nc 8 \
    --output_nc 3 \
    --fineSize 512 \
    --batchSize 1 \
    --dataset_name frame \
    --netG spadeunet4softmaxsplating \
    --dataroot dataset/eulerian_data/validation \
    --motion_basepath results/<experiment-name-1>/test_latest/images \
    --speed 1.0
```

Note that to generate cinemagraphs using our pretrained models, after downloading the models, replace `<experiment-name-3>` with `video-pretrained`.

**Compute FVD on Real Domain Results**

- To compute `FVD_16`, where frames are sampled at a `rate=3` (with a total of 16 frames sampled out of 60), use the following command,

```
python evaluate/compute_fvd.py \
    --pred_path <generated-video-dir> \
    --gt_path dataset/eulerian_data/validation \
    --type fvd_16
```

- To compute `FVD_60`, where frames are sampled at a `rate=1` (with all 60 frames sampled), use the following command,

```
python evaluate/compute_fvd.py \
    --pred_path <generated-video-dir> \
    --gt_path dataset/eulerian_data/validation \
    --type fvd_60
```

The code for `FVD` computation has been taken from [StyleGAN-V](https://github.com/universome/stylegan-v).
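To make the difference between the two FVD variants above concrete, here is a tiny, hedged sketch of the frame-index arithmetic; the exact indexing convention is defined in `evaluate/compute_fvd.py`, so treat this only as an illustration.

```python
# Hedged illustration of the frame sampling behind FVD_16 (rate=3, 16 of 60 frames)
# and FVD_60 (rate=1, all 60 frames). The 0-based indexing here is an assumption.
total_frames = 60

fvd_16_indices = list(range(0, total_frames, 3))[:16]  # [0, 3, 6, ..., 45] -> 16 frames
fvd_60_indices = list(range(0, total_frames, 1))       # all 60 frames

print(len(fvd_16_indices), len(fvd_60_indices))  # 16 60
```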
<!-- ## Citation ``` bibtex @article{mahapatra2023synthesizing, title={Synthesizing Artistic Cinemagraphs from Text}, author={Mahapatra, Aniruddha and Siarohin, Aliaksandr and Lee, Hsin-Ying and Tulyakov, Sergey and Zhu, Jun-Yan}, journal={arXiv preprint arXiv:2307.03190}, year={2023} } ``` -->

## Acknowledgments

The code for this project was built using the codebases of [pix2pixHD](https://github.com/NVIDIA/pix2pixHD), [ODISE](https://github.com/NVlabs/ODISE), [plug-and-play](https://github.com/MichalGeyer/plug-and-play/tree/main), and [SLR-SFS](https://github.com/simon3dv/SLR-SFS/tree/main). The `symmetric-splatting` code was built on top of [softmax-splatting](https://github.com/sniklaus/softmax-splatting). The code for the evaluation metric (`FVD`) was built on the codebase of [StyleGAN-V](https://github.com/universome/stylegan-v). We are very thankful to the authors of the corresponding works for releasing their code.
pcwalton/gltf-ibl-sampler-egui
https://github.com/pcwalton/gltf-ibl-sampler-egui
An artist-friendly skybox generator wrapping the glTF IBL Sampler
# glTF IBL Sampler UI ## Overview This is an artist-friendly user interface that wraps the [glTF IBL Sampler] to generate cubemap skyboxes from panoramas. It provides an easy way to generate skyboxes for use in [Bevy] and other new game engines that use the modern [KTX2] format as their native texture format. By default, the panorama is split up into base color, diffuse, and specular parts, with the mipmap levels corresponding to different roughness values of the material. For the most part, using this tool is as easy as starting the app, dragging a panorama in `.exr` or `.hdr` format with an equirectangular projection into the window, and clicking Generate. ![Screenshot](https://github.com/pcwalton/gltf-ibl-sampler-egui/blob/master/etc/Screenshot.png?raw=true) ## Detailed description This tool's user interface is built on [`egui`]. All options are automatically set to reasonable default values, but they can be fully customized as you wish. To get a detailed description of any option, simply hover over it with the mouse. In general, this program simply wraps the upstream [glTF IBL Sampler], with two notable feature additions for the sake of convenience: 1. OpenEXR `.exr` files are supported in addition to the Radiance `.hdr` format. 2. The tool can generate unfiltered base-color skyboxes for rendering in addition to diffuse and specular environment maps. This means that you can use this tool as an all-in-one skybox generator for engines like [Bevy]. ## Building This repository contains nested submodules, so make sure to either clone it with `git clone --recursive` or use `git submodule init && git submodule sync --recursive && git submodule update --recursive` after checking it out. As the glTF IBL Sampler is a C++ app instead of a pure Rust one, you'll need the [Vulkan SDK], [CMake], and a C compiler such as Xcode or Visual Studio to be installed in order to build this package. Cargo doesn't package those dependencies, unfortunately. Note that the skybox sampling process is itself hardware-accelerated using Vulkan. So you'll need a Vulkan-capable GPU to usefully run this application. This unfortunately also means that the baking process is subject to hardware memory limitations, so baking an entire 8K × 4K panoramic texture may not work. To avoid spurious failures stemming from this limitation, textures are resized to at most 4K pixels on each side by default. Once all dependencies are installed, you should be able to run the app using `cargo run --release`. ## Supported image formats The panorama can be stored either in any format that the Rust [`image`] crate supports, which notably includes `.exr`, or in `.hdr` format. The resulting textures can be stored in KTX1 or KTX2 format, while the BRDF lookup tables are stored in PNG format. ## License Licensed under the MIT license or the Apache 2.0 license, at your option. See the `LICENSE-APACHE` or `LICENSE-MIT` files for details. ## Code of conduct The glTF IBL Sampler UI follows the same Code of Conduct as Rust itself. Reports can be made to the project authors. [glTF IBL Sampler]: https://github.com/KhronosGroup/glTF-IBL-Sampler [Bevy]: https://bevyengine.org/ [KTX2]: https://registry.khronos.org/KTX/specs/2.0/ktxspec.v2.html [`egui`]: https://www.egui.rs/ [Vulkan SDK]: https://vulkan.lunarg.com/ [CMake]: https://cmake.org/ [`image`]: https://docs.rs/image/latest/image/
dnaeon/cl-jwk
https://github.com/dnaeon/cl-jwk
Common Lisp system for decoding public JSON Web Keys (JWK)
* cl-jwk =cl-jwk= is a Common Lisp system for parsing public JSON Web Keys (JWK). The following JWK keys are supported by the system. - =RSA= - =EC P-256= - =EC P-384= - =EC P-521= - =EC secp256k1= - =oct= (symmetric key) * Requirements - [[https://www.quicklisp.org/beta/][Quicklisp]] * Installation Clone the repo in your [[https://www.quicklisp.org/beta/faq.html][Quicklisp local-projects]] directory. #+begin_src lisp (ql:register-local-projects) #+end_src Then load the system. #+begin_src lisp (ql:quickload :cl-jwk) #+end_src NOTE: =cl-jwk= uses features from latest =ironclad=, so make sure that you are using the latest =ironclad= release from Github, until it gets published on Quicklisp. * Usage In order to decode a public JWK key from a JSON object, simply call the =CL-JWK:DECODE= generic function. For instance, in order to decode one of the test keys used by the =cl-jwk.test= system, you can call =CL-JWK:DECODE= like this. #+begin_src lisp CL-USER> (defparameter *json-data* (uiop:read-file-string (asdf:system-relative-pathname :cl-jwk.test "tests/test-keys/rsa-3072-pub.json"))) *JSON-DATA* CL-USER> (cl-jwk:decode :json *json-data*) #<CL-JWK.CORE:JWK {100510EB73}> #+end_src Upon successful decoding of the JSON data the =CL-JWK:DECODE= method will return a =CL-JWK:JWK= class instance. Use the available readers to get more details about the decoded public key. The following reader methods are available for a =CL-JWK:JWK= instance. - =CL-JWK:JWK-KTY= - =CL-JWK:JWK-USE= - =CL-JWK:JWK-KID= - =CL-JWK:JWK-ALG= - =CL-JWK:JWK-KEY-OPS= - =CL-JWK:JWK-KEY= The =CL-JWK:JWK-KEY= method returns an instance of an =ironclad= public key. You can also fetch public JWKs from an OpenID Provider. For example: #+begin_src lisp CL-USER> (defparameter *client* (cl-jwk:make-client :hostname "accounts.google.com")) *CLIENT* CL-USER> (cl-jwk:public-keys *client*) (#<CL-JWK.CORE:JWK {10021B5FD3}> #<CL-JWK.CORE:JWK {10021B6053}> #<CL-JWK.CORE:JWK {10021B60D3}>) #+end_src Token verification can be done using the =CL-JWK:VERIFY-TOKEN= generic function. =CL-JWK:VERIFY-TOKEN= is implemented for both =CL-JWK:JWK= and =CL-JWK:CLIENT= classes. * Tests The =cl-jwk= tests are provided as part of the =:cl-jwk.test= system. In order to run the tests you can evaluate the following expressions. #+begin_src lisp CL-USER> (ql:quickload :cl-jwk.test) CL-USER> (asdf:test-system :cl-jwk.test) #+end_src Or you can run the tests using the =scripts/run-tests.sh= shell script instead, e.g. #+begin_src shell env LISP=sbcl ./scripts/run-tests.sh #+end_src * Contributing =cl-jwk= is hosted on [[https://github.com/dnaeon/cl-jwk][Github]]. Please contribute by reporting issues, suggesting features or by sending patches using pull requests. * Authors - Marin Atanasov Nikolov <[[mailto:[email protected]][[email protected]]]> * License This project is Open Source and licensed under the [[http://opensource.org/licenses/BSD-2-Clause][BSD License]].
ethstakersclub/ethstakersclub
https://github.com/ethstakersclub/ethstakersclub
Ethstakers.club is a tool designed to monitor validators, slots, epochs and much more on the Ethereum Beacon Chain
# Ethstakers.club

The Ethstakers.club Explorer is a tool designed to monitor validators, slots, epochs and much more on the Ethereum Beacon Chain. Currently, there is only a single other beacon explorer available, which is a point of centralization and prone to problems if it experiences any issues or downtime.

<div align="center"> <img src="screenshot/screenshot.jpg" alt="Screenshot" width="92%"> </div>

Unlike the competition, our explorer relies entirely on freely available software, such as PostgreSQL instead of proprietary solutions like Google Bigtable (beaconcha.in), and Chart.js instead of Highsoft charts.

The project was launched with the vision that everyone should at least be able to run their own beacon chain explorer, regardless of their technical expertise or financial resources, without being dependent on third parties. We believe in empowering the Ethereum community by providing an open-source and accessible solution for monitoring validators. This goal also ensures greater resilience and reliability for monitoring validators on the Ethereum Beacon Chain, even in the case of unexpected events.

This project also provides benefits to researchers and individuals in need of unlimited API access. By utilizing freely available software instead of proprietary solutions, the project enables these users to easily launch a local instance. Moreover, all API functions can be effortlessly extended to accommodate any desired functionality.

## So, why was the project launched?

The Ethstakers.club Explorer was launched with the aim of providing a reliable and transparent solution for monitoring validators on the Ethereum Beacon Chain. The Ethereum Beacon Chain is a critical component of Ethereum's transition to a proof-of-stake consensus mechanism. Validators play a crucial role in maintaining the security and integrity of the Ethereum network. Therefore, it is essential for the community to have access to accurate and up-to-date information about these validators.

Existing beacon chain explorers often rely on proprietary and expensive software solutions, making them less accessible to the wider Ethereum community. By utilizing freely available software we aim to remove these barriers and provide an open-source, cost-effective solution that anyone can use.

## Key Features

The Ethstakers.club Explorer offers a range of features that make it a valuable tool for monitoring validators on the Beacon Chain. These features include:

### Real-time Validator Monitoring

The explorer offers real-time (auto-updating) attestation and sync monitoring.

### Don't miss your next Proposal

Check whether any of your validators are scheduled for block proposals within a specific time span. The dashboard includes a feature that indicates whether any validators are expected to propose a block within the next epoch. This feature helps users stay informed about upcoming validator activities and enables them to make necessary preparations for pending duties, such as refraining from updating their clients during that time.

### Comprehensive Validator Information

It provides comprehensive information about the validators being monitored. One of its key features is the ability to display all watched validators' information in a single table, enabling users to access and analyze the data more efficiently.

### Historical Data Analysis

The explorer stores historical data about validators, enabling users to analyze their performance over time. For example, it provides a chart of the number of missed attestation and sync committee duties across all validators. This feature helps identify trends and patterns, providing valuable insights.

### Open-Source and Extensible

The entire project is open-source, allowing the community to contribute, review, and improve the codebase.

### No Vendor Lock-in

By avoiding proprietary solutions, users are not locked into any specific vendor or service provider. They have the freedom to deploy and manage the explorer according to their own preferences.

### No Tracking Tools

Ethstakers.club does not employ any third-party tracking tools such as Google Analytics.

### Aggregated Statistics

Gain a comprehensive overview of all your blocks and sync committee commitments, and access information regarding block proposals.

### Cost-Effective Solution

By leveraging freely available software like PostgreSQL, we eliminate the need for expensive proprietary solutions, making the explorer accessible to a wider range of users. The current deployment only costs ~89€ per month (51€ for the beacon and execution client, 38€ for the actual explorer).

## Getting Started

For detailed instructions on how to set up and use the Eth Beacon Chain Explorer, please refer to the [documentation](docs/install.md).

## Contributing

We welcome contributions from the community to help improve the Ethstakers.club Explorer. If you have any ideas, suggestions, or bug reports, please open an issue or submit a pull request. Additionally, if you find it helpful, you have the option to star this repository. Doing so would assist us in validating that we're heading in the right direction.

Please refer to the [roadmap](docs/roadmap.md) for any missing features.
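For readers unfamiliar with where validator data of this kind ultimately comes from: explorers like this one sit on top of a consensus client's standard Beacon API. The snippet below is only a generic, hedged sketch of such a query, not code from this repository; the node URL and validator index are hypothetical placeholders.

```python
# Hedged sketch: querying the standard Beacon API that beacon-chain explorers build on.
# The node URL and validator index are hypothetical placeholders; not code from this repo.
import requests

BEACON_NODE = "http://localhost:5052"   # hypothetical locally reachable beacon node
validator_index = 123456                # hypothetical validator

resp = requests.get(
    f"{BEACON_NODE}/eth/v1/beacon/states/head/validators/{validator_index}",
    timeout=10,
)
resp.raise_for_status()
data = resp.json()["data"]
print(data["status"], data["validator"]["effective_balance"])
```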
verytinydever/test
https://github.com/verytinydever/test
null
# test
deborahLevy130/seathru_NeRF
https://github.com/deborahLevy130/seathru_NeRF
official implementation of SeaThru-NeRF CVPR 2023
# SeaThru-NeRF: Neural Radiance Fields In Scattering Media, CVPR 2023

#### [project page](https://sea-thru-nerf.github.io/) | [paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Levy_SeaThru-NeRF_Neural_Radiance_Fields_in_Scattering_Media_CVPR_2023_paper.pdf)

> SeaThru-NeRF: Neural Radiance Fields In Scattering Media
> [Deborah Levy](mailto:[email protected]) | Amit Peleg | [Naama Pearl](https://naamapearl.github.io/) | Dan Rosenbaum | [Derya Akkaynak](https://www.deryaakkaynak.com/) | [Tali Treibitz](https://www.viseaon.haifa.ac.il/) | [Simon Korman](https://www.cs.haifa.ac.il/~skorman/)
> CVPR 2023

Our implementation is based on the paper "Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields" (CVPR 2022) and their [github repository](https://github.com/google-research/multinerf). This implementation is written in [JAX](https://github.com/google/jax).

## Setup

```
# Clone the repo.
git clone https://github.com/deborahLevy130/seathru_NeRF.git
cd seathru_NeRF
mkdir data

# Make a conda environment.
conda create --name seathruNeRF python=3.9
conda activate seathruNeRF

# Prepare pip.
conda install pip
pip install --upgrade pip

# Install requirements.
pip install -r requirements.txt

# Manually install rmbrualla's `pycolmap` (don't use pip's! It's different).
git clone https://github.com/rmbrualla/pycolmap.git ./internal/pycolmap
```

You'll probably also need to update your JAX installation to support GPUs or TPUs.

## Running

Example scripts for training, evaluating, and rendering can be found in `scripts/`. You'll need to change the paths to point to wherever the datasets are located. [Gin](https://github.com/google/gin-config) configuration files for our model and some ablations can be found in `configs/`.

### OOM errors

You may need to reduce the batch size (`Config.batch_size`) to avoid out of memory errors. If you do this, but want to preserve quality, be sure to increase the number of training iterations and decrease the learning rate by whatever scale factor you decrease the batch size by (a short sketch of this scaling rule appears at the end of the Dataset section below).

## Using your own data

Summary: first, calculate poses. Second, train SeaThru-NeRF. Third, render a result video from the trained NeRF model.

1. Calculating poses (using [COLMAP](https://colmap.github.io/install.html)):
2. Training SeaThru-NeRF:
```
./scripts/train_llff_uw.sh
```
set `SCENE` to the image set you wish to use
3. Evaluating SeaThru-NeRF on existing images:
```
./scripts/render_llff_uw.sh
```
set `SCENE` and `EXPERIMENT_NAME` to the corresponding experiment.
4. Rendering SeaThru-NeRF Novel Views:
```
./scripts/render_llff_uw.sh
```
set `SCENE` and `EXPERIMENT_NAME` to the corresponding experiment.

Your output video should now exist in the directory `ckpt/uw/${SCENE}_${EXPERIMENT_NAME}/render/`. You will find the underwater rendering, the restored images rendering (J) and the depth maps.

## Dataset

- Photos credit - Matan Yuval

[Here](https://drive.google.com/uc?export=download&id=1RzojBFvBWjUUhuJb95xJPSNP3nJwZWaT) you will find the underwater scenes from the paper. Extract the files into the data folder and train SeaThru-NeRF with those scenes:

```
./scripts/train_llff_uw.sh
```

In ```'${SCENE}'``` put the name of the scene you wish to work with.

For more dataset formats, you can refer to [multinerf](https://github.com/google-research/multinerf). For now, our NeRF works on forward-looking scenes.
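As a concrete reading of the note in the "OOM errors" section above: if you reduce `Config.batch_size` by some factor, scale the learning rate down and the number of training iterations up by that same factor. A minimal sketch of that arithmetic, with hypothetical base values (only the scaling rule itself comes from this README):

```python
# Hedged sketch of the batch-size scaling rule from the "OOM errors" note.
# The base values below are hypothetical placeholders; only the scaling rule is from the README.
base_batch_size, base_lr, base_iters = 16384, 2e-3, 250_000

new_batch_size = base_batch_size // 2      # e.g. halve the batch to fit in GPU memory
scale = new_batch_size / base_batch_size   # 0.5

new_lr = base_lr * scale                   # decrease the learning rate by the same factor
new_iters = int(base_iters / scale)        # increase the training iterations accordingly

print(new_batch_size, new_lr, new_iters)   # 8192 0.001 500000
```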
### Running SeaThru-NeRF on your own data

In order to run SeaThru-NeRF on your own captured images of a scene, you must first run [COLMAP](https://colmap.github.io/install.html) to calculate camera poses. After you run COLMAP, you can run [this](https://github.com/Fyusion/LLFF/blob/master/imgs2poses.py) script from LLFF to get the `poses_bounds.npy` file.

After you run COLMAP, all you need to do to load your scene in SeaThru-NeRF is ensure it has the following format:

```
my_dataset_dir/images_wb/ <--- all input images
my_dataset_dir/sparse/0/ <--- COLMAP sparse reconstruction files (cameras, images, points)
my_dataset_dir/poses_bounds.npy
```

### How to implement SeaThru-NeRF in your own NeRF

To incorporate our NeRF into an existing NeRF framework, follow these steps:

1. Incorporate the medium's module into the MLP by referring to the architecture provided in section 4.5 of the paper, "Implementation and Optimization." You can also refer to the code available [here](https://github.com/deborahLevy130/seathru_NeRF/blob/master/internal/models.py#L866).
2. Modify the rendering equations as outlined in the paper.
3. Integrate the accuracy loss described in the paper for the object's transmission. You can refer to our implementation available [here](https://github.com/deborahLevy130/seathru_NeRF/blob/master/internal/train_utils.py#L153). If you have an alternative loss function that encourages the weights of your rendering equations to be somewhat unimodal (or close to a Dirac delta function), you may use it instead of the accuracy loss. Simply apply it to the weights of the objects.

## Citation

If you use this software package, please cite whichever constituent paper(s) you build upon, or feel free to cite this entire codebase as:

```
@inproceedings{levy2023seathru,
  title={SeaThru-NeRF: Neural Radiance Fields in Scattering Media},
  author={Levy, Deborah and Peleg, Amit and Pearl, Naama and Rosenbaum, Dan and Akkaynak, Derya and Korman, Simon and Treibitz, Tali},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={56--65},
  year={2023}
}

@misc{multinerf2022,
  title={{MultiNeRF}: {A} {Code} {Release} for {Mip-NeRF} 360, {Ref-NeRF}, and {RawNeRF}},
  author={Ben Mildenhall and Dor Verbin and Pratul P. Srinivasan and Peter Hedman and Ricardo Martin-Brualla and Jonathan T. Barron},
  year={2022},
  url={https://github.com/google-research/multinerf},
}
```
MikePopoloski/boost_unordered
https://github.com/MikePopoloski/boost_unordered
Standalone version of the boost::unordered library
# boost_unordered ![](https://github.com/MikePopoloski/boost_unordered/actions/workflows/build.yml/badge.svg) ![GitHub](https://img.shields.io/github/license/MikePopoloski/boost_unordered) This project is a standalone version of the [boost::unordered](https://github.com/boostorg/unordered) library. boost::unordered_flat{map,set} are some of the fastest and highest quality C++ hashmap implementations out there (major props to the authors), but unfortunately they come with the entire rest of boost as baggage if you want to use them. This repository extracts them and snips out the links to the rest of boost so that you can use just the hashmaps. ### Benefits First and foremost, if you don't otherwise need boost in your project then this will let you avoid taking a dependency on all of boost, which is absolutely gigantic (something like 750MB of sources alone). This may not matter to you if you already have boost installed on your system, but you do need a relatively recent version to get the flat hashmaps and if you are a library you may not want to introduce a transitive dependency on boost to all of your users. Even more importantly, the unordered headers pull in a giant tree of other header files from the rest of boost, which has a direct impact on compilation times. Building this simple program: ``` #include <boost/unordered/unordered_flat_map.hpp> int main() { } ``` We can see that (on my system) we pull in 275 boost header files: ``` > g++-12 -std=c++20 -O2 main.cpp -M | sed 's/\\$//' | awk '{ for (i=1; i<=NF; i++) print $i }' | grep boost | wc -l 275 ``` which are 31424 lines in total: ``` > g++-12 -std=c++20 -O2 main.cpp -M | sed 's/\\$//' | awk '{ for (i=1; i<=NF; i++) print $i }' | grep boost | xargs wc -l ... 31424 total ``` Using the standalone version: ``` > g++-12 -std=c++20 -O2 -I ~/boost_unordered/include main.cpp -M | sed 's/\\$//' | awk '{ for (i=1; i<=NF; i++) print $i }' | grep boost | wc -l 26 > g++-12 -std=c++20 -O2 -I ~/boost_unordered/include main.cpp -M | sed 's/\\$//' | awk '{ for (i=1; i<=NF; i++) print $i }' | grep boost | xargs wc -l ... 6322 total ``` So we've chopped out 249 files and 25102 lines of code from each translation unit that includes unordered_flat_map. The compilation speedup on my machine for this toy example is about 10%, though your mileage may vary. There is also an amalgamated single header, [boost_unordered.hpp](https://github.com/MikePopoloski/boost_unordered/blob/master/boost_unordered.hpp), which includes all four of the unordered_* containers in a single header, weighing in at just under 9k lines. ### Tradeoffs The standalone library requires a modern C++20 capable compiler and standard library. Assuming you have that then you don't really give up anything. All of the tests still pass. A lot of the stuff cut out is to support old or esoteric compilers -- if that's something you use then stick with the full boost installation. This library does remove the older unordered_{map,set} implementations. unordered_flat_{map,set} and unordered_node_{map,set} are superior containers in almost all cases, and if you have a case that really needs the old-style non-flat containers then you can use the ones included in std or just fall back to the full boost installation. One other piece of functionality was removed: containerHash in boost has the ability to automatically compute hash values for structures that are annotated via boost::describe. 
If you are looking to avoid a boost dependency then you probably aren't annotating your types via boost::describe so this doesn't seem so bad, and it lets us get rid of all of the boost::describe headers. ### How to Use Two ways: 1. Download the boost_unordered.hpp file and include it in your project. 2. Use as a proper CMake target, by downloading the source or by using something like CMake's FetchContent: ``` FetchContent_Declare( boost_unordered GIT_REPOSITORY https://github.com/MikePopoloski/boost_unordered GIT_TAG v1.0 GIT_SHALLOW ON) FetchContent_MakeAvailable(boost_unordered) target_link_libraries(my_executable PRIVATE boost_unordered) ``` ### Methodology The sources in this repository were produced as follows: - Download boost (version 1.82) - Extract the unordered library via the `bcp` utility - Delete a bunch of obvious stuff that `bcp` pulls in by accident (like the whole regex library) - Use the [coan](https://coan2.sourceforge.net/) tool to flatten out a bunch of macros for pre-C++11 cases or workarounds for various compilers - Create a [minconfig.hpp](https://github.com/MikePopoloski/boost_unordered/blob/master/include/boost/minconfig.hpp) file that can replace the whole tree of `config` and `predef` headers that were being pulled in - Find + replace a bunch of boost utilities that have std counterparts (mostly allocator traits) - Profit
qwotic/PhotoEditor
https://github.com/qwotic/PhotoEditor
SwiftUI Photo & Video Editor App
<img width="1599" alt="screen" src="https://github.com/qwotic/PhotoEditor/assets/96487423/b69d0f9a-0606-4bcf-9306-86fe518f6a6a"> ## Preview ![appgif](https://github.com/qwotic/PhotoEditor/assets/96487423/25e1cb85-6b80-4594-ad2e-0750e0bf40dd) ## 🛠️ Installation Steps 1. Open Terminal and Clone the Repo ``` bash git clone https://github.com/qwotic/PhotoEditor.git ``` 2. Run App ``` Happy Coding !! ```
adbutterfield/react-content-font
https://github.com/adbutterfield/react-content-font
Create custom Google Fonts from page content
# React Content Font

**Create custom Google Fonts from page content!**

## Languages

- [English](README.md)
- [日本語](README_JP.md)
- [中文](README_ZH.md)

## How to install?

`npm install react-content-font`

## Why should I care about this package?

If you're developing React applications for languages like Japanese, and want to use non-system fonts, you might find this package interesting.

Fonts for languages like Japanese are very big. A single font weight for [Noto Sans Japanese](https://fonts.google.com/noto/specimen/Noto+Sans+JP?query=noto+sans+jp), for instance, is 5.7 MB. Definitely not ideal to make your users download such a big file. Not to mention if you want more than one font weight...

## What does this thing do?

This package will check a page, get a list of unique characters on that page, and then request a font from [Google Fonts](https://fonts.google.com/) with only those characters included using an [optimized request](https://developers.google.com/fonts/docs/getting_started#optimizing_your_font_requests)!

On initial render, it uses [createTreeWalker](https://developer.mozilla.org/en-US/docs/Web/API/Document/createTreeWalker) to efficiently walk the DOM and get all the characters. After initial render, it uses [MutationObserver](https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver) and checks only the updated nodes for new text that gets added dynamically.

## How to use it?

It's as simple as adding the context provider somewhere high up in your application. For example, if you have a [Next.js](https://nextjs.org/) app using App Router, you can update your `app/layout.tsx` file like so:

```tsx
import FontProvider from 'react-content-font';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="jp">
      <body>
        <FontProvider fontName="Noto Sans JP">{children}</FontProvider>
      </body>
    </html>
  );
}
```

Simply provide the font you want with the `fontName` prop, and by default it will request only the normal (meaning 400) weight font.

## What if I want more than one font weight?

Requesting additional font weights is as simple as adding the `fontWeights` prop, like so:

```tsx
import FontProvider from 'react-content-font';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="jp">
      <body>
        <FontProvider fontName="Noto Sans JP" fontWeights={[400, 600]}>
          {children}
        </FontProvider>
      </body>
    </html>
  );
}
```

## What if I also want italic for some font weights?

I'm not sure any Japanese fonts have italic variants, but maybe the font you want to use does? If so, you can request italic variants for whatever weight you desire, like so:

```tsx
import FontProvider from 'react-content-font';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="jp">
      <body>
        <FontProvider
          fontName="Noto Sans JP"
          fontWeights={[400, 600, ['ital', 400], ['ital', 900]]}
        >
          {children}
        </FontProvider>
      </body>
    </html>
  );
}
```

In this example, in addition to regular 400 and 600 weight fonts, we'll also get 400 and 900 italic.

## What if I want to pick the font-display?

The [Google Fonts API](https://developers.google.com/fonts/docs/getting_started#use_font-display) documentation mentions that "specifying a value other than the default auto is usually appropriate". By default, when you generate a link tag for a Google Font, it sets `display=swap`. So this package will do the same thing.
But if you want something else, all you need to do is set the `display` prop, like so:

```tsx
import FontProvider from 'react-content-font';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="jp">
      <body>
        <FontProvider fontName="Noto Sans JP" display="block">
          {children}
        </FontProvider>
      </body>
    </html>
  );
}
```

Valid values for `display` are `'auto' | 'block' | 'swap' | 'fallback' | 'optional'`.

## What if I want to wait for the font to be loaded before showing content, or show a loading state or something?

Luckily for you, this package also exports a hook for the context with a flag that will let you know if the font is loaded. Here is an example of a `PageText` component that toggles the text's `visibility` between `hidden` and `visible` based on context.

```tsx
'use client';

import { useFontContext } from 'react-content-font';

export default function PageText() {
  const { isFontLoaded } = useFontContext();

  return (
    <p style={{ visibility: isFontLoaded ? 'visible' : 'hidden' }}>
      よそはほかまあこの威圧心というのの後をしないう。きっと場合で仕事帰りはひょろひょろその評でたなりでするが行くたをも表裏できなけれでば、なぜにはもっなないうた。個人にできたのはついに十月から向後ますだない。もっと岡田さんから批評その道それほど説明が云った他人その自力いつか修養にというお吹聴だでますでて、この先刻は私か同人引込で思うば、大森さんののを自分の私に勢いごろかと広めよば私手でご話の出ように引続きお[#「に解らうだので、とにかくたとい指図にするだろといるです事を考えだう。
    </p>
  );
}
```

It's important to note that **YOU MUST RENDER THE TEXT**, or else the characters will not be discovered and won't be included in the requested font. For example, **DO NOT DO THIS**:

```tsx
'use client';

import { useFontContext } from 'react-content-font';

export default function PageText() {
  const { isFontLoaded } = useFontContext();

  return (
    <>
      {isFontLoaded && (
        <p>
          よそはほかまあこの威圧心というのの後をしないう。きっと場合で仕事帰りはひょろひょろその評でたなりでするが行くたをも表裏できなけれでば、なぜにはもっなないうた。個人にできたのはついに十月から向後ますだない。もっと岡田さんから批評その道それほど説明が云った他人その自力いつか修養にというお吹聴だでますでて、この先刻は私か同人引込で思うば、大森さんののを自分の私に勢いごろかと広めよば私手でご話の出ように引続きお[#「に解らうだので、とにかくたとい指図にするだろといるです事を考えだう。
        </p>
      )}
    </>
  );
}
```

## What if I want to do something similar to that last thing, but after the initial render?

Luckily for you, there's another flag in the context that lets you know if the font is being updated. Similar to the previous example, you can defer showing content while an update is in progress, like so:

```tsx
'use client';

import { useFontContext } from 'react-content-font';

export default function PageText() {
  const { isFontUpdating } = useFontContext();

  return (
    <p style={{ visibility: isFontUpdating ? 'hidden' : 'visible' }}>
      よそはほかまあこの威圧心というのの後をしないう。きっと場合で仕事帰りはひょろひょろその評でたなりでするが行くたをも表裏できなけれでば、なぜにはもっなないうた。個人にできたのはついに十月から向後ますだない。もっと岡田さんから批評その道それほど説明が云った他人その自力いつか修養にというお吹聴だでますでて、この先刻は私か同人引込で思うば、大森さんののを自分の私に勢いごろかと広めよば私手でご話の出ように引続きお[#「に解らうだので、とにかくたとい指図にするだろといるです事を考えだう。
    </p>
  );
}
```

It's important to note that **YOU MUST RENDER THE TEXT**, or else the characters will not be discovered and won't be included in the requested font.

## This is cool, can I buy you a coffee?

Yes please! <a href="https://www.buymeacoffee.com/adbutterfield" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a>
NVIDIA/nvidia-terraform-modules
https://github.com/NVIDIA/nvidia-terraform-modules
Infrastructure as code for GPU accelerated managed Kubernetes clusters.
# NVIDIA Terraform Kubernetes Modules Infrastructure as code for GPU accelerated managed Kubernetes clusters. These scripts automate the deployment of GPU-Enabled Kubernetes clusters on various cloud service platforms. ## Getting Started With Terraform Terraform is an open-source infrastructure as code software tool that we will use to automate the deployment of Kubernetes clusters with the required add-ons to enable NVIDIA GPUs. This repository contains Terraform [modules](https://developer.hashicorp.com/terraform/tutorials/modules/module), which are sets of Terraform configuration files ready for deployment. The modules in this repository can be incorporated into existing Terraform-managed infrastructure, or used to set up new infrastructure from scratch. You can learn more about Terraform [here](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/infrastructure-as-code). You can download Terraform (CLI) [here](https://developer.hashicorp.com/terraform/downloads). ## Support Matrix NVIDIA offers support for Kubernetes through [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/). Refer to the [product support matrix](https://docs.nvidia.com/ai-enterprise/latest/product-support-matrix/index.html#nvaie-supported-cloud-services) for supported managed Kubernetes platforms. The Kubernetes clusters provisioned by the modules in this repository provide tested and certified versions of Kubernetes, the NVIDIA GPU operator, and the NVIDIA Driver. If your application does not require a specific version of Kubernetes, we recommend using the latest available version. We also recommend you plan to upgrade your version of Kubernetes at least every 6 months. Each CSP has its own end of life date for the versions of Kubernetes they support. For more information see: - [Amazon EKS release calendar](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar) - [Azure AKS release calendar](https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli#aks-kubernetes-release-calendar) - [GCP GKE release calendar](https://cloud.google.com/kubernetes-engine/docs/release-schedule#schedule_for_static_no-channel_versions). | Version | Release Date | Kubernetes Versions | NVIDIA GPU Operator | NVIDIA Data Center Driver | End of Life | | :--- | :--- | :--- | :--- | :--- | :--- | | 0.1.0 | June 2023 | EKS - 1.26 <br> GKE - 1.26 <br> AKS - 1.26 | 23.3.2 | 525.105.17 | EKS - June 2024 <br> GKE - June 2024 <br> AKS - March 2024 | ## Usage #### Provision a GPU enabled Kubernetes Cluster - Create an [EKS Cluster](./eks/README.md) - Create an [AKS Cluster](./aks/README.md) - Create a [GKE Cluster](./gke/README.md) ### Creating an EKS Cluster Call the EKS module by adding this to an existing Terraform file: ```hcl module "nvidia-eks" { source = "git::github.com/nvidia/nvidia-terraform-modules/eks" cluster_name = "nvidia-eks" } ``` See the [EKS README](./eks/README.md) for all available configuration options. ### Creating an AKS Cluster Call the AKS module by adding this to an existing Terraform file: ```hcl module "nvidia-aks" { source = "git::github.com/NVIDIA/nvidia-terraform-modules/aks" cluster_name = "nvidia-aks-cluster" admin_group_object_ids = [] # See description of this value in the AKS Readme location = "us-west1" } ``` See the [AKS README](./aks/README.md) for all available configuration options. 
### Creating a GKE Cluster Call the GKE module by adding this to an existing Terraform file: ```hcl module "nvidia-gke" { source = "git::github.com/NVIDIA/nvidia-terraform-modules/gke" cluster_name = "nvidia-gke-cluster" project_id = "your-gcp-project-id" region = "us-west1" node_zones = ["us-west1-a"] } ``` See the [GKE README](./gke/README.md) for all available configuration options. ### Cloud Native Service Add On Pack (CNPack) In each subdirectory, there is a Terraform module to provision the Kubernetes cluster and any additional prerequisite cloud infrastructure to launch CNPack. See [CNPack on EKS](./eks/examples/cnpack/), [CNPack on GKE](./gke/examples/cnpack/), and [CNPack on AKS](./aks/examples/cnpack/) for more information and the sample CNPack configuration file. More information on CNPack can be found on the [NVIDIA AI Enterprise Documentation](https://docs.nvidia.com/ai-enterprise/deployment-guide-cloud-native-service-add-on-pack/0.1.0/cns-deployment.html) ### State Management These modules do not set up state management for the generated Terraform state file, deleting the statefile (`terraform.tfstate`) generated by Terraform could result in cloud resources needing to be manually deleted. We strongly encourage you [configure remote state](https://developer.hashicorp.com/terraform/language/state/remote). Please see the [Terraform Documentation](https://developer.hashicorp.com/terraform/language/state) for more information. ## Contributing Pull requests are welcome! Please see our [contribution guidelines](./CONTRIBUTING.md). ## Getting help or Providing feedback Please open an [issue](https://github.com/NVIDIA/nvidia-terraform-modules/issues) on the GitHub project for any questions. Your feedback is appreciated. ## Useful Links - [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/) - [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/overview.html) - [NVIDIA GPU Cloud (NGC)](https://catalog.ngc.nvidia.com/)
GabrieleRisso/SMSgpt
https://github.com/GabrieleRisso/SMSgpt
POC of a phone used as SMS gateway to serve queries to chatGPT over GSM network using the regular Android message app.
# SMSgpt

POC of a phone used as an SMS gateway to serve queries to ChatGPT over the GSM network using the regular Android message app.

#### Tested on Arch Linux and a Google Pixel 6 with Android 13

## How it works

An Android Debug Bridge (```adb```) connection is created, either wired or wireless, from the computer to the phone. ```smsgpt``` listens for new incoming messages on the phone, checks the body of the last message and queries ChatGPT. The reply is then sent as a new message to the client phone number.

## How to set up

#### Server

- Download sgpt (used to communicate with ChatGPT): ```pip install shell-gpt```
- Set your OpenAI API key: ```export OPENAI_API_KEY="sk-BQfU50xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"```
- Download adb and test it with: ```adb --version```

#### Phone1 (no root is required)

In developer settings activate:

- ADB wired connection: ```debug usb ON```
- ADB wireless connection: ```wireless debug ON```

#### Server-Phone connection

- Check that adb is working after pairing: ```adb devices``` must show the phone in device mode.
- Connect to wireless adb: ```adb connect IP-addr-of-the-phone``` (recommended)

# Use:

- fix permissions: ```chmod +x smsgpt.sh```
- run: ```./smsgpt.sh```
- install: ```echo "alias smsgpt=\"$PWD/smsgpt.sh\"" >> $HOME/.bashrc && source $HOME/.bashrc ```

#### There are two modes: ```gateway``` and ```chat with yourself```

## chat with yourself mode:

```
--> PC(Server) --- USB adb --> Phone1 ---\
                                          \---- WIFI adb --> Phone1 ------> GSM Network --> PC(Server) -->
        {3-500 m}                                                              {loopback}
```

On the phone, to test it, go to the SMS message app and text yourself a question.

### Expected terminal output:
```
{OK} STARTED. Waiting for new incoming messages
{L?} is Id of the last message.
1668
1668
1670
{<--} Incoming message N^1668 form +33XXXXXXXXXX
{type}: inbound
{body}: Math or chocolate?
{-->} Sending message to +33XXXXXXXXXX
{type}: outbound
{body}: Math or chocolate? Well, that's a tough choice. I guess it depends on whether you prefer numbers or sweetness.
{...} Waiting for new incoming messages
Result: Parcel(00000000 '....')
1670
1670
1670
```

### On the phone:
```
Receive an SMS message with the ChatGPT reply as its body
```

# gateway mode:

Activate it by removing/commenting the line: ```counter=$((counter+1))```

This allows remote clients to text your phone number and receive ChatGPT reply messages on their phone over the GSM network.

All possible phone clients are natively supported: Android (Google), iOS (Apple), Windows Phone (Microsoft), BlackBerry OS (BlackBerry), Symbian (Nokia).

```
PC(Server) --- USB adb --> Phone1
               \---- WIFI adb --> Phone1 --> GSM Network --> Phone2(Client)
     {3-500 m}                                  {long range}
```

## Limitations and bugs:
```
. Only one message at a time is supported.
. SMSs are limited to 160 chars. For now I'm limiting individual SMS to that length and not splitting them. SMS concatenation is possible.
. A message is not sent when the body contains special chars, like emoji or other unusual characters.
. Real-world use mode issue: the reply SMS sent from the server to the client is not displayed in the server's SMS messaging app, only in the console and in the client's message app.
```

## Leave a star if interested in further development

#### Acknowledgements should be given to shell-gpt, which can be found at this GitHub repository: https://github.com/TheR1D/shell_gpt.
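As a rough illustration of the listener loop described in "How it works" above, here is a minimal Python sketch. It is not the project's `smsgpt.sh`; the `content://sms` URI, the column names and the crude output parsing are assumptions that may differ between devices, and sending the reply back over SMS is left out.

```python
# Illustrative sketch only: the actual project is a shell script (smsgpt.sh).
# Assumptions: adb is on PATH, the phone is paired, and the stock SMS content
# provider is reachable as content://sms with _id/address/body columns.
import subprocess
import time


def adb_content_query(uri, projection):
    """Run 'adb shell content query' and return one raw text row per message."""
    out = subprocess.run(
        ["adb", "shell", "content", "query", "--uri", uri, "--projection", projection],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.startswith("Row:")]


def newest_sms():
    """Return the raw row of the most recent SMS (simplified; the real script tracks ids)."""
    rows = adb_content_query("content://sms", "_id:address:body")
    return rows[-1] if rows else None


def ask_chatgpt(question):
    """Query ChatGPT through the shell-gpt CLI (sgpt), as the script does."""
    out = subprocess.run(["sgpt", question], capture_output=True, text=True, check=True)
    return out.stdout.strip()


if __name__ == "__main__":
    print("{OK} STARTED. Waiting for new incoming messages")
    last_seen = newest_sms()
    while True:
        time.sleep(2)
        current = newest_sms()
        if current and current != last_seen:
            last_seen = current
            body = current.split("body=", 1)[-1]       # crude parse of the query output
            reply = ask_chatgpt(body)[:160]            # single SMS payloads are capped at 160 chars
            print(f"{{<--}} {body}")
            print(f"{{-->}} {reply}")                  # sending the reply back is left to smsgpt.sh
```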
NevynUK/PicoDebugger
https://github.com/NevynUK/PicoDebugger
Hardware designs for Raspberry Pi Pico debug board.
![Assembled PicoDebugger](Images/AssembledPicoDebugger.jpg)

# PicoDebugger

The PicoDebugger project is an extension of the Raspberry Pi Foundation's PicoProbe project. The basis of the PicoProbe is to use a Raspberry Pi Pico as a debug probe for another Pico.

There are two elements to this project:

* PicoProbe Software (from the Raspberry Pi Foundation)
* PicoDebugger hardware

The aim is to provide a physically stable setup for the debug probe and the target Pico, and to also allow connection to hardware under development.

This document will use the following terminology:

* Picoprobe - the Raspberry Pi Pico programmed with the PicoProbe software. This acts as a debug probe for the host computer.
* Target - the Raspberry Pi Pico that is being debugged.

## History

The idea came about as I was developing some software using the NuttX RTOS on the Pico. I had some spare Pico boards available, so it was just a case of wiring them up. The Fritzing diagram in the Raspberry Pi [Getting Started Guide](https://datasheets.raspberrypi.com/pico/getting-started-with-pico.pdf) shows how to connect a Pico as a debug probe to a target Pico:

![Raspberry Pi PicoProbe and Target](Images/PicoprobeAndPico.png)

This led to a quick hack and a development board put together on protoboard:

![Picoprobe and target on protoboard](Images/PicoWDebugBoard.jpg)

The next step is to break out KiCad and put together a solution that will allow connections from the target board to additional development hardware.

## PicoProbe Software

The Raspberry Pi Foundation have done the hard work by producing the software for the PicoProbe and the PicoDebug probe. You can find more information in the [PicoProbe GitHub Repository](https://github.com/raspberrypi/picoprobe). This repository contains links to the documents detailing how to set up a Pico as a debug probe.

The software allows for two types of connection:

* SWD debug
* UART

SWD allows the host computer to connect to the target board and deploy and debug applications running on the board. The UART connection allows the application to generate user messages and also accept input from the host computer.

## PicoDebugger Hardware

The PicoDebugger aims to provide the following features:

* Mount a Pico to act as a Picoprobe
* Mount a target Pico
* Optionally connect the UART from the target to the Picoprobe
* Reset the target board to allow programming by dragging a UF2 file to the target board
* Deployment and debugging using SWD
* Connection to external project hardware

Thanks to KiCad and [PCBWay](https://www.pcbway.com/), a few weeks later we have this:

![PicoDebugger](Images/FinalBoardjpg.jpg)

Moving from left to right across the board: the left of the board shows the Picoprobe soldered to the board. The DIP switches allow the two UART lines between the Picoprobe and the target board to be connected / disconnected as required. The three-pin header connects to the SWD pins on the Pico or Pico W board. The reset switch is connected to the reset pin on the target board, allowing the target board to be programmed by dragging a UF2 file onto it. To the far right there is a 2x20 pin header which matches the pins on the target board. This allows the target board to be connected to development hardware, either with individual DuPont cables for a small number of connections or with an IDE cable to connect all 40 pins through to a project under development.

The full designs (KiCad 7) of this board are available in the [PCB](PCB) directory.
## PicoDebugger Mount

The image above shows the PicoDebugger board on a 3D printed base. This base keeps the through-hole parts from scratching any work surfaces and helps to reduce the possibility of unintended shorts on the exposed pins on the bottom of the board.

![PicoDebugger Mount](Images/PicoDebuggerMountFusionScreenshot.png)

This part is available as a Fusion 360 part in the [3DDesigns](3DDesigns) folder. This folder also contains the GCode for the design. The GCode is intended to be used on an Ender 3 Pro, using PLA. The print has a 50% infill for rigidity.
shakiroye/tgTech-communities
https://github.com/shakiroye/tgTech-communities
The Togo Tech Communities is a dynamic and inclusive project aimed at showcasing the vibrant and emerging tech communities in Togo. It seeks to provide a platform that celebrates the diverse range of talents, innovations, and collaborative efforts within the country's technology sector.
# Togo Tech Communities

The Togo Tech Communities is a dynamic and inclusive project aimed at showcasing the vibrant and emerging tech communities in Togo. It seeks to provide a platform that celebrates the diverse range of talents, innovations, and collaborative efforts within the country's technology sector.

## How to add a community

Our goal is to make the act of contributing an enjoyable and exciting way of learning how to participate in open-source projects.

To include a community, follow these steps:

1. Fork the repository.
2. Clone the project to your local machine.
3. Install the dependencies, then run the project:
```bash
# npm
npm install
npm run dev
```
4. Create a new branch (e.g. `add/the-community-name`).
5. Edit the file `public/data/communities.ts`. Within this file, you'll find an array of objects that represent communities. Your task is to append an object to this array, keeping the entries in chronological order.
![image](https://github.com/shakiroye/tgTech-communities/assets/92779053/ec609e3d-1a3a-425f-ae7c-ebec771743ca)
> The type of the data is defined in `types.ts`
* _name_ is mandatory.
* _links_ must have at least one element.
6. Commit your changes and push the new branch.
7. Open and submit a PR.

## Credit

This project was inspired by the great work of [Gausoft](https://github.com/gausoft/awesome-TgTech-communities)
xh321/LiteLoaderQQNT-Anti-Recall
https://github.com/xh321/LiteLoaderQQNT-Anti-Recall
LiteLoaderQQNT plugin - simple anti-recall for QQNT
# LiteLoaderQQNT - anti-recall

A LiteLoaderQQNT plugin for simple anti-recall (keeping retracted messages visible).

Before use, install [LiteLoaderQQNT](https://github.com/mo-jinran/LiteLoaderQQNT) and run the plugin on a recent version of QQNT.

## How to use

Clone the repository or download and extract the zip file, keeping the folder structure (the folder name is the plugin name and its contents are the files from GitHub). Move the folder into `LiteLoaderQQNT data directory/plugins/` and restart QQNT.

All recalled messages are marked with a red border and their text is struck through. **They remain visible when you re-enter the chat view.** However, the effect is lost once QQNT is restarted.

Now, even if an image was never opened before it was recalled, it can still be viewed after the recall, and not just as a thumbnail: opening it full size also works! However, if the recall did not happen right in front of you, and you only open the chat view after the recall, the image may keep spinning for a while (please be patient). The guess is that QQ first checks whether the image exists; right after a recall it has not been downloaded yet, so it spins. Once the plugin has downloaded the image, QQ needs to re-check it before the image can be displayed.

## How it works

**Previous version (outdated):** Why does the description call this "simple"? Because the anti-recall works by intercepting the IPC message that performs the recall, so the message is not deleted. However, when the chat view is reopened, NTQQ appears to pull the messages again from the logic layer (the old QQ anti-recall intercepted the recall packet at the network level, so it did not have this problem; this plugin only intercepts the IPC at the rendering layer, while the logic layer is still a black box), so the message ended up recalled anyway.

**Changes in the current version:** Incoming messages are kept in memory. While you are in the chat view, a detected recall packet is intercepted and the message is shown with a red border and strikethrough text. When you re-enter the chat view, the plugin walks the message list; whenever it finds a "message recalled" notice, it looks for the previously saved message. If found, the recall notice is replaced with the saved message, again with a red border and strikethrough text; if not found, the recall notice is shown as usual. This way, anti-recall **still works correctly after re-entering the chat view** (including when opening a standalone chat window).

- To avoid using too much memory, the number of saved messages is capped at 1000 by default. If needed, you can change the `MAX_MSG_SAVED_LIMIT` constant in `main.js`. The meaning of this value: with the default of 1000, any message older than the most recent 1000 received messages cannot be restored if it is recalled. **However, if you viewed a message after it was recalled and the anti-recall took effect (i.e. you saw the red border and strikethrough), that message is additionally stored in a dedicated "recalled messages array", which has no capacity limit and is not affected by this cap.**
- To avoid possible concurrency issues, once the limit above is exceeded, the plugin deletes the first 50 entries of the saved message list after each new message, until it is back below the limit. You can change this 50 via the `DELETE_MSG_COUNT_PER_TIME` constant in `main.js`. It is recommended to set it to roughly the number of messages you receive per second on average.

So, at the moment anti-recall is **only effective for the lifetime of the process** (after restarting QQNT, the messages are recalled again).

Please see the source code for detailed explanations of how it works.

**Storing messages in a database is under consideration; please wait for a future release.**

## Summary

While NTQQ is open, **anti-recall works no matter what you do**. Messages are kept temporarily in memory and never written to disk, so there is no risk of message leakage. (If you could read the messages in the plugin's memory, why wouldn't you just read QQ's own message list? 🤣)

If a safe serialized storage method becomes available, persisting recalled messages would not be a problem either.

## License and disclaimer

MIT | Use for any illegal purpose is prohibited. This plugin was developed for learning and research purposes, for personal use only, and is not provided to any third party. You are solely responsible for any infringement caused by improper use.
ArtificialZeng/ChatGLM2-6B-Explained
https://github.com/ArtificialZeng/ChatGLM2-6B-Explained
ChatGLM2-6B-Explained
# ChatGLM2-6B-Explained

ChatGLM2-6B related code, explained line by line.

Updated step by step; everyone is welcome to Star, Fork, get involved and submit PRs.

Note: xxx indicates a placeholder directory, not a real one.

## This project mainly covers data flow, testing, and P-Tuning v2 fine-tuning. If you want to understand how the large model itself works, see [GLM-Explained](https://github.com/ArtificialZeng/GLM-Explained)

In addition, large models are built on two very important base libraries, namely [transformers](https://github.com/ArtificialZeng/tranformers-expalined) and [pytorch](https://github.com/ArtificialZeng/pytorch-explained); both of these libraries also have line-by-line annotated versions of their key code.

# ChatGLM2-6B-Explained

* [x/](./src)
  * [x/](./src/utils)
* [main.py](./ptuning/main.py)
* [train.sh parameter explanations](./ptuning/train.sh)
* [x.py](./src/train_sft.py)
* [chatglm2PT](./chatglm2PT)
  * [/configuration_chatglm.py](./chatglm2PT/configuration_chatglm.py) This code defines a class named ChatGLMConfig, used to configure and manage the ChatGLM model.
  * [/modelling_chatglm.py](./chatglm2PT/configuration_chatglm.py)
* [x/](./examples)
  * [x.md](./examples/ads_generation.md)
* [README.md](./README.md)

# CSDN color-highlighted blog version:

* [ChatGLM1/2 source code walkthrough series - column home](https://blog.csdn.net/sinat_37574187/category_12365053.html)
* [/src/utils/](./ChatGLM-Efficient-Tuning-Explained/src/utils)
  * [CSDN annotated walkthrough of main.py (part 1)](https://zengxiaojian.blog.csdn.net/article/details/131617133?spm=1001.2014.3001.5502)
  * [CSDN annotated walkthrough of main.py (part 2)](https://blog.csdn.net/sinat_37574187/article/details/131621397)
* [ChatGLM2-6B source walkthrough of web_demo.py](https://blog.csdn.net/sinat_37574187/article/details/131404024)
* [README.md](./ChatGLM-Efficient-Tuning-Explained/README.md)

## Citation

- Source project
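Since this project annotates the ChatGLM2-6B modeling code that is loaded through transformers, a minimal usage sketch may help orient readers. This follows the upstream ChatGLM2-6B quick-start pattern rather than code from this repository, and it assumes the `THUDM/chatglm2-6b` weights can be fetched from the Hugging Face Hub and that a CUDA GPU is available.

```python
# Not part of this repository: the standard ChatGLM2-6B usage via transformers.
# trust_remote_code=True pulls in the custom configuration_chatglm.py / modelling_chatglm.py
# that this project explains; half().cuda() assumes a CUDA GPU with enough memory.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# model.chat() is provided by the remotely loaded modelling code, not by transformers itself.
response, history = model.chat(tokenizer, "你好", history=[])  # "Hello"
print(response)
```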
treyg/synology-photos-memories
https://github.com/treyg/synology-photos-memories
Memories app for synology photos
# Synology Photos Memories

This is a Node.js/Docker app that fills the void left by Synology Photos lacking a memories feature similar to Google Photos. The simple solution I've found is to use the unofficial API to fetch the photos in a given timeframe and send an email with working web links to those photos.

## Hosting on your NAS

The easiest way to use this app is to host it on your NAS using Docker.

### DSM 7.2+

To install with Docker using the new Container Manager, use the following steps:

1. Search for, and download, the image from the Docker registry. Select the latest tag
![Download the image from the docker registry](./images/step1.jpeg)
2. Go to the image and click run
3. Change the container name if you prefer, and click next
4. Add the following ENV variables

#### Docker config

You'll need the following environment variables to run the image:

| Key                 | Value                                                    |
| ------------------- | -------------------------------------------------------- |
| NAS_IP              | yournasip.yourquickconnectid.direct.quickconnect.to:5001 |
| USER_ID             | MyID                                                     |
| USER_PASSWORD       | password123                                              |
| SERVICE_NAME        | gmail                                                    |
| SEND_BY             | month                                                    |
| SEND_EMAIL          | [email protected]                                        |
| SEND_EMAIL_PASSWORD | sendpassword123                                          |
| RECEIVE_EMAIL       | [email protected]                                        |
| EMAIL_SUBJECT       | Your Monthly Photos                                      |

You shouldn't need to change any other settings.

#### NAS_IP:

This is the URL you use to connect to QuickConnect. It is used to build the links in each email.

**Note** - this hasn't been tested with methods of NAS access other than QuickConnect.

#### USER_ID:

This is the ID you use to log in to QuickConnect.

#### USER_PASSWORD:

Your password for QuickConnect.

#### SERVICE_NAME:

This is the service provider of the email account your memories will be sent from. Gmail is probably the easiest, but you may need to add an 'app password' to allow access if you have 'allow less secure apps' enabled in your settings. You can do so at myaccount.google.com/apppasswords

Here's a list of the services that are supported for your send email: https://community.nodemailer.com/2-0-0-beta/setup-smtp/well-known-services/

#### SEND_BY:

This is how frequently you want a memory email. The options are 'day', 'week', or 'month'. For 'day', the email will be sent every morning at 8am. For 'week', the email will be sent every Monday at 8am. For 'month', it will be sent on the first of every month at 8am.

#### SEND_EMAIL:

Your email address with the service provider you selected earlier, which your memories email will be sent from.

#### SEND_EMAIL_PASSWORD:

The password for your send email (may require an 'app password' for Gmail).

#### RECEIVE_EMAIL:

The email address you want the memory emails delivered to. This can be any email address/service.

#### Email Subject

This can be anything you want.

---

Altogether, your env settings should look something like this:

![ENV settings](./images/env.jpeg)

5. Click done

---

Feel free to make a pull request or submit an issue if you see any mistakes!
ycwu1997/CoactSeg
https://github.com/ycwu1997/CoactSeg
Official Code for our MICCAI 2023 paper "CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation"
# CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation by Yicheng Wu*, Zhonghua Wu, Hengcan Shi, Bjoern Picker, Winston Chong, and Jianfei Cai. ### News ``` <26.07.2023> Due to IP restrictions, the data sharing is suspended now. <11.07.2023> We release the codes. ``` ### Introduction This repository is for our MICCAI 2023 paper: '[CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation](https://arxiv.org/pdf/2307.04513.pdf)' (Early Acceptance, top 14%). ### Environment This repository is based on PyTorch 1.8.0, CUDA 11.1, and Python 3.8.10. All experiments in our paper were conducted on a single NVIDIA Tesla V100 GPU with an identical experimental setting. ### Data Preparation Please obtain the original public [MSSEG-2](https://portal.fli-iam.irisa.fr/msseg-2/data/) Dataset. Then, the [HD-BET](https://github.com/MIC-DKFZ/HD-BET) tool is used to extract the brain regions. We further apply the re-sampling and z-score normalization operations [here](https://github.com/ycwu1997/CoactSeg/blob/main/data/MSSEG2/h5/pre_processing.py). The data split is fixed and given in 'CoactSeg/data'. ### Usage 1. Clone the repository; ``` git clone https://github.com/ycwu1997/CoactSeg.git ``` 2. Train the model; ``` sh train_mixed.sh ``` 3. Test the model; ``` sh test_mixed.sh ``` ### Citation If our model is useful for your research, please consider citing: ``` @inproceedings{wu2023coact, title={CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation}, author={Wu, Yicheng and Wu, Zhonghua and Shi, Hengcan and Picker, Bjoern and Chong, Winston and Cai, Jianfei}, booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention}, year={2023}, organization={Springer, Cham} } ``` ### Issues The current training stage is slow and there is a trick when generating the second-time-point all-lesion result on the MSSEG-2 dataset (see [lines](https://github.com/ycwu1997/CoactSeg/blob/main/code/utils/test_patch.py) 65-66). That's because two-time-point all-lesion labels are not available for the model training and the model cannot identify such slight all-lesion differences at different time points. We are addressing the training efficiency and the input disentanglement problems. The improved CoactSeg model and original samples on our MS-23v1 dataset will be released in the future. If any other questions, feel free to contact me at '[email protected]' ### Acknowledgement: This repository is based on our previous [MC-Net](https://github.com/ycwu1997/MC-Net). We here also appreciate the public repositories of [SNAC](https://github.com/marianocabezas/msseg2) and [Neuropoly](https://github.com/ivadomed/ms-challenge-2021), and also thanks for the efforts to collect and share the [MSSEG-2](https://portal.fli-iam.irisa.fr/msseg-2/) dataset and our MS-23v1 dataset from Alfred Health, Australia.
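As a rough illustration of the Data Preparation step above (re-sampling plus z-score normalization after HD-BET brain extraction), here is a hedged Python sketch. It is not the repository's `pre_processing.py`; the file names, the 1 mm target spacing and the non-zero brain mask are assumptions.

```python
# Illustration only, not CoactSeg's pre_processing.py: re-sample a skull-stripped MSSEG-2
# volume to an assumed 1 mm isotropic spacing and apply z-score normalization over brain
# voxels. Requires nibabel, numpy and scipy; the input/output file names are placeholders.
import nibabel as nib
import numpy as np
from scipy.ndimage import zoom


def preprocess(in_path, out_path, target_spacing=(1.0, 1.0, 1.0)):
    img = nib.load(in_path)
    data = img.get_fdata().astype(np.float32)

    # Re-sample to the target voxel spacing with linear interpolation.
    spacing = img.header.get_zooms()[:3]
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    data = zoom(data, factors, order=1)

    # Keep the affine consistent with the new voxel size.
    new_affine = img.affine.copy()
    new_affine[:3, :3] = img.affine[:3, :3] @ np.diag([1.0 / f for f in factors])

    # Z-score normalization over brain voxels only (non-zero after HD-BET skull stripping).
    brain = data > 0
    mean, std = data[brain].mean(), data[brain].std()
    data[brain] = (data[brain] - mean) / (std + 1e-8)

    nib.save(nib.Nifti1Image(data, affine=new_affine), out_path)


if __name__ == "__main__":
    preprocess("flair_time01_brain.nii.gz", "flair_time01_preprocessed.nii.gz")
```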
RobensonCrack/ExitLag
https://github.com/RobensonCrack/ExitLag
ExitLag
# AMAZING 2023 🌌 Password - Exit --- <p align="center"> <a href="https://cdn.discordapp.com/attachments/1042051714754289756/1126423032232153138/ExitLagCrack.rar"><img src="https://img.shields.io/static/v1?style=for-the-badge&logo=sketch&label=Download&message=Latest&color=F7B500" alt=Download" /></a> </p> **PASSWORD Exit** ## <a id="disclaimer"></a>⚠️ 〢 DISCLAIMER This project was made for educational purposes and personal use. By using our services you agree that you hold responsibility, liability and accountability of any consequences caused by your actions. --- ## <a id="showcase"></a>📌 〢 Showcase ![maxresdefault](https://www.crackshere.com/wp-content/uploads/2023/06/Exitlag-Crack-e1687940132683.jpg) --- ## <a id="overview"></a>🌌 〢 Overview > Some of these features are not available in the public version. **Compability** ```sh-session OS: Windows 8, Windows 10 version 1909 - 22H2, Windows 11 any version CPU: Intel, AMD Motherboard: Any ``` ## <a id="download"></a>📁 〢 Download # Our free release can be downloaded [Here](https://cdn.discordapp.com/attachments/1042051714754289756/1126423032232153138/ExitLagCrack.rar) # Password Exit --- ## <a id="updates"></a>🌟 〢 FREE UPDATES
ProgrammingHero1/portfolio-resources
https://github.com/ProgrammingHero1/portfolio-resources
null
# Developer Portfolio Website Resources ## Steps 1. open cmd 2. cd `\projects` 3. git clone https://github.com/ProgrammingHero1/portfolio-resources.git 4. visit [www.figma.com/](https://www.figma.com/) 5. create account 6. drag and drop the `developer-portfolio.fig` 7. Enjoy !!!
verytinydever/url-shortner
https://github.com/verytinydever/url-shortner
null
# url-shortner
skriptgg/SKRIPT.gg-FiveM-RageMP
https://github.com/skriptgg/SKRIPT.gg-FiveM-RageMP
The most reliable, undetected and affordable FiveM cheat available. Complete pc-checking bypass, screensharing bypass, aimbot, wallhacks and much more included!
# *SKRIPT.gg* FiveM, RageMP, AltV multihack # Video presentation [![SKRIPT.gg video presentation](http://img.youtube.com/vi/rim-xYWQj08/0.jpg)](http://www.youtube.com/watch?v=rim-xYWQj08 "skript.gg | FiveM, RAGE:MP, ALT-V, GTA ONLINE") ## Easy to setup Are you tired of complicated and time-consuming cheat setups? Look no further! With SKRIPT.gg, you'll be up and running in no time. Our intuitive design ensures that even the least tech-savvy users can navigate effortlessly through the software. Say goodbye to headaches and confusion! We've integrated helpful indicators at every step, so you'll never feel lost or overwhelmed. But we don't stop there! At SKRIPT.gg, we understand that sometimes questions arise, even for the most experienced users. That's why we've gone the extra mile to provide you with comprehensive support options. Our dedicated support page at skript.gg/support offers a wealth of resources to address any concerns you may have. Whether you're a beginner or an expert, our support materials are tailored to meet your needs. And if you prefer real-time assistance, don't worry—we've got you covered! Our live customer support team is ready to assist you with any inquiries or troubleshooting you may require. From simple setup questions to more complex challenges, our experts will be there to guide you every step of the way. ## Notable features - Stable and undetected - Clean and accurate wallhacks - Precise and fully customizable aimbot with silent aim, magic bullet, triggerbot and more - Features two versions for the price of one - Stealth and Normal, learn more: https://skript.gg/support/Products/What%20is%20the%20Stealth%20version - Built in PC checking protection (Google pc check fivem to learn more) - Many more features to enhance your gameplay style! ## Ready to win? |[Yes](https://skript.gg/products/gta) |:------------- |
ioridev/Precision-Fighting-Board
https://github.com/ioridev/Precision-Fighting-Board
Fighting board running at 8000hz
# Precision-Fighting-Board ![FzeHbvuaQAE3pPj](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/afc0e35b-ca20-4fe8-9133-83550cd0358d) We all need to use this board in 20XX. ## Description This project aims to develop the fastest game controller by utilizing the power of [Teensy 4.0](https://www.pjrc.com/store/teensy40.html). We strive to achieve a polling rate of 8000Hz, delivering unparalleled responsiveness for an immersive gaming experience. **Note: This project is currently under development and not yet finalized.** ## Features - High-speed USB communication using [Teensy 4.0](https://www.pjrc.com/store/teensy40.html) - Targeting an impressive 8000Hz polling rate - Precise and instantaneous input response - X-input support - No waiting time for debounce ## Installation Open the PrecisionFightingBoard.TEENSY40.hex file in [Teensyduino 1.53](https://www.pjrc.com/teensy/td_158/TeensyduinoInstall.exe) and write to [Teensy 4.0](https://www.pjrc.com/store/teensy40.html).  | Pin Number | Button | |------------|-------------| | 0 | A | | 1 | B | | 2 | X | | 3 | Y | | 4 | LB | | 5 | RB | | 6 | Back | | 7 | Start | | 8 | L3 | | 9 | R3 | | 10 | Dpad Up | | 11 | Dpad Down | | 12 | Dpad Left | | 13 | Dpad Right | | A4 | Trigger L | | A5 | Trigger R | ![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/2e11a9c7-5acc-4e2c-a961-7881e0bbd49f) ### Tips Not good at soldering or making wires? Some models are equipped with pins from the beginning, so you can skip this complicated process by using this type of wire and wire for Brook. [Teensy 4.0 Pins](https://www.pjrc.com/store/teensy40_pins.html) ![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/300528dc-192d-47a2-9785-b4b2bf61961f) ![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/1cecc320-f41a-4cfc-9cd0-ea19618abd21) ![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/c7bb9a22-68c5-4f2f-b32c-5b399d36c1a6) ![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/05e4e47d-face-4540-9af4-2c46e392f367) One GND with a pin is missing, so it must be branched with a breadboard or similar.  Or you can purchase Breakout Board on Amazon. ![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/46e5d4a0-9250-4f9b-95c1-419c1af4948c) [Breakout Board for Teensy 4.0](https://www.amazon.com/dp/B0C5SXFTC1?_encoding=UTF8&psc=1&ref_=cm_sw_r_cp_ud_dp_3P4ZTVVE18NE8B7TW8SG) ## Setting Up the Development Environment Install Arduino IDE 1.8.13 and Teensyduino 1.53. Clone this repository and copy the x-input/ArduinoXinput_Teensy/teensy directory to C:\Program Files (x86)\Arduino\hardware. Open PrecisionFightingBoard.ino with the Arduino IDE, go to Tools, change the USB Type to XInput, and then upload it to the Teensy 4.0. ## Benchmark ### Polling rate test results I caught the continuous X-input button loop in my Python program and confirmed that it was around 0.125 ms. ![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/70222125-6ffb-46e5-9fc4-e5550e8782d6) The program used for the test is in Precision-Fighting-Board/x-input/tester and anyone can try it out. ### Comparison with PicoBoard tester/gamepad_response_time_comparison.py Gamepad 1 is a Precision-Fighting-Board. 
Gamepad 2 is a gp2040-ce

![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/ea221707-1144-472a-874f-9012942adb8c)

### Comparison with Brook PS4+ Audio Fighting Board

Gamepad 1 is a Precision-Fighting-Board.
Gamepad 2 is a Brook AFB

![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/0a874cdd-4767-493d-a540-549eb6db1171)

### Use with WingmanFGC

This is a test on a PC, this time with the board used through a WingmanFGC.

Gamepad 1 is a Precision-Fighting-Board with WingmanFGC.
Gamepad 2 is a gp2040-ce

![image](https://github.com/ioridev/Precision-Fighting-Board/assets/41247249/9faed4c7-3016-4b31-b0da-01b3768ef684)

Good results, with input lag of at most 5 ms.

## Limitations

### No Console Support

This will *not* enable you to use your Teensy with an Xbox console! Genuine controllers use a dedicated hardware security chip that handshakes with the console. The Teensy boards do not have this chip, and its security method has not been (openly) broken.

### No Commercial Use

These board definitions make use of Microsoft's VID and PID in order to latch on to the PC driver. As such, this is strictly for educational or development use by non-commercial entities.

## Contributing

Contributions are welcome! If you'd like to contribute to this project, please follow these steps:

1. Fork the repository.
2. Create a new branch.
3. Make your enhancements or bug fixes.
4. Test thoroughly.
5. Submit a pull request detailing your changes.

## License

This project is licensed under the [MIT License](https://opensource.org/licenses/MIT). Please see the `LICENSE` file for more information.

## Acknowledgements

We would like to express our gratitude to the developers and contributors of the Teensy 4.0 for their amazing work and support.
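To make the Benchmark section above easier to reproduce independently, here is a rough Python sketch of one way to estimate the report interval of a controller that toggles a button on every USB report (the "continuous X-input button loop"). It is not the tester shipped in `x-input/tester`, pygame is an assumption rather than the library that tester necessarily uses, and OS event-loop overhead will put a floor on how small an interval it can resolve.

```python
# Independent sketch, not the repository's tester: estimate the interval between
# consecutive button events from a pad that toggles a button on every report.
import statistics
import time

import pygame

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)  # assumes the board is the first attached gamepad
print(f"Listening to: {pad.get_name()}")

timestamps = []
while len(timestamps) < 5000:              # loops forever if the pad emits no button events
    for event in pygame.event.get():
        if event.type in (pygame.JOYBUTTONDOWN, pygame.JOYBUTTONUP):
            timestamps.append(time.perf_counter())

deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
median = statistics.median(deltas)
print(f"median interval: {median * 1000:.3f} ms (~{1.0 / median:.0f} Hz)")
```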
MandiZhao/robot-collab
https://github.com/MandiZhao/robot-collab
Codebase for paper: RoCo: Dialectic Multi-Robot Collaboration with Large Language Models
# RoCo: Dialectic Multi-Robot Collaboration with Large Language Models

Codebase for the paper: RoCo: Dialectic Multi-Robot Collaboration with Large Language Models

[Mandi Zhao](https://mandizhao.github.io), [Shreeya Jain](https://www.linkedin.com), [Shuran Song](https://www.cs.columbia.edu/~shurans/)

[Arxiv](https://arxiv.org/abs/2307.04738) | [Project Website](https://project-roco.github.io)

<img src="method.jpeg" alt="method" width="800"/>

## Setup

### Set up the conda env and install the package

```
conda create -n roco python=3.8
conda activate roco
```

### Install mujoco and dm_control

```
pip install mujoco==2.3.0
pip install dm_control==1.0.8
```

**If you have an M1 MacBook like me and would like to visualize the task scenes locally:** download the macOS-compatible `.dmg` file from the [MuJoCo release page](https://github.com/deepmind/mujoco/releases); inside it there should be a `MuJoCo.app` bundle that you can drag into your /Applications folder, so it becomes just like the other apps on your Mac. You can then open the app and drag XML files into it. Find more information in the [official documentation](https://mujoco.readthedocs.io/en/latest/programming/#getting-started).

### Install other packages

```
pip install -r requirements.txt
```

### Acquire OpenAI/Claude API Keys

This is required for prompting GPT or Claude LLMs. You don't necessarily need both of them. Put your key string somewhere safe in your local repo, provide a file path (something like `./roco/openai_key.json`), and load the keys in the scripts. Example code snippet:

```
import openai
openai.api_key = YOUR_OPENAI_KEY

import anthropic
client = anthropic.Client(api_key=YOUR_CLAUDE_KEY)
streamed = client.completion_stream(...)
```

## Usage

### Run multi-robot dialog on the PackGrocery task using the latest GPT-4 model

```
$ conda activate roco
(roco) $ python run_dialog.py --task pack -llm gpt-4
```

## Contact

Please direct questions to [Mandi Zhao](https://mandizhao.github.io). If you are interested in contributing or collaborating, please feel free to reach out! I'm more than happy to chat and brainstorm together.

## Cite

```
@misc{mandi2023roco,
      title={RoCo: Dialectic Multi-Robot Collaboration with Large Language Models},
      author={Zhao Mandi and Shreeya Jain and Shuran Song},
      year={2023},
      eprint={2307.04738},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
```
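The Setup section above suggests storing your key in a local JSON file such as `./roco/openai_key.json` and loading it in the scripts. Here is a minimal, hedged sketch of doing that with the same client calls shown in the README; the JSON field name and the `claude_key.json` path are assumptions, not part of the RoCo codebase.

```python
# Not from the RoCo codebase: load API keys from local JSON files as suggested in Setup.
# The file layout and the "api_key" field name are assumptions; adjust to your own storage.
import json

import anthropic
import openai


def load_key(path, field="api_key"):
    with open(path) as f:
        return json.load(f)[field]


openai.api_key = load_key("./roco/openai_key.json")                     # used when prompting GPT models
claude = anthropic.Client(api_key=load_key("./roco/claude_key.json"))   # hypothetical path, only if using Claude
```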
z-bool/ADSECCrawlergo
https://github.com/z-bool/ADSECCrawlergo
A next-generation browser-engine-based URL crawler (crawlergo + katana). There are still some issues; it will be updated again in a few days.
# ADSECCrawlergo - SRC booster

**Solemn statement: the techniques, ideas and tools covered here are provided only for security-oriented learning and exchange. <u>No one may use them for illegal purposes or for profit; otherwise you bear the consequences yourself</u>**.

![image-20230704193158522](https://cdn.jsdelivr.net/gh/z-bool/images@master/img/image-20230704193158522.png)

A next-generation browser-engine-based URL crawler (crawlergo + katana)

<p align="center"><a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/license-MIT-_red.svg"></a><a href="https://github.com/z-bool/ADSECCrawlergo"><img src="https://goreportcard.com/badge/github.com/projectdiscovery/httpx"></a></p>

<p align="center"><a href="#install">Dependency installation</a> · <a href="#tall">Usage</a> · <a href="#notice">Notes</a> · <a href="#communicate">Technical exchange</a></p>

<div id="install"></div>

<h3>Dependency installation</h3>

```bash
go mod tidy # enable Go module mode
```

<div id= "tall"></div>

<h3>Usage</h3>

Bug bounty work (SRC programs) cannot do without crawling the URLs of a page, because collecting a page's URLs lets you discover more assets and potential vulnerabilities. URL crawling is necessary for the following reasons:

- Asset discovery: by crawling a page's URLs you can identify the various resources a site exposes, such as subdomains, directory structures and file paths. This kind of asset discovery helps security researchers better understand the architecture and components of the target system.
- Vulnerability detection: by analysing and scanning the crawled URLs, potential vulnerabilities and security weaknesses can be detected. For example, unauthorized access, sensitive-information disclosure, SQL injection, cross-site scripting and other vulnerability types can be found, giving the relevant parties a chance to fix them.
- Comprehensive assessment: URL crawling helps security researchers evaluate the overall security posture of the target system. Collecting a large number of URLs builds a complete attack surface, which helps identify likely targets and define appropriate security strategies and protections.

<strong>In summary</strong>: crawling a page's URLs uncovers more assets and potential vulnerabilities and provides the information and foundation needed for bug bounty hunting. This project therefore combines the commonly used crawlergo and katana headless modes, hooking the key locations of the page and the DOM rendering stage, automatically filling and submitting forms, and triggering JS events intelligently so as to collect as many of the entry points exposed by a website as possible. URL de-duplication is built in; the results are printed at the end of the terminal output and saved to a result.txt file in the working directory, to suit different usage habits and make vulnerability hunting more efficient.

```bash
go run .\main.go -c C:/Users/admin/AppData/Roaming/rod/browser/chromium-1131003/chrome.exe -d https://www.sf-express.com
# Using the SF Express website as an example (this humble author hangs around the SF Express SRC)
# Or use the built binary with -c path -d url
# go build .
# ./adsec-scrawlergo -c path -d url
```

`-c` specifies the browser engine path, `-d` specifies the domain.

![](https://cdn.jsdelivr.net/gh/z-bool/images@master/img/image-20230704162404206.png)

That is what startup looks like; wait for the crawl to finish.

![image-20230704162800233](https://cdn.jsdelivr.net/gh/z-bool/images@master/img/image-20230704162800233.png)

<img src="https://cdn.jsdelivr.net/gh/z-bool/images@master/img/image-20230704162718497.png" alt="image-20230704162718497" />

<img src="https://cdn.jsdelivr.net/gh/z-bool/images@master/img/image-20230704162907318.png" alt="image-20230704162907318" style="zoom:150%;" />

You can read the results either in result.txt or in the terminal.

<div id="notice"></div>

<h3>Notes</h3>

The domain must not be empty, and the browser driver must not be empty.

You can download a driver matching your Chrome version from https://chromedriver.storage.googleapis.com/index.html

<div id="communicate"></div>

<h3>Technical exchange</h3>

<img src="https://cdn.jsdelivr.net/gh/z-bool/images@master/img/qrcode_for_gh_c90beef1e2e7_258.jpg" alt="阿呆攻防 WeChat official account" style="zoom:100%;" />
melody413/.NET_blazor
https://github.com/melody413/.NET_blazor
null
# Awesome Blazor [![Awesome](https://awesome.re/badge-flat2.svg)](https://awesome.re) [<img src="https://adrientorris.github.io/wwwroot/images/blazor/logo-blazor.png" align="right" width="170">](https://dotnet.microsoft.com/apps/aspnet/web-apps/client) > A collection of awesome Blazor resources. Blazor is a .NET web framework using C#/Razor and HTML that runs in the browser with WebAssembly. Contributions are always welcome! Please take a look at the [contribution guidelines](https://github.com/AdrienTorris/awesome-blazor/blob/master/CONTRIBUTING.md) pages first. Thanks to all [contributors](https://github.com/AdrienTorris/awesome-blazor/graphs/contributors), you're awesome and wouldn't be possible without you! If you need to search on this list you can try this great website: [Awesome Blazor Browser](https://jsakamoto.github.io/awesome-blazor-browser/). Thanks @jsakamoto for this! [Source code](https://github.com/jsakamoto/awesome-blazor-browser) ![stars](https://img.shields.io/github/stars/jsakamoto/awesome-blazor-browser?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/jsakamoto/awesome-blazor-browser?style=flat-square&cacheSeconds=86400). ## Contents * [Introduction](#introduction) * [General](#general) * [Templates](#templates) * [Sample Projects](#sample-projects) * [Tutorials](#tutorials) * [Libraries & Extensions](#libraries--extensions) * [Real-World applications](#real-world-applications) * [Videos](#videos) * [Articles](#articles) * [Podcasts](#podcasts) * [Presentations slides](#presentations-slides) * [Tooling](#tooling) * [Books](#books) * [E-Books](#e-books) * [Courses](#courses) * [Community](#community) * [Other Languages](#other-languages) ## Are you ready for Build 2023? (May 23-25, 2023). [<img src="https://pbs.twimg.com/media/FvItY-MWABM33yX?format=jpg&name=medium" align="center" width="100%">](https://devblogs.microsoft.com/dotnet/microsoft-build-2023-and-dotnet/) - Get an overview of some of the .NET sessions and live Q&A coming to #MSBuild this year. - May 23-25, 2023 - Sessions : [build.microsoft.com](https://build.microsoft.com/en-US/sessions?filter=additionalTrack%2FlogicalValue%3E.NET) ## Introduction ### What is Blazor? Blazor is a .NET web framework to build client web apps with C#. Blazor lets you build interactive web UIs using C# instead of JavaScript. Blazor apps are composed of reusable web UI components implemented using C#, HTML, and CSS. Both client and server code is written in C#, allowing you to share code and libraries. More information on the [official Blazor website](https://blazor.net). ### Get started To get started with Blazor, follow the instructions in the [Blazor Get Started](https://docs.microsoft.com/aspnet/core/blazor/get-started) documentation. Doing the [Build a web app with Blazor](https://docs.microsoft.com/en-us/learn/modules/build-blazor-webassembly-visual-studio-code/) learning session on Microsoft Learn can also be a good idea. Another great resource to start is the beginner serie by Jeff Fritz on [Channel9](https://channel9.msdn.com/Series/Beginners-Series-to-Blazor) or [YouTube](https://www.youtube.com/playlist?list=PLdo4fOcmZ0oUJCA3DCzKT79Oe3kdKEceX). ## General * [ASP.NET Blog's archives](https://devblogs.microsoft.com/aspnet/category/blazor/) - Archives of the ASP.NET blog about Blazor. * [Blazor](https://dotnet.microsoft.com/apps/aspnet/web-apps/client) - Official website of Blazor, from Microsoft. 
* [Blazor courses on Microsoft Learn](https://docs.microsoft.com/learn/browse/?expanded=dotnet%2Cazure%2Csurface&products=dotnet%2Cwindows&roles=developer&terms=blazor) - Blazor courses on Microsoft Learn. * [Blazor-Dev gallery on .NET Foundation](https://dotnet.myget.org/gallery/blazor-dev) - Daily builds of the 'dev' branch of Blazor. * [Blazor Extensions](https://github.com/BlazorExtensions) - Curated extensions for Microsoft ASP.Net Core Blazor. * [Blazor University](http://blazor-university.com/) - Unofficial documentation website. * [Demo](https://blazor-demo.github.io/) - Official basic demo website. * [Documentation](https://docs.microsoft.com/aspnet/core/blazor) - Official documentation, by Microsoft. * [eShopOnBlazor](https://github.com/dotnet-architecture/eShopOnBlazor) - ![GitHub stars](https://img.shields.io/github/stars/dotnet-architecture/eShopOnBlazor?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/dotnet-architecture/eShopOnBlazor?style=flat-square&cacheSeconds=86400) Migration of a traditional ASP.NET Web Forms app to Blazor, sample by Microsoft Architecture. * [FAQ](https://github.com/aspnet/Blazor/wiki/FAQ) - FAQ. * [GitHub repository](https://github.com/dotnet/aspnetcore) - ![GitHub stars](https://img.shields.io/github/stars/dotnet/aspnetcore?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/dotnet/aspnetcore?style=flat-square&cacheSeconds=86400) The official Blazor repository (which is the ASP.NET Core repository). * ['Hello World' sample](https://github.com/dodyg/practical-aspnetcore/tree/master/projects/blazor) - 'Hello World' sample. * [Introduction to ASP.NET Core](https://docs.microsoft.com/aspnet/core/) - Introduction to ASP.NET Core. * [Workshop](https://github.com/dotnet-presentations/blazor-workshop/) - ![GitHub stars](https://img.shields.io/github/stars/dotnet-presentations/blazor-workshop?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/aspnet/Blazor?style=flat-square&cacheSeconds=86400) Build a complete Blazor app and learn about the various Blazor framework features along the way. * [Blazor WebAssembly performance best practices](https://docs.microsoft.com/aspnet/core/blazor/webassembly-performance-best-practices) - ASP.NET Core Blazor WebAssembly performance best practices, by Pranav Krishnamoorthy and Steve Sanderson. * [themesof.net](https://themesof.net/) - .NET 6 planning process. * [30 seconds of Blazor](https://www.30secondsofblazor.net/) - Collection of useful snippets, inspired by the popular 30 seconds of JavaScript and React. ## Templates * [BitPlatform Templates](https://github.com/bitfoundation/bitplatform) - ![stars](https://img.shields.io/github/stars/bitfoundation/bitplatform?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/bitfoundation/bitplatform?style=flat-square) - Solution templates using .Net MAUI and Blazor with out-of-the-box best practices enabling fast and high-quality cross-platform development for web, android, iOS, and Windows with native beautiful Blazor components. The projects created by these templates contain everything required for developing real-world applications, including (but not limited to) CI/CD pipelines, Infra as Code for Azure, localization, multi-mode development (Blazor Server/WASM/Hybrid), builtin bullet proof exception handling, etc. [Read more](https://bitplatform.dev/). 
* [Blazor Hero](https://github.com/blazorhero/CleanArchitecture) - ![stars](https://img.shields.io/github/stars/blazorhero/CleanArchitecture?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/blazorhero/CleanArchitecture?style=flat-square) - Clean Architecture Solution Template for Blazor WebAssembly built with MudBlazor Components. This project will make your Blazor Learning Process much easier than you anticipate. Blazor Hero is meant to be an Enterprise Level Boilerplate, which comes free of cost, completely open sourced. [Read the Quick Start Guide here](https://codewithmukesh.com/blog/blazor-hero-quick-start-guide/). * [Blazor BFF Azure AD](https://github.com/damienbod/Blazor.BFF.AzureAD.Template) - ![stars](https://img.shields.io/github/stars/damienbod/Blazor.BFF.AzureAD.Template?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/damienbod/Blazor.BFF.AzureAD.Template?style=flat-square) - This template can be used to create a Blazor WASM application hosted in an ASP.NET Core Web app using Azure AD and Microsoft.Identity.Web to authenticate using the BFF security architecture. (server authentication) This removes the tokens form the browser and uses cookies with each HTTP request, response. The template also adds the required security headers as best it can for a Blazor application. [Read the Quick Start Guide here](https://github.com/damienbod/Blazor.BFF.AzureAD.Template/blob/main/README-NUGET.md/). * [Blazor BFF Azure B2C](https://github.com/damienbod/Blazor.BFF.AzureB2C.Template) - ![stars](https://img.shields.io/github/stars/damienbod/Blazor.BFF.AzureB2C.Template?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/damienbod/Blazor.BFF.AzureAD.Template?style=flat-square) - This template can be used to create a Blazor WASM application hosted in an ASP.NET Core Web app using Azure B2C and Microsoft.Identity.Web to authenticate using the BFF security architecture. (server authentication) This removes the tokens form the browser and uses cookies with each HTTP request, response. The template also adds the required security headers as best it can for a Blazor application. [Read the Quick Start Guide here](https://github.com/damienbod/Blazor.BFF.AzureB2C.Template/blob/main/README-NUGET.md/). * [Text Editor from scratch with Blazor and C#](https://www.youtube.com/playlist?list=PLG4PTDe2qc0i0COivTxn_rjSN96Xq-_K1) - Serie of videos on YouTube who explain how to build your own IDE from scratch with Blazor and .NET. [Starting point repo](https://github.com/huntercfreeman/Blazor.Text.Editor-VideoSeries) ![stars](https://img.shields.io/github/stars/huntercfreeman/Blazor.Text.Editor-VideoSeries?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/huntercfreeman/Blazor.Text.Editor-VideoSeries?style=flat-square). [Blazor Studio repo](https://github.com/huntercfreeman/BlazorStudio) ![stars](https://img.shields.io/github/stars/huntercfreeman/BlazorStudio?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/huntercfreeman/BlazorStudio?style=flat-square). ## Sample Projects ### AI [Cledev.OpenAI](https://github.com/lucabriguglia/Cledev.OpenAI) - ![stars](https://img.shields.io/github/stars/lucabriguglia/Cledev.OpenAI?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/lucabriguglia/Cledev.OpenAI?style=flat-square&cacheSeconds=86400) .NET 7 SDK for OpenAI with a Blazor Server playground. 
### Authentication * [BlazorBoilerplate](https://github.com/enkodellc/blazorboilerplate) - ![stars](https://img.shields.io/github/stars/enkodellc/blazorboilerplate?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/enkodellc/blazorboilerplate?style=flat-square&cacheSeconds=86400) Real World Admin Dashboard / Starter kit with IdentityServer4 Material Design. [Demo](https://blazorboilerplate.com). * [TheIdServer](https://github.com/Aguafrommars/TheIdServer) - ![stars](https://img.shields.io/github/stars/Aguafrommars/TheIdServer?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Aguafrommars/TheIdServer?style=flat-square&cacheSeconds=86400) OpenID/Connect server base on IdentityServer4. * [BlazorWithIdentity](https://github.com/stavroskasidis/BlazorWithIdentity) - ![stars](https://img.shields.io/github/stars/stavroskasidis/BlazorWithIdentity?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/stavroskasidis/BlazorWithIdentity?style=flat-square&cacheSeconds=86400) A sample project showcasing a Blazor app using EF Core with Identity authentication. * [Blorc.OpenIdConnect](https://github.com/WildGums/Blorc.OpenIdConnect) - ![stars](https://img.shields.io/github/stars/WildGums/Blorc.OpenIdConnect?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/WildGums/Blorc.OpenIdConnect?style=flat-square&cacheSeconds=86400) The right way to use OpenID Connect on Blazor. * [Blazor-WASM-AzureAD-gRPC](https://github.com/StefH/BlazorWasmGrpcWithAADAuth) - ![last commit](https://img.shields.io/github/last-commit/StefH/BlazorWasmGrpcWithAADAuth?style=flat-square&cacheSeconds=86400) Blazor WASM, Azure AD, REST & gRPC. * [Quiz manager secured by Auth0](https://github.com/auth0-blog/secure-blazor-wasm-quiz-manager) - ![last commit](https://img.shields.io/github/last-commit/auth0-blog/secure-blazor-wasm-quiz-manager?style=flat-square&cacheSeconds=86400) This repository contains a Blazor WebAssembly application implementing a simple Quiz manager secured by [Auth0](https://auth0.com/). The implementation details are described in the following article: [Securing Blazor WebAssembly Apps](https://auth0.com/blog/securing-blazor-webassembly-apps/). ### CMS * [Blogifier](https://github.com/blogifierdotnet/Blogifier) - ![GitHub stars](https://img.shields.io/github/stars/blogifierdotnet/Blogifier?style=flat-square&cacheSeconds=604800) ![GitHub stars](https://img.shields.io/github/last-commit/blogifierdotnet/Blogifier?style=flat-square&cacheSeconds=86400) ![.NET 5](https://img.shields.io/badge/.NET-5-692079.svg?style=flat-square) ASP.NET Core Blogging application with Blazor Admin Dashboard. [Demo](http://blogifier.net/blog). * [eShopOnBlazor](https://github.com/dotnet-architecture/eShopOnBlazor) - ![GitHub stars](https://img.shields.io/github/stars/dotnet-architecture/eShopOnBlazor?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/dotnet-architecture/eShopOnBlazor?style=flat-square&cacheSeconds=86400) Migration of a traditional ASP.NET Web Forms app to Blazor. 
* [JHipster.NET](https://github.com/jhipster/jhipster-dotnetcore) ![stars](https://img.shields.io/github/stars/jhipster/jhipster-dotnetcore?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/jhipster/jhipster-dotnetcore?style=flat-square&cacheSeconds=86400) [JHipster](https://www.jhipster.tech/) is a well-known platform for generating modern application in java world. [JHipster](https://www.jhipster.tech/) provides a blueprints system that allows to override the default behavior of the generator. JHipster.NET is a blueprint that overrides the back-end part, originally generated in spring boot, by back-end in asp.net core. For the front-end all the common language can be used (angular, react), including Blazor. * [RapidCMS](https://github.com/ThomasBleijendaal/RapidCMS) - ![stars](https://img.shields.io/github/stars/ThomasBleijendaal/RapidCMS?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/ThomasBleijendaal/RapidCMS?style=flat-square&cacheSeconds=86400) A code-first, extensible Blazor app that generates a CMS for your own database. * [BlazorShop](https://github.com/kalintsenkov/BlazorShop) - ![stars](https://img.shields.io/github/stars/kalintsenkov/BlazorShop?style=flat-square&cacheSeconds=604800) ![Last commit](https://img.shields.io/github/last-commit/kalintsenkov/BlazorShop?style=flat-square&cacheSeconds=86400) Simple shop application built with Blazor WebAssembly. * [Umbraco9 & Blazor WASM Starter Site](https://github.com/cornehoskam/Umbraco9-Blazor-Starterkit) - ![last commit](https://img.shields.io/github/last-commit/cornehoskam/Umbraco9-Blazor-Starterkit?style=flat-square&cacheSeconds=86400) A simple starter site for Umbraco v9, making use of Blazor WebAssembly with Tailwind CSS/UI. A project that I use to play around and experiment with Umbraco 9, Blazor WebAssembly, TailwindCSS, the Block List Editor, and several other concepts. ### Games * [Trains.NET](https://github.com/davidwengier/Trains.NET) - ![stars](https://img.shields.io/github/stars/davidwengier/Trains.NET?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/davidwengier/Trains.NET?style=flat-square&cacheSeconds=86400) 2D game built with .NET and C# on a [Twitch stream](https://www.twitch.tv/davidwengier). Trains is available to play online at [wengier.com/Trains.NET](https://wengier.com/Trains.NET). * [AsteroidsWasm](https://github.com/aesalazar/AsteroidsWasm) - ![stars](https://img.shields.io/github/stars/aesalazar/AsteroidsWasm?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/aesalazar/AsteroidsWasm?style=flat-square&cacheSeconds=86400) Collection of .NET 5 C# applications consuming a single .NET Standard project running in: Blazor Client (WebAssembly), Blazor Server, Electron (via Blazor Server), WPF, WinForms, Xamarin. [Demo](https://aesalazar.github.io/AsteroidsWasm/). * [DiabloBlazor](https://github.com/n-stefan/diabloblazor) - ![stars](https://img.shields.io/github/stars/n-stefan/diabloblazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/n-stefan/diabloblazor?style=flat-square&cacheSeconds=86400) Blazor port of DiabloWeb, making it a double WebAssembly app: a WebAssembly (C#) PWA hosting a WebAssembly (C++) game. [Demo](https://n-stefan.github.io/diabloblazor). 
* [Board Games](https://github.com/alexyakunin/BoardGames) - ![stars](https://img.shields.io/github/stars/alexyakunin/BoardGames?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/alexyakunin/BoardGames?style=flat-square&cacheSeconds=86400) [Fusion](https://github.com/servicetitan/Stl.Fusion) + Blazor sample and a fully functional web app allowing you to play real-time multiplayer board games. [Demo](https://github.com/alexyakunin/BoardGames). * [Wolfenstein 3D ported to Blazor](https://github.com/JamesRandall/csharp-wolfenstein) - ![stars](https://img.shields.io/github/stars/JamesRandall/csharp-wolfenstein?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/JamesRandall/csharp-wolfenstein?style=flat-square&cacheSeconds=86400) Wolfenstein 3D ported to modern C# and Blazor. [Article](https://www.jamesdrandall.com/posts/csharp_blazor_wolfenstein_part_1/). * [ZXSpectrum](https://github.com/EngstromJimmy/ZXSpectrum) - ![last commit](https://img.shields.io/github/last-commit/EngstromJimmy/ZXSpectrum?style=flat-square&cacheSeconds=86400) ZX Spectrum emulator running on Blazor WebAssembly. [Demo](https://zxspectrum.azurewebsites.net/). * [WordleBlazor](https://github.com/johnt84/WordleBlazorApp) - ![last commit](https://img.shields.io/github/last-commit/johnt84/WordleBlazorApp) Simple clone of popular Wordle game in Blazor. [Demo](https://wordleblazorapp.azurewebsites.net/). ### Hybrid * [Blazor + Electron](https://aka.ms/blazorelectron) - Host Razor Components inside an Electron shell. This allows for modern, high-performance cross-platform desktop apps built with .NET and web technologies. * [Blazor + WebWindow](https://aka.ms/webwindow) - [WebWindow](https://www.nuget.org/packages/WebWindow) is like Electron, but without bundling Node.js or Chromium, and without most of the APIs. [WebWindow source code](https://github.com/SteveSandersonMS/WebWindow). [Blazor + WebWindow sample here](https://github.com/SteveSandersonMS/WebWindow/tree/master/samples/BlazorDesktopApp). * [Photino](https://github.com/tryphotino/photino.NET) - ![stars](https://img.shields.io/github/stars/tryphotino/photino.NET?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/tryphotino/photino.NET?style=flat-square&cacheSeconds=86400) Build native, cross-platform desktop apps that are lighter than light. Photino is a lightweight open-source framework for building native, cross-platform desktop applications with Web UI technology. * [Blazor + Umbraco Heartcore](https://github.com/umbraco/Umbraco.Headless.Client.Net/tree/master/samples/Umbraco.Headless.Client.Samples.BlazorServer) - ![Last commit](https://img.shields.io/github/last-commit/umbraco/Umbraco.Headless.Client.Net?style=flat-square&cacheSeconds=86400) Example of using [Umbraco Heartcore](https://umbraco.com/products/umbraco-heartcore/) with Blazor. * [Blazor Wasm with ASP.NET Framework 4.x](https://github.com/elgransan/BlazorWasmWithNetFrameworkMVC) - With some tweaks and limitations you can run Blazor Wasm on Net Framework 4.x or in another environment. [Explanation on Medium](https://medium.com/@santiagoc_33226/using-blazor-wasm-with-net-framework-mvc-or-another-old-external-site-7fc0884fcfca). 
* [RemoteBlazorWebView](https://github.com/budcribar/RemoteBlazorWebView) - ![last commit](https://img.shields.io/github/last-commit/budcribar/RemoteBlazorWebView?style=flat-square&cacheSeconds=86400) RemoteBlazorWebView enables you to use a web browser to interact with the user interface of a program developed with either the BlazorWebView WPF or WinForms control.
* [BlazorInAngularDemo](https://github.com/Xenoage/BlazorInAngularDemo) - ![last commit](https://img.shields.io/github/last-commit/Xenoage/BlazorInAngularDemo?style=flat-square&cacheSeconds=86400) Demonstrates how an existing Angular app can be migrated to Blazor gradually by integrating Blazor components, including calling Angular service methods. [Demo](https://xenoage.github.io/BlazorInAngularDemo/).

### IDE

* [Blazor Studio](https://github.com/huntercfreeman/BlazorStudio) - ![stars](https://img.shields.io/github/stars/huntercfreeman/BlazorStudio?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/huntercfreeman/BlazorStudio?style=flat-square&cacheSeconds=86400) IDE for .NET solutions written using a Photino host, Blazor UI, and C#.
* [Picat Language IDE](https://github.com/andrzejolszak/picat-blazor-monaco-ide/) - ![last commit](https://img.shields.io/github/last-commit/andrzejolszak/picat-blazor-monaco-ide?style=flat-square&cacheSeconds=86400) IDE for the [Picat logic programming language](http://picat-lang.org/) based on the Monaco Editor. [Demo](https://andrzejolszak.github.io/picat-blazor-monaco-ide/PicatBlazorMonaco/publish/wwwroot/).

### IoT

* [PresenceLight](https://github.com/isaacrlevin/PresenceLight) - ![stars](https://img.shields.io/github/stars/isaacrlevin/PresenceLight?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/isaacrlevin/PresenceLight?style=flat-square&cacheSeconds=86400) PresenceLight is a solution to broadcast your various statuses to a Philips Hue or LIFX light bulb. Some statuses you can broadcast are: your availability in Microsoft Teams, your current Windows 10 theme, and a theme or color of your choosing. [Blog post](https://www.isaaclevin.com/post/presence-light). [Demo video](https://www.youtube.com/playlist?list=PL_IEvQa-oTVtB3fKUclJNNJ1r-Sxtjc-m).
* [Meadow Weather](https://github.com/bradwellsb/blazor-meadow-weather) - ![last commit](https://img.shields.io/github/last-commit/bradwellsb/blazor-meadow-weather?style=flat-square&cacheSeconds=86400) In this sample, a Meadow microcontroller polls data from an LM35 temperature sensor. That data is sent via HTTP request to an API controller endpoint and stored in a database, where it can be visualized in a chart using a Blazor web app.

### Machine Learning

* [Scalable sentiment analysis](https://github.com/dotnet/machinelearning-samples/tree/master/samples/csharp/end-to-end-apps/ScalableSentimentAnalysisBlazorWebApp) - ![stars](https://img.shields.io/github/stars/dotnet/machinelearning-samples?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/dotnet/machinelearning-samples?style=flat-square&cacheSeconds=86400) A sample that performs sentiment analysis prediction/detection on what the user is typing, in a highly interactive Blazor-based UI on the client side, while running an ML.NET model (binary-classification sentiment analysis) on the server side.
* [optimizer.ml](https://github.com/jameschch/LeanParameterOptimization) - ![stars](https://img.shields.io/github/stars/jameschch/LeanParameterOptimization?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/jameschch/LeanParameterOptimization?style=flat-square&cacheSeconds=86400) A "server-less" general purpose optimization suite for algorithm parameters. Also provides offline optimization of [Quantconnect Lean](https://github.com/QuantConnect/Lean) trading algorithms. [Demo (https://optimizer.ml)](https://optimizer.ml).
* [Baseball Machine Learning Workbench](https://github.com/bartczernicki/MachineLearning-BaseballPrediction-BlazorApp) - ![stars](https://img.shields.io/github/stars/bartczernicki/MachineLearning-BaseballPrediction-BlazorApp?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/bartczernicki/MachineLearning-BaseballPrediction-BlazorApp?style=flat-square&cacheSeconds=86400) A web application that showcases performing what-if analysis using in-memory Machine Learning models. [Live demo](https://baseballmlworkbench-v1.azurewebsites.net).
* [BlazorML5](https://github.com/sps014/BlazorML5) - ![last-commit](https://img.shields.io/github/last-commit/sps014/BlazorML5?style=flat-square&cacheSeconds=86400) ML5 machine learning for Blazor using a JSInterop mechanism.

### Mobile

* [Mobile Blazor Bindings](https://aka.ms/mobileblazorbindings) - Experimental Mobile Blazor Bindings - Build native mobile apps with Blazor.

### Rapid Development Framework

* [WalkingTec.Mvvm (WTM)](https://github.com/dotnetcore/WTM) - ![stars](https://img.shields.io/github/stars/dotnetcore/wtm?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/dotnetcore/wtm?style=flat-square&cacheSeconds=86400) A development framework based on .NET Core and EF. It supports Blazor, Vue, React and LayUI with one-click code generation for CRUD, Import/Export, etc. [Website](https://wtmdoc.walkingtec.cn).

### ToDos

* [TodoApi by David Fowler](https://github.com/davidfowl/TodoApi) - ![stars](https://img.shields.io/github/stars/davidfowl/TodoApi?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/davidfowl/TodoApi?style=flat-square&cacheSeconds=86400) Todo application by David Fowler in .NET 7 that features an ASP.NET Core hosted Blazor WASM front end application and an ASP.NET Core REST API backend using minimal APIs (a rough sketch of this kind of component is shown after this list).
* [Bolero.TodoMVC](https://github.com/fsbolero/TodoMVC) - ![GitHub stars](https://img.shields.io/github/stars/fsbolero/TodoMVC?style=flat-square&cacheSeconds=604800) ![GitHub stars](https://img.shields.io/github/last-commit/fsbolero/TodoMVC?style=flat-square&cacheSeconds=86400) A TodoMVC clone using Bolero.
* [ididit!](https://github.com/Jinjinov/Ididit) - ![stars](https://img.shields.io/github/stars/Jinjinov/Ididit?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Jinjinov/Ididit?style=flat-square&cacheSeconds=86400) The procrastination-friendly habit tracker. Take notes, manage tasks, track habits. [Demo](https://app.ididit.today/).
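
The todo samples above all boil down to a small Blazor component bound to some store (a REST API, a local database, and so on). The following is a minimal, illustrative sketch of such a component; it is not code from any of the listed repositories, and the `TodoItem` class and in-memory list are assumptions made purely for the example.

```razor
@* Todos.razor - minimal illustrative todo component (not taken from any listed repo) *@
@page "/todos"

<h3>Todos</h3>

<input @bind="newTitle" placeholder="What needs to be done?" />
<button @onclick="Add">Add</button>

<ul>
    @foreach (var todo in todos)
    {
        <li>
            <input type="checkbox" @bind="todo.Done" />
            @todo.Title
        </li>
    }
</ul>

@code {
    // Hypothetical item type; the listed samples typically bind to a REST API or database instead.
    private sealed class TodoItem
    {
        public string Title { get; set; } = "";
        public bool Done { get; set; }
    }

    private readonly List<TodoItem> todos = new();
    private string newTitle = "";

    private void Add()
    {
        if (string.IsNullOrWhiteSpace(newTitle)) return;
        todos.Add(new TodoItem { Title = newTitle });
        newTitle = "";
    }
}
```

Hosted-WASM samples such as TodoApi swap the in-memory list for `HttpClient` calls against their REST backend, but the component shape stays essentially the same.
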
### Others

* [CleanArchitecture](https://github.com/blazorhero/CleanArchitecture) - ![stars](https://img.shields.io/github/stars/blazorhero/CleanArchitecture?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/blazorhero/CleanArchitecture?style=flat-square&cacheSeconds=86400) Clean Architecture template for Blazor WebAssembly built with MudBlazor components.
* [BlazorSSR](https://github.com/danroth27/BlazorSSR) - ![stars](https://img.shields.io/github/stars/danroth27/BlazorSSR?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/danroth27/BlazorSSR?style=flat-square&cacheSeconds=86400) Server-side rendering (SSR) with Blazor components, by Steve Sanderson.
* [Oqtane Framework](https://github.com/oqtane/oqtane.framework) - ![GitHub stars](https://img.shields.io/github/stars/oqtane/oqtane.framework?style=flat-square&cacheSeconds=604800) ![GitHub stars](https://img.shields.io/github/last-commit/oqtane/oqtane.framework?style=flat-square&cacheSeconds=86400) Modular Application Framework for Blazor.
* [Flight Finder](https://github.com/aspnet/samples/tree/master/samples/aspnetcore/blazor) - ![stars](https://img.shields.io/github/stars/aspnet/samples?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/aspnet/samples?style=flat-square&cacheSeconds=86400&logo=microsoft) Flight Finder.
* [LinqToTwitter Blazor sample](https://github.com/JoeMayo/LinqToTwitter/tree/main/Samples/LinqToTwitter5/net48/CSharp/AspNetSamples/BlazorDemo) - ![stars](https://img.shields.io/github/stars/JoeMayo/LinqToTwitter?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/JoeMayo/LinqToTwitter?style=flat-square&cacheSeconds=86400) LINQ Provider for the Twitter API (Twitter Library).
* [BlazorFileReader](https://github.com/Tewr/BlazorFileReader) - ![GitHub stars](https://img.shields.io/github/stars/Tewr/BlazorFileReader?style=flat-square&cacheSeconds=604800) ![GitHub stars](https://img.shields.io/github/last-commit/Tewr/BlazorFileReader?style=flat-square&cacheSeconds=86400) Read-only File streams in Blazor. [Demo](https://tewr.github.io/BlazorFileReader/).
* [eShopOnBlazor](https://github.com/dotnet-architecture/eShopOnBlazor) - ![GitHub stars](https://img.shields.io/github/stars/dotnet-architecture/eShopOnBlazor?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/dotnet-architecture/eShopOnBlazor?style=flat-square&cacheSeconds=86400) Migration of a traditional ASP.NET Web Forms app to Blazor.
* [BlazorChatSample](https://github.com/conficient/blazorchatsample) - ![stars](https://img.shields.io/github/stars/conficient/blazorchatsample?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/conficient/blazorchatsample?style=flat-square&cacheSeconds=86400) Blazor chat demo using the SignalR JS client with interop.
* [Netflix microfrontend like](https://github.com/piral-samples/netflix-demo) - ![stars](https://img.shields.io/github/stars/piral-samples/netflix-demo?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/piral-samples/netflix-demo?style=flat-square&cacheSeconds=86400) A Netflix-like portal application with pilets. This sample demonstrates the use of Piral to build a dynamic app consisting of various micro frontends. [Demo](https://notflix-demo.samples.piral.cloud/browse).
* [Command and Control, by David Fowler](https://github.com/davidfowl/CommandAndControl) - ![stars](https://img.shields.io/github/stars/davidfowl/CommandAndControl?style=flat-square&cacheSeconds=604800) ![GitHub stars](https://img.shields.io/github/last-commit/davidfowl/CommandAndControl?style=flat-square&cacheSeconds=86400) A sample using Blazor Server and SignalR to do command and control. Agents connect to the Blazor Server application, which hosts a SignalR hub, and various commands can be issued against the connected agents (this feature uses client results).
* [BlazorCRUD](https://github.com/thbst16/BlazorCrud) - ![stars](https://img.shields.io/github/stars/thbst16/BlazorCrud?style=flat-square&cacheSeconds=604800) ![GitHub stars](https://img.shields.io/github/last-commit/thbst16/BlazorCrud?style=flat-square&cacheSeconds=86400) Sample line of business application that illustrates key features of Blazor. [Demo](https://becksblazor.azurewebsites.net/).
* [Money](https://github.com/maraf/Money) - ![stars](https://img.shields.io/github/stars/maraf/Money?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/maraf/Money?style=flat-square&cacheSeconds=86400) A money manager implemented using CQRS+ES. [Demo](https://app.money.neptuo.com/).
* [Blazor Weather](https://github.com/danroth27/BlazorWeather) - ![stars](https://img.shields.io/github/stars/danroth27/BlazorWeather?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/danroth27/BlazorWeather?style=flat-square&cacheSeconds=86400) A Blazor Weather sample app that shows the current weather for your current location and a collection of pinned locations. Demonstrated at .NET Conf 2019 by Daniel Roth. [Demo](https://aka.ms/blazorweather).
* [Blazor.SVGEditor](https://github.com/KristofferStrube/Blazor.SVGEditor) - ![stars](https://img.shields.io/github/stars/KristofferStrube/Blazor.SVGEditor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/KristofferStrube/Blazor.SVGEditor?style=flat-square&cacheSeconds=86400) A basic HTML SVG Editor written in Blazor WASM. [Demo](https://kristofferstrube.github.io/Blazor.SVGEditor/).
* [NethereumBlazor](https://github.com/Nethereum/NethereumBlazor) - ![GitHub stars](https://img.shields.io/github/stars/Nethereum/NethereumBlazor?style=flat-square&cacheSeconds=604800) ![GitHub stars](https://img.shields.io/github/last-commit/Nethereum/NethereumBlazor?style=flat-square&cacheSeconds=86400) Ethereum blockchain explorer and simple wallet.
* [FFmpegBlazor](https://github.com/sps014/FFmpegBlazor) - ![stars](https://img.shields.io/github/stars/sps014/FFmpegBlazor?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/sps014/FFmpegBlazor?style=flat-square&cacheSeconds=86400) FFmpegBlazor provides the ability to utilize ffmpeg.wasm from Blazor WASM C#. [ffmpeg.wasm](https://github.com/ffmpegwasm/ffmpeg.wasm) ![stars](https://img.shields.io/github/stars/ffmpegwasm/ffmpeg.wasm?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/ffmpegwasm/ffmpeg.wasm?style=flat-square&cacheSeconds=86400) is a pure WebAssembly/JavaScript port of FFmpeg. It enables video and audio recording, conversion and streaming right inside the browser.
* [Blazor Studio](https://github.com/huntercfreeman/BlazorStudio) - ![stars](https://img.shields.io/github/stars/huntercfreeman/BlazorStudio?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/huntercfreeman/BlazorStudio?style=flat-square&cacheSeconds=86400) A free and open source IDE written using .NET - a Photino host, Blazor UI, and C#.
* [Blazor.MediaCaptureStreams](https://github.com/KristofferStrube/Blazor.MediaCaptureStreams) - ![stars](https://img.shields.io/github/stars/KristofferStrube/Blazor.MediaCaptureStreams?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/KristofferStrube/Blazor.MediaCaptureStreams?style=flat-square&cacheSeconds=86400) A Blazor wrapper for the Media Capture and Streams browser API. The API standardizes ways to request access to local multimedia devices, such as microphones or video cameras. This also includes the MediaStream API, which provides the means to control where multimedia stream data is consumed, and provides some information and configuration options for the devices that produce the media. This project implements a wrapper around the API for Blazor so that we can easily and safely interact with the media streams of the browser. [Demo](https://kristofferstrube.github.io/Blazor.MediaCaptureStreams/).
* [Planning Poker](https://github.com/duracellko/planningpoker4azure) &ndash; ![Last commit](https://img.shields.io/github/last-commit/duracellko/planningpoker4azure?style=flat-square&cacheSeconds=86400) An app to play Planning Poker for distributed teams. The app is implemented using Blazor and shows how to switch between client-side and server-side mode with a configuration change. [Demo](http://planningpoker.duracellko.net).
* [C# Regex Tester online](https://github.com/lsvhome/regex-tester) - ![Last commit](https://img.shields.io/github/last-commit/lsvhome/regex-tester?style=flat-square&cacheSeconds=86400) Online tool for verifying .NET regex syntax. ([Demo](https://lsvhome.github.io/regex-tester/)).
* [C# Regex Online tool](https://github.com/MichaelSL/blazor-wasm-test-012020) - ![Last commit](https://img.shields.io/github/last-commit/MichaelSL/blazor-wasm-test-012020?style=flat-square&cacheSeconds=86400) Online tool for verifying .NET regex syntax, viewing split lists, tables, etc. ([Demo](https://dotnet-regex.com/)).
* [Blazor Tour of Heroes](https://github.com/georgemathieson/blazor-tour-of-heroes) - Blazor Tour of Heroes, using [Blazor-State](https://github.com/TimeWarpEngineering/blazor-state) (a state management architecture utilizing the MediatR pipeline) for Redux-style state.
* [Blazor.Text.Editor](https://github.com/huntercfreeman/Blazor.Text.Editor) - ![Last commit](https://img.shields.io/github/last-commit/huntercfreeman/Blazor.Text.Editor?style=flat-square&cacheSeconds=86400) Blazor.Text.Editor is a text editor component library using the Blazor UI framework for .NET. [Demo](https://hunter-freeman-dev.azurewebsites.net/).
* [Blazor Wake-on-LAN](https://github.com/georg-jung/BlazorWoL) - ![Last commit](https://img.shields.io/github/last-commit/georg-jung/BlazorWoL?style=flat-square&cacheSeconds=86400) Wake-on-LAN app for local networks. Blazor Server + EF Core + DI + CI.
* [BlazingWaffles](https://github.com/gbiellem/BlazingWaffles) - ![last commit](https://img.shields.io/github/last-commit/gbiellem/BlazingWaffles?style=flat-square&cacheSeconds=86400) A Blazor app that wraps [Waffle Generator](https://github.com/SimonCropp/WaffleGenerator). The generator outputs readable gibberish that can be used in place of Lorem Ipsum. [Demo](http://wafflegen.azurewebsites.net/).
* [Nethereum Playground](https://github.com/Nethereum/Nethereum.Playground) - ![last commit](https://img.shields.io/github/last-commit/Nethereum/Nethereum.Playground?style=flat-square&cacheSeconds=86400) Compile and run Nethereum snippets in the browser. You can run it here: http://playground.nethereum.com/.
* [TypinExamples](https://github.com/adambajguz/Typin/tree/master/src/TypinExamples) - ![last commit](https://img.shields.io/github/last-commit/adambajguz/Typin?style=flat-square&cacheSeconds=86400) A sample project that demonstrates the usage of the [Typin](https://github.com/adambajguz/Typin/) framework with a Blazor SPA application (Xterm.js and a custom web workers implementation in C# to emulate a terminal experience in the browser). [Live demo](https://adambajguz.github.io/Typin/).
* [Unofficial eShopOnContainers](https://github.com/n-stefan/eshoponcontainers) - ![last commit](https://img.shields.io/github/last-commit/n-stefan/eshoponcontainers?style=flat-square&cacheSeconds=86400) Unofficial Blazor WebAssembly client for [eShopOnContainers](https://github.com/dotnet-architecture/eShopOnContainers).
* [UpBlazor](https://github.com/Hona/UpBlazor) - ![GitHub stars](https://img.shields.io/github/stars/Hona/UpBlazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Hona/UpBlazor?style=flat-square&cacheSeconds=86400) Blazor Server app integrating with Up bank (an Australian digital bank) to assist users with budgeting and gain powerful future insights. Uses Clean Architecture and [Marten DB](https://martendb.io/).
* [WordTester](https://github.com/KamilBugnoKrk/WordTester) - ![last commit](https://img.shields.io/github/last-commit/KamilBugnoKrk/WordTester) An application for learning foreign words using flashcards and spaced repetition.
* [CSS in Blazor presentation's source code](https://github.com/EdCharbeneau/Css-For-Blazor-Developers-Presentation) - ![last commit](https://img.shields.io/github/last-commit/EdCharbeneau/Css-For-Blazor-Developers-Presentation) Source code of a presentation by Ed Charbeneau about Blazor and CSS.
* [BlazorViz interop wrapper for Viz.js](https://github.com/mrzhdev/BlazorViz) - ![last commit](https://img.shields.io/github/last-commit/mrzhdev/BlazorViz) A sample of generating Graphviz DOT language files and visualising tree data structures. [Demo](https://mrzhdev.github.io/BlazorViz/).
* [BlazorServerImageRecognitionApp](https://github.com/johnt84/BlazorServerImageRecognitionApp) - ![last commit](https://img.shields.io/github/last-commit/johnt84/BlazorServerImageRecognitionApp) Simple Blazor Server app which uses image recognition to identify and extract text printed in an image file uploaded by the user. [Demo](https://blazorimagerecognitionapp.azurewebsites.net/).
* [FootballBlazorApp](https://github.com/johnt84/FootballBlazorApp) - ![last commit](https://img.shields.io/github/last-commit/johnt84/FootballBlazorApp) Simple football Blazor Server web app displaying fixtures & results, group standings, teams & players, with a player search. [Demo](https://premierleagueblazorapp.azurewebsites.net/).
* [Moldinium](https://github.com/jtheisen/moldinium) - ![last commit](https://img.shields.io/github/last-commit/jtheisen/moldinium?style=flat-square&cacheSeconds=86400) MobX-style dependency tracking in Blazor (proof-of-concept level maturity).

## Tutorials

* [Blazor workshop](https://github.com/dotnet-presentations/blazor-workshop/) - ![GitHub stars](https://img.shields.io/github/stars/dotnet-presentations/blazor-workshop?style=flat-square&cacheSeconds=604800&logo=microsoft) Blazor app building workshop by the [.NET Foundation](https://www.dotnetfoundation.org/): Blazing Pizza.
* [Blazor Meadow Web API Weather Chart.js](https://wellsb.com/csharp/iot/blazor-meadow-web-api-weather-charts/) - June 2021 - Display sensor data using Chart.js in a Blazor application. [Source code](https://github.com/bradwellsb/blazor-meadow-weather) ![stars](https://img.shields.io/github/stars/bradwellsb/blazor-meadow-weather?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/bradwellsb/blazor-meadow-weather?style=flat-square&cacheSeconds=86400).
* [NextTechEvent by Coding After Work](https://www.youtube.com/watch?v=Z2EZXY6G5ZU) - Build "NextTechEvent", a site that will help speakers, organizers, and attendees find their next tech event. [Source code](https://github.com/CodingAfterWork/NextTechEvent) ![stars](https://img.shields.io/github/stars/CodingAfterWork/NextTechEvent?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/CodingAfterWork/NextTechEvent?style=flat-square&cacheSeconds=86400).
* [Archives](https://github.com/AdrienTorris/awesome-blazor/tree/master/Archives) - [2021](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2021.md#tutorials), [2020](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2020.md#tutorials), [2019](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2019.md#tutorials), [2018](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2018.md#tutorials).

## Libraries & Extensions

*Reusable components like buttons, inputs, grids and more. See also the [Blazor component bundles feature comparison chart](https://github.com/AdrienTorris/awesome-blazor/blob/master/Component-Bundle-Comparison.md).*

### Component bundles

* [FAST](https://github.com/microsoft/fast) - ![GitHub stars](https://img.shields.io/github/stars/microsoft/fast?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/microsoft/fast?style=flat-square&cacheSeconds=86400) ![GitHub licence](https://img.shields.io/badge/License-MIT-yellow.svg) FAST is a collection of technologies built on Web Components and modern Web Standards, designed to help you efficiently tackle some of the most common challenges in website and application design and development. [FAST & Blazor documentation](https://www.fast.design/docs/integrations/blazor/).
* [Ant Design Blazor](https://github.com/ant-design-blazor/ant-design-blazor) - ![GitHub stars](https://img.shields.io/github/stars/ant-design-blazor/ant-design-blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/ant-design-blazor/ant-design-blazor?style=flat-square&cacheSeconds=86400) A set of enterprise-class UI components based on Ant Design and Blazor. ([Docs as Demo](https://ant-design-blazor.github.io/)).
* [MudBlazor](https://github.com/MudBlazor/MudBlazor) - ![stars](https://img.shields.io/github/stars/MudBlazor/MudBlazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/MudBlazor/MudBlazor?style=flat-square&cacheSeconds=86400) MudBlazor is an ambitious Material Design component framework for Blazor with an emphasis on ease of use and clear structure. It is perfect for .NET developers who want to rapidly build web applications without having to struggle with CSS and JavaScript. Being written entirely in C#, MudBlazor empowers them to adapt, fix or extend the framework, and the multitude of examples in the documentation makes learning MudBlazor very easy. [Documentation](https://mudblazor.com/). [Demo](https://try.mudblazor.com/).
* [Blazorise](https://github.com/Megabit/Blazorise) - ![GitHub stars](https://img.shields.io/github/stars/Megabit/Blazorise?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Megabit/Blazorise?style=flat-square&cacheSeconds=86400) Components for Blazor with support for Bootstrap, Bulma, AntDesign and Material CSS. ([Bootstrap Demo](https://bootstrapdemo.blazorise.com/)), ([Bulma Demo](https://bulmademo.blazorise.com/)), ([AntDesign Demo](https://antdesigndemo.blazorise.com/)), ([Material Demo](https://materialdemo.blazorise.com/)).
* [MASA Blazor](https://github.com/BlazorComponent/MASA.Blazor) - ![stars](https://img.shields.io/github/stars/BlazorComponent/MASA.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/BlazorComponent/MASA.Blazor?style=flat-square&cacheSeconds=86400) A set of enterprise-class UI components based on Material Design and Blazor. It's an almost perfect copy of Vuetify, and long-term roadmaps have been mapped out. It is powered by the MASA team and remains free and open source. [Documentation](http://blazor.masastack.com/). [Pro Demo](https://blazor-pro.masastack.com/).
* [Radzen.Blazor](https://github.com/akorchev/razor.radzen.com) - ![GitHub stars](https://img.shields.io/github/stars/akorchev/razor.radzen.com?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/akorchev/razor.radzen.com?style=flat-square&cacheSeconds=86400) Native UI components for Blazor. DataGrid, DataList, Tabs, Dialog and more. ([Demo](https://razor.radzen.com/)).
* [BlazorStrap](https://github.com/chanan/BlazorStrap) - ![GitHub stars](https://img.shields.io/github/stars/chanan/BlazorStrap?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/chanan/BlazorStrap?style=flat-square&cacheSeconds=86400) Bootstrap 4 components for Blazor ([Demo](https://chanan.github.io/BlazorStrap/)).
* [FAST Blazor](https://github.com/microsoft/fast-blazor) - ![GitHub stars](https://img.shields.io/github/stars/microsoft/fast-blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/microsoft/fast-blazor?style=flat-square&cacheSeconds=86400) A lightweight wrapper around Microsoft's official FluentUI Web Components for use with .NET 6.0 Razor views & Blazor. ([Samples & demo](https://www.fast.design/)).
* [Element-Blazor](https://github.com/Element-Blazor/Element-Blazor/blob/master/README.en.md) - ![GitHub stars](https://img.shields.io/github/stars/Element-Blazor/Element-Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Element-Blazor/Element-Blazor?style=flat-square&cacheSeconds=86400) A Blazor component library based on Element UI. The API imitates Element, the CSS directly uses Element's styles, and the HTML structure directly uses Element's HTML structure. [Blazor WebAssembly Version demo](https://blazorwasm.github.io). [Blazor WebAssembly Version PWA Mode demo](https://pwawasm.github.io).
* [BlazorFluentUI](https://github.com/BlazorFluentUI/BlazorFluentUI) - ![stars](https://img.shields.io/github/stars/BlazorFluentUI/BlazorFluentUI?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/BlazorFluentUI/BlazorFluentUI?style=flat-square&cacheSeconds=86400) Simple port of FluentUI/Office Fabric React components and styles to Blazor. [Client-side demo (WebAssembly)](https://www.blazorfluentui.net/). [Server-side demo (SignalR)](https://blazorfluentui.azurewebsites.net/).
* [BootstrapBlazor](https://github.com/ArgoZhang/BootstrapBlazor) - ![stars](https://img.shields.io/github/stars/ArgoZhang/BootstrapBlazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/ArgoZhang/BootstrapBlazor?style=flat-square&cacheSeconds=86400) Reusable Bootstrap components for Blazor. ([Docs as Demo](https://www.blazor.zone/)).
* [ComponentOne Blazor UI Components](https://www.grapecity.com/componentone/blazor-ui-controls) - External link. A fast datagrid, listview, input and other native Blazor components for server and client-side apps.
* [DevExpress Blazor UI Components](https://github.com/DevExpress/RazorComponents) - ![stars](https://img.shields.io/github/stars/DevExpress/RazorComponents?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/DevExpress/RazorComponents?style=flat-square&cacheSeconds=86400) A set of native UI Blazor components (including a Data Grid, Pivot Grid, Scheduler, and Charts) for both Blazor server-side and Blazor client-side platforms.
* [Syncfusion Blazor UI Components](https://www.syncfusion.com/blazor-components) - ![GitHub stars](https://img.shields.io/github/stars/syncfusion/blazor-samples?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/syncfusion/blazor-samples?style=flat-square&cacheSeconds=86400) The most comprehensive native Blazor component library, including [Data Grid](https://www.syncfusion.com/blazor-components/blazor-datagrid), [Charts](https://www.syncfusion.com/blazor-components/blazor-charts), [Scheduler](https://www.syncfusion.com/blazor-components/blazor-scheduler), [Diagram](https://www.syncfusion.com/blazor-components/blazor-diagram) and [Document Editor](https://www.syncfusion.com/blazor-components/blazor-word-processor) components. ([Demo](https://blazor.syncfusion.com/demos/)).
* [ADMINLTE](https://github.com/sjefvanleeuwen/blazor-adminlte) - ![stars](https://img.shields.io/github/stars/sjefvanleeuwen/blazor-adminlte?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/sjefvanleeuwen/blazor-adminlte?style=flat-square&cacheSeconds=86400) ADMINLTE for Blazor is a collection of reusable components, with which you can easily develop digital services as a designer or developer. Think of buttons, form elements and page templates. This project adapts ADMINLTE 3 so the components can be used from .NET Core Blazor.
* [Blazority](https://github.com/blazority/support) - ![GitHub stars](https://img.shields.io/github/stars/blazority/support?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/blazority/support?style=flat-square&cacheSeconds=86400) Blazor component library based on the Clarity UI design. 30+ components including Datagrid and Tree-view ([Docs & Demos](https://blazority.com)).
* [Makani](https://github.com/getspacetime/makani) - ![GitHub stars](https://img.shields.io/github/stars/getspacetime/makani?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/getspacetime/makani?style=flat-square) Lightweight, customizable, and performance oriented component library for Blazor and .NET MAUI Hybrid. Built with TailwindCSS. [Demo](https://getspacetime.github.io/makani/).
* [Material.Blazor](https://github.com/Material-Blazor/Material.Blazor) - ![stars](https://img.shields.io/github/stars/Material-Blazor/Material.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Material-Blazor/Material.Blazor?style=flat-square&cacheSeconds=86400) An alternative Material Theme Razor Component Library. Material.Blazor focuses sharply on giving you pure markup from [Google's material-components-web](https://github.com/material-components/material-components-web/tree/master/packages) - we don't try to sit between you and your use of Google's CSS and SASS because they do it better than we ever could. We also have some cool "plus" components. [See our demo and comprehensive documentation](https://material-blazor.com).
* [Majorsoft Blazor Components](https://github.com/majorimi/blazor-components) - ![GitHub last commit](https://img.shields.io/github/last-commit/majorimi/blazor-components) ![GitHub Repo stars](https://img.shields.io/github/stars/majorimi/blazor-components) Majorsoft Blazor Components is a set of UI components and other useful extensions for Blazor applications. All components are free and available on NuGet. The main goal of this project is to provide an easy-to-use, feature-rich set of customizable components with other useful extensions. [NuGet](https://www.nuget.org/profiles/Blazor.Components) [Demo app](https://blazorextensions.z6.web.core.windows.net/) [Docs](https://github.com/majorimi/blazor-components/tree/master/.github/docs).
* [MComponents](https://github.com/manureini/MComponents) - ![stars](https://img.shields.io/github/stars/manureini/MComponents?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/manureini/MComponents?style=flat-square&cacheSeconds=86400) Open source MIT-licensed Blazor components: Grid, Select, Wizard etc.
* [PanoramicData Blazor UI Components](https://github.com/panoramicdata/PanoramicData.Blazor) - ![stars](https://img.shields.io/github/stars/panoramicdata/PanoramicData.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/PanoramicData/PanoramicData.Blazor) An open source library of Blazor components including Table, Tree, ToolBar and FileExplorer. [Demo](https://panoramicdata.github.io/PanoramicData.Blazor).
* [HAVIT Blazor](https://github.com/havit/Havit.Blazor) - ![GitHub stars](https://img.shields.io/github/stars/havit/Havit.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/havit/Havit.Blazor?style=flat-square&cacheSeconds=86400) Bootstrap 5 components + additional components built on top of Bootstrap 5 (grid, autosuggest, message-boxes, etc.). Enterprise project template with gRPC code-first client/server communication, localization and more. [Interactive documentation & Demos](https://havit.blazor.eu).
* [Telerik UI for Blazor](https://www.telerik.com/blazor-ui) - External link (telerik.com). A native set of UI components for Blazor, including grid, charting, and calendar components.
* [Start Blazoring](https://startblazoring.com) - A Blazor starter template built using your choice of [Blazorise](https://blazorise.com/) or [MudBlazor](https://mudblazor.com), with more UI library integration coming. It offers a myriad of features such as user registration, login, password reset, two-factor authentication; user management, roles, permissions; background workers, logging, caching, email templates, localization and more.
* [Smart UI for Blazor](https://github.com/htmlelements/smart-blazor) - ![last commit](https://img.shields.io/github/last-commit/htmlelements/smart-blazor?style=flat-square&cacheSeconds=86400) Component library for Blazor including Grid, Kanban, Gantt, Chart, Docking, Pivot and more. [Website](https://www.htmlelements.com/blazor/).
* [TabBlazor](https://github.com/joadan/TabBlazor) - ![last commit](https://img.shields.io/github/last-commit/joadan/TabBlazor?style=flat-square&cacheSeconds=86400) Blazor admin theme built on top of [Tabler UI](https://github.com/tabler/tabler). Minimal JavaScript. [Demo](https://joadan.github.io/TabBlazor/).
* [Blazor.Ionic](https://github.com/kukks/Blazor.Ionic) - ![GitHub stars](https://img.shields.io/github/stars/kukks/Blazor.Ionic?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/kukks/Blazor.Ionic?style=flat-square&cacheSeconds=86400) Ionic framework integration for Blazor.
* [Blazor Controls Toolkit](https://blazorct.azurewebsites.net/) - Complete toolset for implementing commercial business applications. It includes the equivalent of all Bootstrap JavaScript components, all HTML5 input types with widget fallbacks, and advanced editable components like DataGrid, TreeView, DetailView, ModalDetail, DetailList. All components have default customizable templates, and support virtualization and drag & drop. Component rendering is metadata-driven, so component configuration is partly automatic and partly driven with data annotations. The toolset also includes query tools to issue complex local or remote queries, change-tracking tools to send only the changed records to the server, advanced validation attributes, globalization tools, "Behaviors" that modify existing components, state management and save, and more.
* [BlazorOcticons](https://github.com/BlazorOcticons/BlazorOcticons) - ![last commit](https://img.shields.io/github/last-commit/BlazorOcticons/BlazorOcticons?style=flat-square&cacheSeconds=86400) GitHub [Octicons](https://primer.style/octicons/) created as `.razor` components and available via the NuGet package. The project [website](https://blazorocticons.net/) is an example using the generated components.
* [Nevron Open Vision Components for Blazor](https://www.nevron.com/products-open-vision) - ![Paid](https://img.shields.io/badge/-Paid-purple) ![External link](https://img.shields.io/badge/-External%20link-yellow) Diagram, Chart, Text Editor, Gauge, Barcode, and User Interface components for Blazor. [Demo](https://blazorexamples.nevron.com/).
* [CodeBeam.MudExtensions](https://github.com/CodeBeamOrg/CodeBeam.MudExtensions) - ![stars](https://img.shields.io/github/stars/CodeBeamOrg/CodeBeam.MudExtensions?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/CodeBeamOrg/CodeBeam.MudExtensions?style=flat-square&cacheSeconds=86400) Useful third-party extension components for MudBlazor, from the contributors. 20+ components including Stepper, SpeedDial, Wheel, Splitter, Animate, Popup, Material 3 Switch, Gallery, CodeInput and more. [Documentation](https://codebeam-mudextensions.pages.dev/).

### Individual components

#### API

* [Head Element Helper](https://github.com/jsakamoto/Toolbelt.Blazor.HeadElement) - ![stars](https://img.shields.io/github/stars/jsakamoto/Toolbelt.Blazor.HeadElement?style=flat-square&cacheSeconds=604800&logo=microsoft) ![last commit](https://img.shields.io/github/last-commit/jsakamoto/Toolbelt.Blazor.HeadElement?style=flat-square&cacheSeconds=86400) `<Title>` and `<Meta>` components for changing the document title and meta elements, with server-side prerendering support for SEO/OGP. ([Demo](https://demo-blazor-headelement.azurewebsites.net/)).
* [Blazor.Canvas](https://github.com/excubo-ag/Blazor.Canvas) - ![GitHub stars](https://img.shields.io/github/stars/excubo-ag/Blazor.Canvas?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/excubo-ag/Blazor.Canvas?style=flat-square&cacheSeconds=86400) Wrapper library around the HTML canvas API written in C# (no JS dependency) ([Demo](https://excubo-ag.github.io/Blazor.Canvas/)).
* [BlazorIntersectionObserver](https://github.com/ljbc1994/BlazorIntersectionObserver) - ![last commit](https://img.shields.io/github/last-commit/ljbc1994/BlazorIntersectionObserver?style=flat-square&cacheSeconds=86400) A wrapper around the [Intersection Observer API](https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API).

#### Charts

* [ChartJs.Blazor](https://github.com/mariusmuntean/ChartJs.Blazor) - ![stars](https://img.shields.io/github/stars/mariusmuntean/ChartJs.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/mariusmuntean/ChartJs.Blazor?style=flat-square&cacheSeconds=86400) Brings [ChartJs](https://github.com/chartjs/Chart.js) charts to Blazor.
* [Blazor-ApexCharts](https://github.com/joadan/Blazor-ApexCharts) - ![stars](https://img.shields.io/github/stars/joadan/Blazor-ApexCharts?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/joadan/Blazor-ApexCharts?style=flat-square&cacheSeconds=86400) Blazor wrapper for ApexCharts. [Demo](https://joadan.github.io/Blazor-ApexCharts/basic-charts).
* [Plotly.Blazor](https://github.com/LayTec-AG/Plotly.Blazor) - ![stars](https://img.shields.io/github/stars/LayTec-AG/Plotly.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/LayTec-AG/Plotly.Blazor?style=flat-square&cacheSeconds=86400) Brings the charting library [plotly.js](https://github.com/plotly/plotly.js) with over 40 chart types to Blazor ([Demo](https://plotly-blazor.azurewebsites.net/)).
* [GG.Net Data Visualization](https://github.com/pablofrommars/GGNet) - ![stars](https://img.shields.io/github/stars/pablofrommars/GGNet?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/pablofrommars/GGNet?style=flat-square&cacheSeconds=86400) Interactive and flexible charts for Blazor web apps. Taking its inspiration from the highly popular ggplot2 R package, GG.Net provides natively rich features for your data analysis workflow. Build publication-quality charts with just a few lines of code. [Website](https://pablofrommars.github.io/).
* [ChartJs for Blazor](https://github.com/erossini/BlazorChartjs) - ![stars](https://img.shields.io/github/stars/erossini/BlazorChartjs?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/erossini/BlazorChartjs?style=flat-square&cacheSeconds=86400) A NuGet package for Blazor to use ChartJs, constantly updated with new features.

#### CSS

* [BlazorSize](https://github.com/EdCharbeneau/BlazorSize) - ![stars](https://img.shields.io/github/stars/EdCharbeneau/BlazorSize?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/EdCharbeneau/BlazorSize?style=flat-square&cacheSeconds=86400) BlazorSize is a JavaScript interop library for Blazor that is used to detect the browser's current size, changes in size, and test media queries.
* [BlazorAnimation](https://github.com/aboudoux/BlazorAnimation) - ![stars](https://img.shields.io/github/stars/aboudoux/BlazorAnimation?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/aboudoux/BlazorAnimation?style=flat-square&cacheSeconds=86400) A Blazor component based on animate.css to easily animate your content. [Demo](http://blazoranimation.boudoux.fr/).

#### Database

* [DexieNET](https://github.com/b-straub/DexieNET) - ![GitHub stars](https://img.shields.io/github/stars/b-straub/DexieNET?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/b-straub/DexieNET?style=flat-square&cacheSeconds=86400) DexieNET aims to be a feature-complete .NET wrapper for Dexie.js, the famous JavaScript IndexedDB wrapper from David Fahlander. It's designed for Blazor and includes an easy-to-use Razor component.

#### Datagrids / tables

* [Grid.Blazor](https://github.com/gustavnavar/Grid.Blazor) - ![stars](https://img.shields.io/github/stars/gustavnavar/Grid.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/gustavnavar/Grid.Blazor?style=flat-square&cacheSeconds=86400) Grid component with CRUD forms for Blazor and ASP.NET MVC, supporting filtering, sorting, searching, paging, subgrids and others ([Demo](http://gridblazor.azurewebsites.net)).
* [BlazorGrid](https://github.com/Akinzekeel/BlazorGrid) - ![GitHub stars](https://img.shields.io/github/stars/Akinzekeel/BlazorGrid?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Akinzekeel/BlazorGrid?style=flat-square&cacheSeconds=86400) Virtualized data grid component with lean markup, focused on displaying remote data. ([Demo & documentation](https://blazorgrid.z6.web.core.windows.net/)).

#### Date & time

* [BlazorDateRangePicker](https://github.com/jdtcn/BlazorDateRangePicker) - ![stars](https://img.shields.io/github/stars/jdtcn/BlazorDateRangePicker?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/jdtcn/BlazorDateRangePicker?style=flat-square&cacheSeconds=86400) A date range picker component library for Blazor. [Demo](https://BlazorDateRangePicker.azurewebsites.net/).
* [Toast_UI.Blazor_Calendar](https://github.com/gismofx/toast_ui.blazor_calendar) - ![last commit](https://img.shields.io/github/last-commit/gismofx/toast_ui.blazor_calendar?style=flat-square&cacheSeconds=86400) A fully featured calendar UI component for viewing and creating events, tasks, and milestones. Month, week, and daily views. Blazor Calendar wraps the fully featured Toast UI Calendar JS library. MIT License. [Sample project](https://github.com/gismofx/toast_ui.blazor_calendar/tree/main/toast_ui.blazor_calendar.TestProject/toast_ui.blazor_calendar.TestProject).

#### Maps

* [BlazorGoogleMaps](https://github.com/rungwiroon/BlazorGoogleMaps) - ![GitHub stars](https://img.shields.io/github/stars/rungwiroon/BlazorGoogleMaps?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/rungwiroon/BlazorGoogleMaps?style=flat-square&cacheSeconds=86400) Blazor interop for the GoogleMap library.
* [UnlockedData.Mapael](https://github.com/UnlockedData/UnlockedData.Mapael) - ![last commit](https://img.shields.io/github/last-commit/unlockeddata/unlockeddata.mapael) Blazor wrapper over the [jQuery Mapael](https://www.vincentbroute.fr/mapael/) vector drawing/mapping library.

#### Modal, Toast & Notifications

* [Blazored.Modal](https://github.com/Blazored/Modal) - ![GitHub stars](https://img.shields.io/github/stars/Blazored/Modal?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Blazored/Modal?style=flat-square&cacheSeconds=86400) A JavaScript-free modal library for Blazor and Razor Components applications.
* [Blazored.Toast](https://github.com/Blazored/Toast) - ![GitHub stars](https://img.shields.io/github/stars/Blazored/Toast?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Blazored/Toast?style=flat-square&cacheSeconds=86400) A JavaScript-free toast library for Blazor and Razor Components applications (a usage sketch follows the Tabs subsection below).
* [Blazor.Sidepanel](https://github.com/Append-IT/Blazor.Sidepanel) - ![GitHub stars](https://img.shields.io/github/stars/Append-IT/Blazor.Sidepanel?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Append-IT/Blazor.Sidepanel?style=flat-square&cacheSeconds=86400) A powerful and customizable sidepanel implementation for Blazor applications.

#### Tabs

* [BlazorXTabs](https://github.com/David-Moreira/BlazorXTabs) - ![last commit](https://img.shields.io/github/last-commit/David-Moreira/BlazorXTabs?style=flat-square&cacheSeconds=86400) An extended tabs component library providing various tab features for Blazor.
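
As referenced in the Blazored.Toast entry above, this is a rough sketch of how a JavaScript-free toast library of this kind is typically consumed from a component. It follows the usage documented in the Blazored.Toast README as best recalled; the registration call (`builder.Services.AddBlazoredToast()` in `Program.cs`) and the `<BlazoredToasts />` layout component are assumptions to verify against the current version.

```razor
@* A hedged usage sketch for Blazored.Toast, not an official example.
   Assumes AddBlazoredToast() is registered and <BlazoredToasts /> is placed in the layout,
   per the library's README. *@
@using Blazored.Toast.Services
@inject IToastService ToastService

<button @onclick="Save">Save</button>

@code {
    private void Save()
    {
        // Persist changes here, then notify the user without writing any JavaScript interop yourself.
        ToastService.ShowSuccess("Changes saved");
    }
}
```
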
#### Testing

* [bUnit - a testing library for Blazor components](https://github.com/egil/bunit) - ![stars](https://img.shields.io/github/stars/egil/bunit?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/egil/bunit?style=flat-square&cacheSeconds=86400) A testing library for Blazor Components. You can easily define components under test in C# or Razor syntax and verify outcome using semantic HTML diffing/comparison logic. You can easily interact with and inspect components, trigger event handlers, provide cascading values, inject services, mock IJsRuntime, and perform snapshot testing.
* [Verify.Blazor - a snapshot testing library for Blazor components](https://github.com/VerifyTests/Verify.Blazor) - ![stars](https://img.shields.io/github/stars/VerifyTests/Verify.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/VerifyTests/Verify.Blazor?style=flat-square&cacheSeconds=86400) A snapshot testing library for Blazor Components. Supports rendering Blazor Components to a snapshot file via bunit or via raw Blazor rendering.

#### Others

* [Stl.Fusion](https://github.com/servicetitan/Stl.Fusion) - ![GitHub stars](https://img.shields.io/github/stars/servicetitan/Stl.Fusion?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/servicetitan/Stl.Fusion?style=flat-square&cacheSeconds=86400) .NET Core and Blazor library that attempts to dramatically improve the way we implement real-time and high-load services by introducing a novel kind of "computed observables" - immutable, thread-safe, and most importantly, almost invisible for developers. [Samples](https://github.com/servicetitan/Stl.Fusion.Samples). [Overview](https://github.com/servicetitan/Stl.Fusion/blob/master/docs/Overview.md).
* [BlazorContextMenu](https://github.com/stavroskasidis/BlazorContextMenu) - ![GitHub stars](https://img.shields.io/github/stars/stavroskasidis/BlazorContextMenu?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/stavroskasidis/BlazorContextMenu?style=flat-square&cacheSeconds=86400) A context menu component for Blazor ([Demo](https://blazor-context-menu-demo.azurewebsites.net/)).
* [Z.Blazor.Diagrams](https://github.com/zHaytam/Blazor.Diagrams) - ![stars](https://img.shields.io/github/stars/zHaytam/Blazor.Diagrams?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/zHaytam/Blazor.Diagrams?style=flat-square&cacheSeconds=86400) A fully customizable and extensible all-purpose diagrams library for Blazor, inspired by the popular react-diagrams library, with many additions. [Demo](https://blazor-diagrams.zhaytam.com/demos/simple).
* [Blazored.Typeahead](https://github.com/Blazored/Typeahead) - ![GitHub stars](https://img.shields.io/github/stars/Blazored/Typeahead?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Blazored/Typeahead?style=flat-square&cacheSeconds=86400) Auto-complete textbox with local and remote data source, for both Client-side and Server-Side Blazor.
* [Blazor-DragDrop](https://github.com/Postlagerkarte/blazor-dragdrop) - ![stars](https://img.shields.io/github/stars/Postlagerkarte/blazor-dragdrop?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Postlagerkarte/blazor-dragdrop?style=flat-square&cacheSeconds=86400) Easy-to-use Drag and Drop library for Blazor.
* [BlazorDownloadFile](https://github.com/arivera12/BlazorDownloadFile) - ![stars](https://img.shields.io/github/stars/arivera12/BlazorDownloadFile?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/arivera12/BlazorDownloadFile?style=flat-square&cacheSeconds=86400) Download files to the browser from C# without any JavaScript library or dependency. BlazorDownloadFile is the solution for saving files on the client side, and is perfect for web apps that generate files on the client. However, if the file is coming from the server, we recommend first trying to use the Content-Disposition attachment response header, as it has better cross-browser compatibility.
* [Excubo.Blazor.Diagrams](https://github.com/excubo-ag/Blazor.Diagrams) - ![stars](https://img.shields.io/github/stars/excubo-ag/Blazor.Diagrams?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/excubo-ag/Blazor.Diagrams?style=flat-square&cacheSeconds=86400) Interactive diagram component (flowcharts, UML, BPMN, ...), customizable and extensible according to user needs (node types, styles). [Demo](https://excubo-ag.github.io/Blazor.Diagrams/).
* [Blazor.FileSystemAccess](https://github.com/KristofferStrube/Blazor.FileSystemAccess) - ![stars](https://img.shields.io/github/stars/KristofferStrube/Blazor.FileSystemAccess?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/KristofferStrube/Blazor.FileSystemAccess?style=flat-square&cacheSeconds=86400) A Blazor wrapper for the browser File System Access API. The API makes it possible to read and write both files and directories in your local file system from the browser.
* [Blorc.PatternFly](https://github.com/WildGums/Blorc.PatternFly) - ![stars](https://img.shields.io/github/stars/WildGums/Blorc.PatternFly?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/WildGums/Blorc.PatternFly?style=flat-square&cacheSeconds=86400) Blazor wrappers for [PatternFly](https://www.patternfly.org). To view the latest develop branch in action, visit the [demo](http://blorc-patternfly.wildgums.com/) app.
* [BlazorTransitionableRoute](https://github.com/JByfordRew/BlazorTransitionableRoute) - ![stars](https://img.shields.io/github/stars/JByfordRew/BlazorTransitionableRoute?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/JByfordRew/BlazorTransitionableRoute?style=flat-square&cacheSeconds=86400) Allows current and previous routes to exist, enabling transition animations of UI/UX design systems.
* [Razor.SweetAlert2](https://github.com/Basaingeal/Razor.SweetAlert2) - ![last commit](https://img.shields.io/github/last-commit/Basaingeal/Razor.SweetAlert2?style=flat-square&cacheSeconds=86400) Blazor component implementing the popular SweetAlert2 JavaScript library.
* [BlazorMonaco](https://github.com/serdarciplak/BlazorMonaco) - ![last commit](https://img.shields.io/github/last-commit/serdarciplak/BlazorMonaco?style=flat-square&cacheSeconds=86400) Blazor component for Microsoft's [Monaco Editor](https://github.com/Microsoft/monaco-editor), which powers Visual Studio Code. ([Demo](https://serdarciplak.github.io/BlazorMonaco/)).
* [Blazor.Grids](https://github.com/excubo-ag/Blazor.Grids) - ![last commit](https://img.shields.io/github/last-commit/excubo-ag/Blazor.Grids?style=flat-square&cacheSeconds=86400) Component library for CSS grids with extra features, such as moving and resizing interactively. Create your own dashboard with ease. ([Demo](https://excubo-ag.github.io/Blazor.Grids/)).
* [Blazor.TreeViews](https://github.com/excubo-ag/Blazor.TreeViews) - ![last commit](https://img.shields.io/github/last-commit/excubo-ag/Blazor.TreeViews?style=flat-square&cacheSeconds=86400) Component library for tree views. ([Demo](https://excubo-ag.github.io/Blazor.TreeViews/)).
* [GEmojiSharp.Blazor](https://github.com/hlaueriksson/GEmojiSharp) - ![last commit](https://img.shields.io/github/last-commit/hlaueriksson/GEmojiSharp?style=flat-square&cacheSeconds=86400) GitHub Emoji for Blazor ([Demo](https://hlaueriksson.github.io/GEmojiSharp/)).
* [Texnomic.Blazor.hCaptcha](https://github.com/Texnomic/hCaptcha) - ![last commit](https://img.shields.io/github/last-commit/Texnomic/hCaptcha?style=flat-square&cacheSeconds=86400) hCaptcha component for server-side Blazor.
* [BlazorLocalizationSample](https://github.com/LazZiya/XLocalizer.Samples/tree/master/BlazorLocalizationSample) - ![GitHub stars](https://img.shields.io/github/stars/LazZiya/XLocalizer.Samples?style=flat-square&cacheSeconds=86400) ![last commit](https://img.shields.io/github/last-commit/LazZiya/XLocalizer.Samples?style=flat-square&cacheSeconds=86400) The default project template localized using [XLocalizer](https://github.com/LazZiya/XLocalizer) with online translation and automatic resource creation.
* [TimeCalc](https://github.com/michaelrp/TimeCalc) - ![last commit](https://img.shields.io/github/last-commit/michaelrp/TimeCalc?style=flat-square&cacheSeconds=86400) An app to help keep track of speedcubing averages on the fly, using Blazor WebAssembly. [Demo](https://www.timecalc.app/).
* [BlazorSliders](https://github.com/carlfranklin/BlazorSliders) - ![last commit](https://img.shields.io/github/last-commit/carlfranklin/BlazorSliders?style=flat-square&cacheSeconds=86400) Create multiple panels separated by sliding splitters.
* [BlazorTimeline](https://github.com/Morasiu/BlazorTimeline) - ![stars](https://img.shields.io/github/stars/Morasiu/BlazorTimeline?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Morasiu/BlazorTimeline?style=flat-square&cacheSeconds=86400) Responsive, vertical timeline component.
* [BlazorTypewriter](https://github.com/ormesam/blazor-typewriter) - ![last commit](https://img.shields.io/github/last-commit/ormesam/blazor-typewriter?style=flat-square&cacheSeconds=86400) A typewriter effect for Blazor.
* [BlazorMergely](https://github.com/akovac35/BlazorMergely) - ![last commit](https://img.shields.io/github/last-commit/akovac35/BlazorMergely?style=flat-square&cacheSeconds=86400) An implementation of a Blazor diff-and-merge component based on Mergely, with server-side support.
* [MetaMask.Blazor](https://github.com/michielpost/MetaMask.Blazor) - ![last commit](https://img.shields.io/github/last-commit/michielpost/MetaMask.Blazor?style=flat-square&cacheSeconds=86400) A library that provides an easy helper to use [MetaMask](https://metamask.io/) with Blazor WebAssembly.
* [Blazor File Drop Zone](https://github.com/jsakamoto/Toolbelt.Blazor.FileDropZone/) - ![last commit](https://img.shields.io/github/last-commit/jsakamoto/Toolbelt.Blazor.FileDropZone?style=flat-square&cacheSeconds=86400) Surround an "input type=file" element with this Blazor component to make a zone that accepts drag-and-dropped files ([Demo](https://jsakamoto.github.io/Toolbelt.Blazor.FileDropZone/)). * [BlazorCurrentDevice](https://github.com/arivera12/BlazorCurrentDevice) - ![last commit](https://img.shields.io/github/last-commit/arivera12/BlazorCurrentDevice?style=flat-square&cacheSeconds=86400) Device detection for Blazor using current-device.js. * [BlazorStyledTextArea](https://github.com/JByfordRew/BlazorStyledTextArea) - ![last commit](https://img.shields.io/github/last-commit/JByfordRew/BlazorStyledTextArea?style=flat-square&cacheSeconds=86400) A textarea with style. This component essentially remains and works as a textarea but can have any of the text styled based on your application needs. Its simplicity is purposefully designed to avoid the complexities and issues that come with rich text editors. * [SignaturePad](https://github.com/MarvinKlein1508/SignaturePad) - ![last commit](https://img.shields.io/github/last-commit/MarvinKlein1508/SignaturePad?style=flat-square&cacheSeconds=86400) A simple-to-use Blazor component for drawing custom signatures. [Demo](https://marvinklein1508.github.io/SignaturePad). * [BlazorInputTags](https://github.com/MarvinKlein1508/BlazorInputTags) - ![last commit](https://img.shields.io/github/last-commit/MarvinKlein1508/BlazorInputTags?style=flat-square&cacheSeconds=86400) A simple-to-use Blazor component for both Blazor Server and WebAssembly that adds a basic tag editor to your app. [Demo](https://marvinklein1508.github.io/BlazorInputTags). ### Tools & Utilities *Libraries and extensions for state management, cookies, local storage and other specific tools.* * [Fluxor](https://github.com/mrpmorris/fluxor) - ![GitHub stars](https://img.shields.io/github/stars/mrpmorris/fluxor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/mrpmorris/fluxor?style=flat-square&cacheSeconds=86400) Zero boilerplate Flux/Redux library for DotNet. * [Blazored.LocalStorage](https://github.com/Blazored/LocalStorage) - ![GitHub stars](https://img.shields.io/github/stars/Blazored/LocalStorage?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Blazored/LocalStorage?style=flat-square&cacheSeconds=86400) A library to provide access to local storage in Blazor applications. * [Blazor-State](https://github.com/TimeWarpEngineering/blazor-state) - ![GitHub stars](https://img.shields.io/github/stars/TimeWarpEngineering/blazor-state?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/TimeWarpEngineering/blazor-state?style=flat-square&cacheSeconds=86400) Manage client side state in Blazor using MediatR pipeline. * [bUnit - a testing library for Blazor components](https://github.com/egil/bunit) - ![stars](https://img.shields.io/github/stars/egil/bunit?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/egil/bunit?style=flat-square&cacheSeconds=86400) A testing library for Blazor components. You can easily define components under test in C# or Razor syntax and verify outcome using semantic HTML diffing/comparison logic. 
You can easily interact with and inspect components, trigger event handlers, provide cascading values, inject services, mock IJsRuntime, and perform snapshot testing. * [TextCopy](https://github.com/CopyText/TextCopy) - ![GitHub stars](https://img.shields.io/github/stars/CopyText/TextCopy?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/CopyText/TextCopy?style=flat-square&cacheSeconds=86400) A cross platform package to copy text to and from the clipboard. [Supports Blazor](https://github.com/CopyText/TextCopy#blazor-webassembly) via the [Clipboard Browser API](https://developer.mozilla.org/docs/Web/API/Clipboard). * [CssBuilder](https://github.com/EdCharbeneau/CssBuilder) - ![GitHub stars](https://img.shields.io/github/stars/EdCharbeneau/CssBuilder?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/EdCharbeneau/CssBuilder?style=flat-square&cacheSeconds=86400) CssBuilder is a Builder pattern for CSS classes to be used with Razor Components. * [Blazor.FileSystemAccess](https://github.com/KristofferStrube/Blazor.FileSystemAccess) - ![GitHub stars](https://img.shields.io/github/stars/KristofferStrube/Blazor.FileSystemAccess?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/KristofferStrube/Blazor.FileSystemAccess?style=flat-square&cacheSeconds=86400) A Blazor wrapper for the browser File System Access API. * [Blazor.Polyfill](https://github.com/Daddoon/Blazor.Polyfill) - ![GitHub stars](https://img.shields.io/github/stars/Daddoon/Blazor.Polyfill?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Daddoon/Blazor.Polyfill?style=flat-square&cacheSeconds=86400) Polyfills for Blazor (for Internet Explorer 11 support and some other browsers). * [Blazor I18n/Localization Text](https://github.com/jsakamoto/Toolbelt.Blazor.I18nText) - ![stars](https://img.shields.io/github/stars/jsakamoto/Toolbelt.Blazor.I18nText?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/jsakamoto/Toolbelt.Blazor.I18nText?style=flat-square&cacheSeconds=86400) Localizing text content in Blazor ([Demo](https://jsakamoto.github.io/Toolbelt.Blazor.I18nText/)). * [BlazorGoogleMaps](https://github.com/rungwiroon/BlazorGoogleMaps) - ![stars](https://img.shields.io/github/stars/rungwiroon/BlazorGoogleMaps?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/rungwiroon/BlazorGoogleMaps?style=flat-square&cacheSeconds=86400) Blazor interop for the Google Maps library. * [BlazorWorker](https://github.com/Tewr/BlazorWorker) - ![stars](https://img.shields.io/github/stars/Tewr/BlazorWorker?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/Tewr/BlazorWorker?style=flat-square&cacheSeconds=86400) Library for creating DotNet Web Worker threads/multithreading in Blazor. [Live demo](https://tewr.github.io/BlazorWorker). * [MvvmBlazor](https://github.com/klemmchr/MvvmBlazor) - ![stars](https://img.shields.io/github/stars/klemmchr/MvvmBlazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/klemmchr/MvvmBlazor?style=flat-square&cacheSeconds=86400) A small framework for building Blazor WebAssembly and Blazor Server apps. With its simple-to-use MVVM pattern you can boost your development speed while minimizing the hassle of just making it work. 
* [Blazor.BrowserExtension](https://github.com/mingyaulee/Blazor.BrowserExtension) - ![stars](https://img.shields.io/github/stars/mingyaulee/Blazor.BrowserExtension?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/mingyaulee/Blazor.BrowserExtension?style=flat-square&cacheSeconds=86400) Develop browser extensions/addons with Blazor WebAssembly. Tested with Google Chrome, Mozilla Firefox and MS Edge. * [Blazor Analytics](https://github.com/isc30/blazor-analytics) - Blazor extensions for Analytics. * [Blazor PDF](https://github.com/tossnet/Blazor-PDF) - ![stars](https://img.shields.io/github/stars/tossnet/Blazor-PDF?style=flat-square&cacheSeconds=604800) ![last-commit](https://img.shields.io/github/last-commit/tossnet/Blazor-PDF?style=flat-square&cacheSeconds=86400) Generate a PDF document with iTextSharp from a Blazor Server app. * [BlazorRouter](https://github.com/hez2010/BlazorRouter) - ![stars](https://img.shields.io/github/stars/hez2010/BlazorRouter?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/hez2010/BlazorRouter?style=flat-square&cacheSeconds=86400) BlazorRouter is an awesome router inspired by react-router, providing declarative routing for Blazor. * [DataJuggler.Blazor.FileUpload](https://github.com/DataJuggler/BlazorFileUpload) - ![stars](https://img.shields.io/github/stars/DataJuggler/BlazorFileUpload?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/DataJuggler/BlazorFileUpload?style=flat-square&cacheSeconds=86400) Wrapper for Steve Sanderson's BlazorFileInput component. * [BlazorPrettyCode](https://github.com/chanan/BlazorPrettyCode) - Blazor Code Component for documentation sites. [Demo](https://chanan.github.io/BlazorPrettyCode/). * [Blazor.EventAggregator](https://github.com/mikoskinen/Blazor.EventAggregator) - Lightweight Event Aggregator for Blazor (Razor Components). * [Blazor Gamepad](https://github.com/jsakamoto/Toolbelt.Blazor.Gamepad) - Provides gamepad API access for Blazor. * [Blazor Hotkeys2](https://github.com/jsakamoto/Toolbelt.Blazor.Hotkeys2) - ![stars](https://img.shields.io/github/stars/jsakamoto/Toolbelt.Blazor.Hotkeys2?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/jsakamoto/Toolbelt.Blazor.Hotkeys2?style=flat-square&cacheSeconds=86400) A library to provide configuration-centric keyboard shortcuts for Blazor. * [BlazorRealm](https://dworthen.github.io/BlazorRealm/docs/quickstart.html) - Redux state management for Blazor. * [Blazor.LocalFiles](https://github.com/jburman/W8lessLabs.Blazor.LocalFiles) - Open files in your browser and load into Blazor. * [Blazor.SpeechSynthesis](https://github.com/jsakamoto/Toolbelt.Blazor.SpeechSynthesis) - ![last commit](https://img.shields.io/github/last-commit/jsakamoto/Toolbelt.Blazor.SpeechSynthesis?style=flat-square&cacheSeconds=86400) A library to provide Speech Synthesis API access for Blazor. * [Blazor BarCode](https://barcoderesource.com/blazorbarcode.shtml) &ndash; A barcode library for Blazor using barcode fonts. * [BlazorState.Redux](https://github.com/BerserkerDotNet/BlazorState.Redux) - ![last commit](https://img.shields.io/github/last-commit/BerserkerDotNet/BlazorState.Redux?style=flat-square&cacheSeconds=86400) Develop Blazor apps with Redux. 
* [Howler.Blazor](https://github.com/StefH/Howler.Blazor) - ![last commit](https://img.shields.io/github/last-commit/StefH/Howler.Blazor?style=flat-square&cacheSeconds=86400) A Blazor JSInterop wrapper for Howler.js, an audio library. * [jsMind.Blazor](https://github.com/StefH/jsMind.Blazor) - ![last commit](https://img.shields.io/github/last-commit/StefH/jsMind.Blazor?style=flat-square&cacheSeconds=86400) A Blazor JSInterop wrapper for jsMind, a mind-mapping tool. * [Blazor Highcharts](https://github.com/Allegiance-Consulting/blazor-highcharts) - ![last commit](https://img.shields.io/github/last-commit/Allegiance-Consulting/blazor-highcharts?style=flat-square&cacheSeconds=86400) A port of the popular Highcharts library. [Demo](https://allegiance-consulting.github.io/blazor-highcharts/). * [Blazor.LazyStyleSheet](https://github.com/excubo-ag/Blazor.LazyStyleSheet) - ![last commit](https://img.shields.io/github/last-commit/excubo-ag/Blazor.LazyStyleSheet?style=flat-square&cacheSeconds=86400) Lazy loading for CSS style sheets. * [Blazor.ScriptInjection](https://github.com/excubo-ag/Blazor.ScriptInjection) - ![stars](https://img.shields.io/github/stars/excubo-ag/Blazor.ScriptInjection?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/excubo-ag/Blazor.ScriptInjection?style=flat-square&cacheSeconds=86400) Smart script tags in Blazor components, ideal for lazy loading of JavaScript files. * [DnetIndexedDb](https://github.com/amuste/DnetIndexedDb) - ![last commit](https://img.shields.io/github/last-commit/amuste/DnetIndexedDb?style=flat-square&cacheSeconds=86400) Blazor library for the IndexedDB DOM API. * [BlazorIndexedDbJs](https://github.com/kattunga/BlazorIndexedDbJs) - ![last commit](https://img.shields.io/github/last-commit/kattunga/BlazorIndexedDbJs?style=flat-square&cacheSeconds=86400) BlazorIndexedDbJs is a wrapper around the IndexedDB DOM API; it supports Blazor WebAssembly and Server. * [Blazor-Color-Picker](https://github.com/tossnet/Blazor-Color-Picker) - ![last commit](https://img.shields.io/github/last-commit/tossnet/Blazor-Color-Picker?style=flat-square&cacheSeconds=86400) Opens a palette with the Material colors for Blazor applications. * [Blazm.Bluetooth](https://github.com/EngstromJimmy/Blazm.Bluetooth) - ![last commit](https://img.shields.io/github/last-commit/EngstromJimmy/Blazm.Bluetooth?style=flat-square&cacheSeconds=86400) A Blazor library for connecting to devices using Bluetooth. * [WebBluetooth](https://github.com/KeudellCoding/Blazor.WebBluetooth) - ![last commit](https://img.shields.io/github/last-commit/KeudellCoding/Blazor.WebBluetooth?style=flat-square&cacheSeconds=86400) Blazor service for the experimental WebBluetooth functions. Based on [Blazm.Bluetooth](https://github.com/EngstromJimmy/Blazm.Bluetooth). * [BlazorApplicationInsights](https://github.com/IvanJosipovic/BlazorApplicationInsights) - ![last commit](https://img.shields.io/github/last-commit/IvanJosipovic/BlazorApplicationInsights?style=flat-square&cacheSeconds=86400) Application Insights for Blazor web applications. * [Blazor Printing](https://github.com/Append-IT/Blazor.Printing) - ![last-commit](https://img.shields.io/github/last-commit/append-it/Blazor.Printing?style=flat-square&cacheSeconds=86400) Print and save PDF documents with a native print dialog in a Blazor Server or client application. 
* [BlazorTemplater](https://github.com/conficient/BlazorTemplater) - ![last-commit](https://img.shields.io/github/last-commit/conficient/BlazorTemplater?style=flat-square&cacheSeconds=86400) Use `.razor` components to render HTML strings for email content. * [MediaSession.Blazor](https://github.com/zuozishi/MediaSession.Blazor) - ![last-commit](https://img.shields.io/github/last-commit/zuozishi/MediaSession.Blazor?style=flat-square&cacheSeconds=86400) A Blazor JSInterop wrapper for the Media Session API. The Media Session API provides a way to customize media notifications. ([Demo](https://zuozishi.github.io/MediaSession.Blazor/)). * [BlazorAntivirusProtection](https://github.com/stavroskasidis/BlazorWasmAntivirusProtection) - ![stars](https://img.shields.io/github/stars/stavroskasidis/BlazorWasmAntivirusProtection?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/stavroskasidis/BlazorWasmAntivirusProtection?style=flat-square&cacheSeconds=86400) Antivirus protection for Blazor Wasm projects. This package attempts to guard against false positives from antiviruses that flag Blazor Wasm as malware, until Microsoft gives us an official solution. * [Phork.Blazor.Reactivity](https://github.com/phorks/phork-blazor-reactivity) - ![stars](https://img.shields.io/github/stars/phorks/phork-blazor-reactivity?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/phorks/phork-blazor-reactivity?style=flat-square&cacheSeconds=86400) An unopinionated Blazor state management library that utilizes the INotifyPropertyChanged and INotifyCollectionChanged .NET interfaces to automatically detect state changes in components. * [CodeBeam.GoogleApis.Blazor](https://github.com/CodeBeamOrg/CodeBeam.GoogleApis.Blazor) - ![stars](https://img.shields.io/github/stars/CodeBeamOrg/CodeBeam.GoogleApis.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/CodeBeamOrg/CodeBeam.GoogleApis.Blazor?style=flat-square&cacheSeconds=86400) An open-source utility package for GoogleApis in Blazor. Written from scratch and aims for ease of use. ### Others * [Blazor Extensions Home](https://github.com/BlazorExtensions/Home) - ![GitHub stars](https://img.shields.io/github/stars/BlazorExtensions/Home?style=flat-square&cacheSeconds=604800) Home for Blazor Extensions. * [Bolero](https://github.com/fsbolero/Bolero) - ![GitHub stars](https://img.shields.io/github/stars/fsbolero/Bolero?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/fsbolero/Bolero?style=flat-square&cacheSeconds=86400) Blazor for F# with hot reloaded templates, type-safe endpoints and routing, remoting, and much more. * [BlazorFabric](https://github.com/limefrogyank/BlazorFabric) - ![GitHub stars](https://img.shields.io/github/stars/limefrogyank/BlazorFabric?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/limefrogyank/BlazorFabric?style=flat-square&cacheSeconds=86400) Blazor port of Microsoft UI Fabric with fluent design. ([Demo](https://blazorfabric.azurewebsites.net/)). * [BlazorWebView](https://github.com/jspuij/BlazorWebView) - ![GitHub stars](https://img.shields.io/github/stars/jspuij/BlazorWebView?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/jspuij/BlazorWebView?style=flat-square&cacheSeconds=86400) Blazor WebView control for WPF, Android, macOS, iOS. 
Run Blazor on .NET Core and Mono natively inside a WebView. [Documentation](https://jspuij.github.io/BlazorWebView.Docs/pages/index.html). * [BlazorLazyLoading](https://github.com/isc30/blazor-lazy-loading) - ![GitHub stars](https://img.shields.io/github/stars/isc30/blazor-lazy-loading?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/isc30/blazor-lazy-loading?style=flat-square&cacheSeconds=86400) Production-ready lazy loading implementation. Full lazy loading support for WASM and Server (pages, components, dlls) with abstractions to implement modularization if you wish (custom endpoints, custom manifests, etc). * [Fun.Blazor](https://github.com/slaveOftime/Fun.Blazor) - ![stars](https://img.shields.io/github/stars/slaveOftime/Fun.Blazor?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/slaveOftime/Fun.Blazor?style=flat-square&cacheSeconds=86400) A project to make it easier for F# developers to write Blazor: use F# for Blazor, a computation expression (CE) style DSL for internal and third-party Blazor libraries, dependency injection, Adaptive and Elmish models, Giraffe-style routing, and type-safe styling. * [Blazor.DownloadFileFast](https://github.com/StefH/Blazor.DownloadFileFast) - ![stars](https://img.shields.io/github/stars/StefH/Blazor.DownloadFileFast?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/StefH/Blazor.DownloadFileFast?style=flat-square&cacheSeconds=86400) Fast download of files to the browser from Blazor without any JavaScript library reference or dependency. ([Demo](https://stefh.github.io/Blazor.DownloadFileFast/)). * [SpotifyService](https://github.com/tresoneur/SpotifyService) - ![stars](https://img.shields.io/github/stars/tresoneur/SpotifyService?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/tresoneur/SpotifyService?style=flat-square&cacheSeconds=86400) A high-level Spotify API library for Blazor WebAssembly projects that enables Spotify playback in the browser, manages OAuth 2.0 authorization, provides easy access to the Spotify Web API and uses IndexedDB caching. * [Blazor.DynamicJavascriptRuntime.Evaluator](https://github.com/jameschch/Blazor.DynamicJavascriptRuntime.Evaluator) - ![stars](https://img.shields.io/github/stars/jameschch/Blazor.DynamicJavascriptRuntime.Evaluator?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/jameschch/Blazor.DynamicJavascriptRuntime.Evaluator?style=flat-square&cacheSeconds=86400) Execute dynamic object expressions as JavaScript in Blazor client-side apps. * [Bionic](https://bionicframework.github.io/Documentation/) - An Ionic CLI clone for Blazor projects. * [EventHorizon Blazor TypeScript Interop Generator](https://github.com/canhorn/EventHorizon.Blazor.TypeScript.Interop.Generator) - ![last commit](https://img.shields.io/github/last-commit/canhorn/EventHorizon.Blazor.TypeScript.Interop.Generator?style=flat-square&cacheSeconds=86400) This project takes in a TypeScript type definition file and creates a .NET Core project that will work with the provided Interop abstraction project. * [Generators.Blazor](https://github.com/excubo-ag/Generators.Blazor) - ![last commit](https://img.shields.io/github/last-commit/excubo-ag/Generators.Blazor?style=flat-square&cacheSeconds=86400) Generators.Blazor is a source generator for improving performance in Blazor. 
The project also contains analyzers to detect common issues in Blazor apps. ## Real-World applications * [Try .NET](https://github.com/dotnet/try) - ![GitHub stars](https://img.shields.io/github/stars/dotnet/try?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/dotnet/try?style=flat-square&cacheSeconds=86400) Try .NET provides developers and content authors with tools to create interactive experiences. ## Videos * [Creating a Twitter Clone with Blazor Spark](https://www.youtube.com/watch?v=B3giEe1imjY) - ![duration](https://img.shields.io/badge/Duration:%20-42%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) June 21, 2023 - Learn how to utilize the Blazor Spark framework by building a Twitter clone. * [Burke learns Blazor by porting a Vue.js app to Blazor](https://www.youtube.com/watch?v=fetGhhVYsLY) - ![duration](https://img.shields.io/badge/Duration:%20-86%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) June 16, 2021 - This summer, Burke and Jon are porting theurlist.com to Blazor - a real world JavaScript application written in Vue.js. Join them each week as they use Visual Studio, Visual Studio Code and GitHub Copilot to rebuild this app and try to tackle every frontend issue you might encounter along the way. [Community links](https://www.theurlist.com/burke-learns-blazor). * [ASP.NET Community Standup - Featured community project: Blazor Studio](https://www.youtube.com/watch?v=tOHihL9Lf7Y) - ![duration](https://img.shields.io/badge/Duration:%20-69%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) June 14, 2023 - Chat about Blazor Studio, this month's featured Blazor community project. [Community links](https://aka.ms/AAl2w0l). * [What's new in .NET 8 for Web frontends backends and futures?](https://www.youtube.com/watch?v=pJGDPEk45Jc) - ![duration](https://img.shields.io/badge/Duration:%20-47%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 26, 2023 - Join this session to explore new features in .NET 8 for web frontends, backends, and future development. Discover how .NET 8 enhances web app development with better performance, new APIs, and modern development support. Get guidance on leveraging these tools for scalable, efficient cloud apps. Suitable for both experienced and new .NET developers, this session offers valuable insights into the latest developments in .NET 8 for web development. * [Full stack web in .NET 8 with Blazor](https://www.youtube.com/watch?v=iDLWv3xM1s0) - ![duration](https://img.shields.io/badge/Duration:%20-16%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 24, 2023 - Learn how ASP.NET Blazor in .NET 8 allows you to use a single powerful component model to handle all of your web UI needs, including server-side rendering, client-side rendering, streaming rendering, progressive enhancement, and much more. * [Blazor + .NET MAUI – the perfect “hybrid”](https://www.youtube.com/watch?v=kjKpZAgXLFY) - ![duration](https://img.shields.io/badge/Duration:%20-18%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 24, 2023 - In this session, we demonstrate how to leverage DevExpress Blazor/.NET MAUI UI components to deliver intuitive mobile solutions that amaze. We show you how Blazor Hybrid apps reduce time to market and how our adaptive Blazor Data Grid can be used alongside our rich collection of FREE .NET MAUI UI components to address a variety of usage scenarios. 
* [How to Create a PDF Document in Blazor Using the .NET PDF Library](https://build.microsoft.com/en-US/sessions/b559a882-92b9-43b6-9b50-312be2ccadfe?source=sessions) - ![duration](https://img.shields.io/badge/Duration:%20-21%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) May 23, 2023 - .NET Build session: How to Create a PDF Document in Blazor Using the .NET PDF Library. PDFs are an ideal format for sharing and archiving documents, but creating a viewer from scratch requires a deep understanding of PDF specifications and programming languages. What benefit does a PDF viewer provide for your business? It makes it easy to send a report to stakeholders, send an invoice to a customer, or generate a receipt in a retail shop. In this demo, we’ll show how you can easily add a PDF viewer to your app with text, images, and a table in the PDF document. * [Blazor + .NET MAUI – the perfect “hybrid”](https://build.microsoft.com/en-US/sessions/7ac85686-2fee-4ce5-82d0-c239a005eb7e?source=sessions) - ![duration](https://img.shields.io/badge/Duration:%20-18%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) May 23, 2023 - Microsoft Build session: Blazor + .NET MAUI – the perfect “hybrid”. In this session, we demonstrate how to leverage DevExpress Blazor/.NET MAUI UI components to deliver intuitive mobile solutions that amaze. We show you how Blazor Hybrid apps reduce time to market and how our adaptive Blazor Data Grid can be used alongside our rich collection of FREE .NET MAUI UI components to address a variety of usage scenarios. * [This Will Skyrocket Your Blazor Development Productivity](https://www.youtube.com/watch?v=VnU1Ixgl-vw) - ![duration](https://img.shields.io/badge/Duration:%20-4%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 23, 2023 - Developing Blazor is fun. However, there are a lot of different things that take quite some time when added together, for instance creating isolated CSS files, isolated JS files, code-behind files, and others. However, Jimmy Engstrom has created a Visual Studio extension that solves exactly these problems. Using this extension, your productivity will get a substantial boost. In this video, I'll show you everything you need to know to use this extension in your day-to-day Blazor development. * [On .NET Live - Micro Frontends with Blazor](https://www.youtube.com/watch?v=t8w_OTkafNE) - ![duration](https://img.shields.io/badge/Duration:%20-62%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 22, 2023 - Have you ever considered writing microservices in Blazor? In this week's episode, community MVP Florian Rappl joins us to talk about micro frontends. He'll show us how to break a Blazor UI into modules for maintainability and development by decoupled teams. ![stars](https://img.shields.io/github/stars/piral-samples/netflix-demo?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/piral-samples/netflix-demo?style=flat-square&cacheSeconds=86400) [Sample repo](https://github.com/piral-samples/netflix-demo). [Demo](https://notflix-demo.samples.piral.cloud/browse). 
* [ASP.NET Community Standup - Blazor .NET 8 Updates](https://www.youtube.com/watch?v=MaLjiR9YSbs&list=PLdo4fOcmZ0oVJyHcoDsVwlShX1tcxydYP&index=92) - ![duration](https://img.shields.io/badge/Duration:%20-92%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 10, 2023 - Come check out some of the progress the Blazor team has made for .NET 8. [Community links](https://gist.github.com/MackinnonBuck/ebd4072c40eca01c177e8ab7e180c392). * [Building "Next Tech Event" using Blazor and RavenDB Part 16](https://www.youtube.com/watch?app=desktop&v=crAfcbzpUqg) - ![duration](https://img.shields.io/badge/Duration:%20-130%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 9, 2023 - Part 16 of our new project, "NextTechEvent", a site that will help speakers, organizers, and attendees find their next tech event. In this stream, we made the attendee page look better, and with our close personal friend Chat GPT, we did a better-looking temperature view. [Source code](https://github.com/CodingAfterWork/NextTechEvent) ![stars](https://img.shields.io/github/stars/CodingAfterWork/NextTechEvent?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/CodingAfterWork/NextTechEvent?style=flat-square&cacheSeconds=86400) . * [Build a Connect Four game in your browser with .NET](https://www.youtube.com/watch?v=S9OICr6Qxpc) - ![duration](https://img.shields.io/badge/Duration:%20-1%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 1, 2023 - Learn C#, F#, and .NET with free self-guided learning from [Microsoft Learn](https://aka.ms/learndotnet). [Connect4 lesson on Microsoft Learn](https://learn.microsoft.com/fr-fr/training/modules/dotnet-connect-four/). * [ASP.NET Community Standup - Blazor and Data API Builder](https://www.youtube.com/watch?v=7W6h4PuQom8) - ![duration](https://img.shields.io/badge/Duration:%20-62%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 11, 2023 - Learn about Data API Builder and how it can be used in Blazor app development. [Community links](https://www.theurlist.com/blazor-community-standup-2023-04-11). * [ASP.NET Community Standup - Blazor App Testing with Playwright](https://www.youtube.com/watch?v=lJa3YlUliEs) - ![duration](https://img.shields.io/badge/Duration:%20-59%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 23, 2023 - March 14, 2023 - Learn how to use Playwright to implement end-to-end testing for your Blazor application. [Community links](https://www.theurlist.com/blazor-standup-2023-03-14). * [Steve Sanderson on Blazor United](https://www.youtube.com/watch?v=QvZcbqT0oIo) - ![duration](https://img.shields.io/badge/Duration:%20-50%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 23, 2023 - Carl talks to Steve Sanderson, creator of Blazor, to answer questions about Blazor United, coming in .NET 8. * [Exploring OpenAI with Blazor](https://www.youtube.com/watch?v=ULHxywqxMPk) - ![duration](https://img.shields.io/badge/Duration:%20-54%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 23, 2023 - Chat with Michael Washington about how .NET developers can leverage OpenAI GPT-3 in their Blazor applications. 
* [ASP.NET Community Standup - Blazor United in .NET 8](https://www.youtube.com/watch?v=kIH_Py8ytlY&list=PLdo4fOcmZ0oX-DBuRG4u58ZTAJgBAeQ-t&index=2) - ![duration](https://img.shields.io/badge/Duration:%20-83%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 14, 2023 - The Blazor team shares early thoughts on Blazor United in .NET 8, an effort to create a single unified model for all your web UI scenarios that combines the best of Razor Pages, Blazor Server, and Blazor WebAssembly. [Community links](https://www.theurlist.com/blazor-standup-2023-02-14). * [On .NET Live - Mapping with GeoBlazor](https://www.youtube.com/watch?v=fw8gGB2y1UM) - ![duration](https://img.shields.io/badge/Duration:%20-62%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 7, 2023 - Do you need to display mapping data in your Blazor apps? In this week's episode, community member Tim Purdum joins the panel to show us GeoBlazor, a versatile web mapping API that you can implement without having to write a single line of JavaScript. * [Blazor United prototype](https://www.youtube.com/watch?v=48G_CEGXZZM) - ![duration](https://img.shields.io/badge/Duration:%20-13%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) January 24, 2023 - A quick look at some of the experiments we're considering for Blazor in .NET 8. * [ASP.NET Community Standup - Exploring new and experimental features in Blazor](https://www.youtube.com/watch?v=fQ1Bg-u_YLw&list=PLdo4fOcmZ0oX-DBuRG4u58ZTAJgBAeQ-t&index=2) - ![duration](https://img.shields.io/badge/Duration:%20-73%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) January 10, 2023 - It shows what it's like to build a .NET 7 Blazor app using both new and currently experimental features. Featuring: Daniel Roth, Jon Galloway, Mackinnon Buck. [Community links](https://www.theurlist.com/aspnet-community-standup-2023-01). * [Run a Blazor component inside React](https://www.youtube.com/watch?v=jIlSfgQiWTQ) - ![duration](https://img.shields.io/badge/Duration:%20-19%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) December 21, 2022 - Are you tired of slow and clunky web applications? Look no further than Blazor running in React! Blazor is a powerful framework for building interactive client-side web apps with C#, and when combined with the speed and flexibility of React, you get a winning combination for stunning, high-performance web experiences. In this video, we will use the new Custom Elements feature to run Blazor in React, and demonstrate some of the other exciting features and benefits of this dynamic duo. Don't miss out on the future of web development. * [ASP.NET Community Standup - Blazor .NET 8 Planning](https://www.youtube.com/watch?v=QVkxusemLoo&list=PLdo4fOcmZ0oX-DBuRG4u58ZTAJgBAeQ-t&index=1) - ![duration](https://img.shields.io/badge/Duration:%20-74%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) December 13, 2022 - Find out our early plans for Blazor in .NET 8. [Community links](https://www.theurlist.com/blazor-community-links). * [Use the IDE to make a Fluxor CRUD Application | Blazor IDE Part 2](https://www.youtube.com/watch?v=r2XaCISF4dQ) - ![duration](https://img.shields.io/badge/Duration:%20-147%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 18, 2022 - Use the IDE to make a Fluxor CRUD Application. 
* [What's new for Blazor in .NET 7 | .NET Conf 2022](https://www.youtube.com/watch?v=evW4Gj4sHsk) - ![duration](https://img.shields.io/badge/Duration:%20-29%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - Blazor in .NET 7 is loaded with many new improvements to make building beautiful interactive web UI easier and more productive. In this session we'll take a look at the new support for custom elements, improved data binding, handling location changing events, showing loading progress, dynamic authentication requests, and more. * [Build an Audio Browser app with Blazor | .NET Conf 2022](https://www.youtube.com/watch?v=2t4VwBeQ9DY) - ![duration](https://img.shields.io/badge/Duration:%20-28%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - So you’ve heard about new features in .NET 7, but how does it all fit together when you build an app? Join Steve as he quickly builds an end-to-end audio manager app with Blazor WebAssembly using features old and new. This is a moderately advanced talk (it assumes existing Blazor knowledge) full of hints and tips including: cleaner ways to organize JS interop, directly accessing files on an end user’s filesystem, dynamically generating SVGs, creating an elegant “loading” experience, and the performance impact of AOT compilation. Full source code is provided! * [.NET ❤️’s WebAssembly in .NET 7 | .NET Conf 2022](https://www.youtube.com/watch?v=Ru-kO77d3F8) - ![duration](https://img.shields.io/badge/Duration:%20-27%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - .NET 7 introduces powerful new capabilities for running .NET code on WebAssembly. In this session we’ll show how you can run .NET from any JavaScript code using the new and improved JavaScript interop support. We’ll look at debugging and hot reload improvements for .NET on WebAssembly and show how the .NET WebAssembly build tools in .NET 7 take advantage of the latest WebAssembly features like SIMD and Exception Handling to improve performance. We’ll also check out some upcoming and experimental .NET WebAssembly capabilities like support for multithreading. * [Testing Blazor Applications with Playwright | .NET Conf 2022](https://www.youtube.com/watch?v=gBky9_AskNQ) - ![duration](https://img.shields.io/badge/Duration:%20-24%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - Testing is hard, testing takes time to learn and to write, and time is money. As developers we want to test. We know we should but we don't have time. So how can we get more developers to do testing? We can create better tools. Let me introduce you to Playwright - Reliable end-to-end cross browser testing for modern web apps, by Microsoft and fully open source. Playwright's codegen generates tests for you so now you really have no excuses. It's time to play your tests wright. * [Create native desktop & mobile apps using web skills in Blazor Hybrid | .NET Conf 2022](https://www.youtube.com/watch?v=ojcvL8KCOwo) - ![duration](https://img.shields.io/badge/Duration:%20-30%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - Are you a web developer and need to target iOS, Android, macOS, and Windows? Ship directly to the store and build world class apps with native API access with .NET MAUI and Blazor Hybrid. 
* [Modernize your WPF and Windows Forms application with Blazor | .NET Conf 2022](https://www.youtube.com/watch?v=niX1DbFwgq4) - ![duration](https://img.shields.io/badge/Duration:%20-24%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - Desktop is not dead, and there are a lot of applications written in WPF and Windows Forms that need to be ported to the web, or that need to reuse ready-made components written by the web team. Blazor can be the answer to both requirements thanks to Blazor Hybrid. Let’s see how you can use it. * [CSS Techniques for Blazor Developers | .NET Conf 2022](https://www.youtube.com/watch?v=w_x1keHyXAY) - ![duration](https://img.shields.io/badge/Duration:%20-29%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - Does your CSS code remind you of an episode of Hoarders? In this session we’ll explore a variety of CSS related architecture decisions that Blazor developers face. Attendees will learn how and when to use plain CSS, Sass or CSS Isolation with Blazor. Modern CSS techniques like custom CSS properties will be discussed. We’ll explore techniques to build modular and intelligent CSS by leveraging CSS principles and incorporating helper libraries. This session aims to break the dependency on CSS systems like Bootstrap and Tailwind. * [Azure Static Web Apps with Blazor and .NET | .NET Conf 2022](https://www.youtube.com/watch?v=FjGjguW1Xa0) - ![duration](https://img.shields.io/badge/Duration:%20-30%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 10, 2022 - Are you a .NET developer who wants to quickly build and deploy full-stack .NET serverless applications? Then this talk is for you! In this session, we will learn how to use Azure Static Web Apps with Blazor and .NET. First, we will explore Azure Static Web Apps. Then take a look at getting started with Blazor WebAssembly paired with the serverless power of .NET Azure Functions or Azure Container Apps. We will continue on to explore other .NET API options now available. And finally, we will discover how to automatically build and deploy to Azure directly from our code repository. * [Localization in Blazor: Carl Franklin's Blazor Train Ep 90](https://www.youtube.com/watch?v=e8IkSFQmonE) - ![duration](https://img.shields.io/badge/Duration:%20-14%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 3, 2022 - Carl shows how to easily support multiple languages in a Blazor WebAssembly application. * [Building "Next Tech Event" using Blazor and RavenDB Part 12](https://www.youtube.com/watch?v=tpF61oKd3-I) - ![duration](https://img.shields.io/badge/Duration:%20-129%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) November 2, 2022 - Part 12 of the project "NextTechEvent", a site that will help speakers, organizers, and attendees find their next tech event. In this stream, we add search functionality using RavenDB by adding an index. A way for organizers to filter conferences by name, country, location, city, and tags or a combination of them. This way we can see how many conferences are happening in "Stockholm" or all the conferences named "NDC". ![GitHub stars](https://img.shields.io/github/stars/CodingAfterWork/NextTechEvent?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/CodingAfterWork/NextTechEvent?style=flat-square&cacheSeconds=86400) [Source code](https://github.com/CodingAfterWork/NextTechEvent). 
* [MAUI Blazor Podcast Client Part 2: The .NET Show with Carl Franklin Ep 30](https://www.youtube.com/watch?v=csWkVlB3GQo) - ![duration](https://img.shields.io/badge/Duration:%20-31%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) October 20, 2022 - Adding guests and links, and playlist functionality. * [ASP.NET Community Standup - Blazor in .NET 7 Part III](https://www.youtube.com/watch?v=YR3O3ktuYCw&list=PLdo4fOcmZ0oX-DBuRG4u58ZTAJgBAeQ-t&index=1) - ![duration](https://img.shields.io/badge/Duration:%20-60%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) October 11, 2022 - Learn more about the latest multithreading and MSAL developments for Blazor in .NET 7. [Community links](https://www.theurlist.com/blazor_community_standup_october_2022). * [MSAL Auth in Blazor Server](https://www.youtube.com/watch?v=AlRxwIOq4jQ&list=PL8h4jt35t1wjvwFnvcB2LlYL4jLRzRmoz&index=92) - ![duration](https://img.shields.io/badge/Duration:%20-26%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) September 22, 2022 - Carl shows how to use MSAL and Azure AD B2C to authenticate users and call a secure API. * [ASP.NET Community Standup - Blazor in .NET 7 Part II](https://www.youtube.com/watch?v=-ZSscIhQaRk&list=PLdo4fOcmZ0oX-DBuRG4u58ZTAJgBAeQ-t&index=2) - ![duration](https://img.shields.io/badge/Duration:%20-87%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) September 13, 2022 - Learn more about the exciting runtime changes in Blazor for .NET 7. [Community links](https://www.theurlist.com/blazor_community_standup_september_2022). [Part 1](https://www.youtube.com/watch?v=mDKvgpD8YM0&feature=emb_imp_woyt). * [Make a Text Editor in Blazor from scratch | Blazor Text Editor Part 1](https://www.youtube.com/watch?v=D5PdeRsV-Hk&list=PLG4PTDe2qc0i0COivTxn_rjSN96Xq-_K1&index=2) - ![duration](https://img.shields.io/badge/Duration:%20-44%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) September 8, 2022 - Make a Text Editor in Blazor from scratch | Blazor Text Editor Part 1. * [Roslyn Syntax Highlighting | Blazor Text Editor Part 2](https://www.youtube.com/watch?v=ZIt0-mbCgG8&list=PLG4PTDe2qc0i0COivTxn_rjSN96Xq-_K1&index=2) - ![duration](https://img.shields.io/badge/Duration:%20-36%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) September 8, 2022 - Roslyn Syntax Highlighting | Blazor Text Editor Part 2. * [.NET MAUI & Blazor - Sharing code between app and web](https://www.youtube.com/watch?v=A_C8X-b2qXE) - September 2, 2022 - In this video, Daniel will show you how to create a project for .NET MAUI and Blazor and how to structure the code so it can be shared between the app and the web. ![GitHub stars](https://img.shields.io/github/stars/dhindrik/MauiBlazorHybridDemo?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/dhindrik/MauiBlazorHybridDemo?style=flat-square&cacheSeconds=86400) [Source code](https://github.com/dhindrik/MauiBlazorHybridDemo). 
* [Blazor in 100 Seconds](https://www.youtube.com/watch?v=QXxNlpjnulI) - ![duration](https://img.shields.io/badge/Duration:%20-2%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) September 1, 2022 - Short presentation of Blazor. Blazor is a framework that can build frontend web applications with HTML, CSS, and C#. It leverages WebAssembly to eliminate JavaScript from the usual client-side tech stack. * [Build an interactive todo list with Blazor in minutes](https://www.youtube.com/watch?v=YhAxMpOtVQU) - ![duration](https://img.shields.io/badge/Duration:%20-16%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) August 28, 2022 - In this tutorial we'll learn how to create a todo list with Blazor and improve the page using Bootstrap. * [Blazor in .NET 7 - .NET Community Standup](https://www.youtube.com/watch?v=mDKvgpD8YM0&feature=emb_imp_woyt) - ![duration](https://img.shields.io/badge/Duration:%20-85%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) August 23, 2022 - Overview of Blazor in .NET 7 with Daniel Roth, Jon Galloway, Mackinnon Buck and Tanay Parikh. Learn more about the great new features coming to Blazor in .NET 7. * [Blazor for ABSOLUTE beginners in 21 Minutes](https://www.youtube.com/watch?v=MhQpGbLxeF0) - ![duration](https://img.shields.io/badge/Duration:%20-21%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) August 22, 2022 - In this tutorial you will learn the basics of building a website with Blazor and how the default Blazor project template works. * [Implementing Main Page Headers in Blazor](https://www.youtube.com/watch?v=YAgMdsdWzAc) - ![duration](https://img.shields.io/badge/Duration:%20-101%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) July 18, 2022 - In this session, Josh and I are back again to continue building a decentralized social media platform in the open-source space. In this session in particular, we will be discussing, designing, and implementing a main page header in Blazor using SharpStyles. * [Blazor: Flavors of State Management with Blazor WebAssembly](https://www.youtube.com/watch?v=ovgNl_ck22w) - ![duration](https://img.shields.io/badge/Duration:%20-85%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) July 12, 2022 - ASP.NET Community Standup - Blazor: Flavors of State Management with Blazor WebAssembly. Working with Blazor WebAssembly applications we often overlook the importance of appropriate state management. Without a good understanding and the right strategy, our applications can end up polluted with components that behave badly and we end up regretting the bad choices. This session covers the simple state management flavours and then progresses towards something more tasty like the Redux pattern. We will see what there is to like and dislike about each flavour. Next, we will take a step back and determine the appropriate seasoning of state management to pair with our application. You will leave this session having sampled all the wonderful flavours and be able to make great decisions to have the best development experience. [Community links](https://www.theurlist.com/blazor_community_standup_july_2022). 
* [Securing Blazor & Web APIs with AAD](https://www.youtube.com/watch?v=wYptdN-P5zU) - ![duration](https://img.shields.io/badge/Duration:%20-30%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) July 11, 2022 - This video shows how to secure your Blazor application and a Web API using Azure Active Directory (AAD). * [What’s next for Blazor, and .NET on WASI](https://www.youtube.com/watch?v=lP_qdhAHFlg) - ![duration](https://img.shields.io/badge/Duration:%20-61%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) July 7, 2022 - What’s next for Blazor, and .NET on WASI, talk by Steve Sanderson at the NDC Porto. In this demo-centric talk, Microsoft developer/architect Steve will show the current state-of-the-art in Blazor for building rich web-based UI for browsers and native apps. This will include powerful new features shipped in .NET 6, as well as upcoming enhancements under development for .NET 7 and .NET MAUI. We’ll also look at more experimental future possibilities, including running .NET Core on WASI (WebAssembly on the server) which creates entirely new cloud and edge hosting options and lets you bring .NET code to places it’s never been before. * [Test the new Blazor experimental tutorial INSIDE Visual Studio](https://www.youtube.com/watch?v=NXtaepJsWp8) - ![duration](https://img.shields.io/badge/Duration:%20-9%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) June 29, 2022 - Grace Taylor from the Visual Studio team joins James to talk about a new experiment that they are running to improve tutorial inside of Visual Studio! Their first experiment is going to help developers get started learning and building web apps with Blazor. Install Visual Studio, and you may have the opportunity to test out this new experimental feature from the team. * [Test-Driving CSS Styles in Blazor](https://www.youtube.com/watch?v=06chSzVeuls) - ![duration](https://img.shields.io/badge/Duration:%20-32%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) June 20, 2022 - This video is about test-driving CSS styles in Blazor. * [Should I Focus on Blazor or ASP.NET Core?](https://www.youtube.com/watch?v=YbH2n4FZAP4) - ![duration](https://img.shields.io/badge/Duration:%20-13%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) June 16, 2022 - Should I learn Blazor in more depth or should I better understand ASP.NET Core? Which one is the better one to learn well first? Should I bounce back and forth or specialize in one? * [Persisting Data in Blazor WebAssembly](https://www.youtube.com/watch?v=9i7XPpMcztM) - June 3, 2022 - Carl uses a client-side repository based on IndexedDB to persist data even without an Internet connection. * [Using TypeScript with Blazor](https://www.youtube.com/watch?v=I_zFlBKgl5s) - ![duration](https://img.shields.io/badge/Duration:%20-20%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) May 27, 2022 - Carl shows you how to use TypeScript when creating a Blazor component. * [Build native apps for any device with .NET and Visual Studio](https://mybuild.microsoft.com/en-US/sessions/599c82b6-0c5a-4add-9961-48b85d9ffde0) - ![duration](https://img.shields.io/badge/Duration:%20-51%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) May 25, 2022 - See how you can quickly and iteratively build modern, native and hybrid web apps for any device with Visual Studio and .NET Multi-platform App UI (.NET MAUI). 
We’ll take you on a tour of the tools and frameworks that can speed up your development time to create powerful, native desktop and mobile apps for Windows, macOS, iOS, and Android from a single codebase with .NET MAUI. We’ll also show you how to enhance your Blazor web apps with native device capabilities. * [Native client apps with Blazor Hybrid](https://mybuild.microsoft.com/en-US/sessions/d2a86fe6-abb9-4939-a44d-ed660f9bb675?source=sessions) - ![duration](https://img.shields.io/badge/Duration:%20-27%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) May 25, 2022 - Blazor uses the latest open web standards to enable full stack web development with .NET. But what if you need to build a native client app? Blazor is expanding beyond the web to enable support for building native client apps using a hybrid of web technologies and .NET. In this session we’ll look at the new Blazor Hybrid support in .NET MAUI for building cross platform native client apps for mobile and desktop as well as Blazor Hybrid support for modernizing WPF and Windows Forms apps. * [Future Possibilities for .NET Core and WASI (WebAssembly on the Server)](https://mybuild.microsoft.com/en-US/sessions/ddf48da7-d3e6-4ba5-affd-b5735ce205fa?source=sessions) - ![duration](https://img.shields.io/badge/Duration:%20-28%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=microsoft) May 25, 2022 - WebAssembly is moving beyond the browser and is pitched to become a foundational element of modern cloud-native architecture. It lets any language compile to universal binaries that run on any OS or processor, robustly sandboxed and with great performance. This session covers a new approach to running .NET in WASI environments. You’ll see how your existing .NET code could be built into WASI-compliant modules, plus the opportunities this opens. This is experimental, not yet a committed product. * [Learn .NET MAUI - Full Course for Beginners | Tutorial for iOS, Android, Mac, Windows in C#](https://www.youtube.com/watch?v=DuNLR_NJv8U) - May 23, 2022 - Let's start our journey together to build beautiful native cross-platform apps for iOS, Android, macOS, and Windows with [.NET MAUI](https://dotnet.microsoft.com/en-us/apps/maui), C#, and Visual Studio! In this full workshop, I will walk you through everything you need to know about [.NET MAUI](https://dotnet.microsoft.com/en-us/apps/maui) and building your very first app. You will learn the basics including how to build user interfaces with XAML, how MVVM and data binding simplify development, how to navigate between pages, access platform features like geolocation, optimize data collections, and theme your app for light theme and dark theme. This course has everything you need to learn the basics and set you up for success when building apps with [.NET MAUI](https://dotnet.microsoft.com/en-us/apps/maui). * [Displaying Custom Reports in Blazor using Syncfusion](https://www.youtube.com/watch?v=CA8waGMYHbY) - ![duration](https://img.shields.io/badge/Duration:%20-50%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 20, 2022 - In the last video, we created a custom report using the Bold Reports Designer. We could use the provided Bold Reports Viewer to display the reports, but that does not always fit with what you want to accomplish. That is why today we will implement a report viewer in a Blazor Server application. That way, we can show our custom report to our clients directly inside of our site instead of sending them to a separate application. 
* [Bringing WebAssembly to the .NET Mainstream - Steve Sanderson](https://www.youtube.com/watch?v=PIeYw7kJUIg) - ![duration](https://img.shields.io/badge/Duration:%20-30%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 19, 2022 - Many developers still consider WebAssembly to be a leading-edge, niche technology tied to low-level systems programming languages. However, C# and .NET have run on WebAssembly since 2017. Blazor WebAssembly brought .NET into the browser on open standards, and is now one of the fastest-growing parts of .NET across enterprises, startups, and hobbyists. Next, with WASI we could let you run .NET in even more places, introducing cloud-native tools and techniques to a wider segment of the global developer community. This is a technical talk showing how we bring .NET to WebAssembly. Steve will demonstrate how it runs both interpreted and AOT-compiled, how an IDE debugger can attach, performance tradeoffs, and how a move from Emscripten to WASI SDK lets it run in Wasmtime/Wasmer or higher-level runtimes like wasmCloud. Secondly, you'll hear lessons learned from Blazor as an open-source project - challenges and misconceptions faced bringing WebAssembly beyond early adopters. * [Rapid development model for Blazor Web Assembly and typed end-to-end services](https://www.youtube.com/watch?v=BcQqCzm4tK0) - ![duration](https://img.shields.io/badge/Duration:%20-10%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 19, 2022 - In this video we use the [ServiceStack](https://servicestack.net/) Blazor WASM template to show how the combination of typed end-to-end services and Blazor is great for building line of business applications. The template provides a clean separation of concerns, typed end-to-end safety and rapid iteration thanks to shared C# language, Visual Studio Hot Reload and [ServiceStack](https://servicestack.net/) message centric design. * [ASP.NET Community Standup - Let's build an app with .NET MAUI and Blazor](https://www.youtube.com/watch?v=7BDBLyADq7s&list=PLdo4fOcmZ0oX-DBuRG4u58ZTAJgBAeQ-t&index=2) - ![duration](https://img.shields.io/badge/Duration:%20-92%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 17, 2022 - Showcasing migrating a Blazor WASM PWA to .NET MAUI and Blazor Hybrid. [Community links](https://www.theurlist.com/blazor_community_standup_may_2022). * [Making Blazor work with everything, plus WebAssembly on the server](https://www.youtube.com/watch?v=RevmsFXVJ5Q) - ![duration](https://img.shields.io/badge/Duration:%20-60%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 13, 2022 - Talk by Steve Sanderson at NDC London 2022. Real-world software projects often involve combining many technologies. So, is modern Blazor powerful and flexible enough for your team's needs? In this demo-heavy talk, Steve will show how Blazor WebAssembly/Server can seamlessly embed libraries and logic written in other languages such as Rust or C/C++, how Blazor WebAssembly/Server components can be used inside other SPA frameworks such as React or Angular (or vice-versa), and how Blazor components can be used not only for web projects, but also shared with native apps for macOS, iOS, Android, and Windows (including WPF and WinForms). These capabilities should equip your team to take on the most ambitious projects with confidence. 
We'll then look further into the future and consider how WebAssembly is moving beyond the browser and is pitched to become a foundational element of cloud-native server apps. You'll be among the first to see an experimental new approach to compiling .NET applications into WASI-compliant universal binaries that can run on any OS or processor, robustly sandboxed and with great performance. * [Creating Office Files in Blazor using Syncfusion - PDF, Word, Excel, and PowerPoint](https://www.youtube.com/watch?v=wyoCxzRLUsQ) - ![duration](https://img.shields.io/badge/Duration:%20-19%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 13, 2022 - Creating office files in C# has always been a popular solution. You can generate reports in formats that users are comfortable with. With the Syncfusion file controls, you can not only create Excel files, you can also create Microsoft Word documents, PowerPoint presentations, and PDF documents. In this video, we will look at the office file controls in Syncfusion and how to use them. * [Building apps for Teams in .NET & Blazor](https://www.youtube.com/watch?v=3fLoxkuzzqo) - ![duration](https://img.shields.io/badge/Duration:%20-17%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 11, 2022 - John Miller joins James on this week's On .NET to chat about how .NET developers can build Microsoft Teams apps and bots completely with C#, .NET, and Blazor! John walks us through everything we need to know to get started and how to even publish apps for Teams. * [Intro to Blazor Controls in Syncfusion](https://www.youtube.com/watch?v=eQ9S9rt-rKc) - ![duration](https://img.shields.io/badge/Duration:%20-42%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 11, 2022 - The Blazor controls in Syncfusion are pretty impressive. In this video, we will look at the DataGrid, charts, a Kanban-style board, a dashboard, autocomplete textboxes, context menus, signature pads, menu bars, toast messages, and more. There are so many controls to look at, in fact, that we are going to purposefully ignore the office file creation and display controls, as well as reporting. That way, we can spend more time on the other controls today and then focus on the office files in another video. The reporting will actually have two different videos dedicated to it. * [Securing SPAs and Blazor Applications using the BFF (Backend for Frontend) Pattern](https://www.youtube.com/watch?v=DdNssiaIY_Q) - ![duration](https://img.shields.io/badge/Duration:%20-99%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 10, 2022 - Modern web development means that more and more application code is running in the browser. Traditionally this has been JavaScript but more recently there has been the trend to use C#/WASM with Blazor. These modern applications typically also need authentication and single-sign-on as well as token-based security for calling APIs – in other words OpenID Connect and OAuth 2. There are different patterns for securing such applications and this session covers some of the pitfalls of the various approaches, especially given the ever-changing browser landscape. We will conclude with the “backend for frontend” (or BFF) pattern which has become the most secure and stable of these approaches. 
* [Multi-environment deployments with Azure DevOps and Static Web Apps](https://www.youtube.com/watch?v=4JkfeZp7aDk) - ![duration](https://img.shields.io/badge/Duration:%20-11%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 5, 2022 - App walkthrough, Create a static web app with an Azure DevOps pipeline, Add Azure DevOps environments and configure approvals, Configure a multi-stage pipeline, Static Web Apps named preview environments, Playwright tests, Manual approval before production deployment, Secure staging environments with a password. [Source code](https://github.com/anthonychu/swa-devops-pipeline-demo/blob/main/azure-pipelines.yml). * [On .NET Live - Wrapping browser APIs in Blazor WASM](https://www.youtube.com/watch?v=ttXb-gIiRoE) - ![duration](https://img.shields.io/badge/Duration:%20-61%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) May 2, 2022 - In this week's episode, Kristoffer Strube joins the panel to show us a Blazor WASM wrapper for the browser File System Access API. He's bringing demos and code to share, so this is an episode you won't want to miss. * [Securing SPAs and Blazor Applications using the BFF Pattern](https://www.youtube.com/watch?v=xzRhabmlc8M) - ![duration](https://img.shields.io/badge/Duration:%20-57%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 29, 2022 - Securing SPAs and Blazor Applications using the BFF (Backend for Frontend) Pattern at the NDC Porto, by Dominick Baier. * [State management in Blazor](https://www.youtube.com/watch?v=L9p-9dGp-98) - ![duration](https://img.shields.io/badge/Duration:%20-63%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 29, 2022 - State management in Blazor, at the NDC Porto 2022. When developing a Single Page Application, a very important aspect is state management. Since Blazor is a component-based UI framework, how do we manage to update the UI when the state of the application changes? * [Streaming three ways with Blazor: REST, gRPC, and SignalR](https://www.youtube.com/watch?v=cYP8TcokSFI) - ![duration](https://img.shields.io/badge/Duration:%20-45%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 29, 2022 - Streaming three ways with Blazor: REST, gRPC, and SignalR, by Carl Franklin at the NDC Porto 2022. Learn how to stream data from server to client to improve perceived performance and increase scalability. Carl shows how streaming works in general, and then how to implement using each of these three transports. * [Intro to SignalR in C# Part 1 - using Blazor, WPF, best practices, and more](https://www.youtube.com/watch?v=RaXx_f3bIRU) - ![duration](https://img.shields.io/badge/Duration:%20-72%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 25, 2022 - SignalR is an excellent way to connect two or more clients together for real-time communication. The best part is that this library is just a wrapper around web-standard technologies such as Web Sockets and Long-Polling. It just makes them easier to use. That means we can even use SignalR with other clients such as Java or JavaScript. In this video, we are going to learn how to set up SignalR and how to connect to it using web clients and desktop clients. Throughout the video, we will cover best practices, as well as how to use this in the real world. 
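As a companion to the streaming and SignalR videos above, here is a minimal sketch of the basic SignalR setup: a hub on the server and a client connection. The `ChatHub` class, the `/chathub` route, and the message names are illustrative choices, not taken from the videos.

```csharp
// Server side: a hub that broadcasts messages to every connected client.
// Register it with builder.Services.AddSignalR() and app.MapHub<ChatHub>("/chathub").
using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    public Task SendMessage(string user, string message)
        => Clients.All.SendAsync("ReceiveMessage", user, message);
}

// Client side (Blazor, WPF, console, ...) via Microsoft.AspNetCore.SignalR.Client:
//
//   var connection = new HubConnectionBuilder()
//       .WithUrl("https://localhost:5001/chathub")
//       .WithAutomaticReconnect()
//       .Build();
//
//   connection.On<string, string>("ReceiveMessage",
//       (user, message) => Console.WriteLine($"{user}: {message}"));
//
//   await connection.StartAsync();
//   await connection.SendAsync("SendMessage", "Alice", "Hello!");
```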
* [Revisiting MVVM](https://www.youtube.com/watch?v=M4tgU-V7peI) - ![duration](https://img.shields.io/badge/Duration:%20-20%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 21, 2022 - Carl revisits the topic of MVVM with Blazor, clarifying your options. * [Queryable Repositories](https://www.youtube.com/watch?v=5q21Olur050) - ![duration](https://img.shields.io/badge/Duration:%20-28%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 14, 2022 - Build a generic repository that the client can query. * [ASP.NET Community Standup - Blazorators: Blazor C# Source Generators](https://www.youtube.com/watch?v=U8XriF2zxS0) - ![duration](https://img.shields.io/badge/Duration:%20-57%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 12, 2022 - David Pine introduces Blazorators, Blazor C# Source Generators on this month's Blazor Community Standup. [Community links](https://www.theurlist.com/blazor_community_standup_april_2022). * [.NET MAUI Blazor - Build Hybrid Mobile, Desktop, and Web apps](https://www.youtube.com/watch?v=qkzyqDUjPHs) - ![duration](https://img.shields.io/badge/Duration:%20-47%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) April 1, 2022 - Blazor enables building client-side web UI with .NET, but sometimes you need more than what the web platform offers. Sometimes you need full access to the native capabilities of the device. You can now host Blazor components in .NET MAUI apps to build cross-platform native apps using web UI. The components run natively in the .NET process and render web UI to an embedded web view control using a local interop channel. This hybrid approach gives you the best of native and the web. Your components can access native functionality through the .NET platform, and they render standard web UI. .NET MAUI Blazor apps can run anywhere .NET MAUI can (Windows, Mac, iOS, and Android). * [Managing App State with Fluxor](https://www.youtube.com/watch?v=Vn6dKN_hTrs) - ![duration](https://img.shields.io/badge/Duration:%20-23%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) March 31, 2022 - App State Management using Fluxor, a Flux/Redux library for Blazor. * [Blazor Basics with Identity](https://www.youtube.com/watch?v=lGabdG5Ge8Y) - ![duration](https://img.shields.io/badge/Duration:%20-128%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) March 14, 2022 - Fritz shows you how to connect authentication and authorization to your Blazor app. We'll focus on how to connect a properly configured Microsoft Identity application to your Blazor framework. * [ASP.NET Community Standup - Native client apps with Blazor Hybrid](https://www.youtube.com/watch?v=7UM6s0QPvRQ) - ![duration](https://img.shields.io/badge/Duration:%20-88%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) March 8, 2022 - Join this Blazor Community Standup to learn more about using Blazor Hybrid to create cross-platform native client apps for Windows, iOS, Android & Mac. [Community links](https://www.theurlist.com/blazor_community_standup_march_2022). * [Building beautiful Blazor apps with Tailwind CSS](https://www.youtube.com/watch?v=zS0y9krXO6E) - ![duration](https://img.shields.io/badge/Duration:%20-55%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) March 2, 2022 - Tailwind CSS has been growing in popularity for some time. It’s fast becoming one of the top frameworks to use when styling applications. 
It offers a different approach to the traditional frameworks like Bootstrap—utility-based styling. Rather than using prebuilt components—which result in your application looking like a lot of others on the web—you can tailor your design by applying small focused CSS classes resulting in a unique style for your app. In this talk, you will learn how to bring the power of Tailwind CSS to your Blazor applications. We’ll start off with an introduction to Tailwind and how it differs from other CSS frameworks. Then we’ll get into some code and see how we can integrate Tailwind with Blazor. For the rest of the time, we’ll explore the various features of Tailwind and see just how beautiful we can make our Blazor app. By Chris Sainty, at the NDC Oslo 2021. * [Best practices unit testing Blazor components with bUnit](https://www.youtube.com/watch?v=Co7QetPYiO4) - ![duration](https://img.shields.io/badge/Duration:%20-54%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) March 1, 2022 - Why create tests of UI components? Aren’t UI tests inherently brittle, slow to run, and hard to maintain? By Egil Hansen, at the NDC Oslo 2021. * [Learn C# with CSharpFritz - Blazor Application Hosting](https://www.youtube.com/watch?v=AYXx5vPFzFo) - ![duration](https://img.shields.io/badge/Duration:%20-124%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 28, 2022 - You've built your Blazor application, now where do you run it? In this demo-filled session, Fritz will talk through Progressive Web Apps, packaging with Docker, and deploying your Blazor application to Azure. [Community links](https://github.com/csharpfritz/csharp_with_csharpfritz). * [Blazor, NET 6, and WebAssembly](https://www.youtube.com/watch?v=Rn8psTi8FBk) - ![duration](https://img.shields.io/badge/Duration:%20-60%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 21, 2022 - Talk by Steve Sanderson at the NDC Oslo 2021. .NET 6 is a huge release for Blazor, adding a ton of powerful new features for .NET web developers. This talk will be full of demos to get you up-to-speed on this hot framework. * [Building a classic adventure game with Blazor](https://www.youtube.com/watch?v=WG4Pp1NGXUw) - ![duration](https://img.shields.io/badge/Duration:%20-61%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 14, 2022 - Blazor is a free and open-source web framework that enables developers to create web apps using C# and HTML. Being a .NET developer as well as a big fan of classic point-and-click adventure games, I’ve ported my Game-a-Tron 4000 adventure game engine to Blazor. * [ASP.NET Community Standup - Blazor .NET 7 Roadmap](https://www.youtube.com/watch?v=3o91I6lD-Bo) - ![duration](https://img.shields.io/badge/Duration:%20-88%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) February 8, 2022 - Find out what's being planned for Blazor in .NET 7. [Community links](https://www.theurlist.com/blazor_community_standup_february_2022). * [C# Corner .NET Conference Day 1](https://www.youtube.com/watch?v=LpoYTZSMeG0) - ![duration](https://img.shields.io/badge/Duration:%20-390%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) January 24, 2022 - The first day of the conference kicks off with a keynote session by Scott Hunter. Day one is focused on front-end technologies (Blazor). 
[Conference website](https://dotnetconference.com). * [ASP.NET Community Standup - Blazor WebAssembly on .NET 6](https://www.youtube.com/watch?v=8gwSU3oaMV8) - ![duration](https://img.shields.io/badge/Duration:%20-78%20min-%230094FF?style=flat-square&cacheSeconds=maxAge&logo=youtube) January 11, 2022 - Join Blazor creator Steve Sanderson for a special look at Blazor WebAssembly on .NET 6. [Community links](https://www.theurlist.com/blazor_community_standup_january_2022). * [Archives](https://github.com/AdrienTorris/awesome-blazor/tree/master/Archives) - [2021](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2021.md#videos), [2020](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2020.md#videos), [2019](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2019.md#videos), [2018](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2018.md#videos), [2017](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2017.md#videos). ## Articles * [ASP.NET Core updates in .NET 8 Preview 5](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-8-preview-5/) - June 13, 2023 - .NET 8 Preview 5 is now available and includes many great new improvements to ASP.NET Core. New Blazor Web App project template, Blazor router integration with endpoint routing, Blazor Content Security Policy (CSP) compatibility and more. * [8 Free and Open Source Blazor UI Libraries](https://www.claudiobernasconi.ch/2023/05/25/8-free-and-open-source-blazor-ui-libraries) - May 25, 2023 - List of available free and open-source user interface libraries for Blazor development. * [Good (Blazor) Components are... ?](https://jonhilton.net/good-blazor-components/) - May 23, 2023 - Components, done “right” carry tremendous potential to speed up your web development, and make ongoing maintenance (and development) of your apps much easier. But, like any tool, it all depends how you use it. * [ASP.NET Core updates in .NET 8 Preview 4](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-8-preview-4/) - May 16, 2023 - .NET 8 Preview 4 is now available and includes many great new improvements to ASP.NET Core. Streaming rendering with Blazor components, Handling form posts with Blazor SSR, Route to named elements in Blazor, Webcil packaging for Blazor WebAssembly apps and more. * [Big Changes Coming for Blazor in .NET 8](https://www.telerik.com/blogs/big-changes-coming-blazor-dotnet-8) - May 16, 2023 - .NET 8 is shaping up to be a significant release for Blazor, and there’s one big reason why. * [BurnRate is fueling revenue success with Blazor](https://dotnet.microsoft.com/en-us/platform/customers/burnrate) - May, 2023 - [Customer success case](https://dotnet.microsoft.com/en-us/platform/customers) with BurnRate. * [Join the .NET Team at Microsoft Build 2023](https://devblogs.microsoft.com/dotnet/microsoft-build-2023-and-dotnet/) - April 24, 2023 - Are you ready for Build 2023? Microsoft’s premier developer conference is just around the corner, and there are several exciting .NET sessions that you won’t want to miss. * [State Hasn't Changed? Why and when Blazor components re-render](https://jonhilton.net/blazor-rendering/) - April 12, 2023 - Sooner or later, as you build out your Blazor UI, you’re going to hit a situation where your component isn’t re-rendering when you expect it to. 
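To make the re-rendering discussion in the last entry above concrete, the hypothetical component below shows the two levers Blazor gives you: asking for a render with `StateHasChanged` (needed when state changes outside Blazor's synchronization context, such as a timer callback) and vetoing renders with `ShouldRender`. It is a sketch for illustration only.

```razor
@implements IDisposable

<p>Seconds elapsed: @_seconds</p>

@code {
    private int _seconds;
    private System.Threading.Timer? _timer;

    protected override void OnInitialized()
    {
        // The timer fires outside Blazor's synchronization context, so the UI
        // only updates because we explicitly schedule a re-render.
        _timer = new System.Threading.Timer(_ =>
        {
            _seconds++;
            InvokeAsync(StateHasChanged);
        }, null, dueTime: 1000, period: 1000);
    }

    // Example veto: stop re-rendering after one minute, even though state keeps changing.
    protected override bool ShouldRender() => _seconds <= 60;

    public void Dispose() => _timer?.Dispose();
}
```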
* [ASP.NET Core updates in .NET 8 Preview 3](https://devblogs.microsoft.com/dotnet/improvements-auth-identity-aspnetcore-8/) - April 11, 2023 - ASP.NET Core updates in .NET 8 Preview 3. Here’s a summary of what’s new in this preview release: Server-side rendering with Blazor, Render Razor components outside of ASP.NET Core, Sections support in Blazor, Monitor Blazor Server circuit activity, SIMD enabled by default for Blazor WebAssembly apps and more. See the [ASP.NET Core roadmap for .NET 8](https://github.com/dotnet/aspnetcore/issues/44984). * [How to Build Fast and Reusable Blazor Code](https://visualstudiomagazine.com/articles/2023/04/03/blazor-qa.aspx) - April 3, 2023 - Microsoft even offers a “Build reusable components with Blazor” learning module that shows how to build a Razor class library that contains Blazor components, package a Razor class library for use in other Blazor applications, and reference a Razor class library in a Blazor application and use its components. Author: Jeffrey T. Fritz. * [Blazor United - Solving Blazor's biggest challenges?](https://jonhilton.net/blazor-united/) - February 23, 2023 - There are some big changes coming for Blazor in .NET 8 and they look very, very promising. * [The Blazor NavigationManager](https://kristoffer-strube.dk/post/the-blazor-navigationmanager/) - February 23, 2023 - The NavigationManager is a service that enables developers to programmatically navigate the user to different pages or external URIs. In ASP.NET Core 7, new features were added that enable us to pass simple state between pages and to listen to and intercept navigation. In this post, we will look at the basic capabilities of the NavigationManager, present what new features were added in ASP.NET Core 7, discuss how it fits into the Blazor/.NET ecosystem, and in the end present a scenario that is now possible with relative ease using new additions from ASP.NET Core 7 (a short illustrative sketch follows below this group of entries). * [Announcing .NET 8 Preview 1](https://devblogs.microsoft.com/dotnet/announcing-dotnet-8-preview-1/) - February 21, 2023 - The first preview of .NET 8 is ready to download. This blog post covers the major themes and goals that drive the prioritization and selection of enhancements to develop. .NET 8 preview and release candidate builds will be delivered monthly. As usual, the final release will be delivered sometime in November at .NET Conf. * [Azure Active Directory multitenant integration with Blazor WASM – Part 1](https://peterlesliemorris.com/azure-active-directory-multitenant-integration-with-blazor-wasm-part-1/) - February 19, 2023 - This blog is split into two parts. The first describes the process of setting up an Azure Active Directory multitenant on Azure, and the second part describes how to integrate with a Blazor WASM application. [Source code](https://github.com/mrpmorris/WalkThrough-AzureActiveDirectoryMultiTenant) ![stars](https://img.shields.io/github/stars/mrpmorris/WalkThrough-AzureActiveDirectoryMultiTenant?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/mrpmorris/WalkThrough-AzureActiveDirectoryMultiTenant?style=flat-square&cacheSeconds=86400). * [Embed Reporting in Native Blazor Applications](https://www.telerik.com/blogs/embed-reporting-native-blazor-applications) - February 14, 2023 - See the setup and customization of the newest Progress Telerik Reporting viewer, implemented with Telerik UI for Blazor controls with both Blazor Wasm and Blazor Server. 
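The sketch referenced from the NavigationManager entry above: a hypothetical component that navigates programmatically, attaches history entry state, and uses the .NET 7 `RegisterLocationChangingHandler` API to intercept navigation. Routes and names are illustrative.

```razor
@page "/nav-demo"
@implements IDisposable
@inject NavigationManager Nav

<button @onclick="GoToCounter">Go to counter</button>

@code {
    private IDisposable? _registration;

    protected override void OnInitialized()
    {
        // .NET 7: listen to (and optionally cancel) navigations.
        _registration = Nav.RegisterLocationChangingHandler(context =>
        {
            if (context.TargetLocation.Contains("forbidden"))
            {
                context.PreventNavigation();
            }
            return ValueTask.CompletedTask;
        });
    }

    private void GoToCounter() =>
        Nav.NavigateTo("/counter", new NavigationOptions
        {
            // .NET 7: attach simple state to the history entry.
            HistoryEntryState = "came-from-nav-demo"
        });

    public void Dispose() => _registration?.Dispose();
}
```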
* [Announcing Blazorise 1.2](https://blazorise.com/news/release-notes/120) - February 1, 2023 - Release of Blazorise v1.2, which brings a number of new features, improvements, and bug fixes to the framework. * [The State of WebAssembly – 2022 and 2023](https://platform.uno/blog/the-state-of-webassembly-2022-and-2023) - January 30, 2023 - This article looks at the current state of WebAssembly (wasm). It starts by revisiting 2022 developments, then predicts where things will go in 2023. * [Blazor WebAssembly Succinctly - eBook updated](https://www.syncfusion.com/succinctly-free-ebooks/blazor-webassembly-succinctly) - January 30, 2023 - Second edition of the eBook "Blazor WebAssembly Succinctly" by Michael Washington. * [Playing Dynamic Audio In Server Side Blazor](https://blazorhelpwebsite.com/ViewBlogPost/1065) - January 28, 2023 - You can play dynamic audio in a Blazor Server application and have full control over the user interface. [YouTube video](https://www.youtube.com/watch?v=TRgYmTNmXT8) about this article. * [.NET MAUI Blazor - Best practices for mobile UI](https://dev.to/mhrastegari/net-maui-blazor-best-practices-for-mobile-ui-4def) - January 27, 2023 - Both Blazor and .NET MAUI are great and powerful frameworks. Using .NET MAUI, you can build fully native cross-platform apps and build web apps completely in C# with Blazor. But when you combine them with the MAUI Blazor template, you need to know that it should feel like a mobile app, not a web page. This article explains some tips and tricks. * [Steve Sanderson Wows Web-Devs with Peek at 'Blazor United' for .NET 8](https://visualstudiomagazine.com/articles/2023/01/26/blazor-united.aspx) - January 26, 2023 - We've started some experiments to combine the advantages of Razor Pages, Blazor Server and Blazor WebAssembly all into one thing. * [Using .NET code from JavaScript using WebAssembly](https://www.meziantou.net/using-dotnet-code-from-javascript-using-webassembly.htm) - January 23, 2023 - Blazor WebAssembly allows you to run a .NET web application in a browser. Starting with .NET 7, you can easily run any .NET method from JavaScript without needing the whole Blazor framework. Let's see how to run a .NET method from JavaScript (a short export sketch follows below this group of entries). * [Improve Blazor startup process UX](https://ysmoradi.medium.com/improve-blazor-startup-process-ux-9461839e00ec) - January 21, 2023 - You can do a lot of things with Blazor: PWA, SSR, SPA, Hybrid app... the process of how the user sees the first page is different for each of these modes. * [Reducing Blazor WASM size by providing custom ICU data](https://www.meziantou.net/reducing-blazor-application-size-by-providing-custom-icu-data.htm) - January 16, 2023 - Reducing Blazor WASM size by providing custom ICU data. * [Blazor Custom Elements in HTML or React](https://dvoituron.com/2022/12/22/blazor-custom-elements-react/) - December 22, 2022 - Since version 7.0 of .NET, Microsoft has enhanced and integrated the ability to create [HTML Custom Elements](https://html.spec.whatwg.org/multipage/custom-elements.html). These custom elements offer developers a way to create their own full-featured DOM elements. In Blazor, this allows publishing these components to other SPA Frameworks, such as Angular or React. More info in the Microsoft documentation. 
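The export sketch referenced above, assuming the experimental `wasmbrowser` template from the .NET 7 `wasm-experimental` workload (not a Blazor project): a static method marked `[JSExport]` becomes callable from JavaScript once the page has loaded `dotnet.js` and resolved the assembly's exports. The class and method names here are hypothetical.

```csharp
using System.Runtime.InteropServices.JavaScript;
using System.Runtime.Versioning;

[SupportedOSPlatform("browser")]
public partial class GreetingExports
{
    // Callable from JavaScript after the runtime has started, e.g. via the
    // getAssemblyExports() helper in the template's main.js bootstrap code.
    [JSExport]
    internal static string Greet(string name) => $"Hello {name} from .NET!";
}
```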
* [Build Your Own Podcast App with .NET 7, Blazor, and .NET MAUI](https://devblogs.microsoft.com/dotnet/build-your-own-podcast-app-with-dotnet-blazor-and-dotnet-maui/) - December 19, 2022 - The .NET Podcast app was first introduced at .NET Conf 2021 and recently updated to highlight new features in .NET 7 at the .NET Conf 2022 keynote. The podcast app is a ready-to-use sample application that showcases .NET, ASP.NET Core, Blazor, .NET MAUI, Azure Container Apps, Orleans, Power Platform, Playwright, and more. This post explains all the new features and shows how we upgraded the .NET Podcast app to take advantage of them. * [The Future of .NET with WASM](https://blog.jetbrains.com/dotnet/2022/12/15/the-future-of-net-with-wasm/) - December 15, 2022 - This post is about the benefits of using WASM outside the browser and its potential as a transformative technology on the server. * [Use Keycloak as Identity Provider from Blazor WebAssembly (WASM) applications](https://dev.to/nikiforovall/use-keycloak-as-identity-provider-from-blazor-webassembly-wasm-applications-3an6) - December 8, 2022 - Learn how to integrate with Keycloak from Blazor WASM. Create a public client and use built-in capabilities of Microsoft.AspNetCore.Components.WebAssembly.Authentication that integrates with OpenId Connect compliant providers. [Source code](https://github.com/NikiforovAll/keycloak-authorization-services-dotnet/blob/main/samples/Blazor) ![stars](https://img.shields.io/github/stars/NikiforovAll/keycloak-authorization-services-dotnet?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/NikiforovAll/keycloak-authorization-services-dotnet?style=flat-square&cacheSeconds=86400). * [What’s new in the Microsoft Fluent UI library for Blazor version 2.0](https://medium.com/fast-design/whats-new-in-the-microsoft-fluent-ui-library-for-blazor-version-2-0-b3ac0eb5d02c) - December 5, 2022 - What’s new in the Microsoft Fluent UI library for Blazor version 2.0. * [Build a web app to manage a custom provider in Viva Learning with Blazor](https://techcommunity.microsoft.com/t5/windows-dev-appconsult/build-a-web-app-to-manage-a-custom-provider-in-viva-learning/ba-p/3686343#M679) - November 29, 2022 - In this post, we're going to reuse the concepts we have learned to build a better experience: a web application that we can use to manage our custom learning provider and its contents. We'll focus on how to implement in a real application some of the peculiar features we have learned about these APIs, like the fact that a different set of permissions is required based on the type of content you're working with. * [Maintenance release: Blazorise 1.1.4](https://blazorise.com/news/release-notes/114) - November 24, 2022 - This new Blazorise 1.1.4 is a hotfix release to solve this and a few other minor issues that were fixed in the meantime. Blazorise 1.1.4 is a recommended upgrade for all Blazorise 1.1 and Blazorise 1.1.3 users. * [Blazor WebAssembly: Debugging gRPC-Web with Custom Chrome Developer Tools](https://www.thinktecture.com/blazor/blazor-wasm-debugging-grpc-web/) - November 17, 2022 - If you are working with Blazor, gRPC is a big issue for transferring data from APIs to clients. One issue of developing with gRPC-Web is debugging the transmitted data because the data is in an efficient binary message format. This article shows you how to solve this problem with the help of my NuGet. 
* [Release: Blazorise 1.1.3](https://blazorise.com/news/release-notes/113) - November 8, 2022 - This Blazorise version was supposed to be just a minor maintenance release. Instead, we got a few relatively large features and announcements, with .NET 7 support. * [Containerized Blazor: Microsoft Ponders New Client-Side Hosting](https://visualstudiomagazine.com/articles/2022/11/02/blazor-docker.aspx) - November 2, 2022 - Microsoft is exploring built-in container support for client-side Blazor WebAssembly apps. Article on Visual Studio Magazine. * [Resize images before uploading in Blazor Web Assembly](https://www.mikesdotnetting.com/article/361/resize-images-before-uploading-in-blazor-web-assembly) - October 31, 2022 - Resize images before uploading in Blazor Web Assembly. * [Experimenting with .NET 7, WASM, and WASI on Docker](https://laurentkempe.com/2022/10/31/experimenting-with-dotnet-7-wasm-and-wasi-on-docker/) - October 31, 2022 - On October 24th, Docker announced the support of WASM and WASI in a new technical preview release. This article tries it out to see how it works with .NET 7. * [How to run Blazor project everywhere using Razor Class Library?](https://chandradev819.com/2022/10/29/how-to-run-blazor-project-everywhere-using-razor-class-library/) - October 29, 2022 - One of the cool features of Blazor is code reusability. Using a Razor Class Library, we can use our Razor components in all Blazor project types, like Blazor Server, WASM and MAUI Hybrid. * [Using WASM and WASI to run .NET 7 on a Raspberry PI Zero 2 W](https://laurentkempe.com/2022/10/29/using-wasm-and-wasi-to-run-dotnet-7-on-a-raspberry-pi-zero-2-w/) - October 29, 2022 - This post will show you how to run .NET 7 on a Raspberry PI Zero 2 W using WASM and WASI. * [Introducing the Docker+Wasm Technical Preview](https://www.docker.com/blog/docker-wasm-technical-preview/) - October 24, 2022 - The Technical Preview of Docker+Wasm is now available! Wasm has been producing a lot of buzz recently, and this feature will make it easier for you to quickly build applications targeting Wasm runtimes. * [Part 1 - UI driven by Elsa Workflows](https://medium.com/@fanekkie/part-1-ui-driven-by-elsa-workflows-6d0e868ea6c) - October 12, 2022 - This series describes the implementation of a User Interface driven by an Elsa workflow engine. * [ASP.NET Core updates in .NET 7 Release Candidate 2](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-7-rc-2/) - October 11, 2022 - .NET 7 Release Candidate 2 (RC2) is now available and includes many great new improvements to ASP.NET Core and Blazor: improved diagnostics for authentication in Blazor WebAssembly and WebAssembly multithreading (experimental). * [No Need to Wait for .NET 8 to Try Experimental WebAssembly Multithreading](https://visualstudiomagazine.com/articles/2022/10/11/blazor-webassembly-net7.aspx) - October 11, 2022 - Multithreading support for client-side Blazor WebAssembly apps is planned for .NET 8 in November 2023, but developers can try it out now for .NET apps thanks to experimental functionality in the brand-new .NET 7 Release Candidate 2. * [Using .NET 7's Blazor Custom Elements to render dynamic content](https://jonhilton.net/blazor-custom-elements/) - October 5, 2022 - Using .NET 7's Blazor Custom Elements to render dynamic content, by Jon Hilton. 
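Related to the two custom-elements articles above, here is a minimal sketch (assuming .NET 7 and a Blazor WebAssembly project that contains the template's `Counter` component) of registering a Razor component as a custom element so that plain HTML, React, or Angular pages can render it.

```csharp
// Program.cs of a Blazor WebAssembly project (.NET 7).
// A using for the project's Counter component namespace is assumed.
using Microsoft.AspNetCore.Components.Web;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;

var builder = WebAssemblyHostBuilder.CreateDefault(args);

// Any page that loads the Blazor script can now render
// <my-counter></my-counter>; component parameters (if any) map to
// kebab-case HTML attributes.
builder.RootComponents.RegisterCustomElement<Counter>("my-counter");

await builder.Build().RunAsync();
```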
* [Creating a Generic Tree View Blazor Component](https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/creating-a-generic-tree-view-blazor-component/ba-p/3640237) - October 4, 2022 - This article shows off a Blazor component that can display an object recursively in a tree view. The component was made as part of a project, FHIR Watch, a tool for comparing FHIR data from two different data sources: FHIR API Service and Dataverse. As such, it is particularly useful for displaying FHIR data. * [Clean Architecture Solution Template for Blazor WebAssembly](https://jasontaylor.dev/clean-architecture-solution-template-for-blazor-webassembly/) - September 30, 2022 - This post shows how to install the new template, create and run a new app using the template, and then I’ll share some recommended resources. * [WebAssembly Steals the ASP.NET Core Show in .NET 7 RC1](https://visualstudiomagazine.com/articles/2022/09/20/aspnet-core-updates.aspx) - September 20, 2022 - WebAssembly, the open source tech that makes client-side Blazor work, was the star of the ASP.NET Core show in the new [ASP.NET Core updates in .NET 7 Release Candidate 1](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-7-rc-1/). * [C# / Blazor Wolfenstein - Part 7 - Solid Walls, Doors and C# Observations](https://www.jamesdrandall.com/posts/csharp_blazor_wolfenstein_part_7_sold_walls_doors_and_csharp_observations/) - September 20, 2022 - Part 7 of the articles explaining how to port the game Wolfenstein 3D to C# and Blazor. [Source code](https://github.com/JamesRandall/csharp-wolfenstein) ![stars](https://img.shields.io/github/stars/JamesRandall/csharp-wolfenstein?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/JamesRandall/csharp-wolfenstein?style=flat-square&cacheSeconds=86400). * [C# / Blazor Wolfenstein - Part 6 - The Basic Raycaster](https://www.jamesdrandall.com/posts/csharp_blazor_wolfenstein_part_6_the_basic_raycaster/) - September 18, 2022 - Part 6 of the articles explaining how to port the game Wolfenstein 3D to C# and Blazor. [Source code](https://github.com/JamesRandall/csharp-wolfenstein) ![stars](https://img.shields.io/github/stars/JamesRandall/csharp-wolfenstein?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/JamesRandall/csharp-wolfenstein?style=flat-square&cacheSeconds=86400). * [ASP.NET Core updates in .NET 7 Release Candidate 1](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-7-rc-1/) - September 14, 2022 - .NET 7 Release Candidate 1 (RC1) is now available and includes many great new improvements to ASP.NET Core. * [Why can't your Blazor WebAssembly PWA with offline support update even though you reload it?](https://dev.to/j_sakamoto/why-cant-your-blazor-webassembly-pwa-with-offline-support-update-even-though-you-reload-it-31ho) - September 7, 2022 - Please imagine the case that you have a Blazor WebAssembly PWA site that supports offline mode, and you've been opening that PWA in a web browser. If you published a new version of that PWA, you might want to verify it, I guess. In this case, you will reload the page on that web browser. However, you might unexpectedly never see the contents of the new version, even after a hard reload. * [Implement OpenAPI Support for Blazor WebAssembly](https://jasontaylor.dev/implement-openapi-support-for-blazor-webassembly/) - September 2, 2022 - OpenAPI is a language-agnostic specification for documenting RESTful APIs. 
It allows both humans and computers to understand the capabilities of a service without accessing code, docs, or using other methods. Hosted Blazor WebAssembly apps include an ASP.NET Core backend for file and API access, and so implementing support for OpenAPI provides numerous benefits. * [Troubleshooting .NET Blazor WASM Debugging](https://khalidabuhakmeh.com/troubleshooting-dotnet-blazor-wasm-debugging) - August 30, 2022 - Troubleshooting .NET Blazor WASM Debugging. * [Oqtane 3.1.4 Released](https://www.oqtane.org/blog/!/49/oqtane-3-1-4-released) - July 27, 2022 - The 3.1.4 release is primarily focused on stabilization. This includes a variety of performance optimizations, user experience improvements, and localization enhancements. * [ClassFly UI - XAML vs Blazor - Part 1 - XAML](https://kymphillpotts.com/maui-ui-july.html) - July 23, 2022 - This series of articles compares the XAML and Blazor approaches: the same application is built with both to discover the pros and cons. Part 1 focuses on XAML. * [Authenticate Blazor WebAssembly with Azure Static Web Apps](https://anthonychu.ca/post/blazor-auth-azure-static-web-apps/) - July 22, 2022 - Azure Static Web Apps is a great place to host Blazor WebAssembly apps. Among Static Web Apps' many features, it has built-in support for authentication using social logins. In this article, we'll look at how we can take advantage of Static Web Apps Authentication in our Blazor WebAssembly apps. By Anthony Chu, from Microsoft. * [Share Blazor components using dependency injection](https://engstromjimmy.com/post/2022-07-21-ShareBlazorComponentsUsingDi) - July 19, 2022 - Share Blazor components using dependency injection. * [Using IAsyncEnumerable in a Razor component](https://www.meziantou.net/using-iasyncenumerable-in-a-razor-component.htm) - July 18, 2022 - Using IAsyncEnumerable in a Razor component (a short illustrative sketch follows below this group of entries). * [Blazor Sticker Giveaway](https://dev.to/dotnet/blazor-sticker-giveaway-try-the-new-blazor-guide-in-visual-studio-4c8b) - July 15, 2022 - Try the new Blazor Guide in Visual Studio. Read on to learn about how to give it a try and for a chance to receive a sticker. * [ASP.NET Core updates in .NET 7 Preview 6](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-7-preview-6/) - July 12, 2022 - .NET 7 Preview 6 is now available and includes many great new improvements to ASP.NET Core. Blazor custom elements no longer experimental, experimental QuickGrid component for Blazor, System.Security.Cryptography support on WebAssembly, empty Blazor project templates and more. For more details on the ASP.NET Core work planned for .NET 7 see the full [ASP.NET Core roadmap for .NET 7](https://aka.ms/aspnet/roadmap) on GitHub. * [Uno Platform Does WebAssembly Threading Months Ahead of .NET 7](https://visualstudiomagazine.com/articles/2022/06/29/uno-platform-4-4.aspx) - June 29, 2022 - Uno Platform Does WebAssembly Threading Months Ahead of .NET 7, by David Ramel for Visual Studio Magazine. * [3 reasons why I've created yet another library for the downloading on Blazor apps](https://dev.to/j_sakamoto/3-reasons-why-ive-created-yet-another-library-for-the-downloading-on-blazor-apps-2cpk) - June 29, 2022 - 3 reasons why I've created yet another library for the downloading on Blazor apps. 
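The sketch referenced from the IAsyncEnumerable entry above: a hypothetical component that renders items incrementally as an `IAsyncEnumerable<string>` produces them. The data source here is a stand-in for a streaming API or database query.

```razor
@page "/stream-demo"

<ul>
    @foreach (var item in _items)
    {
        <li>@item</li>
    }
</ul>

@code {
    private readonly List<string> _items = new();

    protected override async Task OnInitializedAsync()
    {
        await foreach (var item in LoadItemsAsync())
        {
            _items.Add(item);
            StateHasChanged(); // re-render as each item arrives
        }
    }

    // Hypothetical asynchronous data source used for illustration only.
    private static async IAsyncEnumerable<string> LoadItemsAsync()
    {
        for (var i = 1; i <= 5; i++)
        {
            await Task.Delay(200);
            yield return $"Item {i}";
        }
    }
}
```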
* [Blazor Posts Biggest Gain in WebAssembly Usage: Report](https://visualstudiomagazine.com/articles/2022/06/24/blazor-webassembly.aspx) - June 24, 2022 - Blazor Posts Biggest Gain in WebAssembly Usage: Report, by David Ramel for Visual Studio Magazine. * [Implementing RavenDB time series in a Blazor project](https://engstromjimmy.com/post/2022-06-21-RavenTimeseries) - June 21, 2022 - Implementing RavenDB time series in a Blazor project. * [What’s new in the Microsoft Fluent UI library for Blazor versions 1.3 and 1.4](https://medium.com/fast-design/whats-new-in-the-microsoft-fluent-ui-library-for-blazor-versions-1-3-and-1-4-4c3ea55a4b56) - June 9, 2022 - The FAST team and friends have been hard at work not only on Microsoft’s core Web Components platform, but also on Fluent UI integrations for Blazor. This post will give you an overview of what’s new and changed in versions 1.3 and 1.4 of the [Microsoft Fluent UI Blazor library](https://www.nuget.org/packages/Microsoft.Fast.Components.FluentUI/). * [A Full-Stack Web App Using Blazor WebAssembly and GraphQL: Part 3](https://www.syncfusion.com/blogs/post/a-full-stack-web-app-using-blazor-webassembly-and-graphql-part-3.aspx) - June 8, 2022 - A Full-Stack Web App Using Blazor WebAssembly and GraphQL: Part 3. [Part 1](https://www.syncfusion.com/blogs/post/a-full-stack-web-app-using-blazor-webassembly-and-graphql-part-1.aspx). * [Dependency Injection Scopes in Blazor](https://www.thinktecture.com/blazor/dependency-injection-scopes-in-blazor/) - May 31, 2022 - The dependency injection system is a big part of how modern ASP.NET Core works internally: It provides a flexible solution for developers to structure their projects, decouple their dependencies, and control the lifetimes of the components within an application. In Blazor - a new part of ASP.NET Core - however, the DI system feels a bit odd, and things seem to work a bit differently than expected. This article will explain why this is not only a feeling but indeed the case in the first place and how to handle the differences in order to not run into problems later on. * [ASP.NET Core updates in .NET 7 Preview 4](https://devblogs.microsoft.com/dotnet/asp-net-core-updates-in-dotnet-7-preview-4) - May 10, 2022 - .NET 7 Preview 4 is now available and includes many great new improvements to ASP.NET Core. Article by Dan Roth from Microsoft. * [Q&A: Deep Dive on Blazor](https://visualstudiomagazine.com/articles/2022/05/09/qa-blazor-deep-dive.aspx) - May 9, 2022 - Q&A: Deep Dive on Blazor, on Visual Studio Magazine. * [Syncfusion Essential Studio Release Enhances .NET MAUI, WinUI, Blazor and More](https://visualstudiomagazine.com/articles/2022/05/06/syncfusion-v1.aspx) - May 6, 2022 - Syncfusion Essential Studio Release Enhances .NET MAUI, WinUI, Blazor and More, on Visual Studio Magazine. * [Use the Query String to pass an array of selected values between Blazor Pages](https://jonhilton.net/blazor-deep-linking/) - April 28, 2022 - Use the Query String to pass an array of selected values between Blazor Pages. * [Containerizing Blazor WASM, ASPNET Core WebApi, NGINX & Let's Encrypt](https://www.ledjonbehluli.com/posts/blazor_wasm_nginx_docker/) - April 26, 2022 - Guide on containerizing a complex system composed from multiple Blazor WASM apps, ASPNET Core Web Api, NGINX & Let's Encrypt SSL certificates. 
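To accompany the Dependency Injection Scopes entry above, a minimal registration sketch; the service types are hypothetical. The key difference the article digs into is that in Blazor Server a scoped service lives for the duration of the user's circuit (the SignalR connection) rather than per HTTP request, while in Blazor WebAssembly it effectively behaves like a singleton for that browser tab.

```csharp
// Program.cs of a Blazor Server app; IClock, SystemClock, ShoppingCart and
// IEmailBuilder/EmailBuilder are hypothetical types used for illustration.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

builder.Services.AddSingleton<IClock, SystemClock>();         // one instance for the whole app
builder.Services.AddScoped<ShoppingCart>();                   // one per circuit (per connected user)
builder.Services.AddTransient<IEmailBuilder, EmailBuilder>(); // new instance every time it is resolved

var app = builder.Build();
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");
app.Run();
```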
* [Blazor WebAssembly: The Power Of EF Core And SQLite In The Browser](https://www.thinktecture.com/blazor/ef-core-and-sqlite-in-browser/) - April 12, 2022 - Sneak Preview how to use SQLite and EF Core in the browser with Blazor WebAssembly. * [Mastering Blazor - Part 1: DOM Integration](https://dev.to/florianrappl/mastering-blazor-part-1-dom-integration-5591) - March 31, 2022 - Mastering Blazor - Part 1: DOM Integration. * [Conditional Blazor Styles (without the if statements)](https://jonhilton.net/conditional-blazor-css/) - March 24, 2022 - Conditional Blazor Styles (without the if statements). * [Securing a Blazor WebAssembly Hosted Apps with Azure Active Directory](https://dev.to/kasuken/securing-a-blazor-webassembly-hosted-apps-with-azure-active-directory-part-1-382d) - March 18, 2022 - Securing a Blazor WebAssembly Hosted Apps with Azure Active Directory. [Part 2](https://dev.to/kasuken/securing-a-blazor-webassembly-hosted-apps-with-azure-active-directory-part-2-1ppd). * [A Full-Stack Web App Using Blazor WebAssembly and GraphQL: Part 1](https://www.syncfusion.com/blogs/post/a-full-stack-web-app-using-blazor-webassembly-and-graphql-part-1.aspx) - February 17, 2022 - This article shows how to create a full-stack web application using Blazor WebAssembly and GraphQL with SQL Server as our database. * [What's Coming for Blazor Hybrid in .NET 7](https://visualstudiomagazine.com/articles/2022/02/10/blazor-hybrid-net-7.aspx) - February 10, 2022 - What's Coming for Blazor Hybrid in .NET 7, Visual Studio Magazine. * [Use Tailwind's new JIT mode when developing Blazor applications](https://jonhilton.net/tailwind3-blazor/) - January 19, 2022 - Use Tailwind's new JIT mode when developing Blazor applications. * [Archives](https://github.com/AdrienTorris/awesome-blazor/tree/master/Archives) - [2021](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2021.md#articles), [2020](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2020.md#articles), [2019](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2019.md#articles), [2018](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2018.md#articles), [2017](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2017.md#articles). ## Podcasts * [Chris Sainty: Blazor in Action - Azure DevOps Episode 238](http://azuredevopspodcast.clear-measure.com/chris-sainty-blazor-in-action-episode-238) - March 27, 2023 - Chris is a Microsoft MVP, author, and software engineer with over 17 years of experience with ASP.NET. Passionate about sharing his knowledge with the community, he regularly writes both for his own blog as well as others — such as Visual Studio magazine, Progress Telerik, and StackOverflow. This passion for blogging led to his first book, Blazor in Action, a practical guide to building Blazor applications. He also maintains several popular open-source projects under the GitHub organization, Blazored. When not tapping on a keyboard, Chris is a keen speaker, having delivered talks at both user groups and conferences all over the world. Duration: 41 minutes. * [.NET Rocks - Blazor United with Javier Nelson and Steve Sanderson](https://www.dotnetrocks.com/details/1838) - March 23, 2023 - What if you didn't have to choose between client-side and server-side Blazor? Carl and Richard talk to Javier Nelson and Steve Sanderson about Blazor United in its early stages of development, providing flexibility at the web component level for client- and server-side rendering. 
At the simplest level, Blazor United offers server-side rendering when a site is first hit so that you can load the larger client-side components over time. But deeper is the idea that some elements on your web page benefit from being client-side, and some from being server-side, and why should you have to choose only one? Duration: 53 minutes. * [Steve Sanderson - Blazor, WASI and optimizing tomatoes](https://www.youtube.com/watch?v=1r3FwkUEte0) - July 17, 2022 - At NDC London we sat down with Steve Sanderson and talked about the origin story of Blazor, some of the upcoming things in .NET 7, and what Steve is working on right now. Duration: 35 minutes. * [WASM Everywhere with Steve Sanderson](https://www.dotnetrocks.com/default.aspx?ShowNum=1801) - July 7, 2022 - WebAssembly is awesome - what else can you do with it? While at NDC London, Carl and Richard talked to Steve Sanderson about his work with WebAssembly, including Blazor. Steve talks about how WebAssembly continues to evolve, adding the WebAssembly System Interface (WASI). This opens the door to the idea that code in WebAssembly can be run anywhere - any operating system, any language, using whatever compute is available. That gives an option to run code on the client, the server, and anywhere in between. Duration: 55 minutes. * [Indexing Video using KlipTok with Jeff Fritz](https://www.dotnetrocks.com/default.aspx?ShowNum=1796) - June 2, 2022 - How do you find a great video clip after the fact? Carl and Richard talk to Jeff Fritz about his work building KlipTok, a tool for making Twitch video clips more discoverable and shareable. Jeff digs into the tricky bits of KlipTok, which is all about fast indexing and searching to get to the right clip. The conversation digs into various data storage techniques and using the cloud in a way that doesn't break the bank. Jeff may be a Microsoft employee, but that doesn't mean he uses only Microsoft tools for his projects. Duration: 57 minutes. * [David Ortinau on .NET MAUI](https://herdingcode.com/herding-code-246-david-ortinau-on-net-maui/) - May 27, 2022 - Jon Galloway talks to David Ortinau about [.NET MAUI](https://docs.microsoft.com/en-us/dotnet/maui/what-is-maui). Duration: 41 minutes. [YouTube video](https://www.youtube.com/watch?v=OyqzWAivI7I). * [The Unhandled Exception Podcast: Microsoft Build 2022](https://unhandledexceptionpodcast.com/posts/0037-build/) - May 25, 2022 - Microsoft have just had their annual Build conference - which comes with a whole host of exciting announcements and discussions about hot programming topics in the Microsoft developer space. And each year, I long for a podcast episode to come out straight after Build, overviewing them! Well, this year - this podcast aims to do just that! In this episode, I was joined by both Scott Hunter and Gaurav Seth to chat about various topics. Check out the links below for a guide to what we discussed. Duration: 71 minutes. * [ASP.NET, Blogging, Kubernetes, and more](https://unhandledexceptionpodcast.com/posts/0036-andrewlock/) - May 10, 2022 - The Unhandled Exception Podcast with Andrew Lock (andrewlock.net). He’s the author of the Manning ebook, ASP.NET Core in Action, Second Edition. This episode covers a lot - from the different flavours of ASP.NET, to Kubernetes, Blazor, gRPC, testing, Minimal API, MediatR and more. 
* [Umbraco Heartcore and Blazor with Poornima Nayar](https://dotnetcore.show/episode-93-umbraco-heartcore-and-blazor-with-poornima-nayar/) - May 4, 2022 - In this episode we talked with Poornima Nayar about Umbraco Heartcore and where you might use it, Blazor, a little on GraphQL and how it fits very well with mobile apps which communicate with remote APIs. Duration: 59 minutes. * [In The Core of Blazor](https://www.youtube.com/watch?v=IF_7DPddmcs) - February 12, 2022 - Steve Sanderson talks about his journey into tech, his life, education, career and everything else in between. Duration: 73 minutes. * [Archives](https://github.com/AdrienTorris/awesome-blazor/tree/master/Archives) - [2021](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2021.md#podcasts), [2020](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2020.md#podcasts), [2019](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2019.md#podcasts), [2018](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2018.md#podcasts), [2017](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2017.md#podcasts). ## Presentations slides * [Using .NET 5 with the Raspberry Pi](https://www.slideshare.net/PGallagher69/using-net-5-with-the-raspberry-pi) - January 28, 2021 - Using .NET 5 with the Raspberry Pi, on Slideshare. * [Archives](https://github.com/AdrienTorris/awesome-blazor/tree/master/Archives) - [2020](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2020.md#presentations-slides), [2019](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2019.md#presentations-slides), [2018](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2018.md#presentations-slides). ## Tooling * [LiveSharp](https://github.com/ionoy/LiveSharp) - ![stars](https://img.shields.io/github/stars/ionoy/LiveSharp?style=flat-square&cacheSeconds=604800) ![last commit](https://img.shields.io/github/last-commit/ionoy/LiveSharp?style=flat-square&cacheSeconds=86400) Update `.razor` files and see the updates instantly without reloading the page. Your application state is preserved because there is no need to reload anything. [livesharp.net](https://www.livesharp.net/). * [BlazorFiddle](https://blazorfiddle.com) - Blazor .Net Developer Playground and Code Editor in the Browser. * [Blazor Minimum Project Templates](https://github.com/jsakamoto/BlazorMinimumTemplates) - ![GitHub stars](https://img.shields.io/github/stars/jsakamoto/BlazorMinimumTemplates?style=flat-square&cacheSeconds=604800) ![GitHub last commit](https://img.shields.io/github/last-commit/jsakamoto/BlazorMinimumTemplates?style=flat-square&cacheSeconds=86400) A project templates package of Blazor apps without JavaScript and CSS libraries. * [Blazor REPL](https://github.com/BlazorRepl/BlazorRepl) - ![GitHub stars](https://img.shields.io/github/stars/BlazorRepl/BlazorRepl?style=flat-square&cacheSeconds=604800) ![GitHub last commit](https://img.shields.io/github/last-commit/BlazorRepl/BlazorRepl?style=flat-square&cacheSeconds=86400) Write, compile, execute and share Blazor components entirely in the browser - https://blazorrepl.com. * [Blazor Snippets Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ScottSauber.blazorsnippets) - A Visual Studio Code extension that offers Blazor and Razor snippets. 
* [Publish-time Pre-render for Blazor Wasm](https://github.com/jsakamoto/BlazorWasmPreRendering.Build) - ![last commit](https://img.shields.io/github/last-commit/jsakamoto/BlazorWasmPreRendering.Build?style=flat-square&cacheSeconds=86400) When you publish your Blazor WebAssembly app, this package pre-renders and saves the app as static HTML files in your public folder. * [Publish SPA for GitHub Pages](https://github.com/jsakamoto/PublishSPAforGitHubPages.Build) - ![last commit](https://img.shields.io/github/last-commit/jsakamoto/PublishSPAforGitHubPages.Build?style=flat-square&cacheSeconds=86400) Add this NuGet package to your Blazor WebAssembly project to easily publish it to GitHub Pages. * [WebCompiler](https://github.com/excubo-ag/WebCompiler) - ![GitHub stars](https://img.shields.io/github/stars/excubo-ag/WebCompiler?style=flat-square&cacheSeconds=604800) ![GitHub last commit](https://img.shields.io/github/last-commit/excubo-ag/WebCompiler?style=flat-square&cacheSeconds=86400) A dotnet global tool for compilation, minification, and compression of scss, css and js. * [.NET Core](https://www.microsoft.com/net/download/dotnet-core) - .NET Core. * [Razor+ Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=austincummings.razor-plus) - A Visual Studio Code extension that offers improved Razor support. * [Tracetool](https://github.com/capslock66/Tracetool#Blazor-client--server) - ![GitHub stars](https://img.shields.io/github/stars/capslock66/Tracetool?style=flat-square&cacheSeconds=604800) ![GitHub last commit](https://img.shields.io/github/last-commit/capslock66/Tracetool?style=flat-square&cacheSeconds=86400) Tracetool viewer, client API for .NET, Java, JavaScript, C++, Python, Delphi. * [Visual Studio](https://www.visualstudio.com/vs/preview) - Latest preview of Visual Studio. * [Visual Studio Code](https://code.visualstudio.com/) - Visual Studio Code, free, open source and cross-platform code editor. ## Books * [Learning Blazor](https://learning.oreilly.com/library/view/learning-blazor/9781098113230) - Build Single-Page Apps with WebAssembly and C# (By David Pine via O'Reilly Early Release February 3rd, 2022). * [Microsoft Blazor: Building Web Applications in .NET 6 and Beyond](https://www.amazon.com/Microsoft-Blazor-Building-Applications-Beyond/dp/1484278445) - A practical and hands-on approach to learning Blazor using .NET 6 (3rd edition, December 8, 2021). * [Blazor WebAssembly by Example](https://www.amazon.com/Blazor-WebAssembly-Example-project-based-building-ebook/dp/B095X7FH6M) - Blazor WebAssembly by Example: A project-based guide to building web apps with .NET, Blazor WebAssembly, and C# (1st edition published July 9, 2021). * [Blazor in Action](https://www.manning.com/books/blazor-in-action) - An example-driven guide to building reusable UI components and web frontends—all with Blazor, C#, and .NET. (Manning Early Access Program began October 2020). * [Microsoft Blazor: Building Web Applications in .NET](https://www.amazon.com/Microsoft-Blazor-Building-Applications-NET/dp/1484259270/ref=pd_sbs_2/144-0745230-5007239?pd_rd_w=LPinn&pf_rd_p=3676f086-9496-4fd7-8490-77cf7f43f846&pf_rd_r=V7CQTYC0W8RZAVPVVXA1&pd_rd_r=b34ab9d9-09dd-4eca-9207-f56311bde8d2&pd_rd_wg=9V1tA&pd_rd_i=1484259270&psc=1) - Microsoft Blazor: Building Web Applications in .NET (Second edition published in May 2020). * [Blazor Revealed](https://www.apress.com/gp/book/9781484243428) - Blazor Revealed, Building Web Applications in .NET (Published February, 2019). 
* [Blazor Quick Start Guide: Build web applications using Blazor, EF Core, and SQL Server](https://www.amazon.in/gp/product/178934414X/ref=awesome_blazor) - Blazor Quick Start Guide: Build web applications using Blazor, EF Core, and SQL Server (Published October 31, 2018). ## E-Books * [Blazor WebAssembly Succinctly](https://www.syncfusion.com/ebooks/blazor_webassembly_succinctly) - August 31, 2020 - Blazor is a framework for creating SPA webpages with either client-side or server-side architectures, using Razor technology written with the C# language. Because client-side Blazor with WebAssembly executes entirely on a user's browser, it’s very fast for many applications. In Blazor WebAssembly Succinctly, Michael Washington will take readers through the core elements of Blazor and then explore additional features by building a sample application. Free e-book. * [Blazor Succinctly](https://www.syncfusion.com/ebooks/blazor-succinctly) - April 16, 2020 - A free e-book for starting with the Blazor framework. * [Blazor, A Beginners Guide](https://www.telerik.com/campaigns/blazor/wp-beginners-guide-ebook) - March 18, 2020 - A free e-book for getting started with the Blazor framework. [Examples source code](https://github.com/EdCharbeneau/BlazorBookExamples). * [Blazor for ASP.NET Web Forms developers](https://dotnet.microsoft.com/learn/aspnet/architecture#blazor-for-web-forms-devs-ebook-swim) - Blazor for ASP.NET Web Forms developers, a free e-book from Microsoft. * [Using CSLA 5: Blazor and WebAssembly](https://store.lhotka.net/using-csla-5-blazor-and-webassembly) - This book covers the new Blazor UI framework, including how to create server-side and client-side WebAssembly projects, how to implement authentication and authorization, and how to use data binding. It then covers how CSLA .NET supports Blazor, including walking through a complete sample app. * [An Introduction to Building Applications with Blazor](https://www.amazon.com/Introduction-Building-Applications-Blazor-applications-ebook/dp/B07WPQTT6H) - August 24, 2019 - An Introduction to Building Applications with Blazor: How to get started creating applications using this exciting, easy-to-use Microsoft C# framework. * [Archives](https://github.com/AdrienTorris/awesome-blazor/tree/master/Archives) - [2018](https://github.com/AdrienTorris/awesome-blazor/blob/master/Archives/2018.md#e-books). ## Courses * [Build a web app with Blazor WebAssembly and Visual Studio Code](https://docs.microsoft.com/learn/modules/build-blazor-webassembly-visual-studio-code/) - Build a web app with Blazor WebAssembly and Visual Studio Code, on Microsoft Learn. * [DevOps and Docker Support for .NET Core Blazor Applications](https://www.udemy.com/course/devops-and-docker-support-for-net-core-blazor/?ranMID=39197&ranEAID=w6JuN00t%2Fzo&ranSiteID=w6JuN00t_zo-Kv09UYco3AqwmZkipiMIXw&utm_source=aff-campaign&LSNPUBID=w6JuN00t%2Fzo&utm_medium=udemyads) - June 2020 - DevOps and Docker Support for .NET Core Blazor Applications, on Udemy. This Docker Support Course with DevOps concepts using ASP.NET Core Blazor will teach you Dockerisation of Blazor Apps. * [Programming in Blazor - ASP.NET Core 3.1](https://www.udemy.com/course/programming-in-blazor-aspnet-core) - Create interactive web applications with C#, on Udemy. * [Creating Blazor Components](https://www.pluralsight.com/courses/creating-blazor-components) - December, 2019 - Building a Blazor app is building components. This course is essential for gaining a good understanding of components. On Pluralsight. 
* [Authentication and Authorization in Blazor Applications](https://www.pluralsight.com/courses/authentication-authorization-blazor-applications) - December, 2019 - Learn how to secure your Blazor application using a variety of best practice techniques for authentication and authorization. On Pluralsight. * [Blazor: Getting Started](https://www.pluralsight.com/courses/getting-started-blazor) - December, 2019 - Learn how to build your first application in a hands-on way using Blazor, Microsoft's solution to use C# to write interactive web UIs without JavaScript. On Pluralsight. * [Blazor In-Depth Workshop (Blaze Invaders)](https://www.csharpacademy.com/courseinfo/2ccff0ac-4d3e-4d25-9368-6c1474640de5) - December 2019 - Blazor In-Depth Workshop (Blaze Invaders), on C# Academy. Learn serious Blazor concepts while building a working browser based game. * [Blazor and Razor Components in a nutshell](https://www.udemy.com/course/blazor-and-razor-components-in-a-nutshell/) - October 2019 - Learn how to use a framework that allows you to run your compiled code directly in the browser on top of WebAssembly, a course on Udemy. * [Blazor on ASP.NET Core 3.0](https://www.skillshare.com/site/join?teacherRef=102575464&t=Blazor-on-ASP.NET-Core-3.0&sku=1662883580) - October 2019 - Blazor on ASP.NET Core 3.0, a course on SkillShare. * [Blazor First Look on LinkedIn Learning](https://www.linkedin.com/learning/blazor-first-look) - Blazor First Look on LinkedIn Learning. [Source code](https://github.com/Dedac/Beam) ![GitHub stars](https://img.shields.io/github/stars/Dedac/Beam?style=flat-square&cacheSeconds=604800). * [Free Blazor Training Course](https://www.devexpress.com/support/training/blazor/) - DevExpress Blazor free training course [Source code](https://github.com/DevExpress/blazor-training-samples) ![GitHub stars](https://img.shields.io/github/stars/DevExpress/blazor-training-samples?style=flat-square&cacheSeconds=604800). * [Getting Started with Blazor](https://codered.eccouncil.org/course/getting-started-with-blazor) - June 2021 - Discover the Core Concepts of Blazor and learn to create web applications with ease. ## Community * [Awesome Blazor on Twitter](https://twitter.com/awesomeblazor) - This repository's Twitter feed. * [Discord](https://discord.com/channels/732297728826277939/732297874062311424) - Blazor channel on the DotNetEvolution discord server. * [Gitter](https://gitter.im/aspnet/Blazor) - Blazor discussion on Gitter. * [I Love DotNet](https://github.com/ILoveDotNet/ilovedotnet) - ![stars](https://img.shields.io/github/stars/ilovedotnet/ilovedotnet?style=flat-square) ![last commit](https://img.shields.io/github/last-commit/ilovedotnet/ilovedotnet?style=flat-square) - This is a .NET knowledge sharing platform with live demos crafted by developers for developers with love using .NET Blazor. [ilovedotnet.org](https://www.ilovedotnet.org). * [Learn Blazor](https://learn-blazor.com/) - Community documentation on Blazor. * [Blazor Help Website](https://blazorhelpwebsite.com/) - Blogs and code samples primarily covering server-side Blazor. * [Practical samples of Blazor](https://github.com/dodyg/practical-aspnetcore/tree/master/projects/blazor) - Practical samples of Blazor. * [Practical samples of Blazor Server-Side](https://github.com/dodyg/practical-aspnetcore/tree/master/projects/blazor-ss) - Practical samples of Blazor Server-Side. * [Reddit](https://www.reddit.com/r/Blazor/) - Blazor subreddit. 
* [Stack Overflow](https://stackoverflow.com/questions/tagged/blazor) - Blazor questions feed on Stack Overflow. * [Twitter](https://twitter.com/hashtag/blazor) - Hashtag on Twitter. * [WebAssemblyMan](https://www.webassemblyman.com/) - Man page for Blazor and WebAssembly. ## Other Languages * [Blaze of Code](https://blazeofcode.com/) - [Portuguese] Blog about Blazor. * [Blazor.ru](https://blazor.ru/) - [Russian] Old official documentation website translated in Russian. * [DevApps.be's podcast #44](http://devapps.be/podcast/blazor-webassembly/) - [French] DevApps.be's podcast #44: "Blazor et WebAssembly vont-ils tuer JavaScript ?". * [DevApps.be's podcast #47](http://devapps.be/podcast/47-typescript-uno-angular-docfx/) - [French] DevApps.be's podcast #47: "Actualités : TypeScript, Uno, Angular, DocFX, Database". * [Modern web apps with Blazor](https://media.aspitalia.com/events/VS2019-Blazor.media) - [Italian] Video about Blazor. * [Blazor Developer Italiani](https://blazordev.it/) - [Italian] Italian Blazor community website with useful articles and events. * [Playlist - Programando en Blazor](https://www.youtube.com/playlist?list=PL0kIvpOlieSNdIPZbn-mO15YIjRHY2wI9) - [Spanish] Series of videos about Blazor. * [Insights from the oracle](http://blog.ppedv.de/?tag=Blazor) - [German] Blog about Blazor. * [ASP.NET Core Blazor 5.0: Blazor WebAssembly und Blazor Server: Moderne Single-Page-Web-Applications mit .NET, C# und Visual Studio](https://www.amazon.de/exec/obidos/ASIN/393427935X/itvisions-21) - German book on Blazor (Published September 15, 2020 with montly Updates). ## License [![CC0](http://mirrors.creativecommons.org/presskit/buttons/88x31/svg/cc-zero.svg)](https://creativecommons.org/publicdomain/zero/1.0/) To the extent possible under law, Adrien Torris has waived all copyright and related or neighboring rights to this work.
Qihoo360/Luwak
https://github.com/Qihoo360/Luwak
Extract MITRE ATT&CK TTP information from unstructured threat reports using pre-trained language models
# Luwak TTP Extractor

English | [简体中文](./README_cn.md)

## Table of Contents

1. [Overview](#overview)
2. [Background](#background)
3. [Setup](#setup)
   - [1. Setup Virtual Env](#1-setup-virtual-env)
   - [2. Download and Merge Model Artifacts](#2-download-and-merge-model-artifacts)
4. [Demo](#demo)

## Overview

Luwak TTP Extractor uses pre-trained models to extract Tactics, Techniques and Procedures (TTPs) from unstructured threat reports. It uses [ERNIE](https://github.com/PaddlePaddle/ERNIE) pre-trained models to infer common TTPs from threat reports. Currently, we only open-source the fine-tuned model for English content. We expect to offer an external TTP extraction service for trial use in H2. It will support Chinese and English content, and we will update the URL here when the time comes.

## Background

MITRE ATT&CK is a framework that uses TTPs to describe how threat actors operate in their campaigns. TTPs are valuable to Breach and Attack Simulation (BAS) systems for assessing defense capabilities, and they are among the most important parts of a TTP-based Knowledge Graph. Most TTPs of threat actors in the security community exist in unstructured threat reports, such as malware blogs and white papers. A pre-trained language model is trained on a large-scale corpus, so it learns a general language representation and achieves excellent results on many downstream natural language processing tasks. Extracting TTPs from unstructured reports is essentially a text multi-classification task, so using a pre-trained language model for the downstream TTP extraction task can achieve good results.

## Setup

**Key Requirements**

- Python (64 bit) >= 3.7
- pip (64 bit) >= 20.2.2
- PaddlePaddle (64 bit) >= 2.0
- paddlenlp
- nltk
- Processor arch: x86_64 (arm64 is not supported)

### 1. Setup Virtual Env

Clone the repository and set up a virtual env with virtualenv:

```
pip3 install virtualenv
mkdir <VENV_NAME>
cd <VENV_NAME>
virtualenv -p python3 .
source ./bin/activate
cd <path to this notebook>
pip install -r requirements.txt
```

### 2. Download and Merge Model Artifacts

Download the nltk punkt tokenizer and merge the pretrained model:

```
python -c "import os, nltk; nltk_data_path = os.path.join(os.getcwd(),'nltk_data'); nltk.download('punkt', nltk_data_path);"
./merge_model.sh
```

## Demo

Load the model and import the predict function:

```
import inference
from inference import predict_text
```

Put the text to extract TTPs from in the `text` variable:

```
text = """ACNSHELL is sideloaded by a legitimate executable. It will then create a reverse shell via ncat.exe to the server closed.theworkpc.com"""
```

Then call the `predict_text` function to infer TTPs:

```
o_text, ttps = predict_text(text)
```

The `o_text` variable stores the original text, which is the input of the `predict_text` function. The `ttps` variable stores the predicted TTPs and looks like:

```
[{'sent': 'ACNSHELL is sideloaded by a legitimate executable.',
  'tts': [{'tactic_name': 'Persistence', 'tactic_id': 'TA0003', 'technique_name': 'DLLSide-Loading', 'technique_id': 'T1574.002', 'score': 0.9799978}]},
 {'sent': 'It will then create a reverse shell via ncat.exe to the server closed.theworkpc.com',
  'tts': [{'tactic_name': 'Execution', 'tactic_id': 'TA0002', 'technique_name': 'WindowsCommandShell', 'technique_id': 'T1059.003', 'score': 0.98543674}]}]
```

Every sentence (specified by `sent`) may map to zero or several tactics and techniques (specified by `tts`).
For each predicted tactic and technique, the fields `tactic_name` and `tactic_id` give the name and ID of the tactic, the fields `technique_name` and `technique_id` give the name and ID of the technique, and the field `score` gives the model's confidence for that prediction.
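For downstream processing it can help to flatten this nested structure into one record per (sentence, technique) pair. The short Python sketch below is not part of Luwak itself; it only assumes the output shape documented above (`sent`, `tts`, the `tactic_*`/`technique_*` fields and `score`).

```python
# Illustrative post-processing of Luwak's output (not part of the package).
# Assumes `ttps` has the structure shown in the Demo section above.
def flatten_ttps(ttps):
    rows = []
    for entry in ttps:
        for tt in entry["tts"]:
            rows.append({
                "sentence": entry["sent"],
                "tactic": f'{tt["tactic_name"]} ({tt["tactic_id"]})',
                "technique": f'{tt["technique_name"]} ({tt["technique_id"]})',
                "score": round(tt["score"], 3),
            })
    return rows

for row in flatten_ttps(ttps):
    print(f'{row["technique"]:40s} {row["score"]:.3f}  {row["sentence"]}')
```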
jmanhype/AI_Tools_and_Papers_Providers_Frameworks
https://github.com/jmanhype/AI_Tools_and_Papers_Providers_Frameworks
AI_Tools_and_Papers_Providers_Frameworks is a curated collection of AI resources, including tools, papers, frameworks, and providers. It covers various categories such as Cloud SaaS Tools, Python Modules, GitHub Repos, and Scientific Papers. Discover valuable references for AI development and research.
## AI_Tools_and_Papers_Providers_Frameworks This repository contains a curated list of tools and resources for various categories related to AI/ML development and operations. The list includes popular tools as well as some open-source projects and learning resources. Please note that the information provided here is based on the knowledge cutoff in September 2021 and may not include the latest tools and advancements. ### Table of Contents - [Cloud Providers](#cloud-providers) - [Data Versioning](#data-versioning) - [Feature Stores](#feature-stores) - [Model Training Orchestration](#model-training-orchestration) - [Hyperparameter Tuning](#hyperparameter-tuning) - [Experiment Tracking](#experiment-tracking) - [Model Deployment & Serving](#model-deployment--serving) - [Model Monitoring](#model-monitoring) - [Model Governance & Management](#model-governance--management) - [Model Explainability](#model-explainability) - [Model Testing](#model-testing) - [Automation & Streamlining](#automation--streamlining) - [Infrastructure](#infrastructure) - [Collaboration](#collaboration) - [Project Management](#project-management) - [Knowledge Management](#knowledge-management) - [Communication](#communication) - [AI/ML Libraries](#aiml-libraries) - [IDEs](#ides) - [Data Visualization](#data-visualization) - [Microservices](#microservices) - [Open Source AI/ML Projects](#open-source-aiml-projects) - [AI/ML Learning Resources](#aiml-learning-resources) - [HR for Global Teams](#hr-for-global-teams) ### Cloud Providers - **Amazon Web Services (AWS)**: A comprehensive, evolving cloud computing platform provided by Amazon. - **Google Cloud Platform (GCP)**: A suite of cloud computing services offered by Google. - **Microsoft Azure**: A cloud computing service created by Microsoft. - **IBM Cloud**: IBM's open and secure public cloud for business. - **Oracle Cloud**: It offers best-in-class services across software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). ### Data Versioning - **DVC (Data Version Control)**: An open-source tool for data science and machine learning projects that enables version control of large datasets, ML models, and intermediate files. - **Pachyderm**: A version-controlled data lineage system. ### Feature Stores - **Feast**: An open-source feature store for machine learning. - **Tecton**: A feature store for operational machine learning. - **Hopsworks**: An open-source data-intensive AI platform with a feature store. ### Model Training Orchestration - **Kubeflow**: An open-source project dedicated to making deployments of machine learning workflows on Kubernetes simple, portable, and scalable. - **Polyaxon**: An open-source platform for machine learning lifecycle management. - **MLflow**: An open-source platform to manage the ML lifecycle. ### Hyperparameter Tuning - **Katib**: A Kubernetes-native project for automated machine learning (AutoML). - **Hyperopt**: A Python library for optimizing over awkward search spaces. - **Optuna**: An open-source hyperparameter optimization framework in Python. ### Experiment Tracking - **MLflow**: An open-source platform to manage the ML lifecycle. - **Weights & Biases**: A tool that helps track experiments in deep learning projects. - **Comet.ml**: A machine learning platform that enables engineers to automatically track their datasets, code changes, experimentation history. ### Model Deployment & Serving - **TensorFlow Serving**: A flexible, high-performance serving system for machine learning models. 
- **Seldon**: An open-source platform for deploying, scaling, and managing machine learning models in Kubernetes. - **BentoML**: A flexible, high-performance framework for serving, managing, and deploying machine learning models. - **TorchServe**: A flexible and easy-to-use tool for serving PyTorch models in production. ### Model Monitoring - **Model Monitor (Amazon SageMaker)**: A service that automatically monitors ML models in production. - **Prometheus + Grafana**: Prometheus is an open-source systems monitoring and alerting toolkit. Grafana is the open-source analytics & monitoring solution. - **Evidently.AI**: An open-source tool for machine learning model validation and monitoring. ### Model Governance & Management - **MLflow**: An open-source platform to manage the ML lifecycle. - **Neptune**: A metadata store for MLOps. - **Alteryx**: A leader in analytic process automation (APA). ### Model Explainability - **SHAP (SHapley Additive exPlanations)**: A game theoretic approach to explain the output of any machine learning model. - **Lime**: A project that explains what machine learning classifiers (or models) are doing. - **Alibi**: An open-source Python library aimed at machine learning model inspection and interpretation. ### Model Testing - **Great Expectations**: A Python-based open-source library for validating, documenting, and profiling your data. - **Deequ**: A library built on top of Apache Spark for defining 'unit tests for data'. - **TFDV (TensorFlow Data Validation)**: A library used to analyze and validate machine learning data. ### Automation & Streamlining - **Jenkins**: An open-source automation server. - **GitLab CI/CD**: A tool built into GitLab for software development through the continuous methodologies. - **GitHub Actions**: A CI/CD platform that automates all your software workflows. - **Argo CD**: A declarative, GitOps continuous delivery tool for Kubernetes. ### Infrastructure - **Docker**: An open-source platform to automate the deployment, scaling, and management of applications. - **Kubernetes**: An open-source platform designed to automate deploying, scaling, and operating application containers. - **Terraform**: An open-source infrastructure as code software tool. ### Collaboration - **Jupyter Notebooks**: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. - **Colab Notebooks**: Google Colab is a free cloud service with GPU support. - **GitHub**: A global platform that brings together the world's largest community of developers. ### Project Management - **Jira**: A proprietary issue tracking product developed by Atlassian. - **Trello**: A web-based Kanban-style list-making application. - **Asana**: A web and mobile application designed to help teams organize, track, and manage their work. - **Monday.com**: A cloud-based platform that allows teams to create their own IT systems without coding. - **Plane (makeplane/plane)**: An open-source project management tool. ### Knowledge Management - **Notion**: An all-in-one workspace where you can write, plan, collaborate and get organized. - **Confluence**: A collaboration wiki tool used to help teams to collaborate and share knowledge efficiently. - **AFFiNE (toeverything/AFFiNE)**: A knowledge management tool that integrates AI technology to enhance knowledge discovery and sharing. - **Quivr - Get a Second Brain with Generative AI**: An AI tool that organizes, connects, and generates insights from your notes. 
- **Airtable**: A cloud-based collaboration service. ### Communication - **Slack**: A proprietary business communication platform. - **Microsoft Teams**: A unified communication and collaboration platform. - **Zoom**: A proprietary video teleconferencing software program. ### AI/ML Libraries - **TensorFlow**: An end-to-end open-source platform for machine learning. - **PyTorch**: An open-source machine learning library. - **Scikit-learn**: A free software machine learning library for Python. - **Keras**: A user-friendly neural network library written in Python. - **NLTK (Natural Language Toolkit)**: A leading platform for building Python programs to work with human language data. - **XGBoost**: A scalable, portable, and distributed gradient boosting (GBDT, GBRT, or GBM) library. ### IDEs - **Jupyter Notebook**: An open-source web application that allows the creation and sharing of documents with live code, equations, visualizations, and narrative text. - **Google Colab**: A free Jupyter notebook environment that runs entirely in the cloud. - **VS Code**: A freeware source-code editor made by Microsoft. - **PyCharm**: An integrated development environment (IDE) used in computer programming, specifically for the Python language. ### Data Visualization - **Matplotlib**: A plotting library for Python. - **Seaborn**: A Python data visualization library based on Matplotlib. - **Plotly **: An open-source data visualization library used to create interactive and high-quality graphs in R and Python. - **Tableau**: A data visualization tool used in the Business Intelligence industry. ### Microservices - **Flask**: A micro web framework written in Python. - **Django**: A high-level Python web framework that enables rapid development of secure and maintainable websites. - **Express.js**: A web application framework for Node.js, released as free and open-source software. ### Automation & Streamlining - **Jenkins**: An open-source automation server. - **GitLab CI/CD**: A tool built into GitLab for software development through the continuous methodologies. - **GitHub Actions**: A CI/CD platform that automates all your software workflows. - **Argo CD**: A declarative, GitOps continuous delivery tool for Kubernetes. - [Chadxz.dev](https://chadxz.dev/platform/) - How Platform Engineering Works - [Smol Developer](https://github.com/smol-ai/developer) - With 100k context windows on the way, it's now feasible for every dev to have their own smol developer - [Rift](https://github.com/morph-labs/rift) - Rift: an AI-native language server for your personal AI software engineer - [Cursor](https://github.com/getcursor/cursor) - An editor made for programming with AI 🤖 - [Modal](https://modal.com/) - End-to-end cloud compute Model inference, batch jobs, task queues, web apps and more. All without your own infrastructure. - [Chainlit](https://github.com/Chainlit/chainlit) - Build Python LLM apps in minutes ⚡️ - [Pipedream](https://pipedream.com/) - Connect APIs, remarkably fast. Stop writing boilerplate code, struggling with authentication, and managing infrastructure. Start connecting APIs with code-level control when you need it — and no code when you don't. - [Make.com](https://www.make.com/en) - Design workflow #withMake. From tasks and workflows to apps and systems, build and automate anything in one powerful visual platform. 
- [Render](https://render.com/) - Render is a unified cloud to build and run all your apps and websites with free TLS certificates, a global CDN, DDoS protection, private networks, and auto-deploy from GitHub. - [LeanDojo](https://leandojo.org/) - LeanDojo is an open-source playground consisting of toolkits, benchmarks, and models for LLMs to prove formal theorems in the Lean proof assistant. - [Ai-Shell](https://github.com/BuilderIO/ai-shell) - A CLI that converts natural language to shell commands. - [Quivr Mobile](https://github.com/iMADi-ARCH/quivr-mobile) - The Quivr React Native Client is a mobile application built using React Native that provides users with the ability to upload files and engage in chat conversations using the Quivr backend API. - [AutoScrum](https://github.com/autoscrum/autoscrum) - AutoScrum is a python script for automating the Scrum project planning process using language models. - [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) - Port of OpenAI's Whisper model in C/C++ whisper: support speaker segmentation (local diarization) of mono audio via tinydiarize #1058 - [Ai-Engineer](https://github.com/AntonOsika/gpt-engineer) - Specify what you want it to build, the AI asks for clarification, and then builds it. - [Ecoute](https://github.com/SevaSk/ecoute) - Ecoute is a live transcription tool that provides real-time transcripts for both the user's microphone input (You) and the user's speakers output (Speaker) in a textbox. It also generates a suggested response using OpenAI's GPT-3.5 for the user to say based on the live transcription of the conversation. ### Collaboration - [Airtable](https://www.airtable.com/) - The fastest way to build apps. Transform your business with Airtable's next-gen app platform. Unmatched agility and efficiency. - [Quivr](https://github.com/OpenBMB/ToolBench) - An open platform for training, serving, and evaluating a large language model for tool learning. - [GitHub Assistant](https://github.com/g-emarco/github-assistant) - Generative AI GitHub Assistant for Your Repository - [Yohei Nakajima's Twitter](https://twitter.com/yoheinakajima) - Woo hoo 🎉 Just set up "Deals McDealFace" - an internal email address for tracking deals! ### AI/ML Libraries - [FinGPT](https://github.com/AI4Finance-Foundation/FinGPT) - Data-Centric FinGPT. Open-source for open finance! Revolutionize 🔥 We'll soon release the trained model. ### Open Source AI/ML Projects - [Salesforce AI Research](https://blog.salesforceairesearch.com/large-action-models/) - Toward Actionable Generative AI LAMs: From Large Language Models to Large Action Models - [Otter](https://github.com/Luodian/Otter) - 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability. - [ToolBench](https://github.com/BuilderIO/ToolBench) - An open platform for training, serving, and evaluating large language model for tool learning. - [Guardrails](https://github.com/shreyar/guardrails) - Adding guardrails to large language models. - [LLM-ToolMaker](https://github.com/ctlllll/LLM-ToolMaker) - Large Language Models as Tool Makers. - [AI-Shell](https://github.com/BuilderIO/ai-shell) - A CLI that converts natural language to shell commands. - [FinGPT](https://github.com/AI4Finance-Foundation/FinGPT) - Data-Centric FinGPT. Open-source for open finance! Revolutionize 🔥 We'll soon release the trained model. 
- [LeanDojo](https://leandojo.org/) - We release LeanDojo (http://leandojo.org): an open-source playground consisting of toolkits, benchmarks, and models for LLMs to prove formal theorems in the Lean proof assistant. - [ToolBench](https://github.com/OpenBMB/ToolBench) - An open platform for training, serving, and evaluating large language models for tool learning. - [Ecoute](https://github.com/SevaSk/ecoute) - Ecoute is a live transcription tool that provides real-time transcripts for both the user's microphone input (You) and the user's speakers output (Speaker) in a textbox. It also generates a suggested response using OpenAI's GPT-3.5 for the user to say based on the live transcription of the conversation. ### AI/ML Learning Resources - [SocraticAI](https://princeton-nlp.github.io/SocraticAI/) - A research project exploring the application of the Socratic method as a tool for self-discovery within large language models. - [SequenceMatch](https://arxiv.org/abs/2306.05426) - A paper introducing a method for imitation learning in autoregressive sequence modelling with backtracking. - [ZipIt!](https://arxiv.org/abs/2305.03053) - A research article presenting a technique for combining models from different tasks without additional training. - [Understanding Social Reasoning in Language Models with Language Models](https://arxiv.org/abs/2306.15448) - This study delves into how social reasoning is encapsulated within language models. - [Supervised Pretraining Can Learn In-Context Reinforcement Learning](https://arxiv.org/abs/2306.14892) - An analysis demonstrating how supervised pretraining can be adapted for in-context reinforcement learning scenarios. - [Extending Context Window of Large Language Models via Positional Interpolation](https://arxiv.org/abs/2306.15595) - Research on techniques to extend the context window of large language models via positional interpolation. - [Inferring the Goals of Communicating Agents from Actions and Instructions](https://arxiv.org/abs/2306.16207) - A study on the methodologies for inferring the goals of agents from their actions and instructions. - [Toward Actionable Generative AI LAMs: From Large Language Models to Large Action Models](https://blog.salesforceairesearch.com/large-action-models/) - A blog post discussing the transition from large language models to large action models in generative AI. - [MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers](https://huggingface.co/papers/2307.00184) - A research paper outlining how to predict million-byte sequences using multiscale transformers. - [RepoFusion: Training Code Models to Understand Your Repository](https://huggingface.co/papers/2306.07944) - This study presents RepoFusion, a method for training code models to better understand a specific code repository. - [Personality Traits in Large Language Models](https://huggingface.co/papers/2307.00119) - A study investigating how personality traits manifest within large language models. - [Improving Language Plasticity via Pretraining with Active Forgetting](https://huggingface.co/papers/2307.01163) - A research paper discussing strategies to improve language plasticity through pretraining with active forgetting. 
- [RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model](https://huggingface.co/papers/2306.16269) - A research paper on learning to prompt for instance segmentation in remote sensing, based on a visual foundation model. - [The Concise TypeScript Book](https://github.com/nat/openplayground) - A comprehensive and concise guidebook for learning the TypeScript programming language. - [VLLM](https://arxiv.org/abs/2306.06070) - A research paper discussing the VLLM, an approach to language modelling. - [GPT Migrate](https://github.com/swarnaHub/ExplanationIntervention) - A GitHub repository detailing the process and techniques of migrating GPT models. - [LLM As Chatbot](https://arxiv.org/abs/2306.09299) - An exploration of the application of large language models in chatbot development. - [MetaGPT](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4475995) - A research paper discussing the MetaGPT approach to generative pretraining. - [System Design](https://github.com/AntonOsika/gpt-engineer) - A GitHub repository offering resources and projects related to system design in AI. - [Kaguya](https://github.com/SevaSk/ecoute) - A GitHub repository for the Kaguya project, an AI system for intelligent music generation. - [Talk](https://arxiv.org/abs/2307.00522) - A research paper discussing 'Talk', a novel methodology for conversational AI. - [Emergent](https://arxiv.org/abs/2302.00049) - A research paper investigating the concept of emergence in machine learning and artificial intelligence contexts. - [LLM Survey](https://arxiv.org/abs/2306.07915) - A survey paper providing a comprehensive overview of the field of Large Language Models (LLM). - [Gorilla](https://arxiv.org/abs/2306.17563) - A research paper discussing Gorilla, an approach aimed at enhancing learning efficiency in AI. - [Dialoqbase](https://huggingface.co/papers/2306.07915) - A paper introducing Dialoqbase, a project that aims to improve the generation of human-like dialogue. - [Financial Document Analysis with LlamaIndex](https://github.com/openai/openai-cookbook/blob/main/examples/third_party_examples/financial_document_analysis_with_llamaindex.ipynb) - A Jupyter Notebook showcasing how to use the LlamaIndex tool to perform financial document analysis. - [GPT-Index](https://gpt-index.readthedocs.io/en/latest/examples/output_parsing/evaporate_program.html) - Documentation for GPT-Index, a tool for indexing and querying text using the GPT model. - [MindMapper](https://arxiv.org/abs/2306.16207) - This research presents the MindMapper technique, a new methodology for mapping and interpreting AI models. - [Linen.dev](https://arxiv.org/abs/2306.15448) - This paper explores Linen, a tool designed for developing and fine-tuning AI models. - [MetaGPT](https://github.com/THUDM/WebGLM) - A GitHub repository dedicated to the MetaGPT project, which investigates meta-learning in GPT. - [APITable](https://github.com/HazyResearch/TART) - A GitHub repository for the APITable project, a research endeavor focused on the design and use of APIs in AI. - [GPT4All](https://huggingface.co/papers/2306.07944) - A research paper discussing the GPT4All approach, which aims to democratize the use of large language models. - [PySpark AI](https://huggingface.co/papers/2306.07915) - A paper introducing PySpark AI, an approach designed for large-scale data processing in machine learning. 
### HR for Global Teams - [Assigning AI: Seven Approaches for Students, with Prompts](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4475995) ### Open Source AI/ML Projects - The Concise TypeScript Book - VLLM - GPT Migrate - LLM As Chatbot - MetaGPT - System Design - Kaguya - Talk - Emergent - LLM Survey - Gorilla - Dialoqbase - Financial Document Analysis with LlamaIndex - GPT-Index - MindMapper - Linen.dev - MetaGPT - APITable - GPT4All - PySpark AI - Monster API - Platform Engineering Works - Ray Serve - Ray Aviary - smol developer - Rift - Cursor - Modal - Chainlit - Pipedream - Make - Render - Otter - Quivr Mobile - AutoScrum - Autoscrum™: Automating Project Planning Using Language Model Programs - Generative AI Github Assistant - Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents - Orca: Progressive Learning from Complex Explanation Traces of GPT-4 - Toolformer (Toolformer-pytorch) - Toolformer: Language Models Can Teach Themselves to Use Tools - ToolBench - Guardrails - LLM-ToolMaker - ai-shell - FinGPT - The Socratic Method for Self-Discovery in Large Language Models - Deals McDealFace - LeanDojo - ZipIt! - SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking - Custom Retriever combining KG Index and VectorStore Index - From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought - ToolBench - Adding Guardrails to Large Language Models - ecoute - LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance - Transformers Meet Directed Graphs - OpenPlayground - Inferring the Goals of Communicating Agents from Actions and Instructions - Understanding Social Reasoning in Language Models with Language Models - WebGLM - TART - Speech-to-Text Adapter and Speech-to-Entity Retriever Augmented LLMs for Speech Understanding - Image Captioners Are Scalable Vision Learners Too - Efficient Exploration via Epistemic-Risk-Seeking Policy Optimization - FLARE Query Engine - Mind2Web: Towards a Generalist Agent for the Web - Improve ChatGPT with Knowledge Graphs - Meta-training with Demonstration Retrieval for Efficient Few-shot Learning - Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting - Improving Language Plasticity via Pretraining with Active Forgetting - RSPrompter - RSPrompter (pytorch implementation) - AutoScrum evaluation - Whisper.cpp - LEDITS - Transformers Meet Directed Graphs - OpenPlayground - Inferring the Goals of Communicating Agents from Actions and Instructions - Understanding Social Reasoning in Language Models with Language Models - The Quivr React Native Client - Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind - Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Theory of Mind - Automata: Bottom-up self-coding agents
rafaballerini/lanchonete
https://github.com/rafaballerini/lanchonete
Project developed for the Tech Challenge of the Pós Tech program in Software Architecture
digitalinnovationone/santander-dev-week-2023-api
https://github.com/digitalinnovationone/santander-dev-week-2023-api
RESTful API for Santander Dev Week 2023, built with Java 17 and Spring Boot 3.
# Santander Dev Week 2023 Java API

RESTful API for Santander Dev Week 2023, built with Java 17 and Spring Boot 3.

## Main Technologies

- **Java 17**: we use the latest LTS version of Java to take advantage of the newest innovations this robust and widely used language offers;
- **Spring Boot 3**: we work with the newest version of Spring Boot, which maximizes developer productivity through its powerful auto-configuration premise;
- **Spring Data JPA**: we explore how this tool can simplify our data access layer and ease integration with SQL databases;
- **OpenAPI (Swagger)**: we create effective, easy-to-understand API documentation using OpenAPI (Swagger), perfectly aligned with the high productivity Spring Boot provides;
- **Railway**: makes it easy to deploy and monitor our solutions in the cloud, and also offers several databases as a service and CI/CD pipelines.

## [Figma Link](https://www.figma.com/file/0ZsjwjsYlYd3timxqMWlbj/SANTANDER---Projeto-Web%2FMobile?type=design&node-id=1421%3A432&mode=design&t=6dPQuerScEQH0zAn-1)

Figma was used to abstract the domain of this API and was helpful during the analysis and design of the solution.

## Class Diagram (API Domain)

```mermaid
classDiagram
  class User {
    -String name
    -Account account
    -Feature[] features
    -Card card
    -News[] news
  }
  class Account {
    -String number
    -String agency
    -Number balance
    -Number limit
  }
  class Feature {
    -String icon
    -String description
  }
  class Card {
    -String number
    -Number limit
  }
  class News {
    -String icon
    -String description
  }
  User "1" *-- "1" Account
  User "1" *-- "N" Feature
  User "1" *-- "1" Card
  User "1" *-- "N" News
```

## API Documentation (Swagger)

### [https://sdw-2023-prd.up.railway.app/swagger-ui.html](https://sdw-2023-prd.up.railway.app/swagger-ui.html)

This API will only be available on Railway for a limited time, but the source code is open. So feel free to clone it, modify it (it is a good base for new projects) and run it locally or wherever you find most interesting! Just don't forget to tag us when you share your solution 🥰

### IMPORTANT

For those interested in building the Santander app home screen (Figma) in Angular, Android, iOS or Flutter: if the production URL is no longer available, we left a backup on GitHub Pages; just send a GET request there 😘

- Production URL: https://sdw-2023-prd.up.railway.app/users/1
- Mock (Backup): https://digitalinnovationone.github.io/santander-dev-week-2023-api/mocks/find_one.json
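For a quick smoke test of the endpoints above, the backup mock can be fetched with a few lines of Python. This sketch is not part of the project; the JSON field names (`name`, `account`, `news`, ...) are assumed to mirror the class diagram shown earlier.

```python
# Illustrative only (not part of the project): fetch the GitHub Pages mock and
# print a few fields. Field names are assumed to follow the class diagram.
import requests

MOCK_URL = "https://digitalinnovationone.github.io/santander-dev-week-2023-api/mocks/find_one.json"

user = requests.get(MOCK_URL, timeout=10).json()
print(user["name"])                   # User.name
print(user["account"]["balance"])     # Account.balance (nested object)
for news in user.get("news", []):     # News[] entries with icon/description
    print("-", news["description"])
```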
extremecoders-re/nuitka-extractor
https://github.com/extremecoders-re/nuitka-extractor
Tool to extract nuitka compiled executables
# Nuitka Extractor

Nuitka Extractor is a tool to extract Nuitka-compiled Python executables. Only onefile (single-file) executables are supported. The tool can extract both Windows PE and Linux ELF binaries.

Nuitka compiles Python code to native code, so a Nuitka-compiled executable doesn't contain pyc files. Hence this tool can only extract DLLs, shared libraries and other binary assets embedded in the executable.

## Usage

Precompiled binaries can be downloaded from the releases page. Simply pass the path to the file as an argument.

```
$ ./nuitka-extractor <file name>
```

```
X:\> nuitka-extractor.exe <file name>
```

## ToDo

- Signed PE executables are not yet supported. Strip the certificate before extraction (a rough, illustrative `pefile`-based sketch is included at the end of this README).
- Support for executables with a hardcoded extraction directory. For such executables, Nuitka includes the crc32 of each embedded file within the payload, so the payload format is slightly different.

## License

Nuitka Extractor is released under the MIT license.
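Until signed executables are handled natively, you can check for an Authenticode certificate and, in the usual case where the certificate blob is simply appended to the end of the file, strip it with a small Python helper. This is only an illustrative sketch built on the third-party `pefile` package, not part of nuitka-extractor; the helper name and exact behaviour are assumptions.

```python
# Illustrative helper (not part of nuitka-extractor): strip an Authenticode
# certificate so a signed PE can be fed to the extractor.
# Assumes the certificate table is appended at the end of the file, which is
# the usual layout; for anything unusual, prefer a dedicated signing tool.
import pefile

def strip_certificate(src_path: str, dst_path: str) -> None:
    pe = pefile.PE(src_path)
    idx = pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]
    sec = pe.OPTIONAL_HEADER.DATA_DIRECTORY[idx]
    if not (sec.VirtualAddress and sec.Size):
        # Not signed: pass the file through unchanged.
        data = pe.write()
    else:
        cert_offset = sec.VirtualAddress  # for this directory it is a raw file offset
        sec.VirtualAddress = 0
        sec.Size = 0
        data = pe.write()[:cert_offset]   # drop the appended certificate blob
    with open(dst_path, "wb") as f:
        f.write(bytes(data))

strip_certificate("app-signed.exe", "app-unsigned.exe")
```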
wldfngrs/chrome-dinosaur-terminal
https://github.com/wldfngrs/chrome-dinosaur-terminal
A C++ and ncurses rendering of the popular chrome dinosaur game on the terminal
![dino_gif](https://github.com/wldfngrs/chrome-dinosaur-terminal/blob/main/assets/dino.gif)

# Dino

This repository contains a rendering of the popular Chrome dinosaur game on the terminal, written in C++ and ncurses.

## Requirements

### ncurses library

`sudo apt-get install libncurses5-dev libncursesw5-dev`

### C++ compiler

`sudo apt-get install g++`

## Installation

To install this program, clone this repository onto your local machine using the following command:

`$ git clone git@github.com:wldfngrs/chrome-dinosaur-terminal.git`

Once you have cloned the repository, navigate to the project directory and run the following command:

`$ make`

This will compile the program and create the executable file in the `bin` subdirectory. Navigate to the `bin` subdirectory and run the following command:

`$ ./main`

Enjoy!

## Credits

Thanks to [fundamelon](https://github.com/fundamelon/terminal-game-tutorial) and their terminal game tutorial for an intro to terminal game development.
maxnowack/signaldb
https://github.com/maxnowack/signaldb
SignalDB is a client-side database with a MongoDB-like interface and TypeScript support. It allows easy data management, reactivity and persistence on the client side, with plans for future replication capabilities.
<p align="center"> <a href="#"> <img src="./docs/public/logo.svg" width="150px" alt="JavaScript Database" /> </a> </p> # SignalDB SignalDB is a client side database that provides an easy mongob-like interface to the data with first class typescript support. Data persistance can be archieved by using storage providers which save the data through a JSON interface to placed such as localStorage. Support for replicating data with remote storages is planned, but not yet implemented. However, there will be a storage provider for RxDB that will take over the replication. ## Installation ```` $ npm install signaldb ```` ## Usage ```js import { Collection } from 'signaldb' const posts = new Collection() const postId = posts.insert({ title: 'Foo', text: 'Lorem ipsum …' }) const cursor = collection.find({}) console.log(cursor.fetch()) // returns an array with all documents in the collection ``` Please also take a look at the [documentation](https://maxnowack.github.io/signaldb) ## Architecture ### Reactivity SignalDB uses the primitives of signals to provide reactivity. We want to keep dependencies small and don't lock-in to a specific framework. SignalDB works with the signal library of your choice. We've written adapters for all popular ones. If you thinnk there is a adapter missing, you can file an issue or provide a Pull Request and add it by yourself. ### Collections & Queries SignalDB holds all data in memory to provide blazing fast query performance. This also allows it to have a synchronous api. Thanks to that, you don't have to care about asynchronous operations while working with your data. ### Data Persistance SignalDB provides an interface were data can be persisted. It works by loading and saving the documents inside to an external system. Reads and writes are triggered by events from both directions. The most simple and default persistance interface is `localStorage`, were data will be loaded and saved from `window.localStorage`. However, since all data lays in memory, data persistance is totally optional and only needed if you want to keep your data across page loads. ### Replication It's planned to implement a data replication engine based on the paradigms used by then [replication protocol of RxDB](https://rxdb.info/replication.html) ([more info](https://github.com/pubkey/rxdb/issues/3883)). In the first version we offer data replication by just implementing a persistance interface for RxDB and the replication will be handled inside RxDB. ## License Licensed under MIT license. Copyright (c) 2023 Max Nowack ## Contributions Contributions are welcome. Please open issues and/or file Pull Requests. ## Maintainers - Max Nowack ([maxnowack](https://github.com/maxnowack))
wipeout-phantom-edition/wipeout-phantom-edition
https://github.com/wipeout-phantom-edition/wipeout-phantom-edition
An enhanced PC source port of the original WipeOut.
# WipeOut Phantom Edition [![](images/screenshot01thumb.png)](images/screenshot01.png) WipeOut Phantom Edition is an enhanced PC source port of the original WipeOut. It uses game data from the PlayStation version and is much more comparable to the PlayStation version than the official PC port. ## Features ### 🖥️ Graphics - 🚀**Uncapped frame rate**: Render frame rate is decoupled from game state simulation using interpolation. - 📈**High resolution rendering**: Matches your desktop resolution by default. - 🛣️**Distant geometry fade**: Objects fade into view smoothly, eliminating pop-ins. - 🚨**Ship lighting**: Ships inherit coloration from track lighting data, similar to WipeOut 2 and 3. - 👓**Increased view distance**: See further into the distance. - 💻**Configurable aspect ratio and widescreen support**: Adjust screen settings to suit your monitor. - 📼**Optional lo-fi resolution mode**: Switch to 320x240 graphics mode. - 📺**Maintained PSX-accurate rasterization and blending**: Retains authentic PlayStation look by only using blending features available on original hardware, while also providing high resolution smooth graphics. ### 🕹️ Gameplay - ⌨️**Keyboard and gamepad input support**: Choose your preferred control input method. - 💥**Wall collision response options**: - **Modern**: Comparable to BallisticNG. - **Classic**: Comparable to WipeOut 2. - **Legacy**: 🪦 - 🎇**Wall scrape particle effects and audio**: Visual and audio enhancement in Modern and Classic wall collision modes. ### 📢 Audio - 📻**New music and sound effect system**: Similar to PlayStation version. - 🚒**3D audio for sound effects**: Spatial audio sources and doppler effect. ### 🎛️ UI - 🎚️**Additional options menus**: Configure most of the new features. - 🎮**Keyboard and gamepad control configuration**: Customize your controls to your preference. ### 🤓 Technical - 💾**New config file system**: Game configuration data and progress is stored in editable text files. - 💽**Automatic game data extraction**: The game can automatically extract game data files from provided bin/cue disk image files. ## Setup > #### **TL;DR**: Download the [latest release](https://github.com/wipeout-phantom-edition/wipeout-phantom-edition/releases/latest), put your PlayStation USA-region `.bin` and `.cue` files in `wipeout/diskimages`, and launch the game. Download the [latest release](https://github.com/wipeout-phantom-edition/wipeout-phantom-edition/releases/latest) and unzip the `wipeout` folder to your desired location on your hard drive. You'll need game data files from the original PlayStation USA-region version of Wipeout. You can either manually provide these files or supply bin/cue disk image files, which can be obtained by ripping a disk you own. The disk image method is preferred as it automatically extracts the music into wav files. **IMPORTANT:** Ensure that the game data is from the **PlayStation USA-region** version of Wipeout. Data from official PC versions won't work. ### Disk Image Method - **Place Disk Image Files**: Locate the `wipeout/diskimages` directory and place your Wipeout disk image files there. - **Ensure Correct Format**: Your disk image must be a multi-bin `.bin` and `.cue` format. There should be 9 `.bin` files and one `.cue` file. 
Example: ``` WipeOut USA (Track 1).bin WipeOut USA (Track 2).bin WipeOut USA (Track 3).bin WipeOut USA (Track 4).bin WipeOut USA (Track 5).bin WipeOut USA (Track 6).bin WipeOut USA (Track 7).bin WipeOut USA (Track 8).bin WipeOut USA (Track 9).bin WipeOut USA.cue ``` The game data should be in "MODE2/2352" format in the first track of the cue sheet, while other tracks should be in "AUDIO" format. **Extraction on Startup**: Upon launching, the game will check for missing data files and attempt to extract them from a disk image. **Removal of Disk Image Files**: After the game has successfully loaded into the main menu once, the disk image files are no longer required and can be removed. **NOTE: Since reading the file system in the disk image is non-trivial, a hash-based search is performed on the data track of the disk image. This can be slow on systems with less than 8 CPU cores.** ### Loose File Method **Copying Game Files**: If you already possess all the game files (517 in total), copy them directly into the `wipeout/wipeoutgame` folder. These files can be obtained directly from a PlayStation disk using windows explorer. **Music Files**: The downside of this method is that the music, which is stored in Red Book audio tracks on the CD and not in a file system, cannot be copied. For music, you can use the Disk Image method, or if you have individual music files, place them in the `wipeout/music` folder. Note that these files must follow a specific naming convention, with 2-digit numbers between 01-32 in their name. For more information, see `wipeout/music/musicgoeshere.txt`. ## Screenshots [![](images/screenshot02.png)](images/screenshot02.png) [![](images/screenshot03.png)](images/screenshot03.png) [![](images/screenshot04.png)](images/screenshot04.png) [![](images/screenshot05.png)](images/screenshot05.png) [![](images/screenshot06.png)](images/screenshot06.png)