full_name | url | description | readme
---|---|---|---
verytinydever/speech-to-text
|
https://github.com/verytinydever/speech-to-text
| null |
# speech-to-text
|
xzxADIxzx/Join-and-kill-em-together
|
https://github.com/xzxADIxzx/Join-and-kill-em-together
|
Multikill is still in development, so I created my own multiplayer mod for ultrakill
|
# Join and kill 'em together
Multikill is still in development, so I created my own multiplayer mod for ultrakill.
It's also in development, but with gnashing of teeth it's already playable.
## Features
* Steam integration for invitations.
* Chat, list of players and indicators to help you find each other on the map.
* Synchronization of player positions, their weapons and projectiles.
* Synchronization of the position and health of enemies.
* Up to 5 teams, making both campaign progression and PvP available.
## Installation
Before installation, it's important to know that the mod needs **BepInEx** and **Ultra Mod Manager** to work.
Without them nothing will make a *beep boop* sound.
### Mod manager
The mod manager will do everything itself; that's what the mod manager is for.
### Manual
1. Download the mod zip archive from Thunderstore.
2. Find the **UMM Mods** folder.
3. Extract the content of the archive into a subfolder.
Example: `UMM Mods/Jaket/Jaket.dll`, etc.
## Building
To compile, you need the .NET 6.0 SDK and Git.
1. Clone the repository with `git clone https://github.com/xzxADIxzx/Join-and-kill-em-together.git`
2. Run `dotnet restore`
3. Create a `lib` folder in the root directory.
1. Copy `Assembly-CSharp.dll`, `Facepunch.Steamworks.Win64.dll`, `UMM.dll` and `UnityEngine.UI.dll` from `ULTRAKILL\ULTRAKILL_Data\Managed`.
2. As well as `BepInEx.dll` and `0Harmony.dll` from `ULTRAKILL\BepInEx\core`.
4. Compile the mod with `dotnet build`.
5. The build produces the **Jaket.dll** file, which must be placed in the mods folder.
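Put together, the manual build might look like the sketch below; `$ULTRAKILL` is a placeholder for your game directory, not a variable the project defines:
```bash
git clone https://github.com/xzxADIxzx/Join-and-kill-em-together.git
cd Join-and-kill-em-together
dotnet restore
mkdir lib
# copy the game and loader assemblies listed in step 3 (paths depend on your install)
cp "$ULTRAKILL/ULTRAKILL_Data/Managed/"{Assembly-CSharp,Facepunch.Steamworks.Win64,UMM,UnityEngine.UI}.dll lib/
cp "$ULTRAKILL/BepInEx/core/"{BepInEx,0Harmony}.dll lib/
dotnet build
```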
## Afterword
The mod is still in development, so numerous bugs may occur.
Anyway, feel free to ping me on Discord at **xzxADIxzx#7729** or join our [server](https://discord.gg/USpt3hCBgn).
|
OneB1ank/A1Memory
|
https://github.com/OneB1ank/A1Memory
|
Android third-party memory management
|
# A1 memory management
[Telegram](https://t.me/HCha1234)
Third-party memory management running on Android can reduce the running memory used by applications.
**Language**
[English](README.md) | [中文](README-zh.md) | [Русский язык](README-ru.md)
## ✨Important Functions
- Managing the survival and termination of background processes
- Specifying the release of background application child processes
- Preventing Low Memory Killer Daemon from killing background processes
- Automatically releasing non-essential memory
- Putting applications to sleep to reduce CPU and memory usage
## 💡Description
- This module only supports Android 8 to 13, and the platform is limited to arm64-v8a.
- Magisk version should be 20.4+; KernelSU (ksu) is mostly compatible.
- This module will not conflict with any other modules.
### Default List Path
- List File: /sdcard/Android/HChai/HC_memory/名单列表.conf
## 📝Custom Configuration
The built-in configuration suits most devices, but some devices may not work with the defaults, so more adjustable parameters are provided. This requirement was already taken into account when designing the HAMv2 framework, and most parameters can be customized and adjusted. Moreover, this project can be embedded and run within other modules. The JSON configuration file is located at `/data/adb/modules/Hc_memory/config/memory.json`.
### Project Information
```json
"project": {
"name": "官方配置 [23.06.25]",
"author": "火機@coolapk"
}
```
| Field | Type | Description |
| ------ | ------ | -------------------------------------------- |
| name | string | Name of the configuration file |
| author | string | Author information of the configuration file |
The `name` and `author` are reflected in the logs in the following format:
```
[2023-07-06 19:00:22] [info] config 官方配置 [23.06.25] | by: 火機@coolapk
```
- For more detailed instructions on the JSON configuration file, please refer to [here](config/JSON-CONFIG.md).
## 🔍Frequently Asked Questions
Q: Can it be used in conjunction with other memory optimization modules?
A: A1 memory management works completely differently from other memory optimization methods, so using it together with other modules simply yields a cumulative 1+1=2 effect.
Q: Does it consume power?
A: Not at all. I spent a considerable amount of time optimizing the core code while developing the HAMv2 framework. It is implemented in low-level languages such as C/C++, resulting in minimal power consumption that can be safely ignored.
Q: Does it conflict with other Magisk modules or Xposed modules?
A: It is highly unlikely to conflict with other modules. So far, no conflicts have been reported.
Q: Does it cause power consumption during standby?
A: The HAMv2 framework does not consume power during standby, as A1 memory management enters a sleep state when the device is in standby mode.
Q: Why is a background process still killed even after enabling lmkd kill prevention?
A: Because this feature only prevents lmkd from killing background processes; it does not cover the background-killing programs of the various phone manufacturers.
Q: How do I configure the smart list?
A: Add a rule of the form "KILL package_name:subprocess_name" to the respective list. Before adding it, make sure you understand the functionality and purpose of the subprocess to avoid unexpected issues.
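For instance, a hypothetical entry (the package and subprocess names are made up for illustration) would be:
```
KILL com.example.app:pushservice
```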
Q: Why does the device enter fb mode after a certain period of time?
A: Most cases occur on Samsung devices when the hook that prevents lmkd from killing processes is enabled, which is likely the cause. There is currently no fix, but you can try disabling that hook to work around it.
Q: Why is a process playing audio being paused?
A: Pausing of audio processes is rare. If it does happen, add the process to the whitelist or disable the app hibernation feature; this ensures the audio process keeps running without being paused.
Q: I get a "Not compatible with this platform: xxxx" error when installing the module.
A: The module currently only supports the arm64-v8a platform; other platforms are not supported for now.
## 🚀Download
- [Go to Github to download](https://github.com/OneB1ank/A1Memory/releases)
## 🌟Star History
<a href="https://star-history.com/#OneB1ank/A1Memory&Timeline">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=OneB1ank/A1Memory&type=Timeline&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=OneB1ank/A1Memory&type=Timeline" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=OneB1ank/A1Memory&type=Timeline" />
</picture>
</a>
## 🙏Acknowledgments
Thanks to the following users or projects for their source code contributions to this project:
- [@yc9559](https://github.com/yc9559)
- [@HChenX](https://github.com/HChenX)
Thanks to the following users for their testing feedback and bug identification:
- @火機(coolapk)
## 🎉Support Donations
If you find this module useful, you can make a donation to support me.
- [爱发电 (Afdian)](https://afdian.net/a/HCha1)
- [Patreon](https://patreon.com/A1memory)
- USDT(TRC20)
> Address: TSqTqn2NcyUAbEwsdGgsrYoU5pokno5PnQ
|
rhwong/unidbg-fetch-qsign-onekey
|
https://github.com/rhwong/unidbg-fetch-qsign-onekey
|
One-click deployment of fuqiuluo/unidbg-fetch-qsign on Windows
|
<div align="center">
<img alt="OOOK" src="https://olivos.onekey.ren/img/logo.png"/>
# unidbg-fetch-qsign-onekey
This project uses version 1.1.7. Since the current go-cqhttp (1.1.0) has not yet been adapted to it, please download an unofficial build from [this repo](https://github.com/rhwong/go-cqhttp-dev/releases/tag/v1.1.1-dev). For older versions, see the releases page.
One-click deployment of [fuqiuluo/unidbg-fetch-qsign](https://github.com/fuqiuluo/unidbg-fetch-qsign) on Windows<br>
<img src="https://img.shields.io/github/issues/rhwong/unidbg-fetch-qsign-onekey"> <img src="https://img.shields.io/github/forks/rhwong/unidbg-fetch-qsign-onekey">
<img src="https://img.shields.io/github/stars/rhwong/unidbg-fetch-qsign-onekey"> <img src="https://img.shields.io/github/license/rhwong/unidbg-fetch-qsign-onekey">
<img src="https://img.shields.io/github/downloads/rhwong/unidbg-fetch-qsign-onekey/total">
Note: this one-click package is recommended for Windows Server 2016 ~ 2022 (21H2)
<img src="https://img.shields.io/badge/Windows-x64-red?style=flat-square&logo=Windows"> <img src="https://img.shields.io/badge/Windows-x86-red?style=flat-square&logo=Windows">
</div>
<!-- projectInfo end -->
## ✨Installation Notes
Place this project in the same directory as `go-cqhttp`; the script will then automatically read some parameters from the `config.yml` file.
If you need to connect a different client, you can simply extract the package to any path containing no Chinese characters or spaces.
Known issue:
On the latest Windows Server builds with the Windows 11 kernel, there are serious compatibility problems,
manifesting as various errors when connecting to go-cqhttp; the cause is unknown.
When buying a cloud server, it is recommended not to choose Windows Server 2022.
## ✨Usage
### Running together with go-cqhttp
1. First, run `go-cqhttp.bat` and follow the prompts to generate `config.yml`. (👈 This applies when running from the same directory; skip this step when running standalone.)
2. Double-click `Start_Qsign.bat` to start qsign, then follow the prompts to enter the `txlib_version` parameter and set the values of `host`, `port`, and `key`. (👈 If you don't know what these are for, just press Enter at each prompt.)
3. After startup, qsign prints the address the API is running at; if you never set host and key, the default is `http://127.0.0.1:13579`. The key was set at startup; if you never set it, the default is `1145141919810`. Fill these two parameters in wherever the account's sign server needs to be configured.
4. If the script detects that `config.yml` has no account and password set, it will prompt you for the account (uin) and password, which are then written back into `config.yml`.
5. If you later need to change `txlib_version`, you can delete `txlib_version.json` in the folder root to re-enter the setup flow, or edit that file directly.
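For reference, a minimal sketch of the two values in `config.yml` that the script fills in (assuming the defaults above; the exact nesting depends on your go-cqhttp version):
```yaml
# hypothetical excerpt of go-cqhttp's config.yml
account:
  sign-server: 'http://127.0.0.1:13579'  # the qsign API address
  key: '1145141919810'                   # the qsign key
```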
### Running standalone
Follow the steps above, skipping the go-cqhttp-related ones.
## 🌏Sharing your Qsign API
You can share your Qsign API [here](https://github.com/rhwong/unidbg-fetch-qsign-onekey/issues/8) for testing and use by devices with weaker performance, or devices where unidbg is not applicable.
Please note that, due to the program's performance limits, concurrency requires more CPU cores, and computation time depends on your single-core performance. Sharing from low-performance servers is not recommended.
We share a list of free public Qsign APIs [here](https://qsign.dev), but please note that we neither advise nor recommend long-term use of public APIs.
This advice is based on how the signing service works and its particular risks; we only recommend using Qsign APIs from that list when your self-hosted API does not work correctly, or for testing.
Note: using a service hosted by someone else may leak the following information
| May be leaked | Will not be leaked |
| --------------------------- | --------------- |
| Login account | Account password |
| Login time | Account `session` |
| Content of messages sent after login | Group list / friend list |
| Group IDs / friend IDs messaged after login | Received messages |
| | Any history other than sent messages |
## ❌About Qsign_Monitor
~~It was recommended to right-click `Qsign_Monitor.ps1` and choose `Run with PowerShell`, which starts the qsign server while enabling monitoring.~~
~~This PowerShell script checks whether the sign server is working; if it detects no service on port 13579, it re-runs `Start_Qsign.bat`.~~
~~So if you customized the port, you need to adjust this script accordingly.~~
This script currently does not run well, so please do not use it for the time being. (issues#3)
## 📢What's New
#### 📅2023-07-07
Updated to version 1.1.3
- Now, when placed alongside `config.yml`, `Start_Qsign.bat` automatically updates the `sign-server` and `key` values in `config.yml`, for a fully hands-off start!
On first run, you will be prompted to set the `host`, `port`, and `key` values; if you don't want to enter them, or don't know how, just press Enter.
- The program defaults to protocol version `8.9.63`; if the protocol bundled with official `go-cqhttp` releases is updated later, this repository will update the scripts in step. Star the repo to stay up to date!
#### 📅2023-07-18
- Updated to 1.1.5; see the upstream repository for new features
#### 📅2023-07-20
- Updated to 1.1.6, with the following new features:
A txlib_version parameter is now requested at startup, used to set the txlib_version you need.
Options are `8.9.63` (default) / `8.9.68` / `8.9.70`.
The `dev` build of `go-cqhttp` is still on `8.9.63`, so the default here is `8.9.63`.
For other clients, e.g. `icqq` which defaults to `8.9.68`, enter and change this value yourself.
#### 📅2023-07-27
- Updated to 1.1.7; see the upstream repository for new features
#### 📅2023-07-30
support: txlib_8.9.58
- Grabbed a few `android_pad.json` / `android_phone.json` files to make it easier to match protocol information between the client and the sign server. [MrXiaoM/qsign](https://github.com/MrXiaoM/qsign)
- Pulled the fixed build of 1.1.7 (I honestly hadn't noticed there was a fix; the current version seems to crash, and I'm not sure whether the fixed build resolves it) fuqiuluo/unidbg-fetch-qsign#164
- 🎉Also, happy birthday to me~🥰🎂
## ✨Disclaimer
- All binaries related to the qsign signing service in this repository come from [fuqiuluo/unidbg-fetch-qsign](https://github.com/fuqiuluo/unidbg-fetch-qsign)
- The unofficial go-cqhttp builds used by this repository during periods when go-cqhttp and qsign versions are incompatible all come from my own or others' actions/releases, with sources noted.
- You can verify every binary in this repository against the official repositories' binaries via SHA1. I am not, and will not be, responsible for any (potential) security issues of the binaries on any branch of this repository.
|
darjaorlova/fluttercon23-code-audit-resources
|
https://github.com/darjaorlova/fluttercon23-code-audit-resources
| null |
# fluttercon23-code-audit-resources
*The recording of the talk will be posted here; check back later!*
[Slides from the presentation](https://docs.google.com/presentation/d/1CIeR8FLSYTZ-MjEsPd5meIHsls9fZZqi4rgWt452kOI/edit?usp=sharing)
[Template for code audit in Google sheets](https://docs.google.com/spreadsheets/d/1Hj2rQ-POTW0QenBYwrDJuPytGjKmuL0usOFeBamYHLs/edit?usp=sharing)
[Flutter mobile app code audit: Article](https://chililabs.io/blog/flutter-mobile-app-code-audit)
Useful links:
[Mastering Dart & Flutter DevTools](https://medium.com/@fluttergems/mastering-dart-flutter-devtools-series-introduction-installation-part-1-of-8-4f703a8cfcc8)
[How to Store API Keys in Flutter: --dart-define vs .env files](https://codewithandrea.com/articles/flutter-api-keys-dart-define-env-files/)
[Measuring your app's size](https://docs.flutter.dev/perf/app-size)
[Obfuscate Dart code](https://docs.flutter.dev/deployment/obfuscate)
[OWASP Mobile Top 10](https://owasp.org/www-project-mobile-top-10/)
[Securing Flutter Apps | OWASP Top 10 for mobile & RASP explained](https://www.youtube.com/watch?v=DYwn4KciL1U)
|
tiev-tongji/LOG-LIO
|
https://github.com/tiev-tongji/LOG-LIO
|
A LiDAR-Inertial Odometry with Efficient Local Geometric Information Estimation
|
# LOG-LIO
The code will be released after the paper is accepted.
|
fabiospampinato/pollex
|
https://github.com/fabiospampinato/pollex
|
A tiny polling-based filesystem watcher that tries to be efficient.
|
# Pollex
A tiny polling-based filesystem watcher that tries to be efficient.
This is a port of the idea behind [`esbuild`](https://esbuild.github.io/api/#watch)'s filesystem watcher, which uses polling but more efficiently than using it naively.
## Features
Unless you really want this, I'd recommend using a [normal](https://github.com/fabiospampinato/watcher) filesystem watcher instead, since polling can get expensive.
- Changes happening inside the root folder should get picked up roughly within `options.pollingIntervalCold / 2` milliseconds on average.
- Changes happening inside files that the watcher already saw changing should get picked up roughly within `options.pollingIntervalHot / 2` milliseconds on average.
- Basically files that changed already are considered hot and polled frequently, while random subsets of the other files are polled infrequently.
- This should be roughly `options.pollingIntervalCold / options.pollingIntervalHot` times cheaper than just polling every file on a `options.pollingIntervalHot` interval.
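For example, with the default options shown below (`pollingIntervalCold: 2000`, `pollingIntervalHot: 50`), that works out to roughly 2000 / 50 = 40× cheaper than hot-polling every file.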
In some niche scenarios a filesystem watcher that works like this could be useful:
- It doesn't rely on potentially buggy native filesystem watching APIs, it just needs to be able to fire `stat` syscalls.
- It works even over filesystems for which a native watching API is not available.
- It doesn't run out of file descriptors, because it doesn't keep any open indefinitely.
- It can potentially be used to react to filesystem events quicker than it's possible with `fs.watch` in Node, which is weirdly slow.
- It's about 20x smaller than [`chokidar`](https://github.com/paulmillr/chokidar), with no third-party dependencies, and a way simpler implementation.
## Install
```sh
npm install --save pollex
```
## Usage
```ts
import pollex from 'pollex';
// Let's define some options
const pollexOptions = {
depth: 20, // Maximum depth to look at
limit: 1_000_000, // Maximum number of files explored, useful as a stop gap in some edge cases
followSymlinks: true, // Whether to follow symlinks or not
ignore: targetPath => /node_modules/.test ( targetPath ), // Function that if returns true will ignore this particular file or a directory and its descendants
ignoreInitial: true, // Ignore the initial "add" and "addDir" events while the folder is being scanned the first time
ignoreReady: true, // Ignore the "ready" event, useful in combination with "ignoreInitial: true" to only get notified about actual changes
pollingIntervalCold: 2000, // Poll all cold files, in different random subsets, within this amount of time, roughly
pollingIntervalHot: 50 // Poll all hot files within this amount of time, roughly
};
// Let's listen for events
pollex ( process.cwd (), ( event, targetPath ) => {
if ( event === 'add' ) {
// The file at "targetPath" got added
} else if ( event === 'addDir' ) {
// The folder at "targetPath" got added
} else if ( event === 'change' ) {
// The file at "targetPath" changed
} else if ( event === 'ready' ) {
// The initial scan has been done and all initial events have been emitted
} else if ( event === 'unlink' ) {
// The file at "targetPath" got deleted
} else if ( event === 'unlinkDir' ) {
// The folder at "targetPath" got deleted
}
}, pollexOptions );
```
## License
MIT © Fabio Spampinato
|
Hoshinonyaruko/Red-Adapter
|
https://github.com/Hoshinonyaruko/Red-Adapter
|
red-protocol simple onebot adapter
|
# BetterQQNT-Adapter
red-protocol simple onebot adapter
The source code has not been published because, as BetterQQNT's author microblock pointed out, secondary development could bring uncontrollable risks.
# BetterQQNT-Adapter-epl
An epl adapter demo for Red-Protocol; feel free to use it as a reference for implementations in other languages.
Please do not mention this tutorial in official group chats and the like.
Please do not promote this tutorial with videos, posts, etc.
Please do not spread this tutorial widely, and do not submit it to video sites or forums.
# Step 1
- Join the bqnt channel to get the dll: https://t.me/betterqqnt
- It may not be redistributed, so obtain it yourself from the original project repository: https://github.com/BetterQQNT/BetterQQNT
- For certain reasons, bqnt must now be obtained by installing koishi and chronocat; after launching with koishi's chronocat/launcher and reaching the green-light state, the usage below still works.
- If you prefer to run it standalone without installing koishi, you can: search npmjs for chronocat, find chronocat/launcher, install it with npm --save, then run the executable for your platform with the -h flag.
- Rename better-qqnt-x64.dll to version.dll and place it in the QQNT installation folder.
- In QQ, go to Settings → Plugins and make sure RedProtocol is checked/enabled.
- Close the QQ window to prevent crashes.
# Step 2
Get the adapter:
https://wwcr.lanzoul.com/redpro
# Step 3
Get the TOKEN
- Under the path C:\Users\<your username, administrator by default>\AppData\Roaming\BetterUniverse\QQNT\
find the file RED_PROTOCOL_TOKEN, open it with Notepad, and copy out the token; you will need it in the next step.
# Step 4
- In the same directory as the red.exe obtained in Step 2,
create a file named 1.txt,
copy in the template below and replace the example bot number, bot ws address, red key, and red ws address (only if your red is not local; otherwise leave it unchanged)
with your own parameters.
Then rename 1.txt to 1.bat and run it.
```
title <title, customizable>
red ws://zaomiao.com:<bot port> <red key> <bot number> ws://127.0.0.1:16530
```
Example:
```
title 2289766976
red ws://zaomiao.com:20036 c5387d04c649852314ff43d50de1f2ca3b6c9867236e7dd6a56361269a0dcdb0 2289766976 ws://127.0.0.1:16530
```
Example 2 (koishi is already bundled in bqnt, but if you still want to connect to an old koishi onebot-adapter), see:
```
title 3570577015
red ws://127.0.0.1:5140/onebot 7dcb37d1b0874368e0e21b3242e7d5084bd6851b84108f3b1bde61c0055fd322 3570577015 ws://192.168.0.134:16530
```
Example 3 (nonebot2):
```
title 3570577015
red ws://127.0.0.1:8080/onebot/v11/ws 7dcb37d1b0874368e0e21b3242e7d5084bd6851b84108f3b1bde61c0055fd322 3570577015 ws://192.168.0.134:16530
```
```
Tips:
Cloud addresses should be: ws://zaomiao.com:<port>
---- Cloud addresses need no self-hosted backend (and are free) ----
(If you need to install plugins, use a local backend, or create multiple .bat files to run several instances; the adapter can run multiple copies connected to multiple backends, but bqnt cannot.)
Ports 20001~20050 are 早苗
Ports 20050~20070 are 灵梦
Ports 20071~20099 are 魔理沙
Port 25369 is the shared 云崽
Port 25370 is 澪
Port 25371 is 浅羽
Port 25372 is the shared 真寻
-------------- Local addresses --------------
The ws address for 真寻 and nb2 should be: ws://<address>:8080/onebot/v11/ws
The ws address for Trss 云崽 should be: ws://<address>:2536/go-cqhttp
The ws address for external Koishi should be: ws://<address>:5140/onebot
The ws address for built-in Koishi should be: ws://<address>:11400/onebot
```
Barring surprises, local addresses are generally 127.0.0.1.
For example, nonebot2's address is
ws://127.0.0.1:8080/onebot/v11/ws
If you changed the port, it may not be the default; this requires some understanding, and you must set the correct reverse-ws address yourself.
# Step 5
Run the 1.bat you just edited. If you run into problems, @ the author in the chat group to ask.
Free and open source; reselling for profit in any form is prohibited.
The author assumes no responsibility for any resulting consequences.
If you find someone engaging in such behavior, please report it by email.
If you have infringement concerns, or any other questions or disputes,
please send me an email; we can communicate and resolve the issue amicably.
[email protected]
# Addition License
Copyright (c) [2023] [Hoshinonyaruko]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, and distribute the Software, and to
permit persons to whom the Software is furnished to do so, subject to the
following conditions:
1. Redistributions in any form must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions of the Software must not be sold or used for any
commercial purpose without the express written consent of the copyright
holder.
3. Modifications to the Software must include the original author's
name as well as a clear indication of the changes made.
4. The Software is provided "as is", without warranty of any kind, express
or implied, including but not limited to the warranties of
merchantability, fitness for a particular purpose and noninfringement.
In no event shall the authors or copyright holders be liable for any
claim, damages or other liability, whether in an action of contract,
tort or otherwise, arising from, out of or in connection with the
Software or the use or other dealings in the Software.
5. The author is not responsible for any consequences resulting from the use
or redistribution of the Software.
|
OmdenaAI/myanmar-chapter-chest-x-rays
|
https://github.com/OmdenaAI/myanmar-chapter-chest-x-rays
| null |
# Identifying Diseases in Chest X-Rays & COVID-19 Detection
## The Problem
The problem we aim to solve through this project is two-fold. Firstly, the lack of accessible and reliable diagnostic tools for chest diseases, including COVID-19, hampers timely detection and intervention, which can lead to the rapid spread of the disease and its associated complications. Secondly, the scarcity of specialized healthcare professionals, particularly radiologists, in many parts of Myanmar, exacerbates the problem by limiting the availability of accurate and prompt diagnoses.
Our local community faces the burden of inadequate healthcare infrastructure and limited resources, which further amplifies the impact of these challenges. Early detection of COVID-19 and other chest diseases is crucial for effective treatment and preventing the spread of the virus. By deploying an AI-powered solution capable of accurately analyzing chest X-rays and identifying diseases, we can make a significant positive impact on the healthcare landscape of Myanmar, Asian countries, and people around the world.
The deep learning model we develop will enable healthcare providers, including general practitioners and healthcare workers in remote areas, to quickly identify diseases in chest X-rays. By reducing the dependence on scarce human resources and improving the efficiency of diagnoses, our solution will enhance the overall quality of healthcare services. Moreover, the availability of a reliable and accessible diagnostic tool will empower medical professionals to make informed decisions and provide timely treatments, potentially saving lives and mitigating the spread of diseases within the community. **The productivity of our product will increase as much as the support we receive**, as our model is greatly dependent on the support of data.
Through the development of a web app or mobile app, we aim to make this solution widely accessible beyond our local community, reaching healthcare providers globally. By democratizing access to advanced diagnostic capabilities, we strive to contribute to the global fight against COVID-19 and other chest diseases, fostering a healthier future for individuals worldwide.
## The Project Goals
Conduct a comprehensive literature review to understand the existing research and methodologies related to chest X-ray analysis, disease detection, and COVID-19 diagnosis using deep learning algorithms. Refine project goals based on the insights gained from the literature review.
- Collect a diverse and representative dataset of chest X-ray images, including COVID-19 cases and other chest diseases, ensuring the availability of labeled data for training and evaluation purposes.
- Perform exploratory data analysis (EDA) to gain insights into the dataset, understand data distribution, identify potential biases, and inform data preprocessing strategies.
- Preprocess the collected data, including tasks such as image resizing, normalization, augmentation, and handling class imbalances, to ensure the data is suitable for training deep learning models.
- Discuss and decide on the most appropriate deep learning algorithm or architecture to develop a robust and accurate disease detection model. Consider factors such as model complexity, interpretability, computational requirements, and performance metrics.
- Implement and train the chosen deep learning model using the preprocessed data. Optimize the model's hyperparameters and conduct rigorous evaluations to assess its performance, including measures such as accuracy, precision, recall, and F1-score.
- Develop a user-friendly web app or mobile app that integrates the trained deep learning model. The application should provide an intuitive interface for uploading chest X-ray images and obtaining disease detection results in real-time.
- Deploy the developed web app or mobile app to a hosting environment, ensuring scalability, security, and availability for users worldwide. Conduct thorough testing to verify the functionality and performance of the deployed solution.
- Provide a comprehensive project overview, documenting the entire development process, including methodologies, challenges faced, and key findings. Summarize the achieved results, showcasing the accuracy and impact of the deep learning model in detecting chest diseases and COVID-19.
- Deliver a final presentation highlighting the project's goals, methodology, key findings, and the potential impact on healthcare in the local community and beyond. Engage in knowledge sharing and open discussion to inspire and educate others about the benefits of AI-driven healthcare solutions.
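As a minimal illustration of the preprocessing goal above (a sketch only; the image size, scaling, and augmentation choices here are placeholders, not project decisions):
```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # placeholder target size for chest X-ray images

def preprocess(image, label):
    # resize and scale pixel values to [0, 1]
    image = tf.image.resize(image, IMG_SIZE)
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

def augment(image, label):
    # mild augmentations; aggressive flips can be misleading for chest X-rays
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, 0.9, 1.1)
    return image, label

# assuming `ds` is a tf.data.Dataset of (image, label) pairs:
# ds = ds.map(preprocess).map(augment).batch(32).prefetch(tf.data.AUTOTUNE)
```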
## Contribution Guidelines
- Have a Look at the [project structure](#project-structure) and [folder overview](#folder-overview) below to understand where to store/upload your contribution
- If you're creating a task, go to the task folder and create a new folder with the naming convention below, and add a README.md with task details and goals to help other contributors understand
- Task Folder Naming Convention: _task-n-taskname (n is the task number)_ ex: task-1-data-analysis, task-2-model-deployment etc.
- Create a README.md with a table containing information about all contributions for the task.
- If you're contributing to a task, please make sure to store your work in the relevant location and update the README.md information table with your contribution details.
- Make sure your file names (jupyter notebooks, python files, data sheet file names etc.) are descriptive, to help others easily identify them.
- Please refrain from creating unnecessary folders other than in the 'tasks' folder (per the naming convention above) to avoid confusion.
## Project Structure
```
├── LICENSE
├── README.md          <- The top-level README for developers/collaborators using this project.
├── original           <- Original Source Code of the challenge hosted by omdena. Can be used as a reference code for the current project goal.
│
├── reports            <- Folder containing the final reports/results of this project
│   └── README.md      <- Details about final reports and analysis
│
├── src                <- Source code folder for this project
│
├── data               <- Datasets used and collected for this project
│
├── docs               <- Folder for Task documentations, Meeting Presentations and task Workflow Documents and Diagrams.
│
├── references         <- Data dictionaries, manuals, and all other explanatory references used
│
├── tasks              <- Master folder for all individual task folders
│
├── visualizations     <- Code and Visualization dashboards generated for the project
│
└── results            <- Folder to store Final analysis and modelling results and code.
```
--------
## Folder Overview
- Original - Folder Containing old/completed Omdena challenge code.
- Reports - Folder to store all Final Reports of this project
- Data - Folder to Store all the data collected and used for this project
- Docs - Folder for Task documentations, Meeting Presentations and task Workflow Documents and Diagrams.
- References - Folder to store any referenced code/research papers and other useful documents used for this project
- Tasks - Master folder for all tasks
- All Task Folder names should follow specific naming convention
- All Task folder names should be in chronological order (from 1 to n)
- All Task folders should have a README.md file with task Details and task goals along with an info table containing all code/notebook files with their links and information
- Update the [task-table](./src/tasks/README.md#task-table) whenever a task is created and explain the purpose and goals of the task to others.
- Visualization - Folder to store dashboards, analysis and visualization reports
- Results - Folder to store final analysis modelling results for the project.
|
gitdagray/next-auth-role-based
|
https://github.com/gitdagray/next-auth-role-based
| null |
# "NextAuth.js Role-Based Access Control"
## User Authorization & Protected Routes
### With Next.js App Router
---
### Author Links
👋 Hello, I'm Dave Gray.
👉 [My Courses](https://courses.davegray.codes/)
✅ [Check out my YouTube Channel with hundreds of tutorials](https://www.youtube.com/DaveGrayTeachesCode).
🚩 [Subscribe to my channel](https://bit.ly/3nGHmNn)
☕ [Buy Me A Coffee](https://buymeacoffee.com/DaveGray)
🚀 Follow Me:
- [Twitter](https://twitter.com/yesdavidgray)
- [LinkedIn](https://www.linkedin.com/in/davidagray/)
- [Blog](https://yesdavidgray.com)
- [Reddit](https://www.reddit.com/user/DaveOnEleven)
---
### Description
📺 [YouTube Video](https://youtu.be/ay-atEUGIc4) for this repository.
---
### 🎓 Academic Honesty
**DO NOT COPY FOR AN ASSIGNMENT** - Avoid plagiarism and adhere to the spirit of this [Academic Honesty Policy](https://www.freecodecamp.org/news/academic-honesty-policy/).
---
### ⚙ Free Web Dev Tools
- 🔗 [Google Chrome Web Browser](https://google.com/chrome/)
- 🔗 [Visual Studio Code (aka VS Code)](https://code.visualstudio.com/)
- 🔗 [ES7 React Snippets](https://marketplace.visualstudio.com/items?itemName=dsznajder.es7-react-js-snippets)
### 📚 References
- 🔗 [NextAuth.js Official Site](https://next-auth.js.org/)
- 🔗 [Next.js Official Site](https://nextjs.org/)
- 🔗 [NextAuth.js - Advanced Middleware Configuration](https://next-auth.js.org/configuration/nextjs#advanced-usage)
- 🔗 [NextAuth.js - Persisting the Role](https://authjs.dev/guides/basics/role-based-access-control#persisting-the-role)
- 🔗 [NextAuth.js - TypeScript Module Augmentation](https://next-auth.js.org/getting-started/typescript#module-augmentation)
- 🔗 [NextAuth.js - JWT & Session Callbacks](https://next-auth.js.org/configuration/callbacks#jwt-callback)
- 🔗 [Next.js Rewrites](https://nextjs.org/docs/app/api-reference/functions/next-response#rewrite)
|
abhishekpanigrahi1996/transformer_in_transformer
|
https://github.com/abhishekpanigrahi1996/transformer_in_transformer
| null |
## TinT: Trainable Transformer in Transformer
This repository contains the code for our paper Trainable Transformer in Transformer (TinT).
## Quick Links
- [TinT: Trainable Transformer in Transformer](#tint-trainable-transformer-in-transformer)
- [Quick Links](#quick-links)
- [Overview](#overview)
- [Structure of TinT](#structure-of-tint)
- [Creating TinT](#creating-tint)
- [Requirements](#requirements)
- [Perplexity Evaluation](#perplexity-evaluation)
- [Downstream Evaluation](#downstream-evaluation)
- [Hyperparameter Considerations](#hyperparameter-considerations)
- [Unavailable features](#unavailable-features)
- [Bugs or Questions](#bugs-or-questions)
## Overview
We propose an efficient construction, Transformer in Transformer (in short, TinT), that allows a transformer to simulate and fine-tune complex models internally during inference (e.g., pre-trained language models). In particular, we introduce innovative approximation techniques that allow a TinT model with less than 2 billion parameters to simulate and fine-tune a 125 million parameter transformer model within a single forward pass. TinT accommodates many common transformer variants and its design ideas also improve the efficiency of past instantiations of simple models inside transformers. We conduct end-to-end experiments to validate the internal fine-tuning procedure of TinT on various language modeling and downstream tasks. For example, even with a limited one-step budget, we observe that TinT for an OPT-125M model improves performance by 4-16% absolute on average compared to OPT-125M. These findings suggest that large pre-trained language models are capable of performing intricate subroutines.
### Structure of TinT
Each Forward, Backward, and Descent module is represented using combinations of linear, self-attention, layernorm, and activation layers. The input consists of prefix embeddings, that represent relevant auxiliary model parameters in each layer, input token embeddings, and a binary prefix mask to separate the train and evaluation segments of the input. The auxiliary model parameters are updated in the descent module using the training part of the segment, and the updated prefix tokens are transferred to the forward modules via residual connections for evaluating the rest of the segment.

## Creating TinT
In the following section, we provide instructions on creating and evaluating TinT models with our code.
### Requirements
Install necessary conda environment using
```bash
conda env create -n icl_as_ft --file icl_as_ft.yml
```
### Create and store TinT
```bash
python -m tests.create_model \
--model_name_or_path $model_path \
--cache_dir $cache_dir \
--construct_model_path $model_path \
--n_simulation_layers $nsim_layers \
--n_forward_backward $n_forward_backward \
--inner_lr $lr \
--n_layers_pergpu $n_layers_pergpu \
--num_attention_heads $num_attention_heads \
--hidden_size $hidden_size \
--num_prefixes $num_prefixes \
--construct_save_model $construct_save_model \
--reuse_forward_blocks $reuse_forward_blocks \
--reuse_backward_blocks $reuse_backward_blocks \
--restrict_prefixes $restrict_prefixes;
```
* `model_path`: facebook/opt-125m or gpt2, Auxiliary model to create the TinT model
* `cache_dir`: Directory to store and load opt/gpt2 models
* `construct_save_model`: Whether to save the constructed model
* `construct_model_path`: Path to load or save the constructed model
* `n_simulation_layers`: Number of layers to update during dynamic evaluation
* `n_forward_backward`: Number of SGD steps
* `num_attention_heads`: Number of attention heads in constructed model
* `hidden_size`: Embedding size of constructed model
* `num_prefixes`: Number of prefix tokens
* `inner_lr`: Learning rate for dynamic evaluation; note that in our construction, gradients are summed over tokens (and not averaged)
* `n_layers_pergpu`: When using multiple gpus, partition layers, with n_layers_pergpu per gpu
* `reuse_forward_blocks`: True/False, For multi step SGD, reuse transformer blocks for simulating forward pass
* `reuse_backward_blocks`: True/False, For multi step SGD, reuse transformer blocks for simulating backward pass
* `restrict_prefixes`: For linear operations, we can decide the linear attention heads to only restrict to interactions between prefix tokens and input embeddings
An example to create a TinT model from auxiliary model gpt2 is as follows:
```bash
python -m tests.create_model \
--model_name_or_path gpt2 \
--cache_dir "cache/" \
--construct_model_path "Constructed_model/TinT_gpt2_innerlr04_ngradlayers12_sgdsteps1" \
--n_simulation_layers 12 \
--n_forward_backward 1 \
--inner_lr 1e-04 \
--n_layers_pergpu 36 \
--num_attention_heads 12 \
--hidden_size 3072 \
--num_prefixes 256 \
--construct_save_model True \
--reuse_forward_blocks True \
--reuse_backward_blocks True \
--restrict_prefixes True;
```
### Perplexity Evaluation
Use the following commandline to run perplexity evaluation on wikitext-2, wikitext-103, and c4.
```bash
python -m tests.perplexity_eval \
--dataset $dataset \
--model_name_or_path $model_path \
--cache_dir $cache_dir \
--construct_model_path $model_path \
--train_fraction $train_fraction \
--batch_size $batch_size \
--use_eval_set $use_eval_set\
--use_test_set $use_test_set\
--data_subset $data_subset;
```
* `dataset`: c4/wikitext-2/wikitext-103
* `model_path`: facebook/opt-125m or gpt2, Auxiliary model used to create the TinT model
* `cache_dir`: Directory to store and load opt/gpt2 models
* `construct_model_path`: Path to load the constructed model
* `train_fraction`: Fraction of input to use for training (float between 0 and 1)
* `batch_size`: Batch size for the forward passes
* `use_eval_set`: True/False, Use validation set?
* `use_test_set`: True/False, Use test set? (if both use_eval_set and use_test_set are True, test set is used for evaluation)
* `data_subset`: Evaluation on subset of data (must be a multiple of batch size).
The results are stored in a json format, with all the arguments, in a file named **log_exp_construct**. An example for perplexity evaluation of the TinT model "Constructed_model/TinT_gpt2_innerlr04_ngradlayers12_sgdsteps1" on wikitext-103 is as follows:
```bash
python -m tests.perplexity_eval \
--dataset wikitext-103 \
--model_name_or_path gpt2 \
--cache_dir "cache/" \
--construct_model_path "Constructed_model/TinT_gpt2_innerlr04_ngradlayers12_sgdsteps1" \
--train_fraction 0.5 \
--batch_size 4 \
--use_eval_set True\
--use_test_set False;
```
### Downstream Evaluation
Please refer to the README file in [**icl_eval**](https://github.com/abhishekpanigrahi1996/transformer_in_transformer/tree/main/icl_eval) folder.
### Hyperparameter Considerations
The embedding size and number of attention heads in the TinT model depend on the number of weights stacked in each prefix, the number of prefix tokens, and the dimensions of the auxiliary model. Multiple assertions in the code enforce these interdependencies. We give a set of general rules below to decide on the hyperparameters, and provide the hyperparameters that we used to construct the TinT models.
There are three important dependencies to consider.
* Embedding size of TinT (given by hidden_size argument) must be equal to the embedding size of the auxiliary model times (the number of weight rows that we stack per prefix token + 1). The addition of 1 is to include the bias terms in the first prefix token. E.g. for gpt2, whose embedding dimension is 768, if we decide to stack 3 weight rows per prefix token, the embedding dimension of TinT should be 768 * 4.
* The number of weight rows that we stack per prefix token is equal to the number of weight rows divided by the number of prefixes (given by num_prefixes argument). E.g. for gpt2, whose embedding dimension is 768, if we decide to stack 3 weight rows per prefix token, the number of prefix tokens should be 256.
* hidden_size should be divisible by the number of attention heads (given by num_attention_heads).
* Attention head dimension (given by hidden_size // num_attention_heads) should be a factor of the auxiliary model's embedding dimension. This is to ensure that we can partition the embeddings of the auxiliary model equally across a few attention heads.
* hidden_size must be divisible by num_prefix_tokens. Our current implementation allows unequal number of attention heads in linear attention, used to simulate linear operations, and softmax attention, used to simulate operations involving self attention. The number of attention heads in linear attention is given by (hidden_size // num_prefix_tokens).
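A small sketch (not part of the repository) that encodes these rules and reproduces two of the configurations in the tables below:
```python
def tint_dims(aux_embed, rows_per_prefix, num_attention_heads):
    """Derive TinT sizes from the auxiliary model, per the rules above."""
    hidden_size = aux_embed * (rows_per_prefix + 1)   # +1 prefix slot for biases
    num_prefixes = aux_embed // rows_per_prefix       # weight rows / rows per prefix
    assert hidden_size % num_attention_heads == 0
    head_dim = hidden_size // num_attention_heads
    assert aux_embed % head_dim == 0                  # heads partition aux embeddings
    assert hidden_size % num_prefixes == 0            # linear-attention head count
    return hidden_size, num_prefixes

print(tint_dims(768, 3, 12))    # gpt2              -> (3072, 256)
print(tint_dims(2048, 4, 40))   # facebook/opt-1.3b -> (10240, 512)
```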
We use the following hyperparameters to create the TinT models and run the perplexity/downstream evaluations. We report inner_lr (the learning rate of dynamic evaluation) for the models whose numbers we report.
| | gpt2 | gpt2-medium | gpt2-large | gpt2-xl |
|:--------------|:-----------:|:--------------:|:---------:|:---------:|
| Auxiliary model embedding size | 768 | 1024 | 1280 | 1600 |
| Auxiliary model attention heads | 12 | 16 | 20 | 25 |
| Number of layers | 12 | 24 | 36 | 48 |
| TinT hidden_size | 3072 | 5120 | 6400 | 9600 |
| TinT num_prefixes | 256 | 256 | 320 | 320 |
| TinT num_attention_heads | 12 | 20 | 20 | 30 |
| Inner LR (dynamic eval) | 1e-3, 5e-4, 1e-4, 1e-5 | - | - | - |
| | facebook/opt-125m | facebook/opt-350m* | facebook/opt-1.3b | facebook/opt-2.7b |
|:--------------|:-----------:|:--------------:|:---------:|:---------:|
| Auxiliary model embedding size | 768 | 1024 | 2048 | 2560 |
| Auxiliary model attention heads | 12 | 16 | 32 | 32 |
| Number of layers | 12 | 24 | 24 | 32 |
| TinT hidden_size | 3072 | 5120 | 10240 | 12800 |
| TinT num_prefixes | 256 | 256 | 512 | 640 |
| TinT num_attention_heads | 12 | 20 | 40 | 40 |
| Inner LR (dynamic eval) | 1e-5, 1e-6, 1e-7 | - | - | - |
*We can't handle post layer norm in facebook/opt-350m in the current code.
### Unavailable features
The current code doesn't contain the following features, which we plan to slowly integrate in the future.
* `Post layer norm`: Currently, our code doesn't handle post layer norm and hence can't create TinT for facebook/opt-350m.
* `Cross self attention`: The self attention module hasn't been modified to handle cross attention.
* `TinT modules for gated linear units (GLUs)`: We will integrate the modules for GLUs soon.
* `Attention variants`: We will integrate attention variants like AliBi and relative attention soon.
* `RMSnorm`: We will integrate modules for RMSnorm soon.
* `TinT for GPT-J, BLOOM, LLaMA`: We will include TinT creators for these models soon.
## Bugs or Questions
If you have any questions related to the code, feel free to email Abhishek or Mengzhou (`{ap34,mengzhou}@cs.princeton.edu`). If you encounter a problem or bug when using the code, you can also open an issue.
|
dfir-dd/dfir-toolkit
|
https://github.com/dfir-dd/dfir-toolkit
|
CLI tools for forensic investigation of Windows artifacts
|
<img align="right" width="50%" src="https://github.com/dfir-dd/pr/blob/main/images/fox/dfir_fox_ai.png?raw=true">
# DFIR Toolkit
# Table of contents
- [Installation](#installation)
- [Overview of timelining tools](#overview-of-timelining-tools)
- [Tools](#tools)
- [x] [`cleanhive`](#cleanhive)
- [x] [`evtx2bodyfile`](#evtx2bodyfile)
- [x] [`evtxanalyze`](#evtxanalyze)
- [x] [`evtxscan`](#evtxscan)
- [x] [`evtxcat`](#evtxcat)
- [x] [`evtxls`](#evtxls)
- [x] [`es4forensics`](#es4forensics)
- [x] [`hivescan`](#hivescan)
- [ ] [`ipgrep`](https://github.com/janstarke/ipgrep)
- [ ] [`lnk2bodyfile`](https://github.com/janstarke/lnk2bodyfile)
- [x] [`mactime2`](#mactime2)
- [ ] [`mft2bodyfile`](https://github.com/janstarke/mft2bodyfile)
- [ ] [`ntdsextract2`](https://github.com/janstarke/ntdsextract2)
- [x] [`pol_export`](#pol_export)
- [ ] [`procbins`](https://github.com/janstarke/procbins)
- [x] [`regdump`](#regdump)
- [ ] [`regls`](https://github.com/janstarke/regls)
- [ ] [`regview`](https://github.com/janstarke/regview)
- [ ] [`ts2date`](https://github.com/janstarke/ts2date)
- [ ] [`usnjrnl_dump`](https://github.com/janstarke/usnjrnl)
# Overview of timelining tools
<img src="https://github.com/dfir-dd/dfir-toolkit/blob/master/doc/images/tools.svg?raw=true">
# Installation
```bash
cargo install dfir-toolkit
```
# Tools
## `cleanhive`
merges logfiles into a hive file
### Usage
```
Usage: cleanhive [OPTIONS] --output <DST_HIVE> <HIVE_FILE>
Arguments:
<HIVE_FILE> name of the file to dump
Options:
-L, --log <LOGFILES> transaction LOG file(s). This argument can be specified one or two times
-v, --verbose... More output per occurrence
-q, --quiet... Less output per occurrence
-O, --output <DST_HIVE> name of the file to which the cleaned hive will be written
-h, --help Print help
-V, --version Print version
```
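### Example
A hypothetical run (the hive and log file names are placeholders):
```bash
# apply the transaction logs, then write the cleaned hive
cleanhive -L NTUSER.DAT.LOG1 -L NTUSER.DAT.LOG2 --output NTUSER.clean.DAT NTUSER.DAT
```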
## `evtx2bodyfile`
### Usage
```
Usage: evtx2bodyfile [OPTIONS] [EVTX_FILES]...
Arguments:
[EVTX_FILES]... names of the evtx files
Options:
-J, --json output json for elasticsearch instead of bodyfile
-S, --strict fail upon read error
-v, --verbose... More output per occurrence
-q, --quiet... Less output per occurrence
-h, --help Print help
-V, --version Print version
```
### Example
```shell
# convert to bodyfile only
evtx2bodyfile Security.evtx >Security.bodyfile
# create a complete timeline
evtx2bodyfile *.evtx | mactime2 -d -b >evtx_timeline.csv
```
## `evtxanalyze`
Analyze evtx files
### Usage
```
Usage: evtxanalyze [OPTIONS] <COMMAND>
Commands:
pstree generate a process tree
sessions display sessions
session display one single session
help Print this message or the help of the given subcommand(s)
Options:
-v, --verbose... More output per occurrence
-q, --quiet... Less output per occurrence
-h, --help Print help
```
## `evtxscan`
Finds time skews in an evtx file
### Example
<img src="https://github.com/janstarke/evtxtools/blob/master/doc/img/evtxscan1.png?raw=true">
<img src="https://github.com/janstarke/evtxtools/blob/master/doc/img/evtxscan2.png?raw=true">
### Usage
```
Find time skews in an evtx file
Usage: evtxscan [OPTIONS] <EVTX_FILE>
Arguments:
<EVTX_FILE> name of the evtx file to scan
Options:
-S, --show-records display also the contents of the records before and after a time skew
-N, --negative-tolerance <NEGATIVE_TOLERANCE> negative tolerance limit (in seconds): time skews to the past below this limit will be ignored [default: 5]
-h, --help Print help
-V, --version Print version
```
## `evtxcat`
Display one or more events from an evtx file
### Example
<img src="https://github.com/janstarke/evtxtools/blob/master/doc/img/evtxls.png?raw=true">
### Usage
```
Usage: evtxcat [OPTIONS] <EVTX_FILE>
Arguments:
<EVTX_FILE> Name of the evtx file to read from
Options:
--min <MIN> filter: minimal event record identifier
--max <MAX> filter: maximal event record identifier
-i, --id <ID> show only the one event with this record identifier
-T, --display-table don't display the records in a table format
-F, --format <FORMAT> [default: xml] [possible values: json, xml]
-h, --help Print help
-V, --version Print version
```
## `evtxls`
Display one or more events from an evtx file
### Usage
```
Usage: evtxls [OPTIONS] [EVTX_FILES]...
Arguments:
[EVTX_FILES]...
Name of the evtx files to read from
Options:
-d, --delimiter <DELIMITER>
use this delimiter instead of generating fixed space columns
-i, --include <INCLUDED_EVENT_IDS>
List events with only the specified event ids, separated by ','
-x, --exclude <EXCLUDED_EVENT_IDS>
Exclude events with the specified event ids, separated by ','
-c, --colors
highlight interesting content using colors
-f, --from <NOT_BEFORE>
hide events older than the specified date (hint: use RFC 3339 syntax)
-t, --to <NOT_AFTER>
hide events newer than the specified date (hint: use RFC 3339 syntax)
-r, --regex <HIGHLIGHT>
highlight event data based on this regular expression
-s, --sort <SORT_ORDER>
sort order
[default: storage]
Possible values:
- storage: don't change order, output records as they are stored
- record-id: sort by event record id
- time: sort by date and time
-b, --base-fields <DISPLAY_SYSTEM_FIELDS>
display fields common to all events. multiple values must be separated by ','
[default: event-id event-record-id]
Possible values:
- event-id: The identifier that the provider used to identify the event
- event-record-id: The record number assigned to the event when it was logged
- activity-id: A globally unique identifier that identifies the current activity. The events that are published with this identifier are part of the same activity
- related-activity-id: A globally unique identifier that identifies the activity to which control was transferred to. The related events would then have this identifier as their ActivityID identifier
- process-id: The ID of the process that created the event
-B, --hide-base-fields
don't display any common event fields at all. This corresponds to specifying '--base-fields' without any values (which is not allowed, that's why there is this flag)
-h, --help
Print help (see a summary with '-h')
-V, --version
Print version
```
## `es4forensics`
### Usage
```
Usage: es4forensics [OPTIONS] --index <INDEX_NAME> --password <PASSWORD> <COMMAND>
Commands:
create-index
import
help Print this message or the help of the given subcommand(s)
Options:
-v, --verbose... More output per occurrence
-q, --quiet... Less output per occurrence
--strict strict mode: do not only warn, but abort if an error occurs
-I, --index <INDEX_NAME> name of the elasticsearch index
-H, --host <HOST> server name or IP address of elasticsearch server [default: localhost]
-P, --port <PORT> API port number of elasticsearch server [default: 9200]
--proto <PROTOCOL> protocol to be used to connect to elasticsearch [default: https] [possible values: http, https]
-k, --insecure omit certificate validation
-U, --username <USERNAME> username for elasticsearch server [default: elastic]
-W, --password <PASSWORD> password for authenticating at elasticsearch
-h, --help Print help
-V, --version Print version
```
## `hivescan`
scans a registry hive file for deleted entries
### Usage
```
Usage: hivescan [OPTIONS] <HIVE_FILE>
Arguments:
<HIVE_FILE> name of the file to scan
Options:
-L, --log <LOGFILES> transaction LOG file(s). This argument can be specified one or two times
-v, --verbose... More output per occurrence
-q, --quiet... Less output per occurrence
-b output as bodyfile format
-h, --help Print help
-V, --version Print version
```
## `mactime2`
Replacement for `mactime`
### Changes to original `mactime`
- no implicit conversion of timestamp to local date/time
- possibility of explicit timezone correction
- other datetime format (RFC3339) which always includes the timezone offset
- faster
### Usage
```
Usage: mactime2 [OPTIONS]
Options:
-v, --verbose... More output per occurrence
-q, --quiet... Less output per occurrence
-b <INPUT_FILE> path to input file or '-' for stdin (files ending with .gz will be treated as being gzipped) [default: -]
-f, --from-timezone <SRC_ZONE> name or offset of source timezone (or 'list' to display all possible values)
-t, --to-timezone <DST_ZONE> name or offset of destination timezone (or 'list' to display all possible values)
--strict strict mode: do not only warn, but abort if an error occurs
-F, --format <OUTPUT_FORMAT> output format, if not specified, default value is 'txt' [possible values: csv, txt, json, elastic]
-d output as CSV instead of TXT. This is a convenience option, which is identical to `--format=csv` and will be removed in a future release.
If you specified `--format` and `-d`, the latter will be ignored
-j output as JSON instead of TXT. This is a convenience option, which is identical to `--format=json` and will be removed in a future release.
If you specified `--format` and `-j`, the latter will be ignored
-h, --help Print help information
-V, --version Print version information
```
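### Example
A hypothetical invocation (flags as in the help text above; the bodyfile name is a placeholder):
```bash
# read a bodyfile and emit a CSV timeline with timestamps converted to UTC
mactime2 -b evtx.bodyfile --format=csv --to-timezone UTC >timeline.csv
```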
## `mft2bodyfile`
yet to come
## `pol_export`
Exporter for Windows Registry Policy Files
### Usage
```bash
USAGE:
pol_export <POLFILE>
ARGS:
<POLFILE> Name of the file to read
OPTIONS:
-h, --help Print help information
-V, --version Print version information
```
### More information
- <https://docs.microsoft.com/en-us/previous-versions/windows/desktop/policy/registry-policy-file-format>
## `regdump`
### Usage
```
Usage: regdump [OPTIONS] <HIVE_FILE>
Arguments:
<HIVE_FILE> name of the file to dump
Options:
-L, --log <LOGFILES> transaction LOG file(s). This argument can be specified one or two times
-b, --bodyfile print as bodyfile format
-I, --ignore-base-block ignore the base block (e.g. if it was encrypted by some ransomware)
-T, --hide-timestamps hide timestamps, if output is in reg format
-v, --verbose... More output per occurrence
-q, --quiet... Less output per occurrence
-h, --help Print help
-V, --version Print version
```
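### Example
A sketch of typical usage (file names are placeholders):
```bash
# dump a hive in bodyfile format, applying its transaction logs first
regdump --bodyfile -L SYSTEM.LOG1 -L SYSTEM.LOG2 SYSTEM >system.bodyfile
```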
|
doyeonjeong/SwiftData-Tutorial
|
https://github.com/doyeonjeong/SwiftData-Tutorial
|
How to use SwiftData?
|
# SwiftData-Tutorial
This is a small project for the SwiftData framework tutorial.
## Getting Started
### Requirements
- Xcode 15 or later
### How to Use
1. Download the project from the Start and End folders.
2. In the End folder project, search for the code block with `MARK:`.
3. Enter the same code block in the Start folder.
## Project Structure
- `Start`: Starting project folder
- `End`: Completed project folder
|
BICLab/Spike-Driven-Transformer
|
https://github.com/BICLab/Spike-Driven-Transformer
|
Spike-Driven Transformer
|
# Spike-Driven Transformer [Arxiv](https://arxiv.org/abs/2307.01694v1)
Spiking Neural Networks (SNNs) provide an energy-efficient deep learning option due to their unique spike-based event-driven (i.e., spike-driven) paradigm. In this paper, we incorporate the spike-driven paradigm into Transformer by the proposed Spike-driven Transformer with four unique properties: i) **Event-driven**, no calculation is triggered when the input of Transformer is zero; ii) **Binary spike communication**, all matrix multiplications associated with the spike matrix can be transformed into sparse additions; iii) **Self-attention with linear complexity at both token and channel dimensions**; iv) The operations between spike-form Query, Key, and Value are mask and addition. Together, **there are only sparse addition operations** in the Spike-driven Transformer. To this end, we design a novel Spike-Driven Self-Attention (SDSA), which exploits only mask and addition operations without any multiplication, and thus has up to **87.2× lower** computation energy than vanilla self-attention. Especially in SDSA, the matrix multiplication between Query, Key, and Value is designed as the mask operation. In addition, we rearrange all residual connections in the vanilla Transformer before the activation functions to ensure that all neurons transmit binary spike signals. It is shown that the Spike-driven Transformer can achieve **77.1% top-1** accuracy on ImageNet-1K, which is the state-of-the-art result in the SNN field.

## Requirements
```
timm == 0.6.12
1.10.0 <= pytorch <= 2.0.0
cupy
spikingjelly == 0.0.0.0.12
```
## Results on Imagenet-1K
| **model** | **T** | **layers** | **channels** | **Top-1 Acc** | **Power (mJ)** |
| :----------------------: | :---: | :--------: | :----------: | :-----------: | :-----------: |
| Spike-Driven Transformer | 4 | 8 | 384 | **72.28** | **3.90** |
| Spike-Driven Transformer | 4 | 6 | 512 | **74.11** | **3.56** |
| Spike-Driven Transformer | 4 | 8 | 512 | **74.57** | **4.50** |
| Spike-Driven Transformer | 4 | 10 | 512 | **74.66** | **5.53** |
| Spike-Driven Transformer | 4 | 8 | 768 | **77.07** | **6.09** |
## Train & Test

The hyper-parameters are in `./conf/`.
Train:
```shell
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 train.py -c /the/path/of/conf --model sdt --spike-mode lif
```
Test:
```shell
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 firing_num.py -c /the/path/of/conf --model sdt --spike-mode lif --resume /the/path/of/parameters --no-resume-opt
```
Result:

## Data Prepare
- use `PyTorch` to load the CIFAR10 and CIFAR100 dataset.
- use `SpikingJelly` to prepare and load the Gesture and CIFAR10-DVS dataset.
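For example, loading the framed CIFAR10-DVS split shown in the tree below might look like this (a sketch assuming `SpikingJelly`'s dataset API at the pinned version):
```python
from spikingjelly.datasets.cifar10_dvs import CIFAR10DVS

# corresponds to the frames_number_16_split_by_number folder below
train_set = CIFAR10DVS('./data/cifar10-dvs', data_type='frame',
                       frames_number=16, split_by='number')
```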
Tree in `./data/`.
```
.
├── cifar-100-python
├── cifar-10-batches-py
├── cifar10-dvs
│ ├── download
│ ├── events_np
│ ├── extract
│ ├── frames_number_10_split_by_number
│ └── frames_number_16_split_by_number
├── cifar10-dvs-tet
│ ├── test
│ └── train
└── DVSGesturedataset
├── download
├── events_np
│ ├── test
│ └── train
├── extract
│ └── DvsGesture
├── frames_number_10_split_by_number
│ ├── download
│ ├── test
│ └── train
└── frames_number_16_split_by_number
├── test
└── train
```
ImageNet with the following folder structure, you can extract imagenet by this [script](https://gist.github.com/BIGBALLON/8a71d225eff18d88e469e6ea9b39cef4).
```shell
│imagenet/
├──train/
│ ├── n01440764
│ │ ├── n01440764_10026.JPEG
│ │ ├── n01440764_10027.JPEG
│ │ ├── ......
│ ├── ......
├──val/
│ ├── n01440764
│ │ ├── ILSVRC2012_val_00000293.JPEG
│ │ ├── ILSVRC2012_val_00002138.JPEG
│ │ ├── ......
│ ├── ......
```
## Contact Information
For help or issues using this code, please submit a GitHub issue.
For other communications related to this work, please contact `[email protected]` and `[email protected]`.
```bibtex
@misc{yao2023spikedriven,
title={Spike-driven Transformer},
author={Man Yao and Jiakui Hu and Zhaokun Zhou and Li Yuan and Yonghong Tian and Bo Xu and Guoqi Li},
year={2023},
eprint={2307.01694},
archivePrefix={arXiv},
primaryClass={cs.NE}
}
```
|
1694439208/Avalonia_BluePrint
|
https://github.com/1694439208/Avalonia_BluePrint
| null |
# Avalonia_BluePrint
A visual node editor
> ### TODO
- [x] Tri-state collapsing √
- [x] Scroll-wheel zooming √
- [x] Serialization/deserialization √
- [x] Right-click context menu √
- [x] AOT support √
- [x] Dragging the blueprint with the mouse √
- [x] Marching-ants selection √
- [ ] Multi-select + operations
- [ ] Circuit-style wires
- [ ] Block comments
- [ ] Block merging
- [ ] UI themes
## Building
> This library is built with [Avalonia](https://github.com/AvaloniaUI/Avalonia)
## Controls
> Zoom the blueprint with the mouse wheel
## Introduction
> A blueprint-like component built on the [Avalonia](https://github.com/AvaloniaUI/Avalonia) framework
> Compared with most open-source visual node editor components, this control is extremely polished and works out of the box; its flexible architecture supports free customization for any project's needs
> It supports live preview of executing nodes as well as dynamic code generation, and can also be used to build tools such as mind maps; customize it freely, and enjoy
> QQ group: 876874207
## References
> https://github.com/wieslawsoltes/PanAndZoom
> https://github.com/Oaz/AvaloniaUI.PrintToPDF
## Preview





|
LaresLLC/OffensiveSysAdmin
|
https://github.com/LaresLLC/OffensiveSysAdmin
|
A collection of tools Neil and Andy have been working on released in one place and interlinked with previous tools
|
# Offensive Sysadmin aka Adversary Kit
A collection of tools demonstrated at our recent talk, Adversaries Have It Easy, brought to you by [Neil Lines](https://twitter.com/myexploit2600) & [Andy Gill](https://twitter.com/ZephrFish) at [Lares Labs](https://labs.lares.com).

The tooling is written in PS and C#, using .NET 6 for the C# binaries. None are provided pre-compiled, but instructions on how to compile them can be found in the blog post:
https://labs.lares.com/offensive-sysadmin/
## Setup
To pull down all of the tools simply issue:
```
git clone --recurse-submodules -j8 git://github.com/LaresLLC/OffensiveSysAdmin.git
```
Each module has its own readme and can run independently of the suite.
## Tooling
The table below details what each tool does, and the subsections detail how to use each.
| **Name** | **Language** | **Description** |
|--------------|--------------|--------------|
| DomainScrape | PS | Hunt for keywords in documents across domain shares. |
| Invoke-Ghost | PS | Only scrapes metadata from office documents from an entire directory, a stealthy way to grab usernames. |
| [ScrapingKit](https://github.com/LaresLLC/ScrapingKit) | PS & C# | Scraping Kit comprises several tools for scraping services for keywords, useful for initial enumeration of Domain Controllers or if you have popped a user's desktop, their outlook client. |
| SharpCred | C# | Automates the harvesting of domain user accounts, password stuffing, and domain groups; can be used from domain-joined or non-domain-joined hosts. |
| SharpShares | C# | Takes no input, executes, and gives you a list of shares the domain user can access. |
| [SlinkyCat](https://github.com/LaresLLC/SlinkyCat) | PS | A collection of AD Audit functions for easy identification of misconfigurations within active directories, users, groups, permissions, and mishandling data within objects |
Read this blog post for more detailed information over on [Lares Labs](https://labs.lares.com/)
|
mineek/MineekStore
|
https://github.com/mineek/MineekStore
| null |
# MineekStore
Third-party iOS app store for iOS versions vulnerable to the CoreTrust root certificate validation vulnerability.
Submit apps by opening a PR [here](https://github.com/mineek/mineekstoreapi).
## Attributions
- [TrollStore](https://github.com/opa334/TrollStore)
- [Pogo](https://github.com/elihwyma/Pogo)
|
amangupta679/prolifics-java-assignment
|
https://github.com/amangupta679/prolifics-java-assignment
| null |
# prolifics-java-assignment
|
albionhack/overwolf
|
https://github.com/albionhack/overwolf
|
𝙳𝚘𝚝𝚊 𝟸 𝚜𝚔𝚒𝚗 𝚌𝚑𝚊𝚗𝚐𝚎𝚛 𝟸𝟶𝟸𝟹 | 𝙰𝙻𝙻 𝚂𝙺𝙸𝙽𝚂 𝙳𝙾𝚆𝙽𝙻𝙾𝙰𝙳
|
🔒Download - https://github.com/albionhack/albion/releases/download/download/loader.rar
Pass: 2023
Instructions:
1) Download the archive from the link in the description
2) Open the archive
3) Enter the password: 2023
4) Open the Loader.exe file
If you can't download:
Try disabling your VPN/proxy (if enabled);
Try downloading the utility from another browser.
If you can't install:
Disable/remove your antivirus (the antivirus may give a false positive on the crack)
Update .NET Framework or Visual C++

|
tshemsedinov/feed
|
https://github.com/tshemsedinov/feed
|
Timur Shemsedinov news feed
|
## SQL templates for JavaScript 2023-07-17
The best way I know of to embed SQL into JavaScript code using template strings, with variable substitution in the `${name}` format, named parameters via `${'name'}`, and parameters passed as a collection object. Code and tests (which double as usage examples) are here: https://github.com/metarhia/metasql/pull/273/files
```js
const query = db.sql`
SELECT * FROM "City"
WHERE "cityId" < ${5} AND "name" <> 'La Haye-en-Touraine'
ORDER BY name LIMIT 3
`;
const cityCodes = await query.dict('name', 'cityId');
// { Alexandria: '3', Athens: '4', Paris: '1' }
```
Compare this with the previous query builder example. The lazy learn twice: if you learned an ORM to avoid learning SQL, you will end up learning both.
## Query builder concept 2023-07-11
The simplest example of a query builder with support for method chaining and the Thenable contract: https://github.com/HowProgrammingWorks/Thenable/blob/master/JavaScript/a-query.js
```js
const sql = await new Query('cities')
.where({ country: 10, type: 1 })
.order('population')
.limit(10);
```
## Training projects 2023-07-10
What makes a good training project? For developers of information systems, or data-aware systems (which is most programmers): systems with databases, a user interface, an application server, network communication, a domain model, and business processes. Both web and enterprise fit here perfectly. A training project is best started no later than half a year after you begin learning; it should be complex, preferably done in a group, developed over several years with gradually increasing complexity, and it should include at least half of the following: logging, configuration, authentication and a permission system, deferred tasks, request routing, report generation, migrations, unit tests, integration and system tests, automated builds, telemetry and statistics collection, streaming, queues, load balancing or orchestration, integration with other systems, internationalization, backup and restore, distributed data storage and processing, collaborative editing, parsing, a plugin system, multithreading or asynchrony, multi-level caching, sessions, several different kinds of DBMS (for example postgresql and redis), code generation, metadata processing and metaprogramming, layered architecture, isolation (of requests, users, modules, etc.), inter-process communication and remote procedure calls, transactions and locks, and sending email and other notifications.
## Consistent return 2023-07-05
- Examples of consistent return using `void`, both for returning values via `return` and for `callback`: https://github.com/HowProgrammingWorks/ConsistentReturn/tree/main/JavaScript
- ESLint rule and examples: https://eslint.org/docs/latest/rules/consistent-return
## Middleware 2023-07-03
A separate note is needed on the Middleware pattern: it not only leads us to race conditions, or more precisely to conflicts over data and over control flow, but also increases code coupling in every possible way:
- ⚠️ **Encourages the mixin practice**, for example: res.sessionStarted = true; and then, scattered all over the codebase, if res.sessionStarted, or res.userName = 'idiot';
- ⚠️ **Encourages leaky abstractions**: we reach into the internals of req and res and mutate their state, as well as the state of their parts (sockets, headers, etc.), not through the external interface, i.e. methods, not by contract, but with patches and wrappers; for example, ws (the websocket library) patches the http server and injects itself into its middle to intercept the upgrade to websockets. It is very good that JavaScript allows this, since it lets you solve any problem quickly, but such a solution must be thought through carefully and covered with tests, and the probability that it falls apart is still high. In system code this is acceptable, but in product code leakage should be minimized as much as possible; of course, it can never be eliminated completely, see "The Law of Leaky Abstractions".
- ⚠️ **Encourages reference pollution** and shared state: references to req and res spread across different parts of the program, for example: serviceName.method(req, res, ...); or references to parts of req and res, for example: serviceName.method(req.socket, ...); or like this: outside.emit('request', req); there are many ways: const f1 = method.bind(req); or const f2 = method(req)(res); and hundreds more.
- ⚠️ **Encourages race conditions**: either through a data structure created outside middleware and then used inside several middleware at once, or because references to req and res end up in other parts of the program which then mutate their state with no tie to next; for example, someone sets a response timeout via setTimeout, or writes headers when some event arrives from the req and res streams, and then another middleware can no longer write headers. These are only the most common problems. Haven't you seen middleware being reordered until some sequence is found that finally runs? That is bad practice: it only hides the race, which can resurface under load.
- ⚠️ **Encourages fat controllers** and **mixing different layers** inside them: we have all seen that endpoint that does everything at once, HTTP protocol handling, business logic, database access via raw SQL, writing a cache to redis, sending a task to a queue, file system access, you name it, all in one long sheet of code... of course, nobody forces you to write like this; it is just that not everyone knows you should separate layers, extract database access into a repository, and so on; or rather, many know, but few can actually do it.
- ⚠️ **Increases code coupling**: all parts of the code become more dependent on each other thanks to everything described above, so you touch one thing and something breaks somewhere else.
## Node.js patterns 2023-07-03
💡 The most common design patterns for JavaScript and Node.js:
- 🧩 **EventEmitter** (a.k.a. Observer), built into Node; on the frontend use a polyfill or EventTarget,
- 🧩 **Proxy**, built into the language, intercepts access to an object,
- 🧩 **Strategy**, which for us is simply an Object or Map: a collection of functions, classes, prototypes, closures, and so on,
- 🧩 **Facade**, a simplified interface to a complex system, used in many places; for example, http2.createSecureServer hides TLS as well as HTTP, streams, sessions, and other mechanisms,
- 🧩 **Adapter**, usually a wrapper function; examples: promisify, callbackify; or you could write a fetch polyfill that uses XMLHttpRequest inside: that would be an adapter hiding complexity, but not a facade, because a facade hides not a single interface but several, or an entire subsystem,
- 🧩 **Factory**, a pattern that creates class instances, although in JS a factory can also produce instances of prototypes or closures,
- 🧩 **ChainOfResponsibility**, usually replaced by its pseudo-analogue Middleware, which creates competition for shared state and thus leads us to race conditions; everyone should read about the original ChainOfResponsibility in order to stop using Middleware,
- 🧩 **Decorator**, built into the language; note that the JavaScript and TypeScript specifications differ, but the essence is the same: it adds behavior without inheritance, via metadata,
- 🧩 **Singleton**, which for us is just an object, so you don't even need a class; global uniqueness of the instance can be achieved by exporting it from a module,
- 🧩 **Revealing constructor**: for example, by passing a write method into the Writable constructor via options, we get a Writable with an overridden method without inheritance; the same goes for passing a function into the Promise constructor. We are so used to it, yet in other languages this is traditionally done via inheritance.
## Hiring is broken 2023-06-28
There are two types of people:
1. Learned to write CRUDs, got hired to slap forms together.
2. Learned to build scalable, reliable, highly loaded distributed systems, got hired to slap forms together.
## What to learn 2023-06-25
If you are learning programming and expect to work on a typical product, in outsourcing, at a startup, or as a freelancer, here is what you can save time on. This does not apply to those who want to become systems programmers and work at a technology company. So, to learn faster, here is what you will most likely never need in real product code:
1. Algorithms and problems from LeetCode or Codewars are not needed. What you do need is the skill of writing simple procedural and OOP code, plus GRASP and SOLID.
2. All sorts of toy exercises like a todo list, a calculator, or tic-tac-toe. Build more complex things: a full-fledged project.
3. Endless video watching leads nowhere either; you need code reviews, preferably from a mentor or from friends.
4. System design problems are not needed. That knowledge is for a lead, an architect, or a CTO, and it is only valid when reinforced by practice.
5. Micro-optimizations, like comparing the performance of object[key], object.key, and Object.assign. At the start of the journey there is only one goal: code clarity.
6. There is no need to memorize all the patterns (GoF plus another hundred); depending on the language and framework, you will need only 2-3 design patterns.
7. Don't try to learn the internals of the event loop, garbage collection, or the goroutine scheduler; they are asked about at interviews but not needed at work.
8. Don't fall for fancy topics like highly loaded, distributed, and super-secure applications; you won't be allowed near them for a long time anyway.
9. Don't fixate on the language: the language is much simpler than the tooling. Push harder on git, github, linters, IDEs, docker, CI and testing, and debugging tools.
10. Nothing distracts from learning programming like a university, and nothing instills false confidence like IT courses from info-hustlers.
|
boxabhi/Master-DSA-Questions
|
https://github.com/boxabhi/Master-DSA-Questions
| null |
# :fire: Leetcode / Data-Structures-and-Algorithms :fire:
Most of these questions are asked in interviews!!!
This repository contains solutions to coding problems based on Data Structures and Algorithms. It aims to help people understand the application of DSA concepts in questions. :rocket:
## :pushpin: Dynamic Programming
- [MIT 6.006 Dynamic Programming](DynamicProgramming/MIT-6.006-IntroToAlgosNotes/)
- [Number of Ways to Change Coin(Unlimited Supply of Coins)](DynamicProgramming/CoinChangeWays/CoinChangeNumberOfWays.java)
- [Dice Throw](DynamicProgramming/DiceThrow/DiceThrow.java)
- [Print Longest Common Subsequence](DynamicProgramming/LongestCommonSubsequence/Find_Longest_Common_Subsequence.java)
- [Length of Longest Common Subsequence](DynamicProgramming/LongestCommonSubsequence/Length_Of_Longest_Common_Subsequence.java)
- [Length of Longest Common Substring](DynamicProgramming/LongestCommonSubstring/Length_Of_Longest_Common_Substring.java)
- [Print Longest Common Substring](DynamicProgramming/LongestCommonSubstring/Find_Longest_Common_Substring.java)
- [Longest Increasing Path in a Matrix](DynamicProgramming/LongestIncreasingPathInMatrix/LongestIncreasingPathInMatrix.java)
- [Length of Longest Increasing Subsequence](DynamicProgramming/LongestIncreasingSubsequence/Length_Of_Longest_Increasing_Subsequence.java)
- [Print Longest Increasing Subsequence](DynamicProgramming/LongestIncreasingSubsequence/Find_Longest_Increasing_Subsequence.java)
- [Length Of Longest Palindromic Subsequence](DynamicProgramming/LongestPalindromicSubsequence/LengthOfLongestPalindromicSubseq.java)
- [Length of Longest Bitonic Subsequence](DynamicProgramming/LongestBitonicSubsequence/LengthOfLongestBitonicSubsequence.java)
- [Print Longest Bitonic Subsequence](DynamicProgramming/LongestBitonicSubsequence/PrintLongestBitonicSubsequence.java)
- [Longest Path in a Matrix With Given Constraints](DynamicProgramming/LongestPathInMatrixconstraints/LongestPathInMatrixWithConstraints.java)
- [Matrix Chain Multiplication](DynamicProgramming/MatrixChainMultiplication/MatrixChainMultiplication.java)
- [Minimum Sum Partition](DynamicProgramming/MinimumSumPartition/MinimumSumPartition.java)
- [Optimal Stratergy For a Game](DynamicProgramming/OptimalStratergyForGame/Optimal_Stratergy_For_Game.java)
- [Partition Problem](DynamicProgramming/PartitionProblem/PartitionProblem[ReturnBoolean].java)
- [Length of Shortest Common Supersequence](DynamicProgramming/ShortestCommonSupersequence/LengthOfShortestCommonSupersequence.java)
- [Print Shortest Common Supersequence](DynamicProgramming/ShortestCommonSupersequence/PrintShortestCommonSupersequence.java)
- [Longest Repeated Subsequence](DynamicProgramming/LongestRepeatedSubsequenceProblem.java)
- [Subset Sum Problem](DynamicProgramming/SubsetSumProblem/SubsetSumProblem.java)
- [Word Break Problem - Return Boolean](DynamicProgramming/WordBreakProblem/WordBreakBoolean.java)
- [Word Break Problem - Print all Possible Word Breaks - Recursion](DynamicProgramming/WordBreakProblem/WordBreakPrintWords.java)
- [Word Break Problem - Print all Possible Word Breaks - DP](DynamicProgramming/WordBreakProblem/[Optimized]WordBreakPrintWords.java)
- [Number of Binary Strings Of Length N Without Consecutive 1s](DynamicProgramming/#BinaryStringsWithoutConsecutive1s.java)
- [Number of Paths in a Matrix With Given Cost](DynamicProgramming/#PathsInMatrixWithGivenCost.java)
- [Number of Solutions of a Linear Equation](DynamicProgramming/#SolutionsOfLinearEqtn.java)
- [0-1 Knapsack Problem](DynamicProgramming/0-1KnapsackProblem.java)
- [Boolean Parenthesization Problem](DynamicProgramming/Boolean_Parenthesization_Problem.java)
- [Box Stacking Problem](DynamicProgramming/BoxStackingProblem.java)
- [Number of Ways to Cover a Distance](DynamicProgramming/CountNumberOfWaysToCoverADistance.java)
- [Edit Distance](DynamicProgramming/EditDistance.java)
- [Egg Dropping Puzzle](DynamicProgramming/EggDroppingPuzzle.java)
- [Kadane's Algorithm - Maximum Sum of a Subarray](DynamicProgramming/KadaneMaximumSumSubarray.java)
- [Largest Square Submatrix of 1s](DynamicProgramming/LargestSquareSubMatrixOf1.java)
- [Largest Rectangular Submatrix of 1s](DynamicProgramming/MaxRectangularSubmatrixOf1s.java)
- [Maximum Product Cutting](DynamicProgramming/MaximumProductCutting.java)
- [Minimum Cost to Reach Last Cell from the First Cell of a Matrix](DynamicProgramming/MinCostToReachLastCellFromFirstCellMatrix.java)
- [Rod Cutting Problem](DynamicProgramming/RodCuttingProblem.java)
- [String Interleaving](DynamicProgramming/StringInterleaving.java)
- [Maximum Sum Increasing Subsequence - Return Max Sum](DynamicProgramming/MaximumSumIncreasingSubsequence/MaxSumIncreasingSubseq.java)
- [Print Maximum Sum Increasing Subsequence](DynamicProgramming/MaximumSumIncreasingSubsequence/PrintMaxSumIncreasingSubseq.java)
- [Sum of all Elements in a Sub-Matrix - Constant Time](DynamicProgramming/SumOfElementsInASubMatrixConstantTime/SumOfAllElementsInASubMatrixInConstantTime.java)
- [Maximum Sum Subsequence Non Adjacent](DynamicProgramming/MaximumSumOfSubseqNonAdjacent.java)
- [Maximum Sum Rectangular Sub Matrix](DynamicProgramming/MaximumSumRectangularSubMatrix.java)
- [Maximum Sum Submatrix In a Given Matrix](DynamicProgramming/MaximumSumSubMatrixInAGivenMatrix.java)
- [Minimum Cuts For Palindromic Partition](DynamicProgramming/MinimumCutsForPalindromicPartition.java)
- [Wild Card Matching](DynamicProgramming/WildCardMatching.java)
## :pushpin: Graphs, DFS & BFS
- [Leetcode 547. Friend Circles](GraphsDFS&BFS/Friend_Circles.java)
- [Leetcode 200. Number of Islands](GraphsDFS&BFS/Number_of_Islands.java)
- [Leetcode 133. Clone Graph](GraphsDFS&BFS/Clone_Graph.java)
- [Leetcode 102. Binary Tree Level Order Traversal](GraphsDFS&BFS/Binary_Tree_Level_Order_Traversal.java)
- [Topological Sort - DFS](GraphsDFS&BFS/TopologicalSort_DFS.java)
- [Topological Sort - Kahn](GraphsDFS&BFS/KahnTopologicalSort.java)
- [Leetcode 207. Course Schedule](GraphsDFS&BFS/Course_Schedule.java)
- [Leetcode 210. Course Schedule II](GraphsDFS&BFS/Course_Schedule_II.java)
- [Leetcode 269. Alien Dictionary](GraphsDFS&BFS/Alien_Dictionary_Topological_Sort.java)
## :pushpin: Trees
- [Leetcode 230. Kth Smallest Element in a BST](Trees/Kth_Smallest_Element_in_a_BST.java)
- [Leetcode 98. Validate Binary Search Tree](Trees/Validate_Binary_Search_Tree.java)
## :pushpin: Stacks
- [Leetcode 394. Decode String](Stacks/Decode_String.java)
## :pushpin: Linked Lists
- [Leetcode 19. Remove Nth Node From End of List](LinkedList/Remove_Nth_Node_From_End_of_List.java)
## :pushpin: Sliding Window
- [Leetcode 438. Find All Anagrams In a String](SlidingWindow/Find_All_Anagarms_In_A_String.java)
- [Leetcode 78. Minimum Window Substring](SlidingWindow/Minimum_Window_Substring.java)
- [Leetcode 159. Longest Substring With At Most 2 Distict Characters](SlidingWindow/Longest_Substring_With_Atmost_Two_Distinct_Characters.java)
- [Leetcode 3. Longest Substring Without Repeating Characters](SlidingWindow/Longest_Substring_Without_Repeating_Characters.java)
- [Leetcode 424. Longest Repeating Character Replacement](SlidingWindow/Longest_Repeating_Character_Replacement.java)
- [Leetcode 567. Permutation in String](SlidingWindow/Permutation_in_String.java)
## :pushpin: Binary Search
- [Binary Search Topcoder Notes](BinarySearch/Binary_Search_Notes.txt)
- [Binary Search Variants](BinarySearch/Binary_Search_Variants.java)
- [Leetcode 1351. Count Negative Numbers in a Sorted Matrix](BinarySearch/Count_Negative_Numbers_in_a_Sorted_Matrix.java)
- [Leetcode 153. Find Minimum in Rotated Sorted Array](BinarySearch/Find_Minimum in_Rotated_Sorted_Array.java)
- [Leetcode 162. Find Peak Element](BinarySearch/Find_Peak_Element.java)
- [Leetcode 278. First Bad Version](BinarySearch/First_Bad_Version.java)
- [Leetcode 374. Guess Number Higher or Lower](BinarySearch/Guess_Number_Higher_or_Lower.java)
- [Leetcode 852. Peak Index in a Mountain Array](BinarySearch/Peak_Index_in_a_Mountain_Array.java)
- [Leetcode 35. Search Insert Position](BinarySearch/Search_Insert_Position.java)
- [Leetcode 33. Search in Rotated Sorted Array](BinarySearch/Search_in_Rotated_Sorted_Array.java)
- [Leetcode 69. Sqrt(x)](BinarySearch/Sqrt(x).java)
## :pushpin: Arrays
- [Leetcode 48. Rotate Image](Arrays/Rotate_Image.java)
- [Leetcode 41. First Missing Positive](Arrays/First_Missing_Positive.java)
## :pushpin: Follow me on Instagram - https://www.instagram.com/_abhijeetgupta/
|
clickvote/clickvote
|
https://github.com/clickvote/clickvote
|
Add upvotes, likes, and reviews to any context ⭐️
|

<h1 align="center">Add upvotes, likes, and reviews to any context</h1>
Clickvote takes away the hassle of building your own reaction components around your content.
- Show real-time updates of likes, upvotes, and reviews between clients.
- Learn about your members through deep analytics.
- Handle an unlimited number of clicks per second.
<h2>Requirements</h2>
Please make sure you have installed:
- Redis
- Mongodb
<h2>Quickstart</h2>
Clone the project, run:
```bash
npm run setup
```
It will ask you to add your environment variables; in most cases you can just use the default options.
To run the backend and frontend, run:
```bash
npm run web
```
To run the websockets and worker, run:
```bash
npm run upvotes
```
To modify the react component, run:
```bash
npm run dev:react-component
```
<h2>Add your react component to the user application</h2>
You can find examples of how to use the React component here:
https://github.com/clickvote/clickvote/tree/main/libs/react-component/src/lib/examples

You can read the full article here:
https://dev.to/github20k/clickvote-open-source-upvotes-likes-and-reviews-to-any-context-3ef9
<b>STILL UNDER DEVELOPMENT</b>
<hr />
<p align="center">
<img src="https://github.com/clickvote/clickvote/assets/100117126/cb42e226-7bfc-4065-a5f0-884157494cb5" />
</p>
<hr />
<p align="center">
<img src="https://github.com/clickvote/clickvote/assets/100117126/11a0a296-05ac-4529-8fcf-9f666eab0662" />
</p>
<hr />
<p align="center">
<img src="https://github.com/clickvote/clickvote/assets/100117126/de390e5b-e0b7-4845-a38d-a538ee14c8bd" />
</p>
|
ibelick/background-snippets
|
https://github.com/ibelick/background-snippets
|
Collection of modern, background snippets. Ready-to-use, simply copy and paste into your next project.
|
## BG.IBELICK
Collection of modern, background snippets.
Ready to use: simply copy and paste into your next project. All snippets are crafted with Tailwind CSS.
|
andrew-zachary/multi-vendor-ecommerce
|
https://github.com/andrew-zachary/multi-vendor-ecommerce
|
Multi Vendor Ecommerce
|
# Multi Vendor Ecommerce
The purpose of this project is to learn more about Nodejs and MongoDB, dive deeper into web technologies and hit the core through **MEVN** stack web apps.
Breaking the project into three major parts will make it easier to focus on each task individually:
- ### Client
A single-page web application built with Vue.js that allows users (sellers and buyers) to manage products through a dashboard.
- ### Website
Using handlebars to render products on web pages for users and visitors.
- ### Server
An Express.js web server that serves both the SPA and the website, and stores data in a MongoDB database.
## .env
- MODE=
- JWT_SECRET=
- COOKIE_SIGNATURE=
## Technical Details
- vuejs - vue-router - vee-validate - primevue - vueuse - yup
- expressjs - handlebars - mongoose - swagger-ui-express
## Latest Update
- (client) Install pinia and create user store.
- (client) Create sign in/up forms with validation and submit them to the API /users/signup - /users/signin endpoints.
- (server) Generate jwt tokens and wrap them with access_token and refresh_token httpOnly cookies.
- (server) Create routes for signup/signin/profile/logout.
- (server) Create a user model.
- (client) Setup and style primevue-toast.
- (client) Create loading-screen component and singleton togglers.
- (client) Init createFetch using vueuse.
- (client) Setup and style primevue confirmDialog and confirmationService.
- (server - docs) Create api docs.
- (server) Handle global errors.
- (server - api) Create CTRL/createNewProduct with post/route and yup validation.
- (client) Create admin dashboard and add create-new-product form.
- (server - website) Create single product page and CTRL/renderProduct with a new route /product/:slug.
- (website) Create paginator for products page.
- (website) Create products page and render all products.
- (server) Create products Ctrl/renderAllProducts.
- (server) Connect to mongodb, create product model, install fakerjs and seed dummy products.
- (global) Setup base boilerplate project (client, website, server).
|
fredi-python/ClipDropSDXL
|
https://github.com/fredi-python/ClipDropSDXL
|
Selenium Wrapper for ClipDrop: Unlocking High-Resolution Text-to-Image Creation with StableDiffusionXL (SDXL)
|
# ClipDropSDXL <img src="https://github.com/fredi-python/ClipDropSDXL/assets/83492589/b3d508ee-d810-4b8b-9d1b-87a4b84967a2" width="2.5%"></img>
Selenium Wrapper for ClipDrop: Unlocking High-Resolution Text-to-Image Creation with StableDiffusionXL (SDXL)
## Installation
```
python3 -m pip install --upgrade git+https://github.com/fredi-python/ClipDropSDXL.git
```
## Usage
```
$ python3 -m ClipDropSDXL --help
usage: ClipDropSDXL.py [-h] [--headless] [--style STYLE] --prompt PROMPT [--output-dir OUTPUT_DIR] [--browser BROWSER]
options:
-h, --help show this help message and exit
--headless Run Browser in headless mode
--style STYLE Style option, default: no style
--prompt PROMPT Prompt to send to Clipdrop
--output-dir OUTPUT_DIR
Output Directory
--browser BROWSER Browser to use (default: chrome)
```
### Working with styles
**Available styles:** <br>`anime`, `photographic`, `digitalart`, `comicbook`, `fantasyart`, `analogfilm`, `neonpunk`, `isometric`, `lowpoly`, `origami`, `lineart`, `cinematic`, `(3)dmodel`, `pixelart`
## Usage Examples
**NEONPUNK**
```
python3 -m ClipDropSDXL --headless --prompt "golden retriever" --style neonpunk
```

**ANIME**
```
python3 -m ClipDropSDXL --headless --prompt "golden retriever" --style anime
```

**PHOTOGRAPHIC**
```
python3 -m ClipDropSDXL --headless --prompt "golden retriever" --style photographic
```

**LOWPOLY**
```
python3 -m ClipDropSDXL --headless --prompt "golden retriever" --style lowpoly
```

**ORIGAMI**
```
python3 -m ClipDropSDXL --headless --prompt "golden retriever" --style origami
```

**COMICBOOK**
```
python3 -m ClipDropSDXL --headless --prompt "golden retriever" --style comicbook
```

**LINEART**
```
python3 -m ClipDropSDXL --headless --prompt "golden retriever" --style lineart
```

|
facebookresearch/RLCompOpt
|
https://github.com/facebookresearch/RLCompOpt
|
Learning Compiler Pass Orders using Coreset and Normalized Value Prediction. (ICML 2023)
|
This repo contains experiments to learn to optimize program compilation using RL.
For people in FAIR (Meta AI), check [README_FAIR.md](README_FAIR.md) to get started.
## System requirements
The codebase was tested on Ubuntu 18.04. To install some possible missing libraries on Ubuntu 18.04, we need to run `sudo apt-get install libtinfo-dev` and `sudo apt-get install m4`.
## Installing compilers
We use `~/.local/opt` as the installation directory of compilers.
```sh
# Download and unpack a modern clang release.
mkdir -p ~/.local/opt && cd ~/.local/opt
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-10.0.0/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
tar xf clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
```
We then need to set some environment variables whenever we build or use
CompilerGym. The easiest way to do that is to add them to your `~/.bashrc`:
```sh
cat <<EOF >>~/.bashrc
# === Building CompilerGym ===
# Set clang as the compiler of choice.
export CC=$HOME/.local/opt/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang
export CXX=$HOME/.local/opt/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang++
export PATH=$HOME/.local/opt/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin:$PATH
export BAZEL_BUILD_OPTS=--repo_env=CC=$HOME/.local/opt/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04/bin/clang
EOF
```
So the environment variables are set every time you log in, or you can run `source ~/.bashrc` in the current shell to set them now. Run `echo $CC` to verify the environment variables are set; it should output the path of the clang compiler.
## Environment setup
Follow these steps to set up a development environment on Ubuntu 18.04 (or any other Linux
/ macOS machine, with some tweaks).
1. **Setup conda environment:**
```sh
conda create -n rlcompopt python=3.8 cmake pandoc patchelf
conda activate rlcompopt
```
2. **Install bazel:** Bazel is used to compile the C++/python package. Here we
will use bazelisk to manage our bazel installation and download it to
`~/.local/bin`:
```sh
mkdir -p ~/.local/bin
wget https://github.com/bazelbuild/bazelisk/releases/download/v1.7.5/bazelisk-linux-amd64 -O bazel
chmod +x bazel && mkdir -p ~/.local/bin && mv -v bazel ~/.local/bin
export PATH=~/.local/bin:$PATH
```
3. **Install PyTorch:** The codebase requires 2.0 > PyTorch >= 1.12.1. We can install it following the instructions [here](https://pytorch.org/get-started/previous-versions). We recommend using conda to install PyTorch to avoid possible dependency conflicts. You need to find the correct command according to the CUDA version your GPU driver supports (check `nvidia-smi`). For example, I found my GPU driver supported CUDA 11.6, so I ran `conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia` to install PyTorch 1.13.1. After the installation, verify PyTorch is usable on GPU by running `python -c "import torch; print(torch.matmul(torch.rand(2, 8).to(0), torch.rand(8, 4).to(0)).shape)"`. If it outputs `torch.Size([2, 4])` we can go to the next step; otherwise, try to fix the issues by reinstalling PyTorch.
4. **Install `torch-geometric`, `pyzmq`, and logging tools:**
We recommend using conda to install `torch-geometric` and `pyzmq` to avoid possible dependencies conflict.
```sh
conda install -c pyg pyg=2.1.0
conda install -c anaconda pyzmq=23.2.0
conda install -c dglteam dgl=1.1.0
cd ..
git clone https://github.com/yuandong-tian/tools2.git
cd tools2
python -m pip install .
```
5. **Clone CompilerGym and this repo:** We will check out both this repo and
CompilerGym and install all development dependencies by running the following commands. Note that we clone the specific fork of CompilerGym that includes the type graph patch. Change to a desired directory before cloning: `cd /path/of/your/choice`.
```sh
cd ..
git clone --depth 1 --branch rlcompopt https://github.com/youweiliang/CompilerGym.git
cd CompilerGym
make init
cd ..
git clone https://github.com/facebookresearch/RLCompOpt.git
cd RLCompOpt
make init
```
6. **Build and install CompilerGym from source.**
```sh
cd ../CompilerGym
make install
```
If you encounter an error related to installing the library [gym](https://github.com/openai/gym), try to run `pip install setuptools==65.5.0` and then run `make install` again (see this [issue](https://github.com/openai/gym/issues/3176)).
If you want to modify the CompilerGym codebase, you need to make your desired changes and then re-run `make install`.
7. **Install this repo:**
```sh
cd ../RLCompOpt
make install
```
**If you modify this repo, you will need to reinstall it to make any changes to take effect.**
8. **Use RAM rather than NFS for faster environments:** CompilerGym
does quite a lot of disk operations which can be slow on the cluster NFS.
Force CompilerGym to instead keep everything in memory using:
```sh
export COMPILER_GYM_SITE_DATA=/dev/shm/compiler_gym_site_data
```
(Optional) You can even put the entire bazel build tree in memory if you want to speed up
build times. If you want to do this:
```sh
mv ~/.cache ~/.old-cache
mkdir "/dev/shm/${USER}_cache"
ln -s "/dev/shm/${USER}_cache" ~/.cache
```
You may need to change it back with `mv ~/.old-cache ~/.cache` afterward.
9. (Optional) **Automate the environment setup:** Create a script to set up
these environment variables so that you don't have to redo it next time you
spawn a shell:
```sh
cat <<EOF > ~/.rlcompopt_env
conda activate rlcompopt
export PATH=$HOME/.local/bin:$PATH
export COMPILER_GYM_SITE_DATA=/dev/shm/compiler_gym_site_data
EOF
```
Now you can do `source ~/.rlcompopt_env` to restore the environment.
## Preparing data files
The data files can be downloaded from this [Google Drive](https://drive.google.com/drive/folders/1lATNWBKmsubw8bGeFyDlBHXlYbcRrw7S?usp=sharing). You can install gdown to download it:
```
conda install -c conda-forge gdown
gdown --folder https://drive.google.com/drive/folders/1lATNWBKmsubw8bGeFyDlBHXlYbcRrw7S?usp=sharing
```
The commands should save the files under a folder named data.
Or you can download it from the website and place the data folder under the repo, which results in the following file structure.
```
data
├── all_ssl_vocab.db
...
```
## Training
### Training of Normalized Value Prediction (NVP), Behavior Cloning (BC), and Q value
Run the scripts under the `scripts` folder to start training models of NVP, BC or Q value. The model checkpoints, training log, and configurations will be saved under `./outputs`. The configurations are saved in a file named `args.pkl` and can be used for testing later.
### Testing of Normalized Value Prediction (NVP), Behavior Cloning (BC), and Q value
First we create a directory for gathering testing results: `mkdir cg_paper_exp`.
Set the number of CPUs/GPUs to use for testing by setting environment variables `NUM_CPU` and `NUM_GPU`. For example, if you want to use 10 CPUs and 1 GPU, you can run `export NUM_CPU=10; export NUM_GPU=1`.
Run `python rlcompopt/eval_local.py --args_path /path/to/output/args.pkl` to obtain model performance on the validation set and test set.
There is a [script](scripts/test.sh) for testing all models in the outputs folder. You can modify it and run it `bash scripts/test.sh`.
### Training and testing RL-PPO agents
Run `bash scripts/generate_graph_reward_history_online.sh` to start a group of processes (generators) that do the exploration and send trajectories data to the model for training.
And at the same time, in another shell, run `bash scripts/train_graph_reward_history_online.sh` to start the trainer of RL-PPO, which receives trajectories data from the generators.
Alternatively, you can run `python scripts/submit_online_train_ppo_action_histogram.py` and `python scripts/submit_ppo_autophase_action_histogram.py` to run all the RL-PPO experiments. You should check the files and provide necessary arguments to the two scripts.
## Contributing
See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.
## License
RLCompOpt is MIT licensed, as found in the [LICENSE](LICENSE) file.
## Citing RLCompOpt
```BibTeX
@InProceedings{liang2023rlcompopt,
title={Learning Compiler Pass Orders using Coreset and Normalized Value Prediction},
author={Liang, Youwei and Stone, Kevin and Shameli, Ali and Cummins, Chris and Elhoushi, Mostafa and Guo, Jiadong and Steiner, Benoit and Yang, Xiaomeng and Xie, Pengtao and Leather, Hugh and Tian, Yuandong},
year={2023},
booktitle={Proceedings of the 40th International Conference on Machine Learning}
}
```
|
TechTitan0624/Angular-complete
|
https://github.com/TechTitan0624/Angular-complete
| null |
# Angular 5.2 and Laravel 5.6 authentication and CRUD
<a href="http://angular5.rbsavani.com">Demo</a>
# Installation steps
1. Clone both repositories.
2. In the frontend, run `npm install` and `yarn install`.
3. In the backend, run:
`composer install`
`php artisan key:generate`
`php artisan migrate --seed`
`php artisan passport:install`
4. After you run the `passport:install` command you get 2 keys. Take the second key, set it in `frontend/enviroments/environment.ts` and `frontend/enviroments/environment.prod.ts`, and set the backend URL, e.g. `http://localhost:8000`.
5. In the frontend run `ng serve`, and in the backend run `php artisan serve`.
|
Tencent/fast-causal-inference
|
https://github.com/Tencent/fast-causal-inference
|
It is a high-performance causal inference (statistical model) computing library based on OLAP, which solves the performance bottleneck of the existing statistical model library (R/Python) under big data
|
## Fast-Causal-Inference
[](https://github.com/Tencent/fast-causal-inference/blob/master/LICENSE)
[](https://github.com/Tencent/fast-causal-inference/releases)
[](https://github.com/Tencent/fast-causal-inference/pulls)
### Introduction
Fast Causal Inference is Tencent's first open-source causal inference project.
It is an OLAP-based high-performance causal inference (statistical model) computing library,
which solves the performance bottleneck of existing statistical model libraries (R/Python) on big data,
and provides causal inference over massive data with second-level and sub-second-level execution.
At the same time, the threshold for using statistical models is lowered through the SQL language,
making it easy to use in production environments. At present, it has supported the causal analysis of WeChat-Search,
WeChat-Video-Account and other businesses, greatly improving the work efficiency of data scientists.
#### Main advantages of the project:
1. Provides second-level and sub-second-level causal inference execution for massive data
Based on the vectorized OLAP execution engines ClickHouse/StarRocks, the speed makes for an excellent user experience

2. Provides basic operators, high-order causal inference operators, and upper-level application packaging
Supports ttest, OLS, Lasso, tree-based models, matching, bootstrap, DML, etc.

3. Minimalist SQL usage
The SQLGateway WebServer lowers the barrier to using statistical models through the SQL language,
providing a minimalist SQL interface on the upper layer and transparently handling engine-specific SQL expansion and optimization

#### The first version already supports the following features:
Basic causal inference tools
1. ttest based on the delta method, with CUPED support (see the sketch below)
2. OLS, 100 million rows of data, sub-second level
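For intuition, here is a small NumPy sketch of the CUPED variance-reduction adjustment mentioned in item 1. This is our illustration only, not this library's API; see [sql_inference](docs/sql_inference.md) for the real SQL interface.
```python
import numpy as np

def cuped_adjust(y, x):
    # theta = cov(X, Y) / var(X); subtracting theta * (x - mean(x))
    # removes the part of Y's variance explained by the pre-experiment
    # covariate X without biasing the treatment-effect estimate.
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                   # pre-experiment metric
y = 0.8 * x + rng.normal(size=10_000)         # experiment metric
print(np.var(y), np.var(cuped_adjust(y, x)))  # variance drops noticeably
```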
Advanced causal inference tools
1. OLS-based IV, WLS, and other GLS, DID, synthetic control, CUPED, mediation are incubating
2. uplift: minute-level calculation of tens of millions of data
3. Data simulation frameworks such as bootstrap/permutation are being developed, to solve variance estimation when no closed-form solution exists
#### Project application:
Already supported multiple businesses within WeChat, such as WeChat-Video-Account, WeChat-Search, etc.
#### Project open source address
github: https://github.com/Tencent/fast-causal-inference
### Getting started
#### Compile From Source: Building on Ubuntu Linux
##### One-Click Deployment:
> sh bin/build.sh
If the following log is displayed, fast-causal-inference is successfully deployed.
> build success
#### Examples
Use the example data:
> clickhouse client --multiquery < examples/test_data_small.sql
please refer to the documentation for specific algorithms: [sql_inference](docs/sql_inference.md)
##### Building on Any Linux:
For other environments, refer to: https://clickhouse.com/docs/en/install#from-sources
##### Running
This will create executable ${deploy_path}/clickhouse which can be used with client or server arguments.
> clickhouse server
> clickhouse client
#### Install From Docker Image
##### Docker Image Pull Command:
> docker pull fastcausalinference/clickhouse-server:23.3-alpine
##### start server instance:
> docker run -d --network=host --name fast-causal-inference-server --ulimit nofile=262144:262144 fastcausalinference/clickhouse-server:23.3-alpine
For more information refer to: docker run --help
##### connect to it from a native client:
> docker exec -it fast-causal-inference-server clickhouse-client
##### stopping / removing the container:
> docker stop fast-causal-inference-server
> docker rm fast-causal-inference-server
#### Examples
Use the example data:
> docker exec -i fast-causal-inference-server clickhouse-client --multiquery < examples/test_data_small.sql
please refer to the documentation for specific algorithms: [sql_inference](docs/sql_inference.md)
#### A Note About Fast-Causal-Reference Project Version
The current version 0.1.0-rc only releases the Fast Causal Inference ClickHouse module;
the SQLGateway WebServer and PyPI SDK package modules will be released in version 0.1.0-stable,
planned for August. Please keep an eye on the version progress.
|
duythinht/shout
|
https://github.com/duythinht/shout
|
The radio server for wibu
|
## Radio Server for Wibu
### Prerequisite
* Golang 1.20++
* ffmpeg: `brew install ffmpeg`
* A Slack token associated with a Slack app that has permission to read channel history and list/write bookmarks
### Easy to start
```
go run cmd/music-station/main.go
```
### Get title via websocket
```
const socket = new WebSocket("wss://radio.0x97a.com/now-playing");
socket.addEventListener("message", (event) => {
console.log("Message from server ", event.data);
});
```
### Known issues
* So much, but I don't know :)
|
facebook/igl
|
https://github.com/facebook/igl
|
Intermediate Graphics Library (IGL) is a cross-platform library that commands the GPU. It provides a single low-level cross-platform interface on top of various graphics APIs (e.g. OpenGL, Metal and Vulkan).
|
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/facebook/igl/blob/main/.github/igl-full-color-white.svg?raw=true">
<source media="(prefers-color-scheme: light)" srcset="https://github.com/facebook/igl/blob/main/.github/igl-full-color-black.svg?raw=true">
<img alt="IGL Logo" src=".github/igl-full-color-black.svg" width="500">
</picture>
[](https://github.com/facebook/igl/actions)
</div>
Intermediate Graphics Library (IGL) is a cross-platform library that commands the GPU. It encapsulates
common GPU functionality with a low-level cross-platform interface. IGL is designed to support multiple
backends implemented on top of various graphics APIs (e.g. OpenGL, Metal and Vulkan) with a common interface.
There are a lot of good options for abstracting GPU APIs, each making different trade-offs. We designed IGL around the following priorities:
1. *Low-level, forward-looking API.* IGL embraces modern abstractions (command buffers, state containers, bindless, etc) and is designed to give more control than OpenGL's state machine API. As a result, IGL can have leaner backends for modern API's (e.g. Metal, Vulkan).
2. *Minimal overhead for C++.* IGL supports new or existing native rendering code without overhead of language interop or the need for other language runtimes.
3. *Reach + scale in production.* IGL has been globally battle-tested for broad device reliability (especially the long-tail of Android devices as well as Quest 2/3/Pro compatibility for OpenGL/Vulkan) *and* performance-tuned on our apps.
## Supported rendering backends
* Metal 2+
* OpenGL 2.x (requires [GL_ARB_framebuffer_object](https://registry.khronos.org/OpenGL/extensions/ARB/ARB_framebuffer_object.txt))
* OpenGL 3.1+
* OpenGL ES 2.0+
* Vulkan 1.1 (requires [VK_EXT_descriptor_indexing](https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VK_EXT_descriptor_indexing.html))
* WebGL 2.0
## Supported platforms
* Android
* iOS
* Linux
* macOS
* Windows
* WebAssembly
## API Support
| | Windows | Linux | macOS | iOS | Android |
| ------------------------ | -------------------------- | -------------------------- | ----------------------------- | ----------------------------- | -------------------------------- |
| Vulkan 1.1 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: (MoltenVK) | :heavy_multiplication_x: | :heavy_check_mark: (Quest 2/3/Pro) |
| OpenGL ES 2.0 - 3.0 | :heavy_check_mark: (Angle) | :heavy_check_mark: (Angle) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| OpenGL ES 3.1 - 3.2 | :heavy_check_mark: (Angle) | :heavy_check_mark: (Angle) | :heavy_multiplication_x: | :heavy_multiplication_x: | :heavy_check_mark: |
| OpenGL 3.1 - 4.6 | :heavy_check_mark: | :heavy_check_mark: | :heavy_multiplication_x: | :heavy_multiplication_x: | :heavy_multiplication_x: |
| Metal 2 | :heavy_multiplication_x: | :heavy_multiplication_x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_multiplication_x: |
## Build
Before building, run the deployment scripts:
```
python3 deploy_content.py
python3 deploy_deps.py
```
These scripts download external third-party dependencies. Please check [Dependencies] for the full list.
* Windows
```
cd build
cmake .. -G "Visual Studio 17 2022"
```
* Linux
```
sudo apt-get install clang xorg-dev libxinerama-dev libxcursor-dev libgles2-mesa-dev libegl1-mesa-dev libglfw3-dev libglew-dev libstdc++-12-dev
cd build
cmake .. -G "Unix Makefiles"
```
* macOS
```
cd build
cmake .. -G "Xcode" -DIGL_WITH_VULKAN=OFF
```
* iOS
```
cd build
cmake .. -G Xcode -DCMAKE_TOOLCHAIN_FILE=../third-party/deps/src/ios-cmake/ios.toolchain.cmake -DPLATFORM=SIMULATOR64
```
* Android
The Gradle project is located within the [build/android](./build/android/) folder.
* WebAssembly
Please install [Emscripten](https://emscripten.org/docs/getting_started/downloads.html) and [Ninja](https://ninja-build.org/).
```
cd build
emcmake cmake .. -G Ninja
cmake --build .
```
## Screenshots


## License
IGL is released under the MIT license, see [LICENSE.md](./LICENSE.md) for the full text as well as third-party library
acknowledgements. SparkSL Compiler is released under the SparkSL Compiler License, see [LICENSE](https://github.com/facebook/igl/releases/download/SparkSL/SparkSL.LICENSE) for full text.
|
nitroz3us/GPThreatIntel-Summarizer
|
https://github.com/nitroz3us/GPThreatIntel-Summarizer
|
GPThreatIntel-Summarizer is a Python tool that uses OpenAI models and CTI to automate report summarization. Extract key insights from CTI reports, generate concise summaries.
|
# GPThreatIntel-Summarizer
GPThreatIntel-Summarizer is a Python-based repository that leverages the power of OpenAI's GPT (Generative Pre-trained Transformer) models to provide an automated summarization solution for Cyber Threat Intelligence (CTI) reports. This tool simplifies the process of extracting key insights from CTI reports, enabling cyber threat analysts to generate concise and informative summaries for upper management.
## Key Features
- Utilizes OpenAI GPT models ([text-davinci-003](https://platform.openai.com/docs/models/overview)) for natural language processing and summarization tasks.
- Extracts relevant text from CTI reports using [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) & [pypdf](https://pypi.org/project/pypdf/).
- Generates summarized reports based on user-defined length or word count.
- Extracts Indicators of Compromise (IOCs) and Tactics, Techniques, and Procedures (TTPs) from reports.
- Provides an intuitive web interface powered by [FastAPI](https://fastapi.tiangolo.com/) for easy interaction and display of results.
## Getting Started (Locally)
To get started with GPThreatIntel-Summarizer, follow these steps:
1. Clone the repository:
```bash
git clone https://github.com/yourusername/GPThreatIntel-Summarizer.git
```
2. Install the required dependencies:
```bash
pip install -r requirements.txt
```
3. Run the application:
```bash
python app.py
```
4. Access the web interface in your browser at http://localhost:5001.
## Getting Started (Online)
1. Access the web interface in your browser at https://gp-threat-intel-summarizer.vercel.app/
## Usage
- Enter your OpenAI API Key, which can be found here
- https://platform.openai.com/account/api-keys
- Enter the URL or paste the text content of the CTI report in the provided text field.
- Alternatively, you can upload a PDF file.
- Choose your GPT Model.
- Specify the desired length or word count for the summary.
- Click the "Summarize" button to generate a summary of the report.
- The extracted IOCs and TTPs will be displayed below the summarized report.
## Demo
https://github.com/nitroz3us/GPThreatIntel-Summarizer/assets/109442833/e8327641-586b-488d-8a5a-95af125fc588
## Future Developments
- [ ] Parse IOC's from an image
- [x] Allow users to choose various GPT Models
## Why am I doing this?
- Wanted to try out OpenAI API & FastAPI
## Technologies Used
- OpenAI
- FastAPI
- TailwindCSS
## Limitations
- Since the OpenAI model I am using is **text-davinci-003**, it has its [limitations](https://platform.openai.com/docs/models/gpt-3-5): it accepts at most 4097 tokens.
- Therefore, if the text content a user wants to send to the model is longer than 4097 tokens, the model will not be able to process it (a simple chunking workaround is sketched below).
## Workarounds/Solutions to the limitations
1. Implement text embedding
2. Use a different OpenAI model, e.g. GPT3.5 Turbo, GPT4
- More information can be found on OpenAI's documentation here
- https://platform.openai.com/docs/models/overview
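A third, simpler workaround is to split a long report into chunks that fit the context window, summarize each chunk, and then summarize the combined summaries. Below is a minimal sketch of that idea; it is not part of this repo, it assumes roughly one token per word for simplicity, and `summarize` is a hypothetical stand-in for the OpenAI completion call.
```python
def chunk_words(text: str, max_words: int = 3000) -> list[str]:
    # Split the report into word-based chunks that stay safely under
    # the model's 4097-token context limit.
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_long_report(text: str, summarize) -> str:
    # Map: summarize each chunk. Reduce: summarize the summaries.
    summaries = [summarize(chunk) for chunk in chunk_words(text)]
    if len(summaries) == 1:
        return summaries[0]
    return summarize(" ".join(summaries))
```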
## Contributing
Contributions are welcome! If you have any suggestions, bug reports, or feature requests, please open an issue or submit a pull request.
## License
This project is licensed under the MIT License.
GPThreatIntel-Summarizer empowers cybersecurity professionals to efficiently analyze and communicate critical CTI findings, enhancing decision-making processes and improving organizational security.
|
pentilm/StellarSolver
|
https://github.com/pentilm/StellarSolver
|
🌌 High-Performance N-Body Simulation with CUDA and Barnes-Hut Algorithm. 一个努力的,一个延续了近二百个文明的努力,为解决三体问题的努力,寻找太阳运行规律的努力。
|
# StellarSolver: High-Performance N-Body Simulation with CUDA and Barnes-Hut Algorithm
## Overview
StellarSolver is a comprehensive tool designed for simulating the n-body problem, utilizing the Barnes-Hut algorithm powered by CUDA. The project provides visualization through OpenGL, following Nvidia's CUDA toolkit examples. Currently, the visualization process is executed by the host, transferring data back at each time-step without utilizing CUDA-OpenGL interoperability due to system restrictions during development.
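To make the core idea concrete, here is a minimal CPU-side sketch of the Barnes-Hut opening criterion. It is an illustration with an assumed quadtree node structure (`com_x`, `com_y`, `mass`, `size`, `is_leaf`, `children`), not this project's CUDA code.
```python
import math

THETA = 0.5  # opening angle; smaller means more accuracy, less speedup

def accel_from(node, body, g=1.0):
    # Distant cells are approximated by their center of mass whenever
    # cell size s over distance d falls below the opening angle THETA.
    dx, dy = node.com_x - body.x, node.com_y - body.y
    d = math.hypot(dx, dy) + 1e-9            # softened distance
    if node.is_leaf or node.size / d < THETA:
        a = g * node.mass / (d * d)          # treat cell as one point mass
        return a * dx / d, a * dy / d
    ax = ay = 0.0
    for child in node.children:              # otherwise open the cell
        cax, cay = accel_from(child, body, g)
        ax, ay = ax + cax, ay + cay
    return ax, ay
```
This pruning is what turns the naive O(n²) force computation into O(n log n), and it is the part this project accelerates on the GPU.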
## Prerequisites
StellarSolver necessitates the installation of Nvidia's CUDA toolkit on a system with a CUDA-capable device and a GCC compiler. The visualization component uses OpenGL, SFML, GLEW (OpenGL Extension Wrangler Library), and GLM (OpenGL Mathematics).
For CUDA installation, refer to the [Nvidia CUDA download page](https://developer.nvidia.com/cuda-downloads) and the CUDA Quick Start Guide. On Ubuntu, install SFML and GLEW by executing the following commands:
```bash
sudo apt-get install libsfml-dev
sudo apt-get install libglew-dev
```
GLM, a collection of header files, can be acquired [here](http://glm.g-truc.net/0.9.8/index.html). Be sure to update the makefile INCLUDE variable to set the path to the GLM directory.
## Compilation
To compile the code, execute the following commands:
```bash
make clean
make build
```
## Execution
StellarSolver offers multiple command-line arguments for customization.
The standard execution of the Barnes-Hut algorithm with OpenGL visualization:
```bash
./app -barnes-hut -opengl
```
Execution with benchmark statistics for 500 iterations:
```bash
./app -barnes-hut -benchmark -iterations=500
```
Additional command-line options are detailed below:
* `-disk` : Use a simple disk model (default).
* `-plummer` : Use a Plummer model.
* `-colliding-disks` : Use two colliding disks.
* `-opengl` : Enable OpenGL visualization.
* `-benchmark` : Output time statistics.
* `-debug` : Run debug tests.
* `-iterations=<n>` : Define the number of iterations (defaults to 50).
* `-gravity=<n>` : Adjust the gravity parameter (defaults to 1.0).
* `-dampening=<n>` : Adjust the velocity dampening parameter (defaults to 1.0).
## Additional Notes
Be sure to manually keep the 'numbodies' variable in main.cpp consistent with the 'blockSize' variables in kernels.cu and particle_cuda.cu. For instance, if you set `numbodies = 64*64` in main.cpp, also set `blockSize = 64` in kernels.cu, and `blockSize = 64, gridSize = 64` in particle_cuda.cu.
|
TechTitan0624/my-reactsystem
|
https://github.com/TechTitan0624/my-reactsystem
| null |
# business-card-react-app
## Solo Project
This project is set up with Vite + React. Deployed on Netlify
## Installation
``> yarn install``
## To run app
``> yarn run dev``
## Things I added
Added fontawesome package ! phew 😅
|
sammyf/Autobooga
|
https://github.com/sammyf/Autobooga
|
Internet search via SEARX and URL Retrieval for oobabooga
|
# Autobooga
## Acknowledgment :
thanks to
* Sascha Endlicher of Panomity GmbH for releasing MegaSearch
for Oobabooga and indirectly making me write this. Part of the Autobooga
code (the Searx search) is heavily based on MegaSearch's code.
* The InvokeAI user *Gille*, who actually tried to use this extension and found way too many bugs I really should have caught.
## What's new :
* Upload file button added
## What it does :
*Autobooga* is just a simple extension for oobabooga that gives
LLMs the ability to call a SEARX search engine, to read URLs and files...
and it also adds a clock.
### The Date and Time
are added at the start of each prompt in the Format :
"It's 12:00 on Friday February, 1 April 2026."
### Files
can be opened by using those key (case insensitive) sentences :
* open the file
* read the file
* get the file
followed by a path enclosed in quotes (either " or ' works). Text files and PDFs are supported. Note that the content is still subject to the "maximum tokens extracted" setting.
If the file cannot be opened for some reason, the LLM **should** tell you so. Honestly, it's hit and miss. Sometimes it will just hallucinate content.
Some examples
* Please open the file "c:\\What I Did Last Xmas.txt" and write a song about it.
* Open the file '/home/kermit/Documents/Love Letters By Miss Piggy.pdf'
**OR** by using the file upload button in the Autobooga panel.
### Internet searches
are generally triggered by the user, by using one of the
following (case insensitive) key phrases :
* search the internet for information on
* search the internet for information about
* search for information about
* search for information on
* search for
* I need more information on
* search the internet for
* can you provide me with more specific details on
* what can you find out about
* what information can you find out about
* what can you find out on
* what information can you find out on
* what can you tell me about
* what do you know about
* ask the search engine on
* ask the search engine about
If the LLM uses any of them, it triggers a search itself. In my experience this
sadly doesn't happen very often.
The search is performed by calling a SEARX-NG instance (https://github.com/searxng). The extension adds glimpses
of the first five hits to the user prompt and marks them as internet search results.
*If you have a raspberry-pi or similar lying around or a server with a bit of free space/bandwidth it's worth thinking
about installing SEARX-NG on it and to use that for your LLM.*
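For illustration, here is a minimal sketch of how trigger-phrase matching like the above could work. This is our example, not the extension's actual code, and the phrase list is abbreviated.
```python
# Hypothetical trigger matcher: find a (case-insensitive) trigger phrase
# in the prompt and return the search query that follows it.
TRIGGERS = [
    "search the internet for information on",
    "search for information about",
    "what can you tell me about",
    "search for",
]

def extract_query(prompt: str) -> str | None:
    lowered = prompt.lower()
    # Try longer phrases first so "search for information about"
    # wins over the shorter "search for".
    for trigger in sorted(TRIGGERS, key=len, reverse=True):
        idx = lowered.find(trigger)
        if idx != -1:
            return prompt[idx + len(trigger):].strip(" ?.!")
    return None

print(extract_query("Search for information about golden retrievers"))
# -> "golden retrievers"
```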
### URL Retrieval
is triggered only by the user right now. As soon as there is a
full URL (including the http/https protocol) in the prompt, the first 1000 words of the page
behind the URL are retrieved. The model still receives the whole prompt. If the prompt was
only the URL, a "summarize this page" is added at the end.
## How models perform :
This extension was found to work well with 13B models and especially well with
30B models. *Uncensored* models seem to perform better than guardrailed ones, and the
higher the context limit the better.
On a RTX3090/24GB the models that performed best for me (very subjective and not representative
opinion) were :
* TheBloke_WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ
* TheBloke_orca_mini_v2_13b-GPTQ
I'm still unsure what matters most for this extension, more context or a better model, so decide for
yourself. It's really a subjective matter. 30B models are great at summarizing pages written in languages
or even scripts you don't understand, while a big context lets you ask more questions and dig deeper into
pages and results. It's a trade-off (unless you have tons of VRAM to spare).
## Requirements :
Obviously Oobabooga, and as much VRAM as you can get, as context limit is king.
You also need to be able to access a Searx instance **with support for json output**. You can find a list of
public instances here : https://searx.space/ .
The extension uses these packages:
```
requests
beautifulsoup4
summarizer
datetime
re
PyPDF2
```
So nothing horrible that will break your system.
## Installation :
* Check out this repo into `{YOUR OOBABOOGA INSTALLATION DIRECTORY}/extensions`
* Enter the oobabooga virtual environment if you have one and execute
`pip install -r requirements.txt` to install missing modules.
* Either add the extension on startup with
`--extension autobooga` or check it in the interface panel
* **modify the settings in the Autobooga Accordion panel in the UI**
You're set.
Just one last thing ...
### DON'T TRUST THE LLM!!!
I mean it! While the models I tried did a terrific job of summarizing the stuff they retrieve,
they can still hallucinate heavily. 13B models and lower are especially prone to ignoring what
they read (I had a 7B model actually complaining that its completely fabricated story hadn't made
a bigger splash in the news), and even 30B models are not safe from extrapolating "facts" from random
elements on the page. Always double-check if you can, and if you can't, then use with extreme caution!
### DON'T TRUST THE LLM!!!
Yes, that was on purpose and not an accidental copy and paste.
## THE WOKE PART ...
Anyway ... have fun and enjoy the fact that we live in the future with AI, electric cars, VR, and a
global climate catastrophe just around the corner. Also, remember that you probably use up less
energy running LLMs at home on your gaming rig than if you used ChatGPT, Bing or Bard all the time (and probably
still less than if you played Cyberpunk 2077 or any other graphically challenging game)!
|
KasperskyLab/uif
|
https://github.com/KasperskyLab/uif
|
Integration Platform to build UI and Web Services
|
# UIF — web starter toolset
An opinionated toolset to build UI and Web Services like so:

Currently available:
* [@kaspersky/components](./packages/kaspersky-components/) — Hexa UI, Kaspersky Design System
* [@kaspersky/ui-builder](./packages/kaspersky-ui-builder/) — WYSIWYG editor to create HTML forms
* [@kaspersky/runtime](./packages/kaspersky-runtime/) — Application Bus
* [@kaspersky/dev-tools](./packages/kaspersky-dev-tools/) — Set of configs for ESLint, Babel, TypeScript
## What? Why? How?
### What is UIF
UIF is a technology platform for building user interfaces and web services.
### Key benefits
UIF allows you to:
- reduce time-to-market
- reduce the development costs
- improve the quality of the products being developed
How is this achieved? Quite simply, UIF provides:
- up-to-date styles from Kaspersky Design System
- unification of approaches to standard solutions and code base
- a single "point of truth" for answering technical questions
- reuse of accumulated expertise in each product
## How UIF works
UIF consists of 3 parts:
1. UI Kit
2. HTML Forms Management
3. Plugin architecture (Micro Frontends)

UIF has 3 main usage scenarios:
1. UIF-based Console — Standalone UI
2. Plugin — Micro Frontend
3. UI Components only — UI Kit
## Kaspersky Open Single Management Platform
UIF is part of the XDR platform Kaspersky Open Single Management Platform.
Kaspersky OSMP is used to build the Kaspersky Ecosystem.
Open Single Management Platform includes:
- Incident Response Platform
- SIEM
- Endpoint and Non-Endpoint Protection
- Asset Management
- Log management
- IAM
- Communication Platform
- Integration Platform
Detailed presentation of Kaspersky OSMP from Anton Ivanov, Kaspersky CTO: [https://www.youtube.com/watch?v=GLOqZh0zTfg](https://www.youtube.com/watch?v=GLOqZh0zTfg)
[Watch the presentation on YouTube](https://www.youtube.com/watch?v=GLOqZh0zTfg)

|
Lin-jun-xiang/docGPT-streamlit
|
https://github.com/Lin-jun-xiang/docGPT-streamlit
|
Langchain and Streamlit to develope a docGPT (PDF) :mushroom:
|
# docGPT
[English](./README.md) | [中文版](./README.zh-TW.md)
- Table of Contents
- [Introduction](#introduction)
- [What's LangChain?](#whats-langchain)
- [How to Use docGPT?](#how-to-use-docgpt)
- [How to Develop a docGPT with Streamlit?](#how-to-develop-a-docgpt-with-streamlit)
- [Advanced - How to build a better model in langchain](#advanced---how-to-build-a-better-model-in-langchain)
* Main Development Software and Packages:
* `Python 3.8.6`
* `Langchain 0.0.218`
* `Streamlit 1.22.0`
* Using this tool requires at least the `openai_api_key`. You can visit the [link](https://platform.openai.com/) to learn how to obtain the key.
---
### Introduction
* Project Purpose:
* Build a powerful "LLM" model using langchain and streamlit, **enabling your LLM model to do what ChatGPT can't**:
* **Connect with external data** by using PDF documents as an example, allowing the LLM model to understand the uploaded files through RetrievalQA techniques.
* Integrate LLM with other tools to achieve **internet connectivity**. For instance, using Serp API as an example, leverage the Langchain framework to enable querying the model for **current issues** (i.e., **Google search engine**).
* Integrate LLM with the **LLM Math model**, enabling accurate **mathematical calculations**.
* This project consists of three main components:
* [`DataConnection`](../model/data_connection.py): Allows the LLM to communicate with external data, i.e., read PDF files and perform text segmentation for large PDFs to avoid exceeding OpenAI's 4000-token limit (see the splitter sketch after this list).
* [`docGPT`](../docGPT/): This component enables the model to understand the content of PDFs. It includes embedding PDF text and building a retrievalQA model using Langchain. For more details, please refer to the [documentation](https://python.langchain.com/docs/modules/chains/popular/vector_db_qa).
* [`agent`](../agent/agent.py): Responsible for managing the tools used by the model and automatically determining which tool to use based on the user's question. The tools include:
* `SerpAI`: Used for "**current questions**" by performing a **Google search**.
* `llm_math_chain`: Used for "**mathematical calculations**" by performing mathematical computations.
* `docGPT`: Used for answering questions about the content of PDF documents. (This tool is built using retrievalQA)
* `docGPT` is developed based on **Langchain** and **Streamlit**.
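For intuition, the segmentation step in `DataConnection` can be done with a LangChain text splitter, as in this minimal sketch (the chunk sizes are illustrative, not the project's actual values):
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

long_pdf_text = "Some very long PDF text ... " * 500  # stand-in for real PDF text

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # characters per chunk (illustrative)
    chunk_overlap=100,  # overlap preserves context across boundaries
)
chunks = splitter.split_text(long_pdf_text)
print(len(chunks))
```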
---
### What's LangChain?
* LangChain is a framework for developing applications powered by language models. It supports the following applications:
1. Connecting LLM models with external data sources.
2. Enabling interactions with LLM models.
* For an introduction to LangChain, it is recommended to refer to the official documentation or the GitHub [repository](https://github.com/hwchase17/langchain).
**Questions that ChatGPT cannot answer can be handled by Langchain!**
Here, the author briefly introduces the differences between Langchain and ChatGPT. The following example will show you why this open-source project called Langchain is so amazing!
> Imagine a scenario where ChatGPT cannot answer mathematical questions or questions about events beyond 2020 (e.g., "Who will be the president in 2023?").
>
> * For mathematical questions: In addition to the OpenAI model, there is a specialized tool called math-llm that handles mathematical questions.
> * For current questions: We can use Google search.
>
> Therefore, to design a powerful and versatile AI model, we need to include three tools: "chatgpt", "math-llm", and "Google search".
>
> If the user's question involves mathematical calculations, we use the math-llm tool to handle and answer it.
>
> In the non-AI era, we would use `if...else...` to decide which tool to use based on the user's question. However, Langchain provides a more flexible and powerful way to handle this.
> In the AI era, we want users to directly ask their questions without having to pre-select the question type! In Langchain, there is a concept called "agent" that allows us to:
* Provide tools for the agent to manage, such as `tools = ['chatgpt', 'math-llm', 'google-search']`.
* Include chains designed using Langchain, such as using the `retrievalQA chain` to create a question-answering model based on document content, and append this chain to the tools managed by the agent.
* **Allow the agent to determine which tool to use based on the user's question** (fully automated and AI-driven).
With Langchain, we can create our own ChatGPT model that can be general-purpose or tailored for specific industries and commercial use!
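For intuition, wiring up such an agent in Langchain might look like the minimal sketch below (the stub tool functions stand in for SerpAPI, `llm_math_chain`, and the retrievalQA chain; an `OPENAI_API_KEY` is assumed to be set):
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

# Stubs standing in for the real tools described above.
def search_web(query: str) -> str:
    return "stub: Google search result"   # replace with a SerpAPI call

def do_math(expression: str) -> str:
    return "stub: math result"            # replace with LLMMathChain

def ask_pdf(question: str) -> str:
    return "stub: answer from the PDF"    # replace with the retrievalQA chain

llm = ChatOpenAI(temperature=0)
tools = [
    Tool(name="google-search", func=search_web,
         description="Useful for questions about current events."),
    Tool(name="math-llm", func=do_math,
         description="Useful for mathematical calculations."),
    Tool(name="docGPT", func=ask_pdf,
         description="Useful for questions about the uploaded PDF."),
]
# The agent automatically picks a tool based on the user's question.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
print(agent.run("What does the uploaded PDF say about revenue?"))
```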
---
### How to Use docGPT?
* Visit the [application](https://docgpt-app.streamlit.app/).
* Enter your API keys:
* `OpenAI API Key`: Required.
* `SERPAPI API Key`: Optional. If you want to ask questions about content not appearing in the PDF document, you need this key.
* Upload a PDF file from your local machine.
* Start asking questions!

---
### How to Develop a docGPT with Streamlit?
A step-by-step tutorial to quickly build your own chatGPT!
First, clone the repository using `git clone https://github.com/Lin-jun-xiang/docGPT-streamlit.git`.
There are two methods:
* Local development:
* `pip install -r requirements.txt`: Download the required packages for development.
* `streamlit run ./app.py`: Start the service in the project's root directory.
* Start exploring!
* Use Streamlit Community Cloud for free deployment, management, and sharing of applications:
* Put your application in a public GitHub repository (make sure it has a `requirements.txt`!).
* Log in to [share.streamlit.io](https://share.streamlit.io/).
* Click "Deploy an App" and paste your GitHub URL.
* Complete the deployment of your [application](https://docgpt-app.streamlit.app/).
---
### Advanced - How to build a better model in langchain
Using Langchain to build docGPT, you can pay attention to the following details that can make your model more powerful:
1. **Language Model**
Choosing the right LLM Model can save you time and effort. For example, you can choose OpenAI's `gpt-3.5-turbo` (default is `text-davinci-003`):
```python
# ./docGPT/docGPT.py
llm = ChatOpenAI(
    temperature=0.2,
    max_tokens=2000,
    model_name='gpt-3.5-turbo'
)
```
Please note that there is no best or worst model. You need to try multiple models to find the one that suits your use case the best. For more OpenAI models, please refer to the [documentation](https://platform.openai.com/docs/models).
(Some models support up to 16,000 tokens!)
2. **PDF Loader**
There are various PDF text loaders available in Python, each with its own advantages and disadvantages. Here are three loaders the authors have used:
([official Langchain documentation](https://python.langchain.com/docs/modules/data_connection/document_loaders/how_to/pdf))
* `PyPDF`: Simple and easy to use.
* `PyMuPDF`: Reads the document very **quickly** and provides additional metadata such as page numbers and document dates.
* `PDFPlumber`: Can **extract text within tables**. Similar to PyMuPDF, it provides metadata but takes longer to parse.
If your document contains multiple tables and important information is within those tables, it is recommended to try `PDFPlumber`, which may give you unexpectedly good results (see the loader sketch at the end of this section)!
Please do not overlook this detail, as without correctly parsing the text from the document, even the most powerful LLM model would be useless!
3. **Tracking Token Usage**
This doesn't make the model more powerful, but it allows you to track the token usage and OpenAI API key consumption during the QA Chain process.
When using `chain.run`, you can try using the [method](https://python.langchain.com/docs/modules/model_io/models/llms/how_to/token_usage_tracking) provided by Langchain to track token usage here:
```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as callback:
    response = self.qa_chain.run(query)
    print(callback)

# Result of print:
"""
chain...
...
> Finished chain.
Total Tokens: 1506
Prompt Tokens: 1350
Completion Tokens: 156
Total Cost (USD): $0.03012
"""
```
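Following up on the loader comparison in point 2, swapping in `PDFPlumber` through LangChain might look like this minimal sketch (`report.pdf` is a placeholder path):
```python
from langchain.document_loaders import PDFPlumberLoader

loader = PDFPlumberLoader("report.pdf")  # placeholder path
documents = loader.load()                # one Document per page, table text included
print(documents[0].metadata)             # page number and other metadata
```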
<a href="#top">Back to top</a>
|
trancethehuman/agent_with_memory
|
https://github.com/trancethehuman/agent_with_memory
|
A starter template for AI Companions
|
# AI Companion (Replika / Paradot clone)
<img src="./readme_resources/replika.jpg" alt="replika" width="450">
### How it works
TL;DR
- We're combining a custom LangChain Chain as our AI Companion agent, and OpenAI Functions Agent as Entity Extractor for entity memory
Details
- User sends a message to the AI Companion
- Agent 1 receives the message and responds with a LangChain `LLMChain`
- Agent 2 grabs the message and figures out whether or not it warrants a user profile update
- The user receives a response from Agent 1. Agent 2 updates the user's data at the same time.
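A minimal sketch of the Agent-2 side (the `entities` dict mirrors the entity memory described below; the naive keyword extraction is a placeholder for the OpenAI Functions agent, not this repo's actual code):
```python
entities: dict[str, str] = {}  # persistent user profile (entity memory)

def maybe_update_profile(message: str) -> None:
    """Placeholder for Agent 2: pull out facts worth remembering."""
    # A real implementation would call an OpenAI Functions agent here.
    if "my name is" in message.lower():
        entities["name"] = message.split("my name is", 1)[1].strip(" .!")

maybe_update_profile("Hi, my name is Sam.")
print(entities)  # {'name': 'Sam'}
```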
How memory works
- Messages are kept in a `messages_history` array in `memory.py`
- Old messages are passed into the Agent's prompt but only the last K messages
- Messages older than K are summarized and stored in `summaries`, also in `memory.py` and passed into the prompt
- User's data (entity memory) is kept in an entities dictionary, in `memory.py`, to provide persistent information about the user. This is what Agent 2 updates (see the sketch below).
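A minimal sketch of that windowed-memory idea (`K`, `summarize`, and the data structures here are illustrative stand-ins, not this repo's actual code; `summarize` is a placeholder for an LLM summarization call):
```python
K = 6  # how many recent messages stay verbatim in the prompt

messages_history: list[str] = []
summaries: list[str] = []

def summarize(messages: list[str]) -> str:
    # Placeholder: replace with an LLM call that condenses old messages.
    return "summary of: " + " | ".join(messages)

def add_message(message: str) -> None:
    messages_history.append(message)
    if len(messages_history) > K:
        # Fold everything older than the last K messages into a summary.
        summaries.append(summarize(messages_history[:-K]))
        del messages_history[:-K]

def build_prompt(user_input: str) -> str:
    # Summaries of old turns + the last K raw messages + the new input.
    return "\n".join([*summaries, *messages_history, user_input])
```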
### Create a new Python virtual environment
`python -m venv agent-with-memory` (Mac)
`py -m venv agent-with-memory` (Windows 11)
### Activate virtual environment
`.\agent-with-memory\Scripts\activate` (Windows)
`source agent-with-memory/bin/activate` (Mac)
### Install dependencies
`poetry install --sync` or `poetry install`
### Setup `.env` file
```text
OPENAI_API_KEY=XXXXXX
```
### Usage
`py main.py` (Windows 11)
`python main.py` (Mac)
|
Pzqqt/android_kernel_xiaomi_marble
|
https://github.com/Pzqqt/android_kernel_xiaomi_marble
|
Melt Kernel For Redmi Note 12 Turbo (marble) / Poco F5 (marblein)
|
# How do I submit patches to Android Common Kernels
1. BEST: Make all of your changes to upstream Linux. If appropriate, backport to the stable releases.
These patches will be merged automatically in the corresponding common kernels. If the patch is already
in upstream Linux, post a backport of the patch that conforms to the patch requirements below.
- Do not send patches upstream that contain only symbol exports. To be considered for upstream Linux,
additions of `EXPORT_SYMBOL_GPL()` require an in-tree modular driver that uses the symbol -- so include
the new driver or changes to an existing driver in the same patchset as the export.
- When sending patches upstream, the commit message must contain a clear case for why the patch
is needed and beneficial to the community. Enabling out-of-tree drivers or functionality is not
a persuasive case.
2. LESS GOOD: Develop your patches out-of-tree (from an upstream Linux point-of-view). Unless these are
fixing an Android-specific bug, these are very unlikely to be accepted unless they have been
coordinated with [email protected]. If you want to proceed, post a patch that conforms to the
patch requirements below.
# Common Kernel patch requirements
- All patches must conform to the Linux kernel coding standards and pass `scripts/checkpatch.pl`
- Patches shall not break gki_defconfig or allmodconfig builds for arm, arm64, x86, x86_64 architectures
(see https://source.android.com/setup/build/building-kernels)
- If the patch is not merged from an upstream branch, the subject must be tagged with the type of patch:
`UPSTREAM:`, `BACKPORT:`, `FROMGIT:`, `FROMLIST:`, or `ANDROID:`.
- All patches must have a `Change-Id:` tag (see https://gerrit-review.googlesource.com/Documentation/user-changeid.html)
- If an Android bug has been assigned, there must be a `Bug:` tag.
- All patches must have a `Signed-off-by:` tag by the author and the submitter
Additional requirements are listed below based on patch type
## Requirements for backports from mainline Linux: `UPSTREAM:`, `BACKPORT:`
- If the patch is a cherry-pick from Linux mainline with no changes at all
- tag the patch subject with `UPSTREAM:`.
- add upstream commit information with a `(cherry picked from commit ...)` line
- Example:
- if the upstream commit message is
```
important patch from upstream
This is the detailed description of the important patch
Signed-off-by: Fred Jones <[email protected]>
```
>- then Joe Smith would upload the patch for the common kernel as
```
UPSTREAM: important patch from upstream
This is the detailed description of the important patch
Signed-off-by: Fred Jones <[email protected]>
Bug: 135791357
Change-Id: I4caaaa566ea080fa148c5e768bb1a0b6f7201c01
(cherry picked from commit c31e73121f4c1ec41143423ac6ce3ce6dafdcec1)
Signed-off-by: Joe Smith <[email protected]>
```
- If the patch requires any changes from the upstream version, tag the patch with `BACKPORT:`
instead of `UPSTREAM:`.
- use the same tags as `UPSTREAM:`
- add comments about the changes under the `(cherry picked from commit ...)` line
- Example:
```
BACKPORT: important patch from upstream
This is the detailed description of the important patch
Signed-off-by: Fred Jones <[email protected]>
Bug: 135791357
Change-Id: I4caaaa566ea080fa148c5e768bb1a0b6f7201c01
(cherry picked from commit c31e73121f4c1ec41143423ac6ce3ce6dafdcec1)
[joe: Resolved minor conflict in drivers/foo/bar.c ]
Signed-off-by: Joe Smith <[email protected]>
```
## Requirements for other backports: `FROMGIT:`, `FROMLIST:`,
- If the patch has been merged into an upstream maintainer tree, but has not yet
been merged into Linux mainline
- tag the patch subject with `FROMGIT:`
- add info on where the patch came from as `(cherry picked from commit <sha1> <repo> <branch>)`. This
must be a stable maintainer branch (not rebased, so don't use `linux-next` for example).
- if changes were required, use `BACKPORT: FROMGIT:`
- Example:
- if the commit message in the maintainer tree is
```
important patch from upstream
This is the detailed description of the important patch
Signed-off-by: Fred Jones <[email protected]>
```
>- then Joe Smith would upload the patch for the common kernel as
```
FROMGIT: important patch from upstream
This is the detailed description of the important patch
Signed-off-by: Fred Jones <[email protected]>
Bug: 135791357
(cherry picked from commit 878a2fd9de10b03d11d2f622250285c7e63deace
https://git.kernel.org/pub/scm/linux/kernel/git/foo/bar.git test-branch)
Change-Id: I4caaaa566ea080fa148c5e768bb1a0b6f7201c01
Signed-off-by: Joe Smith <[email protected]>
```
- If the patch has been submitted to LKML, but not accepted into any maintainer tree
- tag the patch subject with `FROMLIST:`
- add a `Link:` tag with a link to the submittal on lore.kernel.org
- add a `Bug:` tag with the Android bug (required for patches not accepted into
a maintainer tree)
- if changes were required, use `BACKPORT: FROMLIST:`
- Example:
```
FROMLIST: important patch from upstream
This is the detailed description of the important patch
Signed-off-by: Fred Jones <[email protected]>
Bug: 135791357
Link: https://lore.kernel.org/lkml/[email protected]/
Change-Id: I4caaaa566ea080fa148c5e768bb1a0b6f7201c01
Signed-off-by: Joe Smith <[email protected]>
```
## Requirements for Android-specific patches: `ANDROID:`
- If the patch is fixing a bug to Android-specific code
- tag the patch subject with `ANDROID:`
- add a `Fixes:` tag that cites the patch with the bug
- Example:
```
ANDROID: fix android-specific bug in foobar.c
This is the detailed description of the important fix
Fixes: 1234abcd2468 ("foobar: add cool feature")
Change-Id: I4caaaa566ea080fa148c5e768bb1a0b6f7201c01
Signed-off-by: Joe Smith <[email protected]>
```
- If the patch is a new feature
- tag the patch subject with `ANDROID:`
- add a `Bug:` tag with the Android bug (required for android-specific features)
|
yoidea/applied-math-alarm
|
https://github.com/yoidea/applied-math-alarm
|
You'll definitely wake up ⏰
|
## An Alarm Clock That Will Absolutely Wake You Up
### YouTube Video
<a href="https://youtu.be/SaouHglJTD0"><img width="720" alt="thumbnail" src="https://i9.ytimg.com/vi/SaouHglJTD0/maxresdefault.jpg"></a>
https://youtu.be/SaouHglJTD0
### Problem Patterns
You can switch the problem type by changing the value passed to `router.push()` inside `handleClick`, on line 17 of `pages/index.tsx`.
```js
const handleClick = () => {
  ...
  router.push('/integral')
}
```
#### Definite Integral
```
http://localhost:3000/integral
```
<img width="1002" alt="int" src="https://github.com/yoidea/applied-math-alerm/assets/26201815/1c369855-96ff-44ef-839b-39842a175dc8">
#### Differential Equation
```
http://localhost:3000/differential
```
<img width="876" alt="diff" src="https://github.com/yoidea/applied-math-alerm/assets/26201815/c5413257-8074-431f-8afa-dd7245dace04">
#### Determinant
```
http://localhost:3000/determinant
```
<img width="974" alt="det" src="https://github.com/yoidea/applied-math-alerm/assets/26201815/f6fb3aca-d9e2-4973-888c-2131546f1ba8">
### How to Run
0. This is a [Next.js](https://nextjs.org/) project bootstrapped with [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app).
1. Set the environment variables
| Variable | Description |
| ------------- | ------------- |
| TOKEN | https://github.com/OpenWonderLabs/SwitchBotAPI#getting-started |
| SECRET | https://github.com/OpenWonderLabs/SwitchBotAPI#getting-started |
| DEVICE_ID | Virtual device ID via the SwitchBot Hub |
| ALARM_ENV | `development`, `production` |
Running the following command will get things working for a start:
```bash
echo "TOKEN=tokentokentoken" > .env.local && echo "SECRET=secretsecret" >> .env.local && echo "DEVICE_ID=02-00000000" >> .env.local && echo "ALARM_ENV=development" >> .env.local
```
2. Start the development server
```bash
yarn && yarn dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
|
heinu123/ClashMeta_For_Magisk
|
https://github.com/heinu123/ClashMeta_For_Magisk
|
A Clash module based on ClashMeta, supporting tproxy and tun transparent proxying, with many convenience features added
|
## Before You Use
If you lack the ability to think and judge for yourself, please do not use this module.
If you find it too much hassle, use [cfa](https://github.com/Kr328/ClashForAndroid) instead.
**Clash configuration problems account for 99% of the issues you will have with this module**
**Please make good use of the Clash logs**
## Module Tutorial && Directory Layout:
```
├── adguard
│   ├── AdGuardHome extension module
├── clash.config (clash startup config)
├── clash.yaml (basic clash config: DNS, etc.)
├── clashkernel
│   ├── clash kernel
├── config.yaml (subscription config, routing rules)
├── packages.list (proxy black/whitelist package names, one per line)
├── mosdns
│   ├── mosdns extension module
├── scripts -- clash startup files
│   ├── clash.inotify
│   ├── clash.iptables
│   ├── clash.service
│   └── clash.tool
├── yacd
│   ├── yacd assets
├── 清除缓存的免流ip.sh (clears cached zero-rating IPs)
└── 清除缓存的谷歌系包名.sh (clears cached Google package names)
```
For ClashMeta tutorials, see:
https://docs.metacubex.one
https://clash-meta.wiki
Official Clash configuration docs (English only):
https://github.com/Dreamacro/clash/wiki/Configuration#introduction
|
linkfy/threadspy
|
https://github.com/linkfy/threadspy
|
Threadspy - Unofficial Threads Meta Api
|
# Threadspy - Unofficial Threads Meta Api
<p align="center">
<img src=".github/cover.png" alt="cover" width="200px" />
</p>
# Post on Threads from PC
## Installation
Clone the project, then run this command inside the main folder to install the packages:
```shell
pip install -r requirements.txt
```
## API
At the moment the API is experimental:
- client.login(user, password)
- client.post_message("Message from threads.net") (Links accepted)
- client.post_message("message", link_attachment="https://www.threads.net/") (Link attachment accepted)
- client.post_message("message", image="firefox.jpg") (Image attachment accepted)
- client.post_message(image="firefox.jpg") (Upload only images)
- client.post_message("Response to thread", post_id="3143089663894947972") by @jackpbreilly
- client.like_post(post_id="3143089663894947972", unlike=False) by @jackpbreilly
Extra:
- Delete "session_data.json" to regenerate login sessions after first login
## Example usage
```python
import asyncio
import os

from client import *
from dotenv import load_dotenv

load_dotenv()

async def main():
    async with Client() as client:
        token = await client.login(os.environ["USER"], os.environ["PASSWORD"])
        result = await client.post_message("Test client api")

asyncio.run(main())
```
## More examples
```python
import asyncio
import os

from client import *
from dotenv import load_dotenv

load_dotenv()

async def main():
    async with Client() as client:
        await client.login(os.environ["USER"], os.environ["PASSWORD"])
        result0 = await client.post_message(image="firefox.jpg")
        # These lines are commented out to avoid massive calls (spam detection);
        # remember not to perform bulk actions, and add delays (time.sleep(60), etc.).
        #result1 = await client.post_message("One", image="firefox.jpg")
        #result2 = await client.post_message("Two", link_attachment="https://twitter.com")
        #result3 = await client.post_message("Three", image="firefox.jpg", link_attachment="https://chrome.com")
        #result4 = await client.post_message("T3", post_id="3143089663894947972")
        #result5 = await client.like_post(post_id="3143089663894947972")
        #result6 = await client.like_post(post_id="3143089663894947972", unlike=True)
        #print(result0, result1, result2, result3, result4)

asyncio.run(main())
```
|
TechTitan0624/Blockchain-cryptocurrency-pro
|
https://github.com/TechTitan0624/Blockchain-cryptocurrency-pro
| null |
# Final-Year-Block-chain-Project
### Nutshell:
Increasing communication security through the combined use of cryptography and blockchain technology.
### Abstract :
The blockchain is an innovative technology that overcomes the security threats facing centralised communication systems and allows the decentralisation of sensitive operations while preserving a high level of security. It eliminates the need for trusted intermediaries. The blockchain is accessible to all network nodes and keeps track of all transactions already made. The goal of our work is to propose a secure messaging solution based on blockchain technology. In this project, we explain why blockchain would make communications more secure, and we propose a model design for blockchain-based messaging, maintaining the performance and security of data recorded on the blockchain. The system combines blockchain with cryptographic processes to form a communication system.
Final Year Blockchain Project

Updates:
https://github.com/Vatshayan/Blockchain-and-Cryptography-Communication-System/blob/main/Untitled%20Diagram.jpg
### Youtube Presentation of this Project : https://youtu.be/Kt8NHdWnvdk
### Contact me for this project's files: PPT, research papers, code, report, and HD video explanation.
### Need the code, documents & explanation video?
## How to Reach me :
### Mail : [email protected]
### WhatsApp: **+91 9310631437** (Helping 24*7) **[CHAT](https://wa.me/message/CHWN2AHCPMAZK1)**
### Website : https://www.finalproject.in/
### 1000 Computer Science Projects : https://www.computer-science-project.in/
Mail/Message me for Projects Help 🙏🏻
Project is made by **me([Vatshayan](https://github.com/Vatshayan))**
|
L7NEG/Ultimate-Menu
|
https://github.com/L7NEG/Ultimate-Menu
|
Hello, it's me L7NEG. I'm back again with my own script. I call it Ultimate Menu, and it does what it says: it is basically a menu inside Kiddion's menu with the No. 1 money method. A trusted menu made with love for you all.
|
<h1 align="center">Ultimate Menu</h1>
<h1 align="center">For Original thread you can find it here====> https://www.unknowncheats.me/forum/grand-theft-auto-v/565688-1-64-ultimate-unlocker.html</h1>
# How To Use Ultimate Menu Script
Each version of the Ultimate Menu (Kiddion/YimMenu) is installed and run in a different way.
For Kiddion's, it is just a matter of putting the script into the scripts directory inside the Kiddion's files.
For YimMenu it is actually much the same, but before initiating the script you will need to go to Settings > Lua > Open Folder.
From there, go to the scripts folder and paste the Ultimate Menu in there.
--------------------------------------------------------------------------------------------------
## Gta V Scripts
I'm presenting some GTA V scripts for online or campaign. These are made by me, so if you find an error please DM me on Discord (@l7neg).
--------------------------------------------------------------------------------------------------
## Questions And Answers:
1: How Do I Install Kiddions Modest Menu?
A: Answered in https://sub.l7neg.tk/discord
2: Why Is The Ultimate Menu Script Not Showing Up In Kiddions?
A: Make Sure The Ultimate Menu.lua Script Is In The Kiddions Scripts Folder
3: How To Do The 6 Mil Crates Method?
A: Answered in https://sub.l7neg.tk/discord
4: What Are The Limits And The Best 5 Ways To Make Money In Gta Online?
A: Answered in https://sub.l7neg.tk/discord
---------------------------------------------------------------------------------------------------
- Meaning Emojis:
🟢 Undetected
🟠 Working on / In Progress / In Testing Stage
🔴 Detected
--------------------------------------------------------------------------------------------------
There is no ETA for big updates, partly because I don't actually know if I will be able to continue this script (both of them), since I have a life and I need to study. This is a helpful and fun project for everyone who wants to use it; it is free, and you can use my script as a template.
## About
- By the way, to anyone who asks about the Ultimate-Menu script: it is not entirely mine. The credits actually list every single person I took code from and everyone who helped me with this.
## Latest Ultimate Menu Kiddions update was on: 7/05/2023
## Latest Ultimate Menu Stand update was on: 24/07/2023
|
titpetric/task-ui
|
https://github.com/titpetric/task-ui
|
Task UI - a web based runner for Taskfile.yml
|
# Task UI
Run your Taskfile.yml from the browser.

Start the docker image with `task docker:run`.
Task UI is meant for Docker environments. A generic Dockerfile exists, which
bundles typical dependencies like `task`, `ttyrec`, `docker`, `docker compose`.
To use, start by navigating to the
[docker](https://github.com/titpetric/task-ui/tree/main/docker)
subfolder. It contains a Taskfile, with the typical commands to build and
run task-ui from a docker image. For examples with Taskfiles you could
run, look into the folder
[examples](https://github.com/titpetric/task-ui/tree/main/examples).
The layout is somewhat responsive, supporting mobile.
# Running
To set up your project to run with Task UI, it's recommended you use the
example docker compose setup here:
```yaml
services:
runner:
image: titpetric/task-ui
restart: always
build: .
command:
- --history-enable
ports:
- 3000:3000
volumes:
- $PWD/app:/app
- /var/run/docker.sock:/var/run/docker.sock:ro
```
In particular, you should mount your `/app` folder which contains your
`Taskfile.yml`, `docker-compose.yml` and other files. Task UI will run
with what you provide it with.
- If you don't want history, remove the `command` flags.
- If you don't want to use docker, remove the volume for `docker.sock`.
The image provides an `id_ecdsa` key to use for ssh hops. The recommended
way to deploy is to provide your own `docker/root/.ssh` folder with
the ssh keys. You can regenerate the ssh key with `task docker:gen`.
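For reference, a minimal `Taskfile.yml` you could drop into the mounted `/app` folder might look like the sketch below (standard Taskfile syntax; the `hello` task is only an illustration):
```yaml
version: '3'

tasks:
  hello:
    desc: Print a greeting
    cmds:
      - echo "hello from task-ui"
```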
# Development
```
task: Available tasks for this project:
* build:         Build task-ui
* fix:           Fix code
* install:       Install task-ui locally
* run:           Run task-ui
* test:          Test task-ui
* docker:build:  Build task-ui docker image
* docker:gen:    Generate ssh key for docker image
* docker:push:   Push task-ui to registry
* docker:run:    Run task-ui in docker env
```
## task: build
Build task-ui
```yaml
dependencies:
  - fix
commands:
  - CGO_ENABLED=0 go build .
```
## task: test
Test task-ui
```yaml
dependencies:
  - fix
commands:
  - CGO_ENABLED=1 go test -race -count=1 -cover ./...
  - CGO_ENABLED=0 go test -count=1 -cover ./...
```
## task: run
Run task-ui
```yaml
dependencies:
  - build
commands:
  - ./task-ui --history-enable
```
## task: install
Install task-ui locally
```yaml
dependencies:
  - fix
commands:
  - go install .
```
## task: fix
Fix code
```yaml
dependencies:
  - deps:goimports
commands:
  - goimports -w .
  - go fmt ./...
  - go vet .
  - go mod tidy
  - ./README.md.sh > README.md
```
|
maxwalts/tinyv-c
|
https://github.com/maxwalts/tinyv-c
|
A tiny nearest-neighbor embedding database written in C
|
# tinyv-c (wip)
A tiny nearest-neighbor embedding database written in C.
- under 300 lines
- vector stores written/read locally as binary files
- includes memory management
- pytorch used for text embedding (wip)
## Example use and testing
```c
int main()
{
// Create a vector store with initial_size = 1
VectorStore *vs = create_vectorstore(1);
assert(vs->current_size == 0);
// Create a vector with initial_size = 1
Vector *v = create_vector(1);
assert(v->current_size == 0);
assert(v->max_size == 1);
// Add some numbers to the vector
add_to_vector(v, 1);
add_to_vector(v, 2);
add_to_vector(v, 3);
assert(v->current_size == 3);
assert(v->data[2] == 3);
// Add the vector to the vector store
add_to_vectorstore(vs, v);
assert(vs->current_size == 1);
assert(vs->data[0] == v);
// Save the vector store to a file, then load it back into a new vector store
write_vectorstore_to_file(vs, "vectorstore.bin");
VectorStore *loaded_vs = read_vectorstore_from_file("vectorstore.bin");
assert(loaded_vs->current_size == vs->current_size);
assert(loaded_vs->data[0]->current_size == vs->data[0]->current_size);
assert(loaded_vs->data[0]->data[0] == vs->data[0]->data[0]);
// Free the memory for 1st vector store
free_vectorstore(vs);
// Print all numbers from all vectors in the loaded vector store
for (size_t i = 0; i < loaded_vs->current_size; ++i)
{
Vector *v = loaded_vs->data[i];
for (size_t j = 0; j < v->current_size; ++j)
{
printf("%d\n", v->data[j]);
}
}
// Free the memory for the loaded vector store
free_vectorstore(loaded_vs);
return 0;
}
```
Run this code by cloning the repo and running
```bash
$ gcc tinyv.c -o tinyv
$ ./tinyv
```
## TODO
- [ ] Test nearest-neighbor and dot products
|
zongpingding/LaTeX_From_Birth_Death
|
https://github.com/zongpingding/LaTeX_From_Birth_Death
|
For LaTeX beginners: almost any question you can think of can be answered somewhere in this repository
|
# 1. Disclaimer
* This repository is for learning and exchange only. I have collected a great deal of LaTeX-related material and gathered it here; nothing in this repository is involved in any form of commercial activity
* Quite a lot of the code in this repository may not be elegant; I don't intend to revise it, so that in the future I can look back on my ignorance as a LaTeX beginner
* Some of the templates have been modified by me; if one fails to compile, please open an issue
* For copyright reasons I do not provide the referenced papers and books, but the vast majority ship with TeX Live or are free, open-source documents
* Apart from the TeX sources related to latexalpha2, everything in this repository compiles on Windows + TeXLive2021 and Arch + TeXLive2023
* All fonts used in this repository are for learning and exchange purposes only, not commercial use
# 2. Repository Structure
You can find essentially all the basics of LaTeX in this repository, including beginner tutorials, advanced LaTeX usage, recommendations on developer reference documentation, and macro programming.
The rough structure:
1. Systematic LaTeX learning materials
2. Advanced LaTeX topics
3. A collection of common LaTeX templates
4. Some of my own typesetting work
# 3. Notes on Each Part
Below is the rough structure and description of each part.
## 3.1 LaTeX Learning Source File Structure
``` shell
├── Beamer Related
│ ├── Beamer
│ ├── Marp
│ └── Slidev
├── IDES_Config
│ ├── InftyEditor
│ ├── LYX
│ ├── ScienstificWorkPlace
│ ├── Sublime
│ ├── TeXMacs【MoGan】
│ ├── Texmaker
│ ├── TexStudio
│ └── VScode
├── LaTeX 3
│ ├── fpeval
│ ├── LATEX3 递归
│ ├── LaTeX3 基础知识
│ ├── LaTeX3 逻辑与循环
│ └── LaTeX3_Slides【项子越】
├── LaTeX 进阶
│ ├── 宏编程
│ ├── 计数器
│ ├── 两个自定义宏包封装
│ ├── Inline DisplayStyle formula
│ ├── LaTeX 表格属性设置
│ ├── LaTeX 盒子
│ ├── LaTeX 宏包编写
│ ├── LaTeX 内部宏
│ ├── LaTeX 学习反思
│ ├── LaTeX 字体配置【1】
│ ├── LaTeX 字体配置【2】
│ ├── LaTeXAlpha2
│ ├── LaTeX Animate
│ ├── LaTeXCooKBooK
│ ├── LaTeX_TColorBox
│ ├── LaTeX_With_UseChatGPT
│ ├── Logos
│ ├── MakingCover
│ ├── Mathpix 相关
│ ├── PythonInLaTeX
│ ├── Template_ElagantLaTeX_Book_Learn
│ ├── Test-Wsl
│ └── TeX 引擎
├── LaTeX 入门
│ ├── 第八次-LaTeX 基础命令
│ ├── 第二次-自己写一个小论文
│ ├── 第九次-pdf预览设置
│ ├── 第六次-LaTeX 浮动体测试
│ ├── 第七次-LaTeXStudio 编译问题
│ ├── 第三次-自定义环境与命令
│ ├── 第四次-LaTeX 关系图绘制
│ ├── 第五次-图片文字并排
│ └── 第一次-Texwork初体验
├── MasterPiece
│ ├── 神经网络笔记重制
│ ├── DesktopWallpaper
│ └── LaTeX从入门到入土
├── Source.txt
├── TheTeXbook
│ ├── Chapter1
│ ├── Chapter2
│ └── Chapter3
└── TiKZ
├── 01. TiKZ_Related_VectorGraph
├── 02. TiKZ_PreDrawFig
├── 03. Geogebra
├── 04. LaTeXDraw
├── 05. TiKZ 入门
├── 07. TiKZ绘制动画-时钟
├── 08. Pgfplots
├── 09. GNUPlot学习【1】
├── 10. TikZ技巧
├── 11. Icons And Logos
├── 12. MindMap
├── 13. GNUPlot学习【2】
├── 14. Animate
├── 15. PgfPlots_Doc
├── 16. TikZ_Doc
└── 17. LaTeXAlpha2
```
## 3.2 Reference Books and Documents
``` shell
├── 经典书籍
│ ├── 简单粗暴LaTeX【K.L Wu】.pdf
│ ├── 了不起的Markdown.pdf
│ ├── 一份简短的LaTeX数学指南.pdf
│ ├── A guide to LATEX and Electronic Publishing_V3【Helmut Kopka】.pdf
│ ├── A guide to LATEX and Electronic Publishing_V4【Helmut Kopka】.pdf
│ ├── LaTeX-2e 完全学习手册【胡伟】.pdf
│ ├── LaTeX-2e 文类和宏包学习手册【胡伟】.pdf
│ ├── LaTeX科技论文写作简明教程【王伊蕾】.pdf
│ ├── Latex科技排版入门_2018【潘建瑜】.pdf
│ ├── LaTeX入门【刘海洋】.pdf
│ ├── LaTeX 文档排版教程【吕荐瑞】.pdf
│ ├── LaTeX -- A Document Preparation System 2e【Lamport--高清扫描】.pdf
│ ├── LaTeX -- A Document Preparation System 2e【Lamport--文字版】.pdf
│ ├── LaTeX Cookbook【Stefan Kottwitz】.pdf
│ ├── LaTeX CookBook【wikibooks】.pdf
│ ├── LaTeX in 24 Hours【Dilip Datta】.pdf
│ ├── LaTeX Notes【包太雷】.pdf
│ ├── lshort【2021】.pdf
│ ├── Mathematical Typesetting with LaTeX【 TUG-Version 0.33】.pdf
│ ├── Math into LaTeX--An Introduction to LaTeX and AMS-LaTeX【George Grätzer】.pdf
│ ├── More Math Into LaTeX【George Grätzer】.pdf
│ ├── Practical LaTeX【George Grätzer】.pdf
│ ├── Science Research Writing A Guide for Non-Native Speakers of English【Hilary Glasman-Deal】.pdf
│ ├── TeXbook【中译-Xianxian】.pdf
│ ├── TeXbyTopic【PureEnglish】.pdf
│ ├── TEX for the Impatient【汉化 -- Paul W.Abrahams】.pdf
│ ├── TEX for the Impatient【Paul W.Abrahams】.pdf
│ ├── TeX in Practice【Volume 1_ Basics】.pdf
│ ├── TEX in Practice【Volume III_ Tokens, Macros 】.pdf
│ ├── TEX in Practice【Volume II_ Paragraphs, Math and Fonts】.pdf
│ ├── TEX in Practice【Volume IV_ Output Routines, Tables】.pdf
│ ├── The Advanced TEXbook【David Salomon】.pdf
│ ├── The computer science of TeX and LaTeX【Eijkhout V. 】.pdf
│ ├── The LaTeX Companion【2nd】.pdf
│ └── The TEXbook.pdf
├── 经典文档手册
│ ├── 批量注册mathpix帐号方法.pdf
│ ├── asymptote_1.79手册【汉化】.pdf
│ ├── asymptote-faq-zh-cn.pdf
│ ├── asymptote manual_2.38.pdf
│ ├── asymptote_package【Charles Staats】.pdf
│ ├── Axmath-Help.pdf
│ ├── Beamer在线教程【汉化-黄旭华】.chm
│ ├── BeamerUserGuide_V3.0【汉化-KiJoo】.pdf
│ ├── BeamerUserGuide_V3.24【汉化-黄旭华】.pdf
│ ├── elegantbook-cn.pdf
│ ├── Geometry【汉化】.pdf
│ ├── Guide_for_the_Use_of_the_International_System_of_Units.pdf
│ ├── TikZ and Pgf Manual_2.10.pdf
│ ├── TiKZ-Euclide_v5.02【中文】.pdf
│ ├── TikZ&PGF笔记3.14b.pdf
│ └── XeTeX字体调用简介.pdf
├── Referrence_Docs
│ ├── 希腊字幕对应表.pdf
│ ├── A Survey of Free Math Fonts for TeX and LaTeX.pdf
│ ├── A very minimal introduction to TikZ.pdf
│ ├── bchart.pdf
│ ├── beccari.pdf
│ ├── ConTEXt the manual.pdf
│ ├── csquotes.pdf
│ ├── LaTeX数学字母表.pdf
│ ├── LaTeX学习笔记_黄新刚.pdf
│ ├── LaTeX学习笔记_Self.pdf
│ ├── LaTeX学习笔记_zoho.pdf
│ ├── LaTeX Alpha2.pdf
│ ├── LaTeX MakeFile.pdf
│ ├── Pgfplots-V1.11.pdf
│ ├── pgfplots-V1.18.pdf
│ ├── PGFTikZ绘图学习笔记_zoho.pdf
│ ├── poster_landscape.pdf
│ ├── psnfss2e.pdf
│ ├── TColorBox Manual.pdf
│ ├── The comprehensive LaTeX symbol list.pdf
│ ├── THU_Thesis_Template.pdf
│ ├── TikZ and Pgf Manual_1.18.pdf
│ ├── TikZ and Pgf Manual_2.10.pdf
│ ├── tikz_example【日文】.pdf
│ ├── TikZiT软件使用指南.pdf
│ ├── TikZ&PGF笔记3.13.pdf
│ ├── TikZ&PGF笔记3.14b.pdf
│ ├── VisualTikZ.pdf
│ └── xeCJK.pdf
├── Referrence_Project
│ ├── 初等数学公式手册
│ ├── 简单粗暴LaTeX
│ ├── 潘建瑜--LaTeX讲义
│ ├── 书稿排版小例子
│ ├── 写给理科生的数学书
│ ├── 最喜欢的LaTeX2 初学者指南【日文】
│ ├── A2physics
│ ├── annotate-equations
│ ├── ascolorbox宏包
│ ├── A Survey of Free Math Fonts for TEX and LaTeX
│ ├── Classical-Mechanics-Notes
│ ├── Elementary-math-note
│ ├── Invitation
│ ├── Latex 绘图学习【自制】
│ ├── LATEX快速入门与提高
│ ├── LaTeX速查
│ ├── LaTeX综合学习
│ ├── LaTeX-drawings
│ ├── LaTeX-study-note
│ ├── lshort-new-zh-cn-master
│ ├── Matlab-命令笔记
│ ├── mtp2lite
│ ├── Mtpro2_Install_Tutorial
│ ├── pgfplots_3D
│ ├── pgfplots.Example-Full
│ ├── tcolorbox
│ ├── ThinkPython2-CN
│ ├── TikZ-master-2D
│ └── xetex_aboutfonts
└── Struct.txt
```
## 3.3 LaTeX Templates
``` shell
├── 华东理工本科毕业论文模板
├── AMScls
├── base
├── Beamer_Template
├── Beauty_Book_V5
├── Beauty_Book_V6.1
├── BookTemplate_Simple
├── colorist
├── CUMCMThesis-master
├── DoD_LaTex_Template
├── EasyBook
├── Elegantbook_Magic_Revision
├── ElegantLaTeX_Serise
├── MCM_LaTeX_Template
├── memoir
├── NotesTeX
├── octavo
├── QYXF_Book
├── SelfMake_Templates
├── Template_baposter
├── Templates.txt
├── Tufte_LaTeX
├── WillowtreeBook
├── YB_Book
└── YC_Book
```
# 4. Finally
Happy LaTeXing !!!
|
eryajf/chatgpt-wecom
|
https://github.com/eryajf/chatgpt-wecom
|
💬 The best-experience WeCom app integration of ChatGPT, implemented in Go 🚀
|
<p align='center'>
<br>
🚀 ChatGPT WeCom 🚀
</p>
<p align='center'>🌉 The best-experience WeCom app integration of ChatGPT, implemented in Go 🌉</p>
<div align="center">
[](https://github.com/eryajf)
[](https://github.com/eryajf/chatgpt-wecom)
[](https://github.com/eryajf/chatgpt-wecom/pulls)
[](https://github.com/eryajf/chatgpt-wecom/stargazers)
[](https://github.com/eryajf/chatgpt-wecom)
[](https://hub.docker.com/r/eryajf/chatgpt-wecom)
[](https://hub.docker.com/r/eryajf/chatgpt-wecom)
[](https://github.com/eryajf/chatgpt-wecom/blob/main/LICENSE)
</div>
<img src="https://camo.githubusercontent.com/82291b0fe831bfc6781e07fc5090cbd0a8b912bb8b8d4fec0696c881834f81ac/68747470733a2f2f70726f626f742e6d656469612f394575424971676170492e676966" width="800" height="3">
</div>
<img src='https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230702_215927.jpg' alt='' />
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**
- [Preface](#preface)
- [Features](#features)
- [Prerequisites](#prerequisites)
- [Tutorial](#tutorial)
  - [Step 1: Create the app](#step-1-create-the-app)
  - [Step 2: Deploy the project](#step-2-deploy-the-project)
    - [Docker deployment](#docker-deployment)
    - [Binary deployment](#binary-deployment)
  - [Step 3: Complete the WeCom configuration](#step-3-complete-the-wecom-configuration)
- [Configuration file](#configuration-file)
- [FAQ](#faq)
- [Thanks](#thanks)
- [Donate](#donate)
- [Contributors](#contributors)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Preface
This project helps you integrate a GPT bot into a WeCom (WeChat Work) app. The default model is `gpt-3.5`; `gpt-4` is supported, and so is Azure OpenAI.
>- `📢 Note`: the deployment and configuration flow is very mature by now, and the docs and issues cover essentially everything, so I no longer answer questions about installing, deploying, or configuring this project. If you are completely lost, you can contact me by **[email](mailto:[email protected])** for paid technical support.
🥳 **Feel free to check out my other open-source projects:**
>
> - [Go-Ldap-Admin](https://github.com/eryajf/go-ldap-admin): 🌉 An openLDAP admin backend built with Go + Vue.
> - [learning-weekly](https://github.com/eryajf/learning-weekly): 📝 A weekly digest of ops technology and the Go ecosystem, plus excellent GitHub projects and other people's hard-won experience.
> - [HowToStartOpenSource](https://github.com/eryajf/HowToStartOpenSource): 🌈 A guide to open-source collaboration and maintenance on GitHub.
> - [read-list](https://github.com/eryajf/read-list): 📖 Curated quality content; reading is the foundation
> - [awesome-github-profile-readme-chinese](https://github.com/eryajf/awesome-github-profile-readme-chinese): 🦩 A collection of excellent Chinese-language GitHub profile pages
🚜 I also created **[awesome-chatgpt-answer](https://github.com/eryajf/awesome-chatgpt-answer)**: a record of those well-asked, brilliantly-answered moments. Feel free to submit the clever exchanges you've had with ChatGPT.
⚗️ OpenAI provides an official **[status page](https://status.openai.com/)** showing the current state of its services; incident announcements are posted there too, so if it feels broken, check that page.
## Features
- [x] 🚀 Help menu: send `帮助` (help) to see the help list
- [x] 🙋 Single-chat mode: every exchange is a new conversation, with no chat-history context
- [x] 🗣 Serial-chat mode: a conversation mode with context understanding
- [x] 🎭 Role play: scenario mode; trigger built-in prompt templates with e.g. `#周报` (weekly report)
- [x] 🧑💻 Rate limiting: configure a custom maximum number of conversations per user per day
- [x] 🔗 Custom API domain: configurable; solves the problem that servers in mainland China cannot reach openai directly
- [x] 🪜 Proxy support: configurable; inject a proxy into the app to work around blocked access from domestic servers
- [x] 👐 Default mode: a configurable default chat mode
- [x] 👹 Whitelist: configure group names and usernames allowed to talk to the bot, keeping the scope under control
- [x] 💂♀️ Admins: configure administrators; some sensitive operations and app settings are admin-only
- [x] ㊙️ Sensitive-word filter: configurable; a question that triggers a sensitive word is rejected, and matches in answers are replaced with 🚫
- [ ] 🎨 Image generation: generate images by sending a message starting with the `#图片` (image) keyword
- [ ] 📝 Conversation lookup: send `#查对话 username:xxx` to query xxx's conversation history, preview it online, or download it
## Prerequisites
* An OpenAI account with an `api_key` created; applying for one is not covered here.
* An app created in the WeCom developer console, which requires the following five settings:
* `corp_id:` enterprise ID
* `agent_id:` app ID
* `agent_secret:` app secret
* `receive_msg_token:` token for receiving messages via the API
* `receive_msg_key:` key for receiving messages via the API
Follow the tutorial below and you are guaranteed to get it configured in one go.
## Tutorial
### Step 1: Create the app
1. [Log in](https://work.weixin.qq.com/wework_admin/frame) to the WeCom admin console.
At this point, clicking "My Company" shows the enterprise ID `corp_id`.
<details>
<summary>🖼 Click to view an example screenshot</summary>
<img src="https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230702_210053.jpg">
</details>
2. Go to Apps and create a new app.
<details>
<summary>🖼 Click to view an example screenshot</summary>
<img src="https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230702_210600.jpg">
</details>
3. After the app is created, you can obtain its `agent_id` and `agent_secret`.
<details>
<summary>🖼 Click to view an example screenshot</summary>
<img src="https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230702_210807.jpg">
</details>
4. Under message receiving, click `设置 API 接收` (Set API reception) to open the callback URL configuration page; click the two "generate randomly" buttons to obtain `receive_msg_token` and `receive_msg_key`.
<details>
<summary>🖼 Click to view an example screenshot</summary>
<img src="https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230702_211253.jpg">
</details>
- `📢 Note:` clicking save at this point will report `openapi回调地址请求不通过` (the openapi callback URL check failed), because WeCom sends a verification request to the service when the configuration is saved. So leave this page as it is for now and go deploy the app; once the app is running, come back and saving the configuration will succeed.
### Step 2: Deploy the project
#### Docker deployment
docker-compose is the recommended way to run this project quickly.
```yaml
version: '3'
services:
  chatgpt:
    container_name: chatgpt
    image: registry.cn-hangzhou.aliyuncs.com/ali_eryajf/chatgpt-wecom
    restart: always
    environment:
      LOG_LEVEL: "info" # application log level: info/debug
      CORP_ID: "" # enterprise ID
      AGENT_ID: "" # app ID
      AGENT_SECRET: "" # app secret
      RECEIVE_MSG_TOKEN: "" # token for receiving messages via the API
      RECEIVE_MSG_KEY: "" # key for receiving messages via the API
      APIKEY: xxxxxx # your api_key
      BASE_URL: "" # leave empty to use the official endpoint https://api.openai.com; set this to route requests to another URL instead, remembering to include the http protocol
      MODEL: "gpt-3.5-turbo" # model to use, default gpt-3.5-turbo; options: "gpt-4-32k-0613", "gpt-4-32k-0314", "gpt-4-32k", "gpt-4-0613", "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0301", "gpt-3.5-turbo"; for gpt-4, make sure your account is whitelisted for the API; if you use Azure, leave this empty or ignore it
      SESSION_TIMEOUT: 600 # session timeout, default 600 seconds; within this window, all messages sent to the bot share context
      MAX_QUESTION_LEN: 2048 # maximum question length, default 4096 tokens; the default is normally fine; with gpt4-8k or gpt4-32k you can raise it to the model's token limit
      MAX_ANSWER_LEN: 2048 # maximum answer length, default 4096 tokens; same note as above
      MAX_TEXT: 4096 # maximum text = question + answer; an API limit, default 4096 tokens; same note as above
      HTTP_PROXY: http://host.docker.internal:15777 # proxy used for requests; empty means no proxy; remember to include the http or socks5 protocol
      DEFAULT_MODE: "单聊" # default chat mode, customizable; if unset, it defaults to 单聊 (single chat), i.e. conversations without context
      MAX_REQUEST: 0 # per-user daily request cap, default 0, i.e. unlimited
      PORT: 8090 # service port, default 8090; no need to change for container deployments, mainly for resolving port conflicts when deploying the binary on a host; can be ignored if run_mode is stream
      SERVICE_URL: "" # the externally reachable address of this service (i.e. the address you configure in the WeCom callback), used so WeCom can render generated images
      # The ALLOW_USERS, DENY_USERS, VIP_USERS and ADMIN_USERS settings below take user userids,
      # e.g. ["1301691029702722","1301691029702733"]; look them up in the WeCom admin contacts page: https://work.weixin.qq.com/wework_admin/frame#contacts
      ALLOW_USERS: "" # users allowed to chat; empty allows all users; to restrict, list userids
      DENY_USERS: "" # users not allowed to chat; empty allows all users (subject to allow_users if configured); list userids to restrict; the blacklist takes precedence over the whitelist
      VIP_USERS: "" # users allowed unlimited chat; empty means admins only (or everyone, if max_request is 0); to lift the limit for specific VIP users (when max_request is non-zero), list their userids
      ADMIN_USERS: "" # administrators of this system; empty means nobody is an admin; to restrict, list userids
      SENSITIVE_WORDS: "" # sensitive words; a question containing one is rejected, and matches in answers are replaced with 🚫
      AZURE_ON: "false" # whether to use the Azure OpenAI API, default false; if true, configure the four settings below
      AZURE_API_VERSION: "" # Azure OpenAI API version, e.g. "2023-03-15-preview"
      AZURE_RESOURCE_NAME: "" # Azure OpenAI resource name, e.g. "openai"
      AZURE_DEPLOYMENT_NAME: "" # Azure OpenAI deployment name, e.g. "openai"
      AZURE_OPENAI_TOKEN: "" # Azure token
      HELP: "Commands:\n\n=================================\n\n🙋 单聊 👉 single chat (no context), the default\n\n🗣 串聊 👉 chat with context\n\n🔃 重置 👉 reset the context chat\n\n🚀 帮助 👉 show this help\n\n=================================\n\n💪 Power By [eryajf/chatgpt-wecom](https://github.com/eryajf/chatgpt-wecom)" # help text, kept in the config so it can be customized (the command keywords users send are Chinese)
    volumes:
      - ./data:/app/data
    ports:
      - "8000:8000"
    extra_hosts:
      - host.docker.internal:host-gateway
```
Start the service:
```sh
$ docker compose up -d
```
#### Binary deployment
If you prefer to deploy directly from the command line, download the appropriate [release tarball](https://github.com/eryajf/chatgpt-wecom/releases) for your OS and architecture, then extract and run it.
After extracting you will see the executable and the config file:
```sh
$ tar xf chatgpt-wecom-v0.0.4-darwin-arm64.tar.gz
$ cd chatgpt-wecom-v0.0.4-darwin-arm64
$ cp config.example.yml config.yml
$ ./chatgpt-wecom # run directly
# to keep it running in the background
$ nohup ./chatgpt-wecom &> run.log &
$ tail -f run.log
```
### Step 3: Complete the WeCom configuration
If the service started normally, go back to the configuration page in the WeCom admin console and try saving again. If the service is healthy, the save should now succeed, and the service logs will show the callback issued by WeCom:
```sh
[GIN] 2023/07/02 - 21:33:53 | 200 | 78.469µs | 113.108.92.100 | GET "/ai/callback?msg_signature=fb23b8490965c74600dcb08b2e8b86d2aff664e4×tamp=1688304833&nonce=1688533588&echostr=I5D3M3C%2Fk7AqBFRkACk8eHZAzt%2Fjx14IKk8wXUpA85xsQ2aU67lxEhgHVudLSrCWEPRFapeQ3EcYbni0Bqj01Q%3D%3D"
```
Now add the IP of the server running the service to the WeCom trusted IP list:
<details>
<summary>🖼 Click to view an example screenshot</summary>
<img src="https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230702_213710.jpg">
</details>
That completes the configuration; enjoy your conversations with the bot.
In WeCom, open the Workbench, find the app we added, and tap it to start chatting.
<details>
<summary>🖼 Click to view an example screenshot</summary>
<img src="https://cdn.staticaly.com/gh/eryajf/tu/main/img/image_20230702_213915.jpg">
</details>
## Configuration file
```yaml
# Application log level, info or debug
log_level: "info"
# openai api_key; if you use Azure, this can be left empty or simply ignored
api_key: "xxxxxxxxx"
# Leave empty to use the official endpoint https://api.openai.com; set this to route requests to another URL, remembering to include the http protocol; if you use Azure, this can be left empty or simply ignored
base_url: ""
# Model to use, default gpt-3.5-turbo; options: "gpt-4-32k-0613", "gpt-4-32k-0314", "gpt-4-32k", "gpt-4-0613", "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0301", "gpt-3.5-turbo"; for gpt-4, make sure your account is whitelisted for the API; if you use Azure, this can be left empty or simply ignored
model: "gpt-3.5-turbo"
# Session timeout, default 600 seconds; within this window, all messages sent to the bot share context
session_timeout: 600
# Maximum question length
max_question_len: 2048
# Maximum answer length
max_answer_len: 2048
# Maximum context text length; usually this can be set equal to the model's token limit
max_text: 4096
# Proxy used for requests; empty means no proxy; remember to include the http or socks5 protocol; if you use Azure, this can be left empty or simply ignored
http_proxy: ""
# Default chat mode, customizable; if unset, it defaults to 单聊 (single chat), i.e. conversations without context
default_mode: "单聊"
# Per-user daily request cap, default 0, i.e. unlimited
max_request: 0
# Service port, default 8090; mainly for resolving port conflicts when deploying the binary on a host; can be ignored if run_mode is stream
port: "8090"
# The externally reachable address of this service (i.e. the address configured in the WeCom callback), used so WeCom can render generated images; recent versions upload images to the WeCom servers, so in theory you can ignore this; can be ignored if run_mode is stream
service_url: "http://xxxxxx"
# The allow_users, deny_users, vip_users and admin_users settings below take user userids; they do not apply in outgoing-bot mode
# e.g. ["1301691029702722","1301691029702733"]; this information comes from the admin contacts page: https://oa.dingtalk.com/contacts.htm#/contacts
# Users allowed to chat; empty allows all users; to restrict, list userids
allow_users: []
# Users not allowed to chat; empty allows all users (subject to allow_users if configured); list userids to restrict; the blacklist takes precedence over the whitelist
deny_users: []
# Users allowed unlimited chat; empty means admins only (or everyone, if max_request is 0)
# To lift the limit for specific VIP users (when max_request is non-zero), list their userids
vip_users: []
# Administrators of this system; empty means nobody is an admin; to restrict, list userids
admin_users: []
# Sensitive words; a question containing one is rejected, and matches in answers are replaced with 🚫
sensitive_words: []
# Help text, kept in the config file so it can be customized (the command keywords users send are Chinese)
help: "Commands:\n\n=================================\n\n🙋 单聊 👉 single chat (no context), the default\n\n🗣 串聊 👉 chat with context\n\n🔃 重置 👉 reset the context chat\n\n🚀 帮助 👉 show this help\n\n=================================\n\n💪 Power By [eryajf/chatgpt-wecom](https://github.com/eryajf/chatgpt-wecom)"
# Azure OpenAI settings
azure_on: false # if true, requests go through the Azure OpenAI API
azure_resource_name: "eryajf" # your resource (custom domain) name
azure_deployment_name: "gpt-35-turbo" # the value that follows /deployments/ in the URL
azure_api_version: "2023-03-15-preview" # the value of the api-version query parameter
azure_openai_token: "xxxxxxx"
```
## FAQ
- WeCom only supports one-on-one chat with an app; adding a bot to a group chat is not supported.
## Thanks
This project could not exist without these open-source projects:
- [go-resty/resty](https://github.com/go-resty/resty)
- [patrickmn/go-cache](https://github.com/patrickmn/go-cache)
- [solywsh/chatgpt](https://github.com/solywsh/chatgpt)
- [gin-gonic/gin](https://github.com/gin-gonic/gin)
- [avast/retry-go](https://github.com/avast/retry-go)
- [sashabaranov/go-openai](https://github.com/sashabaranov/go-openai)
- [charmbracelet/log](https://github.com/charmbracelet/log)
- [xen0n/go-workwx](https://github.com/xen0n/go-workwx)
## Donate
If you find this project helpful, you can [buy the author a coffee ☕️](https://wiki.eryajf.net/reward/)
## Contributors
<div align="center">
<!-- readme: collaborators,contributors -start -->
<!-- readme: collaborators,contributors -end -->
</div>
|
melody413/python_docx_
|
https://github.com/melody413/python_docx_
| null |
.. image:: https://travis-ci.org/python-openxml/python-docx.svg?branch=master
   :target: https://travis-ci.org/python-openxml/python-docx
*python-docx* is a Python library for creating and updating Microsoft Word
(.docx) files.
More information is available in the `python-docx documentation`_.
.. _`python-docx documentation`:
   https://python-docx.readthedocs.org/en/latest/
|
mattleibow/DeviceRunners
|
https://github.com/mattleibow/DeviceRunners
|
A set of device runners for various testing frameworks.
|
# Test Device Runners
A set of device runners for various testing frameworks.
The current platforms are:
- Android
- iOS
- macOS (using Mac Catalyst)
- Windows (using WinUI 3)
The current testing frameworks supported are:
- Xunit
- Visual device runner
- XHarness (CI) device runner
- NUnit
- Visual device runner
## Testing with the Visual Runner
Testing using the visual runner is just a matter of running the test app like any other app. This can be done via the CLI or in the IDE.
More information can be found in the wiki: [Visual Runner in the IDE](https://github.com/mattleibow/DeviceRunners/wiki/Visual-Runner-in-the-IDE)
## Testing with the CLI
Tests can also be run from the CLI - both locally and on CI. For tests on Android, iOS and Mac Catalyst, there is the XHarness tool. For Windows, all we need is PowerShell.
More information can be found in the wiki:
* [Using XHarness](https://github.com/mattleibow/DeviceRunners/wiki/Using-XHarness)
* [iOS - XHarness](https://github.com/mattleibow/DeviceRunners/wiki/CLI-Device-Runner-for-iOS-using-XHarness)
* [Android - XHarness](https://github.com/mattleibow/DeviceRunners/wiki/CLI-Device-Runner-for-Android-using-XHarness)
* [Mac Catalyst - XHarness](https://github.com/mattleibow/DeviceRunners/wiki/CLI-Device-Runner-for-Mac-Catalyst-using-XHarness)
* [Windows - PowerShell](https://github.com/mattleibow/DeviceRunners/wiki/CLI-Device-Runner-for-Windows-using-PowerShell)
## UI Testing Support
More information can be found in the wiki: [UI Tests](https://github.com/mattleibow/DeviceRunners/wiki/UI-Tests)
## Credits
This repository contains revised code from a few places:
- https://github.com/xunit/devices.xunit
This code was migrated to use .NET MAUI and then split into components so that the UI can be separate from the test runner.
- https://github.com/xunit/uitest.xunit
This code was migrated to use .NET MAUI.
- https://github.com/nunit/nunit.xamarin
This code was migrated to use .NET MAUI and then features were added to support running individual tests.
- https://github.com/dotnet/maui
This was the home for a short while during the migration.
|
ruffjs/tio
|
https://github.com/ruffjs/tio
|
a tiny iothub, simple and useful. A lightweight iothub that uses the classic thing model to simplify IoT application development
|
# tio
[](https://github.com/ruffjs/tio/actions?query=workflow:test)
[](https://github.com/ruffjs/tio/releases)
[](https://ruffjs.github.io/tio/)
[![license][license]](LICENSE)
[中文](README.md) | [English](README_en.md)
`tio` is a lightweight implementation of an IoT hub core.
>`t` stands for **tiny**, and `io` for its job of providing a communication channel between devices and the server side; you can also read the name as "iothub".
## Why This Project Exists
In our real-world projects and products we have used, and come to know deeply, AWS IoT, Azure IoT Hub, the Alibaba Cloud IoT Platform, and others. Along the way we ran into the following scenarios and problems:
- **On-premise deployment**: deploying into an environment designated by the customer, where using a public cloud is not an option
- **IoT SIM card restrictions**: some carriers only support IP whitelists, but public cloud services usually sit behind multiple IPs, different regions may resolve to different IPs, and the IPs change over time
- **Public cloud vendor lock-in**: switching to a different public cloud vendor's IoT platform carries a substantial cost
- **Low-cost operation**: in early business experiments or small-scale projects, customers are exploring and care a lot about cost control, while public cloud platform services usually charge a base monthly fee once enabled
When we went looking for an "IoT hub" that supports on-premise deployment, we could not find a project with that positioning in the open-source community; most candidates take the form of an "IoT platform". Yet many scenarios have no use for a platform's many extra features, nor any appetite for its heavy implementation, deployment, and operations burden, and each platform's abstractions differ quite a bit from the "classic" designs of the public cloud vendors.
So we wondered: could we build a lightweight, practical "IoT hub" on top of the core abstractions and designs that the major cloud vendors have converged on through years of practice and mutual borrowing? A complete IoT hub is a huge thing (see any vendor's product docs), but its essential core is the abstraction and design of the "thing model" (device model), and within the thing model the Shadow (device shadow, or device twin) is a core abstraction that is in fact nearly identical everywhere.
On that basis, tio was born on 2022-09-07. Used across several projects and products, it has genuinely solved these problems. Figuring that others with similar needs might want such a thing too, we open-sourced it in full.
## Key Features
- **Lightweight**: the minimal deployment is a single binary compiled from Go (ideal for development, testing, and fleets with few devices); you can also plug in different databases and message middleware for better performance as needed
- **Simple**: focused on the core functionality of an IoT hub; it does not try to do everything, staying simple and stable. A web debugging/admin console is included to make debugging and learning the tio API easier
- **Practical**: simplifies interaction with IoT devices and its implementation; in particular, the `device shadow` abstraction simplifies server-device interaction
- **Production-ready**: already used in multiple projects and products in production environments
- **In the lineage of mainstream public cloud IoT hubs**: deeply informed by the design abstractions of the mainstream public cloud vendors, so it stands up to scrutiny; if you already know that material, your knowledge transfers, and in mixed deployments (on-premise plus a public cloud IoT hub) the integration patterns are very similar, so existing flows and code are barely affected
## Main Components
- Thing: basic device management, e.g. CRUD and authentication/authorization
- Connector: the device connection layer (currently mainly MQTT brokers), with a built-in MQTT broker and an EMQX integration
- Shadow: the device shadow, similar to the [AWS IoT Shadow](https://docs.aws.amazon.com/iot/latest/developerguide/device-shadow-document.html), [Azure Device Twin](https://learn.microsoft.com/zh-cn/azure/iot-hub/iot-hub-devguide-device-twins), and [Alibaba Cloud Device Shadow](https://help.aliyun.com/document_detail/53930.html). Every major public cloud vendor has a device shadow abstraction (under different names) with highly consistent semantics; in our project work it has proven an extremely useful tool, greatly reducing the complexity and mental load of interaction between upper-layer business systems and devices
- Direct Method: server-initiated method calls on a device, using a request-response pattern much like an HTTP request; inspired by the design of [Azure Direct methods](https://learn.microsoft.com/zh-cn/azure/iot-hub/iot-hub-devguide-direct-methods)
Shadow:
```
Thing app Back end
┌───────────────────────┐
│ Shadow │
│ ┌─────────────────┐ │
│ │ Tags ├──┼─────── Read,write
│ └─────────────────┘ │
│ ┌─────────────────┐ │
│ │ States │ │
│ │ ┌──────────┐ │ │
Read,receive ─────┼──┼───┤ Desired ├──┼──┼─────── Read,write
change notifications │ │ └──────────┘ │ │ change notifications
│ │ ┌──────────┐ │ │
Read,write ─────┼──┼───┤ Reported ├──┼──┼─────── Read
│ │ └──────────┘ │ │ change notifications
│ └─────────────────┘ │
└───────────────────────┘
```
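For intuition, a shadow document in the AWS-style shape referenced above might look like the sketch below (the `state.desired` / `state.reported` split follows the AWS shadow document format linked earlier; the `lampOn` field and `tags` content are purely illustrative, and tio's exact schema may differ):
```json
{
  "state": {
    "desired":  { "lampOn": true },
    "reported": { "lampOn": false }
  },
  "tags": { "zone": "street-lamp-7" }
}
```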
## Supported connectivity layers (connectors)
### Built-in MQTT Broker
By default tio runs a built-in MQTT Broker based on [github.com/mochi-co/mqtt](https://github.com/mochi-co/mqtt). This is very handy for testing, development, and scenarios that call for a lightweight setup; a device-side connection sketch follows the feature list below.
- Supports MQTT v3.1.1 and v5.0
- Supports MQTT over Websocket
- Supports SSL/TLS (for both TCP and Websocket)
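As an illustration only, here is a minimal device-side client sketch using the eclipse/paho.mqtt.golang library. The broker address, credentials, and especially the topic are assumptions made up for this sketch; tio's real topic layout is defined by the project (the light demo under `demos/` shows the actual integration):
```go
package main

import (
	"fmt"
	"log"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Address and credentials are assumptions; check config.yaml for the
	// broker your tio instance actually exposes.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://127.0.0.1:1883").
		SetClientID("device-001").
		SetUsername("device-001").
		SetPassword("secret")

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// Placeholder topic, not tio's real topic layout; see demos/light for
	// the topics a real device should use.
	topic := "placeholder/shadow/update/reply"
	if token := client.Subscribe(topic, 1, func(_ mqtt.Client, m mqtt.Message) {
		fmt.Printf("message on %s: %s\n", m.Topic(), m.Payload())
	}); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	select {} // keep the client alive to receive messages
}
```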
### EMQX MQTT Broker
[EMQX](https://github.com/emqx/emqx) is an excellent, easy-to-use MQTT broker.
tio integrates its `v5` version to provide richer functionality and better performance (horizontal scaling).
## Supported databases
- MySQL: for production environments
- sqlite3: for testing, development, or lightweight use. When configured as `":memory:"`, sqlite3 even supports an in-memory mode, which is convenient for tests. See `config.yaml` for the corresponding setting; a minimal sketch of this mode follows the list
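For illustration, a minimal sketch of the in-memory mode, assuming the widely used `mattn/go-sqlite3` driver (consistent with the CGO_ENABLED=1 note in the build section below, though tio's actual driver choice is not confirmed here):
```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // CGO-based driver, hence CGO_ENABLED=1 when building
)

func main() {
	// ":memory:" gives a throwaway in-memory database, handy for tests.
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE things (id TEXT PRIMARY KEY)`); err != nil {
		log.Fatal(err)
	}
	log.Println("in-memory sqlite ready")
}
```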
## Running
- Check that the settings in `config.yaml` match your needs
- Run `cd web && yarn && yarn build && cd - && go run cmd/tio/main.go`
- Open [http://127.0.0.1:9000](http://127.0.0.1:9000) for the debugging/admin console
- Open [http://127.0.0.1:9000/docs](http://127.0.0.1:9000/docs) for the API documentation
## Build
```bash
# build the web admin console
cd web && yarn && yarn build
# build the main go program
# CGO_ENABLED=1 is needed for sqlite3; drop it if you do not use sqlite.
CGO_ENABLED=1 go build -o tio cmd/tio/main.go
# run
./tio
```
Build the Docker image
```bash
bash build/docker/build.sh
```
Build a deb package for Debian-based Linux distributions
```bash
# the deb package ends up in the ./dist directory
bash build/deb/build.sh
```
## Development
### Enable Git hooks
```bash
chmod +x ./githooks/*
git config core.hooksPath githooks
```
### Code directory layout
```bash
.
├── api          # api configuration, swagger configuration, etc.
├── auth         # device authentication
├── shadow       # the core of tio: definitions and implementations of shadow and direct method (the messaging parts live in connector)
├── thing        # basic CRUD for things
├── ntp          # ntp service for devices
├── connector    # connector implementations
│   └── mqtt
│       ├── embed   # built-in MQTT Broker
│       └── emqx    # EMQX MQTT Broker integration
├── cmd          # main entry point
│   └── tio
├── web          # debugging/admin console
├── config       # application configuration
├── db           # db configuration
│   ├── mysql
│   └── sqlite
├── demos
│   └── light    # a street-lamp control demo showing device-side and server-side integration with tio
│       ├── README.md
│       ├── device
│       └── server
├── build        # build scripts and configuration
│   ├── deb      # deb packaging used on Debian-like systems
│   └── docker
├── githooks     # githooks for code style and commit conventions
└── pkg          # business-agnostic utility libraries
```
### Integration example
See the [Light Demo](demos/light/README.md) for fairly complete code examples on both the [device side](./demos/light/device/) and the [server side](./demos/light/server/)
### Tech stack
golang + sqlite/mysql + built-in MQTT broker/emqx
Front end (debugging/admin console): vue3 + element-plus
## License
[MIT](LICENSE)
[license]: https://img.shields.io/badge/license-MIT-blue.svg
|
smol-ai/talk
|
https://github.com/smol-ai/talk
| null |
# Smol Talk
## Features
- [Next.js](https://nextjs.org) App Router
- React Server Components (RSCs), Suspense, and Server Actions
- [Vercel AI SDK](https://sdk.vercel.ai/docs) for streaming chat UI
- Support for OpenAI (default), Anthropic, Hugging Face, or custom AI chat models and/or LangChain
- Edge runtime-ready
- [shadcn/ui](https://ui.shadcn.com)
- Styling with [Tailwind CSS](https://tailwindcss.com)
- [Radix UI](https://radix-ui.com) for headless component primitives
- Icons from [Phosphor Icons](https://phosphoricons.com)
- Chat History with [Supabase Postgres DB](https://supabase.com)
- [Supabase Auth](https://supabase.com/auth) for authentication
## Model Providers
This template ships with OpenAI `gpt-3.5-turbo` as the default. However, thanks to the [Vercel AI SDK](https://sdk.vercel.ai/docs), you can switch the LLM provider to [Anthropic](https://anthropic.com) or [Hugging Face](https://huggingface.co), or use [LangChain](https://js.langchain.com), with just a few lines of code.
<!-- ## Deploy Your Own
You can deploy your own version of the Next.js AI Chatbot to Vercel with one click:
TODO: update button with supabase integration
[](https://vercel.com/new/clone?demo-title=Next.js+Chat&demo-description=A+full-featured%2C+hackable+Next.js+AI+chatbot+built+by+Vercel+Labs&demo-url=https%3A%2F%2Fchat.vercel.ai%2F&demo-image=%2F%2Fimages.ctfassets.net%2Fe5382hct74si%2F4aVPvWuTmBvzM5cEdRdqeW%2F4234f9baf160f68ffb385a43c3527645%2FCleanShot_2023-06-16_at_17.09.21.png&project-name=Next.js+Chat&repository-name=nextjs-chat&repository-url=https%3A%2F%2Fgithub.com%2Fvercel-labs%2Fai-chatbot&from=templates&skippable-integrations=1&env=OPENAI_API_KEY%2CAUTH_GITHUB_ID%2CAUTH_GITHUB_SECRET&envDescription=How+to+get+these+env+vars&envLink=https%3A%2F%2Fgithub.com%2Fvercel-labs%2Fai-chatbot%2Fblob%2Fmain%2F.env.example&teamCreateStatus=hidden&stores=[{"type":"kv"}]) -->
## Running locally
You will need to use the environment variables [defined in `.env.example`](.env.example) to run Next.js AI Chatbot. It's recommended you use [Vercel Environment Variables](https://vercel.com/docs/concepts/projects/environment-variables) for this, but a `.env` file is all that is necessary.
> Note: You should not commit your `.env` file or it will expose secrets that will allow others to control access to your various OpenAI and authentication provider accounts.
Copy the `.env.example` file and populate the required env vars:
```bash
cp .env.example .env
```
[Install the Supabase CLI](https://supabase.com/docs/guides/cli) and start the local Supabase stack:
```bash
npm install supabase --save-dev
npx supabase start
```
Install the local dependencies and start dev mode:
```bash
pnpm install
pnpm dev
```
Your app template should now be running on [localhost:3000](http://localhost:3000/).
## Authors
This library was created by [Vercel](https://vercel.com) and [Next.js](https://nextjs.org) team members, with contributions from:
- Jared Palmer ([@jaredpalmer](https://twitter.com/jaredpalmer)) - [Vercel](https://vercel.com)
- Shu Ding ([@shuding\_](https://twitter.com/shuding_)) - [Vercel](https://vercel.com)
- shadcn ([@shadcn](https://twitter.com/shadcn)) - [Contractor](https://shadcn.com)
- Thor Schaeff ([@thorwebdev](https://twitter.com/thorwebdev)) - [Supabaseifier](https://thor.bio)
|