full_name
stringlengths
10
67
url
stringlengths
29
86
description
stringlengths
3
347
readme
stringlengths
0
162k
stars
int64
10
3.1k
forks
int64
0
1.51k
Mehdi-H/WeeklyCuration
https://github.com/Mehdi-H/WeeklyCuration
Interesting links I saw, every week
# WeeklyCuration Interesting links I saw, every week It's not necessarily articles published this week, it can also be articles I discovered this week 👀 Feel free to propose content to the newsletter - by proposing a pull request on the [NEXT_WEEK.md file](./NEXT_WEEK.md) - or [by creating an issue](https://github.com/Mehdi-H/WeeklyCuration/issues/new) ## Legend - 📝 : Blog post/article, or slide deck - 🚀 : Release note - 🧰 : Technical or methodological tool to add to my toolbox - 🗓️ : An event/meetup/conference I spotted - 📽️ : Video content, VOD of an event/meetup/conference - 📚 : About a book I discovered - 🐦 : A Tweet - 🎙️ : A podcast series or episode --- ## 24 Jul. 2023 ### AI 🤖 - 🐦 [kNN using a gzip-based distance metric outperforms BERT and other neural methods for OOD sentence classification (Riley Goodside)](https://twitter.com/goodside/status/1679358632431853568) | #Gzip #BERT #Benchmark #TextClassification - 📝 [FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy (Washington Post)](https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/) | #OpenAPI #ChatGPT #ConsumerProtectionLaws #Industry #SlidesDeck - 📝 [Large Language Models: From Prototype to Production (Ines Montani, EuroPython2023 keynote)](https://speakerdeck.com/inesmontani/large-language-models-from-prototype-to-production-europython-keynote) | #LLM #NLP #NER #Spacy #Prodigy - 📝 [Llama 2: Open Foundation and Fine-Tuned Chat Models (paper - Meta AI)](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) | #R&D #Paper - 📝 [Llama 2: Open source, free for research and commercial use (website - Meta AI)](https://ai.meta.com/llama/) | # - 📝 [Llama 2: Statement of Support for Meta’s Open Approach to Today’s AI](https://about.fb.com/news/2023/07/llama-2-statement-of-support/) | # - 📽️ [Compliant Mechanisms that learn - Mechanical Neural Network Architected Materials](https://www.youtube.com/watch?v=_CwUuyN6NTE&t=3s) | #R&D #Physics #NeuralNetworks #ArchitectedMaterial ### Architecture 📐 - 📝 [How platform teams get stuff done (Pete Hodgson, martinfowler.com)](https://martinfowler.com/articles/platform-teams-stuff-done.html) | #Platform #Productivity #Adoption #Collaboration #TeamTopologies #Conway'sLaw - 📝 [RedisGraph End-of-Life Announcement](https://redis.com/blog/redisgraph-eol/) | #Redis #RedisGraph #Sunset - 📽️ [[Live coding] C4 Models as Code • Simon Brown • YOW! 2022](https://www.youtube.com/watch?v=4aiAkUm7rzQ) | #DiagramAsCode #C4Model #LivingDocumentation #C4PlantUML #Structurizr ### DDD 📘 - 🎙️ [🇫🇷 Introduction à Domain-Driven Design avec Nelson da Costa (Podcast Café Craft)](https://podcasts-francais.fr/podcast/cafe-craft/episode-4-domain-driven-design-avec-nelson-da-cost) | #DDD #Introduction - 🐦 [DDDEurope2024’s call for proposal is open](https://twitter.com/ddd_eu/status/1681658780772122624) | #Conference #CFP #SoftwareModelling ### Data Mesh 🥅 - 🧰 [datamesh-architecture.com : why, what, and how of Data Mesh with examples](https://www.datamesh-architecture.com/#tech-stacks) | #Toolbox #DataProduct #TechStack #DataContract ### DevOps 🛠️ - 📽️ [🇫🇷 Laissez tomber vos Dockerfile, adoptez un buildpack ! 
(Julien Wittouck, Sunny Tech 2023)](https://www.youtube.com/watch?v=2Zo34sXsMxU) | #Conference #Docker #Buildpack #Pack #Distroless #SBOM #Paketo - 📽️ [🇫🇷 Suivez vos applications à la trace grâce à OpenTelemetry (Julien Tanguy, Sunny Tech 2022)](https://www.youtube.com/watch?v=NXYAtkEm_hk) | #Conference #OpenTelemetry #OTLP #LiveDemo - 🚀 [Terraform 1.6.0-alpha available soon, “test” command not experimental anymore](https://github.com/hashicorp/terraform/releases/tag/v1.6.0-alpha20230719) | #Terraform - 🧰 [awesome-cloud-native](https://github.com/rootsongjc/awesome-cloud-native) | #CloudNative - 🧰 [📖 trimstray/the-book-of-secret-knowledge](https://github.com/trimstray/the-book-of-secret-knowledge) | #Bible #AdminSys #Network #DevOps #PenTest #Shell #Hack ### Living Documentation 📖💗 - 📚 [Living Documentation (Cyrille Martraire)](https://www.goodreads.com/book/show/34927405-living-documentation) | #LivingDocumentation #KnowledgeAugmentation #EvergreenDoc - 📝 [JSONSchema](https://json-schema.org/) | #DataDocumentation #DataContract #DataValidation - 📝 [coveooss/json-schema-for-humans](https://github.com/coveooss/json-schema-for-humans) | #JSONSchemaToHTML #JSONSchemaToMarkdown - 🧰 [Self-Documented Makefile (François Zaninotto)](https://marmelab.com/blog/2016/02/29/auto-documented-makefile.html) | #Automation #LivingDocumentation #GNUMake #RecList #DeveloperExperience #Shell ### Management 👔 - 📽️ [Work Anywhere: Managing Remote Engineering Teams at Airbnb (Jessica Tai • YOW! 2022)](https://www.youtube.com/watch?v=7cPOa5FX_Rw&t=1138s) | #FullRemote #WorkFromAnywhere #MultipleTimezones #RemoteManager #DesignDocs #RFCs #NoAgendaNoMeeting - 🧰 [Gitlab’s Objectives and Key Results (OKRs) handbook](https://about.gitlab.com/company/okrs/) | #OKR ### Python 🐍 - 📝 [10 Best Practices for Logging in Python (BetterStack)](https://betterstack.com/community/guides/logging/python/python-logging-best-practices/) | #Logging #LoggingConfig #Loguru #StructuredLogging #python-json-logger - 📝 [Asyncio Evolved: Enhanced Exception Handling with TaskGroup in Python 3.11(Junya Fukuda, EuroPython 2023)](https://speakerdeck.com/jrfk/asyncio-evolved-enhanced-exception-handling-with-taskgroup-in-python-3-dot-11-europython-2023) | #SlidesDeck #AynscIO #TaskGroup - 📝 [PEP 710 (draft) – Recording the provenance of installed packages](https://peps.python.org/pep-0710/) | #PEP #SBOM #Auditability #Security - 📝 [🔵 Blue : a somewhat less uncompromising code formatter than ⚫ Black, the OG of Python formatters](https://github.com/grantjenks/blue) | #Style #Lint - 🚀 [Cython 3.0.0 is out](https://cython.readthedocs.io/en/latest/src/changes.html#major-themes-in-3-0-0) | #C #Performance #LowLevel ### Security☣️ - 📝 [Security Developer-in-Residence – Weekly Report #2 (Seth Larson) - On the importance of having a SBOM (Software Bill of Materials)](https://sethmlarson.dev/security-developer-in-residence-weekly-report-2) | #PEP #PEP710 #SBOM #Security - 🧰 [Explaining JSON Web Token (JWT) to a 10 year old Kid (ByteByteGo)](https://blog.bytebytego.com/p/ep69-explaining-json-web-token-jwt#%C2%A7explaining-json-web-token-jwt-to-a-year-old-kid) | #JWT #Infographics ### Software Engineering ⚙️ - 📝 [Object Calisthenics (William Durand, 2013)](https://williamdurand.fr/2013/06/03/object-calisthenics/) | #SOLID #CleanCode #CodeReadability #SlidesDeck - 🧰 [Egoless Crafting: Practicing Software Craftsmanship with Ego-Resiliency](https://egolesscrafting.org/) | #SoftSkills #EgolessProgramming #Manifesto ## 17 Jul. 
2023 ### AI 🤖 - 🐦 [François Chollet - Introducing Keras Core: Keras for TensorFlow, JAX, and PyTorch](https://twitter.com/fchollet/status/1678777783848419330) | #Keras #TensorFlow #JAX #PyTorch - 📝 [Actors say Hollywood studios want their AI replicas — for free, forever (2023’s strike)](https://www.theverge.com/2023/7/13/23794224/sag-aftra-actors-strike-ai-image-rights) | #Trivia🎈 #Industry #AI #Copyright #ActorsStrike - 📝 [How Alexa learned to speak with an Irish accent](https://www.amazon.science/blog/how-alexa-learned-to-speak-with-an-irish-accent) | #Trivia🎈 #ChatBot🤖🗣️ #TextToSpeech - 📝 [Skyrim Mod Powered by ChatGPT Gives NPCs Memories](https://opendatascience.com/skyrim-mod-powered-by-chatgpt-gives-npcs-memories/) | #Trivia🎈 #VideoGames #ChatGPT - 📝 [State of Computer Vision 2023 (Sebastian Raschka)](https://magazine.sebastianraschka.com/p/ahead-of-ai-10-state-of-computer) | #StateOfTheArt #ComputerVision #LLM #Transformers #GenerativeAI #Attention #DiffusionModels - 📽️ [Andrej Karpathy’s state of GPT (@Microsoft Build 2023)](https://www.youtube.com/watch?v=bZQun8Y4L2A) | #ChatGPT #LLM #Training #DataCollection #LLama🦙 ### Architecture 📐 - 📝 [Cloudflare is moving away from Nginx (2022)](https://rodneyosodo.medium.com/cloudflare-is-moving-away-from-nginx-248831c3b22) | #Network #Nginx #Pingora #Rust #Cloudflare - 📝 [PostgreSQL: No More VACUUM, No More Bloat (Alexander Korotkov)](https://www.orioledata.com/blog/no-more-vacuum-in-postgresql/) | #Database #PostgreSQL #LowLevel - 📽️ [Fabulous Fortunes, Fewer Failures, and Faster Fixes from Functional Fundamentals - Scott Havens (DOES2019 Las Vegas)](https://www.youtube.com/watch?v=FskIb9SariI&t=1s) | #Kakfa #Production #EventSourcing #ConferenceTalk #FunctionalProgrammingλ - 🧰 [Software architecture hype cycle (Milan Milanovic)](https://www.linkedin.com/posts/milanmilanovic_technology-softwareengineering-programming-activity-7084818676960440320-f949) | #SoftwareEngineering #CQRS #Serverless #Microservices #Adopt - 🧰 [Tech blogs & talks from (at least) 30 companies that run Kafka in production](https://github.com/dttung2905/kafka-in-production) | #Kafka #Industry #Production ### Blockchain ⛓️ - 🐦 [Introducing Polygon 2.0 and transition from MATIC to POL](https://twitter.com/LayerE_Intern/status/1679434845577961472) | #Blockchain #Polygon #Token #Governance ### Cloud ☁️ - 📝 [Announcing DynamoDB local version 2.0](https://aws.amazon.com/fr/about-aws/whats-new/2023/07/dynamodb-local-version-2-0/) | #AWS - 📝 [Azure AD is becoming Microsoft Entra ID](https://azure.microsoft.com/en-us/updates/azure-ad-is-becoming-microsoft-entra-id/) | #Azure - 📝 [Lessons learned - Discontinuation of InfluxDB Cloud in AWS Sydney and GCP Belgium](https://www.influxdata.com/blog/update-from-influxdata-paul-dix-july-10/) | #MeaCulpa #PostMortem #InfluxDB #DataLoss - 📝 [Understanding AWS Lambda proactive initialization (Aaron Stuyvenberg)](https://aaronstuyvenberg.com/posts/understanding-proactive-initialization) | #AWS #Lambda #ColdStart #WarmUp ### Data Mesh 🥅 - 📝 [PayPal open sources its data contract template](https://jgp.ai/2023/05/01/paypal-open-sources-its-data-contract-template/) | #DataQuality #Contract #Schema - 🧰 [paypal/data-contract-template - Template for a data contract used in a data mesh](https://github.com/paypal/data-contract-template) | #DataQuality #Contract #Schema #YAML ### DevOps 🛠️ - 📝 [The rise of open standards in observability: highlights from 
KubeCon](https://www.cncf.io/blog/2023/07/10/the-rise-of-open-standards-in-observability-highlights-from-kubecon/) | #CNCF #OpenTelemetry #OpenCensus #Prometheus #KubeCon - 🚀 [Keycloak 22.0.0 is out ](https://www.keycloak.org/2023/07/keycloak-2200-released.html) - 🚀 [OpenTelemetry Protocol (OTLP) version 1.0 is out (Dotan Horovits)](https://twitter.com/horovits/status/1675946183032729622) | #OpenTelemetry #1.0.0 - 🚀 [docker compose 2.20.0 is out](https://github.com/docker/compose/releases/tag/v2.20.0) ### Functional programming λ - 📝 [Love Letter To Clojure (Part 1) (Gene Kim, 2019)](https://itrevolution.com/articles/love-letter-to-clojure-part-1/) | #Clojure #FunctionalProgrammingλ #LISP - 🧰 [F# for Fun and Profit](https://fsharpforfunandprofit.com/) | #F# #LearningResource ### Product Management 📦 - 🧰 [Misleading roadmap | Honest roadmap | Strategic roadmap](https://twitter.com/carlvellotti/status/1679530059345055751) | #Roadmap #Linearity #Strategy #Agility ### Python 🐍 - 🐦 [Meta commits to dedicate three engineer-years to implement the removal of the GIL from Python](https://twitter.com/llanga/status/1677648534563086338) | #SoftwareEngineering #LowLevel #Performance - 🗓️ [Airflow summit 2023 will take place on September 19th to 21st](https://airflowsummit.org/sessions/2023/) | #Airflow #Conference #DataMesh - 🚀 [Conda’s dependency solver switching to libmamba this month](https://conda.org/blog/2023-07-05-conda-libmamba-solver-rollout/) | #Conda #Anaconda #Miniconda #Mamba #Performance - 🚀 [Great Expectations 0.17.5 is out](https://docs.greatexpectations.io/docs/changelog/#0175) | #SoftwareEngineering #DataQuality #open-source - 🚀 [Uvicorn 0.23.0 is out](https://github.com/encode/uvicorn/releases/tag/0.23.0) | #WebServer #ASGI ### QuantumComputing ⚛️ - 📝 [Google Claims Latest Quantum Experiment Would Take Decades on Classical Computer](https://thequantuminsider.com/2023/07/04/google-claims-latest-quantum-experiment-would-take-decades-on-classical-computer/) | #Trivia🎈 #Industry #R&D ### Security☣️ - 📝 [PyLoose: Python-based fileless malware targets cloud workloads to deliver cryptominer](https://www.wiz.io/blog/pyloose-first-python-based-fileless-attack-on-cloud-workloads) | #Python #Malware #memfd - 📝 [The massive bug at the heart of the npm ecosystem](https://blog.vlt.sh/blog/the-massive-hole-in-the-npm-ecosystem) | #NPM #NodeJS #Security ### Software Engineering ⚙️ - 📝 [How Google Measures and Manages Tech Debt (Abi Noda)](https://newsletter.abinoda.com/p/measuring-and-managing-tech-debt) | #TechnicalDebt #Productivity #CodeQuality #SystemsThinking #MaturityModel - 🗓️ [DDD Europe 2024 will happen on May 27-31 2024 in Amsterdam](https://twitter.com/ddd_eu/status/1667449494294740998) | #Conference #DDD ### Web Development 🧑‍💻 - 📝 [➰ Understanding SVG Paths](https://www.nan.fyi/svg-paths) | #Animation, #Bezier, #Cursor, #Demo, #Line, #SVG - Contributed by [@rfrenoy](https://github.com/rfrenoy) --- ## 10 Jul. 
2023 ### AI 🤖 - 📝 [AI Could Change How Blind People See the World](https://www.wired.com/story/ai-gpt4-could-change-how-blind-people-see-the-world/) | #R&D #GPT-4 - 📝 [Introducing English as the New Programming Language for Apache Spark](https://www.databricks.com/blog/introducing-english-new-programming-language-apache-spark) | #Spark #Databricks #AI #Data - 📝 [The Rise of Applied AI Engineers and the Shift in AI Skillsets](https://softlandia.fi/en/blog/the-rise-of-applied-ai-engineers-and-the-shift-in-ai-skillsets) | #MLOps #AI #DataScience #Software Engineering - 📝 [Urtopia Unveils the World's First Smart E-Bike with ChatGPT Integration at EUROBIKE 2023](https://newurtopia.de/en/blogs/blog/smart-e-bike-with-chatgpt-urtopia-eurobike2023) | #Trivia 🎈 #ChatGPT ### Architecture 📐 - 📝 [AWS SQS, SNS, Kinesis, EventBridge : How to choose ?](https://dev.to/onepoint/aws-sqs-sns-kinesis-eventbridge-how-to-choose--32l7) | #AWS #SQS #SNS #Kinesis #EventBridge #Queue #Messaging - 📝 [Implementing AWS Well-Architected best practices for Amazon SQS – Part 1](https://aws.amazon.com/fr/blogs/compute/implementing-aws-well-architected-best-practices-for-amazon-sqs-part-1/) | #AWS #event-driven #Cloud #SQS #Queue - 📝 [Implementing AWS Well-Architected best practices for Amazon SQS – Part 2](https://aws.amazon.com/fr/blogs/compute/implementing-aws-well-architected-best-practices-for-amazon-sqs-part-2/) | #AWS #event-driven #Cloud #SQS #Queue - 📝 [Implementing AWS Well-Architected best practices for Amazon SQS – Part 3](https://aws.amazon.com/fr/blogs/compute/implementing-aws-well-architected-best-practices-for-amazon-sqs-part-3/) | #AWS #event-driven #Cloud #SQS #Queue ### Cloud ☁️ - 📝 [Microsoft Azure generated 34b in revenue in FY22, about half of the revenue of AWS](https://www.bigtechwire.com/2023/06/30/microsoft-azure-generated-34b-in-revenue-in-fy22-about-half-of-the-revenue-of-aws/) | #Trivia 🎈 #Industry ### DDD 📘 - 📝 [Balancing Coupling in Software Design (Vladik Khononov)](https://speakerdeck.com/vladikk/balancing-coupling-in-software-design-kandddinsky-2022) | #DDDEurope2023 #KanDDDinsky2022 #Software Engineering - 📝 [Retour sur la conférence EventSourcing Live @ DDD Europe 2023 (Mehdi Houacine, Sofia Calcagno)](https://www.linkedin.com/feed/update/urn:li:activity:7081697211239026690/) | #Conference #event-driven #Architecture - 📝 [Systems thinking in large-scale modeling (Xin Yao)](https://speakerdeck.com/xinyao/dddeu2023-keynote-systems-thinking-in-large-scale-modeling) | #DDDEurope2023 #OOP23Munich #Methodology #FeedbackLoop #SystemsThinking - 🧰 [Wardley Mapping templates (Tangible concepts)](https://tangible-concepts.de/wardley-mapping-templates) | #Toolbox #DDD #Wardley map ### Data Mesh 🥅 - 📝 [Ecosystem of Data Products > Centralized Data Platform](https://www.linkedin.com/posts/ryan-donnally_datamesh-activity-7064595412061446144-YH8N/) | #Data #Governance #Architecture - 🧰 [Jacek Majchrzak’s Data Bazaar Workshop](https://twitter.com/JacekMajchrzak_/status/1413069380037005313) | #Methodology #Toolbox ### Database 🧫 - 📚 [PostgreSQL 14 internals - Edgar Gorov’s free book to deep dive into the server mechanics](https://postgrespro.com/community/books/internals) | #PostgreSQL #Low-level #Performance #Architecture - 📝 [JunoDB: PayPal’s Key-Value Store Goes Open-Source](https://medium.com/paypal-tech/unlocking-the-power-of-junodb-paypals-key-value-store-goes-open-source-ee85f935bdc1) | #open-source #KV store #NoSQL ### DevOps 🛠️ - 📝 [2023 SRE Report 
(CatchPoint)](https://www.catchpoint.com/asset/2023-sre-report) | #SRE #DevOps #AIOps - 📝 [8 Terraform continuous validation use cases for AWS, Google Cloud, and Azure](https://www.hashicorp.com/blog/8-terraform-continuous-validation-use-cases-for-aws-google-cloud-and-azure) | #Terraform #Cloud #AWS #Azure #GCP - 📽️ [Replay of HashiDays 2023](https://www.youtube.com/playlist?list=PL81sUbsFNc5YhxNu2De8BWl_1tmEVmLRJ) | #Terraform #Security #Cloud #Conference ### FinOps 💸 - 📝 [How Canva saves millions annually in Amazon S3 costs](https://www.canva.dev/blog/engineering/optimising-s3-savings/) | #Cloud #AWS #S3 #FinOps - 🧰 [FinOps Principles (FinOps Foundation)](https://www.finops.org/framework/principles/) | #FinOps #Methodology #Cloud ### MLOps 🧠⚙️ - 📝 [Building LLM applications for production (Chip Huyen’s blog)](https://huyenchip.com/2023/04/11/llm-engineering.html) | #LLM #MLOps - 📝 [The Post-Modern Stack, Joining the modern data stack and the modern ML stack](https://towardsdatascience.com/the-post-modern-stack-993ec3b044c1) | #MLOps #Metaflow #ModernDataStack #dbt #snowflake #S3 #Sagemaker #RecList - 📽️ [Building LLM Applications for Production // Chip Huyen @ LLMs in Prod Conference](https://www.youtube.com/watch?v=spamOhG7BOA) | #LLM #Conference #MLOps ### Python 🐍 - 🗓️ [EuroPython2023 conference will be in Prague (July 17-23)](https://ep2023.europython.eu/) | #Conference #Python #Architecture #OpenAPI #Design - 🚀 [FastAPI 0.100.0 is out and supports Pydantic V2](https://fastapi.tiangolo.com/release-notes/#01000) | #Web #Rust #Performance #OpenAPI - 🧰 [CodeCarbon, a Python library to track carbon emissions from your computer](https://github.com/mlco2/codecarbon) | #GreenIT #Software Engineering
18
0
chasingboy/Xtools
https://github.com/chasingboy/Xtools
Xtools is a Sublime Text plugin and a simple tool for asset processing and command-line invocation.
# Xtools ### Preface Xtools is a Sublime Text plugin and a simple asset-processing tool. Real-world penetration testing involves a lot of repetitive operations, so I decided to write a small tool to cut down on the repetitive work. In day-to-day pentesting I had used an asset text-cleaning tool that worked well; I added some extra features and a dark theme to it — thanks to xinyu2428. ``` https://github.com/xinyu2428/HTML_TOOLS ``` In daily use, though, something still felt missing. I considered extending its JavaScript code, but found it could not interact with the command line, so I gave that up. After some back and forth I realized that most of my time is spent in the Sublime Text editor anyway, so the final idea was to integrate everything as a Sublime Text plugin. This also saves a lot of Ctrl+C and Ctrl+V. <img width="1649" alt="1" src="https://github.com/chasingboy/Xtools/assets/39737245/e3f15d93-f6c7-4baf-9d44-ca01dfbab00d"> ### Features 1. IP, domain, and URL processing * Extract IPv4 addresses (internal and external) * Convert between IPv4 addresses and their /24 (C-class) ranges * Extract domains (root domains, all domains) * Extract URLs (with or without a path) * Extract routes (from js or text) * Filter out CDN and DNS domains and IPs (list needs expanding) 2. Simple text processing * Remove special characters, whitespace, `[*]`, `(*)` (* stands for everything inside the brackets) * Extract specified content line by line * Delete specified content line by line * Replace keys and values from a given dictionary 3. Simple encoding and decoding * base64 encoding and decoding * md5 hashing 4. Invoking system commands * download files with curl * sqlmap * ...... (configure your own) ### Screenshots 1. Extract IPs from text. <img width="1736" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/b53054e3-2192-4292-98cb-08068bbbe219"><br/> 2. Base64-encode line by line. <img width="1736" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/ac3ffa1f-ff2a-45c8-b018-72dc37891108"><br/> 3. Replace keys and values according to a dictionary. <img width="1721" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/c2c0d300-26c7-4b49-aa5f-0c06f63d8b38"><br/> 4. Open a terminal and run sqlmap. <img width="1736" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/933036ac-52be-4fac-b315-73bc59e6cafd"><br/> 5. Batch-download files with curl; a work folder is created automatically on the desktop and the download results are saved there. <img width="1731" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/071abf87-839d-49f3-bca6-ac9719327e8e"><br/> 6. When an operation needs input, choose Input Text to open an input box. <img width="1698" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/96b80ebb-c73d-4666-b527-fb998d4d2f1b"><br/> ### Configuring command-line tools Choose Setting Config to open the configuration file and add the system commands you need inside the commented region. The common format is `"args": {"cmd":"sqlmap -r target.txt"}`, e.g. for sqlmap, httpx, or nuclei. ``` /* Set the command via <args->cmd>; the target is target.txt, which is replaced with a temporary file at runtime, e.g. httpx -l target.txt */ { "caption": "httpx", "command": "run_cmd", "args": {"cmd":"httpx -sc -title -l target.txt"} }, { "caption": "nuclei", "command": "run_cmd", "args": {"cmd":"nuclei -l target.txt"} }, { "caption": "sqlmap", "command": "run_cmd", "args": {"cmd":"sqlmap -r target.txt"} }, /* -- END -- */ ``` ~~⚠️ Note: the command-line feature currently only supports macOS.~~ #### Windows command-line invocation is now supported ``` /* Set the command via <args->cmd>; the target is target.txt, which is replaced with a temporary file at runtime, e.g. httpx -l target.txt */ { "caption": "httpx", "command": "run_cmd", "args": {"cmd":"C:\\Users\\kali\\httpx\\httpx -sc -title -l target.txt"} }, /* -- END -- */ ``` For example, configure the httpx command like this, or add httpx to the PATH environment variable. <img width="1846" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/ecc36edb-c1d0-40d2-907c-7fd90bce36ac"> ### Installation Download the source code; the archive downloaded from GitHub is named Xtools-main.zip. After unzipping, rename the folder to Xtools, otherwise some paths may break. Open the Sublime Text packages directory via Preferences -> Browse Packages, put Xtools in that directory, and unzip the applescript archive. <img width="1730" alt="image" src="https://github.com/chasingboy/Xtools/assets/39737245/6d4a5c50-1079-4534-8acf-9aec8213dc23"><br/> Note: Python needs the applescript module to drive the macOS terminal, so unzip applescript.zip inside the Xtools directory. #### Installation errors A user recently reported errors when installing on Windows 11 that left the plugin unusable. After debugging, it turned out that their system **username was in Chinese**. If your system username is in Chinese and installation fails, you can try hard-coding the system username in xtools.py. ``` if platform == 'windows': HOME = os.environ['HOMEPATH'] else: HOME = os.environ['HOME'] ''' If the system username is in Chinese and installation fails, set the system <username> below in xtools.py and remove the # comments. ''' # HOME = "/Users/" + u"<username>" # osx # HOME = "/home/" + u"<username>" # linux # HOME = "C:\Users\" + u"<username>" # windows workdir = os.path.join(HOME,'.xtools') ``` #### Menu items grayed out / unusable * Check whether the system username is **in Chinese** * Check whether the tool folder is named Xtools * Check whether the applescript archive has been unzipped * Check the issues ``` https://github.com/chasingboy/Xtools/issues ``` ### Special thanks xinyu2428 https://github.com/xinyu2428/HTML_TOOLS linkfinder https://github.com/GerbenJavado/LinkFinder ### Changelog [+] 2023-07-15 Added Windows command-line invocation support. [+] 2023-07-18 Added one-click sort & dedup and extraction of javascript file routes.
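For readers unfamiliar with how a Sublime Text plugin like Xtools hooks into the editor, here is a minimal, hypothetical sketch (not taken from the Xtools source) of a `TextCommand` that extracts IPv4 addresses from the current buffer — the kind of operation listed under the Features section above:

```python
import re

import sublime
import sublime_plugin


class ExtractIpv4Command(sublime_plugin.TextCommand):
    """Hypothetical example: replace the buffer with the unique IPv4 addresses found in it."""

    IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def run(self, edit):
        whole_buffer = sublime.Region(0, self.view.size())
        text = self.view.substr(whole_buffer)
        ips = sorted(set(self.IPV4.findall(text)))
        self.view.replace(edit, whole_buffer, "\n".join(ips))
```

Saved under the Packages directory, a command like this becomes callable from the command palette or a menu entry, which is how Xtools exposes its asset-processing actions.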
66
3
IsaMarvin/vi-emacs_learning_hub
https://github.com/IsaMarvin/vi-emacs_learning_hub
Welcome to my personal Vi and Emacs notes repository! This project is dedicated to providing a comprehensive resource for learning and mastering the popular text editors, Vi and Emacs. Within this repository, you'll find a collection of my thoughts, insights, and learnings on these powerful tools.
# Vi & Emacs Learning Hub Welcome to the Vi & Emacs Learning Hub! 🌟 This repository is dedicated to helping you master two popular text editors: Vi and Emacs. Whether you're a beginner or an experienced user, this hub provides valuable resources to enhance your text editing skills. Let's embark on a journey to explore the power and versatility of Vi and Emacs! ## Vi Folder In the [Vi folder](./0x00_vi), you'll find a comprehensive collection of guides, tutorials, and exercises specifically tailored for Vi enthusiasts. Whether you want to navigate faster, edit efficiently, or unleash advanced features, this folder will empower you to become a proficient Vi user. Explore the Vi folder to unlock the full potential of this iconic editor. 📂 [Vi Folder](./0x00_vi) ## Emacs Folder The [Emacs folder](./0x00_emacs) is your gateway to a world of extensible and customizable text editing. Discover a wide range of resources, including guides, tips, and tricks, to help you harness the power of Emacs. From mastering keybindings to exploring powerful extensions, the Emacs folder will equip you with the knowledge to elevate your editing experience. 📂 [Emacs Folder](./0x00_emacs) ## Contributing Contributions are welcome! If you have additional guides, tutorials, or any other valuable resources related to Vi or Emacs, feel free to contribute to the respective folders. Your contributions can help others in their learning journey. ## Happy Editing! ✨💻 Congratulations on exploring the Vi & Emacs Learning Hub! 🎉 We hope this repository provides you with valuable insights and resources to master the art of text editing in Vi and Emacs. May your editing journey be filled with productivity, creativity, and joy! If you have any questions, suggestions, or feedback, don't hesitate to reach out. Happy editing and may you write code that inspires!
25
3
canerkaseler/jetpack-compose-threads-card
https://github.com/canerkaseler/jetpack-compose-threads-card
null
![final_400](https://github.com/canerkaseler/jetpack-compose-threads-card/assets/130801186/b4d271c7-74f3-465c-8cd5-a0e58bd74d7f) # Threads Invitation Card with Jetpack Compose ![threads_medium_ck 2](https://github.com/canerkaseler/jetpack-compose-threads-card/assets/130801186/2652db5e-0092-4ab3-9705-aadc41e0a668) This repository shows the Threads invitation card animation built with Jetpack Compose for Android development. It is accompanied by a [Medium Article](https://proandroiddev.com/threads-invitation-card-with-jetpack-compose-2e5b9baede44); you can study this repository commit by commit alongside the article. ## Description This article aims to create an animation and UI copy of the Threads Invitation Card with Jetpack Compose in an Android project. The project combines three (3) different animations: Always Turning Animation, Rotating Card to near Axis-Y Animation after Dragging, and Animation Rotating after Quick Dragging. ### A) Necessary topics: 1. User Data Model & Design Images 2. Front Side of the Card 3. Back Side of the Card ### B) Main Topics: 1. Card Turning with Dragging 2. Always Turning Animation & Stop Animation 3. Rotating Card to near Axis-Y Animation after Dragging 4. Animation Rotating after Quick Dragging then Infinite Turning Animation To continue reading about the parts above, please check the [Medium Article](https://proandroiddev.com/threads-invitation-card-with-jetpack-compose-2e5b9baede44). ## Author All social media and contact info is available at [@canerkaseler](https://linktr.ee/canerkaseler) <a href="https://www.buymeacoffee.com/canerkaseler" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 50px !important;width: 180px !important;" ></a> ## License ```xml MIT License Copyright (c) 2023 Caner Kaşeler Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. ```
23
2
Anon-Artist/100-days-of-OffSec
https://github.com/Anon-Artist/100-days-of-OffSec
Hi everyone,
# 100-days-of-OffSec ## Summary 1. Day 1 * [Introduction to Red Teaming](introduction-to-red-teaming.md) * [What is Red Teaming](introduction-to-red-teaming.md#what-is-red-teaming) * [Why Red Teaming](introduction-to-red-teaming.md#why-red-teaming) * [Red Teaming x Penetration Testing](introduction-to-red-teaming.md#red-teaming-x-penetration-testing) * [Red Team Methodologies](introduction-to-red-teaming.md#red-team-methodologies) * [Red Team Report Template](introduction-to-red-teaming.md#red-team-report-template) * [References](introduction-to-red-teaming.md#references) 2. Day 2 * [Introduction to Active Directory](introduction-to-active-directory.md) * [What is Active Directory](introduction-to-active-directory.md#what-is-active-directory) * [Components of Active Directory](introduction-to-active-directory.md#components-of-active-directory) * [Structure of Active Directory](introduction-to-active-directory.md#structure-of-active-directory) * [References](introduction-to-active-directory.md#references) 3. Day 3 * [Building an Active Directory Lab](building-an-active-directory-lab.md) * [Checklist for setting up a basic active directory lab](building-an-active-directory-lab.md#checklist-for-setting-up-a-basic-active-directory-lab) * [How to setup an Active Directory Lab](building-an-active-directory-lab.md#how-to-setup-an-active-directory-lab) * [Building a Vulnerable Active Directory Lab](building-an-active-directory-lab.md#building-a-vulnerable-active-directory-lab) 4. Day 4 * [Introduction to Red Team Infrastructure](introduction-to-command-and-control.md) * [What is Command and Control](introduction-to-command-and-control.md#what-is-command-and-control-c2) * [Open Source C2 and Commercial C2](introduction-to-command-and-control.md#open-source-c2-and-commercial-c2) * [C2 Matrix](introduction-to-command-and-control.md#c2-matrix) * [Setup a Red Team Infrastructure](introduction-to-command-and-control.md#set-up-a-red-team-infrastructure) * [References for setting up a Red Team Infrastructure](introduction-to-command-and-control.md#references-for-setting-up-a-red-team-infrastructure) 5. Day 5 * [External Reconnaissance](reconnaissance.md) * [What is External Reconnaissance](reconnaissance.md#what-is-external-reconnaissance) * [Passive Reconnaissance](reconnaissance.md#passive-reconnaissance) * [Active Reconnaissance](reconnaissance.md#active-reconnaissance) * [How to Perform External Reconnaissance](reconnaissance.md#how-to-perform-external-reconnaissance) * [References](reconnaissance.md#reference)
57
8
Lomray-Software/react-route-manager
https://github.com/Lomray-Software/react-route-manager
React route manager for react-router.
# React route manager for [react-router](https://reactrouter.com/) Define and manage application URLs in one place. ![npm](https://img.shields.io/npm/v/@lomray/react-route-manager) ![GitHub](https://img.shields.io/github/license/Lomray-Software/react-route-manager) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=react-route-manager&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=react-route-manager) [![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=react-route-manager&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=react-route-manager) [![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=react-route-manager&metric=security_rating)](https://sonarcloud.io/summary/new_code?id=react-route-manager) [![Vulnerabilities](https://sonarcloud.io/api/project_badges/measure?project=react-route-manager&metric=vulnerabilities)](https://sonarcloud.io/summary/new_code?id=react-route-manager) [![Lines of Code](https://sonarcloud.io/api/project_badges/measure?project=react-route-manager&metric=ncloc)](https://sonarcloud.io/summary/new_code?id=react-route-manager) [![Coverage](https://sonarcloud.io/api/project_badges/measure?project=react-route-manager&metric=coverage)](https://sonarcloud.io/summary/new_code?id=react-route-manager) ## Getting started The package is distributed using [npm](https://www.npmjs.com/), the node package manager. ``` npm i --save @lomray/react-route-manager ``` ## Usage ```typescript jsx import { Manager } from '@lomray/react-route-manager'; import type { RouteObject } from 'react-router'; /** * Application URL manager */ const manager = new Manager({ routes: { home: { url: '/', }, details: { url: '/details', children: { user: { url: '/user/:id', params: { id: '' }, // id is a required param }, }, }, about: { url: '/about', }, }, }); /** * Now we can use it to get route paths for react-router */ const routes: RouteObject[] = [ { path: manager.path('home'), lazy: () => import('@pages/home'), }, { path: manager.path('details'), children: [ { index: true, lazy: () => import('@pages/details'), }, { path: manager.path('details.user'), lazy: () => import('@pages/details/user'), }, ], }, { path: manager.path('about'), lazy: () => import('@pages/about'), }, ]; /** * We can also use it to generate URLs */ const MyComponent = () => { return ( <> <Link to={manager.makeURL('home')}>Home page</Link> <Link to={manager.makeURL('about')}>About page</Link> <Link to={manager.makeURL('details')}>Details page</Link> <Link to={manager.makeURL('details.user', { id: 1 })}>User page</Link> </> ) } ``` ## Route params ```typescript const manager = new Manager({ routes: { user: { url: '/user', params: { // required id: '', // required union id: '' as 'aaa' | 'dddd', // required enum id: DD, // optional id: undefined, // optional union id: undefined as 'aaa' | 'dddd' | undefined, // optional enum id: undefined as DD | undefined } } } }); ``` Explore the [demo app](https://github.com/Lomray-Software/vite-template) to understand more. ## Bugs and feature requests Found a bug or have a feature request? [Please open a new issue](https://github.com/Lomray-Software/react-route-manager/issues/new). ## License Made with 💚 Published under [MIT License](./LICENSE).
14
0
linxid/Focus-DETR-mindspore
https://github.com/linxid/Focus-DETR-mindspore
[ICCV 2023] Official implementation of the paper "Less is More: Focus Attention for Efficient DETR"
## Focus-DETR This is the official implementation of the paper "Less is More: Focus Attention for Efficient DETR" Authors: Dehua Zheng, Wenhui Dong, Hailin Hu, Xinghao Chen, Yunhe Wang. [[`arXiv`](https://arxiv.org/abs/2307.12612)] [[`BibTeX`](#citing-focus-detr)] Focus-DETR is a model that focuses attention on more informative tokens for a better trade-off between computation efficiency and model accuracy. Compared with the state-of-the-art sparse transformed-based detector under the same setting, our Focus-DETR gets comparable complexity while achieving 50.4AP (+2.2) on COCO. <div align="center"> <img src="./figs/model_arch.PNG"/> </div><br/> ## Table of Contents - [Focus-DETR](#focus-detr) - [Table of Contents](#table-of-contents) - [Main Results with Pretrained Models](#main-results-with-pretrained-models) - [Pretrained focus\_detr with ResNet Backbone](#pretrained-focus_detr-with-resnet-backbone) - [Pretrained focus\_detr with Swin-Transformer Backbone](#pretrained-focus_detr-with-swin-transformer-backbone) - [Installation](#installation) - [Training](#training) - [Evaluation](#evaluation) - [Citing Focus-DETR](#citing-focus-detr) ## Main Results with Pretrained Models Here we provide the pretrained `Focus-DETR` weights based on detrex. ##### Pretrained focus_detr with ResNet Backbone <table><tbody> <!-- START TABLE --> <!-- TABLE HEADER --> <th valign="bottom">Name</th> <th valign="bottom">Backbone</th> <th valign="bottom">Pretrain</th> <th valign="bottom">Epochs</th> <th valign="bottom">Denoising Queries</th> <th valign="bottom">box<br/>AP</th> <th valign="bottom">download</th> <!-- TABLE BODY --> <!-- ROW: focus_detr_r50_4scale_12ep --> <tr><td align="left"><a href="configs/focus_detr_resnet/focus_detr_r50_4scale_12ep.py">Focus-DETR-R50-4scale</a></td> <td align="center">R-50</td> <td align="center">IN1k</td> <td align="center">12</td> <td align="center">100</td> <td align="center">48.8</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_r50_4scale_12ep.zip">model</a></td> </tr> <!-- ROW: focus_detr_r50_4scale_24ep --> <tr><td align="left"><a href="configs/focus_detr_resnet/focus_detr_r50_4scale_24ep.py">Focus-DETR-R50-4scale</a></td> <td align="center">R-50</td> <td align="center">IN1k</td> <td align="center">24</td> <td align="center">100</td> <td align="center">50.3</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_r50_4scale_24ep.zip">model</a></td> </tr> <!-- ROW: focus_detr_r50_4scale_36ep --> <tr><td align="left"><a href="configs/focus_detr_resnet/focus_detr_r50_4scale_36ep.py">Focus-DETR-R50-4scale</a></td> <td align="center">R-50</td> <td align="center">IN1k</td> <td align="center">36</td> <td align="center">100</td> <td align="center">50.4</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_r50_4scale_36ep_v3.zip">model</a></td> </tr> <!-- ROW: focus_detr_r101_4scale_12ep --> <tr><td align="left"><a href="configs/focus_detr_resnet/focus_detr_r101_4scale_12ep.py">Focus-DETR-R101-4scale</a></td> <td align="center">R-101</td> <td align="center">IN1k</td> <td align="center">12</td> <td align="center">100</td> <td align="center">50.8</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_r101_4scale_12ep.zip">model</a></td> </tr> <!-- ROW: focus_detr_r101_4scale_24ep --> <tr><td align="left"><a 
href="configs/focus_detr_resnet/focus_detr_r101_4scale_24ep.py">Focus-DETR-R101-4scale</a></td> <td align="center">R-101</td> <td align="center">IN1k</td> <td align="center">24</td> <td align="center">100</td> <td align="center">51.2</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_r101_4scale_24ep.zip">model</a></td> </tr> <!-- ROW: focus_detr_r101_4scale_36ep --> <tr><td align="left"><a href="configs/focus_detr_resnet/focus_detr_r101_4scale_36ep.py">Focus-DETR-R101-4scale</a></td> <td align="center">R-101</td> <td align="center">IN1k</td> <td align="center">36</td> <td align="center">100</td> <td align="center">51.4</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_r101_4scale_36ep_v2.zip">model</a></td> </tr> </tbody></table> #### Pretrained focus_detr with Swin-Transformer Backbone <table><tbody> <th valign="bottom">Name</th> <th valign="bottom">Backbone</th> <th valign="bottom">Pretrain</th> <th valign="bottom">Epochs</th> <th valign="bottom">Denoising Queries</th> <th valign="bottom">box<br/>AP</th> <th valign="bottom">download</th> <!-- ROW: focus_detr_swin_tiny_4scale_12ep --> <tr><td align="left"><a href="configs/focus_detr_swin/focus_detr_swin_tiny_224_4scale_12ep.py">Focus-DETR-Swin-T-224-4scale</a></td> <td align="center">Swin-Tiny-224</td> <td align="center">IN1k</td> <td align="center">12</td> <td align="center">100</td> <td align="center">50.0</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_swin_tiny_224_4scale_12ep.zip">model</a></td> </tr> <!-- ROW: focus_detr_swin_tiny_4scale_24ep --> <tr><td align="left"><a href="configs/focus_detr_swin/focus_detr_swin_tiny_224_4scale_24ep.py">Focus-DETR-Swin-T-224-4scale</a></td> <td align="center">Swin-Tiny-224</td> <td align="center">IN1k</td> <td align="center">24</td> <td align="center">100</td> <td align="center">51.2</td> <td align="center"> <a href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_swin_tiny_224_4scale_24ep.zip">model</a></td> </tr> <!-- ROW: focus_detr_swin_tiny_4scale_36ep --> <tr><td align="left"><a href="configs/focus_detr_swin/focus_detr_swin_tiny_224_4scale_36ep.py">Focus-DETR-Swin-T-224-4scale</a></td> <td align="center">Swin-Tiny-224</td> <td align="center">IN1k</td> <td align="center">36</td> <td align="center">100</td> <td align="center">52.5</td> <td align="center"> <a href="https://github.com/IDEA-Research/detrex-storage/releases/download/v0.1.1/focus_detr_swin_tiny_224_4scale_12ep.pth">model</a></td> </tr> <!-- ROW: focus_detr_swin_tiny_4scale_22k_36ep --> <tr><td align="left"><a href="configs/focus_detr_swin/focus_detr_swin_tiny_224_4scale_36ep.py">Focus-DETR-Swin-T-224-4scale</a></td> <td align="center">Swin-Tiny-224</td> <td align="center">IN22k to IN1k</td> <td align="center">36</td> <td align="center">100</td> <td align="center">53.2</td> <td align="center"> <a href="">model</a></td> </tr> <!-- ROW: focus_detr_swin_base_4scale_22k_36ep --> <tr><td align="left"><a href="configs/focus_detr_swin/focus_detr_swin_base_384_4scale_36ep.py">Focus-DETR-Swin-B-384-4scale</a></td> <td align="center">Swin-Base-384</td> <td align="center">IN22k to IN1k</td> <td align="center">36</td> <td align="center">100</td> <td align="center">56.2</td> <td align="center"> <a 
href="https://github.com/linxid/Focus-DETR-mindspore/releases/download/Focus-DETR/focus_detr_swin_base_384_4scale_22k_36ep.pth">model</a></td> </tr> <!-- ROW: focus_detr_swin_large_4scale_22k_36ep --> <tr><td align="left"><a href="configs/focus_detr_swin/focus_detr_swin_large_384_4scale_36ep.py">Focus-DETR-Swin-L-384-4scale</a></td> <td align="center">Swin-Large-384</td> <td align="center">IN22k to IN1k</td> <td align="center">36</td> <td align="center">100</td> <td align="center">56.3</td> <td align="center"> <a href="">model</a></td> </tr> </tbody></table> **Note:** * Swin-X-384 means the backbone pretrained resolution is 384 x 384 and IN22k to In1k means the model is pretrained on ImageNet-22k and finetuned on ImageNet-1k. ## Installation Please refer to [Installation Instructions](https://detrex.readthedocs.io/en/latest/tutorials/Installation.html) for the details of installation. ## Training All configs can be trained with: ```bash cd detrex python tools/train_net.py --config-file projects/focus_detr/configs/path/to/config.py --num-gpus 8 ``` By default, we use 8 GPUs with total batch size as 16 for training. ## Evaluation Model evaluation can be done as follows: ```bash cd detrex python tools/train_net.py --config-file projects/focus_detr/configs/path/to/config.py --eval-only train.init_checkpoint=/path/to/model_checkpoint ``` ## Citing Focus-DETR If you find our work helpful for your research, please consider citing the following BibTeX entry. ```BibTex @misc{zheng2023more, title={Less is More: Focus Attention for Efficient DETR}, author={Dehua Zheng and Wenhui Dong and Hailin Hu and Xinghao Chen and Yunhe Wang}, year={2023}, eprint={2307.12612}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
58
1
joshuavial/aider.nvim
https://github.com/joshuavial/aider.nvim
null
# Aider Plugin for Neovim This is a simple plugin for Neovim that allows you to open a terminal window inside Neovim and run [Aider](https://github.com/paul-gauthier/aider). I wrote it as an experiment in using Aider which is by far the best AI coding assistant I've seen, and now just a few keystrokes away in vim. ## Installation You can install the Aider Plugin for Neovim using various package managers. Here are the instructions for some common ones: Using [packer.nvim](https://github.com/wbthomason/packer.nvim) ```lua use 'joshuavial/aider.nvim' ``` Using [vim-plug](https://github.com/junegunn/vim-plug) ```vim Plug 'joshuavial/aider.nvim' ``` Using [dein](https://github.com/Shougo/dein.vim) ```vim call dein#add('joshuavial/aider.nvim') ``` ## Usage The Aider Plugin for Neovim provides the `OpenAider` function, which you can call to open a terminal window with the Aider command. The `OpenAider` function accepts the following arguments: - `command`: The full aider command to use - defaults to `aider` - `window`: The window style to use 'vsplit' (default), 'hsplit' or 'float' Note: When Aider opens, it will automatically add all open buffers to the command. Before using the `OpenAider` function, you need to require the `aider` module in your configuration file. Add the following line to your `.vimrc` or `init.vim`: ```vim lua require('aider') ``` Here are some examples of how to use the `OpenAider` function: ```vim :lua require('aider').OpenAider() :lua require('aider').OpenAider("aider", "float") :lua require('aider').OpenAider("AIDER_NO_AUTO_COMMITS=1 aider -3" ) ``` You can also set keybindings for the `OpenAider` function in Lua. Here's an example: ```lua -- set a keybinding for the OpenAider function vim.api.nvim_set_keymap('n', '<leader>oa', '<cmd>lua require("aider").OpenAider()<cr>', {noremap = true, silent = true}) ``` In this example, pressing `<leader>oa` in normal mode will call the `OpenAider` function. Run `aider --help` to see all the options you can pass to the cli. The plugin provides the following keybindings: - `<leader><Space><Space>` to open a terminal window with the Aider defaults (gpt-4). - `<leader><Space>3` to open a terminal window with the Aider command using the gpt-3.5-turbo-16k model for chat. ## Tips for Working with Buffers in Vim If you're not familiar with buffers in Vim, here are some tips: - Use `:ls` or `:buffers` to see all open buffers. - Use `:b <number>` or `:buffer <number>` to switch to a specific buffer. Replace `<number>` with the buffer number. - Use `:bd` or `:bdelete` to close the current buffer. - Use `:bd <number>` or `:bdelete <number>` to close a specific buffer. Replace `<number>` with the buffer number. - Use `:bufdo bd` to close all buffers. ## NOTE if you resize a split the nvim buffer can truncate the text output, chatGPT tells me there isn't an easy work around for this. Feel free to make a PR if you think it's easy to solve without rearchitecting and using tmux or something similar.
21
0
dart-lang/native_synchronization
https://github.com/dart-lang/native_synchronization
Low-level synchronization primitives built using dart:ffi.
[![Dart](https://github.com/dart-lang/native_synchronization/actions/workflows/dart.yaml/badge.svg)](https://github.com/dart-lang/native_synchronization/actions/workflows/dart.yaml) Low level synchronization primitives built on dart:ffi. ## TODO: Projects docs TODO: Add a brief project description here. ## Status: experimental **NOTE**: This package is currently experimental and published under the [labs.dart.dev](https://dart.dev/dart-team-packages) pub publisher in order to solicit feedback. For packages in the labs.dart.dev publisher we generally plan to either graduate the package into a supported publisher (dart.dev, tools.dart.dev) after a period of feedback and iteration, or discontinue the package. These packages have a much higher expected rate of API and breaking changes. Your feedback is valuable and will help us evolve this package. For general feedback, suggestions, and comments, please file an issue in the [bug tracker](https://github.com/dart-lang/native_synchronization/issues).
11
0
verytinydever/phone-checker
https://github.com/verytinydever/phone-checker
null
# phone-checker
14
0
cpojer/eslint-config
https://github.com/cpojer/eslint-config
Opinionated ESLint config with sensible defaults.
# `@nkzw/eslint-config` Opinionated ESLint config with sensible defaults. ## Installation & Usage ``` npm install @nkzw/eslint-config ``` In your `.eslintrc.js` or `.eslintrc.cjs`: ```js module.exports = { extends: ['@nkzw'], }; ``` Then run `pnpm eslint .` or `npx eslint .`. ## Philosophy & Principles Use this configuration if these principles resonate with you: - **Error, Never Warn:** People tend to ignore warnings. There is little value in only warning about potentially problematic code patterns. Either it's an issue or not. Errors force the developer to address the problem either by fixing it or explicitly disabling the rule in that location. - **Strict, consistent code style:** If there are multiple ways of doing something, or there is a new language construct or best practice, this configuration will suggest the most strict and consistent solution. - **Prevent Bugs:** Problematic patterns such as `instanceof` are not allowed. This forces developers to choose more robust patterns. This configuration disallows usage of `console` or `test.only` so that you don't end up with unintended logging in production or CI failures. If you want to log to the console in your production app, use another function that calls `console.log` to distinguish between debug logs and intentional logs. - **Fast:** Slow rules are avoided if possible. For example, it is recommended to use the fast `noUnusedLocals` check in TypeScript instead of the `no-unused-vars` rule. - **Don't get in the way:** Rules that get in the way or are too [subjective](https://github.com/airbnb/javascript) are disabled. Rules with autofixers are preferred over rules without them. ## Included Plugins & Rules This configuration consists of the most useful and least annoying rules from the following eslint plugins: - [`typescript-eslint`](https://github.com/typescript-eslint/typescript-eslint) - [`eslint-import-resolver-typescript`](https://www.npmjs.com/package/eslint-import-resolver-typescript) - [`eslint-plugin-unicorn`](https://github.com/sindresorhus/eslint-plugin-unicorn) - [`eslint-plugin-import`](https://github.com/import-js/eslint-plugin-import) - [`eslint-plugin-sort-keys-fix`](https://github.com/leo-buneev/eslint-plugin-sort-keys-fix) - [`eslint-plugin-typescript-sort-keys`](https://github.com/infctr/eslint-plugin-typescript-sort-keys) - [`eslint-plugin-react`](https://github.com/jsx-eslint/eslint-plugin-react) - [`eslint-plugin-react-hooks`](https://github.com/facebook/react/tree/main/packages/eslint-plugin-react-hooks) - [`eslint-plugin-no-instanceof`](https://www.npmjs.com/package/eslint-plugin-no-instanceof) - [`eslint-plugin-no-only-tests`](https://github.com/levibuzolic/eslint-plugin-no-only-tests) ## Suggestions This configuration is meant to be used with: - [TypeScript](https://www.typescriptlang.org/) and the [`noUnusedLocals`](https://www.typescriptlang.org/tsconfig#noUnusedLocals) setting. - [Prettier](https://prettier.io/) and the [`@ianvs/prettier-plugin-sort-imports`](https://github.com/ianvs/prettier-plugin-sort-imports). Read more [frontend tooling suggestions in this blog post](https://cpojer.net/posts/fastest-frontend-tooling-in-2022).
25
2
hrbrmstr/go-hhhash
https://github.com/hrbrmstr/go-hhhash
#️⃣ 🕸️ 👤 HTTP Headers Hashing
# hhhash golang HTTP Headers Hashing CLI ## Description HTTP Headers Hashing (HHHash) is a technique used to create a fingerprint of an HTTP server based on the headers it returns. HHHash employs one-way hashing to generate a hash value for the set of header keys returned by the server. See https://www.foo.be/2023/07/HTTP-Headers-Hashing_HHHash for more info. ## Usage ```bash ./hhhash https://www.circl.lu/ hhh:1:78f7ef0651bac1a5ea42ed9d22242ed8725f07815091032a34ab4e30d3c3cefc ```
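To make the fingerprinting idea concrete, here is a rough Python sketch of an HHHash-style fingerprint. It assumes (based on the linked HHHash write-up) that the value is the SHA-256 of the response header names, in the order and case they were received, joined by `:`; the Go CLI above is the reference tool, not this snippet.

```python
import hashlib

import requests  # assumed third-party dependency for this sketch


def hhhash(url: str) -> str:
    # Fetch without following redirects so the fingerprint reflects this exact server response.
    response = requests.get(url, allow_redirects=False, timeout=10)
    # Assumption: the fingerprint hashes only the header *names*, joined by ':'.
    names = ":".join(response.headers.keys())
    return "hhh:1:" + hashlib.sha256(names.encode()).hexdigest()


print(hhhash("https://www.circl.lu/"))
```

Header order and capitalization matter for a hash like this, so an HTTP client that normalizes headers may produce a different value than the CLI.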
12
2
mcleodchris/dev-threads
https://github.com/mcleodchris/dev-threads
null
# :thread: Dev-Threads ## What is it? Dev-Threads is a directory of software engineering and related accounts found on the Threads social network - developers, engineers, tools, newsletters, etc. All accounts have submitted themselves for inclusion. Profile data is pulled from Threads using [the unofficial API client](https://github.com/junhoyeo/threads-api) by [Junho Yeo](https://junho.io/). ## How to add yourself to the directory? :one: First, fork this repository using the button near the top of the repository homepage. :two: Secondly, add an entry with your username to the array in `./src/developers.json`, along with an optional array of a few topics which describe what you post about on Threads. **Please do not add any accounts you do not control**. The format is a regular JavaScript object in an array. Don't forget to add the comma after the preceding entry! The updated file should look something like this: ```JSON [ { "username": "mstrkapowski", "topics": [ "general", "azure", "learning & development", "warhammer", "life" ] }, // snip ..... { "username": "yournamehere", "topics": [ "ai", "machine learning", "boba tea", "photography" ] } ] ``` Optionally: build and preview locally. `npm install` then `npm run start`. The site should load on [http://localhost:8080/] :three: Finally, commit, and submit a Pull Request back to this repository. At busy/holiday times it might take a while to review and approve the PR, but all will be looked at. While it should be rare, the admin reserves the right to reject a submission for any reason. Once a PR is merged, it will take a few minutes for the pipelines to rebuild the site - after which your profile will be visible on [https://dev-threads.directory/](https://dev-threads.directory/) :no_pedestrians: If you want to remove your profile from the directory, please submit a new PR with your data removed from `./src/developers.json`. ## Credits Dev-Threads is originally based on the [winty](https://github.com/distantcam/windty/) [11ty](https://www.11ty.dev/) starter project. The idea is based on [PersonalSit.es](https://personalsit.es/). Profile data is pulled from Threads using [the unofficial API client](https://github.com/junhoyeo/threads-api) by [Junho Yeo](https://junho.io/).
17
42
NiazMorshed2007/appwrite-writer
https://github.com/NiazMorshed2007/appwrite-writer
Appwrite Writer is a notion like editor powered with Appwrite, OpenAI & Novel.
![appwrite-writer](https://github.com/NiazMorshed2007/appwrite-writer/assets/77217706/a203decd-76fe-4204-99e3-aeccd0ba7c48) <p align="center"> <a href="#introduction"><strong>Introduction</strong></a> · <a href="#setting-up-locally"><strong>Setting Up Locally</strong></a> · <a href="#tech-stack"><strong>Tech Stack</strong></a> </p> <br/> ## Introduction Appwrite Writer is a Notion-style WYSIWYG editor with AI-powered autocompletions, powered by Appwrite and [Novel](https://github.com/steven-tey/novel). Its goal is to give you an example of the powerful things you can build with Appwrite. Use this project to build your next cool project, and share the love by sharing it on social media! https://github.com/NiazMorshed2007/appwrite-writer/assets/77217706/f7fba7f5-d902-473f-9e0f-82e05c4e7797 <br /> ## Tech Stack Appwrite Writer is built on the following stack: - [Appwrite](https://appwrite.io/) - backend - [Next.js](https://nextjs.org/) – framework - [OpenAI](https://openai.com/) - AI completions - [Novel](https://github.com/steven-tey/novel) - [Vercel AI SDK](https://sdk.vercel.ai/docs) – AI library - [Vercel](https://vercel.com) – deployments - [TailwindCSS](https://tailwindcss.com/) – styles - [Cal Sans](https://github.com/calcom/font) – font ## Author - Niaz Morshed ([@niazmorshed_](https://twitter.com/niazmorshed_))
14
2
Sampaio-Vitor/webscraper.classifier
https://github.com/Sampaio-Vitor/webscraper.classifier
Development of a scraper for collecting job postings.
# Data-job web scraper + text classification with XGBoost ## Goal This project is an application built to automate the search for data jobs at large companies, using LinkedIn as the search platform. The application scrapes job postings, ranks them by relevance using machine learning, and then sends a daily e-mail with the most relevant openings. The whole process runs in the cloud (AWS), enabling continuous search and analysis without constant manual interaction. ## Features - **Web scraping:** the application scrapes the jobs listed on LinkedIn for a predefined query and stores the results in a CSV file. - **Job classification:** using a previously trained machine-learning model (XGBoost), the application ranks the scraped jobs by relevance. - **E-mail notification:** the application sends the user a daily e-mail with the most relevant jobs found. ## Development steps 1) Initial scraping of jobs using the search query "("(data science)" OR "cientista de dados)" OR "machine learning")"; 2) Manual labeling of the jobs found, from 0 to 2, where: - 0: not relevant - 1: relevant - 2: very relevant 3) Training of the classification model: I manually labeled all the jobs found (400+), then trained an XGBoost classifier on those labels. 4) Development of the scraper to be run daily in the cloud; 5) Development of the HTML e-mail template and the module that sends the daily e-mails; 6) Deployment of the code to an EC2 instance, scheduled to run daily; 7) Landing a job as a data scientist as the end result! 🎉 ## Contributing Feel free to contribute to this project and to use it however you like, with the appropriate adaptations. Feedback and contributions are much appreciated.
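As an illustration of the classification step described above, here is a hedged Python sketch of training an XGBoost relevance classifier on manually labeled postings with TF-IDF features; the file and column names are hypothetical and not taken from this repository.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# Hypothetical CSV produced by the scraper, with a manually assigned label in {0, 1, 2}.
jobs = pd.read_csv("labeled_jobs.csv")  # assumed columns: "description", "label"

model = make_pipeline(
    TfidfVectorizer(max_features=5000),       # turn job descriptions into sparse TF-IDF features
    XGBClassifier(n_estimators=300, max_depth=6),
)
model.fit(jobs["description"], jobs["label"])

# Score newly scraped postings and keep only the "very relevant" ones (label 2) for the daily e-mail.
new_jobs = pd.read_csv("scraped_today.csv")
new_jobs["relevance"] = model.predict(new_jobs["description"])
daily_digest = new_jobs[new_jobs["relevance"] == 2]
```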
13
4
YvanYin/Metric3D
https://github.com/YvanYin/Metric3D
The repo for "Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image"
# 🚀 Metric3D (ICCV23) 🚀 **The is official PyTorch implementation of paper "Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image" (Metric 3D)** Authors: [Wei Yin](https://yvanyin.net/)<sup>1*</sup>, [Chi Zhang](https://icoz69.github.io/)<sup>2*</sup>, [Hao Chen](https://scholar.google.com/citations?hl=zh-CN&user=i-2ghuYAAAAJ)<sup>3</sup>, [Zhipeng Cai](https://zhipengcai.github.io/)<sup>3</sup>, [Gang Yu](https://www.skicyyu.org/)<sup>4</sup>, [Kaixuan Wang](https://wang-kx.github.io/)<sup>1</sup>, [Xiaozhi Chen](https://xiaozhichen.github.io/)<sup>1</sup>, [Chunhua Shen](https://cshen.github.io/)<sup>3</sup> ### [Arxiv](https://arxiv.org/abs/2307.10984) | [Video](https://www.youtube.com/playlist?list=PLEuyXJsWqUNd04nwfm9gFBw5FVbcaQPl3) | Hugging Face 🤗 (Comming Soon) [@JUGGHM](https://github.com/JUGGHM)<sup>1,5</sup> will also maintain this project. ## News and TO DO LIST - [ ] Stronger models and tiny models - [ ] Hugging face - `[2023/8/10]` Inference codes, pretrained weights, and demo released. ## 🌼 Abstract Reconstructing accurate 3D scenes from images is a long-standing vision task. Due to the ill-posedness of the single-image reconstruction problem, most well-established methods are built upon multi-view geometry. - State-of-the-art (SOTA) monocular metric depth estimation methods can only handle a single camera model and are unable to perform mixed-data training due to the metric ambiguity. - Meanwhile, SOTA monocular methods trained on large mixed datasets achieve zero-shot generalization by learning affine-invariant depths, which cannot recover real-world metrics. In this work, we show that the key to a zero-shot single-view metric depth model lies in the combination of large-scale data training and resolving the metric ambiguity from various camera models. To recover metric depth from a monocular image, we observe that - Most objects are unique in scale - Once the camera intrinsic and the pose of an object is fixed, the object image size is determined. - If the intrinsic remains unknown, scale ambiguity will make this task ill-posed. Based on such observations, we propose a canonical camera space transformation module, which explicitly addresses the ambiguity problems and can be effortlessly plugged into existing monocular models. Equipped with our module, monocular models can be stably trained over 8 million of images with thousands of camera models, resulting in zero-shot generalization to in-the-wild images with unseen camera set. ### Fully zero-shot state-of-the-art mono-depth #### 🏆 Highlights: The Champion of [2nd Monocular Depth Estimation Challenge](https://jspenmar.github.io/MDEC) in CVPR 2023 🏆 <div align=center> <img src="media/screenshots/challenge.PNG"> </div> #### Routing benchmarks WITHOUT re-training the models on target datasets, we obtain comparable performance against SoTA supervised methods Adabins and NewCRFs. 
| | Backbone | KITTI $\delta 1$ ↑ | KITTI $\delta 2$ ↑ | KITTI $\delta 3$ ↑ | KITTI AbsRel ↓| KITTI RMSE ↓| KITTI log10 ↓| NYU $\delta 1$ ↑ | NYU $\delta 2$ ↑ | NYU $\delta 3$ ↑ | NYU AbsRel ↓| NYU RMSE ↓| NYU RMSE-log ↓|
|---------|------------|---|---|---|---|---|---|---|---|---|---|---|---|
| Adabins | Efficient-B5 | 0.964 | 0.995 | 0.999 | 0.058 | 2.360 | 0.088 | 0.903 | 0.984 | 0.997 | 0.103 | 0.0444 | 0.364 |
| NewCRFs | SwinT-L | 0.974 | 0.997 | 0.999 | 0.052 | 2.129 | 0.079 | 0.922 | 0.983 | 0.994 | 0.095 | 0.041 | 0.334 |
| Ours (CSTM_label) | ConvNeXt-L | 0.964 | 0.993 | 0.998 | 0.058 | 2.770 | 0.092 | 0.944 | 0.986 | 0.995 | 0.083 | 0.035 | 0.310 |

## 🌈 DEMOs

### In-the-wild 3D reconstruction

| | Image | Reconstruction | Pointcloud File |
|:---------:|:------------------:|:------------------:|:--------:|
| room | <img src="data/wild_demo/jonathan-borba-CnthDZXCdoY-unsplash.jpg" width="300" height="335"> | <img src="media/gifs/room.gif" width="300" height="335"> | [Download](https://drive.google.com/file/d/1P1izSegH2c4LUrXGiUksw037PVb0hjZr/view?usp=drive_link) |
| Colosseum | <img src="data/wild_demo/david-kohler-VFRTXGw1VjU-unsplash.jpg" width="300" height="169"> | <img src="media/gifs/colo.gif" width="300" height="169"> | [Download](https://drive.google.com/file/d/1jJCXe5IpxBhHDr0TZtNZhjxKTRUz56Hg/view?usp=drive_link) |
| chess | <img src="data/wild_demo/randy-fath-G1yhU1Ej-9A-unsplash.jpg" width="300" height="169" align=center> | <img src="media/gifs/chess.gif" width="300" height="169"> | [Download](https://drive.google.com/file/d/1oV_Foq25_p-tTDRTcyO2AzXEdFJQz-Wm/view?usp=drive_link) |

All three images are downloaded from [Unsplash](https://unsplash.com/) and put in the data/wild_demo directory.

### 3D metric reconstruction, Metric3D × DroidSLAM

Metric3D can also provide scale information for DroidSLAM, helping to solve the scale drift problem for better trajectories. (Left: Droid-SLAM (mono). Right: Droid-SLAM with Metric-3D)

<div align=center>
<img src="media/gifs/0028.gif">
</div>

#### KITTI odometry evaluation (Translational RMS drift (t_rel, ↓) / Rotational RMS drift (r_rel, ↓))

| | Modality | seq 00 | seq 02 | seq 05 | seq 06 | seq 08 | seq 09 | seq 10 |
|:----------:|:--------:|:----------:|:----------:|:---------:|:----------:|:----------:|:---------:|:---------:|
| ORB-SLAM2 | Mono | 11.43/0.58 | 10.34/0.26 | 9.04/0.26 | 14.56/0.26 | 11.46/0.28 | 9.3/0.26 | 2.57/0.32 |
| Droid-SLAM | Mono | 33.9/0.29 | 34.88/0.27 | 23.4/0.27 | 17.2/0.26 | 39.6/0.31 | 21.7/0.23 | 7/0.25 |
| Droid+Ours | Mono | 1.44/0.37 | 2.64/0.29 | 1.44/0.25 | 0.6/0.2 | 2.2/0.3 | 1.63/0.22 | 2.73/0.23 |
| ORB-SLAM2 | Stereo | 0.88/0.31 | 0.77/0.28 | 0.62/0.26 | 0.89/0.27 | 1.03/0.31 | 0.86/0.25 | 0.62/0.29 |

Metric3D makes the mono-SLAM scale-aware, like stereo systems.
#### KITTI sequence videos
- YouTube: [2011_09_30_drive_0028](https://youtu.be/gcTB4MgVCLQ) / [2011_09_30_drive_0033](https://youtu.be/He581fmoPP4) / [2011_09_30_drive_0034](https://youtu.be/I3PkukQ3_F8) videos
- Bilibili (TODO)

#### Estimated poses
[2011_09_30_drive_0033](https://drive.google.com/file/d/1SMXWzLYrEdmBe6uYMR9ShtDXeFDewChv/view?usp=drive_link) / [2011_09_30_drive_0034](https://drive.google.com/file/d/1ONU4GxpvTlgW0TjReF1R2i-WFxbbjQPG/view?usp=drive_link) / [2011_10_03_drive_0042](https://drive.google.com/file/d/19fweg6p1Q6TjJD2KlD7EMA_aV4FIeQUD/view?usp=drive_link)

#### Pointcloud files
[2011_09_30_drive_0033](https://drive.google.com/file/d/1K0o8DpUmLf-f_rue0OX1VaHlldpHBAfw/view?usp=drive_link) / [2011_09_30_drive_0034](https://drive.google.com/file/d/1bvZ6JwMRyvi07H7Z2VD_0NX1Im8qraZo/view?usp=drive_link) / [2011_10_03_drive_0042](https://drive.google.com/file/d/1Vw59F8nN5ApWdLeGKXvYgyS9SNKHKy4x/view?usp=drive_link)

## 🔨 Installation

### One-line Installation
```bash
pip install -r requirements.txt
```

Or you could also try:

#### 30 series GPUs, PyTorch 1.10
```bash
conda create -n metric3d python=3.7
conda activate metric3d
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
pip install -U openmim
mim install mmengine
mim install "mmcv-full==1.3.17"
pip install "mmsegmentation==0.19.0"
```

#### 40 series GPUs, PyTorch 2.0
```bash
conda create -n metric3d python=3.8
conda activate metric3d
pip3 install torch torchvision torchaudio
pip install -r requirements.txt
pip install -U openmim
mim install mmengine
mim install "mmcv-full==1.7.1"
pip install "mmsegmentation==0.30.0"
pip install numpy==1.20.0
pip install scikit-image==0.18.0
```

### Dataset annotation components

With off-the-shelf depth datasets, we need to generate JSON annotations compatible with this codebase, organized as follows:
```
dict(
  'files': list(
    dict(
      'rgb': 'data/kitti_demo/rgb/xxx.png',
      'depth': 'data/kitti_demo/depth/xxx.png',
      'depth_scale': 1000.0,  # the depth scale of the GT depth img.
      'cam_in': [fx, fy, cx, cy],
    ),
    dict(
      ...
    ),
    ...
  )
)
```
To generate such annotations, please refer to the "Inference" section.

### Configs

In ```mono/configs``` we provide different config setups. The intrinsics of the canonical camera are set below:
```
canonical_space = dict(
  img_size=(512, 960),
  focal_length=1000.0,
),
```
where cx and cy are set to half of the image size. Inference settings are defined as
```
depth_range=(0, 1),
depth_normalize=(0.3, 150),
crop_size = (512, 1088),
```
where the images will first be resized to the ```crop_size``` and then fed into the model.

## ✈️ Inference

### Download Checkpoint

| | Encoder | Decoder | Link |
|:----:|:-------:|:-------:|:-------:|
| v1.0 | ConvNeXt-L | Hourglass-Decoder | [Download](https://drive.google.com/file/d/1KVINiBkVpJylx_6z1lAC7CQ4kmn-RJRN/view?usp=drive_link)|

More models are on the way...

### Dataset Mode

1. Put the trained ckpt file ```model.pth``` in ```weight/```.
2. Generate the data annotation by following the code ```data/gene_annos_kitti_demo.py```, which includes 'rgb', (optional) 'intrinsic', (optional) 'depth', (optional) 'depth_scale'.
3. Change the 'test_data_path' in ```test_*.sh``` to the ```*.json``` path.
4. Run ```source test_kitti.sh``` or ```source test_nyu.sh```.

### In-the-Wild Mode

1. Put the trained ckpt file ```model.pth``` in ```weight/```.
2. Change the 'test_data_path' in ```test.sh``` to the image folder path.
3. Run ```source test.sh```.

As no intrinsics are provided, we provide 9 focal-length settings by default.

## ❓ Q & A

### Q1: Why do the depth maps look good while the pointclouds are distorted?

Because the focal length is not properly set! Please find a proper focal length by modifying the code [here](mono/utils/do_test.py#309) yourself.

### Q2: Why are the pointclouds so slow to generate?

Because the images are too large! Use smaller ones instead.

### Q3: Why are the predicted depth maps not satisfactory?

First, make sure all black padding regions at the image boundaries are cropped out. Then please try again. Besides, Metric3D is not almighty. Some objects (chandeliers, drones...) / camera views (aerial view, BEV...) do not occur frequently in the training datasets. We will dig deeper into this and release more powerful solutions.

## 🍭 Acknowledgement

This work is empowered by DJI Automotive<sup>1</sup>

<div align=center>
<img src="media/icons/dji.jpg" width="150" height="200" align=center>
</div>

and collaborators from Tencent<sup>2</sup>, ZJU<sup>3</sup>, Intel Labs<sup>4</sup>, and HKUST<sup>5</sup>

<img src="media/icons/tencent.png" width="200" height="100" align=center> <img src="media/icons/zju.png" width="100" height="100" align=center> <img src="media/icons/intel.jpg" width="150" height="100" align=center> <img src="media/icons/hkust.png" width="300" height="75" align=center>

We appreciate efforts from the contributors of [mmcv](https://github.com/open-mmlab/mmcv), all concerning datasets, and [NVDS](https://github.com/RaymondWang987/NVDS).

## 📧 Citation
```
@article{yin2023metric,
  title={Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image},
  author={Yin, Wei and Zhang, Chi and Chen, Hao and Cai, Zhipeng and Yu, Gang and Wang, Kaixuan and Chen, Xiaozhi and Shen, Chunhua},
  booktitle={ICCV},
  year={2023}
}
```

## License and Contact

The *Metric 3D* code is under a 2-clause BSD License for non-commercial usage. For further questions, contact Dr. yvan.yin [[email protected]] and mu.hu [[email protected]].
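Supplementing the "Dataset annotation components" section above, here is a minimal, illustrative sketch of how an annotation file in that format could be written; the paths and intrinsics are placeholders, and the repository's own `data/gene_annos_kitti_demo.py` remains the authoritative reference.

```python
# Illustrative sketch only: build an annotation dict in the format described above
# and dump it to JSON. Paths and intrinsics below are placeholders.
import json
from pathlib import Path

rgb_dir = Path("data/kitti_demo/rgb")
depth_dir = Path("data/kitti_demo/depth")
cam_in = [707.0912, 707.0912, 601.8873, 183.1104]  # example [fx, fy, cx, cy]

files = []
for rgb_path in sorted(rgb_dir.glob("*.png")):
    files.append({
        "rgb": str(rgb_path),
        "depth": str(depth_dir / rgb_path.name),  # optional GT depth
        "depth_scale": 1000.0,                    # optional, scale of the GT depth png
        "cam_in": cam_in,                         # optional intrinsics
    })

with open("data/kitti_demo/test_annotations.json", "w") as f:
    json.dump({"files": files}, f, indent=2)
```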
145
0
namkoong-lab/whyshift
https://github.com/namkoong-lab/whyshift
A python package providing a benchmark with various specified distribution shift patterns.
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg?color=g&style=plastic)](https://opensource.org/licenses/MIT) [![Downloads](https://static.pepy.tech/personalized-badge/whyshift?period=total&units=abbreviation&left_color=grey&right_color=blue&left_text=Downloads)](https://pepy.tech/project/whyshift) [![pypy: v](https://img.shields.io/pypi/v/whyshift.svg)](https://pypi.python.org/pypi/whyshift/) ## `WhyShift`: A Benchmark with Specified Distribution Shift Patterns > Jiashuo Liu, Tianyu Wang, Peng Cui, Hongseok Namkoong > Tsinghua University, Columbia University `WhyShift` is a python package that provides a benchmark with various specified distribution shift patterns on real-world tabular data. Our testbed highlights the importance of future research that builds an understanding of how distributions differ. For more details, please refer to our <a href="https://arxiv.org/abs/2307.05284">paper</a>. ## Table of Contents 1. [Dataset Access](#basic-installation-instructions) 2. [Python Package: `whyshift`](#python-package-whyshift) 3. [Different Distribution Shift Patterns](#different-distribution-shift-patterns) 4. [Implemented Algorithms](#implemented-algorithms) 5. [License and terms of use](#license-and-terms-of-use) 6. [References](#references) ## Dataset Access Here we provide the access links for the 5 datasets used in our benchmark. #### ACS Income * The task is to predict whether an individual’s income is above \$50,000. * Access link: https://github.com/socialfoundations/folktables * Reference: Ding, F., Hardt, M., Miller, J., & Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. Advances in neural information processing systems, 34, 6478-6490. * License: MIT License #### ACS PubCov * The task is to predict whether an individual has public health insurance. * Access link: https://github.com/socialfoundations/folktables * Reference: Ding, F., Hardt, M., Miller, J., & Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. Advances in neural information processing systems, 34, 6478-6490. * License: MIT License #### ACS Mobility * The task is to predict whether an individual had the same residential address one year ago. * Access link: https://github.com/socialfoundations/folktables * Reference: Ding, F., Hardt, M., Miller, J., & Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. Advances in neural information processing systems, 34, 6478-6490. * License: MIT License #### Taxi Dataset * The task is to predict whether the total ride duration time exceeds 30 minutes, based on location and temporal features. * Access link: * https://www.kaggle.com/datasets/mnavas/taxi-routes-for-mexico-city-and-quito * https://www.kaggle.com/competitions/nyc-taxi-trip-duration/data * License: CC BY-SA 4.0 #### US Accident Dataset * The task is to predict whether an accident is severe (long delay) or not (short delay) based on weather features and Road condition features. * Access link: https://www.kaggle.com/datasets/sobhanmoosavi/us-accidents * License: CC BY-SA 4.0 ## Python Package: `whyshift` Here we provide the scripts to get data in our proposed settings. 
#### Install the package ``` pip3 install whyshift ``` #### For settings utilizing ACS Income, Public Coverage, Mobility datasets * `get_data(task, state, year, need_preprocess, root_dir)` function * `task` values: 'income', 'pubcov', 'mobility' * examples: ```python from whyshift import get_data # for ACS Income X, y, feature_names = get_data("income", "CA", True, './datasets/acs/', 2018) # for ACS Public Coverage X, y, feature_names = get_data("pubcov", "CA", True, './datasets/acs/', 2018) # for ACS Mobility X, y, feature_names = get_data("mobility", "CA", True, './datasets/acs/', 2018) ``` * support `state` values: * ['AL', 'AK', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'FL', 'GA', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM', 'NY', 'NC', 'ND', 'OH', 'OK', 'OR', 'PA', 'RI', 'SC', 'SD', 'TN', 'TX', 'UT', 'VT', 'VA', 'WA', 'WV', 'WI', 'WY', 'PR'] #### For settings utilizing US Accident, Taxi datasets * download data files: ```python # US Accident: https://www.kaggle.com/datasets/sobhanmoosavi/us-accidents # Taxi https://www.kaggle.com/competitions/nyc-taxi-trip-duration ``` * put data files in dir `./datasets/` * accident: `./datasets/Accident/US_Accidents_Dec21_updated.csv` * taxi: `./datasets/Taxi/{city}_clean.csv` * pass the `path to the data file` of `get_data` function * example: ```python from whyshift import get_data # for US Accident X, y, _ = get_data("accident", "CA", True, './datasets/Accident/US_Accidents_Dec21_updated.csv') # for Taxi X, y, _ = get_data("taxi", "nyc", True, './datasets/Taxi/train.csv') ``` * support `state` values: * for US Accident: ['CA', 'TX', 'FL', 'OR', 'MN', 'VA', 'SC', 'NY', 'PA', 'NC', 'TN', 'MI', 'MO'] * for Taxi: ['nyc', 'bog', 'uio', 'mex'] ## Different Distribution Shift Patterns Based on our `whyshift` package, one could design various source-target pairs with different distribution shift patterns. Here we list some of them for reference: | #ID | Dataset | Type | #Features | Outcome | Source | #Train Samples | #Test Domains | Dom. Ratio | | --- | ------- | ---- | --------- | ------- | ------ | -------------- | ------------- | ---------- | | 1 | ACS Income | Spatial | 9 | Income≥50k | California | 195,665 | 50 | $Y\|X: 13/14$ | | 2 | ACS Income | Spatial | 9 | Income≥50k | Connecticut | 19,785 | 50 | $Y\|X: 24/24$ | | 3 | ACS Income | Spatial | 9 | Income≥50k | Massachusetts | 40,114 | 50 | $Y\|X: 21/22$ | | 4 | ACS Income | Spatial | 9 | Income≥50k | South Dakota | 4,899 | 50 | $Y\|X: 9/9$ | | 5 | ACS Mobility | Spatial | 21 | Residential Address | Mississippi | 5,318 | 50 | $Y\|X: 28/34$ | | 6 | ACS Mobility | Spatial | 21 | Residential Address | New York | 40,463 | 50 | $Y\|X: 30/31$ | | 7 | ACS Mobility | Spatial | 21 | Residential Address | California | 80,329 | 50 | $Y\|X: 9/17$ | | 8 | ACS Mobility | Spatial | 21 | Residential Address | Pennsylvania | 23,918 | 50 | $Y\|X: 17/17$ | | 9 | Taxi | Spatial | 7 | Duration time≥30 min | Bogotá | 3,063 | 3 | $Y\|X: 1/2$ | | 10 | Taxi | Spatial | 7 | Duration time≥30 min | New York City | 1,458,646 | 3 | $Y\|X: 3/3$ | | 11 | ACS Pub.Cov | Spatial | 18 | Public Ins. Coverage | Nebraska | 6,332 | 50 | $Y\|X: 32/39$ | | 12 | ACS Pub.Cov | Spatial | 18 | Public Ins. Coverage | Florida | 71,297 | 50 | $Y\|X: 28/29$ | | 13 | ACS Pub.Cov | Spatial | 18 | Public Ins. Coverage | Texas | 98,928 | 50 | $Y\|X: 33/34$ | | 14 | ACS Pub.Cov | Spatial | 18 | Public Ins. 
Coverage | Indiana | 24,330 | 50 | $Y\|X: 11/13$ | | 15 | US Accident | Spatial | 47 | Severity of Accident | Texas | 26,664 | 13 | $Y\|X: 7/7$ | | 16 | US Accident | Spatial | 47 | Severity of Accident | California | 64,909 | 13 | X: 22/31 | | 17 | US Accident | Spatial | 47 | Severity of Accident | Florida | 32,278 | 13 | X: 5/7 | | 18 | US Accident | Spatial | 47 | Severity of Accident | Minnesota | 8,927 | 13 | X: 8/11 | | 19 | ACS Pub.Cov | Temporal | 18 | Public Ins. Coverage | Year 2010 (NY) | 73,208 | 3 | X: 2/2 | | 20 | ACS Pub.Cov | Temporal | 18 | Public Ins. Coverage | Year 2010 (CA) | 149,441 | 3 | X: 2/2 | | 21 | ACS Income | Synthetic | 9 | Income≥50k | Younger People (80%) | 20,000 | 1 | X: 1/1 | | 22 | ACS Income | Synthetic | 9 | Income≥50k | Younger People (90%) | 20,000 | 1 | X: 1/1 | In our benchmark, each setting has multiple target domains (except the last setting). In our main body, we select only one target domain for each setting. We report the `Dom. Ratio` to represent the dominant ratio of $Y|X$ shifts or $X$ shifts in source-target pairs with performance degradation larger than **5** percentage points in each setting. For example, "$Y|X$: 13/14" means that there are 14 source-target pairs in Setting 1 with degradation larger than 5 percentage points and 13 out of them with over 50\% degradation attributed to $Y|X$ shifts. We use XGBoost to measure this. ## Implemented Algorithms In our `whyshift` package, we also implement several algorithms for tabular data classification, including `Logistic Regression`, `MLP`, `SVM`, `Random Forest`, `XGBoost`, `LightGBM`, `GBM`, $\chi^2$/CVaR-`DRO/DORO`, `Group DRO`, `Simple-Reweighting`, `JTT`, `Fairness-In/Postprocess` and `DWR` methods. ```python # use the implemented methods algo = fetch_model(method_name) ``` Note that the supported method names are: ```python method_name_list = ['lr','svm','xgb', 'lightgbm', 'rf', 'dwr', 'jtt','suby', 'subg', 'rwy', 'rwg', 'FairPostprocess_exp','FairInprocess_dp', 'FairPostprocess_threshold', 'FairInprocess_eo', 'FairInprocess_error_parity','chi_dro', 'chi_doro','cvar_dro','cvar_doro','group_dro'] ``` ## License and terms of use Our benchmark is built upon `Folktables`. The License of `Folktables` is: ``` Folktables provides code to download data from the American Community Survey (ACS) Public Use Microdata Sample (PUMS) files managed by the US Census Bureau. The data itself is governed by the terms of use provided by the Census Bureau. For more information, see https://www.census.gov/data/developers/about/terms-of-service.html The Adult reconstruction dataset is a subsample of the IPUMS CPS data available from https://cps.ipums.org/. The data are intended for replication purposes only. Individuals analyzing the data for other purposes must submit a separate data extract request directly via IPUMS CPS. Individuals are not to redistribute the data without permission. Contact [email protected] for redistribution requests. ``` Besides, for US Accident and Taxi data from `kaggle`, individuals should follow the their Licenses, see https://www.kaggle.com/datasets/sobhanmoosavi/us-accidents and https://www.kaggle.com/competitions/nyc-taxi-trip-duration/data. ## References [1] Ding, F., Hardt, M., Miller, J., & Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. Advances in neural information processing systems, 34, 6478-6490. 
* We modify the <a href="https://github.com/socialfoundations/folktables">`folktables`</a> code to support `year` values before 2014, and include the revised version in our package.
* Part of the algorithm code is adapted from this <a href="https://github.com/jpgard/subgroup-robustness-grows-on-trees">codebase</a>.
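Putting the pieces above together, here is a rough end-to-end sketch of measuring performance degradation between a source and a target state. The `get_data` calls follow the examples above; the `fetch_model` import path and the returned model's scikit-learn-style `fit`/`predict` interface are assumptions rather than documented API.

```python
# Illustrative sketch: train on a source state, evaluate on a target state,
# and observe the accuracy drop under distribution shift.
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from whyshift import get_data, fetch_model  # assumed import path for fetch_model

# ACS Income, as in the examples above
X_src, y_src, _ = get_data("income", "CA", True, './datasets/acs/', 2018)
X_tgt, y_tgt, _ = get_data("income", "SD", True, './datasets/acs/', 2018)

# Hold out a source test split for a fair in-distribution baseline
X_tr, X_te, y_tr, y_te = train_test_split(X_src, y_src, test_size=0.2, random_state=0)

model = fetch_model("xgb")          # one of the supported method names listed above
model.fit(X_tr, y_tr)               # assumed scikit-learn-style interface

src_acc = accuracy_score(y_te, model.predict(X_te))
tgt_acc = accuracy_score(y_tgt, model.predict(X_tgt))
print(f"source acc {src_acc:.3f} -> target acc {tgt_acc:.3f} (degradation {src_acc - tgt_acc:.3f})")
```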
11
0
chaytonmin/Awesome-Papers-World-Models
https://github.com/chaytonmin/Awesome-Papers-World-Models
Awesome papers about World Models
### The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system. (Forrester, 1971) <p align="center"> <img src="/docs/world_model.png" width="50%"/>World_Model </p> ### World model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. (World Models, 2018) ### Occupancy grid itself provides a stochastic spatial world model. The resulting geometric world models serve as the underlying representation for other robotic tasks, such as obstacle avoidance, path planning and navigation, or planning of grasping and assembly operations. (Using Occupancy Grids for Mobile Robot Perception and Navigation, 1989) #### 1989 + Using Occupancy Grids for Mobile Robot Perception and Navigation [[paper](http://www.sci.brooklyn.cuny.edu/~parsons/courses/3415-fall-2011/papers/elfes.pdf)] #### 2018 + World Models [[paper](https://arxiv.org/abs/1803.10122)] [[Project](https://worldmodels.github.io/)] + Recurrent world models facilitate policy evolution [[paper](https://proceedings.neurips.cc/paper/2018/hash/2de5d16682c3c35007e4e92982f1a2ba-Abstract.html)] #### 2019 + Contrastive Learning of Structured World Models [[paper](https://arxiv.org/abs/1911.12247)] #### 2020 + DreamerV2: Mastering atari with discrete world models [[paper](https://arxiv.org/pdf/2010.02193.pdf)] + Planning to Explore via Self-Supervised World Models [[paper](http://proceedings.mlr.press/v119/sekar20a/sekar20a.pdf)] #### 2022 + Yann LeCun: A Path Towards Autonomous Machine Intelligence [[paper](https://openreview.net/pdf?id=BZ5a1r-kVsf)] [[Video](https://www.youtube.com/watch?v=OKkEdTchsiE)] #### 2023 + Mastering Diverse Domains through World Models [[paper](https://arxiv.org/abs/2301.04104)] [[Project](https://danijar.com/project/dreamerv3/)] + DayDreamer: World Models for Physical Robot Learning [[paper](https://proceedings.mlr.press/v205/wu23c.html)] + FOCUS: Object-Centric World Models for Robotics Manipulation [[paper](https://arxiv.org/pdf/2307.02427.pdf)] + Tesla CVPR 2023 workshop [[Video](https://www.youtube.com/watch?v=6x-Xb_uT7ts)] + Wayve GAIA-1 [[blog](https://wayve.ai/thinking/introducing-gaia1/)] + UniWorld: Multi-camera Spatio-temporal Pre-Training via World Models[[github](https://github.com/chaytonmin/UniWorld)]
27
0
Maknee/minigpt4.cpp
https://github.com/Maknee/minigpt4.cpp
Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML)
# minigpt4.cpp

<a href='https://huggingface.co/spaces/maknee/minigpt4.cpp'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'> [![Quickstart in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Maknee/minigpt4.cpp/blob/master/minigpt4/colab_webui.ipynb)

Inference of [MiniGPT4](https://github.com/Vision-CAIR/MiniGPT-4) in pure C/C++.

## Description

The main goal of `minigpt4.cpp` is to run minigpt4 with 4-bit quantization using the [ggml](https://github.com/ggerganov/ggml) library.

## Demo

![minigpt1](assets/webui_demo.png)
![minigpt1](assets/minigpt4-demo1.gif)

## Usage

### 1. Clone repo

**Requirements**: [git](https://gitforwindows.org/)

```bash
git clone --recursive https://github.com/Maknee/minigpt4.cpp
cd minigpt4.cpp
```

### 2. Getting the library

#### Option 1: Download precompiled binary

##### Windows / Linux / MacOS

Go to [Releases](https://github.com/Maknee/minigpt4.cpp/releases) and extract the `minigpt4` library file into the repository directory.

#### Option 2: Build library manually

##### Windows

**Requirements**: [CMake](https://cmake.org/download/), [Visual Studio](https://visualstudio.microsoft.com/) and [Git](https://gitforwindows.org/)

```commandline
cmake .
cmake --build . --config Release
```

`bin\Release\minigpt4.dll` should be generated

##### Linux

**Requirements**: CMake (Ubuntu: `sudo apt install cmake`)

```bash
cmake .
cmake --build . --config Release
```

`minigpt4.so` should be generated

##### MacOS

**Requirements**: CMake (MacOS: `brew install cmake`)

```sh
cmake .
cmake --build . --config Release
```

`minigpt4.dylib` should be generated

**Note:** If you build with OpenCV (allowing features such as loading and preprocessing images within the library itself), set `MINIGPT4_BUILD_WITH_OPENCV` to `ON` in `CMakeLists.txt` or build with `-DMINIGPT4_BUILD_WITH_OPENCV=ON` as a parameter to the cmake CLI.

### 3. Obtaining the model

#### Option 1: Download pre-quantized MiniGPT4 model

Pre-quantized models are available on Hugging Face ~ [7B](https://huggingface.co/datasets/maknee/minigpt4-7b-ggml/tree/main) or [13B](https://huggingface.co/datasets/maknee/minigpt4-13b-ggml/tree/main).

Recommended for reliable results, but slow inference speed: [minigpt4-13B-f16.bin](https://huggingface.co/datasets/maknee/minigpt4-13b-ggml/blob/main/minigpt4-13B-f16.bin)

#### Option 2: Convert and quantize PyTorch model

**Requirements**: [Python 3.x](https://www.python.org/downloads/) and [PyTorch](https://pytorch.org/get-started/locally/).
Clone the [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4) repository and perform the setup

```sh
cd minigpt4
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```

Download the pretrained checkpoint in the [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4) repository under `Checkpoint Aligned with Vicuna 7B` or `Checkpoint Aligned with Vicuna 13B`, or download them from the [Huggingface link for 7B](https://huggingface.co/datasets/maknee/minigpt4-7b-ggml/blob/main/pretrained_minigpt4_7b.pth) or [13B](https://huggingface.co/datasets/maknee/minigpt4-13b-ggml/blob/main/pretrained_minigpt4.pth)

Convert the model weights into ggml format

##### Windows

7B model
```commandline
cd minigpt4
python convert.py C:\pretrained_minigpt4_7b.pth --ftype=f16
```

13B model
```commandline
cd minigpt4
python convert.py C:\pretrained_minigpt4.pth --ftype=f16
```

##### Linux / MacOS

7B model
```sh
python convert.py ~/Downloads/pretrained_minigpt4_7b.pth --outtype f16
```

13B model
```sh
python convert.py ~/Downloads/pretrained_minigpt4.pth --outtype f16
```

`minigpt4-7B-f16.bin` or `minigpt4-13B-f16.bin` should be generated

### 4. Obtaining the vicuna model

#### Option 1: Download pre-quantized vicuna-v0 model

Pre-quantized models are available on [Hugging Face](https://huggingface.co/datasets/maknee/ggml-vicuna-v0-quantized/tree/main)

Recommended for reliable results and decent inference speed: [ggml-vicuna-13B-v0-q5_k.bin](https://huggingface.co/datasets/maknee/ggml-vicuna-v0-quantized/blob/main/ggml-vicuna-13B-v0-q5_k.bin)

#### Option 2: Convert and quantize vicuna-v0 model

**Requirements**: [Python 3.x](https://www.python.org/downloads/) and [PyTorch](https://pytorch.org/get-started/locally/).

Follow the [guide from MiniGPT4](https://github.com/Vision-CAIR/MiniGPT-4/blob/main/PrepareVicuna.md) to obtain the vicuna-v0 model.

Then, clone llama.cpp

```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake .
cmake --build . --config Release
```

Convert the model to ggml

```sh
python convert.py <path-to-model>
```

Quantize the model

```sh
python quantize <path-to-model> <output-model> Q4_1
```

### 5. Running

Test that minigpt4 works by calling the following, replacing `minigpt4-13B-f16.bin` and `ggml-vicuna-13B-v0-q5_k.bin` with your respective models

```sh
cd minigpt4
python minigpt4_library.py minigpt4-13B-f16.bin ggml-vicuna-13B-v0-q5_k.bin
```

##### Webui

Install the requirements for the webui

```sh
pip install -r requirements.txt
```

Then, run the webui, replacing `minigpt4-13B-f16.bin` and `ggml-vicuna-13B-v0-q5_k.bin` with your respective models

```sh
python webui.py minigpt4-13B-f16.bin ggml-vicuna-13B-v0-q5_k.bin
```

The output should contain something like the following:

```sh
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
```

Go to `http://127.0.0.1:7860` in your browser and you should be able to interact with the webui.
437
14
Light-City/light-memory-pool
https://github.com/Light-City/light-memory-pool
A lightweight memory pool based on Arrow
# A lightweight memory pool based on Arrow

The memory pool in this project is a derivative of the one in the [Apache Arrow project](https://github.com/apache/arrow). We extracted Arrow's complex core structure, the memory pool, into this standalone project. Because the original memory pool is deeply coupled to Arrow's own tooling, we removed and reworked parts of it here while keeping the basic functionality consistent with the original Arrow memory pool. Some of the changes include:

- Separating the allocator from the memory_pool
- Removing the unneeded LoggingMemoryPool and ProxyMemoryPool
- Removing third-party malloc libraries such as jemalloc (support may be added back in the future)

Through these changes, our goals are to:

- Make the code leaner
- Make the memory pool easier to use as a dependency of other projects
- Provide a simple way to pull in this project's .so library and header files in order to use the memory pool

In addition, this project can serve as a resource for studying memory pool design and implementation in depth. You are welcome to explore and use this carefully refined memory pool.

## 1. How to build

```
➜ bazel build //src:memory_pool
WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
INFO: Analyzed target //src:memory_pool (36 packages loaded, 169 targets configured).
INFO: Found 1 target...
Target //src:memory_pool up-to-date:
  bazel-bin/src/libmemory_pool.a
  bazel-bin/src/libmemory_pool.dylib
INFO: Elapsed time: 1.568s, Critical Path: 1.05s
INFO: 10 processes: 4 internal, 6 darwin-sandbox.
INFO: Build completed successfully, 10 total actions
```

## 2. How to use

All examples live in the [examples directory](./examples/).

### 2.1 Writing a simple case

See: [helloworld](./examples/hello_world.cc)

```cpp
arrow::MemoryPool* pool = arrow::default_memory_pool();
char* val;
arrow::Status status = pool->Allocate(14, reinterpret_cast<uint8_t**>(&val));
if (status.ok()) {
    std::cout << "Memory allocation successful." << std::endl;
    std::strcpy(val, "Hello, World!");
    std::cout << "Filled content: " << val << std::endl;
    // Free with the same size that was allocated (14 bytes).
    pool->Free(reinterpret_cast<uint8_t*>(val), 14);
} else {
    std::cout << "Memory allocation failed." << std::endl;
}
```

Build:

```cpp
➜ bazel build //examples:hello_world
WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
INFO: Analyzed target //examples:hello_world (0 packages loaded, 2 targets configured).
INFO: Found 1 target...
Target //examples:hello_world up-to-date:
  bazel-bin/examples/hello_world
INFO: Elapsed time: 0.881s, Critical Path: 0.73s
INFO: 7 processes: 5 internal, 2 darwin-sandbox.
INFO: Build completed successfully, 7 total actions
```

Run:

```cpp
➜ bazel-bin/examples/hello_world
Memory allocation successful.
Filled content: Hello, World!
```

## 3. How to test

The tests are written with catch2 and all live in the [tests directory](./tests/). You can run the other tests under the tests directory as well; just replace submit_test with the corresponding test target.

```cpp
bazel test //tests:memory_pool_test
```
23
2
freenodes/freenodes
https://github.com/freenodes/freenodes
免费梯子🪜 免费科学上网🛜免费翻墙🧱免费订阅♻️免费代理✨ 免费节点🆓免费机场✈️4小时更新⌚️一键订阅📪
<h1 align="center"> <img src="https://github.com/Dreamacro/clash/raw/master/docs/logo.png" alt="Clash" width="200"> <br>FreeNodes<br> </h1> <p align="center"> <a href="https://img.shields.io/github/watchers/freenodes/freenodes"> <img src="https://img.shields.io/github/watchers/freenodes/freenodes" alt="watchers"> </a> <a href="https://img.shields.io/github/stars/freenodes/freenodes"> <img src="https://img.shields.io/github/stars/freenodes/freenodes" alt="stars"> </a> <a href="https://img.shields.io/github/forks/freenodes/freenodes"> <img src="https://img.shields.io/github/forks/freenodes/freenodes" alt="forks"> </a> <a href="https://visitor-badge.laobi.icu/badge?page_id=freenodes.freenodes"> <img src="https://visitor-badge.laobi.icu/badge?page_id=freenodes.freenodes" alt="visitor"> </a> <a href="https://img.shields.io/badge/license-GNU%20General%20Public%20License%20v3.0-green.svg"> <img src="https://img.shields.io/badge/license-GNU%20General%20Public%20License%20v3.0-green.svg" alt="license"> </a> </p> ## 免费节点及订阅地址: - Proxies: https://raw.githubusercontent.com/freenodes/freenodes/main/clash.yaml - Proxies with Speed Test: https://raw.githubusercontent.com/freenodes/freenodes/main/clash_speed.yaml ## Clash、SS等客户端订阅地址一键转换: - acl4ssr: https://acl4ssr-sub.github.io/ ## 测速 | 节点 | 带宽 (MB/s) | 延迟 (ms) | 节点 | 带宽 (MB/s) | 延迟 (ms) | 节点 | 带宽 (MB/s) | 延迟 (ms) | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | 🇬🇧-GB-012 | 45.00 | 398 | 🇩🇪-DE-006 | 44.79 | 436 | 🇺🇸-US-088 | 44.59 | 184 | | 🇯🇵-JP-005 | 44.59 | 438 | 🇯🇵-JP-006 | 43.21 | 480 | 🇭🇰-HK-006 | 43.02 | 768 | | 🇸🇬-SG-009 | 43.02 | 886 | 🇬🇧-GB-005 | 42.83 | 427 | 🇭🇰-HK-005 | 42.64 | 742 | | 🇸🇬-SG-010 | 42.27 | 823 | 🇺🇸-US-070 | 42.27 | 137 | 🇷🇺-RU-002 | 41.73 | 989 | | 🇱🇻-LV-001 | 41.20 | 663 | 🇺🇸-US-072 | 40.52 | 184 | 🇸🇬-SG-003 | 40.35 | 711 | | 🇮🇳-IN-003 | 40.18 | 764 | 🇺🇸-US-083 | 40.02 | 241 | 🇬🇧-GB-002 | 39.86 | 407 | | 🇺🇸-US-063 | 39.38 | 219 | 🇸🇬-SG-004 | 39.37 | 871 | 🇬🇧-GB-003 | 38.90 | 391 | | 🇬🇧-GB-017 | 38.75 | 473 | 🇺🇸-US-077 | 38.44 | 174 | 🇺🇸-US-085 | 37.56 | 962 | | 🇺🇸-US-062 | 37.41 | 748 | 🇺🇸-US-031 | 36.71 | 984 | 🇺🇸-US-096 | 36.71 | 164 | | 🇺🇸-US-064 | 36.17 | 517 | 🇺🇸-US-109 | 36.17 | 140 | 🇺🇸-US-036 | 34.87 | 142 | | 🇺🇸-US-100 | 34.75 | 225 | 🇺🇸-US-112 | 34.75 | 242 | 🇺🇸-US-066 | 34.51 | 104 | | 🇬🇧-GB-011 | 33.79 | 395 | 🇭🇰-HK-001 | 33.44 | 425 | 🇺🇸-US-084 | 33.10 | 223 | | 🇺🇸-US-057 | 32.99 | 409 | 🇩🇪-DE-004 | 32.66 | 425 | 🇯🇵-JP-003 | 32.44 | 420 | | 🇬🇧-GB-007 | 32.23 | 389 | 🇺🇸-US-107 | 32.23 | 156 | 🇳🇱-NL-003 | 32.02 | 1205 | | 🇨🇳-TW-002 | 31.91 | 1081 | 🇺🇸-US-068 | 31.00 | 132 | 🇫🇷-FR-030 | 30.71 | 492 | | 🇸🇬-SG-006 | 30.61 | 623 | 🇺🇸-US-034 | 30.61 | 148 | 🇺🇸-US-017 | 30.52 | 137 | | 🇺🇸-US-044 | 30.14 | 923 | 🇬🇧-GB-013 | 30.05 | 563 | 🇰🇷-KR-002 | 29.77 | 1586 | | 🇺🇸-US-028 | 29.77 | 392 | 🇺🇸-US-025 | 29.77 | 142 | 🇺🇸-US-024 | 29.59 | 153 | | 🇺🇸-US-052 | 29.41 | 147 | 🇺🇸-US-045 | 29.41 | 1309 | 🇺🇸-US-061 | 29.32 | 402 | | 🇺🇸-US-102 | 29.15 | 425 | 🇺🇸-US-111 | 28.98 | 178 | 🇺🇸-US-029 | 28.81 | 155 | | 🇺🇸-US-099 | 28.06 | 140 | 🇺🇸-US-060 | 27.98 | 407 | 🇺🇸-US-011 | 27.82 | 143 | | 🇳🇱-NL-005 | 27.66 | 478 | 🇯🇵-JP-002 | 27.66 | 621 | 🇺🇸-US-067 | 27.51 | 144 | | 🇺🇸-US-040 | 27.51 | 141 | 🇺🇸-US-093 | 27.51 | 191 | 🇺🇸-US-116 | 27.43 | 184 | | 🇳🇱-NL-004 | 27.12 | 400 | 🇺🇸-US-026 | 26.90 | 438 | 🇺🇸-US-076 | 26.75 | 182 | | 🇰🇷-KR-004 | 26.75 | 613 | 🇰🇷-KR-003 | 26.54 | 523 | 🇺🇸-US-108 | 26.46 | 246 | | 🇬🇧-GB-006 | 26.25 | 399 | 🇺🇸-US-091 | 26.18 | 197 | 🇺🇸-US-055 | 26.18 | 179 | | 🇸🇬-SG-008 | 26.18 | 
678 | 🇺🇸-US-002 | 26.11 | 1644 | 🇺🇸-US-009 | 26.04 | 423 | | 🇬🇧-GB-009 | 26.04 | 420 | 🇺🇸-US-012 | 26.04 | 133 | 🇫🇷-FR-029 | 25.90 | 401 | | 🇺🇸-US-069 | 25.83 | 167 | 🇺🇸-US-046 | 25.36 | 357 | 🇸🇬-SG-001 | 25.23 | 827 | | 🇺🇸-US-037 | 24.91 | 139 | 🇦🇹-AT-002 | 24.91 | 461 | 🇳🇱-NL-002 | 24.85 | 420 | | 🇺🇸-US-014 | 24.85 | 412 | 🇰🇷-KR-001 | 24.78 | 883 | 🇺🇸-US-021 | 24.72 | 127 | | 🇺🇸-US-051 | 24.66 | 1560 | 🇺🇸-US-027 | 24.66 | 148 | 🇺🇸-US-081 | 24.41 | 229 | | 🇺🇸-US-071 | 24.41 | 258 | 🇺🇸-US-095 | 24.35 | 167 | 🇺🇸-US-059 | 24.17 | 184 | | 🇬🇧-GB-008 | 24.05 | 393 | 🇩🇪-DE-001 | 23.99 | 412 | 🇫🇷-FR-007 | 23.99 | 412 | | 🇺🇸-US-003 | 23.88 | 140 | 🇦🇺-AU-002 | 23.88 | 1090 | 🇺🇸-US-038 | 23.53 | 140 | | 🇺🇸-US-065 | 23.25 | 129 | 🇺🇸-US-113 | 23.14 | 243 | 🇺🇸-US-105 | 23.03 | 518 | | 🇫🇷-FR-003 | 22.60 | 887 | 🇺🇸-US-048 | 22.04 | 995 | 🇫🇷-FR-001 | 22.04 | 570 | | 🇭🇰-HK-004 | 21.99 | 736 | 🇨🇦-CA-002 | 21.80 | 434 | 🇺🇸-US-020 | 21.80 | 401 | | 🇫🇷-FR-008 | 21.75 | 575 | 🇺🇸-US-008 | 21.75 | 124 | 🇸🇬-SG-005 | 21.75 | 711 | | 🇺🇸-US-079 | 21.56 | 169 | 🇺🇸-US-106 | 21.51 | 199 | 🇺🇸-US-018 | 21.41 | 157 | | 🇺🇸-US-047 | 21.09 | 876 | 🇺🇸-US-080 | 21.05 | 177 | 🇬🇧-GB-004 | 21.00 | 391 | | 🇺🇸-US-041 | 20.91 | 136 | 🇳🇱-NL-006 | 20.91 | 502 | 🇺🇸-US-097 | 20.87 | 803 | | 🇨🇦-CA-004 | 20.51 | 601 | 🇺🇸-US-075 | 20.34 | 176 | 🇸🇬-SG-002 | 20.34 | 1234 | | 🇺🇸-US-119 | 20.26 | 299 | 🇺🇸-US-110 | 20.18 | 408 | 🇺🇸-US-042 | 20.05 | 148 | | 🇵🇹-PT-001 | 19.97 | 763 | 🇯🇵-JP-004 | 19.93 | 492 | 🇩🇪-DE-002 | 19.89 | 473 | | 🇺🇸-US-001 | 19.89 | 132 | 🇺🇸-US-019 | 19.81 | 133 | 🇺🇸-US-118 | 19.61 | 310 | | 🇺🇸-US-074 | 19.57 | 509 | 🇺🇸-US-010 | 19.57 | 420 | 🇺🇸-US-032 | 19.57 | 133 | | 🇺🇸-US-013 | 19.41 | 155 | 🇺🇸-US-015 | 19.37 | 140 | 🇺🇸-US-022 | 19.34 | 137 | | 🇯🇵-JP-001 | 18.56 | 482 | 🇳🇱-NL-001 | 18.39 | 395 | 🇺🇸-US-007 | 18.29 | 137 | | 🇨🇦-CA-001 | 18.15 | 374 | 🇺🇸-US-016 | 17.98 | 147 | 🇮🇳-IN-002 | 17.85 | 922 | | 🇺🇸-US-103 | 17.69 | 179 | 🇬🇧-GB-010 | 17.59 | 412 | 🇺🇸-US-054 | 17.56 | 168 | | 🇨🇭-CH-001 | 17.34 | 473 | 🇺🇸-US-033 | 17.31 | 163 | 🇺🇸-US-082 | 17.13 | 247 | | 🇺🇸-US-087 | 16.66 | 142 | 🇺🇸-US-101 | 16.52 | 200 | 🇺🇸-US-053 | 16.41 | 174 | | 🇺🇸-US-073 | 16.25 | 233 | 🇺🇸-US-023 | 16.22 | 144 | 🇺🇸-US-094 | 15.85 | 149 | | 🇺🇸-US-006 | 15.75 | 477 | 🇺🇸-US-035 | 15.72 | 131 | 🇺🇸-US-050 | 15.72 | 186 | | 🇯🇵-JP-007 | 15.50 | 466 | 🇭🇰-HK-002 | 15.21 | 419 | 🇳🇱-NL-007 | 15.16 | 527 | | 🇫🇷-FR-028 | 15.07 | 582 | 🇺🇸-US-056 | 13.68 | 406 | 🇺🇸-US-039 | 13.66 | 138 | | 🇫🇷-FR-002 | 13.62 | 612 | 🇺🇸-US-030 | 12.83 | 137 | 🇺🇸-US-004 | 12.76 | 1196 | | 🇦🇪-AE-001 | 12.58 | 960 | 🇺🇸-US-104 | 11.39 | 396 | 🇩🇪-DE-003 | 10.91 | 523 | | 🇺🇸-US-058 | 10.80 | 171 | 🇨🇦-CA-003 | 8.99 | 270 | 🇺🇸-US-078 | 4.55 | 195 | | 🇷🇸-RS-001 | 2.52 | 551 | 🇬🇧-GB-001 | 0.51 | 760 | 🇫🇷-FR-027 | 0.19 | 909 | | 🇷🇺-RU-001 | 0.18 | 604 | 🇮🇳-IN-001 | 0.12 | 1624 | 🇭🇰-HK-003 | 0.10 | 1323 | | 🇬🇧-GB-016 | 0.01 | 1588 | 🇺🇸-US-090 | 0.01 | 4369 | 🇺🇸-US-086 | 0.01 | 594 | | 🇫🇷-FR-009 | 0.01 | 475 | 🇫🇷-FR-012 | 0.01 | 482 | 🇫🇷-FR-010 | 0.01 | 507 | | 🇬🇧-GB-015 | 0.01 | 557 | 🇫🇷-FR-011 | 0.01 | 541 | 🇮🇲-IM-001 | 0.01 | 582 | | 🇪🇸-ES-001 | 0.01 | 562 | 🇮🇹-IT-001 | 0.01 | 555 | 🇨🇭-CH-002 | 0.01 | 627 | | 🇩🇰-DK-001 | 0.01 | 881 | 🇦🇹-AT-001 | 0.01 | 634 | 🇸🇪-SE-001 | 0.01 | 665 | | 🇫🇷-FR-019 | 0.01 | 531 | 🇵🇱-PL-001 | 0.01 | 663 | 🇫🇷-FR-023 | 0.01 | 524 | | 🇫🇷-FR-006 | 0.01 | 669 | 🇪🇪-EE-001 | 0.01 | 777 | 🇷🇴-RO-001 | 0.01 | 661 | | 🇺🇦-UA-001 | 0.01 | 695 | 🇮🇪-IE-001 | 0.01 | 568 | 🇬🇧-GB-014 | 0.01 | 565 | | 🇧🇦-BA-001 | 0.01 | 739 | 🇫🇷-FR-017 | 0.01 | 442 | 🇫🇷-FR-013 | 0.01 | 623 | | 
🇩🇪-DE-005 | 0.01 | 1034 | 🇦🇱-AL-001 | 0.01 | 831 | 🇳🇴-NO-001 | 0.01 | 595 | | 🇫🇷-FR-020 | 0.01 | 517 | 🇫🇷-FR-022 | 0.01 | 537 | 🇫🇷-FR-021 | 0.01 | 543 | | 🇫🇷-FR-018 | 0.01 | 667 | 🇫🇷-FR-024 | 0.01 | 530 | 🇫🇷-FR-025 | 0.01 | 649 | | 🇫🇷-FR-016 | 0.01 | 536 | 🇫🇷-FR-014 | 0.01 | 663 | 🇫🇷-FR-026 | 0.01 | 513 | | 🇫🇷-FR-005 | 0.01 | 530 | 🇧🇬-BG-001 | 0.01 | 658 | 🇨🇳-TW-001 | 0.00 | 681 | | 🇰🇪-KE-001 | 0.00 | 1027 | 🇺🇸-US-043 | 0.00 | 2196 | 🇿🇦-ZA-001 | 0.00 | 1199 | | 🇨🇾-CY-001 | 0.00 | 975 | 🇺🇸-US-005 | -0.00 | 0 | 🇸🇬-SG-007 | -0.00 | 0 | | 🇺🇸-US-049 | -0.00 | 0 | 🇸🇬-SG-011 | -0.00 | 0 | 🇫🇷-FR-015 | -0.00 | 0 | | 🇺🇸-US-098 | -0.00 | 0 | 🇦🇺-AU-001 | -0.00 | 0 | 🇺🇸-US-114 | -0.00 | 0 | | 🇺🇸-US-115 | -0.00 | 0 | 🇫🇷-FR-004 | -0.00 | 0 | 🇺🇸-US-117 | -0.00 | 0 | | 🇺🇸-US-092 | -0.00 | 0 | 🇯🇵-JP-008 | -0.00 | 0 | 🇺🇸-US-089 | -0.00 | 0 | ## 声明 本项目遵循 GNU General Public License v3.0 开源,在此基础上,所有使用本项目提供服务者都必须在网站首页保留指向本项目的链接 本项目仅限**个人使用**,禁止使用本项目进行营利和做其他违法事情,产生的一切后果本项目概不负责 ## 统计 [![Stargazers over time](https://starchart.cc/freenodes/freenodes.svg)](https://starchart.cc/freenodes/freenodes)
80
2
CareTiger/use-nuxt-vitest
https://github.com/CareTiger/use-nuxt-vitest
A repo with examples to perform unit tests and e2e tests with nuxt-vitest
# A guide to using Nuxt-Vitest

The objective of this repo is to provide a guide to using Nuxt-Vitest and encourage a test-driven development approach to building Nuxt apps. From simple unit tests to end-to-end tests, this repo provides examples of how to use nuxt-vitest.

This repo is a work in progress and will be updated as the project progresses.

## Demo Site

> link to demo site

## Tech Stack

- Nuxt 3
- Supabase
- VueUse
- Pinia
- Stripe
- Nuxt/ui

## Todo List

### Nuxt 3

- [ ] navigateTo
- [ ] useRoute
- [ ] defineEmits, etc.

### Test Supabase composables and services

#### Vue composables

- [ ] useSupabaseAuthClient
- [ ] useSupabaseClient
- [ ] useSupabaseUser
- [ ] protected routes
- [ ] roles-based routes/layouts, etc.

#### Server services

- [ ] serverSupabaseClient
- [ ] serverSupabaseUser

### Pinia and Pinia persist

### Stripe

### Nuxt/ui

## Contributing

We would love to have your contributions! All PRs are welcome! We need help building foundational tests to make your Nuxt app stable and production-ready from day 1!

> Join the Discord channel to discuss the project!
22
0
nileane/TangerineUI-for-Mastodon
https://github.com/nileane/TangerineUI-for-Mastodon
A Tangerine redesign for Mastodon's Web UI. 🍊🐘
# Tangerine UI for Mastodon 🍊🐘 A Tangerine redesign for Mastodon's Web UI. Tangerine UI features a bubblier look, a more compact timeline, round avatars, and a soft color palette that automatically switches between light and dark modes. [🕹️ **Live demo** @ nileane.fr](https://nileane.fr) • [📢 **Announcement** post on Mastodon](https://nileane.fr/@nileane/110691663040709608) • [📝 **Changelog**](https://github.com/nileane/TangerineUI-for-Mastodon/releases) ## Summary * [**Variants**](#variants) * [**List of instances that use Tangerine UI**](#list-of-instances-that-use-tangerine-ui) * [**Installation**](#installation-for-mastodon-admins) * [Install on a **Mastodon** instance](#installation-for-mastodon-admins) * [Install on a **Glitch-soc** instance](#installation-for-glitch-soc-admins) * [Install as a regular user](#installation-for-regular-users) * [**Things to know**](#things-to-know) * [**Accessibility**](#accessibility) * [**Credits**](#credits) * [**Support me**](#support-me-3) ## Variants * **Tangerine 🍊** Default variant for Tangerine UI, featuring a soft orange palette. ![Tangerine UI's orange palette, both in dark and light modes.](https://github.com/nileane/TangerineUI-for-Mastodon/assets/914451/5048329b-9d95-4b11-a859-48c1f37d54e6) * **Purple 🪻** For those of you who like Tangerine UI but want to stick to Mastodon's purple palette. ![Tangerine UI's purple variant, both in dark and light modes.](https://github.com/nileane/TangerineUI-for-Mastodon/assets/914451/c01c7a54-d2db-4fe5-a0f6-dc6e77cfe128) ## List of instances that use Tangerine UI These are the known instances that have enabled Tangerine UI for their users, either as the only theme, or as an optional theme. If you're an admin and have installed Tangerine UI on your instance, **feel free to add yours here** (open a PR, or just [DM me](https://nileane.fr/@nileane)) | **Instance** | **User count** | **Installed as...** | **Default theme?** | | ------------------------------------------------------ | -------------- | ------------------- | ----------------------- | | [piaille.fr](https://piaille.fr) | 10K+ | an optional theme | No | | [norden.social](https://norden.social) | 5K+ | an optional theme | No | | [shelter.moe](https://shelter.moe) | 350+ | an optional theme | No | | [pipou.academy](https://pipou.academy) | 100+ | an optional theme | No | | [indiepocalypse.social](https://indiepocalypse.social) | 100+ | an optional theme | No | | [bolha.one](https://bolha.one) | 20+ | an optional theme | Yes (Tangerine variant) | | [i1.no](https://i1.no) | 15+ | the only theme | Yes (Purple variant) | | [nileane.fr](https://nileane.fr) | 5+ | the only theme | Yes (Tangerine variant) | | [social.nah.re](https://social.nah.re) | 5+ | an optional theme | No | | [esoteric.party](https://esoteric.party) | 5+ | the only theme | Yes (Tangerine variant) | | [isfeeling.social](https://isfeeling.social) | 1+ | the only theme | Yes (Purple variant) | ## Installation for Mastodon admins ### Install Tangerine UI as the only theme on your instance: * Copy & paste the contents of [`TangerineUI.css`](https://github.com/nileane/TangerineUI-for-Mastodon/blob/main/TangerineUI.css) to the **Custom CSS** field in the administration panel on your Mastodon instance (Navigate to https://*domain*/admin/settings/appearance). * 🪻 For the purple variant, copy the contents of [`TangerineUI-purple.css`](https://github.com/nileane/TangerineUI-for-Mastodon/blob/main/TangerineUI-purple.css) instead. 
* ⚠️ **Caution: Using the 'Custom CSS' field to apply Tangerine UI will prevent all users on your instance from being able to choose another theme in their Appearance settings** ([see *Accessibility*](#accessibility)). Please make sure there is a consensus among your users for doing so. If not, see below how to install Tangerine UI as an optional theme for your users. ### Install Tangerine UI as an optional theme on your instance [Recommended]: Follow these instructions if you wish to add Tangerine UI as an available theme for your users on your instance. This will also allow you to set Tangerine UI as the default theme for your instance, while still letting your users change back to any of Mastodon's default themes in their Appearance settings. 1. **Copy the files** from [this folder](https://github.com/nileane/TangerineUI-for-Mastodon/tree/main/mastodon/app/javascript/styles/) to your Mastodon themes directory `app/javascript/styles/`: ``` app/ javascript/ styles/ tangerineui.scss | **new** tangerineui-purple.scss | **new** tangerineui/ | **new** layout-single-column.scss | **new** tangerineui-purple/ | **new** layout-single-column.scss | **new** ``` 2. **Add Tangerine UI to `themes.yml`**. To make Tangerine UI available in your users's settings, you need to add a new line to [`config/themes.yml`](https://github.com/mastodon/mastodon/blob/main/config/themes.yml). Here we're adding 2 new lines, one for Tangerine UI, another for Tangerine UI's purple variant: ```yml default: styles/application.scss contrast: styles/contrast.scss mastodon-light: styles/mastodon-light.scss tangerineui: styles/tangerineui.scss | **new** tangerineui-purple: styles/tangerineui-purple.scss | **new** ``` 3. **Add a localized name (optional).** You can edit each desired language's locale file in `config/locales/[lang].yml` to add a localized string name for Tangerine UI. You need to do this for every language you expect your users to use. Otherwise, in their themes list, they will see the unlocalized theme name ("*tangerineui-purple*"), instead of a readable theme name ("*Tangerine UI (Purple)*"). ```yml themes: contrast: Mastodon (High contrast) default: Mastodon (Dark) mastodon-light: Mastodon (Light) tangerineui: Tangerine UI | **new** tangerineui-purple: Tangerine UI (Purple) | **new** ``` 4. **Compile theme assets and restart.** Run `RAILS_ENV=production bundle exec rails assets:precompile` and restart your Mastodon instance for the changes to take effect. Your users should now be able to choose '*Tangerine UI*' and '*Tangerine UI (Purple)*' as their site theme: ![Screenshot : select Tangerine UI as a theme in appearance settings on Mastodon.](https://github.com/nileane/TangerineUI-for-Mastodon/assets/914451/8cce803c-099b-4f25-8e39-e1c0da3aa6dc) As an admin, you should also now be able to set Tangerine UI as the default theme for your instance (navigate to https://*domain*/admin/settings/appearance): ![Screenshot : select Tangerine UI as the default theme for your Mastodon instance in the administration panel.](https://github.com/nileane/TangerineUI-for-Mastodon/assets/914451/05fcbb53-54de-40e4-89bd-199107116dfc) ## Installation for Glitch-soc admins Tangerine UI does not yet support Glitch-soc's features and layout, but it can still be installed as a vanilla skin on your Glitch-soc instance: 1. 
**Copy the files** from [this folder](https://github.com/nileane/TangerineUI-for-Mastodon/tree/main/mastodon/app/javascript/styles/) to your Mastodon themes directory `app/javascript/styles/`: ``` app/ javascript/ styles/ tangerineui.scss | **new** tangerineui-purple.scss | **new** tangerineui/ | **new** layout-single-column.scss | **new** tangerineui-purple/ | **new** layout-single-column.scss | **new** ``` 2. **Copy the files** from [this folder](https://github.com/nileane/TangerineUI-for-Mastodon/tree/main/mastodon/app/javascript/skins/vanilla/) to your Glitch-soc skins directory `app/javascript/skins/vanilla/`: ``` app/ javascript/ skins/ vanilla/ tangerineui/ | **new** common.scss | **new** names.yml | **new** tangerineui-purple/ | **new** common.scss | **new** names.yml | **new** ``` 3. **Compile theme assets and restart.** Run `RAILS_ENV=production bundle exec rails assets:precompile` and restart your Glitch-soc instance for the changes to take effect. Your users should now be able to select Tangerine UI as a theme in their settings, under Flavours → Vanilla Mastodon → Skin ![Glitch-soc settings. Flavours → Vanilla Mastodon → Skin](https://github.com/nileane/TangerineUI-for-Mastodon/assets/914451/abd931ab-685a-4d55-aa24-cb6356a19a7c) ## Installation for regular users Even if you are not the admin of your instance, you can still use Tangerine UI with a browser extension. * Install any browser extension that allows you to inject CSS on a webpage, such as [Stylus](https://add0n.com/stylus.html), or [Live CSS Editor](https://github.com/webextensions/live-css-editor) * Copy & paste the contents of [`TangerineUI.css`](https://github.com/nileane/TangerineUI-for-Mastodon/blob/main/TangerineUI.css) to the extension's editor * 🪻 For the purple variant, copy the contents of [`TangerineUI-purple.css`](https://github.com/nileane/TangerineUI-for-Mastodon/blob/main/TangerineUI-purple.css) instead. * ⚠️ If you are a user on a Glitch-soc instance, you must switch to the vanilla flavour for Tangerine UI to work properly: * In your settings, navigate to Flavours → Vanilla Mastodon → select the 'Default' skin. ## Things to know * **Tangerine UI currently only supports Mastodon's single column layout**. The advanced web interface (multiple columns) will not be affected. * **Tangerine UI auto-switches from light to dark mode based on your OS preference**. * Check your Mastodon instance version before using. The latest Mastodon release checked to be compatible is indicated in the CSS file header. ## Accessibility * Please consider that some of your users may depend on Mastodon's High Contrast theme before [setting Tangerine UI as the only theme](#install-tangerine-ui-as-the-only-theme-on-your-instance) on your instance. For this reason, unless you're running a single-user instance, I recommend [installing Tangerine UI as an optional/revertable theme](#install-tangerine-ui-as-an-optional-theme-on-your-instance-recommended) instead. ## Credits Huge thanks to [Roni Laukkarinen](https://mementomori.social/@rolle) whose work on [Mastodon Bird UI](https://github.com/ronilaukkarinen/mastodon-bird-ui) I adapted for some parts of the redesign. ## Support me <3 If you enjoy Tangerine UI, jobless me would really appreciate a [tip 💛](https://ko-fi.com/nileane)!
131
6
Lolomgrofl/fastapi_genesis
https://github.com/Lolomgrofl/fastapi_genesis
FastAPI Template to make your life easier 🧬 🚀
# FastAPI Genesis 🧬 - Project Template Generator 🚀 Simple FastAPI project template with Docker, Alembic, PostgreSQL, Poetry and pre-commit to kickstart your new projects. ## How to use it 🤓 Go to the directory where you want to create your project and run: ```bash pip install cookiecutter cookiecutter https://github.com/Lolomgrofl/fastapi_genesis.git ``` ## What's included in the template 🎉 - Here is an explanation of the directory structure of the template: ``` ├── alembic <- Alembic migrations │ ├── app <- Source code of the application (the main package) │ ├── daos <- Data Access Objects (DAOs) to interact with the database │ ├── models <- SQLAlchemy models (the database schema) │ ├── routers <- FastAPI routers (endpoints) │ ├── schemas <- Pydantic schemas for request and response models │ ├── services <- Business logic layer (services) │ ├── db.py <- Database initialization and session management code │ ├── main.py <- FastAPI application instance and startup code │ └── settings.py <- Settings management code (using pydantic settings) │ ├── scripts <- Scripts to perform various tasks like alembic migrations, etc. │ ├── static <- Static files like images, documents, etc. │ ├── tests <- Unit tests, one subdirectory per application module │ ├── .env <- Environment variables. Should not be committed to VCS │ ├── .gitignore <- Files and directories to be ignored by git │ ├── .pre-commit-config.yaml <- Configuration of pre-commit hooks (see https://pre-commit.com/) │ ├── alembic.ini <- Alembic configuration file │ ├── docker-compose.yml <- Docker compose configuration file │ ├── Dockerfile <- Dockerfile for building the image of the application │ ├── Makefile <- Makefile with useful commands for development and project setup │ ├── pyproject.toml <- Python dependencies for Poetry (see https://python-poetry.org/) │ ├── README.md <- File with useful information about the project and how to use it ``` ## Features 🧩 - **Docker** and **docker-compose** for local development - **FastAPI** application with **uvicorn** server - **AsyncPG** for asynchronous access to PostgreSQL - **Pydantic** for data validation - **Poetry** for managing Python dependencies - **Alembic** for database migrations - **Pre-commit** hooks for code formatting and linting - **JWT** token authentication - **SQLAlchemy** models - **CORS** (Cross Origin Resource Sharing) ## User flow as an example of how to use the template 💡 - It consists of the following steps: ``` - Register a new user - Login with the new user - Get all users - Delete all users ``` - This following example will show you the pattern and good practices to follow in order to continue developing your application. ## Cookiecutter parameters explained 🍪 - `repo_name`: Name of the project repository (e.g. 
`my_project`) - `app_container_name`: Name of the Docker container for the FastAPI application server inside `docker-compose.yaml` file - `app_service_port`: Port on the host machine to which the FastAPI application will be exposed inside `docker-compose.yaml` file and `Dockerfile` - `db_container_name`: Name of the Docker container for the PostgreSQL database server inside `docker-compose.yaml` file - `db_service_port`: Port on the host machine to which the PostgreSQL database server will be exposed inside `docker-compose.yaml` file - `pgadmin_container_name`: Name of the Docker container for the pgAdmin web interface inside `docker-compose.yaml` file - `pgadmin_service_port`: Port on the host machine to which the pgAdmin web interface will be exposed inside `docker-compose.yaml` file - `network_name`: Name of the Docker network that will be created inside `docker-compose.yaml` file # GLHF 🚀 ## License This project is licensed under the terms of the MIT license.
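To illustrate the router → service → DAO layering described in the directory structure above, here is a small hypothetical sketch in the spirit of the template; the module paths, class names, and `get_session` dependency are illustrative assumptions, not the template's actual code.

```python
# Hypothetical illustration of the layering described above (not the template's actual code).
from fastapi import APIRouter, Depends
from pydantic import BaseModel
from sqlalchemy.ext.asyncio import AsyncSession

from app.db import get_session               # assumed session dependency
from app.daos.user import UserDao            # assumed DAO class
from app.services.user import UserService    # assumed service class

router = APIRouter(prefix="/users", tags=["users"])

class RegisterUserRequest(BaseModel):
    email: str
    password: str

class UserResponse(BaseModel):
    id: int
    email: str

@router.post("/register", response_model=UserResponse)
async def register_user(
    payload: RegisterUserRequest,
    session: AsyncSession = Depends(get_session),
) -> UserResponse:
    # The router stays thin: validation lives in the Pydantic schema,
    # business logic in the service, persistence in the DAO.
    service = UserService(UserDao(session))
    user = await service.register(payload.email, payload.password)
    return UserResponse(id=user.id, email=user.email)
```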
181
9
ml-jku/semantic-image-text-alignment
https://github.com/ml-jku/semantic-image-text-alignment
null
# SITTA: A Semantic Image-Text Alignment for Image Captioning

Fabian Paischer<sup>1 2</sup>, Thomas Adler<sup>1</sup>, Markus Hofmarcher<sup>1</sup>, Sepp Hochreiter<sup>1 2 3</sup>

<sup>1</sup> LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria <br/> <sup>2</sup> ELLIS Unit Linz <sup>3</sup> Institute of Advanced Research in Artificial Intelligence (IARAI)

---

**[SITTA: A semantic Image-Text Alignment for Image Captioning]()** is a lightweight mapping from the image to the text domain that enables conditioning pretrained Language Models on visual input. See below some example captions created with SITTA for sample images from the MS-COCO validation set.

![Captions](assets/sample_captions.png)

---

## Prerequisites

First clone the repository and create a conda environment with the required packages

    git clone https://git.bioinf.jku.at/ml/semantic-image-text-alignment.git
    cd semantic-image-text-alignment
    conda env create -f env.yml
    pip install -e .

## Using SITTA for Image Captioning

If you want to use SITTA for image captioning right away, you will first need to dump the token embeddings of the Llama model:

    python semantic_image_text_alignment/data_prep/prepare_embeddings.py --lm-only

Then, you can use SITTA for image captioning within a few lines of code:

    from transformers import pipeline, LlamaForCausalLM, LlamaTokenizer
    import torch
    from semantic_image_text_alignment.pipeline import SITTA
    from PIL import Image

    tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
    model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    device = torch.device(device, index=0)
    sitta = pipeline("sitta-image-to-text", model=model, tokenizer=tokenizer, device=device)
    test_img = Image.open("test_imgs/COCO_val2014_000000334321.jpg")
    sitta(test_img)

This will yield the following output:

> 'a white dog sitting on a bench with people sitting around it.'

By default SITTA uses the semantic mapping trained via least-squares on MS-COCO data and the RN50x64 CLIP encoder provided [here](https://github.com/openai/CLIP). All our pre-trained mappings are available in the ```models``` directory. For now our pipeline only supports the 7-billion-parameter Hugging Face version of Llama. We will add support for the other language models used in our paper in the future.

## Reproducing Results of our paper

First, download the MS-COCO data and the Flickr30k data and store them in ```data/coco``` and ```data/flickr30k```, respectively. You can download the MS-COCO data by

    cd datasets
    mkdir mscoco && cd mscoco
    wget http://images.cocodataset.org/zips/train2014.zip
    unzip train2014.zip
    wget http://images.cocodataset.org/zips/val2014.zip
    unzip val2014.zip
    cd ../..

Also, apply for access to the [Flickr30k dataset](https://shannon.cs.illinois.edu/DenotationGraph/) and save the images to ```./datasets/flickr30k```. Further, you will need to download the train/val/test set annotations for both datasets [here](https://cs.stanford.edu/people/karpathy/deepimagesent/) and save them to the ```annotations``` directory.

Parse both datasets by

    python semantic_image_text_alignment/data_prep/parse_coco.py
    python semantic_image_text_alignment/data_prep/parse_flickr30k.py

For computing the different mappings, you will first need to extract the CLIP and language embeddings for Llama (or other language models)

    python semantic_image_text_alignment/data_prep/prepare_embeddings.py

You can extract embeddings for the other language models by using the ```--lm``` argument. This will run for a while and extract token embeddings for all CLIP backbones and save them to ```data/```.

Next, you can train the mappings via *lexical matching* by running

    python semantic_image_text_alignment/train_lexical_matching.py

Before running the computation for the *external datasets* method, you will need to run

    python -m spacy download en_core_web_sm

This will download and install the English spaCy pipeline used for stop-word removal. Then execute

    python semantic_image_text_alignment/train_external_dataset.py --dataset mscoco

The ```--dataset``` argument can be set to either ```mscoco``` or ```flickr30k```. By default the mappings will be computed for Llama, but you can specify other language models via the ```--lm``` command line argument. Further, you can specify the fraction of the MS-COCO dataset to be used for the computation of the mapping using the ```--fraction``` command line argument. Currently our code supports ```Llama, T5-v1_1, FLAN-T5, GPT-J, GPT-JT```. If you want to create mappings for other language models, simply look up the respective huggingface identifier and add it to the code.

To run our retrieval experiments on mscoco, simply run

    python semantic_image_text_alignment/retrieval_eval.py --mscoco

Finally, you can generate captions for the MS-COCO datasets on the respective test splits via

    python semantic_image_text_alignment/generate_captions.py --k 8 --l 40 --mscoco --vis-encoder RN50x64 --train-method linear_reg --decoding greedy

For generating captions for the Flickr30k datasets, simply set ```--datadir data/flickr30k/imgs_test.pkl``` and ```--flickr```. The hyperparameters ```k``` and ```l``` denote the number of tokens provided in the prompt and the number of random permutations, respectively. Currently, decoding supports ```greedy```, ```sampling```, ```nucleus```, and ```topk```. In case you only have access to small GPUs (VRAM < 48GB), consider using 8-bit quantization by setting ```load_in_8bit=True``` while loading the model from the huggingface hub.

## Pretrained Mappings

We provide the pretrained mappings for all our results in the main paper in the ```models/``` directory. These include ordinary least squares and procrustes mappings for Llama, GPT-J, GPT-JT, FLAN-T5, and T5-v1_1.

## Results on Retrieval Task

The results for our retrieval task can be found in the ```results/retrieval``` directory.

## Generated Captions

You can find all generated captions, as well as the reported scores from our paper for all pretrained mappings and language models on both the MS-COCO and Flickr30k datasets, in the ```results/captioning``` directory. Each result consists of a json file containing the captions for each image in the respective test set, and an associated ```.out``` file containing all computed evaluation metrics. These metrics (BLEU, CIDEr-D, Rouge-L) are computed using the code from [here](https://github.com/tylin/coco-caption). The corresponding annotation files for computing these scores can be found in the ```annotations/``` directory.

## LICENSE

MIT LICENSE
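To make the least-squares "semantic mapping" idea above concrete, here is a rough NumPy sketch of fitting a linear map from CLIP embedding space to the language model's token-embedding space and using it for token retrieval; shapes and data are placeholders, and the repository's training scripts remain the reference implementation.

```python
# Illustrative only: fit a linear mapping W from CLIP space to LM token-embedding space
# by ordinary least squares, then use it to find the LM tokens closest to an image embedding.
import numpy as np

# Paired embeddings of the same concepts in both spaces (assumed to be precomputed,
# e.g. by the prepare_embeddings.py script mentioned above).
clip_emb = np.random.randn(10000, 1024)   # placeholder: CLIP-side embeddings
lm_emb = np.random.randn(10000, 4096)     # placeholder: Llama token embeddings

# Least-squares solution of  clip_emb @ W ≈ lm_emb
W, *_ = np.linalg.lstsq(clip_emb, lm_emb, rcond=None)

def tokens_for_image(clip_image_emb: np.ndarray, lm_vocab_emb: np.ndarray, k: int = 8):
    """Project an image embedding into LM space and return the k nearest token ids."""
    projected = clip_image_emb @ W
    sims = lm_vocab_emb @ projected / (
        np.linalg.norm(lm_vocab_emb, axis=1) * np.linalg.norm(projected) + 1e-8
    )
    return np.argsort(-sims)[:k]
```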
13
1
codedog-ai/codedog
https://github.com/codedog-ai/codedog
Code review powered by LLM
# 🐶 Codedog

[![Checkstyle](https://github.com/Arcadia822/codedog/actions/workflows/flake8.yml/badge.svg)](https://github.com/Arcadia822/codedog/actions/workflows/flake8.yml) [![Pytest](https://github.com/Arcadia822/codedog/actions/workflows/test.yml/badge.svg?branch=master)](https://github.com/Arcadia822/codedog/actions/workflows/test.yml) [![Coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/Arcadia822/ce38dae58995aeffef42065093fcfe84/raw/codedog_master.json)](https://github.com/Arcadia822/codedog/actions/workflows/test.yml)

Review your GitHub/GitLab PRs with ChatGPT.

![Design](docs/design.png)

## Configuration

Codedog currently loads its config from environment variables.

Settings:

| Config Name                   | Required | Default | Description                              |
| ----------------------------- | -------- | ------- | ---------------------------------------- |
| OPENAI_API_KEY                | Yes      |         | API key for calling the OpenAI GPT API   |
| AZURE_OPENAI                  | No       |         | Use Azure OpenAI GPT-3.5 if not blank    |
| AZURE_OPENAI_API_KEY          | No       |         | Azure OpenAI API key                     |
| AZURE_OPENAI_API_BASE         | No       |         | Azure OpenAI API base                    |
| AZURE_OPENAI_DEPLOYMENT_ID    | No       |         | Azure OpenAI deployment ID for GPT-3.5   |
| AZURE_OPENAI_EMBEDDING_DEP_ID | No       |         | Azure OpenAI deployment ID for embedding |

## Usage

### Github

For an example with GPT-4, check `example/github_review.py`.

### Server

We have a demo server for you to try. Run the server with:

```bash
poetry install --with http
poetry run demoserver
```

## Development

```shell
poetry install --with dev, test, http
```
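Because Codedog is configured entirely through the environment variables in the table above, a quick way to experiment is to set them before running the review example or the demo server. A minimal sketch follows; the variable names come from the table, and all values are placeholders, not real credentials.

```python
# Sketch: populate Codedog's environment-variable configuration programmatically.
# Variable names are taken from the configuration table above; values are placeholders.
import os

os.environ["OPENAI_API_KEY"] = "sk-..."            # required

# Uncomment to route requests through Azure OpenAI instead of openai.com:
# os.environ["AZURE_OPENAI"] = "1"                 # any non-blank value enables Azure
# os.environ["AZURE_OPENAI_API_KEY"] = "..."
# os.environ["AZURE_OPENAI_API_BASE"] = "https://<resource>.openai.azure.com"
# os.environ["AZURE_OPENAI_DEPLOYMENT_ID"] = "gpt-35-turbo"
# os.environ["AZURE_OPENAI_EMBEDDING_DEP_ID"] = "text-embedding-ada-002"
```

In practice you would export these in your shell or CI secrets rather than hard-coding them in a script.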
10
1
threeColorFr/LLMforDialogDataGenerate
https://github.com/threeColorFr/LLMforDialogDataGenerate
Generate dialog data from documents using LLM like ChatGLM2 or ChatGPT;利用ChatGLM2,ChatGPT等大模型根据文档生成对话数据集
# LLMforDialogDataGenerate Generate dialog data from documents using LLM like ChatGLM2 or ChatGPT; 利用ChatGLM2,ChatGPT等大模型根据文档生成对话数据集 生成效果如下: ``` A: 你好,我最近看到了一篇关于北京国内航线市场细分的文章,感觉挺有意思的。 B: 是啊,我也注意到了。这个研究旨在为航空公司进入北京航线市场、合理选择运营航线提供参考依据。 A: 文章提出了构建航线分类指标体系的方法,并利用因子分析对13个分类指标进行简化降维,提取了5个公因子。 B: 对,没错。然后,计算5个公因子的因子得分值,并以其为分类变量的系统聚类分析,以伪F统计量作为确定最佳分类数的指标。 A: 聚类结果表明,97条航线可以显著地分为7类细分市场。 B: 没错,这表明北京国内航线市场具有一定的特点和差异性,需要针对不同市场细分制定相应的运营策略。 A: 文章还对这些细分市场的特点进行了分析,为航空公司提供了有价值的参考。 B: 对,这个研究对于航空公司制定运营策略、合理分配资源非常有帮助。 A: 我也听说现在航空公司很难获取有效的起降时刻,这对我们的运营航线提出了很大的挑战。 B: 是的,而且现在市场竞争也很激烈,我们需要利用有限的时刻资源,选择收益较好的航线,提高我们的竞争力和盈利能力。 A: 那么有没有什么方法可以对北京国内航线市场进行细分,指导航空公司正确把握航线市场呢? B: 有一些研究成果可以作为参考,但是目前还没有专门针对民航航线市场细分的研究。 A: 我听说有一篇文献[5]主要依据客流量对航线进行分类,将北京国内航线市场分为快线市场、大客流市场、中客流市场和低客流市场。 B: 是的,这篇文献主要研究了客流量对航线的影响,但是没有涉及细分市场的问题。 A: 那有没有其他的方法可以对市场进行细分呢? B: 有一些研究使用聚类分析对航线市场上的旅客进行分类,但这些研究主要针对的是旅客类型和消费行为特征,没有涉及市场细分的概念。 A: 我听说有一篇文献[6]根据航线的市场集中度、客流量和收益水平,利用两阶段聚类法将美国国内O&D市场分成了7类,并对每个类别进行了投资组合分析。 B: 是的,这篇文献使用了一些方法对市场进行细分,但是这些方法主要是基于市场的因素,而不是旅客的特征。 ``` ## 整体步骤 1. 首先准备原始数据(pdf, docx, doc); 存放在Data文件夹下(也可以自己重命名),支持多级目录存放;doc需要先转换成docx,批量转见[about](https://github.com/threeColorFr/pdfOrdoc2txt-txt2dialog-llm/blob/main/Data/readme.md) 2. 转换原始数据到txt文件中 - 首先`cd pdfOrdoc_mining` - 执行`bash run.sh ../Data ../Data_txt`;其中两个参数分别表示`原始数据目录`和`生成的txt存放的目录` - 环境:`pdfminer`, `docx2txt`包 3. 根据txt文件中的document,利用LLM ChatGLM2生成对话数据 - 首先 `cd doc2conv_chatglm2` - 执行 `bash run.sh`; 注意修改batch_chatglm2.py文件中的参数:`call_for_all('../Data_txt', '../Data_txt_conv')`; Data_txt_conv文件夹是生成的对话数据存放目录 - 环境见 [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B); chatglm2-6b是本地加载时存放模型的文件夹,详情见[about](https://github.com/threeColorFr/pdfOrdoc2txt-txt2dialog-llm/blob/main/chatglm2-6b/readme.md) ps: 也可以利用chatgpt生成,见`doc2conv_chatgpt`文件夹
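Step 2 of the pipeline above converts PDF/DOCX documents to plain text with `pdfminer` and `docx2txt`. The following is a simplified sketch of that conversion stage, not the repository's actual `run.sh` logic; the directory names mirror the README's `Data` and `Data_txt` folders.

```python
# Simplified sketch of the document-to-text step described above.
# Uses pdfminer.six and docx2txt, the same libraries the repo lists as requirements.
from pathlib import Path

import docx2txt
from pdfminer.high_level import extract_text


def convert_documents(src_dir: str, dst_dir: str) -> None:
    """Walk src_dir recursively and write one .txt file per PDF/DOCX into dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in src.rglob("*"):
        if path.suffix.lower() == ".pdf":
            text = extract_text(str(path))
        elif path.suffix.lower() == ".docx":
            text = docx2txt.process(str(path))
        else:
            continue  # skip anything that is neither PDF nor DOCX
        (dst / f"{path.stem}.txt").write_text(text, encoding="utf-8")


if __name__ == "__main__":
    convert_documents("Data", "Data_txt")  # directory names taken from the README
```

The extracted `.txt` files are then fed to ChatGLM2 or ChatGPT in step 3 to generate the dialogue pairs.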
49
2
ReVanced-Extended-Community/Patches-Documentation
https://github.com/ReVanced-Extended-Community/Patches-Documentation
Additional community documentation with screenshots for the various ReVanced Extended patches.
# Patches-Documentation <details><summary> ## Suggested Versions </summary> ***Recommended application versions to patch for best compatibilty with patches.*** <details><summary> #### YouTube - Versions </summary> ``` 18.29.38 ``` ``` 18.27.36 ``` ``` 18.25.40 ``` ``` 18.24.37 ``` ``` 18.23.36 ``` ``` 18.22.37 ``` ``` 18.21.35 ``` ``` 18.20.39 ``` </details> <details><summary> #### YouTube Music - Versions </summary> ``` all ``` </details> <details><summary> #### Reddit - Versions </summary> ``` all ``` </details> <details><summary> #### MicroG - Versions </summary> ``` all ``` </details></details> <details><summary> ## Patches with Screenshots </summary> ***List of patches with screenshots. You may need to scroll to view the complete table.*** <details><summary> #### YouTube </summary> | Patch | Description | Related Screenshots | |:--------:|:--------------:|:-----------------:| | `add-splash-animation` | Adds splash animation, which was removed in YT v18.19.36+. This patch won't work with the `custom-branding-icon` patches. | [Screenshots](https://imgur.com/a/Ls6167p) | | `bypass-ambient-mode-restrictions` | Allows ambient mode to be on while battery saver mode is enabled. | [Screenshots](https://imgur.com/a/qjNlGP3) | | `change-homepage` | Defaults to subscription tab instead of home when the app opens. | [Screenshots](https://imgur.com/a/Xxeq0XD) | | `custom-branding-icon-mmt` | Changes the app launcher icon to MMT. | [Screenshots](https://imgur.com/1h94NCw) | | `custom-branding-icon-revancify-blue` | Changes the app launcher icon to Revancify Blue. | [Screenshots](https://imgur.com/EjJOlYq) | | `custom-branding-icon-revancify-red` | Changes the app launcher icon to Revancify Red. | [Screenshots](https://imgur.com/BPgRMHt) | | `custom-branding-youtube-name` | Rename the app to the name specified in the options.json file. (Default: ReVanced Extended) | [Screenshots](https://imgur.com/a/uYAWf65) | | `custom-double-tap-length` | Add custom 'double-tap to seek' values that are specified in the options.json file. | [Screenshots](https://imgur.com/a/S1fyX9A) | | `custom-package-name` | Uses the package name specified in the options.json file for the non-root build. | [Screenshots](https://imgur.com/a/DY0EMNI) | | `custom-seekbar-color` | Change seekbar color in video player and video thumbnails. | [Screenshots](https://imgur.com/a/wUBZNdH) | | `custom-video-speed` | Adds custom video speed options. | [Screenshots](https://imgur.com/a/7dE1QiH) | | `default-video-quality` | Adds ability to set default video quality settings. | [Screenshots](https://imgur.com/a/hqY3SiN) | | `default-video-speed` | Adds ability to set default video speed settings. | [Screenshots](https://imgur.com/a/x1YmkfG) | | `disable-auto-captions` | Disables forced auto-captions. | [Screenshots](https://imgur.com/a/rYqTjk1) | | `disable-haptic-feedback` | Adds options to disable haptic feedback. | [Screenshots](https://imgur.com/a/c0og6Ay) | | `disable-hdr-video` | Disable HDR video. | [Screenshots](https://imgur.com/a/pbVp2g3) | | `disable-landscape-mode` | Disable landscape mode when entering fullscreen. | [Screenshots](https://imgur.com/a/tJiXrmf) | | `disable-quic-protocol` | Disable CronetEngine's QUIC protocol. | [Screenshots](https://imgur.com/a/CPNzSFq) | | `disable-startup-shorts-player` | Disables Shorts from resuming when launching YouTube. | [Screenshots](https://imgur.com/a/GmsP5oK) | | `enable-compact-controls-overlay` | Enables a compact control overlay in fullscreen. 
| [Screenshots](https://imgur.com/a/gVc4uMQ) | | `enable-debug-logging` | Adds debugging options. | [Screenshots](https://imgur.com/a/7mNOSsa) | | `enable-external-browser` | Opens URLs outside the app in an external browser. | [Screenshots](https://imgur.com/a/Nm2mvzd) | | `enable-minimized-playback` | Enables picture-in-picture and background playback. | [Screenshots](https://imgur.com/a/ET3HcEx) | | `enable-new-comment-popup-panels` | Enables a new type of comment popup panel in the Shorts player. | [Screenshots](https://imgur.com/a/0UZlccZ) | | `enable-new-splash-animation` | Enables a new type of splash animation on Android 12+ devices. | [Screenshots](https://imgur.com/a/dtLaOYP) | | `enable-new-thumbnail-preview` | Enables a new type of seek preview. | [Screenshots](https://imgur.com/a/lv2AxVP) | | `enable-old-quality-layout` | Enables the original quality flyout menu. | [Screenshots](https://imgur.com/a/v7HyezL) | | `enable-open-links-directly` | Skips over redirection URLs to external links. | [Screenshots](https://imgur.com/a/lMJqViC) | | `enable-seekbar-tapping` | Enables tap-to-seek on the seekbar of the video player. | [Screenshots](https://imgur.com/a/PtA0tb3) | | `enable-tablet-mini-player` | Enables the tablet mini-player layout. | [Screenshots](https://imgur.com/a/mLjsifI) | | `enable-tablet-navigation-bar` | Enables the tablet navigation bar layout. | [Screenshots](https://imgur.com/a/KUi3w7f) | | `enable-timestamps-speed` | Adds the current video speed in brackets next to the current time. | [Screenshots](https://imgur.com/a/QZoeBfT) | | `enable-wide-search-bar` | Replaces the search icon with a wide search bar. This will hide the YouTube logo when active. | [Screenshots](https://imgur.com/a/wG3Mx3S) | | `force-hide-player-button-background` | Remove the dark circle surrounding the pause/play button and the next and previous buttons/arrows. | [Screenshots](https://imgur.com/a/4nejeVc) | | `force-opus-codec` | Forces the opus codec for audios. | [Screenshots](https://imgur.com/a/coCGCKS) | | `force-premium-heading` | Forces the YouTube premium logo on the homepage. | [Screenshots](https://imgur.com/a/wcuugDV) | | `force-vp9-codec` | Forces the VP9 codec for videos. | [Screenshots](https://imgur.com/a/Rl0u1Z4) | | `header-switch` | Add switch to change the YouTube logo on the homepage. | [Screenshots](https://imgur.com/a/bPFJif1) | | `hide-account-menu` | Allows you to hide account menu elements. | [Screenshots](https://imgur.com/a/MCvbnQu) | | `hide-auto-player-popup-panels` | Hides automatic popup panels when opening a playlist/livestream. | [Screenshots](https://imgur.com/a/R3BHdAn) | | `hide-autoplay-button` | Hides the autoplay toggle in the video player. | [Screenshots](https://imgur.com/a/9S3NUVx) | | `hide-autoplay-preview` | Hides the autoplay preview container in fullscreen. | [Screenshots](https://imgur.com/a/OhxdFY9) | | `hide-button-container` | Adds options to hide action buttons under a video (like, clip, remix, etc). | [Screenshots](https://imgur.com/a/pB2DkdJ) | | `hide-captions-button` | Hides the captions button in the video player. | [Screenshots](https://imgur.com/a/iKc0ARk) | | `hide-cast-button` | Hides the cast button in the video player. | [Screenshots](https://imgur.com/a/WNwI6Ve) | | `hide-category-bar` | Hides the category bar at the top of feeds. | [Screenshots](https://imgur.com/a/P7H2Edn) | | `hide-channel-avatar-section` | Hides the channel avatar section in the subscription tab. 
| [Screenshots](https://imgur.com/a/e0bU6sz) | | `hide-channel-watermark` | Hides the creator watermarks on videos. | [Screenshots](https://imgur.com/a/Hlj6967) | | `hide-collapse-button` | Hides the collapse button in the video player. | [Screenshots](https://imgur.com/a/bI1Fuoh) | | `hide-comment-component` | Adds options to hide components related to comments. | [Screenshots](https://imgur.com/a/hTXpbSV) | | `hide-crowdfunding-box` | Hides the crowdfunding box between the player and video description. | [Screenshots](https://imgur.com/a/WJlGhpq) | | `hide-description-components` | Hides video description components. | [Screenshots](https://imgur.com/a/xhIJoD6) | | `hide-double-tap-overlay-filter` | Prevents the screen from darkening when double-tapping. | [Screenshots](https://imgur.com/a/ualcmms) | | `hide-email-address` | Hides the email address and handle in the account menu and switcher. | [Screenshots](https://imgur.com/a/MfWO2Rr) | | `hide-endscreen-cards` | Hides the suggested video cards at the end of a video in fullscreen. | [Screenshots](https://imgur.com/a/50psTcB) | | `hide-endscreen-overlay` | Hides endscreen overlay when swiping up while in fullscreen and at the end of videos. | [Screenshots](https://imgur.com/a/t8x32O6) | | `hide-feed-flyout-panel` | Hides feed flyout panel components. | [Screenshots](https://imgur.com/a/nf1UPHc) | | `hide-filmstrip-overlay` | Hides the filmstrip overlay when holding down on the seekbar. | [Screenshots](https://imgur.com/a/0f2sH10) | | `hide-floating-microphone` | Hides the floating microphone button above the keyboard. | [Screenshots](https://imgur.com/a/PX54fRG) | | `hide-fullscreen-panels` | Hides the video title and quick actions in fullscreen. And prevents the description, comments, live chat, and playlist panels from showing while in fullscreen. | [Screenshots](https://imgur.com/a/5e2Lxrx) | | `hide-general-ads` | Removes ads in feeds and other areas. | [Screenshots](https://imgur.com/a/UfuiO7s) | | `hide-info-cards` | Hides info-cards in videos. | [Screenshots](https://imgur.com/a/yKKXVDP) | | `hide-layout-components` | Hides general layout components. | [Screenshots](https://imgur.com/a/5BP009b) | | `hide-load-more-button` | Hides the button under videos that loads similar videos. | [Screenshots](https://imgur.com/a/jihDei9) | | `hide-mix-playlists` | Hides mix playlists from the home feed and video player. | [Screenshots](https://imgur.com/a/hzpefwO) | | `hide-music-button` | Hides the YouTube Music button in the video player. | [Screenshots](https://imgur.com/a/KYu3bMj) | | `hide-navigation-buttons` | Adds options to hide or change navigation buttons. | [Screenshots](https://imgur.com/a/TEHIhKt) | | `hide-navigation-label` | Hides the labels under the navigation buttons. | [Screenshots](https://imgur.com/a/TzHnK8l) | | `hide-pip-notification` | Disable the PiP notification when you first launch PiP mode. | [Screenshots](https://imgur.com/a/ZEPIdOW) | | `hide-player-button-background` | Remove the dark circle surrounding the pause/play button and the next and previous buttons/arrows. | [Screenshots](https://imgur.com/a/7l2ExDA) | | `hide-player-flyout-panel` | Adds options to hide player flyout panel components. | [Screenshots](https://imgur.com/a/ZYc7wRe) | | `hide-player-overlay-filter` | Prevent the player from darkening when you tap to reveal the player controls. | [Screenshots](https://imgur.com/a/U6bQxcM) | | `hide-previous-next-button` | Hides the previous and next buttons from the player controls. 
| [Screenshots](https://imgur.com/a/WNp9p4t) | | `hide-quick-actions` | Adds options to hide the quick action buttons beneath the seekbar while in fullscreen. | [Screenshots](https://imgur.com/a/PADAsaL) | | `hide-seek-message` | Hides the 'Slide left or right to seek' message container. | [Screenshots](https://imgur.com/a/rQyBYg5) | | `hide-seekbar` | Hides the seekbar in the video player and video thumbnails. | [Screenshots](https://imgur.com/a/qkVEocI) | | `hide-shorts-component` | Adds options to hide Shorts in feeds and Shorts components. | [Screenshots](https://imgur.com/a/qbJO6yf) | | `hide-snack-bar` | Hides snack bar popups. | [Screenshots](https://imgur.com/a/VBkD9LN) | | `hide-speed-overlay` | Hides speed overlay when holding down in the player. | [Screenshots](https://imgur.com/a/mQ9uXn7) | | `hide-suggested-actions` | Hides the suggested actions bar inside the player. | [Screenshots](https://imgur.com/a/CQ1gJS7) | | `hide-suggestions-shelf` | Hides the suggestions shelves in feeds. | [Screenshots](https://imgur.com/a/mPOKZru) | | `hide-time-stamp` | Hides timestamp in the video player. | [Screenshots](https://imgur.com/a/9TxGuEE) | | `hide-tooltip-content` | Hides the tooltip box that appears on first install. | [Screenshots](https://imgur.com/a/OAZ30Z5) | | `hide-trending-searches` | Hides trending searches in the search bar. | [Screenshots](https://imgur.com/a/1VjVi3A) | | `hide-video-ads` | Removes ads in the video player. | [Screenshots](https://imgur.com/a/Shr7JuB) | | `language-switch` | Adds language switch toggle. | [Screenshots](https://imgur.com/a/ERg1coh) | | `layout-switch` | Adds the option to switch between tablet and phone layouts. | [Screenshots](https://imgur.com/a/16YQCJj) | | `materialyou` | Applies the MaterialYou theme for Android 12+. | [Screenshots](https://imgur.com/a/CzspOyn) | | `microg-support` | Allows the app to run without root using MicroG and under a different package name. | [Screenshots](https://imgur.com/a/HDh7OiC) | | `optimize-resource` | Removes duplicate resources to reduce file size. | [Screenshots](https://imgur.com/a/n4KuROD) | | `overlay-buttons` | Adds overlay buttons to the player (download, speed controls, amd copy link). | [Screenshots](https://imgur.com/a/U6JexYB) | | `return-youtube-dislike` | Shows the dislike count of videos using the Return YouTube Dislike API. | [Screenshots](https://imgur.com/a/mWj0eoj) | | `settings` | Applies mandatory patches to implement ReVanced settings into the application. | [Screenshots](https://imgur.com/a/qZJN1p0) | | `sponsorblock` | Integrates SponsorBlock, which allows skipping undesired video segments, such as sponsored content. | [Screenshots](https://imgur.com/a/N7Z0CjM) | | `spoof-app-version` | Adds the ability to trick YouTube into thinking you are using a different app version. Useful if you want the old YouTube UI. | [Screenshots](https://imgur.com/a/x5E6fF0) | | `swipe-controls` | Adds volume and brightness swipe controls. | [Screenshots](https://imgur.com/a/76uY3A9) | | `theme` | Change the app's theme to the values specified in options.json file (Default: Amoled black). | [Screenshots](https://imgur.com/a/4gsDQJS) | | `translations` | Add Crowdin translations for YouTube. | [Screenshots](https://imgur.com/a/R7Q1k2h) | </details> <details><summary> #### YouTube Music </summary> | Patch | Description | Related Screenshots | |:--------:|:--------------:|:-----------------:| | `amoled` | Applies an amoled black theme to flyout panels. 
| [Screenshots](https://imgur.com/a/PXnpWqK) | | `background-play` | Enables background playback. | [Screenshots](https://imgur.com/a/gZki03j) | | `bitrate-default-value` | Set the audio quality to 'Always High' when you first install the app. | [Screenshots](https://imgur.com/a/sL2k1m4) | | `certificate-spoof` | Spoofs the YouTube Music certificate for Android Auto. | [Screenshots](https://imgur.com/a/wYqUq6J) | | `custom-branding-icon-mmt` | Changes the app launcher icon to MMT. | [Screenshots](https://imgur.com/K96jJ52) | | `custom-branding-icon-revancify-blue` | Changes the app launcher icon to Revancify Blue. | [Screenshots](https://imgur.com/1ijcyHr) | | `custom-branding-icon-revancify-red` | Changes the app launcher icon to Revancify Red. | [Screenshots](https://imgur.com/wwUsmiW) | | `custom-branding-music-name` | Rename the app to the name specified in the options.json file. | [Screenshots](https://imgur.com/a/ExSTD82) | | `custom-package-name` | Uses the package name specified in the options.json file for the non-root build. | [Screenshots](https://imgur.com/a/99sBIlq) | | `disable-auto-captions` | Disables forced auto captions. | [Screenshots](https://imgur.com/a/4PKAy9o) | | `enable-black-navigation-bar` | Sets the navigation bar color to black. | [Screenshots](https://imgur.com/a/UK1YGZP) | | `enable-color-match-player` | Matches the color of the mini player and the fullscreen player. | [Screenshots](https://imgur.com/a/F5mib6W) | | `enable-compact-dialog` | Enable compact flyout on phone layouts. | [Screenshots](https://imgur.com/a/NstyglG) | | `enable-custom-filter` | Adds a custom filter to hide specified layout components. | [Screenshots](https://imgur.com/a/U308EWB) | | `enable-debug-logging` | Adds debugging options. | [Screenshots](https://imgur.com/a/sqPwaM7) | | `enable-dismiss-queue` | Adds 'Dismiss queue' option to flyout menu. (YT Music v6.04.51+) | [Screenshots](https://imgur.com/a/12LYPAi) | | `enable-force-minimized-player` | Keep player minimized even after switching tracks. | [Screenshots](https://imgur.com/a/lqAV44p) | | `enable-force-shuffle` | Keeps shuffle enabled even after switching tracks. | [Screenshots](https://imgur.com/a/DWElbFu) | | `enable-landscape-mode` | Enables entry into landscape mode by screen rotation on the phone. | [Screenshots](https://imgur.com/a/1ZUpMZg) | | `enable-minimized-playback` | Enables minimized playback on Kids music. | [Screenshots](https://imgur.com/a/6uOVWJp) | | `enable-new-layout` | Enables new player layouts. (YT Music v5.47.51+) | [Screenshots](https://imgur.com/a/LkvqOKO) | | `enable-old-style-miniplayer` | Return the mini-player to old style. (for YT Music v5.55.53+) | [Screenshots](https://imgur.com/a/jH46Cvo) | | `enable-opus-codec` | Enable opus codec when playing audio. | [Screenshots](https://imgur.com/a/uRdhxbI) | | `enable-sleep-timer` | Adds a sleep timer option to flyout menu. | [Screenshots](https://imgur.com/a/cwEWZQi) | | `enable-zen-mode` | Adds a grey tint to the video player to reduce eye strain. | [Screenshots](https://imgur.com/a/KX7jYRi) | | `exclusive-audio-playback` | Enables the option to play music without video. | [Screenshots](https://imgur.com/a/WdZHw3M) | | `hide-button-shelf` | Hides the category shelf from homepage and explorer. | [Screenshots](https://imgur.com/a/h0408Yl) | | `hide-carousel-shelf` | Hides the carousel shelf from the homepage and explore tab. | [Screenshots](https://imgur.com/a/RkAIZkF) | | `hide-cast-button` | Hides the cast button in the video player and mini-player. 
| [Screenshots](https://imgur.com/a/NRNKGQG) | | `hide-category-bar` | Hides the music category bar at the top of the homepage. | [Screenshots](https://imgur.com/a/dCWHZmu) | | `hide-get-premium` | Removes all "Get Premium" evidences from the avatar menu. | [Screenshots](https://imgur.com/a/xUfdCHx) | | `hide-music-ads` | Hides ads before playing music. | [Screenshots](https://imgur.com/a/HCIlRvI) | | `hide-navigation-label` | Hide navigation button labels. | [Screenshots](https://imgur.com/a/G9YE9kY) | | `hide-new-playlist-button` | Hide the New Playlist button in the Library tab. | [Screenshots](https://imgur.com/a/RaANMid) | | `hide-playlist-card` | Hides the suggested playlist card from the homepage. | [Screenshots](https://imgur.com/a/W6pxiuQ) | | `hide-taste-builder` | Hides the 'Tell us which artists you like" card from homepage. | [Screenshots](https://imgur.com/a/vLXUsph) | | `hide-upgrade-button` | Hides upgrade button from navigation bar and upgrade banner from the homepage. | [Screenshots](https://imgur.com/a/JMuhsrX) | | `microg-support` | Allows the app to run without root using MicroG and under a different package name. | [Screenshots](https://imgur.com/a/HDh7OiC) | | `optimize-resource` | Remove unnecessary resources to reduce file size. | [Missing]() | | `remember-video-quality` | Remember the video quality whenever you change it. | [Screenshots](https://imgur.com/a/olwfVCf) | | `settings` | Adds settings for ReVanced Extended to YouTube Music. | [Screenshots](https://imgur.com/a/prYgamZ) | | `share-button-hook` | Adds the option to make the 'Share' button function as an external download button. | [Screenshots](https://imgur.com/a/HrtxSlV) | | `spoof-app-version` | Spoof the YouTube Music client version. Allows Canadian users to bypass the Radio-only restriction. | [Screenshots](https://imgur.com/a/oJ1Y60L) | | `translations` | Add Crowdin translations for YouTube Music. | [Screenshots](https://imgur.com/a/tVIibVh) | </details> <details><summary> #### Reddit </summary> | Patch | Description | Related Screenshots | |:--------:|:--------------:|:-----------------:| | `disable-screenshot-popup` | Disables the popup that shows up when taking a screenshot. | [Screenshots](/assets/reddit/disable-screenshot-popup/README.md) | | `hide-ads` | Removes ads from Reddit. | [Screenshots](/assets/reddit/hide-ads/README.md) | | `hide-create-button` | Hide create button at navigation bar. | [Screenshots](/assets/reddit/hide-create-button/README.md) | | `hide-discover-button` | Hides the discover button from the navigation bar. | [Screenshots](/assets/reddit/hide-discover-button/README.md) | | `open-links-directly` | Skips over redirection URLs to external links. | [Screenshots](/assets/reddit/open-links-directly/README.md) | | `open-links-externally` | Open links outside of the app directly in your browser. | [Screenshots](/assets/reddit/open-links-externally/README.md) | | `premium-icon-reddit` | Unlocks premium Reddit app icons. | [Screenshots](/assets/reddit/premium-icon-reddit/README.md) | | `reddit-settings` | Adds ReVanced settings to Reddit. | [Screenshots](/assets/reddit/reddit-settings/README.md) | | `sanitize-sharing-links` | Removes (tracking) query parameters from the URLs when sharing links. 
| [Screenshots](/assets/reddit/sanitize-sharing-links/README.md) | </details> <details><summary> #### MicroG </summary> | 💊 Patch | 📜 Description | 🏹 Target Version | |:--------:|:--------------:|:-----------------:| | `custom-branding-microg-name` | Renames the app to the name specified in options.json file. | [Screenshots](/assets/microg/custom-branding-microg-name/README.md) | | `custom-branding-microg-revancify-blue` | Changes the app launcher icon to Revancify Blue. | [Screenshots](/assets/microg/custom-branding-icon-revancify-blue/README.md) | | `custom-branding-microg-revancify-red` | Changes the app launcher icon to Revancify Red. | [Screenshots](/assets/microg/custom-branding-icon-revancify-red/README.md) | | `hide-icon-from-launcher` | Hides the app icon from the launcher. | [Screenshots](/assets/microg/hide-icon-from-launcher/README.md) | </details>
97
2
codecov/worker
https://github.com/codecov/worker
Code for Background Workers of Codecov
# worker

[![CircleCI](https://dl.circleci.com/status-badge/img/gh/codecov/worker/tree/main.svg?style=svg)](https://dl.circleci.com/status-badge/redirect/gh/codecov/worker/tree/main) [![worker](https://codecov.io/github/codecov/worker/coverage.svg?branch=master&token=BWTOrjBaE5)](https://codecov.io/github/codecov/worker)

> We believe that everyone should have access to quality software (like Sentry), that’s why we have always offered Codecov for free to open source maintainers.
>
> By open sourcing Codecov, we’re not only joining the community that’s supported us from the start — but also want to make sure that every developer can contribute to and build on the Codecov experience.

Code for the background workers of Codecov. This is built on top of the `celery` async framework.

## Quickstart

### Setting Virtual Env

Before starting, we suggest using a virtual environment for this project; it eases testing, among other things. If you already know how to do it (and how you like it), just do what you already do. If you don't know how to do it, we suggest the following steps:

- `python3 -m venv workerenv`
- `cd workerenv`
- `source bin/activate`

Then clone this project while inside the `workerenv` folder.

### Installing dependencies

Make sure to:

- Install Rust. See https://www.rust-lang.org/tools/install
- Have access to any private codecov repos listed in the requirements.txt file. See [here](https://codecovio.atlassian.net/wiki/spaces/ENG/pages/1270743045/Setup) for help on getting that set up.

To install the dependencies, run

```
pip install -r requirements.txt
```

### Environment variables

In order to successfully run `make push`, you'll need to define the `CODECOV_WORKER_GCR_REPO_BASE` variable. See its use in the [`Makefile`](Makefile) to understand what it's used for. An example is `gcr.io/your-project-here/codecov`.

Codecov internal users, see [the env setup documentation](https://www.notion.so/sentry/Environment-variables-for-building-pushing-Docker-images-locally-3159e90c5e6f4db4bfbde8800cdad2c0?pvs=4) for our canonical defaults.

### Running Tests

Then, try to run the tests to see if the code is working. First, get a postgres database running; anything is fine. One option is to spin up a `postgres` docker container (`docker run -d -p 5432:5432 postgres:9.6.16`). Then do

```
make test
```

### Linting and Import Sorts

Install/run `black` and `isort` using

```
make lint
```

### Getting into docker

To build this into a docker image:

```
make build.base
make build
```

To run this as part of the whole infrastructure, you will be better served by getting the main codebase and running `docker-compose up` from there.

### Getting into enterprise mode

To generate an enterprise build, do

```
make build.enterprise
```

## Versioning

The source of truth for which version we use is the file `VERSION`. Every script that tags things with versions consults that file to see what version it is. That file is manually updated. We use semantic versioning. If you are unsure whether you need to change that version at a given moment, the answer is that you probably don't. We have multiple deploys on the same version, and only change it when we want to cut a version to enterprise.

## Upgrading Dependencies

This repository uses `pip-tools` to manage dependencies, so make sure you've installed it with `pip install pip-tools`.

To add or update dependencies, change `requirements.in`, then run

```
make update-requirements
```

Do not change `requirements.txt` directly.

## Deploying

To deploy, all you have to do is create a release (preferred option) or push a tag with the pattern production-year-month-number. More specifically, it needs to follow the regex:

```
/^prod(uction)?-[0-9]{4}-[0-9]{2}-[0-9]{3,4}/
```

Which means, for example:

- `production-2020-11-0001` - First deploy of 2020-11
- `production-2020-12-0015` - Fifteenth deploy of 2020-12
- `prod-2020-12-015` - Fifteenth deploy of 2020-12

Notice that, while the dates are really useful for understanding when code was deployed, they don't affect whether or not your release will go to production. If your tag matches the pattern, regardless of what date is tagged, that version will go to production.

To create releases on GitHub, you can go to https://github.com/codecov/worker/releases or use the GitHub CLI. To push tags, you can follow the instructions at https://git-scm.com/book/en/v2/Git-Basics-Tagging

### After deploying

If you are deploying or helping with a deploy, make sure to:

1. Watch the logs (on Datadog and Sentry)
2. Monitor error rates and timing graphs on the dashboards we have set up

As the deployer, it is your responsibility to make sure the system is working as expected post-deploy. If not, you might need to do a rollback.

## Code Structure

Before getting into changing the code, try to use the following structure (feel free to suggest changes; some bits of it are based on our experience):

- `helpers` - The "low-level" pieces of code that don't depend on database models or any other heavy business logic. These should preferably not depend on anything else in the codebase.
- `database` - Contains the database models. They can use logic from `helpers` and other models, but nothing else. Try to avoid any heavy logic in this code.
- `services` - Heavier pieces of logic that don't talk to the external world. They can use `helpers` and `database` logic, and each other, but make sure that if a service _bravo_ depends on service _alpha_, then _alpha_ should not depend on any part of _bravo_.
- `tasks` - The parts of the code that talk to the external world: they contain the tasks that are triggered by external containers. They can depend on `helpers`, `models` and `services`, but NEVER on another task (except to schedule it). If some code is common to two tasks, try to put it in a `service` or somewhere else.

You will also notice some usage of the package https://github.com/codecov/shared for various things. The logic there is used by both this codebase and `codecov/api`. Feel free to make changes there, but don't do anything that will break compatibility too hard.

## Contributing

This repository, like all of Codecov's repositories, strives to follow our general [Contributing guidelines](https://github.com/codecov/contributing). If you're considering making a contribution to this repository, we encourage review of our Contributing guidelines first.
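The deploy-tag pattern above is easy to sanity-check before pushing. Below is a small, hypothetical helper (not part of this repository) that applies the same regex.

```python
# Hypothetical helper (not part of the worker repo): check a tag against the
# production deploy pattern quoted in the Deploying section above.
import re

DEPLOY_TAG = re.compile(r"^prod(uction)?-[0-9]{4}-[0-9]{2}-[0-9]{3,4}")


def is_production_tag(tag: str) -> bool:
    """Return True if the tag would trigger a production deploy."""
    return DEPLOY_TAG.match(tag) is not None


assert is_production_tag("production-2020-11-0001")
assert is_production_tag("prod-2020-12-015")
assert not is_production_tag("release-2020-12-015")
```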
36
3
cssmagic/Nuxt-Notes
https://github.com/cssmagic/Nuxt-Notes
Nuxt study notes, covering Nuxt 3 and Nuxt 2.
# Nuxt-Notes

> Study notes on Nuxt, covering Nuxt 3 and Nuxt 2.
>
> Some sensitive information has been replaced with `***`, which may make some links inaccessible; sorry for the inconvenience.

## Overview

* Nuxt 2 is based on Vue 2 and Nuxt 3 is based on Vue 3, so pick the version that suits you.
* Nuxt 2 website: https://v2.nuxt.com/ (formerly http://nuxtjs.org/ )
* Nuxt 3 website: https://nuxt.com/

## Nuxt 3 notes

> Continuously updated.

* [Getting started and first impressions](https://github.com/cssmagic/Nuxt-Notes/issues/8)
* [Pages and routing](https://github.com/cssmagic/Nuxt-Notes/issues/9)
* [Middleware](https://github.com/cssmagic/Nuxt-Notes/issues/10)
* [Plugins](https://github.com/cssmagic/Nuxt-Notes/issues/11)
* ……

## Nuxt 2 notes

> The Nuxt 2 notes are no longer updated.

* [Conventions](https://github.com/cssmagic/Nuxt-Notes/issues/1)
* [File organization](https://github.com/cssmagic/Nuxt-Notes/issues/2)
* [Dependencies](https://github.com/cssmagic/Nuxt-Notes/issues/3)
* [CSS development](https://github.com/cssmagic/Nuxt-Notes/issues/4)
* [Component development](https://github.com/cssmagic/Nuxt-Notes/issues/5)
* [Middleware](https://github.com/cssmagic/Nuxt-Notes/issues/6)
* [Plugins](https://github.com/cssmagic/Nuxt-Notes/issues/7)

***

## License

© Creative Commons BY-NC-ND 4.0
18
0
yurisuika/Zehn
https://github.com/yurisuika/Zehn
A Steam skin based on Windows 10's Metro/Fluent transitional design language.
# Zehn Zehn is a Steam skin based on Windows 10's Metro/Fluent transitional design language. It is currently a WIP, so some sections are not complete and may change at any time. Why the name "Zehn"? Well, I wanted to make my own attempt to match Steam to a stock Windows 10 experience. The design language behind this OS is known as MDL2. It isn't quite the Metro of Windows 8 or the Fluent of Windows 11. Rather, it is a transitional design language that merges the sharp lines and minimalistic icons of Metro with effects such as Acrylic and Reveal that would later stay in Fluent. Unfortunately, some iconography of Fluent came into Windows 10 over the years through updates. If you're like me, you've managed to stop those updates from happening yet still be on 22H2. Both of these names were already used for other skins, but still neither quite fit anyways. So, I took the German word for "ten", as it also sounds like the Japanese "禅". My mind is clear knowing that this theme fits seamlessly into a Windows 10 environment. Zehn is partly based on the [Metro](https://steamcommunity.com/groups/metroskin) skin, however everything has been re-made from the ground up in CSS. Reveal effects are courtesy of [FluentReveal](https://github.com/aleversn/FluentReveal). Thanks to [AikoMidori](https://github.com/AikoMidori/SteamSkins) for my introduction to JS in moving some classes around. #### Installation If you are using [SteamFriendsPatcher](https://github.com/PhantomGamers/SFP/releases), ensure that JavaScript support is enabled. This skin uses JS to move several elements around. As well, button reveal effects are done with JS. Please review the JS code before injection if you have received this skin from elsewhere. Extract the root folder `Zehn` and place it in `~/Steam/steamui/skins`, and then select the skin in SFP. #### Customization In the `~/Zehn/css/config.css` file you will find several configurable options, such as those to remove certain buttons like the Big Picture, VR, Add Game, Announcements, and such. As well, you can configure some colors. Zehn has separate background colors for settings windows and main client windows. As well, there is an overall accent color and the standard in-game and online colors. In default colors, the in-game is styled to match the accent. If you haven't already noticed, this is themed after [fauux's site](https://fauux.neocities.org/). #### Things of Note In the library page, the divider has a width of 0.1px. Upon hover, it will expand with a delay, which you can then use to resize the sidebar. If things crash because of the class moving, press F5 to refresh the client. This will hopefully be resolved soon. I believe it has to do with the downloads progress. #### To-Do - Most things on the overlay - Library Page - Downloads Page - Notifications Dropdown - Account Dropdown - Figure out to implement Reveal on stuff that isn't initially loaded #### Previews Please note that the library in these previews does not yet exist. This is from my old skin, but it will return very much the same to appear as so. ![zehn](https://cdn.discordapp.com/attachments/729991202778251317/1128805252137754705/zehn.png) ![zehn settings](https://cdn.discordapp.com/attachments/729991202778251317/1128817047690813440/zehn-settings.png) ![zehn chat](https://cdn.discordapp.com/attachments/729991202778251317/1128813573045506198/zehn-chat.gif)
10
0
lisongkun/hygge-imaotai
https://github.com/lisongkun/hygge-imaotai
i茅台app接口自动化csharp wpf实现,挂机windows服务器每日自动预约, (╯°□°)╯︵ ┻━┻ 预约启动!
<p align="center"> <img src="Resources/250white.png" /> </p> <p align="center"> <a href="https://github.com/lisongkun/hygge-imaotai/stargazers"><img src="https://img.shields.io/github/stars/lisongkun/hygge-imaotai.svg"></a> <a href="https://github.com/lisongkun/hygge-imaotai/blob/master/LICENSE"> <img src="https://img.shields.io/github/license/lisongkun/hygge-imaotai.svg"> </a> <a href="https://visualstudio.microsoft.com"> <img src="https://badgen.net/badge/icon/visualstudio?icon=visualstudio&label" /> </a> <a href="http://donate.lisok.cn/#"> <img src="https://img.shields.io/badge/Love-pink?style=flat-square&logo=data:image/svg%2bxml;base64,PHN2ZyByb2xlPSJpbWciIHZpZXdCb3g9IjAgMCAyNCAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48dGl0bGU+R2l0SHViIFNwb25zb3JzIGljb248L3RpdGxlPjxwYXRoIGQ9Ik0xNy42MjUgMS40OTljLTIuMzIgMC00LjM1NCAxLjIwMy01LjYyNSAzLjAzLTEuMjcxLTEuODI3LTMuMzA1LTMuMDMtNS42MjUtMy4wM0MzLjEyOSAxLjQ5OSAwIDQuMjUzIDAgOC4yNDljMCA0LjI3NSAzLjA2OCA3Ljg0NyA1LjgyOCAxMC4yMjdhMzMuMTQgMzMuMTQgMCAwIDAgNS42MTYgMy44NzZsLjAyOC4wMTcuMDA4LjAwMy0uMDAxLjAwM2MuMTYzLjA4NS4zNDIuMTI2LjUyMS4xMjUuMTc5LjAwMS4zNTgtLjA0MS41MjEtLjEyNWwtLjAwMS0uMDAzLjAwOC0uMDAzLjAyOC0uMDE3YTMzLjE0IDMzLjE0IDAgMCAwIDUuNjE2LTMuODc2QzIwLjkzMiAxNi4wOTYgMjQgMTIuNTI0IDI0IDguMjQ5YzAtMy45OTYtMy4xMjktNi43NS02LjM3NS02Ljc1em0tLjkxOSAxNS4yNzVhMzAuNzY2IDMwLjc2NiAwIDAgMS00LjcwMyAzLjMxNmwtLjAwNC0uMDAyLS4wMDQuMDAyYTMwLjk1NSAzMC45NTUgMCAwIDEtNC43MDMtMy4zMTZjLTIuNjc3LTIuMzA3LTUuMDQ3LTUuMjk4LTUuMDQ3LTguNTIzIDAtMi43NTQgMi4xMjEtNC41IDQuMTI1LTQuNSAyLjA2IDAgMy45MTQgMS40NzkgNC41NDQgMy42ODQuMTQzLjQ5NS41OTYuNzk3IDEuMDg2Ljc5Ni40OS4wMDEuOTQzLS4zMDIgMS4wODUtLjc5Ni42My0yLjIwNSAyLjQ4NC0zLjY4NCA0LjU0NC0zLjY4NCAyLjAwNCAwIDQuMTI1IDEuNzQ2IDQuMTI1IDQuNSAwIDMuMjI1LTIuMzcgNi4yMTYtNS4wNDggOC41MjN6Ii8+PC9zdmc+" /> </a> </p> <p align="center">Wpf实现i茅台app接口自动化每日自动预约</p> <h2 align="center">hygge-imaotai</h2> <p align="center"> <a href="https://github.com/lisongkun?tab=repositories">『 All open source projects 』</a> <a href="https://www.lisok.cn/">『 Personal blog 』</a> </p> ## 项目介绍 通过接口自动化模拟i茅台app实现每日自动预约茅台酒的功能,可添加多用户,选择本市出货量最大的门店,或预约你的位置附近门店 软件会在指定时间开始对管理的用户进行批量预约。 本程序是对该项目(**SpringBoot使用Docker部署版本**:[https://github.com/oddfar/campus-imaotai](https://github.com/oddfar/campus-imaotai))的WPF客户端实现 ## 演示图 | i茅台预约 | | | ----------------------------------- | --------------------------------------- | | ![homepage](Resources/homepage.png) | ![usermanage](Resources/usermanage.png) | | | | | ![productList](Resources/product.png) | ![storeList](Resources/storeList.png) | | ![logList](Resources/logList.png) | | ## 贡献代码 若您有好的想法,发现一些 **BUG** 并修复了,欢迎提交 **Pull Request** 参与开源贡献 发起 pull request 请求,提交到 master 分支,等待作者合并 ## Star历史 [![Star History Chart](https://api.star-history.com/svg?repos=lisongkun/hygge-imaotai&type=Date)](https://star-history.com/#lisongkun/hygge-imaotai&Date) ## 鸣谢 ### 感谢以下组织机构提供开源许可 <p> <a style="border:0" href="https://visualstudio.microsoft.com/free-developer-offers/" target="_blank" rel="noopener"> <img width="70" height="70" src="Resources/vs2019_logo.png" alt="Visual Studio Community 2019"> </a> <a style="border:0" href="https://www.jetbrains.com/" target="_blank" rel="noopener"> <img width="70" height="70" src="Resources/resharper_logo.png" alt="JetBrains"> </a> </p>
52
9
kiliman/epic-stack-time-zone
https://github.com/kiliman/epic-stack-time-zone
Epic Stack example with Time Zone client hint
# Epic Stack Time Zone Client Hint Example This example adds a new [client hint](https://github.com/epicweb-dev/epic-stack/blob/main/docs/client-hints.md) to get the user's time zone. This is helpful when rendering dates and times on the server. By returning the correct local time, we eliminate the "flash of incorrect content" when the server and local times are not the same. ## Changes _app/utils/client-hints.tsx_ ```ts export const clientHints = { // ... timeZone: { cookieName: 'CH-time-zone', getValueCode: `Intl.DateTimeFormat().resolvedOptions().timeZone`, fallback: 'UTC', transform(value: string | null) { return value ?? 'UTC' }, }, // add other hints here } ``` _app/utils/misc.tsx_ ```ts export function getDateTimeFormat( request: Request, options?: Intl.DateTimeFormatOptions, ) { const locales = parseAcceptLanguage(request.headers.get('accept-language'), { validate: Intl.DateTimeFormat.supportedLocalesOf, }) const locale = locales[0] ?? 'en-US' // change your default options here const defaultOptions: Intl.DateTimeFormatOptions = { year: 'numeric', month: 'numeric', day: 'numeric', hour: 'numeric', minute: 'numeric', } options = { ...defaultOptions, ...options, timeZone: options?.timeZone ?? getHints(request).timeZone ?? 'UTC', } return new Intl.DateTimeFormat(locale, options) } ``` _root.tsx_ ```ts // default options to use timeZone client hint const localDateTimeFormat = getDateTimeFormat(request) const serverDateTimeFormat = getDateTimeFormat(request, { timeZone: 'UTC', // override timezone here month: 'short', // override month style, etc. }) const now = new Date() return json({ serverTime: serverDateTimeFormat.format(now), localTime: localDateTimeFormat.format(now), }) ```
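The pattern above (read an IANA time zone name supplied by the client, fall back to UTC, and format on the server) is not tied to Remix. As a purely illustrative analogue, not part of this repository, here is the same idea in Python using only the standard library:

```python
# Illustrative analogue of the time-zone client hint: take an IANA zone name
# supplied by the client (e.g. from a "CH-time-zone" cookie), fall back to UTC,
# and render the timestamp server-side in the user's local time.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError


def format_local_time(now_utc: datetime, tz_hint: str | None) -> str:
    try:
        tz = ZoneInfo(tz_hint) if tz_hint else ZoneInfo("UTC")
    except ZoneInfoNotFoundError:
        tz = ZoneInfo("UTC")  # same fallback the client hint declares
    return now_utc.astimezone(tz).strftime("%Y-%m-%d %H:%M")


now = datetime.now(timezone.utc)
print(format_local_time(now, "Europe/Paris"))  # what the user should see
print(format_local_time(now, None))            # server default (UTC)
```

Rendering with the user's zone on the first server response is what removes the "flash of incorrect content" described above.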
11
0
anyproto/tech-docs
https://github.com/anyproto/tech-docs
Tech documentation for Any components
# Open repos The source code for Anytype and Any-sync is available on [our Github](https://github.com/anyproto). You can find basic documentation on how to build and run the solutions in the `README.md` files in each repository. Our main repos: | Repo | Description | |---|---| | [any-sync](https://github.com/anyproto/any-sync) | Protocol designed to create high-performance, local-first, peer-to-peer, end-to-end encrypted applications that facilitate seamless collaboration among multiple users and devices | | [any-sync-node](https://github.com/anyproto/any-sync-node) | Implementation of node from any-sync protocol | | [any-sync-filenode](https://github.com/anyproto/any-sync-filenode) | Implementation of file node from any-sync protocol | | [any-sync-coordinator](https://github.com/anyproto/any-sync-coordinator) | Implementation of coordinator node from any-sync protocol | | [any-block](https://github.com/anyproto/any-block) | Protocol describing data structures used in Anytype software | | [anytype-heart](https://github.com/anyproto/anytype-heart) | Middleware library for Anytype | | [anytype-ts](https://github.com/anyproto/anytype-ts) | Official Anytype client for MacOS, Linux, and Windows | | [anytype-kotlin](https://github.com/anyproto/anytype-kotlin) | Official Anytype client for Android | | [anytype-swift](https://github.com/anyproto/anytype-swift) | Official Anytype client for iOS |
13
3
Suraja18/Suraja18
https://github.com/Suraja18/Suraja18
null
### Hi there 👋 <!-- **Suraja18/Suraja18** is a ✨ _special_ ✨ repository because its `README.md` (this file) appears on your GitHub profile. Here are some ideas to get you started: --> - 🔭 I’m currently working on WebSoft Technology Nepal Pvt. Ltd. - 🌱 I’m currently learning Flutter - 👯 I’m looking to collaborate on Yarsa - 🤔 I’m looking for help with Flutter Developer - 💬 Ask me about Laravel - 📫 How to reach me: [email protected] - Find me on [![My Skills](https://skillicons.dev/icons?i=instagram)](https://www.instagram.com/surajadhikari_18/) [![trophy](https://github-profile-trophy.vercel.app/?username=suraja18&theme=onedark)] ![Top Langs](https://github-readme-stats.vercel.app/api/top-langs/?username=suraja18&theme=merko&hide_progress=true) ![Anurag's GitHub stats](https://github-readme-stats.vercel.app/api?username=suraja18&theme=merko&show_icons=true) ### Language & Tool: [![My Skills](https://skillicons.dev/icons?i=laravel,html,css,bootstrap,js,jquery,c,cs,cpp,java,dotnet,php,git,github,linux,mysql,vscode,flutter,python,react)]() ### :fire: My Stats : [![GitHub Streak](https://streak-stats.demolab.com/?user=suraja18&theme=merko)](https://git.io/streak-stats)
12
0
codrops/TextBlockTransitions
https://github.com/codrops/TextBlockTransitions
Some inspiration for transitioning text blocks with different word animations.
# Inspiration for Text Block Transitions Some inspiration for transitioning text blocks with word and letter animations. ![Image Title](https://tympanus.net/codrops/wp-content/uploads/2023/07/textbocktransitions.jpg) [Article on Codrops](https://tympanus.net/codrops/?p=72862) [Demo](http://tympanus.net/Development/TextBlockTransitions/) ## Installation Run this demo on a [local server](https://developer.mozilla.org/en-US/docs/Learn/Common_questions/Tools_and_setup/set_up_a_local_testing_server). ## Misc Follow Codrops: [Twitter](http://www.twitter.com/codrops), [Facebook](http://www.facebook.com/codrops), [GitHub](https://github.com/codrops), [Instagram](https://www.instagram.com/codropsss/) ## License [MIT](LICENSE) Made with :blue_heart: by [Codrops](http://www.codrops.com)
33
4
commaai/comma-steering-control
https://github.com/commaai/comma-steering-control
null
# commaSteeringControl `commaSteeringControl` is a dataset of car steering measurements from ~12500 hours of driving with openpilot engaged. We control steering on most cars in openpilot using `steeringTorque`. This results in some lateral acceleration depending on both the car's internal vehicle dynamics and external factors (car speed, road roll, etc). Learning this relationship is essential to having accurate steering control in openpilot. `commaSteeringControl` is the largest controls dataset of its kind, spanning hundreds of car models across 10+ brands. The main purpose of this dataset is to give the community access to the data needed to model the steering of their car, and with that make a more accurate steering controller in openpilot to improve openpilot's performance on that car. This is the largest dataset of vehicle dynamics ever released. It can also be used to develop or verify practical vehicle dynamics models for lateral acceleration, tire slip, road roll, understeer/oversteer, etc. We may add more fields for this goal in the future. ![image](https://github.com/commaai/comma-steering-control/assets/1649262/c6f18767-26ac-4bc8-ab60-afdae197a300) ## Dataset - Download the dataset from [HuggingFace](https://huggingface.co/datasets/commaai/commaSteeringControl/tree/main/data) - Checkout the example notebook at [`visualize.ipynb`](https://github.com/commaai/comma-steering-control/blob/master/visualize.ipynb) ``` # Data Structure data/ ├── Platform 1 | ├── Segment 1 | ├── ... | └── Segment N └── Platform M ├── Segment 1 └── ... | | Fields | Description | Value Range | |---:|:----------------------|:---------------------------------------------------------------------------------|:----------------| | 0 | t | Time | [0, 60] | | 1 | latActive | Is openpilot engaged? | {True, False} | | 2 | steeringPressed | Is steering wheel pressed by the user? | {True, False} | | 3 | vEgo | Forward velocity of the car (m/s) | [0, ∞] | | 4 | aEgo | Forward acceleration of the car (m/s^2) | [-∞, ∞] | | 5 | steeringAngleDeg | Steering Angle (Deg) | [-∞, ∞] | | 6 | steer | Normalized steer torque | [-1, 1] | | 7 | steerFiltered | Normalized, rate limited steer torque | [-1, 1] | | 8 | roll | Road roll (rad) | [-0.174, 0.174] | | 9 | latAccelDesired | Lateral acceleration requested from the planner | [-∞, ∞] | | 10 | latAccelSteeringAngle | Lateral acceleration computed from the steering wheel angle and vehicle dynamics | [-∞, ∞] | | 11 | latAccelLocalizer | Lateral acceleration from the localizer | [-∞, ∞] | | 12 | epsFwVersion | EPS firmware version | str | ``` ![image](https://github.com/commaai/comma-steering-control/assets/1649262/f0195877-48ad-4664-85d6-7b2df12eb3d0) ## Dataset Notes - All values from different messages are interpolated and synced to time `t` - Steering torque is normalized in openpilot (to get `steer`), and further rate limits are applied (to get `steerFiltered`). `steerFiltered` is the best input signal. - The `latAccelSteeringAngle` is computed from steering angle and roll using the vehicle model from openpilot. This is the best signal to predict as `latAccelLocalizer`, which comes from a sensor fusion localizer on the comma three device, can be quite noisy. - In reality (especially for some cars), the relationship is non-linear depending on vehicle speed, and has temporal dynamics. On many cars the steering command is processed and smoothed inside the EPS causing non-linearities and temporal effects. 
There are also temporal effects in the physics (like in a mass-spring-damper model). - There may be a lag in openpilot fully regaining steering control after `steeringPressed` which may have to be accounted for. - In some platforms, cars with different `epsFwVersion` have dramatically different steering behaviour, although this is not common. - Any algorithm that could be upstreamed to openpilot needs to be simple, fast, and reliable - similar to `torqued`, simple non-linear functions, or simple MLPs etc. ![image](https://github.com/commaai/comma-steering-control/assets/1649262/03905b06-6894-4b67-bd5b-77b1de552e62) ## Timeline of lateral control modeling in openpilot - In [0.8.15](https://blog.comma.ai/0815release/#torque-controller), we introduced a [new controller](https://github.com/commaai/openpilot/blob/master/selfdrive/controls/lib/latcontrol_torque.py) that leveraged the relationship between steering torque and lateral acceleration. - In [0.9.0](https://blog.comma.ai/090release/#torqued-an-auto-tuner-for-lateral-control), we introduced [torqued](https://github.com/commaai/openpilot/blob/master/selfdrive/locationd/torqued.py), which learns the relationship online. Here we assume that the gravity adjusted lateral acceleration has a linear dependence wrt. the steer command. We fit a Total-Least-Squares solution to obtain the factor. We also assume an error-dependant friction value (causes the hysteresis). - In [0.9.2](https://blog.comma.ai/092release/#chevrolet-bolt-euv), we introduced a non-linear feed-forward function. - There has been [extensive community effort](https://github.com/twilsonco/openpilot/tree/log-info) to improve the controller (speed-based relationships, using neural networks, etc). - We are working on further improvements for future releases.
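The timeline above describes how `torqued` fits the steer-to-lateral-acceleration relationship: assume the gravity-adjusted lateral acceleration is roughly linear in the steer command and estimate the factor. The sketch below reproduces that idea offline; the segment file name and format, the roll sign convention, and the use of ordinary rather than total least squares are all assumptions, not the dataset's reference code.

```python
# Rough offline sketch of the torqued-style fit described above: compensate the
# measured lateral acceleration for road roll, then fit a line against the
# filtered steer command. File path/format and the roll sign convention are
# assumptions; torqued itself uses a total-least-squares fit.
import numpy as np
import pandas as pd

GRAVITY = 9.81  # m/s^2

df = pd.read_parquet("data/Platform1/segment1.parquet")  # hypothetical segment file

# Keep samples where openpilot is steering and the driver is not overriding.
mask = df["latActive"].astype(bool) & ~df["steeringPressed"].astype(bool)
steer = df.loc[mask, "steerFiltered"].to_numpy()
lat_accel = df.loc[mask, "latAccelSteeringAngle"].to_numpy()
roll = df.loc[mask, "roll"].to_numpy()

# Gravity-adjusted lateral acceleration (flip the sign if the roll convention differs).
target = lat_accel - GRAVITY * np.sin(roll)

slope, intercept = np.polyfit(steer, target, deg=1)
print(f"lat_accel ~= {slope:.2f} * steerFiltered + {intercept:.3f}")
```

A per-platform fit like this is the starting point for the speed-dependent, non-linear models the community effort mentioned above explores.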
24
1
psyai-net/D-IF_release
https://github.com/psyai-net/D-IF_release
This is the official source of our ICCV 2023 paper " D-IF: Uncertainty-aware Human Digitization via Implicit Distribution Field"
![Psyche AI Inc release](./media/psy_logo.png) # D-IF: Uncertainty-aware Human Digitization via Implicit Distribution Field [ICCV2023] Official PyTorch implementation for the paper: > **D-IF: Uncertainty-aware Human Digitization via Implicit Distribution Field**, ***ICCV 2023***. > > Xueting Yang, Yihao Luo, Yuliang Xiu, Wei Wang, Hao Xu, Zhaoxin Fan > > <a href=''><img src='https://img.shields.io/badge/arXiv-2303.11089-red'></a> <a href='https://yxt7979.github.io/idf/'><img src='https://img.shields.io/badge/Project-Video-Green'></a> [![License ↗](https://img.shields.io/badge/License-CCBYNC4.0-blue.svg)](LICENSE) <p align="center"> <img src="./media/DIF-pipeline .png" width="90%" /> </p> > Detailed human reconstruction from a single image using Implicit Distribution Field. ## Environment - Linux - Python 3.8 - Pytorch 1.13.0 - CUDA 11.3 - CUDA=11.3, GPU Memory > 12GB - PyTorch3D Clone the repo: ```bash git clone https://github.com/psyai-net/D-IF_release.git cd D-IF_release ``` Create conda environment: ```bash conda env create -f environment.yaml conda init bash source ~/.bashrc source activate D-IF pip install -r requirements.txt --use-deprecated=legacy-resolver ``` ## **Demo** ```bash python -m apps.infer -cfg ./configs/d_if.yaml -gpu 0 -in_dir ./examples -out_dir ./results -export_video -loop_smpl 100 -loop_cloth 200 -hps_type pixie ``` ## **Train/Test** Train dataset: Thuman2.0, for download, please follow the steps of [ICON_train](https://github.com/YuliangXiu/ICON/blob/master/docs/dataset.md#thuman20) completely. ```bash CUDA_VISIBLE_DEVICES=7 python -m apps.train -cfg ./configs/train/d_if.yaml ``` Test dataset: CAPE, for download, please follow the steps of [ICON_test](https://github.com/YuliangXiu/ICON/blob/master/docs/evaluation.md#cape-testset) completely. ```bash python -m apps.train -cfg ./configs/train/d_if.yaml -test ``` ## **Citation** If you find this work useful for your research, please cite our paper: ``` comming soon ``` ## **Acknowledgement** Here are some great resources we benefit: - [ICON](https://github.com/YuliangXiu/ICON/) for pipeline. - [CAPE](https://github.com/QianliM/CAPE) and [THuman](https://github.com/ytrock/THuman2.0-Dataset) for datasets. - [PaMIR](https://github.com/ZhengZerong/PaMIR), [PIFu](https://github.com/shunsukesaito/PIFu), [ECON](https://github.com/YuliangXiu/ECON/), and [ICON](https://github.com/YuliangXiu/ICON/) for Benchmark - [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differential Rendering. ## **Contact** For research purpose, please contact [email protected] For commercial licensing, please contact [email protected] ## **License** This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License. Please read the [LICENSE](LICENSE) file for more information. ## **Invitation** We invite you to join [Psyche AI Inc](https://www.psyai.com/home) to conduct cutting-edge research and business implementation together. At Psyche AI Inc, we are committed to pushing the boundaries of what's possible in the fields of artificial intelligence and computer vision, especially their applications in avatars. As a member of our team, you will have the opportunity to collaborate with talented individuals, innovate new ideas, and contribute to projects that have a real-world impact. If you are passionate about working on the forefront of technology and making a difference, we would love to hear from you. 
Please visit our website at [Psyche AI Inc](https://www.psyai.com/home) to learn more about us and to apply for open positions. You can also contact us by [email protected]. Let's shape the future together!!
30
1
ademakdogan/GPTerm
https://github.com/ademakdogan/GPTerm
Creating Intelligent Terminal Apps with ChatGPT and LLM Models
<p align="center"> <img width="180" src="./images/logo.png" alt="GPTerm"> <h1 align="center">GPTerm</h1> </p> This project focuses on converting plain text into shell commands using different models, including ChatGPT and various open-source language models. While some models yielded good results, others did not meet the desired level of success. The project primarily utilized iTerm as the terminal application for testing, but assessments with other terminals have yet to be conducted as the emphasis is on running and comparing the models. However, it is anticipated that other terminal applications would also be suitable for this project. The project consists of two main parts. In the first part, users can manually enter and execute shell commands without closing the plugin. The second part involves translating given plain text into shell commands, which are then presented to the user. Users have the flexibility to modify or delete sections of the generated command without execution, if desired. Both sections operate in a similar manner. To indicate the intention of obtaining shell commands from plain text only, users need to prefix the plain text with dot (.). This allows the application to distinguish between manual command entry and obtaining commands from plain text. https://github.com/ademakdogan/GPTerm/assets/53964471/d2334e53-3647-4d62-a83b-6dbc0abdb3aa This project tests four models for performance and results. ChatGPT initially shows the best outcomes due to its cost-effective low token count for generating shell commands. Alpaca with 7 billion parameters doesn't perform well, while the MPT model performs better but has occasional incorrect responses. WizardCoder outperforms both Alpaca and MPT, offering acceptable results for open-source model users. However, it is still behind ChatGPT. It is worth mentioning that while ChatGPT delivers results in a well-organized JSON structure, other open-source models may sometimes provide noisy responses to inquiries. To handle such situations, the [json_extractor](/src/responser.py) function is utilized to eliminate any noise present in the responses of other open-source models.. This part can be developed further. All packages are installed before starting. The following command is used for this installation process (python 3.8 is used in this project): ## Usage To begin with, **it is recommended to work within an environment.** In this project, the conda environment "py1" (sample conda env name) is utilized for development. Then; ``` pip3 install gpterm-tool ``` You can run the provided above command for installation purposes. Once installed, it can be utilized by using the 'gpterm' keyword on iTerm or any other terminal application. - Run with ChatGPT model (Highly Recommended) ``` gpterm -k <openai_api_key> ``` - Run with mpt model ``` gpterm -m mpt -p <quantized_mpt_model_path> ``` - Run with wizardcoder model ``` gpterm -m wizardcoder -p <quantized_wizardcoder_model_path> ``` As the models used have a large number of parameters and are executed on the CPU, the processing speed of the results may be slower. The project was developed on an M1 MacBook Pro, and no tests with GPU implementation have been performed yet. Hence, for professional use, it is recommended to opt for the ChatGPT model. For swift access and usage of this program, you have the option to include an alias in the zshrc file. This allows for convenient and rapid execution of the program. My conda env name is py1. 
``` alias gt='conda activate py1 && gpterm -k <openai_api_key>' ``` Following that, the program can be easily launched via the terminal by simply entering the "gt" keyword. If you are interested in running the models using GPU acceleration, you can refer to the link provided below for further information and instructions. https://github.com/marella/ctransformers ## ChatGPT vs WizardCoder - ChatGPT ``` name_ai ---> . go in storage folder and sort only pdf files reversely >>> cd storage && ls -r *.pdf name_ai ---> . Create a folder named sample_1 and create a txt file for each number starting from 1 to 10 in it and assign that number as the name >>> mkdir sample_1 && for i in {1..10}; do touch sample_1/$i.txt; done name_ai ---> . Create a file named nw_1 with .txt extension and add all the numbers from 1 to 10 in it >>> touch nw_1.txt; echo {1..10} >> nw_1.txt ``` - WizardCoder ``` name_ai ---> . go in storage folder and sort only pdf files reversely >>> ls -l storage | grep pdf | sort -r name_ai ---> . Create a folder named sample_1 and create a txt file for each number starting from 1 to 10 in it and assign that number as the name >>> mkdir sample_1 && cd sample_1 \\ Wrong one here ! name_ai ---> . Create a file named nw_1 with .txt extension and add all the numbers from 1 to 10 in it >>> touch nw_1.txt && echo '1 2 3 4 5 6 7 8 9 10' >> nw_1.txt ``` As observed earlier, ChatGPT provides highly accurate responses. The WizardCoder model successfully generated the correct command in two out of the three requests, although it did produce an incorrect command in one request. Sometimes the MPT and WizardCoder models return our prompt as the result. **Remember! The ChatGPT model is highly suitable for implementation, while the others are still in the experimental phase. They are not suitable for use yet.** _**Note:** As previously explained, it is important to note that open-source models may sometimes produce noisy results. To solve this issue, the [json_extractor](/src/responser.py) function is utilized to filter out any unwanted noise and obtain the desired outcome. This function can be adjusted to handle JSONDecodeError exceptions, ensuring smooth execution. In future iterations, the regex pattern will be refined to encompass all possible error cases. For the current implementation, however, the pattern has been tailored to address the most prevalent sources of noise, considering the constraints of time._ ## TODOS - [X] Test of ChatGPT - [X] Test of WizardCoder - [X] Test of MPT - [ ] Test of Alpaca - [ ] Test of StarCoder - [ ] Upgrade [json_extractor](/src/responser.py) (regex pattern) - [ ] Running models on GPU - [ ] Train open-source models
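To make the noise-filtering idea concrete, here is a minimal Python sketch of the kind of cleanup a `json_extractor`-style helper has to perform. It is not the project's actual code from `src/responser.py`; the function name, the regex, and the sample reply below are illustrative assumptions only.

```python
import json
import re


def extract_json(raw_response: str):
    """Pull the first {...} block out of a noisy model reply and parse it (illustrative sketch)."""
    match = re.search(r"\{.*\}", raw_response, re.DOTALL)  # grab the outermost-looking JSON object
    if match is None:
        return None  # nothing JSON-like in the reply
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None  # the fragment was too garbled to parse


# Example: an open-source model wrapping the useful JSON in conversational noise.
noisy_reply = 'Sure! Here is the command:\n{"command": "ls -r *.pdf"}\nHope this helps.'
print(extract_json(noisy_reply))  # {'command': 'ls -r *.pdf'}
```

A stricter pattern, or a retry on `JSONDecodeError` as the note above mentions, can be layered on top of this basic shape.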
14
1
Klerith/mas-talento
https://github.com/Klerith/mas-talento
Repository with the cheat sheets and presentations from my courses
<div align="center"> <a href="https://cursos.devtalles.com"> <img src="https://raw.githubusercontent.com/Klerith/mas-talento/main/assets/devtalles-white-black.png" width="300" alt="DevTalles Logo" /> </a> </div> ## Repositorio de +Talento Aquí les comparto todas las hojas de atajos y presentaciones que tengo sobre mis cursos. Espero les sirva mucho en su camino! Atte: Fernando y **equipo de DevTalles**
152
24
secureum/AMAZEX-DSS-PARIS
https://github.com/secureum/AMAZEX-DSS-PARIS
null
# **Secureum A-MAZE-X Maison de la Chimie, DeFi Security Summit** ## **A Smart Contract Security _Capture the Flag_ Workshop** ![A-MAZE-X-Stanford-LOGO](./img/A-MAZE-X-Maison-de-la-Chimie.png) \*Hosted by [Defi Security Summit](https://defisecuritysummit.org) as part of **[Defi Security 101](https://defisecuritysummit.org/defi-101-2023/)\***\ _Built with love by [eugenioclrc](https://github.com/eugenioclrc), [luksgrin](https://github.com/luksgrin), [PeterisPrieditis](https://github.com/PeterisPrieditis), [RomiRand](https://github.com/RomiRand) and [misirov](https://twitter.com/p_misirov)_\ _Special thanks to [patrickd](https://github.com/patrickd-), [StErMi](https://github.com/StErMi), [tinchoabbate](https://github.com/tinchoabbate) and [Rajeev](https://twitter.com/0xrajeev) for reviewing, commenting and helping during the elaboration and design of this CTF Workshop_ --- <br> # Contents 1. [**Instructions** 🕹️](#instructions-%EF%B8%8F) - [**Flavors**](#flavors) - [**How to play** ♘](#how-to-play-) 2. [**Challenges** 🎮](#challenges-) 3. [**CTF Writeup** 🗒️🗒️🗒️](#ctf-writeup-%EF%B8%8F%EF%B8%8F%EF%B8%8F) # **Instructions** 🕹️ This Workshop consists of a series of challenges, of increasing difficulty, targeting different **concepts** and common **vulnerabilities** found in **DeFi**. The challenges are suitable for different levels of expertise. --- <br> ## **Flavors** This workshop provides different flavors. Feel free to use the one you feel more comfortable with: - **Option 1**: Locally with `Foundry` - **Option 2**: Online through Gitpod, using `Foundry` [![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/misirov/DEFI101-CTF/tree/main) --- <br> ## Important note This set of challenges isn't intended for competitive purposes. Their main objective is to showcase scenarios involving DeFi, `Solidity` concepts and common vulnerabilities. Focus on **learning** and having **fun**! 😊 <br> ## **How to play** ♘ This challenge is intended for users who are very familiar with `Solidity` and do not want to use additional languages. The following setup tutorial will guide you through the installation of `Foundry` and its setup. <br> ### **Clone this repository** Run the command below to clone this repository onto your local machine: ```bash git clone https://github.com/secureum/AMAZEX-DSS-PARIS.git cd AMAZEX-DSS-PARIS ``` <br> ### **Install `Foundry`** _(if you don't have `Foundry` already installed)_ Run the command below to get `foundryup`, the `Foundry` toolchain installer: ```bash curl -L https://foundry.paradigm.xyz | bash ``` Then, in a new terminal session (or after reloading your `PATH` environment variable), run `foundryup` to get the latest `forge` and `cast` binaries: ```console foundryup ``` And finally, install the repository's dependencies by entering it and running: ```console forge install ``` Note that you might have to restart your terminal for the `forge` command to become available. At this point you should be all set. If not, check [`Foundry`'s installation troubleshooting](https://github.com/foundry-rs/foundry#troubleshooting-installation). <br> ### **Solving a challenge** Challenge contracts are located in the subdirectories of the `src/` directory. **Do not** modify them, as it may lead to unexpected behaviors within the challenges. To solve a challenge, you must open the corresponding `test/ChallengeX.t.sol` _(where X is a number)_ and add your exploit code in the indicated areas within said file.
Then, to check if the challenge has been solved, execute the following command ```bash forge test --match-path test/ChallengeX.t.sol ``` If the solution criteria have been reached, it shall display the following message ```bash Running 1 test for test/ChallengeX.t.sol:ChallengeXTest [PASS] testChallenge() (gas: XXXX) Test result: ok. 1 passed; 0 failed; finished in XXXms ``` Alternatively, to check if all challenges have been solved, execute the following command: ```bash bash isSolved.sh ``` which will return the test results for all challenges in order. If one wishes to have a more detailed prompt (i.e. to see the logged messages), it is necessary to increase the verbosity with `-vvvv`, for example: ```bash forge test --match-path test/ChallengeX.t.sol -vvvv ``` --- # **Challenges** 🎮 - [**Challenge 1: Operation magic redemption** 🪄🔮](src/1_MagicETH/README.md) - [**Challenge 2: Mission Modern WETH: Rescue the Ether** 🧗🧭](src/2_ModernWETH/README.md) - [**Challenge 3: LendEx pool hack** 🤺🃏](src/3_LendingPool/README.md) - [**Challenge 4: Operation Rescue `POSI` Token!** 💼🔓](src/4_RescuePosi/README.md) - [**Challenge 5: Balloon Vault** 🎈🎈](src/5_balloon-vault/README.md) - [**Challenge 6: Safe Yield?** 🏦📈](src/6_yieldPool/README.md) - [**Challenge 7: Crystal DAO** 💎💎](src/7_crystalDAO/README.md) - [**Challenge 8: Liquidatoooor** 🔱🔱](src/8_oiler/README.md) --- # **Slides** Find the slides of the event's presentation [here](./presentation/A-MAZE-X%2C%20Secureum%20at%20DeFi%20Security%20101%20Paris.pdf). --- # **CTF Writeup** 🗒️🗒️🗒️ **_Writeups will be available after the event_** [**SOLUTIONS**](https://www.youtube.com/watch?v=dQw4w9WgXcQ)
80
30
simdjson/simdjson-java
https://github.com/simdjson/simdjson-java
A Java version of simdjson
# simdjson-java A Java version of [simdjson](https://github.com/simdjson/simdjson) - a JSON parser using SIMD instructions, based on the paper [Parsing Gigabytes of JSON per Second](https://arxiv.org/abs/1902.08318) by Geoff Langdale and Daniel Lemire. This implementation is still missing several features available in simdjson. For example: * Support for Unicode characters * UTF-8 validation * Full support for parsing floats * Support for 512-bit vectors ## Code Sample ```java byte[] json = loadTwitterJson(); SimdJsonParser parser = new SimdJsonParser(); JsonValue jsonValue = parser.parse(json, json.length); Iterator<JsonValue> tweets = jsonValue.get("statuses").arrayIterator(); while (tweets.hasNext()) { JsonValue tweet = tweets.next(); JsonValue user = tweet.get("user"); if (user.get("default_profile").asBoolean()) { System.out.println(user.get("screen_name").asString()); } } ``` ## Benchmarks To run the JMH benchmarks, execute the following command: ```./gradlew jmh``` ## Tests To run the tests, execute the following command: ```./gradlew test``` ## Performance This section presents a performance comparison of different JSON parsers available as Java libraries. The benchmark used the [twitter.json](src/jmh/resources/twitter.json) dataset, and its goal was to measure the throughput (ops/s) of parsing and finding all unique users with a default profile. **Note that simdjson-java is still missing several features (mentioned in the introduction), so the following results may not reflect its real performance.** Environment: * CPU: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz * OS: Ubuntu 23.04, kernel 6.2.0-23-generic * Java: OpenJDK 64-Bit Server VM Temurin-20.0.1+9 Library | Version | Throughput (ops/s) ---------------------------------------------------|---------|-------------------- simdjson-java | - | 1450.951 simdjson-java (padded) | - | 1505.227 [jackson](https://github.com/FasterXML/jackson) | 2.15.2 | 504.562 [fastjson2](https://github.com/alibaba/fastjson) | 2.0.35 | 590.743 [jsoniter](https://github.com/json-iterator/java) | 0.9.23 | 384.664 To reproduce the benchmark results, execute the following command: ```./gradlew jmh -Pjmh.includes='.*ParseAndSelectBenchmark.*'```
92
4
jonatas-lima/minecraft-server-provisioner
https://github.com/jonatas-lima/minecraft-server-provisioner
null
# Minecraft Server Provisioner ## Objective * Provision a [Minecraft](https://www.minecraft.net/) server where you and your friends can connect and play. ## Prerequisites * [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) * [Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) * Minecraft Client ### AWS * An AWS account with [access and secret keys](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html). ### Hetzner * A Hetzner Cloud account with a valid [API access token](https://docs.hetzner.com/cloud/api/getting-started/generating-api-token/). ## Which resources will Terraform provision? ### AWS | Instance Type | OS | RAM | vCPUs | |---------------|--------------|------|-------| | t2.medium | Ubuntu 22.04 | 4GB | 2 | ### Hetzner | Instance Type | OS | RAM | vCPUs | |---------------|--------------|------|-------| | CPX21 | Ubuntu 22.04 | 4GB | 3 | ## How to ### Customization If you want, you can customize some options on your Minecraft Server. To do so, edit the minecraft-server role in `./ansible/playbook.yml`. * `server_jar_url`: The server.jar file that will run the server. The version of this file must match the version of the Minecraft Client. (Default 1.20.1) The other variables are self-explanatory. ### Provision and Configure 1. Init terraform: ```bash cd terraform/<aws | hetzner> && terraform init ``` 2. Create an SSH key, if one doesn't exist yet: ```bash ssh-keygen ``` 3. Configure your credentials in `./env/.env.<aws | hetzner>` 4. Provision your server: ```bash ./provision.sh <aws | hetzner> ``` ### Just Configure If you already have a server, you can simply run the Ansible playbook that installs and configures a Minecraft Server on it. 1. Write the user that Ansible will use and your IP address in `./ansible/hosts.ini` 2. Install the ansible community.general collection: ```bash ansible-galaxy collection install community.general ``` 3. Run: ```bash cd ansible && ansible-playbook -i hosts.ini -v playbook.yml ``` ### Destroy If you want to destroy your Minecraft Server instances, just run: ```bash ./destroy.sh <aws | hetzner> ``` ## Playing * Grab your server IP address (`output.json`) * Go to Multiplayer on your Minecraft Client * Add your brand new server and start playing! ## Managing 1. SSH into your server: ```bash ssh minecraft@<server ip address> ``` 2. Your Minecraft Server is simply a systemd process, which you can manage with: ```bash sudo /bin/systemctl (status | start | stop | restart | enable | disable) ``` ## Costs ### AWS TODO ### Hetzner | ~ 7.00€ / month | |-----------------|
11
0
jdtsmith/indent-bars
https://github.com/jdtsmith/indent-bars
Fast, configurable indentation guide-bars for Emacs
# indent-bars Fast, configurable indentation guide-bars for Emacs. <img align="right" width="514" alt="ib_default" src="https://github.com/jdtsmith/indent-bars/assets/93749/4f652554-bede-4aa6-bdbc-233ec843d782"> This package provides vertical indentation _guide bars_, with the following features: - Uses stipple face properties with font-lock for ultra-fast performance (simply: *faces on spaces*). - Works in the terminal, using a vertical bar character instead of stipple. - Learns your buffer indentation spacing from the mode. - Bar colors can be blended with the frame background color, to reduce their intrusiveness. - Bar appearance is highly configurable: width, position within the character, vertical fill/blank pattern, even zigzag (see [examples](examples.md)). - Optional depth-based coloring, with a customizable cyclical color palette. - Properly handles font size changes. - Optional zero-cost current-depth bar highlighting, permitting bar color and/or appearance changes. - Optional support for drawing bars on blank lines. # FAQ's - **I don't see anything/bars are garbled!** Not all Emacsen support stipples; see [Compatibility](#compatibility). - **When I view the same buffer side by side, the bars jump around!** This is a known issue for versions of Emacs with arbitrary window widths; see [Per-buffer stipple offsets](#per-buffer-stipple-offsets). - **How can I find out if my Emacs supports stipples?!** See [Testing Stipples](#testing-stipples). - **These bars are too intrusive!** Reduce the `:blend` value in `indent-bars-color` closer to zero. Consider disabling `indent-bars-color-by-depth`. - **I can barely see the bars!** Increase the `:blend` value in `indent-bars-color` closer to one. - **I want completely unique indent guide bars so as to flex on my colleagues!** Check the [Examples](examples.md) for some ideas. The sky is the limit. - **I use Emacs on the terminal, you insensitive clod!** `indent-bars` will just work for you (though you don't get any fancy bar patterns). - **I use graphical Emacs, but am an extreme minimalist. All my outfits are gray. Including my socks.** Maybe [this](examples.md#minimal) will suit you? Otherwise, you can turn off the stipple and use old-fashioned `│` characters with [`indent-bars-prefer-character`](#non-stipple-display). # Install/config Not yet in a package database; simply clone and point `use-package` at the correct path. You can also simply use the `package-vc-install` command newly released with Emacs 29. ```elisp (use-package indent-bars :load-path "~/code/emacs/indent-bars" :hook ((python-mode yaml-mode) . indent-bars-mode)) ; or whichever modes you prefer ``` ## Straight To clone with `use-package` and `straight`: ```elisp (use-package indent-bars :straight (indent-bars :type git :host github :repo "jdtsmith/indent-bars") :hook ((python-mode yaml-mode) . indent-bars-mode)) ; or whichever modes you prefer ``` ## Compatibility For `indent-bars` to display fancy guide bars, your port and version of emacs must correctly display the `:stipple` face attribute. **Most do.** It can also be used *without stipples*, drawing a simple vertical character (like `│`) instead. It automatically does this in non-graphical displays (terminals), but can optionally be configured to always do so; see [Non-stipple Display](#non-stipple-display). Known `:stipple` support by Emacs versions: - All known UNIX/GNU Linux versions support stipples.
- "Pure GTK" (`--with-pgtk` build flag) versions support stipples, but had a display bug that caused them to appear incorrectly (as [reverse video](../../issues/3)) and lead to [crashes](../../issues/6); this was fixed in Emacs [here](https://lists.gnu.org/archive/html/bug-gnu-emacs/2023-07/msg02081.html). - On Mac, the [emacs-mac](https://bitbucket.org/mituharu/emacs-mac/src/master/)[^1] port has stipple support, but others do not. - Windows Emacs does not (apparently) support stipples. - Stipples are not supported on terminal emacs. [^1]: Most easily installed [with brew](https://github.com/railwaycat/homebrew-emacsmacport). Please [open an issue](../../issues) with any updates/corrections to this list. See also [Testing Stipples](#testing-stipples). # Customization `M-x customize-group indent-bars` is the easiest way to customize everything about the appearence and function of `indent-bars`. Note: when changing any of these custom variables while `indent-bars` is enabled, you must `M-x indent-bars-reset` in the buffers of interest to see the resulting changes. See some [examples](examples.md) with relevant settings. The main customization variables: - `indent-bars-width-frac`: The fractional width of the bar (0-1, in terms of fraction of a single character's width). - `indent-bars-pad-frac`: The fractional padding offset of the bar from the left edge of the character. - `indent-bars-pattern`: A string specifying the vertical structure of the bar (space=blank, non-space=filled). Scaled to the height of one character. - `indent-bars-zigzag`: A fractional left-right *zigzag* to apply to consecutive groups of identical non-space characters in `pattern`. - `indent-bars-color`: The main bar color, either a color name or face, from which foreground or background color will be taken. Also used to set a `:blend` factor, to blend colors into the frame's background color. - `indent-bars-color-by-depth`: How and whether to alter the color of the indent bars by indentation depth. Defaults to using the foreground of the `outline-*` faces. - `indent-bars-highlight-current-depth`: How and whether to highlight the bars at the indentation depth of the current line. The current depth bar can change color (including blending with the pre-existing color), as well as structure (size, pad, pattern, zigzag). - `indent-bars-spacing-override`: Normally the number of spaces for indentation is automatically discovered from the mode and other variables. If that doesn't work for any reason, it can be explicitly set using this variable. - `indent-bars-display-on-blank-lines`: Whether to display bars on blank lines. - `indent-bars-prefer-character`: Use *characters* to display the vertical bar instead of stipples. This occurs automatically on non-graphical displays (terminals), but this variable can be used to always prefer character-based display. - `indent-bars-no-stipple-char`: The character to use when stipples are unavailable or disabled. Defaults to the vertical box character `│`. Other good options include `┃` and `║`. - `indent-bars-unspecified-bg|fg-color`: Colors to use for the frame background and default foreground when unspecified (e.g. in terminals). If you intend to use `indent-bars` in the terminal, set to the terminal background/foreground colors you use. See the documentation of each variable for more details. # Details and Caveats ## Speed `indent-bars` was partially motivated by the inefficiency of older indentation highlight modes, and is designed for speed. 
It uses stipples (fixed bitmap patterns) and font lock for fast and efficient bar drawing — *faces on spaces*. Highlighting the current indentation level is essentially free, since it works by [remapping](https://www.gnu.org/software/emacs/manual/html_node/elisp/Face-Remapping.html) the relevant face. The heaviest operation (though still fairly efficient) is **blank-line highlighting**, since the indentation level of blank lines depends on their surrounding context, and strings must be allocated, styled, and used as `'display` properties. If you experience any speed issues, this is the first setting to turn off. ## Indentation `indent-bars` only works with space-based indentation, i.e. `indent-tabs-mode=nil`. Note that many modes enable this by default. ## Display ### Non-stipple display For terminals (and everywhere, if `indent-bars-prefer-character` is set), `indent-bars` will not attempt stipple display, but instead use simple characters (e.g. `│`; see [an example](examples.md#in-terminal)). Note that in mixed gui/terminal sessions of the same Emacs version, you may need to `M-x indent-bars-reset` when switching a given buffer between graphical and terminal frames. ### Stipples Stipples are repeating patterns anchored to the full emacs frame. `indent-bars` basically "opens windows" on this fixed pattern to "reveal" the bars. The fast *stipple* method used for drawing bars enables lots of [interesting patterns](examples.md). #### Testing Stipples If you are experiencing issues with bar display (missing, garbled, etc.), and would like to determine if stipples are working correctly in your build of emacs, enter the following in the `*scratch*` buffer (first use `M-x font-lock-mode` to disable fontification), then hit `C-x C-e` just after the last `)`: ```elisp (let* ((w (window-font-width)) (stipple `(,w 1 ,(apply #'unibyte-string (append (make-list (1- (/ (+ w 7) 8)) ?\0) '(1)))))) (insert "\n" (propertize (concat (make-string 15 ?\s) "THIS IS A TEST" (make-string 15 ?\s)) 'face `(:background "red" :foreground "blue" :stipple ,stipple)))) ``` which should then look something like: <img width="668" alt="image" src="https://github.com/jdtsmith/indent-bars/assets/93749/dd0f65f5-3cdc-4865-a66d-41365cecadd0"> If you determine that stipples do not work in your Emacs, consider upgrading to a version which supports them, or setting `indent-bars-prefer-character=t`. #### Per-buffer stipple offsets To get the stipple bars in the right place, `indent-bars` must consider the starting horizontal pixel position of the current window, and adjust the stipple pattern accordingly. It does this automatically, per buffer, so you shouldn't ever notice problems, even when re-sizing or re-arranging windows, changing font size, etc. There is one rare corner case, however: showing the *same buffer* side by side in Emacs versions which support pixel-level window width/offsets (e.g. emacs-mac) can lead to unexpected bar positions in the non-active buffer, since the stipple offset in the remapped face applies *per-buffer*, not per-window. I.e. it can't be correct for the same buffer in left and right windows at the same time. Options are living with this, switching to [character-based bars](#non-stipple-display), or (for Emacs >=29) cloning an indirect buffer instead of visiting the same buffer (which has other advantages, like an independent region). Note that Emacs 28 and earlier have a bug which results in cloned buffers sharing the same face remapping list as their parent; this is fixed in Emacs 29.
### Advantages/Disadvantages #### Advantages of stipples - Custom appearance and position within the character is possible — [examples](examples.md). - Fastest option: does not need to apply display properties. - Results in continuous lines even when `line-spacing` is non-nil (vs. gaps even with box characters). #### Advantages of character bar display - Works equally for terminal and GUI. - Works even for emacs ports which do not support or mishandle stipple display (see [Compatibility](#compatibility)). # Related Packages - [indent-guide](https://github.com/zk-phi/indent-guide): An older package that uses overlays with `|` characters. Some reports of performance concerns. Incompatible with company and other related in-buffer modes. - [highlight-indentation-mode](https://github.com/antonj/Highlight-Indentation-for-Emacs): Uses overlays to draw indentation guides, and includes a current indentation mode. Partial support for blank line guides. `indent-bars` adapts the indentation guessing function from this mode. - [highlight-indent-guides](https://github.com/DarthFennec/highlight-indent-guides): a highly configurable mode for indentation highlight, with color and style options, as well as current depth highlighting. - [hl-indent-scope](https://codeberg.org/ideasman42/emacs-hl-indent-scope): Highlights indentation based on language scope - requiring support for each language, uses overlays to draw indentation guides. - [visual-indentation-mode](https://github.com/skeeto/visual-indentation-mode): Full character-based alternating color indentation guides. Package is now archived. ## Why a new package? None of the existing packages: 1. were fast enough with large files (including current depth highlighting) 2. had enough guide appearance configurability 3. were able to support depth-based coloring 4. offered robust support for guides on blank lines
82
2
verytinydever/sockets-uploadFile
https://github.com/verytinydever/sockets-uploadFile
null
# sockets-uploadFile
16
0
feisuanyz/Frontend-ADT
https://github.com/feisuanyz/Frontend-ADT
Visual Development | Flexible Data Configuration | Multi-platform Compatibility
Frontend Automated Development Tool (Frontend ADT) ----------------------------------- Language: [English](https://github.com/feisuanyz/Frontend-ADT/blob/main/README.md) | [中文](https://github.com/feisuanyz/Frontend-ADT/blob/main/READMEcn.md) Client Download: [For Windows](https://download.feisuanyz.com/release/SoFlu-Page_latest.exe) | [For MacOS](https://download.feisuanyz.com/release-mac/SoFlu-Page_latest.dmg) Installation Environment: | Category | Requirement | |----------|----------------| | Operating System | Windows 7 and above 64-bit or MacOS | | CPU | i5 or above (Recommended) | | RAM | 16 GB or above (Recommended) | | Disk | 1 GB or above | For previous client versions and installation instructions, please refer to [Frontend Installation Resource](https://github.com/feisuanyz/Frontend-ADT/tree/main/.%20Frontend%20Installation%20Resource). =============================================== #### Product Introduction Frontend Automated Development Tool (a.k.a. Frontend ADT) is an efficient, secure, and stable tool that helps users quickly build frontend interfaces. It significantly reduces the development barrier, enabling individuals without coding knowledge to customize page development based on their own requirements, thus lowering personnel costs for enterprises while improving development efficiency and reducing development cycles. ##### 1. Features a) Visual Development Frontend ADT provides a visual and configurable development environment with a rich library of page components. Users can quickly develop frontend interfaces by dragging and configuring different components. b) Simplified Frontend-Backend Data Integration Frontend ADT standardizes the interaction between frontend user interfaces and backend data formats, which simplifies the process of integrating frontend and backend data. c) Various Development Templates Frontend ADT offers various templates, including application templates, page templates, and block templates. Users can choose suitable templates based on different business scenarios and interaction effects, allowing for easy customization of page designs and greatly enhancing development efficiency. Additionally, users can publish commonly used application features or completed applications to the template marketplace for quick reuse and application building. d) Multiple Data Integration Methods Frontend ADT supports integration not only with the Java Automated Development Tool (a.k.a. Java ADT) but also with third-party platforms. By providing relevant server information, seamless data integration can be achieved. e) Multi-platform Compatibility Frontend ADT supports application development for web, H5, WeChat Mini Programs, WeChat Official Accounts, and Enterprise WeChat, with future integration planned for Android and iOS platforms. Applications developed are independent of the platform itself and can be deployed in a runtime environment by downloading the deployment package. ##### 2. Target Users Enterprise users and individuals who want to quickly build frontend application systems. ##### 3. Application Scenarios Suitable for various application scenarios that require frontend development, without any limitations on specific scenarios. #### Installation Steps a) Double-click or right-click to open the installation package. ![image](https://github.com/feisuanyz/Frontend-adp/assets/79617492/8a75c424-607c-49be-91f2-17b50ee37e08) b) For Windows 10 systems, a warning may appear. Click "More Info" and then click "Run Anyway".
![image](https://github.com/feisuanyz/Frontend-adp/assets/79617492/b134e589-7162-4fe4-a1f3-0e447e34b5bc) c) After launching the .exe file, installation will start. You can choose to install it for the current user or all users. Proceed to the next step, and you can see the currently installed version. ![image](https://github.com/feisuanyz/Frontend-adp/assets/79617492/42ed4c00-0424-419b-830c-caace9d45fe5) d) Select the installation directory and proceed with the installation. ![image](https://github.com/feisuanyz/Frontend-adp/assets/79617492/963d8e63-8db4-4053-a2dd-b992695cc79e) e) Click "Finish" after installation is complete. ![image](https://github.com/feisuanyz/Frontend-adp/assets/79617492/f29f4557-9c76-46a9-96a9-11a6afa67161) f) Once installed, the frontend development tool will automatically launch. ![image](https://github.com/feisuanyz/Frontend-adp/assets/79617492/c88424ba-b24d-47dd-b4cf-23f02c354d1a) **Official Document** ----------------------------------- - Product: [Text Tutorial](https://feisuanyz.com/support/helpCenter/) - Video: [Video Tutorial](https://feisuanyz.com/shortVideo/list/) **Community** ----------------------------------- - Join us in the WeChat group ![WechatGroup](https://github.com/feisuanyz/SoFlu-adp/blob/main/images/QRCode.PNG) <br><br> - Feedback: [Provide an Issue](https://github.com/feisuanyz/Java-ADT/issues) - Welcome to the SoFlu community to make frontend development better!
12
1
markemicek/ComfyUI-SDXL-Workflow
https://github.com/markemicek/ComfyUI-SDXL-Workflow
null
# ComfyUI-SDXL-Workflow First, you will need to have ComfyUI set up on your system; after that, it's just a simple drag and drop. Drag the JSON file labeled MarkDiffusionV1-55 (https://github.com/markemicek/ComfyUI-SDXL-Workflow/blob/main/MarkDiffusionV1-55.json) into your ComfyUI browser and it will automatically set up! Here is a great tutorial if you don't have ComfyUI set up already. Once you get to the step that asks you to import a JSON file, you would instead come back to this page and import mine. ComfyUI Setup Tutorial: https://www.reddit.com/r/StableDiffusion/comments/14sacvt/how_to_use_sdxl_locally_with_comfyui_how_to/
21
0
Umer-Shah-98/project-02
https://github.com/Umer-Shah-98/project-02
null
# project-02
10
1
PB2204/Derivative-Calculator
https://github.com/PB2204/Derivative-Calculator
Derivative Calculator is a web app written using JavaScript. It uses libraries like math.js and Plotly.js for computing the derivative of the expression and plotting the graphs.
# Derivative-Calculator Derivative Calculator is a web app written using JavaScript. It uses libraries like math.js and Plotly.js for computing the derivative of the expression and plotting the graphs.
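For readers unfamiliar with what that computation involves, here is a small conceptual sketch in Python (using sympy, assuming it is installed) of symbolically differentiating an expression and sampling both curves for plotting. The actual app does this in the browser with math.js and Plotly.js, so none of the code below is taken from the repository.

```python
import sympy as sp  # conceptual analogue only; the web app relies on math.js + Plotly.js

x = sp.symbols("x")
expr = sp.sin(x) * x**2            # an expression a user might type into the calculator
derivative = sp.diff(expr, x)      # symbolic differentiation
print(derivative)                  # x**2*cos(x) + 2*x*sin(x)

# Evaluating both curves on a grid of points is essentially what the plotting step amounts to.
f = sp.lambdify(x, expr)
df = sp.lambdify(x, derivative)
samples = [(t / 10, f(t / 10), df(t / 10)) for t in range(-50, 51)]
```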
17
0
Antoinegtir/flutter-icons
https://github.com/Antoinegtir/flutter-icons
Your vscode extension that displays a large range of icons for a clean architecture of your Flutter project!
# Flutter Icons ---- ## `vscode extension` that displays a large range of icons for a clean architecture of your Flutter project! ## <a href="https://marketplace.visualstudio.com/items?itemName=AntoineGtr.flutter-icons">Click here to install</a> | Before | After | |------|------| ![](https://raw.githubusercontent.com/Antoinegtir/flutter-icons/main/assets/before.png)|![](https://raw.githubusercontent.com/Antoinegtir/flutter-icons/main/assets/after.png) --- ## Contributing Feel free to read the <a href="https://github.com/Antoinegtir/flutter-icons/blob/main/CONTRIBUTING.md">Contributing</a> guide if you want to add some Flutter icons that I might have forgotten! ## Credits All folders & files at the top were created and implemented in the extension by @Antoinegtir using the Procreate software. If you like this extension, do not hesitate to help us by starring the repo; we want this extension to become the Flutter icon reference for helping people ❤️. The rest of the files & folders come from the following beautiful library: <a href="https://github.com/PKief/vscode-material-icon-theme">Material Icon Theme</a>; they have done awesome work! ## Cookbook 📖 Check the following <a href="https://github.com/Antoinegtir/flutter-icons/wiki/⚙%EF%B8%8F-Develop">Wiki</a> if you want to run the extension locally and start developing your own! ## License MIT license, see `LICENSE`.
13
0
huozhi/rollup-plugin-swc-preserve-directives
https://github.com/huozhi/rollup-plugin-swc-preserve-directives
This is a rollup plugin that uses SWC to help preserve shebang and string directives.
# rollup-swc-preserve-directives This is a rollup plugin that uses SWC to help preserve shebang and string directives. ## Install ```bash npm install rollup-swc-preserve-directives # You also need to install @swc/core as a peer dependency npm install @swc/core ``` ## Usage ```js import swcPreserveDirectives from 'rollup-swc-preserve-directives'; export default { input: './src/index.js', output: { file: './dist/index.js', format: 'cjs' }, plugins: [ swcPreserveDirectives() ] } ``` ## License MIT
13
1
limithit/Mask2Background
https://github.com/limithit/Mask2Background
Mask2Background for Stable Diffusion Web UI
# Mask2Background for Stable Diffusion Web UI This extension takes a PNG image, replaces its transparent area with white, generates a black mask, and sends the result to img2img, where you can enter a prompt to generate a new background. In short, it gives your product a new background! ## Your png image must contain transparent pixels [version](https://github.com/limithit/Mask2Background/) ## Installation To install the software, please follow these steps: * Open the `Extensions` tab on AUTOMATIC1111's [Stable Diffusion Web UI](https://github.com/limithit/Mask2Background.git). * Select the `Install from URL` option. * Enter `https://github.com/limithit/Mask2Background.git` in the `URL for extension's git repository` field. * Click on the `Install` button. * Once installation is complete, restart the Web UI. * Note: This extension supports v1.4.0 or higher of AUTOMATIC1111's Stable Diffusion Web UI. ## How to update * Delete the `extensions/Mask2Background` directory and then restart the Web UI. * Repeat the first installation steps ## Usage * Drag and drop your image onto the input image area. * Click on the `Run Fill the background` button. * Use sketching to mark the area you want to inpaint. You can undo and adjust the pen size. * Click on the `Create mask` button. The mask will appear in the selected mask image area. ### Mask only Tab * Gives the ability to just save the mask without any other processing, so it's then possible to use the mask in img2img's `Inpaint upload` with any model/extensions/tools you already have in your AUTOMATIC1111. * `Get mask` button: Save the mask as an RGB image. * After pressing the `Get mask` button, you can use the `Send to img2img inpaint` button under the mask image to send both the input image and the mask to the img2img tab. ![UI image](img3.png) ![UI image](img.png) ![UI image](img2.png) ## Auto-saving images * The inpainted image will be automatically saved in the folder that matches the current date within the `outputs/Mask2Background` directory. * You can switch to the `outputs/img2img-images` directory via the `Mask2Background` section found in the `Settings` tab on the Web UI. ## License The source code is licensed under the [Apache 2.0 license](LICENSE).
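To make the alpha-to-mask step concrete, here is a small conceptual sketch in Python using Pillow. It is not the extension's actual code; the function name, the file names, and the mask polarity (product in black, formerly transparent background in white) are assumptions made for illustration.

```python
from PIL import Image  # assumes Pillow is installed; conceptual sketch only, not the extension's code


def split_product_and_mask(png_path: str):
    """Flatten transparent pixels to white and build a mask from the alpha channel,
    which is conceptually what gets handed to img2img."""
    rgba = Image.open(png_path).convert("RGBA")
    alpha = rgba.getchannel("A")

    # Product image: the transparent background is replaced with white.
    white_bg = Image.new("RGB", rgba.size, (255, 255, 255))
    white_bg.paste(rgba, mask=alpha)

    # Mask: product area in black, background (formerly transparent) in white.
    mask = alpha.point(lambda a: 0 if a > 0 else 255)
    return white_bg, mask


product, mask = split_product_and_mask("product.png")
product.save("product_on_white.png")
mask.save("background_mask.png")
```

In AUTOMATIC1111's inpainting convention, the white regions of a mask are the ones that get regenerated, which is why the background is white in this sketch.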
13
1
IsaMarvin/git-github_learning_hub
https://github.com/IsaMarvin/git-github_learning_hub
Welcome to my personal Git and GitHub notes repository. This project is all about exploring Git, a powerful tool for tracking changes in coding projects, and GitHub, the go-to platform for collaborating and sharing code with others. Here, you'll find a comprehensive collection of my thoughts and learnings on these topics.
# Git and GitHub A Beginner-Friendly Guide to Git and GitHub ✨🌟 ## Introduction Welcome to the Git and GitHub repository! This project aims to provide you with a complete understanding of Git, a powerful tool for tracking changes in your coding projects, and GitHub, a popular platform for collaboration and sharing code with others. Git allows you to keep a record of all the changes you make to your code, helping you easily track and manage different versions of your project. With Git, you can experiment with new features, fix bugs, and revert changes if needed, all while maintaining a clean and organized history of your code. GitHub, on the other hand, provides a platform for hosting your Git repositories and enables seamless collaboration with other developers. You can share your code, contribute to open-source projects, and receive feedback from the community. ## Table of Contents - [Introduction to Git](#introduction-to-git-) - [Basic Git Commands](#basic-git-commands-) - [Initializing a Repository](#initializing-a-repository-) - [Writing Good Commit Messages](#writing-good-commit-messages-) ## Introduction to Git ✨👩‍💻 Git is a powerful tool that helps us keep track of changes in our coding projects and collaborate with others. It's like a magical notebook that allows us to work on our projects together, keeping track of who made what changes and when. 📝🤝🌟 <details> <summary>ℹ️ Click Here to Expand</summary> ### Why do we need Git? 🤔 We need Git because it makes working on coding projects easier and less confusing. Here are some reasons why Git is awesome: - **Tracking Changes**: Git helps us keep a record of all the changes we make to our code. It's like having a time machine that can take us back to any version of our program. For example, imagine you're working on an essay, and you want to see what it looked like last week. Git can show you the exact version of your essay from that time. 🔍📅 - 🤝 **Collaboration**: Git allows us to collaborate with others on the same coding project. Just like working on a group project in school, Git lets each person work on their part of the program without getting in each other's way. It helps us avoid conflicts and makes it easier to combine everyone's work. For example, imagine you and your friends are writing a story together. Git ensures that everyone's changes are organized and merged smoothly into the final story. 🤝📚 - 🌿 **Branching and Merging**: Sometimes, we want to experiment with new features or fix bugs without breaking the main program. Git lets us create a separate space called a "branch" where we can work on these ideas. If things don't go well, we can easily go back to the main program without causing any trouble. Once the changes in the branch are ready, they can be easily merged back into the main codebase. This means taking the successful changes from the separate branch and incorporating them into the main project. For example, imagine you have a beautiful garden, and you want to try growing different types of flowers in a special section. Branches allow you to experiment without affecting the rest of the garden. Merging is like bringing beautiful flowers from the special section of your garden and planting them in the main garden, making it more vibrant and diverse. 🌱🌼 - ↩️ **Reverting Changes**: Git helps us fix mistakes or bugs in our code. It's like having an "undo" button for our changes. If we realize we made a mistake, we can easily go back to a previous version and start over. 
For example, imagine you're drawing a picture, and you accidentally make a wrong stroke. Git allows you to erase that stroke and continue from a clean canvas. 🖌️🎨 - 👩‍💻 **Code Reviews**: Git works great with websites like GitHub, where we can share our code with others and learn from their projects too. We can showcase our coding skills, ask for feedback, and even contribute to open-source projects. It's like joining a big community of coders and learning together. For example, imagine you're part of a book club where everyone shares their favorite books. Git and GitHub are like platforms that allow coders to share their favorite code and learn from each other's projects. 🌟📚 ### What is Git? 🤓 Git is a special program that helps us with version control, which means keeping track of all the changes we make to our code over time. It's like a magical notebook that organizes our coding projects. When we use Git, we take snapshots of our project at different points in time. These snapshots are called "commits." Each commit represents a specific version of our project. For example, let's say you're working on an art project, and every time you finish a step, you take a picture of your artwork. Each picture represents a commit, showing the progress of your artwork over time. Git is also "distributed," which means that everyone working on the project has their own copy of the whole project, including all the commits. It's like having your own copy of the artwork and all its pictures on your computer. This way, you can work on the project even when you're offline, and when you're ready, you can share your changes with others. It's like sharing your artwork with your friends so they can see the different stages and contribute their ideas. 🖥️🖼️ To make it even more fun, Git allows us to create different "branches" of our project. These branches are like separate storylines or versions of our project. For example, imagine you and your friends are writing a fantasy story. Git lets each person create their own branch to work on different chapters or characters without getting confused. Once everyone is happy with their changes, Git can combine the different branches and merge them into one final story. 🌳📖 In summary, Git is like a magical notebook that keeps our coding projects organized, makes collaboration easy, and helps us become superhero programmers! 💪🚀 </details> ## Basic Git Commands 🌟 Ready to become a Git expert? Here are some basic Git commands that will make you feel like a coding superhero: <details> <summary>ℹ️ Click Here to Expand</summary> <br> 1. 🎒 `git init`: Imagine you're starting a new coding adventure. The `git init` command is like preparing your backpack for the journey. It initializes a new Git repository, creating a special place to track your code changes. 2. 📚 `git clone`: Let's say your friend has a cool project you want to contribute to. The `git clone` command is like making a copy of their project onto your computer. It's like borrowing a book from your friend's library to read and make your own notes. 3. ➕ `git add`: Think of the `git add` command as putting things in your backpack. It's like adding your code files or changes to the staging area, getting them ready for the next step. 4. 📸 `git commit`: You've completed a task or made an improvement to your code. The `git commit` command is like taking a snapshot of your work and saving it with a message. It's like creating a checkpoint in your adventure, allowing you to look back and see how far you've come. 5. 
🗺️ `git status`: Wondering what's happening with your code? The `git status` command is like a map that shows you where you are in your coding journey. It tells you which files have changed, what's ready to be committed, and any pending tasks. 6. 🌳 `git branch`: Imagine your project has multiple storylines or different paths to explore. The `git branch` command lets you create separate storylines or branches. It's like choosing different adventure paths to work on different features or experiment with ideas. 7. 🔀 `git checkout`: Suppose you're working on different branches or want to go back to a previous version of your code. The `git checkout` command is like changing gears in your adventure. It allows you to switch between branches or time-travel to previous versions of your project. 8. 🤝 `git merge`: Collaboration is an exciting part of coding. The `git merge` command combines different branches or storylines. It's like bringing characters from different adventures together and merging their stories into one. 9. 📦 `git pull`: Let's say your friends have been working on the project, and you want to get their latest changes. The `git pull` command is like receiving a package full of updates. It fetches the latest code from a remote repository and integrates it into your project. 10. 🚀 `git push`: Finally, you're proud of your work and want to share it with others. The `git push` command is like publishing your adventure online for everyone to see. It sends your committed changes to a remote repository, making them accessible to others. Now you're ready to embark on your coding adventures with Git! Explore these commands, experiment with different branches, and collaborate with others. </details> ## Initializing a Repository 🌟🏗️ To start using Git for version control in your project, follow these steps to initialize a repository: <details> <summary>ℹ️ Click Here to Expand</summary> <br> 1. **Open Terminal or Command Prompt**: Launch your preferred terminal application, such as Terminal on macOS or Command Prompt on Windows. 2. **Navigate to Project Directory**: Look for the folder where your project files are stored. It's like finding the secret entrance to your coding adventure. 3. **Initialize the Repository**: Use the magic words `git init` in your terminal or command prompt. It's like casting a spell to create a new Git repository in your project directory. 4. **Add Your Magical Files**: Gather all the files you want to include in your repository. Use the command `git add <filename>` to add them one by one. It's like picking up magical artifacts and preparing them for your quest. 5. **Commit Your Changes**: Capture the current state of your project with a special message. Say `git commit -m "Initial commit"` to create your first commit. It's like sealing your magical items in a treasure chest and leaving a note about what they're for. 6. **Remote Repository (Optional)**: If you want to share your coding magic with others or keep a backup in the cloud, create a remote repository on platforms like GitHub, GitLab, or Bitbucket. It's like having a secret magical castle where you can store your spells. With these steps, you have successfully initialized a Git repository for your project. You can now start tracking changes, creating branches, and collaborating with others using Git. </details> ## Writing Good Commit Messages 👌✍️ Writing clear and descriptive commit messages is essential for effective collaboration and maintaining a clean commit history. 
Here are some tips to help you write good commit messages: <details> <summary>ℹ️ Click Here to Expand</summary> <br> 1. ✨ **Be Clear and Concise**: Make your commit message clear and concise. Use simple and specific language to describe the purpose of the commit. Avoid vague or ambiguous messages that can lead to confusion. 2. 🌟 **Separate Subject and Body**: Structure your commit message with a subject and, if necessary, a body. The subject should be a brief summary (usually 50 characters or less) that conveys the main idea of the commit. The body can provide additional details or explanations. 3. 🚀 **Start with an Imperative Verb**: Begin the subject line with an imperative verb to indicate what the commit does. For example, use words like "Add," "Fix," "Update," or "Refactor." This helps provide clarity and consistency in your commit messages. 4. 🔍 **Provide Context**: Explain why the commit is necessary and provide relevant context. Describe the problem or issue being addressed and how the commit solves or improves it. This helps others understand the purpose and impact of the commit. 5. 🎯 **Keep it Relevant**: Focus on the specific changes made in the commit. Avoid including unrelated changes or mentioning every file affected. Keep the commit message focused on the main purpose of the commit. 6. 📚 **Use Proper Grammar and Punctuation**: Maintain good grammar, spelling, and punctuation in your commit messages. This enhances readability and professionalism. Review your messages before committing to ensure accuracy. 7. 🕒 **Use Present Tense**: Write commit messages in the present tense, as if you are describing the current state of the codebase. For example, use "Fix a bug" instead of "Fixed a bug." This creates a sense of consistency and clarity. 8. 📏 **Consider the 50/72 Rule**: Keep your commit messages within the recommended 50 characters for the subject line and 72 characters for the body. This ensures that messages are readable in various contexts, such as in commit logs or on web interfaces. 9. 📎 **Reference Relevant Issues**: If your commit relates to an issue or feature request, include a reference to it in the commit message. For example, use "Fix #123" to link the commit to issue number 123. This helps track changes and provides additional context. 10. 📝 **Review and Edit**: Before finalizing your commit, review and edit your commit message. Make sure it accurately represents the changes and follows the guidelines mentioned above. Taking a moment for this step ensures a clean and meaningful commit history. Remember, good commit messages improve collaboration and make it easier for others to understand the history and purpose of your changes. Aim for clarity, relevance, and professionalism in your commit messages. For more in-depth guidance on writing good commit messages, refer to this tutorial: [How to Write a Git Commit Message](https://cbea.ms/git-commit/) </details> ## Happy coding and happy Git adventures!✨🚀 Remember, with Git by your side, you're equipped to conquer any coding challenge and collaborate with fellow developers. May your commits be meaningful, your branches be fruitful, and your merges be seamless. Keep exploring, keep learning, and keep sharing your coding magic with the world. Wishing you success and enjoyment in your coding journey!
12
2
Flutterando/minicore
https://github.com/Flutterando/minicore
Flutter/Dart Architecture proposal inspired by Clean Architecture.
# MiniCore Arch Flutter/Dart Architecture proposal inspired by Clean Architecture. ![Image 1](imgs/image2.png) ## Clean Dart Proposal If you need something more robust, try Clean Dart! - [pt-BR](1.0/README.md) - [en-US](1.0/README_en.md) # Start Using Flutter as an example, we will then have three layers maintaining the “Plugin Architecture”, with the main focus on the State of the Application, which is the layer that hosts the events/actions for state changes. ![Image 1](imgs/image1.png) The architecture proposal aims to decouple the outermost layers and preserve the Business Rule. ## UI The **UI** Layer is responsible for declaring the application's inputs, outputs and interactions. Using Flutter as an example, we will host the Widgets and Pages; in the backend, as an example, it would be in this layer where we would place the Handlers or Commands of our API. ## INTERACTOR The **Interactor** layer will host the application's Business Rules along with their states. The core of the layer will be state elaboration and scheduling through some state management approach. Taking a Repository as an example, we will only have to have the interfaces contract (Abstractions) and the responsibility for implementing this object will have to be transferred to another lower layer. ## DATA This layer supports the **Interactor** layer by implementing its interfaces. To do this, it adapts external data so that it can fulfill the domain's contracts. Most likely in this layer we will implement some Repository or Services interface that may or may not depend on external data such as an API or access to some Hardware such as Bluetooth. In order for the Repository to be able to process and adapt external data, we must create contracts for these services in order to pass the implementation responsibility to the lowest layer of our architecture. Basically, the **DATA** layer should contain everything that will have a high chance of being changed without the programmer being able to intervene directly in the project. # Design Patterns ## Isolate Layers - Service The `Service` pattern will be used for code types that don't have a predefined pattern but need to be separated. [Service layer pattern documentation](https://en.wikipedia.org/wiki/Service_layer_pattern) ## Dependency Injection Necessary to apply the Dependency Inversion Principle (DIP). [Flutterando Video - Dependency Injection](https://www.youtube.com/watch?v=KpPnDHpgHnA) ## State management In cases of multiple successful states, the _[State Pattern](https://refactoring.guru/design-patterns/state)_ can be used: ```dart sealed class UserState {} final class UnregisteredUserState implements UserState {...} final class RegisteredUserState implements UserState {...} ``` Use any state management approach to propagate states.
Suggestions: - [ValueNotifier](https://api.flutter.dev/flutter/foundation/ValueNotifier-class.html?gclid=Cj0KCQjwkqSlBhDaARIsAFJANki5MzNFMZ_zHkydtK6igQyyyDdJHteXp3steWclG70LsnJYFiE98JsaAqebEALw_wcB&gclsrc=aw.ds) - [Triple](https://triple.flutterando.com.br/) - [ASP](https://github.com/Flutterando/asp) - [BLoC/Cubit](https://bloclibrary.dev/#/) - [MobX](https://pub.dev/packages/mobx) <br> ## Adaptation and conversion Data type conversions should be made using the `Adapter` pattern.<br> [Adapter Pattern documentation](https://refactoring.guru/design-patterns/adapter) <br> ## External API Access (REST OR GRAPHQL) `Repository Pattern` with `Datasource`.<br> [Repository Documentation from Microsoft](https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design) <br>OR<br> `Gateway Pattern`.<br> [Martin Fowler Gateway definitions](https://martinfowler.com/articles/gateway-pattern.html) <br> ## Data Transfer Object Can be used to transfer data between layers.<br> [Martin Fowler Text about DTO](https://martinfowler.com/eaaCatalog/dataTransferObject.html) # Tests Tests must follow the triple-A pattern (Arrange, Act, Assert). [Triple-A Article](https://medium.com/@pjbgf/title-testing-code-ocd-and-the-aaa-pattern-df453975ab80) Example: ```dart test('should execute a sum correctly', (){ // arrange int number1 = 2; int number2 = 1; // act final result = sumFunction(number1, number2); // assert expect(result, 3); }); ``` <br> ## Test description The test description should represent the expected result, according to the action performed. You should _NOT_ use descriptions that state the obvious; for example, when the result is expected to be a List, avoid a description such as: "Should return a `List<Product>` object". <br> ## Test group The groups must be named according to the class name, which may or may not be followed by the method. At the end of the description, you must add " | " (space, pipe, space). <br> Store example: ```dart group('ProductStore | ', (){ // all ProductStore`s test }); ``` Repository example: ```dart group('ProductRepository.fetchAll | ', (){ // all ProductRepository.fetchAll`s test }); ``` <br> <br> --- # Tips ## Modularize Obviously we can keep our layers for the entire application, but we can get more out of it by creating Interactor, Data and UI layers for each feature. Example: ``` module ├── data │ ├── datasources │ └── repositories ├── domain │ ├── entities │ └── usecases └── presenter ├── stores ├── controllers ├── pages └── widgets ``` ## Think by layer When developing, start thinking by layer; we shouldn't worry about what's in the **UI** or **DATA** layer at the beginning of the functionality. If we think about the outermost layers, we can end up orienting ourselves (mistakenly) by these layers. Thus, we must get used to developing layer by layer, from the inside out and not the other way around. Perhaps at the beginning of your "Clean" journey some layers may seem "useless"; this happens when our mind is not yet **Thinking in Layers** (or because your Business Rule is too simple for that). ## Unit Testing will be your new UI It is very common for developers to first create their Views so that they can then "test" the Business Rules. But we already have a proper tool for this and a dedicated place to store these tests.
Developing in a "clean" way is in total synergy with TDD (Test Driven Development) as the UI layer will be one of the last things we will think about in the development of our feature. # Sign We appreciate your feedback! If you agree with the "MiniCore Architecture" proposal, leave a Star on this repository. A Star is the same as signing a "clean manifesto" agreeing with this proposal. We are open to suggestions and improvements in documentation! Do this through the issues, our team will be very pleased with your interest in improving this tool for the community. Feel free to open a PR with corrections to the documentation of this proposal. # Examples - [Clean Dart Burgers Cart using BLoC, Cubit, Triple, Asp, MobX, etc](https://github.com/jacobaraujo7/bloc_atom) - Clean Dart Login with Firebase, MobX and Modular - [Clean Dart Github Search with BLoC and Modular](https://github.com/Flutterando/clean-dart-search-bloc) - [Clean Dart Github Search with MobX and Modular](https://github.com/jacobaraujo7/clean-dart-search-mobx) - [Simple App with MiniCore](https://github.com/viniciusddrft/mini_core_exemple) - [Todo App with MiniCore](https://github.com/EdsonMello-code/todoapp) # Links - [Resumo do livro "Arquitetura Limpa"](https://medium.com/@deividchari/desvendando-a-arquitetura-limpa-de-uncle-bob-3e60d9aa9cce) - [Sua Arquitetura está limpa?](https://medium.com/flutterando/sua-arquitetura-est%C3%A1-limpa-clean-architecture-no-flutter-458c68fad120) - [Os tijolos do Clean Architecture](https://www.youtube.com/watch?v=C8mpy3pwqQc) - [The Clean Code Blog](https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html)
28
9
jsj/parrotflow
https://github.com/jsj/parrotflow
Get an answer, not 10 blue links
# Parrotflow 🦜🌊 > Get an answer, not 10 blue links [![download-app](/.README/assets/badges/download-app.svg)](https://parrotflow.com) [![google](/.README/assets/badges/google.svg)](https://parrotflow.com) [![safari](/.README/assets/badges/safari.svg)](https://parrotflow.com) [![haptics](/.README/assets/badges/haptics.svg)](https://parrotflow.com) [![discord](/.README/assets/badges/discord.svg)](https://discord.gg/8FZMaucm) [![github-star](/.README/assets/badges/github-star.svg)](https://github.com/jsj/parrotflow) [![xcode-cloud](/.README/assets/badges/xcode-cloud.svg)](https://parrotflow.com) [![license](/.README/assets/badges/license.svg)](https://parrotflow.com/license) ## Screens ![demo](/.README/assets/screens/demo.gif) ![0](/.README/assets/screens/0.png) ![1](/.README/assets/screens/1.png) --- ![all](/.README/assets/screens/all.png) ## Download [![app-store](/.README/assets/badges/Download_on_the_App_Store_Badge_US-UK_RGB_blk_092917.svg)](https://apps.apple.com/us/app/parrotflow/id6450801102) Requires iOS 16.1 or later.
10
0
remotemcu/adin-llvm-pass
https://github.com/remotemcu/adin-llvm-pass
null
# ADIN LLVM Pass

![logo](img/logo.png)

## Introduction

The **ADIN LLVM pass** is a transform LLVM pass for runtime hooking of memory operations, and a crucial component within the [**ADIN LLVM fork**](https://github.com/remotemcu/adin-llvm). Designed to enhance the capabilities of the LLVM compiler infrastructure, this pass (plugin) facilitates the dynamic modification of memory operations, such as store and load operations, by replacing them with custom hook functions at runtime. By integrating this plugin into your development workflow, you can gain fine-grained control over memory access and inject custom logic into your programs.

## How to Build

See the [**ADIN LLVM fork doc**](https://github.com/remotemcu/adin-llvm).

## Usage

To utilize the memory operation hooking capabilities of the **ADIN LLVM plugin**, you can modify compiled LLVM IR code using the `opt` tool of the [**ADIN LLVM fork**](https://github.com/remotemcu/adin-llvm) with the `-adin` flag. Here's an example to help you understand the process.

Let's assume you have a simple C code file named `example.c`:

```c
int b = 0;

void f(){
    *(int*)0x100 = 1;
    b = *(int*)0x100;
}
```

To compile it into LLVM IR code using Clang, execute the following command:

```shell
clang -S -emit-llvm example.c -o example.ll
```

This command will generate the LLVM IR code file `example.ll` based on your C code:

```llvm
; Function Attrs: noinline nounwind optnone uwtable
define dso_local void @f() #0 {
  store i32 1, i32* inttoptr (i64 256 to i32*), align 4
  %1 = load i32, i32* inttoptr (i64 256 to i32*), align 4
  store i32 %1, i32* @b, align 4
  ret void
}
```

Now, you can use the **ADIN LLVM plugin** to modify the LLVM IR code and add memory operation hooks. Run the following command:

```shell
opt -adin -S example.ll -o adin_modified_example.ll
```

The `-adin` flag indicates that you want to perform memory operation hooking on the input LLVM IR code. The modified LLVM IR code will be written to the `adin_modified_example.ll` file:

```llvm
define dso_local void @f() #0 {
  call void @__adin_store_(i8* inttoptr (i64 256 to i8*), i64 1, i32 32, i32 4)
  %load_i32_ = call i64 @__adin_load_(i8* inttoptr (i64 256 to i8*), i32 32, i32 4)
  %truncated_i32_ = trunc i64 %load_i32_ to i32
  store i32 %truncated_i32_, i32* @b, align 4
  ret void
}
```

In the modified LLVM IR code (`adin_modified_example.ll`), the original store and load operations have been replaced with calls to the `__adin_store_` and `__adin_load_` functions. These are the hook functions provided by the ADIN LLVM fork, which allow you to intercept and modify the behavior of memory operations. You can define and implement these hook functions in C/C++ code to perform any desired modifications or additional actions before or after the memory operations:

* the `__adin_store_` function will be called instead of a regular store operation,
* the `__adin_load_` function will be called instead of a regular load operation.

To implement the **__adin_store_** and **__adin_load_** hook functions in your C/C++ code, you can follow a similar approach to what is done in the [Address Interceptor Lib]. Here's an example:

```c
extern "C" void __adin_store_(llvm_pass_addr pointer, llvm_value_type value, llvm_pass_arg TypeSizeArg, llvm_pass_arg AlignmentArg)
{
    //...
}

extern "C" llvm_value_type __adin_load_(const llvm_pass_addr pointer, llvm_pass_arg TypeSizeArg, llvm_pass_arg AlignmentArg)
{
    //...
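    // Sketch only (not the real Address Interceptor implementation):
    // `pointer` is the intercepted address, `TypeSizeArg` the access width in
    // bits (32 in the IR above) and `AlignmentArg` the alignment in bytes (4 above).
    // Redirect the read wherever it should go and return the loaded value.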
}
```

Finally, you can use the modified LLVM IR code to continue with the compilation process, linking, and generating the final executable or library as needed.

The `opt` utility provided by the [**ADIN LLVM fork**](https://github.com/remotemcu/adin-llvm) also allows you to hook `memmove`, `memcpy`, and `memset` operations in addition to store and load operations. You can enable the hooking of these memory operations using specific options provided by `opt`. Here are the options you can use:

```sh
$ opt --help | grep adin
  -adin-alloca-address-skip          - Skip intercept address on alloca frame (Stack var)
  -adin-check-normal-address-aligment - Checks normal alignment of address attempt
  -adin-mem-function-instructions    - if equal true - intercept memmove/memcpy/memset function, else skip
  -adin-name-callback-load=<string>  - Set name callback of load operation. Default __adin_load_
  -adin-name-callback-memcpy=<string> - Set name callback of memcpy operation. Default __adin_memcpy_
  -adin-name-callback-memmove=<string> - Set name callback of memmove operation. Default __adin_memmove_
  -adin-name-callback-memset=<string> - Set name callback of memset operation. Default __adin_memset_
  -adin-name-callback-store=<string> - Set name callback of store operation. Default __adin_store_
  -adin-simple-global-skip           - Skip intercept address of SIMPLE global var
  -adin-skip-unsupported-instructions - if equal true - skip this unsupported instruction, else throw error
  -adin-verbose-level=<int>          - Set Level of verbose for AddressIntercept Pass
```
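For instance, to also intercept `memcpy`/`memmove`/`memset` and rename the store callback, the options above can be combined like this (a sketch; the exact boolean option syntax may differ between `opt` versions):

```shell
opt -adin -adin-mem-function-instructions=true \
    -adin-name-callback-store=__my_store_hook_ \
    -S example.ll -o adin_modified_example.ll
```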
32
0
levi-nz/vercel-anti-bot
https://github.com/levi-nz/vercel-anti-bot
Reverse engineering Vercel's bot protection
# vercel-anti-bot Reverse engineering and analysis of Vercel's bot protection used on https://sdk.vercel.ai (and potentially more of their platforms). ## Usage The `generate_token` function in `src/lib.rs` takes in the data from the `/openai.jpeg` response, which returns a valid token for usage in the `custom-encoding` header on a protected request. While this repository does not provide request automation, you can generate a token and replay a request from the browser with the generated token. Keep in mind the data returned from `/openai.jpeg` seems to be *very* short-lived. Disclaimer: this repository is intended for criticism only. ### Benchmarks Token generation time: | CPU | Average time | |---------------|--------------| | Apple M2 | 100.66 µs | | Ryzen 9 5950X | 213.37 µs | ## Background I first became aware of this after seeing [this tweet](https://twitter.com/jaredpalmer/status/1675192755763412992?s=20) from Vercel's VP claiming they have reduced costs by 100x since implementing this solution (as well as rate limiting). The tweet claims their solution is "quite promising" and encouraged anyone interested to contact him. This sounds convincing at first, but when taking a look at how their bot protection works, unfortunately it's *very easy*. This is extremely disappointing especially if you read [this reply](https://twitter.com/jaredpalmer/status/1675196288831311876?s=20) claiming Vercel's CTO who previously ran Google Search built this bot protection system. For clarification, a system like this can be built easily by anyone in the cybersecurity space, and a lot of people - including myself - can easily do better. The analysis below explains how the system works and how this repository circumvents it. ## Analysis If you navigate to https://sdk.vercel.ai, open DevTools, navigate to the Sources tab and then use the Search feature at the bottom using this filter: `file:* function useAntibotToken()` You should come across a function in a JavaScript file that looks like this: ```js function useAntibotToken() { let {data, mutate, isValidating} = (0, swr__WEBPACK_IMPORTED_MODULE_0__.ZP)("antibot-token", async()=>{ let response = await fetch("/openai.jpeg") , data = JSON.parse(atob(await response.text())) , ret = eval("(".concat(data.c, ")(data.a)")); return btoa(JSON.stringify({ r: ret, t: data.t })) } , { fallbackData: "", refreshInterval: 6e4, dedupingInterval: 2e3 }); return [data, mutate] } ``` From this code, we can see that: 1) The browser makes a request to https://sdk.vercel.ai/openai.jpeg. 2) The response is base64 decoded and parsed as JSON. 3) The following code is evaluated using the `eval` function: `(c)(data.a)`, where `c` is the `c` property of the JSON object. 4) The function returns a base64 encoded JSON object, with `r` being the evaluated value and `t` being the `t` property from the JSON object. The response from the `/openai.jpeg` request is a large string. 
For this example, we'll be using this one: ``` eyJ0IjoiZXlKaGJHY2lPaUprYVhJaUxDSmxibU1pT2lKQk1qVTJSME5OSW4wLi45UnRnbGU3VmtaVW80N1VwLjZCZkFkYkRnMERuVFJfcDJhb0JhMzhDMktYZHp0bEdKaHppem5kdzBsRGJZUWNLRjRwMjVRckhqYV9ZWG5IY3V2UkhDNURMZFJyTm9iYU5DeThVMXZ2OVVsWnlXdHFsU3VSUEdhdkpsVzNIZnp5VzlRN2JwQUJTMmtQQ1dWWTAuWFd6b1I2Ym5HTmVjaEJESlZZMXB6dyIsImMiOiJmdW5jdGlvbihhKXsoZnVuY3Rpb24oZSxzKXtmb3IodmFyIHQ9eCxuPWUoKTtbXTspdHJ5e3ZhciBpPXBhcnNlSW50KHQoMzA1KSkvMStwYXJzZUludCh0KDMwNykpLzIqKC1wYXJzZUludCh0KDMxMCkpLzMpK3BhcnNlSW50KHQoMzAzKSkvNCstcGFyc2VJbnQodCgyOTkpKS81K3BhcnNlSW50KHQoMzAyKSkvNistcGFyc2VJbnQodCgzMDApKS83KigtcGFyc2VJbnQodCgzMDkpKS84KStwYXJzZUludCh0KDMwMSkpLzkqKC1wYXJzZUludCh0KDMwNCkpLzEwKTtpZihpPT09cylicmVhaztuLnB1c2gobi5zaGlmdCgpKX1jYXRjaHtuLnB1c2gobi5zaGlmdCgpKX19KShyLDEyMjA5MSoxKzY2NTQ3NCstMjkzMzM3KTtmdW5jdGlvbiB4KGUscyl7dmFyIHQ9cigpO3JldHVybiB4PWZ1bmN0aW9uKG4saSl7bj1uLSgtNTM1KzQzMyozKy00NjcpO3ZhciBjPXRbbl07cmV0dXJuIGN9LHgoZSxzKX1mdW5jdGlvbiByKCl7dmFyIGU9W1wiNDYxMzRpbGdWU09cIixcIjI2NjA3NDRqaEdtb1BcIixcIjMzODYwMHhlR25iSFwiLFwiOTY2NjY5aHZRSXBPXCIsXCJMTjEwXCIsXCI2MjA3MnBEZnVOc1wiLFwibG9nMlwiLFwiOGdQZmRJaVwiLFwiNjlmRnNXZmFcIixcImtleXNcIixcIm1hcmtlclwiLFwicHJvY2Vzc1wiLFwiMzQzNDAwNW1EWmN1elwiLFwiNTM0MjQ5MU1HYk93WFwiLFwiMTM1dE9CZGR2XCJdO3JldHVybiByPWZ1bmN0aW9uKCl7cmV0dXJuIGV9LHIoKX1yZXR1cm4gZnVuY3Rpb24oKXt2YXIgZT14O3JldHVyblthL01hdGhbZSgzMDgpXShhKk1hdGhbZSgzMDYpXSksT2JqZWN0W2UoMzExKV0oZ2xvYmFsVGhpc1tlKDI5OCldfHx7fSksZ2xvYmFsVGhpc1tlKDI5NyldXX0oKX0iLCJhIjowLjUyNTY4ODU3Mjk2MDM1NDR9 ``` We can use this simple Python script and run it in the terminal to see what the JSON object is. ```python import base64 import json raw_data = "" # the data you want to decode decoded_data = base64.b64decode(raw_data) data = json.loads(decoded_data) with open("data.json", "w") as f: # change file name to anything you like json.dump(data, f) ``` Now that we've decoded the data, we can see the JSON object: ```json { "t": "eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..9Rtgle7VkZUo47Up.6BfAdbDg0DnTR_p2aoBa38C2KXdztlGJhzizndw0lDbYQcKF4p25QrHja_YXnHcuvRHC5DLdRrNobaNCy8U1vv9UlZyWtqlSuRPGavJlW3HfzyW9Q7bpABS2kPCWVY0.XWzoR6bnGNechBDJVY1pzw", "c": "function(a){(function(e,s){for(var t=x,n=e();[];)try{var i=parseInt(t(305))/1+parseInt(t(307))/2*(-parseInt(t(310))/3)+parseInt(t(303))/4+-parseInt(t(299))/5+parseInt(t(302))/6+-parseInt(t(300))/7*(-parseInt(t(309))/8)+parseInt(t(301))/9*(-parseInt(t(304))/10);if(i===s)break;n.push(n.shift())}catch{n.push(n.shift())}})(r,122091*1+665474+-293337);function x(e,s){var t=r();return x=function(n,i){n=n-(-535+433*3+-467);var c=t[n];return c},x(e,s)}function r(){var e=[\"46134ilgVSO\",\"2660744jhGmoP\",\"338600xeGnbH\",\"966669hvQIpO\",\"LN10\",\"62072pDfuNs\",\"log2\",\"8gPfdIi\",\"69fFsWfa\",\"keys\",\"marker\",\"process\",\"3434005mDZcuz\",\"5342491MGbOwX\",\"135tOBddv\"];return r=function(){return e},r()}return function(){var e=x;return[a/Math[e(308)](a*Math[e(306)]),Object[e(311)](globalThis[e(298)]||{}),globalThis[e(297)]]}()}", "a": 0.5256885729603544 } ``` We can now see that the `c` property is a JavaScript function that has one parameter, `a`, which is the `a` property of the JSON object as we mentioned previously from looking at the `eval` code. The `t` property doesn't appear to be used in the code (at least from what we know so far) and is only used as a field in the encoded JSON object that is returned. 
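For reference, once the challenge result is known, the token the client sends back is just these two fields re-encoded, mirroring the `btoa(JSON.stringify({r: ret, t: data.t}))` step seen earlier. A minimal Python sketch, with `challenge_result` computed by whatever means:

```python
import base64
import json

def build_token(challenge_result, t):
    # Equivalent of btoa(JSON.stringify({r: ret, t: data.t})) in the browser code.
    payload = json.dumps({"r": challenge_result, "t": t}, separators=(",", ":"))
    return base64.b64encode(payload.encode()).decode()
```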
If you take the `c` property and paste it into https://beautifier.io/, the code is now much easier to read: ```js function(a) { function x(e, s) { var t = r(); return x = function(n, i) { n = n - (71 * -137 + 5097 + 4754); var c = t[n]; return c }, x(e, s) } return function(e, s) { for (var t = x, n = e(); [];) try { var i = -parseInt(t(135)) / 1 + parseInt(t(126)) / 2 + -parseInt(t(124)) / 3 * (parseInt(t(128)) / 4) + -parseInt(t(130)) / 5 + parseInt(t(133)) / 6 * (parseInt(t(131)) / 7) + parseInt(t(132)) / 8 + parseInt(t(125)) / 9; if (i === s) break; n.push(n.shift()) } catch { n.push(n.shift()) } }(r, -170842 + -1 * 92122 + 375877), function() { var e = x; return [a * Math[e(127)](a * Math.E), Object[e(134)](globalThis[e(129)] || {}), globalThis[e(136)]] }(); function r() { var e = ["7WUOLfS", "406424fiusCg", "293790OLgwin", "keys", "176487LGrtxs", "data", "69177FwHYUB", "1387242vPbovG", "223906qcnyvM", "log1p", "12xdPxHN", "process", "36410PdKtQR"]; return r = function() { return e }, r() } } ``` As you can probably tell, the code is obfuscated, however fortunately for us the obfuscation used here is https://obfuscator.io/, a public obfuscation tool that has public deobfuscation tools available, and also is pretty easy to reverse engineer yourself if you have experience with JavaScript AST libraries, like SWC or Babel. Unfortunately https://deobfuscate.io/ did not work for me (the browser just froze for a second and produced nothing), so I decided to make my own deobfuscator using SWC, which can be found in the `src/deobfuscate` directory. I first noticed what I call "proxy variables" which is a type of transformation obfuscator.io does. It introduces variables that simply refer to other identifiers (only functions in this case) to make the deobfuscation process more annoying. Take this example: ```js function x() {} var y = x; y(); ``` This code can easily just be: ```js function x() {} x(); ``` This is what the `proxy_vars` transformer does. It removes these extra variables and modifies all `CallExpression` nodes to use the real identifier instead. However, we do also need to be aware of special cases like these: ```js function x() {} var y = x; function doStuff(x) { y(); } ``` If we replaced `y()` with `x()` in this case, we'd be pointing to the `x` parameter, which is incorrect. To see more about how I handled this, take a look at the visitor code yourself in `src/deobfuscate/proxy_vars.rs`. After dealing with the proxy vars, I reversed the string obfuscation. Fortunately for me I already knew how their obfuscation works, but if you don't know, it's pretty simple; an array of strings (the `e` variable in this case) is returned from the function `r` as a *reference*, meaning the returned array can be modified by callers of `r`. An IIFE (Immediately Invoked Function Expression) modifies the array, which is where the `parseInt` stuff comes in: an expression is computed that produces either a number or NaN. If the number doesn't match the second argument (the constant expression `-170842 + -1 * 92122 + 375877`), then the first element of the array is removed and pushed to the back of the array. This continues until the expression evaluates to the correct answer, which then stops the loop. The obfuscated strings (now de-obfuscated) are indexed by the `x` function, which basically gets the string at *i* where *i* in this case is the given argument subtracted by `(71 * -137 + 5097 + 4754)`. 
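Conceptually, that unscrambling loop just rotates the string array until a checksum over its entries matches a hard-coded target; the same idea in a few lines of Python (the `checksum` callable stands in for the `parseInt` expression):

```python
def unscramble(strings, expected, checksum):
    # Mirrors the obfuscator.io loop: n.push(n.shift()) until the checksum matches.
    while checksum(strings) != expected:
        strings.append(strings.pop(0))
    return strings
```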
It's important to note that these expressions change for each script, and the structure of the code can also change slightly, since obfuscator.io introduces some randomness.

After we've reversed the strings, we can replace each of these `CallExpression` nodes with a `StringLiteral` node by computing the real index (using the offset mentioned above) and taking the string from the modified array. With the strings restored and all related code removed, we get this:

```js
function(a) {
    return function() {
        return [
            a / Math["log2"](a * Math["LN10"]),
            Object["keys"](globalThis["process"] || {}),
            globalThis["marker"]
        ];
    }();
};
```

This is a lot more readable, and we can now see what the script is really doing: it returns an array of three elements, the first being a math expression, the second the keys of the `process` object (if it exists), and the third the value of the `globalThis.marker` variable.

After reading this myself, I suspected that the script is not static and was instead randomly generated. I decided to take another payload from a browser request and decode it, which then showed this code:

```js
function(a) {
    return function() {
        return [
            a - Math["log"](a % Math.E),
            Object["keys"](globalThis["process"] || {}),
            globalThis["marker"]
        ];
    }();
};
```

This confirmed my suspicion that the math expression is random; however, the remaining two elements are static and can be hard-coded.

After applying the computed_member_expr transformation to turn expressions like `Math["log"]` into `Math.log`, which makes designing visitors easier, I began making the math_expr visitor. Unfortunately SWC does not have a way of evaluating expressions like the one above, so I designed two functions to handle these math expressions: one function that gets the value of a field (like `Math.PI` -> `3.141592653589793`), and one that computes a function call (like `Math.max(1, 2)` -> `2`). You can see the code for this and how I designed these functions in `src/deobfuscate/math_expr.rs`.

After we've replaced all these fields and calls, we're left with a constant expression like `5 * 7 + 1`, where we can simply use `expr_simplifier`, an SWC visitor that simplifies expressions into a constant value, and then we have the answer to the challenge.

From this point on, all I had to do was design the token generation logic, which can be found in `src/lib.rs`. If you run the benchmark using `cargo +nightly bench`, you can see that the average execution time is very low (for me it was 100.66 µs = 0.10066 ms). Running the same script in node and the browser took around 0.11-0.27 ms, meaning our AST-parsing solution is as fast as, if not faster than, evaluating the JavaScript code.

## Conclusion

Making bot protection that simply evaluates a math expression and queries the keys of the `process` object is a very bad idea (especially since math can be platform-dependent, which would lead to incorrect results server-side). Trying to conceal the token generation request by giving its path an image extension (`jpeg`) is completely laughable and does not stop anyone at all.
14
0
lktlktlkt/ticket-damai
https://github.com/lktlktlkt/ticket-damai
Damai leftover-ticket sniping (H5) / aiohttp
# pick-ticket-damai
Damai leftover-ticket sniping / aiohttp

**For reference and learning purposes only**

### Environment

- Python >= 3.7
- Install dependencies: ```pip install -r requirements.txt -i https://pypi.douban.com/simple```
- Run: python run.py

### Usage

- cookie: [see details](https://github.com/lktlktlkt/ticket-damai/issues/6); damai.example.cookie implements a way to obtain it
- Main configuration (for personal use, configuring via YAML is recommended):

```python
ITEM_ID = None  # id or itemId from the concert URL
CONCERT = 1     # show/session number
PRICE = 1       # price tier; format 1 or [1, 2], mapped to ticket tiers in order. Currently only the `SalableQuantity` class supports a list; otherwise index 0 is used
TICKET = 1      # number of tickets to buy
RUN_DATE = None # custom ticket-grabbing time, kept for presale compatibility; can be omitted if you have priority access or the show has no presale. Format: 20230619122100
COOKIE = None   # required

"""System"""
# You can inherit from `ApiFetchPerform` to customize the purchase logic; example.example3 extends it
PERFORM = 'damai.performer.ApiFetchPerform'
```

- In the code, focus mainly on performer and example3. Usage comments are inside the classes.

### ps
- Log in in advance, and add attendees and a shipping address/phone number in the Damai app ahead of time.
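For illustration only, a filled-in configuration might look like the sketch below; the item id, date and cookie are placeholders, and whether `RUN_DATE` is parsed as a string or an int depends on the project's own code:

```python
ITEM_ID = 123456789            # taken from the itemId in the event URL
CONCERT = 1
PRICE = [1, 2]                 # try the first two price tiers in order
TICKET = 2
RUN_DATE = '20230619122100'    # 2023-06-19 12:21:00
COOKIE = '<your damai cookie>'
PERFORM = 'damai.performer.ApiFetchPerform'
```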
37
8
TalalWasim/Video-FocalNets
https://github.com/TalalWasim/Video-FocalNets
Official repository for "Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition" [ICCV 2023]
# Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition [ICCV 2023] [Syed Talal Wasim*](https://talalwasim.github.io), [Muhammad Uzair Khattak*](https://muzairkhattak.github.io/), [Muzammal Naseer](https://muzammal-naseer.netlify.app/), [Salman Khan](https://salman-h-khan.github.io/), [Mubarak Shah](https://www.crcv.ucf.edu/person/mubarak-shah/), [Fahad Shahbaz Khan](https://scholar.google.es/citations?user=zvaeYnUAAAAJ&hl=en) *Joint first authors [![Website](https://img.shields.io/badge/Project-Website-87CEEB)](https://talalwasim.github.io/Video-FocalNets/) [![paper](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2307.06947) <hr /> > **Abstract:** >*Recent video recognition models utilize Transformer models for long-range spatio-temporal context modeling. Video transformer designs are based on self-attention that can model global context at a high computational cost. In comparison, convolutional designs for videos offer an efficient alternative but lack long-range dependency modeling. Towards achieving the best of both designs, this work proposes Video-FocalNet, an effective and efficient architecture for video recognition that models both local and global contexts. Video-FocalNet is based on a spatio-temporal focal modulation architecture that reverses the interaction and aggregation steps of self-attention for better efficiency. Further, the aggregation step and the interaction step are both implemented using efficient convolution and element-wise multiplication operations that are computationally less expensive than their self-attention counterparts on video representations. We extensively explore the design space of focal modulation-based spatio-temporal context modeling and demonstrate our parallel spatial and temporal encoding design to be the optimal choice. Video-FocalNets perform favorably well against the state-of-the-art transformer-based models for video recognition on three large-scale datasets (Kinetics-400, Kinetics-600, and SS-v2) at a lower computational cost.* ## Table of Contents <!--ts--> * [News](#rocket-News) * [Overview](#overview) * [Visualization](#visualization-first-and-last-layer-spatio-temporal-modulator) * [Environment Setup](#environment-setup) * [Dataset Preparation](#dataset-preparation) * [Model Zoo](#model-zoo) * [Kinetics-400](#kinetics-400) * [Kinetics-600](#kinetics-600) * [Something-Something-v2](#something-something-v2) * [Diving-48](#diving-48) * [ActivityNet-v1.3](#activitynet-v13) * [Evaluation](#evaluation) * [Training](#training) * [Citation](#citation) * [Acknowledgements](#acknowledgements) <!--te--> ## :rocket: News * **(July 13, 2022)** * Training and evaluation codes for Video-FocalNets, along with pretrained models are released. <hr /> ## Overview <p align="center"> <img alt="Overall Architecture" src="figs/overall_architecture.png" width="1200"/> <p align="center"><b>(a) The overall architecture of Video-FocalNets:</b> A four-stage architecture, with each stage comprising a patch embedding and a number of Video-FocalNet blocks. 
<b>(b) Single Video-FocalNet block:</b> Similar to the transformer blocks, we replace self-attention with Spatio-Temporal Focal Modulation.</p> </p> <hr /> <p align="center"> <table> <tr> <td><img alt="Overall Architecture" src="figs/overview_focal_modulation.png" width="98%"></td> <td><img alt="Performance Comparison" src="figs/intro_plot.png" width="98%"></td> </tr> <tr> <td><p align="center"><b>The Spatio-Temporal Focal Modulation layer:</b> A spatio-temporal focal modulation block that independently models the spatial and temporal information.</p></td> <td><p align="center"><b>Comparison for Top-1 Accuracy vs GFlops/view on Kinetics-400.</b></p></td> </tr> </table> </p> ## Visualization: First and Last layer Spatio-Temporal Modulator <p align="center"> <img alt="Visualization Cutting Apple" src="figs/vis/cutting_apple.png" width="900"/> </p> <p align="center"> <img alt="Visualization Scuba Diving" src="figs/vis/scuba_diving.png" width="900"/> </p> <p align="center"> <img alt="Visualization Threading Needle" src="figs/vis/threading_needle.png" width="900"/> </p> <p align="center"> <img alt="Visualization Walking the Dog" src="figs/vis/walking_the_dog.png" width="900"/> </p> <p align="center"> <img alt="Visualization Water Skiing" src="figs/vis/water_skiing.png" width="900"/> </p> ## Environment Setup Please follow [INSTALL.md](./INSTALL.md) for installation. ## Dataset Preparation Please follow [DATA.md](./DATA.md) for data preparation. ## Model Zoo ### Kinetics-400 | Model | Depth | Dim | Kernels | Top-1 | Download | |:----------------:|:----------:|:---:|:-------:|:-----:|:--------:| | Video-FocalNet-T | [2,2,6,2] | 96 | [3,5] | 79.8 | [ckpt](https://drive.google.com/file/d/1wsUjJbPVQd7pf-OocD9mVU8pak0gdBTP/view?usp=sharing) | | Video-FocalNet-S | [2,2,18,2] | 96 | [3,5] | 81.4 | [ckpt](https://drive.google.com/file/d/1gO4_tluuoR4mn2bSQRNyy9_wFCnUSiQ0/view?usp=sharing) | | Video-FocalNet-B | [2,2,18,2] | 128 | [3,5] | 83.6 | [ckpt](https://drive.google.com/file/d/1tc1AKKmvHN7Hzxpd53QsBIMQZmLH8ozX/view?usp=drive_link) | ### Kinetics-600 | Model | Depth | Dim | Kernels | Top-1 | Download | |:----------------:|:----------:|:---:|:-------:|:-----:|:--------:| | Video-FocalNet-B | [2,2,18,2] | 128 | [3,5] | 86.7 | [ckpt](https://drive.google.com/file/d/16u1dij3dde0KmaajiB5lAFy8FaRvQDmS/view?usp=sharing) | ### Something-Something-v2 | Model | Depth | Dim | Kernels | Top-1 | Download | |:----------------:|:----------:|:---:|:-------:|:-----:|:--------:| | Video-FocalNet-B | [2,2,18,2] | 128 | [3,5] | 71.1 | [ckpt](https://drive.google.com/file/d/1MIPLjMVDmYEY5jmJs8pRRIj4gKNVqETg/view?usp=sharing) | ### Diving-48 | Model | Depth | Dim | Kernels | Top-1 | Download | |:----------------:|:----------:|:---:|:-------:|:-----:|:--------:| | Video-FocalNet-B | [2,2,18,2] | 128 | [3,5] | 90.8 | [ckpt](https://drive.google.com/file/d/1MMZeDucN1cfC5MiTGIft8xNfo5358dA2/view?usp=sharing) | ### ActivityNet-v1.3 | Model | Depth | Dim | Kernels | Top-1 | Download | |:----------------:|:----------:|:---:|:-------:|:-----:|:--------:| | Video-FocalNet-B | [2,2,18,2] | 128 | [3,5] | 89.8 | [ckpt](https://drive.google.com/file/d/1Zku86i9Ol1gabqBqf0h1vtL-_H5gglA3/view?usp=sharing) | ## Evaluation To evaluate pre-trained Video-FocalNets on your dataset: ```bash python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> main.py --eval \ --cfg <config-file> --resume <checkpoint> \ --opts DATA.NUM_FRAMES 16 DATA.BATCH_SIZE 8 TEST.NUM_CLIP 4 TEST.NUM_CROP 3 DATA.ROOT path/to/root DATA.TRAIN_FILE 
train.csv DATA.VAL_FILE val.csv ``` For example, to evaluate the `Video-FocalNet-B` with a single GPU on Kinetics400: ```bash python -m torch.distributed.launch --nproc_per_node 1 main.py --eval \ --cfg configs/kinetics400/video_focalnet_base.yaml --resume video-focalnet_base_k400.pth \ --opts DATA.NUM_FRAMES 16 DATA.BATCH_SIZE 8 TEST.NUM_CLIP 4 TEST.NUM_CROP 3 DATA.ROOT path/to/root DATA.TRAIN_FILE train.csv DATA.VAL_FILE val.csv ``` Alternatively, the `DATA.ROOT`, `DATA.TRAIN_FILE`, and `DATA.VAL_FILE` paths can be set directly in the config files provided in the `configs` directory. According to our experience and sanity checks, there is a reasonable random variation of about +/-0.3% top-1 accuracy when testing on different machines. Additionally, the TRAIN.PRETRAINED_PATH can be set (either in the config file or bash script) to provide a pretrained model to initialize the weights. To initialize from the ImageNet-1K weights please refer to the [FocalNets](https://github.com/microsoft/FocalNet) repository and download the [FocalNet-T-SRF](https://github.com/microsoft/FocalNet/releases/download/v1.0.0/focalnet_tiny_srf.pth), [FocalNet-S-SRF](https://github.com/microsoft/FocalNet/releases/download/v1.0.0/focalnet_small_srf.pth) or [FocalNet-B-SRF](https://github.com/microsoft/FocalNet/releases/download/v1.0.0/focalnet_base_srf.pth) to initialize Video-FocalNet-T, Video-FocalNet-S or Video-FocalNet-B respectively. Alternatively, one of the provided pretrained Video-FocalNet models can also be utilized to initialize the weights. ## Training To train a Video-FocalNet on a video dataset from scratch, run: ```bash python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> main.py \ --cfg <config-file> --batch-size <batch-size-per-gpu> --output <output-directory> \ --opts DATA.ROOT path/to/root DATA.TRAIN_FILE train.csv DATA.VAL_FILE val.csv ``` Alternatively, the `DATA.ROOT`, `DATA.TRAIN_FILE`, and `DATA.VAL_FILE` paths can be set directly in the config files provided in the `configs` directory. We also provide bash scripts to train Video-FocalNets on various datasets in the `scripts` directory. ## Citation If you find our work, this repository, or pretrained models useful, please consider giving a star :star: and citation. ```bibtex @article{wasim2023videofocalnets, title={Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition}, author={Syed Talal Wasim and Muhammad Uzair Khattak and Muzammal Naseer and Salman Khan and Mubarak Shah and Fahad Shahbaz Khan}, journal={arXiv:2307.06947}, year={2023} } ``` ## Contact If you have any questions, please create an issue on this repository or contact at [email protected] or [email protected]. ## Acknowledgements Our code is based on [FocalNets](https://github.com/microsoft/FocalNet), [XCLIP](https://github.com/microsoft/VideoX/tree/master/X-CLIP) and [UniFormer](https://github.com/Sense-X/UniFormer) repositories. We thank the authors for releasing their code. If you use our model, please consider citing these works as well.
49
5
alnitak/flutter_soloud
https://github.com/alnitak/flutter_soloud
Flutter audio plugin using SoLoud library and FFI
# Flutter low level audio plugin using SoLoud library

Flutter low level audio plugin using SoLoud library and FFI

[![style: very good analysis](https://img.shields.io/badge/style-very_good_analysis-B22C89.svg)](https://pub.dev/packages/very_good_analysis)

|Linux|Windows|Android|MacOS|iOS|web|
|-|-|-|-|-|-|
|💙|💙|💙|💙|💙|😭|

* Supported on Linux, Windows, Mac, Android, and iOS
* Audio player and microphone capture
* 3D audio with doppler effect
* Multiple voices, capable of playing different sounds simultaneously or even repeating the same sound multiple times on top of each other
* Includes a speech synthesizer
* Supports various common formats such as 8, 16, and 32-bit WAVs, floating point WAVs, OGG, MP3, and FLAC
* Enables real-time retrieval of audio FFT and wave data

<a href="https://www.buymeacoffee.com/marcobavag" target="_blank"><img align="left" src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a><br/>

## Overview

The ***flutter_soloud*** plugin utilizes a [forked](https://github.com/alnitak/soloud) repository of [SoLoud](https://github.com/jarikomppa/soloud), where the [miniaudio](https://github.com/mackron/miniaudio) audio backend has been updated and is located in src/soloud.

For information regarding the SoLoud license, please refer to [this link](https://github.com/alnitak/soloud/blob/f4f089aa592aa45f5f6fa8c8efff64996fae920f/LICENSE).

There are 4 examples:
*(to use the microphone on macOS or iOS you should add the audio input permission in the example app)*

**The 1st** is a simple use-case showing how to play a sound and how to activate the capture.

**The 2nd** aims to show a visualization of frequencies and wave data. The file [**Visualizer.dart**] uses `getAudioTexture2D` to store new audio data into `audioData` on every tick. The video below illustrates how the data is then converted to an image (the upper widget) and sent to the shader (the middle widget). The bottom widgets use FFT data on the left and wave data on the right, represented as a row of yellow vertical containers whose height is taken from `audioData`. `getAudioTexture2D` returns an array of 512x256: each row contains 256 floats of FFT data and 256 floats of wave data, making it possible to write a shader like a spectrogram (shader #8) or a 3D visualization (shader #9). Shaders 1 to 7 use just 1 row of the `audioData`. Therefore, the texture generated to feed the shader should be 256x2 px: the 1st row represents the FFT data, and the 2nd represents the wave data. Since many operations are required for each frame, the CPU and GPU can be under stress, leading to overheating of a mobile device. It seems that sending an image (with `setImageSampler()`) to the shader is very expensive. You can observe this by disabling the shader widget.

https://github.com/alnitak/flutter_soloud/assets/192827/384c88aa-5daf-4f10-a879-169ab8522690

***The 3rd*** example demonstrates how to manage sounds using their handles: every sound should be loaded before it can be played. Loading a sound can take some time and should not be done during gameplay (in a game, for instance). Once a sound is loaded, it can be played, and every instance of that same audio will be identified by its *handle*. The example shows how you can have background music and play a fire sound multiple times.

https://github.com/alnitak/flutter_soloud/assets/192827/92c9db80-80ee-4a27-b6a9-3e089ffe600e

***The 4th*** example shows how to enhance audio with 3D capabilities.
There is a circle with the listener placed in the center, and a moving siren audio source is represented by a little circle which is automatically animated or can be moved by a mouse gesture. The sound volume fades off at the circumference. There is also a doppler effect that can be turned off.

https://github.com/alnitak/flutter_soloud/assets/192827/f7cf9d71-be4f-4c83-99ff-89dbd9378859

## Usage

#### The Player

First of all, *AudioIsolate* must be initialized:

```
Future<bool> start() async {
  final value = SoLoud().startIsolate();
  if (value == PlayerErrors.noError) {
    debugPrint('isolate started');
    return true;
  } else {
    debugPrint('isolate starting error: $value');
    return false;
  }
}
```

When successfully started, a sound can be loaded:

```
Future<SoundProps?> loadSound(String completeFileName) async {
  final load = await SoLoud().loadFile(completeFileName);
  if (load.error != PlayerErrors.noError) return null;
  return load.sound;
}
```

There are 3 convenient methods that can be used instead, in the [SoloudLoadingTool] class:

- ```Future<SoundProps?> loadFromAssets(String path)```
- ```Future<SoundProps?> loadFromFile(String path)```
- ```Future<SoundProps?> loadFromUrl(String url)```

The [SoundProps] class:

```
class SoundProps {
  SoundProps(this.soundHash);

  // the [hash] returned by [loadFile]
  final int soundHash;

  /// handles of this sound. Multiple instances of this sound can be
  /// played, each with their unique handle
  List<int> handle = [];

  /// the user can listen, e.g. when a sound ends, or to key events (TODO)
  StreamController<StreamSoundEvent> soundEvents = StreamController.broadcast();
}
```

The *soundHash* and the *handle* list are then used to call many methods in the *AudioIsolate()* class.

**warning**: when you call a load* method, you get a SoundProps in return. This is the reference to the sound used by SoLoud and it needs to be disposed of when it is no longer needed. When you play a SoundProps, a new handle identifying the new playing instance is created and added to the SoundProps.handle list. This lets you play the sound as many times as you want without calling a load* method again, which can be laggy. To dispose of a sound, call *Soloud().disposeSound* or *Soloud().disposeAllSounds*.

#### Capture from microphone

Start the capture:
```
SoLoud().initCapture();
SoLoud().startCapture();
```
Now it's possible to get audio data. When the mic is no longer needed, it can be stopped:
```
SoLoud().stopCapture();
```
With the audio data it will be simple to do something like in the 1st example:

https://github.com/alnitak/flutter_soloud/assets/192827/b7d0343a-c646-4741-abab-bd88599212d0

### The AudioIsolate instance

The `AudioIsolate` instance has the duty of receiving commands and sending them to a separate `Isolate`, while returning the results to the main UI isolate.

#### Player methods

| Function| Returns| Params| Description|
|---------|---------|---------|--------------------------------------------------------------------------------------------|
| **startIsolate**| PlayerErrors| -| Start the audio isolate and listen for messages coming from it.|
| **stopIsolate**| bool| -| Stop the loop, stop the engine, and kill the isolate. Must be called when there is no more need for the player or when closing the app.|
| **isIsolateRunning**| bool| -| Return true if the audio isolate is running.|
| **initEngine**| PlayerErrors| -| Initialize the audio engine.
Defaults are: Sample rate 44100, buffer 2048, and Miniaudio audio backend.| | **dispose**| -| -| Stop the audio engine.| | **loadFile**| ({PlayerErrors error, SoundProps? sound})| `String` fileName| Load a new sound to be played once or multiple times later.| | **play**| ({PlayerErrors error, SoundProps sound, int newHandle})| `int` soundHash, {<br/>`double` volume = 1,<br/>`double` pan = 0,<br/>`bool` paused = false,<br/>}| Play an already loaded sound identified by [sound].| | **speechText**| ({PlayerErrors error, SoundProps sound})| `String` textToSpeech| Speech from the given text.| | **pauseSwitch**| PlayerErrors| `int` handle| Pause or unpause an already loaded sound identified by [handle].| | **getPause**| ({PlayerErrors error, bool pause})| `int` handle| Get the pause state of the sound identified by [handle].| | **stop**| PlayerErrors| `int` handle| Stop an already loaded sound identified by [handle] and clear it.| | **disposeSound**| PlayerErrors| `int` handle| Stop ALL handles of the already loaded sound identified by [soundHash] and dispose it.| | **setLooping**| -| `int` handle, `bool` enable| This function can be used to set a sample to play on repeat, instead of just playing once.| | **getLength**| ({PlayerErrors error, double length})| `int` soundHash| Get the sound length in seconds.| | **seek**| PlayerErrors| `int` handle, `double` time| Seek playing in seconds.| | **getPosition**| ({PlayerErrors error, double position})| `int` handle| Get the current sound position in seconds.| | **getIsValidVoiceHandle**| ({PlayerErrors error, bool isValid})| `int` handle| Check if a handle is still valid.| | **setVisualizationEnabled**| -| `bool` enabled| Enable or disable getting data from `getFft`, `getWave`, `getAudioTexture*`.| | **getFft**| -| `Pointer<Float>` fft| Returns a 256 float array containing FFT data.| | **getWave**| -| `Pointer<Float>` wave| Returns a 256 float array containing wave data (magnitudes).| | **getAudioTexture**| -| `Pointer<Float>` samples| Returns in `samples` a 512 float array.<br/>- The first 256 floats represent the FFT frequencies data [0.0~1.0].<br/>- The other 256 floats represent the wave data (amplitude) [-1.0~1.0].| | **getAudioTexture2D**| -| `Pointer<Pointer<Float>>` samples| Return a floats matrix of 256x512.<br/>Every row is composed of 256 FFT values plus 256 wave data.<br/>Every time is called, a new row is stored in the first row and all the previous rows are shifted up (the last will be lost).| | **setFftSmoothing**| -| `double` smooth| Smooth FFT data.<br/>When new data is read and the values are decreasing, the new value will be decreased with an amplitude between the old and the new value.<br/> This will result in a less shaky visualization.<br/>0 = no smooth<br/>1 = full smooth<br/>The new value is calculated with:<br/>`newFreq = smooth * oldFreq + (1 - smooth) * newFreq`| #### 3D audio methods | Function| Returns| Params| Description| |---------|---------|---------|--------------------------------------------------------------------------------------------| | **play3d**| `int` handle| `int` soundHash, `double` posX, `double` posY, `double` posZ,<br/>{`double` velX = 0,<br/>`double` velY = 0,<br/>`double` velZ = 0,<br/>`double` volume = 1,<br/>`bool` paused = false}| play3d() is the 3d version of the play() call. 
Returns the handle of the sound, 0 if error| | **set3dSoundSpeed**| -| `double` speed| Since SoLoud has no knowledge of the scale of your coordinates, you may need to adjust the speed of sound for these effects to work correctly. The default value is 343, which assumes that your world coordinates are in meters (where 1 unit is 1 meter), and that the environment is dry air at around 20 degrees Celsius.| | **get3dSoundSpeed**| `double`| -| Get the sound speed.| | **set3dListenerParameters**| -| double posX,`double` posY,<br/>`double` posZ,<br/>`double` atX,<br/>`double` atY,<br/>`double` atZ,<br/>`double` upX,<br/>`double` upY,<br/>`double` upZ,<br/>`double` velocityX,<br/>`double` velocityY,<br/>`double` velocityZ| You can set the position, at-vector, up-vector and velocity parameters of the 3d audio listener with one call.| | **set3dListenerPosition**| -| `double` posX,<br/> `double` posY,<br/> `double` posZ| Get the sound speed.| | **set3dListenerAt**| -| `double` atX,<br/> `double` atY,<br/> `double` atZ| You can set the "at" vector parameter of the 3d audio listener.| | **set3dListenerUp**| -| `double` upX,<br/> `double` upY,<br/> `double` upZ| You can set the "up" vector parameter of the 3d audio listener.| | **set3dListenerVelocity**| -| `double` velocityX,<br/> `double` velocityY,<br/> `double` velocityZ| You can set the listener's velocity vector parameter.| | **set3dSourceParameters**| -| `int` handle,<br/>`double` posX,<br/> `double` posY,<br/> `double` posZ,<br/>`double` velocityX,<br/> `double` velocityY,<br/> `double` velocityZ| You can set the position and velocity parameters of a live 3d audio source with one call.| | **set3dSourcePosition**| -| `int` handle,<br/>`double` posX,<br/> `double` posY,<br/> `double` posZ| You can set the position parameters of a live 3d audio source.| | **set3dSourceVelocity**| -| `int` handle,<br/>`double` velocityX,<br/> `double` velocityY,<br/> `double` velocityZ| You can set the velocity parameters of a live 3d audio source.| | **set3dSourceMinMaxDistance**| -| `int` handle,<br/>`double` minDistance,<br/> `double` maxDistance| You can set the minimum and maximum distance parameters of a live 3d audio source.| | **set3dSourceAttenuation**| -| `int` handle,<br/>`int` attenuationModel,<br/> `double` attenuationRolloffFactor| You can change the attenuation model and rolloff factor parameters of a live 3d audio source.<br/>See https://solhsa.com/soloud/concepts3d.html | | **set3dSourceDopplerFactor**| -| `int` handle,<br/>`double` dopplerFactor| You can change the doppler factor of a live 3d audio source.<br/>See https://solhsa.com/soloud/concepts3d.html | The `PlayerErrors` enum: |name|description| |---|---| |***noError***|No error| |***invalidParameter***|Some parameter is invalid| |***fileNotFound***|File not found| |***fileLoadFailed***|File found, but could not be loaded| |***dllNotFound***|DLL not found, or wrong DLL| |***outOfMemory***|Out of memory| |***notImplemented***|Feature not implemented| |***unknownError***|Other error| |***backendNotInited***|Player not initialized| |***nullPointer***|null pointer. 
Could happen when passing a non-initialized pointer (with calloc()) to retrieve FFT or wave data|
|***soundHashNotFound***|The sound with the specified hash is not found|
|***fileAlreadyLoaded***|The sound file has already been loaded|
|***isolateAlreadyStarted***|Audio isolate already started|
|***isolateNotStarted***|Audio isolate not yet started|
|***engineNotInited***|Engine not yet started|

*AudioIsolate()* has a `StreamController` which can be used, for now, only to know when a sound handle reaches the end:

```
StreamSubscription<StreamSoundEvent>? _subscription;
void listedToEndPlaying(SoundProps sound) {
  _subscription = sound!.soundEvents.stream.listen(
    (event) {
      /// Here the [event.handle] of [sound] has naturally finished
      /// and [sound.handle] doesn't contain [event.handle] anymore.
      /// Not reached when calling [SoLoud().stop()]
      /// or [SoLoud().disposeSound()]
    },
  );
}
```

It also has a `StreamController` to monitor when the engine starts or stops:

```
SoLoud().audioEvent.stream.listen(
  (event) {
    /// event is of [AudioEvent] enum type:
    /// [AudioEvent.isolateStarted] the player is started and sounds can be played
    /// [AudioEvent.isolateStopped] player stopped
    /// [captureStarted] microphone is active and audio data can be read
    /// [captureStopped] microphone stopped
  },
);
```

#### Capture methods

| Function| Returns| Params| Description|
|---------|---------|---------|--------------------------------------------------------------------------------------------|
| **listCaptureDevices**| CaptureDevice| - | List available input devices. Useful on desktop to choose which input device to use.|
| **initCapture**| CaptureErrors| - | Initialize the input device with [deviceID].<br/>Returns [CaptureErrors.captureNoError] if no error.|
| **isCaptureInitialized**| bool| - | Get the status of the device.|
| **isCaptureStarted**| bool| - | Returns true if the device is capturing audio.|
| **stopCapture**| CaptureErrors| - | Stop and deinit the capture device.|
| **startCapture**| CaptureErrors| - | Start capturing audio data.|
| **getCaptureAudioTexture2D**| CaptureErrors| - | Return a floats matrix of 256x512.<br/>Every row is composed of 256 FFT values plus 256 wave values.<br/>Every time it is called, a new row is stored in the first row and all the previous rows are shifted up (the last one will be lost).|
| **setCaptureFftSmoothing**| CaptureErrors| `double` smooth | Smooth FFT data.<br/>When new data is read and the values are decreasing, the new value will be decreased with an amplitude between the old and the new value. This will result in a less shaky visualization.<br/><br/>[smooth] must be in the [0.0 ~ 1.0] range.<br/>0 = no smooth<br/>1 = full smooth<br/><br/>the new value is calculated with:<br/>newFreq = smooth * oldFreq + (1 - smooth) * newFreq|

## Contribute

To use native code, bindings from Dart to C/C++ are needed. To avoid writing these manually, they are generated from the header file (`src/ffi_gen_tmp.h`) using [package:ffigen](https://pub.dev/packages/ffigen) and temporarily stored in `lib/flutter_soloud_FFIGEN.dart`. You can generate the bindings by running `dart run ffigen` (a sample configuration sketch follows the steps below).

Since I needed to modify the generated `.dart` file, I followed this flow:

1. Copy the function declarations to be generated into `src/ffi_gen_tmp.h`.
2. The file `lib/flutter_soloud_FFIGEN.dart` will be generated.
3. Copy the relevant code for the new functions from `lib/flutter_soloud_FFIGEN.dart` into `lib/flutter_soloud_bindings_ffi.dart`.
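For context, `dart run ffigen` reads its configuration from `pubspec.yaml` (or a separate `ffigen.yaml`); a rough sketch of the kind of entry this flow implies is shown below. The field values are assumptions, not the project's actual configuration:

```yaml
ffigen:
  name: FlutterSoLoudFfi
  output: lib/flutter_soloud_FFIGEN.dart
  headers:
    entry-points:
      - src/ffi_gen_tmp.h
```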
Additionally, I have forked the [SoLoud](https://github.com/jarikomppa/soloud) repository and made modifications to include the latest [Miniaudio](https://github.com/mackron/miniaudio) audio backend. This backend is in the [new_miniaudio] branch of my [fork](https://github.com/alnitak/soloud) and is set as the default.

#### Project structure

This plugin uses the following structure:

* `lib`: Contains the Dart code that defines the API of the plugin for all platforms.
* `src`: Contains the native source code. Linux, Android and Windows have their own CmakeFile.txt file in their own subdir to build the code into a dynamic library.
* `src/soloud`: contains the SoLoud sources of my fork

#### Debugging

I have provided the necessary settings in the **.vscode** directory for debugging native C++ code on both Linux and Windows. To debug on Android, please use Android Studio and open the project located in the ***example/android*** directory. However, I am not familiar with the process of debugging native code on Mac and iOS.

#### Linux

If you encounter any glitches, they might be caused by PulseAudio. To troubleshoot this issue, you can try disabling PulseAudio within the `linux/src.cmake` file. Look for the line `add_definitions(-DMA_NO_PULSEAUDIO)` and uncomment it (now it is the default behavior).

#### Android

The default audio backend is `miniaudio`, which will automatically select the appropriate audio backend based on your Android version:
- AAudio with Android 8.0 and newer.
- OpenSL|ES for older Android versions.

#### Windows

For Windows users, SoLoud utilizes *Openmpt* through a DLL, which can be obtained from [https://lib.openmpt.org/](https://lib.openmpt.org/). If you wish to use this feature, install the DLL and enable it by modifying the first line in `windows/src.cmake`.

***Openmpt*** functions as a module-playing engine, capable of replaying a wide variety of multichannel music formats (669, amf, ams, dbm, digi, dmf, dsm, far, gdm, ice, imf, it, itp, j2b, m15, mdl, med, mid, mo3, mod, mptm, mt2, mtm, okt, plm, psm, ptm, s3m, stm, ult, umx, wow, xm). Additionally, it can load wav files and may offer better support for wav files compared to the stand-alone wav audio source.

#### iOS

On the simulator, the Impeller engine doesn't work (20 Jul 2023). To disable it, run the following command:

`flutter run --no-enable-impeller`

Unfortunately, I don't have a real device to test it.

#### Web

I put in a lot of effort to make this work on the web! :(

I have successfully compiled the sources with Emscripten. Inside the **web** directory, there's a script to automate the compiling process using the `CmakeLists.txt` file. This will generate **libflutter_soloud_web_plugin.wasm** and **libflutter_soloud_web_plugin.bc**.

Initially, I tried using the [wasm_interop](https://pub.dev/packages/wasm_interop) plugin, but encountered errors while loading and initializing the Module.

Then, I attempted using [web_ffi](https://pub.dev/packages/web_ffi), but it seems to have been discontinued because it only supports the old `dart:ffi API 2.12.0`, which cannot be used here.

## TODOs

Many things can still be done.

The FFT data doesn't match my expectations. Some work still needs to be done on *Analyzer::calcFFT()* in `src/analyzer.cpp`.

|![spectrum1](/img/flutter_soloud_spectrum.png)|![spectrum2](/img/audacity_spectrum.png)|
|:--|:--|
|*flutter_soloud spectrum*|*audacity spectrum*|

For now, only a small portion of the possibilities offered by SoLoud have been implemented.
Look [here](https://solhsa.com/soloud/index.html). * audio filter effects * 3D audio ✅ * TED and SID soundchip simulator (Commodore 64/plus) * noise and waveform generation and much more I think!
27
3
yencarnacion/yt-sum
https://github.com/yencarnacion/yt-sum
This is a Python script that summarizes a youtube video from a YouTube URL
# yt-sum.py - YouTube Video Summarizer

This is a Python script that summarizes a YouTube video, given its URL, using llamaindex.

## Prerequisites

- An OpenAI API key
- Python 3.6 or later

## Installation

Before running the script, you need to install the required Python libraries.

1. Open a terminal.
2. Navigate to the project directory where `requirements.txt` is located.
3. Run the following command:

```bash
pip install -r requirements.txt
```

This will install the required libraries.

Also run:

```bash
cp env.example env.py
```

and enter your OpenAI API key into the OPENAI_API_KEY variable in env.py.

## Usage

To summarize a YouTube video, you need to provide the YouTube URL as an argument when you run the go.sh script. Here are some examples of how to run the script:

```bash
./go.sh https://www.youtube.com/watch?v=wbiEGHjlE4Y
./go.sh https://www.youtube.com/watch?v=-hxeDjAxvJ8
```

Running a command like the ones above will generate an output.html file in the ./output directory.

You can then ask additional questions about the video by running:

```bash
python3 repl.py
```

The output of the repl gets appended to output.html.

## Other:

The script is a work in progress. If you pass a URL containing the character & on *nix, it will give an error (e.g., https://www.youtube.com/watch?v=-hxeDjAxvJ8&t=108s ). This is a *nix thing I will correct at some later point in time.

## Inspiration for the script

The prompts that follow were copied from https://github.com/daveshap/Quickly_Extract_Science_Papers which has an MIT license.

```
>>> can you give me a very clear explanation of the core assertions, implications, and mechanics elucidated in this paper? <<<
---
>>> can you explain the value of this in basic terms? Like you're talking to a ceo. so what? what's the bottom line here? <<<
---
can you give me an analogy or metaphor that will help explain this to a broad audience <<<
```
21
2
lich4/debugserver_azj
https://github.com/lich4/debugserver_azj
debugserver enhanced version
## Generating a usable debugserver

This only needs to be done once; if your debugserver already meets the following conditions, skip this step:

1. debugserver must support the lockdown and frontboard modes: `debugserver --lockdown --launch=frontboard`
2. The debugserver from the bingner repo does not meet these requirements

### Mounting the /Developer partition

1. It is mounted automatically after connecting Xcode for debugging
2. Reasons manual mounting can fail: mismatched system version / the /Developer directory is not empty / it is already mounted

Manual mounting:
```
cd /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/12.4
ideviceimagemounter -d -t Developer DeveloperDiskImage.dmg DeveloperDiskImage.dmg.signature
deviceconsole | grep MobileStorageMounter # check for errors
```

### Re-signing

The debugserver from /Developer does not have enough entitlements to debug third-party processes, so it must be re-signed.

## Signing and putting it back in place

IDA's iOS debugger uses the lockdown service and operates on /Developer/usr/bin/debugserver.

```bash
cp -f /Developer/usr/bin/debugserver debugserver
ldid -S1.xml debugserver
umount /Developer
# Close related processes, including debugserver, before doing this
mkdir -p /Developer/usr/bin
cp debugserver /Developer/usr/bin/

kernel(AppleMobileFileIntegrity)[0] <Notice>: AMFI: '/usr/bin/debugserver_azj' has no CMS blob?
kernel(AppleMobileFileIntegrity)[0] <Notice>: AMFI: '/usr/bin/debugserver_azj': Unrecoverable CT signature issue, bailing out.
kernel(AppleMobileFileIntegrity)[0] <Notice>: AMFI: code signature validation failed.
# This error means you need to run ldid -S on a Mac
```

## Viewing the process list in IDA

todo
12
6
brophdawg11/react-router-auth0-example
https://github.com/brophdawg11/react-router-auth0-example
React Router v6.4+ Example Using Auth0
# React Router Auth0 Example This example demonstrates how to restrict access to routes to authenticated users when using `<RouterProvider>` while using https://auth0.com/ and the Vanilla JS [`@auth0/auth0-spa-js`](https://github.com/auth0/auth0-spa-js) SDK. This is an extension of the basic [RouterProvider Authorization example](https://github.com/remix-run/react-router/tree/main/examples/auth-router-provider) in the repository. ## Prerequisites You will need an Auth0 application set up before you can run this example. Once you have created the application, you will want to change some of the default settings: - Go to your application in the Auth0 dashboard - Choose the Settings tab - Add `http://localhost:3000/, http://localhost:3000/login-result` to `Allowed Callback URLs` - Add `http://localhost:3000` to `Allowed Logout URLs` - Add `http://localhost:3000` to `Allowed Web Origins` - Choose the `Credentials` tab - Set `Authentication Methods` to `None` - See https://community.auth0.com/t/success-login-and-a-failed-exchange/41513/8 for background ## Usage - Clone this repo - `npm ci` - Change the two constants in `src/auth.ts` to include your Auth0 application values: - `const AUTH0_DOMAIN` - `const AUTH0_CLIENT_ID` - `npm run dev` - Open `http://localhost:3000`
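The README does not show code, but the general shape of such a guard in React Router 6.4+ is a loader that redirects unauthenticated users. A rough sketch only; the `auth` module and its API are assumptions, not this example's actual code:

```ts
// Hypothetical src/routes/protected.ts
import { redirect } from "react-router-dom";

export async function protectedLoader({ request }: { request: Request }) {
  // `auth` is a hypothetical wrapper around @auth0/auth0-spa-js
  const isAuthenticated = await auth.isAuthenticated();
  if (!isAuthenticated) {
    // Remember where the user was going so we can return there after login.
    const params = new URLSearchParams();
    params.set("returnTo", new URL(request.url).pathname);
    return redirect("/login?" + params.toString());
  }
  return null;
}
```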
10
1
Slamsneider/SingleTom
https://github.com/Slamsneider/SingleTom
A GPT tool (client) using OpenAI's API
# SingleTom

## A GPT tool (client) using OpenAI's API

**SingleTom** is a tutorial project that combines HTML and JavaScript to create a local HTML client. The client utilizes OpenAI's GPT API, eliminating the need for a server, node.js, or Python. To get started, simply open the HTML file in your browser.

You will need your personal OpenAI API key. You can obtain it by visiting this link: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).

The **SingleTom** client is designed to be a simple demo and serve as a source of inspiration. Feel free to use the code as-is or expand upon it to create your own unique implementation. If you have basic knowledge of HTML and JavaScript and are interested in learning how to leverage OpenAI's API, then this project is for you.

Please note that this project is intended for local use and learning purposes. If you were to use it on a public server, be cautious, as your API key would be exposed to the world.

It's important to clarify that, while useful as-is, this project is not a fully-fledged application. However, with some modifications, you can transform it into one if desired.

NOTE: When one or more text files are dragged and dropped onto the 'history' textarea, their contents are read and appended to the textarea so you can 'talk with them'.

<details>
<summary>📦 Installation</summary>

1. Press the green "Code" button on the project page and choose "Download ZIP" or [download here](https://github.com/Slamsneider/SingleTom/archive/refs/heads/main.zip).
2. Once downloaded, unzip the `html` folder to your desired location.
3. **RENAME** `apikeys.js.RENAME_AND_ADD_API_KEY` to `apikeys.js` and open the file in a text editor.
4. Replace `YOUR_OPENAI_API_KEY_HERE` with your OpenAI API key.
5. Save the changes made in the `apikeys.js` file.
6. Now, open the `index.html` file in your browser to start using the application.

NOTE: Do **NOT** rename or add your API key to the `apicall.php.RENAME_AND_ADD_API_KEY` file unless you intend (optionally) to run the application ONLINE from a PHP server. (see below)

</details>

<details>
<summary>📚 Code Structure</summary>

- `index.html`: Main HTML file for the application.
- `apikeys.js`: Contains the API key for OpenAI's API. (Never upload this file anywhere)
- `models.js`: OpenAI models.
- `agents.js`: System-prompt definitions aka "custom instructions". (make/add your own)
- `functions.js`: Main functionality of the application.
- `dropTextFile.js`: Functionality for dragging and dropping text files onto the history.
- `styles.css`: CSS styles for the application.
- `jquery_3_5_1.min.js`: jQuery library. ([from here](https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js))

</details>

![image](https://github.com/Slamsneider/SingleTom/assets/192285/de041073-33a1-4603-a5c3-c39e3f20658e)

<details>
<summary>💻 Workflow using SingleTom as a tool</summary>

You do not need to supply all documents when working with text/code; normally you would only have the essential parts in history (memory) or in your prompt. But if need be, it handles multiple documents and can work with them.

Here is an example where I threw (drag/drop) all of **SingleTom**'s scripts into history and asked a question. I added all 7 scripts just for good measure (not the jQuery library, though):

![image](https://github.com/Slamsneider/SingleTom/assets/192285/129ce56a-48be-4a0f-a452-23e882bde7d6)

This example only illustrates the flexibility of this workflow. Also note the token use, where `gpt-3.5-turbo-16k` is a lifesaver.

TIPS:

* The implemented system-prompts aka _Custom instructions_ (agents) are just simple examples; use your (system-) prompt engineering skills to make your own, better agents.
* Test by simply editing the text in the system-prompt textarea and, when you have a good one, add it as a new agent in the `agents.js` file.
* If you do not "ADD TO HISTORY", e.g. if you don't need the answer in further communication, then you save tokens down the line.
* Remember to "ADD TO HISTORY" if you need the answer in further communication.
* If you need to have a lot of text in history, use `gpt-3.5-turbo-16k`, as it has 16k tokens available for each request.
* Treat the HISTORY as a scratchpad (literally); it's not a freakin' chatbot.
* There is no right or wrong way to do it, just do it your way.
* If you get an error because there were not enough tokens available, then use a model with more tokens (if you have access to one) and try again, or delete some stuff in HISTORY and try again.
* Remember you can have multiple browser windows (_sessions_) open at the same time.

</details>

<details>
<summary>🧠 About OpenAI Models and Tokens</summary>

Each model has a different total number of tokens available for the inference (request). One token is approximately 4 characters. As an example, `gpt-3.5-turbo` has 4096 tokens available for each request.

When sending a request, the token count consists of the following components:

- System prompt
- Conversation history
- User prompt
- `max_tokens` parameter (optional; defaults to the maximum available tokens if not set)

The sum of these components must be less than the total tokens available for the model, or else an error will occur.

### max_tokens (parameter)

The `max_tokens` parameter determines how many tokens should be reserved for the response. If set to AUTO (default), it will reserve the maximum available tokens for the model.

Note: You only pay for the actual tokens used, not for how many are reserved for the output.

### finish_reason (output)

The `finish_reason` indicates the reason why the response ended. It can be either "stop" or "length". "stop" means that the response had a 'normal' run, while "length" indicates that the response reached the token limit and is incomplete. If so, pick a model with more tokens, make sure 'max' is 'auto', and/or delete some stuff in history, and then try again.

### temperature (parameter)

The temperature parameter controls the randomness of the response. Lower values will result in more predictable responses, while higher values will result in more surprising responses (hallucinations).

</details>

<details><summary>🤖 Agents (Make your own!)</summary>

There are 4 example system-prompts aka _Custom instructions_ for inspiration (see `agents.js`). You are encouraged to make your own; system-prompt engineering is not within the scope of this tutorial project.

- **SingleTom**: A simple agent
- **Pirate**: A pirate by the name of Dorothy
- **Marvin**: The Paranoid Android from The Hitchhiker's Guide to the Galaxy
- **Children Books**: Prompts for desired reader age, number of pages, and theme to make a children's book

</details>

## ⚠️ Important Note

Do not use this application on a public server, as it will expose your API key to the world. This application is intended for 'local' use only. (see below, though)

<details>
<summary>🌐 How to run this ONLINE from a server?</summary>

* php
* python
* node.js
* whatever...

**I repeat that this tutorial project is aimed at local use only, and ONLINE deployment is not within the scope of the project.**

But anyhow, the important thing is not to expose your API key to the world. So instead you make an API call to your own server, which in turn can make the OpenAI API calls for you without exposing the API key to the user.

### Example using PHP

This ad hoc example implementation uses a PHP server, but you can (change the scripts and) use whatever server you want.

If **SingleTom** cannot find the variable `openai_apikey` from the `apikeys.js` file, then it will use `apicall.php` to make the API calls instead. (Intended functionality)

Calling OpenAI locally (directly from your browser client) is faster and less prone to errors, but the client would then expose your API key. So instead you make an API call to your server, which can make the OpenAI API calls for you without compromising your API key.

You can easily convert the API call in `apicall.php` to a Python or Node.js script and serve the OpenAI API call from that environment instead. Maybe even ask SingleTom to help with that.

At the moment, the only thing that needs a server request is the API calls, to hide your API key from online predators.

So to run this ONLINE on a PHP server, you need to do the following:

* RENAME `apicall.php.RENAME_AND_ADD_API_KEY` to `apicall.php` and open the file in a text editor.
* Add your API key to the `apicall.php` file and save it.
* Upload all files **EXCEPT `apikeys.js`** from the `html` folder to your PHP server.
* Navigate to the `index.html` on the server and you are good to go.

Then, when the online HTML client cannot find the `openai_apikey` variable from `apikeys.js`, it will use `apicall.php` to do the API calls instead. (Intended functionality)

The reason for this implementation is that the SingleTom client is intended for local use only. But you may occasionally want to share your extended and improved version with someone, and then you can just upload it to a server and it will work.

IMPORTANT: Do not upload the `apikeys.js` file! Whatever you do, do not expose your API key to the world.

</details>

## Disclaimer

This application is made for learning and is not a fully fledged application.
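As a rough illustration of the token arithmetic described in the "🧠 About OpenAI Models and Tokens" section above: SingleTom itself is plain HTML/JavaScript, so the Python snippet below is only a back-of-the-envelope reference using the stated rule of thumb of roughly 4 characters per token; none of it is part of the project's code.

```python
def approx_tokens(text: str) -> int:
    # Rule of thumb from the section above: one token is roughly 4 characters.
    return len(text) // 4

MODEL_LIMIT = 4096  # e.g. gpt-3.5-turbo; the 16k model allows 16384

system_prompt = "You are SingleTom, a helpful agent."
history = "...everything currently kept in the HISTORY textarea..."
user_prompt = "Summarize the conversation so far."
max_tokens = 512  # tokens reserved for the response

used = approx_tokens(system_prompt) + approx_tokens(history) + approx_tokens(user_prompt)
if used + max_tokens >= MODEL_LIMIT:
    print("This request would exceed the model limit: trim HISTORY or switch to a 16k model.")
else:
    print(f"OK: {used} prompt tokens plus {max_tokens} reserved for the reply, limit {MODEL_LIMIT}.")
```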
15
6
gh123man/LazyPager
https://github.com/gh123man/LazyPager
A SwiftUI lazy loaded, paging, panning, and zooming view for images and more
# LazyPager for SwiftUI A buttery smooth, lazy loaded, panning, zooming, and gesture dismissible pager view for SwiftUI. The goal with this library is to expose a simple SwiftUI interface for a fluid and seamless content viewer. <p align="center"> <img src="https://github.com/gh123man/LazyPager/assets/959778/a82da8c3-9d65-4782-8fd7-40cc598e16da" alt="animated" /> </p> The above example is from [dateit](https://dateit.com/) demonstrating the capabilities of this library. Note: the overlay is custom and can be added by putting `LazyPager` inside a `ZStack` # Usage ## Add the Swift Package 1. Right click on your project -> `Add Package` 2. In the search bar paste: `https://github.com/gh123man/LazyPager` 3. Click `Add Package` Or add the package to your `Package.swift` if your project is a Swift package. ## Example ```swift @State var data = [ ... ] @State var show = true @State var opacity: CGFloat = 1 // Dismiss gesture background opacity @State var index = 0 var body: some View { Button("Open") { show.toggle() } .fullScreenCover(isPresented: $show) { // Provide any list of data and bind to an index LazyPager(data: data, page: $index) { element in // Supports any kind of view - not only images Image(element) .resizable() .aspectRatio(contentMode: .fit) } // Make the content zoomable .zoomable(min: 1, max: 5) // Enable the swipe to dismiss gesture and background opacity control .onDismiss(backgroundOpacity: $opacity) { show = false } // Handle single tap gestures .onTap { print("tap") } // Get notified when to load more content .shouldLoadMore { data.append("foobar") } // Set the background color with the drag opacity control .background(.black.opacity(opacity)) // A special included modifier to help make fullScreenCover transparent .background(ClearFullScreenBackground()) // Works with safe areas or ignored safe areas .ignoresSafeArea() } } ``` For a full working example, [open the sample project](https://github.com/gh123man/LazyPager/tree/master/Examples) in the examples folder, or [check out the code here](https://github.com/gh123man/LazyPager/blob/master/Examples/LazyPagerExampleApp/ContentView.swift) # Features - All content is lazy loaded. By default content is pre-loaded 3 elements ahead and behind the current index. - Display any kind of content - not just images! - Lazy loaded views are disposed when they are outside of the pre-load frame to conserve resources. - Enable zooming and panning with `.zoomable(min: CGFloat, max: CGFloat)` - Double tap to zoom is also supported. - Notifies when to load more content with `.shouldLoadMore` - Works with `.ignoresSafeArea()` (or not) to get a true full screen view. - Drag to dismiss is supported with `.onDismiss` - Supply a binding opacity value to control the background opacity during the transition. - Tap events are handled internally, so use `.onTap` to handle single taps (useful for hiding and showing UI) - Use `.settings` to [modify advanced settings](https://github.com/gh123man/LazyPager/blob/master/Sources/LazyPager/LazyPager.swift#L46) # Detailed usage ## Working with `fullScreenCover` `fullScreenCover` is a good native element for displaying a photo browser, however it has an opaque background by default that is difficult to remove. So `LazyPager` provides a `ClearFullScreenBackground` background view you can use to fix it. Simply add `.background(ClearFullScreenBackground())` to the root element of your `fullScreenCover`. This makes the pull to dismiss gesture seamless. 
## Double tap to zoom You can customize the double tap behavior using the `zoomable(min: CGFloat, max: CGFloat, doubleTapGesture: DoubleTap)`. By default `doubleTapGesture` is set to `.scale(0.5)` which means "zoom 50% when double tapped". You can change this to a different ratio or set it to `.disabled` to disable the double tap gesture. ## Dismiss gesture handling By default `.onDismiss` will be called after the pull to dismiss gesture is completed. It is often desirable to fade out the background in the process. `LazyPager` uses a fully transparent background by default so you can set your own custom background. To control the dismiss opacity of a custom background, use a `Binding<CGFloat>` like `.onDismiss(backgroundOpacity: $opacity) {` to fade out your custom background.
20
1
gaboolic/nodejs-proxy
https://github.com/gaboolic/nodejs-proxy
A Node.js implementation of VLESS. If you fork it, please also give it a star.
An unspeakable secret: the code was written entirely by ChatGPT, and I can't understand a single variable of it.

https://www.chunqiujinjing.com/2023/07/28/free-vps-free-domain/
185
1,509
arslan555/StarProfiles
https://github.com/arslan555/StarProfiles
StarProfile, a multi-module Android app following the Test-Driven Development (TDD) approach and clean code architecture
# GituhbStarRepos App ✨📱🚀 ## Overview 🌟📱🧪🔧🚀 - Application for viewing starred github repositories 📱🌟✨ - Implement a test-driven development (TDD) approach to ensure high code quality and reliability 🧪📝 - Utilize a multi-module architecture to achieve modularity and maintainability 🧩 - Integrate Hilt for dependency injection and enhance code organization 🗝️ - Utilize Coroutines and Flow for asynchronous programming and reactive data streams ⚡🌊 - Follow Material Design guidelines to provide a visually appealing and user-friendly interface 🎨 ## Previews 📷 ### 🌞 Light Theme 🌞 <p align="start"> <img src="preview/loading_light.png" alt="drawing" width="250px" /> <img src="preview/data_light.png" alt="drawing" width="250px" /> <img src="preview/data_expended_light.png" alt="drawing" width="250px" /> <img src="preview/network_light.png" alt="drawing" width="250px" /> <img src="preview/empty_data_light.png" alt="drawing" width="250px" /> </p> ### 🌑 Dark Theme 🌑 <p align="start"> <img src="preview/loading_dark.png" alt="drawing" width="250px" /> <img src="preview/data_dark.png" alt="drawing" width="250px" /> <img src="preview/data_expanded_dark.png" alt="drawing" width="250px" /> <img src="preview/network_dark.png" alt="drawing" width="250px" /> <img src="preview/empty_dark.png" alt="drawing" width="250px" /> </p> ## Test Cases 🧪 #### 🔬 Unit Tests (with code coverage) 🔬 includes unit tests of Network module, data module( repository layer), common module, Feature module (ViewModels) <p align="start"> <img src="preview/common0.png" alt="drawing" /> <img src="preview/network0.png" alt="drawing" /> <img src="preview/data0.png" alt="drawing" /> <img src="preview/data1.png" alt="drawing" /> <img src="preview/trendingvm.png" alt="drawing" /> #### 📱 UI Tests 📱 <img src="preview/repos_test.png" alt="drawing"/> <img src="preview/Expandable_repo_test.png" alt="drawing" /> <img src="preview/error_screen_test.png" alt="drawing" /> <img src="preview/empty_screen_test.png" alt="drawing" /> </p> ## Tech Stack & Open Source Libraries 🛠 - [Android Architecture Components](https://developer.android.com/topic/libraries/architecture) - Collection of libraries that help you design robust, testable, and maintainable apps. - 100% [Jetpack Compose](https://developer.android.com/jetpack/compose) based + [Coroutines](https://github.com/Kotlin/kotlinx.coroutines) + [Flow](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/) for asynchronous. - Jetpack - [Compose](https://developer.android.com/jetpack/compose): Android’s modern toolkit for building native UI. - [ViewModel](https://developer.android.com/topic/libraries/architecture/viewmodel): UI related data holder and lifecycle aware. - [Hilt](https://dagger.dev/hilt/): Dependency Injection. - [Retrofit2 & OkHttp3](https://github.com/square/retrofit): Construct the REST APIs and paging network data. - [viewmodel-lifecycle](https://github.com/skydoves/viewmodel-lifecycle): ViewModel Lifecycle allows you to track and observe Jetpack's ViewModel lifecycle changes. ## Architecture 🏛️ This app has been fully modularized. ### 🌟 Overview 🌟 Modularization is the practice of breaking the concept of a monolithic, one-module codebase into loosely coupled, self contained modules. ### 💡 Benefits of Modularization 💡 This offers many benefits, including: **Scalability** - In a tightly coupled codebase, a single change can trigger a cascade of alterations. 
A properly modularized project will embrace the [separation of concerns](https://en.wikipedia.org/wiki/Separation_of_concerns) principle. This in turn empowers the contributors with more autonomy while also enforcing architectural patterns. **Enabling work in parallel** - Modularization helps decrease version control conflicts and enables more efficient work in parallel for developers in larger teams. **Ownership** - A module can have a dedicated owner who is responsible for maintaining the code and tests, fixing bugs, and reviewing changes. **Encapsulation** - Isolated code is easier to read, understand, test and maintain. **Reduced build time** - Leveraging Gradle’s parallel and incremental build can reduce build times. **Dynamic delivery** - Modularization is a requirement for [Play Feature Delivery](https://developer.android.com/guide/playcore/feature-delivery) which allows certain features of your app to be delivered conditionally or downloaded on demand. **Reusability** - Proper modularization enables opportunities for code sharing and building multiple apps, across different platforms, from the same foundation. ### 📦 Types of Modules in GitHubStarRepos 📦 - The `app` module - contains app level and scaffolding classes that bind the rest of the codebase, such as `MainActivity`. The `app` module depends on all `feature` modules and required `core` modules. - `feature:` modules - feature specific modules which are scoped to handle a single responsibility in the app. These modules can be reused by any app, including test or other flavoured apps, when needed, while still keeping it separated and isolated. - `core:` modules - common library modules containing auxiliary code and specific dependencies that need to be shared between other modules in the app. These modules can depend on other core modules, but they shouldn’t depend on feature nor app modules. ### 🧩 Modules 🧩 <table> <tr> <td><strong>Name</strong> </td> <td><strong>Responsibilities</strong> </td> </tr> <tr> <td><code>app</code> </td> <td>Brings everything together required for the app to function correctly. This includes UI scaffolding and navigation. </td> </tr> <tr> <td><code>feature:1,</code><br> ... </td> <td>Functionality associated with a specific feature or user journey. Typically contains UI components and ViewModels which read data from other modules.<br> Examples include:<br> <ul> <code>feature:trending</code> displays information about starred repositories of specific programming language </ul> </td> </tr> <tr> <td><code>core:data</code> </td> <td>Fetching app data from multiple sources, shared by different features. </td> </tr> <tr> <td><code>core:ui</code> </td> <td>Composite UI components and resources used by feature modules, such as the repos feed. It is dependent on the data layer since it renders models, like star repos. </td> </tr> <tr> <td><code>core:common</code> </td> <td>Common classes shared between modules. </td> </tr> <tr> <td><code>core:network</code> </td> <td>Making network requests and handling responses from a remote data source. </td> </tr> <tr> <td><code>core:testing</code> </td> <td>Testing dependencies, repositories and util classes. </td> </tr> <tr> <td><code>core:datastore</code> </td> <td>Storing persistent data using DataStore. </td> </tr> <tr> <td><code>core:model</code> </td> <td>Model classes used throughout the app. 
</tr> </table> ## Developed by 👨‍💻 <a href="https://www.linkedin.com/in/mirza-arslan/" target="_blank"> </a> **Mirza Arslan** [![Medium](https://img.shields.io/badge/-medium-grey?logo=medium)](https://medium.com/@mirzaarslan450) [![Linkedin](https://img.shields.io/badge/-linkedin-grey?logo=linkedin)](https://www.linkedin.com/in/mirza-arslan/)
10
0
Seoul-ICT/Seoul-ICT-Web-Base
https://github.com/Seoul-ICT/Seoul-ICT-Web-Base
null
# Seoul ICT Innovation Square Web Development Job Camp

Hello, students! It's a pleasure to meet you. I'm your instructor, Jeongheon Joo.

All materials related to the "Software Fundamentals" course running from July 18, 2023 to August 4, 2023 (lecture notes, resources, assignments, Q&A, and so on) will be attached and managed here.

## How the course is run

> Goal: become a developer who can learn new things on their own, even when something new comes out! (if you go looking, every path is **on Google**)
>
> What we don't want: no curiosity, being afraid to learn, being afraid to ask questions, only growing when someone ~~spoon-feeds you~~

## Daily timetable

| Time | Content |
|--|--|
| 13:00 ~ 14:00 | Solving and reviewing the previous day's assignment together |
| 14:00 ~ 16:00 (17:00) | Lecture |
| 16:00 ~ 18:00 | Hands-on lab & Q&A |

## Three-week timetable

| Date | Content |
|--|--|
| 7.18 | Orientation, lecture laptop setup, introductions |
| 7.19 ~ 7.21 | The web and the internet, HTML and CSS basics |
| 7.24 ~ 7.26 | JavaScript basics, manipulating HTML and CSS with JavaScript |
| 7.27 ~ 7.28 | Python basics, simple data analysis with Python |
| 7.31 ~ 8.2 | SQL and DBMS basics, connecting to a DBMS from a programming language |
| 8.3 ~ 8.4 | Building your own website (combining HTML, CSS, Python, SQL) |

## How to follow the course

### Where are the course materials?

https://github.com/Seoul-ICT/Seoul-ICT-Web-Base

- Course materials are uploaded right before each class starts.

### How do I ask questions about assignments or classes?

- Go to "Issues" -> click "New Issue" and write your question.

## Notes

- No material used in the course may be handed over to third parties, whether for money or not.
- For your own job-hunting portfolio, you must create and use a separate GitHub repo.

## Course references

- https://developer.mozilla.org/
- https://www.python.org/doc/
- https://dev.mysql.com/doc/
- http://tcpschool.com

## Struggles, career questions, course suggestions, etc.: DM me (very welcome!!)

- Email: [email protected]

## Course incentive program (10,000 KRW Starbucks voucher, 5 people in total)

- Whoever posts the most GitHub issues + listens most enthusiastically during Zoom classes (3 people)
- Whoever submits every assignment without missing one, regardless of how well they did it (2 people)
19
1
FernandaOchoa/ThreadsAPI
https://github.com/FernandaOchoa/ThreadsAPI
Unofficial API for collecting user profiles in Python, plus scripts for processing threads and analyzing social media interaction data.
# Threads.Net Data Analysis API No Oficial en Python para recopilar los threads de perfiles de usuario y analizar la data correspondientes. ## Estructura del Proyecto * ```threads_api.py```: Contiene el script de conexión a Threads realizando ingeniería inversa. [**Leer aviso legal**](#aviso-legal) * ```get_user_profile_threads.py```: Permite obtener los threads de un usuario en particular. Este código puede ser usado de forma individual o como interfaz para otras implementaciones. * ```getData.py```: Permite obtener los threads, limpiarlos, procesarlos, incluirlos en un dataframe para su exploración y exportar la data en un archivo csv. * ```threads.ipynb```: Es una implementación que muestra el uso de la data exportada lista para trabajarla con cualquier API. * ```ai-samples```: Notebooks con implementaciones de modelos de procesamiento de lenguaje natural para el análisis de sentimientos con la data extraída del API. * ```text-analytics.ipynb```: Notebook para realizar análisis de sentimientos con la data extraída de ```getData``` y procesada por ```threads.ipynb```. * ```mined-opinions.ipynb```: Notebook para realizar minado de opiniones con la data extraída de ```getData``` y procesada por ```threads.ipynb```. * ```data```: Es la carpeta que almacena el archivo generado al trabajar con el archivo ```getData.py```. ### Obtener los threads de un usuario #### get_user_profile_threads El script realiza las siguientes acciones: - Se conecta a una API ```threads_api.py``` y obtiene una lista de threads de un perfil de usuario utilizando la función `get_user_profile_threads` del módulo `get_user_profile_threads`. - Procesa estos threads y limpia el texto eliminando una serie de caracteres específicos. - Separa y recopila los datos de "me gusta" de los threads. - Almacena la información procesada en un DataFrame de Pandas. - Exporta el DataFrame a un archivo CSV para su posterior análisis. #### Cómo usar Este script se ejecuta desde la línea de comandos de la siguiente manera: ```shell python <get_user_profile_threads>.py ``` #### Dependencias Este script depende de los siguientes paquetes de Python: - asyncio - pandas Para instalar estas dependencias, puede usar el siguiente comando: ```shell pip install pandas asyncio ``` #### Documentación del Código ##### Importaciones ```python from get_user_profile_threads import get_user_profile_threads import asyncio import pandas as pd ``` Importamos las bibliotecas y funciones necesarias para el script. Esto incluye `get_user_profile_threads` (una función personalizada que debe estar definida en un archivo en el mismo directorio), así como las bibliotecas estándar de Python `asyncio` y `pandas`. #### Recopilación de Datos ```python threads_data = asyncio.run(get_user_profile_threads()) ``` Recoge los datos de los threads del perfil de un usuario utilizando una función asíncrona. #### Limpieza de Datos ```python if threads_data is not None: cleanThreads = [] likesData =[] chars = ['"', "'", '}', '{', 'text', '\n', ':', '\'', '\"'] ``` Si `threads_data` no está vacío, el script crea dos listas vacías, `cleanThreads` y `likesData`. La lista `chars` define los caracteres que se eliminarán del texto del thread. #### Procesamiento de Datos El siguiente bloque de código procesa cada thread en `threadsData`, limpia el texto y recoge los datos de "me gusta". 
```python for thread in threadsData: for char in chars: thread = str(thread).lstrip(' ') thread = str(thread).replace(char, '').lstrip('"\'') if ', likes' in thread: split_data = thread.split(', likes') cleanThreads.append(split_data[0]) likesData.append(split_data[1]) else: cleanThreads.append(thread) likesData.append('0') ``` #### Creación de DataFrame ```python df = pd.DataFrame({ 'Text': cleanThreads, 'Likes': likesData }) ``` Se crea un DataFrame de Pandas con los datos de texto limpios y los "me gusta". #### Procesamiento de datos ```df = df.dropna() df['Text'] = df['Text'].replace('\\\\n', '', regex=True) df['Text'] = df['Text'].replace('\\\\n\\\\n', '', regex=True) ``` Eliminamos los valores nulos del dataframe con ```drop.na()```, luego creamos una expresión regular para eliminar los saltos de línea ```\n``` y los saltos de línea dobles consecutivos ```\n\n``` #### Exportación de Datos ```python df.to_csv('./data/data.csv', index=False,header=False) ``` Por último, el script exporta el DataFrame a un archivo CSV, sin índice ni encabezado, para su posterior análisis, en el archivo ```threads.ipynb``` ### Limitaciones Esta biblioteca actualmente tiene espacios reservados para las clases Extensions, Thread, y ThreadsUser y no realiza ninguna comprobación de errores o limitación de velocidad. Debes asegurarte de que tienes permiso para acceder a cualquier dato que solicites y manejar cualquier error devuelto por la API de threads.net. ### Aviso Legal Este proyecto es puramente educativo. Se trata de una implementación de la versión 1.0.4 del proceso de ingeniería inversa realizado por el usuario Daniel 1 en su repositorio en [GitHub](https://github.com/Danie1/threads-api). La base de este trabajo proviene de la documentación proporcionada en el blog [Intuitive Explanations](https://intuitiveexplanations.com/tech/messenger) sobre ingeniería inversa, descrita en el artículo "Reverse Engineering the Facebook Messenger API". De acuerdo con el artículo, la ingeniería inversa es ética, pro-democrática y está protegida bajo la ley de Estados Unidos, pero aún es necesario ejercer integridad y responsabilidad al interactuar con cualquier sistema en línea. El comportamiento irresponsable, como enviar spam a otros usuarios, descargar datos de las personas sin su consentimiento o poner una carga indebida en la infraestructura que no estás pagando, es inapropiado independientemente de cómo se logre. La ingeniería inversa, cuando se usa correctamente, es una forma de dar a uno mismo y a otros una mayor agencia, libertad y creatividad en línea. Sin embargo, Facebook a menudo suspende o prohíbe automáticamente a las personas que interactúan con su API de una manera que les parece sospechosa, incluso si no están haciendo nada malo. Por lo tanto, se recomienda explorar con precaución. Para más detalles y para entender el proceso de ingeniería inversa llevado a cabo, recomendamos leer el artículo completo en [Intuitive Explanations](https://intuitiveexplanations.com/tech/messenger). El propósito de este proyecto es ayudar a las personas a entender cómo funcionan las aplicaciones y los sistemas, y no se debe usar para violar la privacidad de otros usuarios ni abusar de las infraestructuras de sistemas ajenos. Los desarrolladores no se hacen responsables del mal uso de este código. ### Contribución ¿Encontraste una mejora que se puede implementar o te gustaría solicitar un cambio? 
Puedes abrir un [Issue](https://github.com/FernandaOchoa/ThreadsAPI/issues) solicitando el cambio o enviar directamente un [Pull Request](https://github.com/FernandaOchoa/ThreadsAPI/pulls) con tu cambio. Para cualquier duda o aclaración, puedes contactarme [Fernanda Ochoa](https://github.com/FernandaOchoa): Email: [email protected] | [email protected] Twitter: [@imonsh](https://twitter.com/imonsh) Instagram: [@fherz8a](https://www.instagram.com/fherz8a/) ## Licencia [MIT License](https://github.com/FernandaOchoa/ThreadsAPI/blob/main/LICENSE) Derechos de autor (c) 2023 Fernanda Ochoa --- Este código está sujeto a la licencia MIT que se muestra arriba. Al utilizar este código, aceptas dar crédito al autor original mencionando su nombre en el código y en cualquier documentación relacionada con el proyecto.
48
5
ChristianBelloni/sciport-rs
https://github.com/ChristianBelloni/sciport-rs
Port of scipy to rust
![Maintenance](https://img.shields.io/badge/maintenance-actively--developed-brightgreen.svg) [![crates-io](https://img.shields.io/crates/v/sciport-rs.svg)](https://crates.io/crates/sciport-rs) [![api-docs](https://docs.rs/sciport-rs/badge.svg)](https://docs.rs/sciport-rs)

## Sciport-rs

Sciport is a collection of mathematical algorithms ported from the popular Python package SciPy.

## Dependencies

To build this library it is necessary to install gfortran-13.

## API design

The main philosophy behind sciport is to change the API surface of SciPy to better utilize the rich Rust type system. When deciding between keeping the original function signature and rewriting it to better represent the valid input space, more often than not we'll decide to change it.

For example, this is the SciPy butter filter API:

```python
scipy.signal.butter(N: int, Wn: array_like, btype: String, analog: bool, output: String, fs: float)
```

`Wn` represents a single frequency or a pair of frequencies and `btype` is the type of filter. However, a single frequency makes sense only for a subset of btypes, and so does a pair. In our implementation we rewrite this function like:

```rust
fn filter<T>(order: u32, band_filter: BandFilter, analog: Analog) { .. }
```

where `T` represents the output representation of the filter (Zpk, Ba, Sos), and `band_filter` encapsulates the original `Wn` and `btype` like this:

```rust
pub enum BandFilter {
    Highpass(f64),
    Lowpass(f64),
    Bandpass { low: f64, high: f64 },
    Bandstop { low: f64, high: f64 },
}
```

and `analog` encapsulates `analog` and `fs` (since a sampling rate makes sense only when talking about a digital filter) like this:

```rust
pub enum Analog {
    True,
    False { fs: f64 }
}
```

## Modules

### Signal Processing

The signal processing toolbox currently contains some filtering functions, a limited set of filter design tools, and a few B-spline interpolation algorithms for 1- and 2-D data. While the B-spline algorithms could technically be placed under the interpolation category, they are included here because they only work with equally-spaced data and make heavy use of filter-theory and transfer-function formalism to provide a fast B-spline transform.

### Special

The main feature of this module is the definition of numerous special functions of mathematical physics. Available functions include airy, elliptic, bessel, gamma, beta, hypergeometric, parabolic cylinder, mathieu, spheroidal wave, struve, and kelvin.

If there's a specific module or function that you'd like to see worked on, open a PR linking the SciPy documentation for that module or function.
24
1
zmovirzynski/Android-Azure-Devops
https://github.com/zmovirzynski/Android-Azure-Devops
YAML to start an Android 13 emulator in Azure DevOps Pipelines.
# Android Azure DevOps

This repository contains the YAML configuration to create an Azure Pipelines pipeline for setting up and starting an Android emulator. The pipeline automates the process of creating and launching the emulator on a macOS machine.

## Prerequisites

Before using this pipeline, make sure you have the following:

- An Azure DevOps account set up with a suitable project.
- A macOS agent available in your Azure Pipelines organization.

## Configuration

Follow the steps below to configure the pipeline in Azure Pipelines:

1. Clone this repository to your local environment:

```bash
git clone https://github.com/zmovirzynski/Android-Azure-Devops.git
```

2. In Azure DevOps, navigate to the project where you want to set up the pipeline.
3. In the project dashboard, go to **Pipelines** and click **New Pipeline**.
4. Choose the source code repository to be the cloned repository from earlier.
5. Select the `azure-pipelines.yml` configuration file from this repository.
6. Review and customize the YAML file as needed to fit your requirements.
7. Save and run the pipeline.

## Pipeline

The pipeline consists of a single job:

**Job: macOS**

**Pool: macOS-latest**

This job runs on a macOS agent.

**Steps:**

1. Install Node.js - This step installs Node.js version 16.x.
2. List already installed Android packages - This step lists the Android packages that are already installed.
3. Install Android image - This step installs the Android image with the specified version.
4. Create AVD (Android Virtual Device) - This step creates an AVD named "test_android_emulator" using the installed Android image.
5. Start Android emulator - This step starts the Android emulator by executing commands to list available AVDs, create the AVD, and start the emulator using the specified AVD.

## Contributing

Feel free to contribute to this project by opening issues and submitting pull requests. Your participation is welcome!

## Support

If you have any questions or encounter any issues related to this repository, please reach out by opening an issue. We will do our best to assist you.

## License

This project is licensed under the MIT License.
12
0
haiyanghan/six-eared-macaque
https://github.com/haiyanghan/six-eared-macaque
null
# six-eared-macaque
20
0
renwang435/video-ttt-release
https://github.com/renwang435/video-ttt-release
null
# Test-Time Training on Video Streams [Renhao Wang*](https://renwang435.github.io/), [Yu Sun*](https://yueatsprograms.github.io/), [Yossi Gandelsman](https://yossigandelsman.github.io/), [Xinlei Chen](https://xinleic.xyz/), [Alexei A. Efros](http://people.eecs.berkeley.edu/~efros/), [Xiaolong Wang](https://xiaolonw.github.io/) [[`arXiv`](https://arxiv.org/abs/2307.05014)] [[`Project`](https://video-ttt.github.io/)] [[`BibTeX`](#Citing)] <div align="center"> <img src="assets/teaser.png" height="100%" width="100%"> </div> <br> ## Installation See [installation instructions](INSTALL.md). ## Datasets We release COCO-Videos, a new dataset for instance and panoptic segmentation which follows the COCO labeling format. We also rely on semantic-level labels in the [KITTI-STEP](https://www.cvlibs.net/datasets/kitti/eval_step.php) dataset for evaluation on semantic segmentation. <br> All datasets can be [downloaded here](https://berkeley.box.com/s/8ieod46tjh4k2n1lyid6qhap9qsy947s) and should subsequently be unzipped to the path specified under the `$DETECTRON2_DATASETS` environment variable (see [installation instructions](INSTALL.md)). ## Checkpoints Relevant pretrained checkpoints can be [obtained here](https://berkeley.box.com/s/ksy6bf90qqpshd70785v8v38btqdn0oa). These should be downloaded and stored at some `/path/to/checkpoints`. ## Reproducing Results ### Baselines To evaluate a pretrained Mask2Former-S on COCO-Videos for panoptic segmentation: ``` python runner_coco_videos_baseline.py --gpu 0 \ --videos bangkok bar berkeley havana house irvine paris restaurant school tokyo \ --batch_size 8 \ --weights /path/to/checkpoints/ttt_coco_panoptic_baseline.pkl \ --output_dir coco_vid_panoptic_baseline \ --eval_type pano \ --num_imgs 4000 ``` You can pass `--eval_type inst` to obtain the baseline instance numbers (as well as the corresponding pretrained instance segmentation checkpoint). Results will be logged under the directory specified in the `--output_dir` flag, ### COCO-Videos Instance and Panoptic Segmentation Runner script for instance segmentation: ``` python runner_ttt_mae_inst.py --gpu 0 \ --videos bangkok bar berkeley havana house irvine paris restaurant school tokyo \ --batch_size 32 \ --accum_iter 8 \ --base_lr 0.0001 \ --weights /path/to/checkpoints/ttt_coco_instance_baseline.pkl \ --restart_optimizer ``` Runner script for panoptic segmentation: ``` python runner_ttt_mae_panoptic.py --gpu 0 \ --videos bangkok bar berkeley havana house irvine paris restaurant school tokyo \ --batch_size 32 \ --accum_iter 8 \ --base_lr 0.0001 \ --weights /path/to/checkpoints/ttt_coco_panoptic_baseline.pkl \ --restart_optimizer ``` For easy collation of numbers, we provide a utility script which can, for example, be called as `python mask2former/utils/tabulate_results_cv.py --root_dir exp_dir/mae_coco_inst_32_0.0001`. ### KITTI-STEP Semantic Segmentation Runner script: ``` python runner_ttt_mae.py --gpu 0 \ --videos 0000 0001 0002 0003 0004 0005 0006 0007 0008 0009 0010 0011 0012 0013 0014 0015 0016 0017 0018 0019 0020 \ --batch_size 32 \ --accum_iter 4 \ --base_lrs 0.0001 \ --weights /path/to/checkpoints/ttt_ks_semantic_baseline.pkl \ --restart_optimizer ``` For easy collation of numbers, we provide a utility script which can, for example, be called as `python mask2former/utils/tabulate_results.py --root_dir exp_dir/mae_ks_sema_32_0.0001`. ## License This codebase inherits all licenses from the public release of [Mask2Former](https://github.com/facebookresearch/Mask2Former#license). 
## <a name="Citing"></a>Citing Video-TTT ```BibTeX @article{wang2023test, title={Test-time training on video streams}, author={Wang, Renhao and Sun, Yu and Gandelsman, Yossi and Chen, Xinlei and Efros, Alexei A and Wang, Xiaolong}, journal={arXiv preprint arXiv:2307.05014}, year={2023} } ``` ## Acknowledgements Code is based on [Mask2Former](https://github.com/facebookresearch/Mask2Former).
35
2
verytinydever/parse_authentication
https://github.com/verytinydever/parse_authentication
null
# Assignment Documentation (Android App Development)

# Objective:

To create an Android app which allows the user to log in and log out using a Parse server as a backend.

# Abstract:

The main objective of this documentation is to present a software application for the login and logout use case using a Parse server as the backend. The application, developed for Android, will enable new users to sign up, while registered users can log in and view the home page. The system requires devices to be connected via the internet. Java is used as the programming language and a Bitnami Parse Server is hosted on AWS.

# Introduction

This is a simple Android mobile application where a new user can create a new profile using the signup page, or a previously registered user can log in.

# Features

## Login:

- Input: username, password (valid)
- Output: If the credentials match, redirect to the Home page; else, an error message is displayed.

## SignUp:

- Input: username, password, confirm password
- Output: If (username is unique) && (password is valid) && (password == confirm password), sign up the user and redirect to the Home page; else, an error message is displayed.

## Logout:

- Input: Press Logout from the options menu
- Output: If there is a current user, log out and redirect to the login page.

## ShowPassword

- Input: Click
- Output: If checked, show the password; else, hide the password.

# Testing Result

- Username and password shouldn't be blank.
- Passwords should meet the requirements (illustrated in the sketch below):
  - Minimum 8 letters
  - At least 1 digit
  - At least 1 lower case letter
  - At least 1 upper case letter
  - No white spaces
  - At least 1 special character
- Password should match the confirm password.

# Conclusion

We can conveniently implement authentication using a Parse server as a backend with our Android application. It can also be used to store data and files as per our needs.

# Future Work

- Improvement in UI.
- Addition of content on the home page.
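As an illustration of the password rules listed under "Testing Result" above, here is a small, purely hypothetical Python sketch of the same validation logic (the app itself is written in Java; this is only a language-neutral reference, and the function name is made up):

```python
import re

def is_valid_password(password: str) -> bool:
    """Checks the rules listed above: at least 8 characters, one digit,
    one lower case letter, one upper case letter, one special character,
    and no white space."""
    if len(password) < 8:
        return False
    if re.search(r"\s", password):  # no white spaces allowed
        return False
    has_digit = re.search(r"\d", password) is not None
    has_lower = re.search(r"[a-z]", password) is not None
    has_upper = re.search(r"[A-Z]", password) is not None
    has_special = re.search(r"[^A-Za-z0-9\s]", password) is not None
    return has_digit and has_lower and has_upper and has_special
```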
10
0
verytinydever/ethereumToken
https://github.com/verytinydever/ethereumToken
null
# ethereumToken

This is not just a simple ERC20 token with basic functions. The classic mapping in a standard token looks like `mapping (address => uint) balances;`. Instead, I created a structure:

```
struct Customer {
    uint256 balance;
    uint256 lockTime;
    uint256 lockedTokens;
}

mapping(address => Customer) internal balances;
```

This way, every holder of this token can have, on a single address, both locked tokens (locked for a certain period of time) and free tokens (which they can transfer at any moment they want).
14
0
verytinydever/vstupai
https://github.com/verytinydever/vstupai
null
# vstupai
16
0
tro1d/flutter-scrcpy-manager
https://github.com/tro1d/flutter-scrcpy-manager
Simple Scrcpy Manager with Command Prompt for Windows built in Flutter.
# Scrcpy Manager

This is a simple `Scrcpy Manager` with `Command Prompt` for Windows, created using Flutter.

![2023-07-21_04-20-45](https://github.com/tro1d/flutter-scrcpy-manager/assets/23493345/f7571abf-a249-40fa-982e-3fc60b6f6e72)

https://github.com/tro1d/flutter-scrcpy-manager/assets/23493345/58971475-a1b4-4034-968c-674394877c1b

## Features

- Automatic device detection
- IP detection to check the connection status between the device and Windows
- Check for the latest Screen Copy version
- Console Command Prompt and ADB
- No need to edit environment paths
- and more.

`Note`: Requires an internet connection for the initial download and for checking the latest Scrcpy version.

## Support Scrcpy

- [GitHub Scrcpy](https://github.com/Genymobile/scrcpy/)
10
1
urazakgul/python-pandas-dersleri
https://github.com/urazakgul/python-pandas-dersleri
null
# Python Pandas Dersleri - [Python Pandas Dersleri](#python-pandas-dersleri) - [1. Pandas Nedir?](#1-pandas-nedir) - [2. Pandas Kütüphanesini Yükleme ve Çağırma](#2-pandas-kütüphanesini-yükleme-ve-çağırma) - [2.1. Yükleme](#21-yükleme) - [2.2. Çağırma](#22-çağırma) - [3. Veri Setini Tanıma, Veri Setinin İçeri Aktarılması ve İncelenmesi](#3-veri-setini-tanıma-veri-setinin-i̇çeri-aktarılması-ve-i̇ncelenmesi) - [3.1. Tanıma](#31-tanıma) - [3.2. İçeri Aktarma](#32-i̇çeri-aktarma) - [3.3. Baştaki Verileri Yazdırma: head()](#33-baştaki-verileri-yazdırma-head) - [3.4. Sondaki Verileri Yazdırma: tail()](#34-sondaki-verileri-yazdırma-tail) - [3.5. Satır ve Sütun Sayısı Bilgisi Alma: shape](#35-satır-ve-sütun-sayısı-bilgisi-alma-shape) - [3.6. Veri Seti Hakkında Detaylı Bilgi Alma: info()](#36-veri-seti-hakkında-detaylı-bilgi-alma-info) - [3.7. Sütun ve Satır Gösterimi Ayarları: pd.set\_option()](#37-sütun-ve-satır-gösterimi-ayarları-pdset_option) - [4. Pandas Series ve Pandas DataFrame Kavramları](#4-pandas-series-ve-pandas-dataframe-kavramları) - [4.1. Series](#41-series) - [4.2. DataFrame](#42-dataframe) - [4.2.1. Sütunlara Erişme](#421-sütunlara-erişme) - [4.2.1.1. Tekli Erişim](#4211-tekli-erişim) - [4.2.1.2. Çoklu Erişim](#4212-çoklu-erişim) - [4.2.2. Satırlara Erişme: iloc ve loc](#422-satırlara-erişme-iloc-ve-loc) - [4.2.2.1. iloc](#4221-iloc) - [4.2.2.2. loc](#4222-loc) - [4.2.3. Satır ve Sütunlara Erişme: İki Nokta Kullanımı](#423-satır-ve-sütunlara-erişme-i̇ki-nokta-kullanımı) - [4.2.4. Sütuna Ait Değerleri Saydırma: value\_counts()](#424-sütuna-ait-değerleri-saydırma-value_counts) - [5. İndeksler](#5-i̇ndeksler) - [5.1. İndeks nedir?](#51-i̇ndeks-nedir) - [5.2. İndeks Ayarlama: set\_index() ve index\_col](#52-i̇ndeks-ayarlama-set_index-ve-index_col) - [5.3. İndeks Aracılığıyla Erişim: loc](#53-i̇ndeks-aracılığıyla-erişim-loc) - [5.4. İndeks Sıfırlama: reset\_index()](#54-i̇ndeks-sıfırlama-reset_index) - [5.5. İndekslerin Sıralanması: sort\_index()](#55-i̇ndekslerin-sıralanması-sort_index) - [6. Filtreleme](#6-filtreleme) - [6.1. Tekli Filtreleme: İç içe ve loc](#61-tekli-filtreleme-i̇ç-içe-ve-loc) - [6.2. Çoklu Filtreleme: \&, | ve isin()](#62-çoklu-filtreleme---ve-isin) - [6.3. String İçerenleri Filtreleme: str.contains()](#63-string-i̇çerenleri-filtreleme-strcontains) - [7. Sütun ve Satır Güncelleme](#7-sütun-ve-satır-güncelleme) - [7.1. Sütun Güncelleme: columns, List Comprehension, str.replace(), rename](#71-sütun-güncelleme-columns-list-comprehension-strreplace-rename) - [7.2. Satır Güncelleme: loc, at, str.lower(), apply(), applymap(), lambda, map() ve replace()](#72-satır-güncelleme-loc-at-strlower-apply-applymap-lambda-map-ve-replace) - [8. Sütun ve Satır Ekleme ve Kaldırma](#8-sütun-ve-satır-ekleme-ve-kaldırma) - [8.1. Sütun Ekleme ve Kaldırma: str.split() ve drop()](#81-sütun-ekleme-ve-kaldırma-strsplit-ve-drop) - [8.2. Satır Ekleme ve Kaldırma: append(), concat() ve drop()](#82-satır-ekleme-ve-kaldırma-append-concat-ve-drop) - [9. Sıralama](#9-sıralama) - [9.1. Tekli Sıralama: sort\_values()](#91-tekli-sıralama-sort_values) - [9.2. Çoklu Sıralama: sort\_values()](#92-çoklu-sıralama-sort_values) - [9.3. İndekse Göre Sıralama: sort\_index()](#93-i̇ndekse-göre-sıralama-sort_index) - [9.4. Serilerin Sıralanması: sort\_values()](#94-serilerin-sıralanması-sort_values) - [9.5. En Büyüklerin Sıralanması: nlargest()](#95-en-büyüklerin-sıralanması-nlargest) - [9.6. En Küçüklerin Sıralanması: nsmallest()](#96-en-küçüklerin-sıralanması-nsmallest) - [10. 
Gruplama ve Özetleme](#10-gruplama-ve-özetleme) - [10.1. Tekli Sütunun Bir İstatistik Değeri: median()](#101-tekli-sütunun-bir-i̇statistik-değeri-median) - [10.2. Çoklu Sütunların Bir İstatistik Değeri: median()](#102-çoklu-sütunların-bir-i̇statistik-değeri-median) - [10.3. İstatistiksel Özet: describe()](#103-i̇statistiksel-özet-describe) - [10.4. Değerlerin Saydırılması: value\_counts()](#104-değerlerin-saydırılması-value_counts) - [10.5. Değerlerin Yüzdelere Ayrılması: normalize](#105-değerlerin-yüzdelere-ayrılması-normalize) - [10.6. Gruplayarak Saydırma, Yüzde Alma ve İndeks İnceleme: groupby(), value\_counts(), normalize ve loc](#106-gruplayarak-saydırma-yüzde-alma-ve-i̇ndeks-i̇nceleme-groupby-value_counts-normalize-ve-loc) - [10.7. Bir Gruba Göre Bir İstatistik: groupby() ve median()](#107-bir-gruba-göre-bir-i̇statistik-groupby-ve-median) - [10.8. Bir Gruba Göre Birden Fazla İstatistik: groupby(), agg(), median() ve std()](#108-bir-gruba-göre-birden-fazla-i̇statistik-groupby-agg-median-ve-std) - [10.9. Bir String İçeriğine Göre Bir İstatistik: groupby(), apply(), lambda, str.contains() ve sum()](#109-bir-string-i̇çeriğine-göre-bir-i̇statistik-groupby-apply-lambda-strcontains-ve-sum) - [11. Kayıp Veri](#11-kayıp-veri) - [11.1. NaN Sayısını Öğrenme: isna() ve sum()](#111-nan-sayısını-öğrenme-isna-ve-sum) - [11.2. NaN ve Temizliği: dropna()](#112-nan-ve-temizliği-dropna) - [11.3. Kayıp Veriyi Anlatan Manuel Girilmiş String İfadeleri NaN Yapma: replace()](#113-kayıp-veriyi-anlatan-manuel-girilmiş-string-i̇fadeleri-nan-yapma-replace) - [11.4. NaN Değerleri String Bir İfadeye Çevirme: fillna()](#114-nan-değerleri-string-bir-i̇fadeye-çevirme-fillna) - [11.5. NaN Değerleri Bir Önceki Değere Çevirme: fillna()](#115-nan-değerleri-bir-önceki-değere-çevirme-fillna) - [11.6. NaN Değerleri Bir Sonraki Değere Çevirme: fillna()](#116-nan-değerleri-bir-sonraki-değere-çevirme-fillna) - [11.7. NaN Değerleri Bir İstatistik Değerine Çevirme: fillna() ve mean()](#117-nan-değerleri-bir-i̇statistik-değerine-çevirme-fillna-ve-mean) - [11.8. NaN Değerlerinin Interpolasyon Tahmini: fillna() ve interpolate()](#118-nan-değerlerinin-interpolasyon-tahmini-fillna-ve-interpolate) - [12. Verilerin Dışarı Aktarılması](#12-verilerin-dışarı-aktarılması) - [12.1. CSV: to\_csv()](#121-csv-to_csv) - [12.2. XLSX: to\_excel()](#122-xlsx-to_excel) # 1. Pandas Nedir? --- Pandas, Python programlama dilinin üzerine inşa edilmiş hızlı, güçlü, esnek ve kullanımı kolay bir açık kaynak veri analizi ve manipülasyonu aracıdır. # 2. Pandas Kütüphanesini Yükleme ve Çağırma --- ## 2.1. Yükleme Pandas'ı yüklemek için, Python paket yöneticisi olan `pip`'i kullanabiliriz. Komut, aşağıdaki platformlarda çalıştırılabilir: * Windows: Komut İstemi (Command Prompt, Cmd) veya PowerShell * Linux: Terminal * macOS: Terminal Cmd kullanarak anlatacağım. ``` pip install pandas ``` Versiyon bilgisi yine Cmd'den aşağıdaki gibi öğrenilebilir. ``` python import pandas as pd pd.__version__ ``` Bu dersin ilk paylaşımında `1.5.2` versiyonu kullanılıyor olacak. ## 2.2. Çağırma Pandas başarıyla yüklendikten sonra aşağıdaki ifadeyi ekleyerek kütüphaneyi çağırabiliriz. ```python import pandas as pd ``` `pd` kısaltmasını kullanmak yaygın bir uygulamadır ancak kısaltma isteğe bağlı olarak değiştirilebilir. # 3. Veri Setini Tanıma, Veri Setinin İçeri Aktarılması ve İncelenmesi --- ## 3.1. Tanıma Ders anlatımında, İş Yatırım'ın Hisse Değerleri ve Oranları bölümünde bulunan Özet isimli tabloyu kullanacağız. 
Verilere [buradan](https://www.isyatirim.com.tr/tr-tr/analiz/hisse/Sayfalar/Temel-Degerler-Ve-Oranlar.aspx#page-1) ulaşabilir ve excel olarak indirebilirsiniz. Veriye erişim tarihim 07/07/2023 olduğu için sizin verileriniz ile farklılıklar olabilir. Eğer GitHub hesabımdaki `python-pandas-dersleri` repo'sunda bulunan `data`'dan `temelozet.xlsx` dosyasını indirirseniz herhangi bir farklılık olmayacaktır. ## 3.2. İçeri Aktarma Veriyi aşağıdaki gibi içeri aktarabiliriz. ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx') ``` İlerleyen derslerde göreceğimiz DataFrame'in kısaltması olan `df`'i kullanmak yaygın bir uygulamadır ancak değişken ismi isteğe bağlı olarak değiştirilebilir. ## 3.3. Baştaki Verileri Yazdırma: head() ```python df.head() ``` ![](/imgs/df_head.PNG) `df.head()` ile veri setinin ilk 5 satırına baktık. Burada 5 varsayılan değerdir. Örneğin, `df.head(10)` ile ilk 10 satıra da bakılabilirdi. ## 3.4. Sondaki Verileri Yazdırma: tail() ```python df.tail() ``` ![](/imgs/df_tail.PNG) `df.tail()` ile veri setinin son 5 satırına baktık. Burada 5 varsayılan değerdir. Örneğin, `df.tail(10)` ile son 10 satıra da bakılabilirdi. ## 3.5. Satır ve Sütun Sayısı Bilgisi Alma: shape Tam olarak satır ve sütun sayısı bilgisini alalım. ```python df.shape ``` `df.shape` veri çerçevesinin boyutunu döndürür ve veri çerçevesinin satır ve sütun sayısını bir demet olarak verir. Örneğimizde, (509, 8) şeklinde bir çıktı veri çerçevesinin 509 satır ve 8 sütundan oluştuğunu gösterir. ``` (509, 8) ``` ## 3.6. Veri Seti Hakkında Detaylı Bilgi Alma: info() `df.info()` fonksiyonu veri çerçevesi hakkında daha detaylı bilgiler sunar. Bu fonksiyon, veri çerçevesindeki her sütunun veri tipini, veri tiplerinin sayısal dağılımını, bellek kullanımını, eksik değerleri ve sütunların ve satırların toplamda kaç olduğu bilgisini gösterir. ```python df.info() ``` ![](/imgs/df_info.PNG) ## 3.7. Sütun ve Satır Gösterimi Ayarları: pd.set_option() `df.head()` veya `df.tail()` ile çalıştırdığımızda sütunların eğer sütun sayısı fazla olsaydı sadece bir kısmını görebilirdik. Veri çerçevesindeki sütunların tamamını görmek isteseydik `pd.set_option()` ile aşağıdaki ayarı yapabilirdik. ```python pd.set_option('display.max_columns', None) ``` `None` kullanılmasının amacı, `display.max_columns` seçeneğini sınırlamadan kaldırmaktır. Aynı şekilde, satır sayısını da aşağıdaki gibi ayarlayabiliriz. ```python pd.set_option('display.max_rows', None) ``` `None` kullanılmasının amacı, `display.max_rows` seçeneğini sınırlamadan kaldırmaktır. # 4. Pandas Series ve Pandas DataFrame Kavramları --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx') ``` Pandas DataFrame (bizim örneğimizde içeri aktardığımız `df`), iki boyutlu bir veri tablosunu temsil eder ve sütunlar ve satırlar şeklinde düzenlenmiş verileri içerir. Pandas Series ise (bizim örneğimizde içeri aktardığımız `df`'in herhangi bir sütunu) tek boyutlu bir diziyi temsil eder ve sıralı bir şekilde indekslenmiş verileri içerir. ## 4.1. Series ```python df_ornek = { 'Kod': ['A1CAP','ACSEL','ADEL','ADESE','AEFES','AFYON','AGESA'], 'Sektör': ['Aracı Kurumlar','Kimyasal Ürün','Kırtasiye','Perakande - Ticaret','Meşrubat / İçecek','Çimento','Sigorta'] } ``` Önce Pandas Series'e çevirelim. ```python pd_seri = pd.Series(df_ornek['Sektör'], index=df_ornek['Kod']) pd_seri ``` ![](/imgs/df_ornek_series.PNG) Bu kod satırında, `df_ornek` sözlüğünden `Sektör` anahtarına karşılık gelen değerleri alarak bir Pandas Series oluşturduk. 
`pd.Series()` işlevini kullanarak `df_ornek['Sektör']` listesini ve `df_ornek['Kod']` listesini sırasıyla `values` ve `index` parametreleri olarak verdik. `index`, her bir veri noktasını tanımlayan etiketlerden oluşan bir dizidir. ## 4.2. DataFrame Şimdi Pandas DataFrame'e çevirelim. ```python pd_df = pd.DataFrame(df_ornek) pd_df ``` ![](/imgs/df_ornek_dataframe.PNG) Veri çerçevesinde 0, 1, 2, ... gibi giden değerler görüyoruz. Bunlar indekstir ve indeksler her bir satırı tanımlayan tekil değerlerdir. Ancak tekil olmak zorunda değillerdir ki ilerleyen derslerde göreceğiz. ### 4.2.1. Sütunlara Erişme #### 4.2.1.1. Tekli Erişim Son oluşturduğumuz veri çerçevesinden `Sektör` sütununa erişmek istediğimizi varsayalım. ```python pd_df['Sektör'] ``` ![](/imgs/df_ornek_sektor.PNG) Çıktı tanıdık geliyor. Hemen veri tipine bakalım. ```python type(pd_df['Sektör']) ``` Çıktının bir seriyi temsil eden `pandas.core.series.Series` olduğunu göreceğiz. Serilerin tek boyutlu olduğunu öğrenmiştik. Veri çerçeveleri de serilerin birleşmesinden oluşuyor. Aynı sütuna ulaşmanın bir başka yolu ise nokta notasyonunu kullanmaktır. ```python pd_df.Sektör ``` Yine aynı çıktıyı almış olacağız. ![](/imgs/df_ornek_sektor.PNG) Hangi yöntemi tercih etmeliyiz? `pd_df['Sektör']` ifadesi, DataFrame üzerindeki sütuna doğrudan bir dizi indeksi kullanarak erişim sağlar. Bu yöntem, sütun ismi boşluk veya özel karakterler içerdiğinde veya Python programlama dilinde özel bir kelimeyle çakıştığında daha güvenlidir. Örneğin, eğer sütun ismi `sutun ismi` veya `if` gibi bir kelime ise bu ifadeleri kullanarak doğrudan sütuna erişim sağlayabiliriz. `pd_df.Sektör` ifadesi ise, nokta notasyonunu kullanarak sütuna erişim sağlar. Bu ifade daha kısa ve daha okunabilir bir yazım şekli sunar. Ancak bazı durumlarda, sütun ismi boşluk veya özel karakterler içeriyorsa veya Python programlama dilinde özel bir kelimeyle çakışıyorsa hata verebilir. Daha önce oluşturduğumuz veri çerçevesine sütunlar ekleyip az önce öğrendiklerimizi pekiştirelim. ```python df_ornek = { 'Kod': ['A1CAP','ACSEL','ADEL','ADESE','AEFES','AFYON','AGESA'], 'Hisse Adı': ['A1 Capital', 'Acıselsan Acıpayam Selüloz', 'Adel Kalemcilik', 'Adese AVM', 'Anadolu Efes', 'Afyon Çimento', 'Agesa Hayat ve Emeklilik'], 'Sektör': ['Aracı Kurumlar','Kimyasal Ürün','Kırtasiye','Perakande - Ticaret','Meşrubat / İçecek','Çimento','Sigorta'], 'if': [False,False,False,False,True,True,False], '@nerede': ['İstanbul','Denizli','İstanbul','Konya','İstanbul','İstanbul','İstanbul'] } pd_df = pd.DataFrame(df_ornek) ``` `Hisse Adı` sütununa erişmeye çalışalım. ```python pd_df['Hisse Adı'] ``` ![](/imgs/df_ornek_column_single_access.PNG) Bir de nokta notasyonunu kullanalım. ```python pd_df.Hisse Adı ``` Yukarıdaki ifade ile ilgili sütuna erişmeye çalışırsak `SyntaxError: invalid syntax` hatası alacağız. Python programlama dilinde anahtar kelime (keyword) olan ve koşullu ifadeleri belirtmek için kullanılan `if` isimli sütuna erişmeye çalışalım. ```python pd_df['if'] ``` Yukarıdaki ifadeyi kullanırsak ilgili sütuna erişebileceğiz. Bir de nokta notasyonunu kullanalım. ```python pd_df.if ``` Yukarıdaki ifade ile ilgili sütuna erişmeye çalışırsak `SyntaxError: invalid syntax` hatası alacağız. Özel bir karakter olan `@`'in kullanıldığı sütuna erişmeye çalışalım. ```python pd_df['@nerede'] ``` Yukarıdaki ifadeyi kullanırsak ilgili sütuna erişebileceğiz. Bir de nokta notasyonunu kullanalım. 
```python pd_df.@nerede ``` Yukarıdaki ifade ile ilgili sütuna erişmeye çalışırsak `SyntaxError: invalid syntax` hatası alacağız. #### 4.2.1.2. Çoklu Erişim Buraya kadar tek bir sütuna erişimi gördük. Birden fazla sütuna erişmek istediğimizde aşağıdaki ifadeyi kullanıyoruz. ```python pd_df[['Kod','Hisse Adı']] ``` ![](/imgs/df_ornek_column_multiple_access.PNG) Yukarıdaki iki duruma dikkat edelim. Birincisi, tek sütuna tek köşeli parantez (`[]`) ile ulaşırken çoklu sütunlara çift köşeli parantez (`[[]]`) ile ulaştık. İkincisi, tek sütuna erişirken çıktıyı bir seri olarak alıyorduk ancak çoklu sütunlara erişmek istediğimizde artık bir seri değil bir veri çerçevesi olarak çıktıyı alıyoruz. Son olarak, sütun isimlerinin tamamını görmek istiyorsak aşağıdaki ifadeyi kullanabiliriz. ```python pd_df.columns ``` Yukarıdaki ifade ile `Index(['Kod', 'Hisse Adı', 'Sektör', 'if', '@nerede'], dtype='object')` çıktısını almış olacağız. ### 4.2.2. Satırlara Erişme: iloc ve loc Burada iki tane kavram ile tanışacağız: `iloc` ve `loc`. #### 4.2.2.1. iloc `iloc`, integer location anlamına gelir ve DataFrame veya Series üzerinde konum tabanlı indeksleme yapmamıza olanak tanır. İndeksler sıfırdan başlar ve satır veya sütunları belirlemek için tamsayı indekslerini kullanır. `iloc` kullanırken satır veya sütunların konumunu belirtmek için köşeli parantez içinde tamsayı indeksleri kullanırız. İlk satıra erişelim. ```python df.iloc[0] ``` ![](/imgs/df_iloc_first_row.PNG) Yukarıda indeksin sütun isimleri olduğunu görüyoruz. Tek bir satıra erişebileceğimiz gibi birden fazla satıra da erişebiliriz. Tıpkı çoklu sütuna erişimde olduğu gibi ilerleyeceğiz. ```python df.iloc[[0,1]] ``` ![](/imgs/df_iloc_multiple_rows.PNG) Görüldüğü üzere çift parantez kullandık ve çıktıyı bir veri çerçevesi olarak aldık. `iloc` ile sütunlara da erişebiliriz. Örneğin, ilk iki satırın 5. sütununa erişmeye çalışalım. ```python df.iloc[[0,1],4] ``` ![](/imgs/df_iloc_rows_and_column.PNG) `Piyasa Değeri(mn TL)` 5. sütun olsa da indeksler sıfırdan başladığı için konumu 4'tür. Ayrıca çıktıyı seri olarak aldık. Çoklu sütunlara da erişebiliriz. Örneğin, 4. ve 5. sütunlara erişelim. ```python df.iloc[[0,1],[3,4]] ``` ![](/imgs/df_iloc_rows_and_columns.PNG) Çoklu olduğu zaman veri çerçevesi olarak alıyoruz. Son bir bilgi olarak, integer location'ları hangi sırayla yazarsak o sırayla çıktıyı alırız. ```python df.iloc[[1,0],[4,3]] ``` ![](/imgs/df_iloc_rows_and_columns_order.PNG) #### 4.2.2.2. loc `loc`, label location anlamına gelir ve DataFrame veya Series üzerinde etiket tabanlı indeksleme yapmak için kullanılan bir indeksleme yöntemidir. İlk satıra erişelim. Burada 0 etiketine sahip satırı getireceğiz. ```python df.loc[0] ``` ![](/imgs/df_loc_first_row.PNG) Tek bir satıra erişebileceğimiz gibi birden fazla satıra da erişebiliriz. Bu defa örnek olarak 0 ve 1 etiketlerine sahip satırları getireceğiz. ```python df.loc[[0,1]] ``` ![](/imgs/df_loc_multiple_rows.PNG) Buraya kadar yaptıklarımız aslında `iloc`'ta yaptıklarımıza benziyor ancak biz etiket bazlı ilerliyoruz. Son olarak, son sütuna erişelim. ```python df.loc[[0,1], 'Sermaye(mn TL)'] ``` ![](/imgs/df_loc_rows_and_column.PNG) `iloc`'tan farklı olarak sütuna erişmek istediğimizde direkt olarak ismini yazdık. Çoklu sütunlara da erişebiliriz. Örneğin, aşağıdaki iki sütuna erişelim. ```python df.loc[[0,1], ['Piyasa Değeri(mn TL)','Piyasa Değeri(mn $)']] ``` ![](/imgs/df_loc_rows_and_columns.PNG) Yine sütun isimlerini belirttik ve çift parantez kullandık. 
Aynı zamanda çoklu olduğu için veri çerçevesi olarak aldık. Son bir bilgi olarak, label location'ları hangi sırayla yazarsak o sırayla çıktıyı alırız. ```python df.loc[[1,0], ['Piyasa Değeri(mn $)','Piyasa Değeri(mn TL)']] ``` ![](/imgs/df_loc_rows_and_columns_order.PNG) ### 4.2.3. Satır ve Sütunlara Erişme: İki Nokta Kullanımı Örneğin, ilk 5 hissenin kod bilgilerine erişmek istediğimizi varsayalım. Bu durumda indeks veya etiketleri tek tek yazmamıza gerek kalmayacak. `:` kullanarak da ilk 5 satıra erişebiliriz. Burada dikkat etmemiz gereken nokta çift parantez yerine tek parantez kullanacak olmamızdır. ```python df.iloc[0:4,0] # veya df.loc[0:4, 'Kod'] ``` ![](/imgs/df_iloc_loc_double_dot_single.PNG) Aynısını `:` kullanarak sütunlar için de yapabiliriz. `Kod` sütunundan sonra gelen `Hisse Adı`, `Sektör` ve `Kapanış(TL)` sütunlarını da almak istediğimizi varsayalım. ```python df.iloc[0:4, 0:4] # veya df.loc[0:4, 'Kod':'Kapanış(TL)'] ``` ![](/imgs/df_iloc_loc_double_dot_multiple.PNG) ### 4.2.4. Sütuna Ait Değerleri Saydırma: value_counts() Sektörlere ve bunlara ait sayılara ulaşalım. ```python df['Sektör'].value_counts() ``` ![](/imgs/df_value_counts.PNG) Görüldüğü üzere, `GYO` sektörünün sayısı 41 ile ilk sırada yer alıyor. En az şirkete sahip sektörler 1 ile `Eğlence Hizmetleri` ve `Cam` olmuş. Visual Studio Code editörünü kullananlar için: *`Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...`* şeklinde bir bilgilendirme alabilirsiniz. Burada, `scrollable element` veya `text editor` seçeneklerine tıklarsanız çıktının tamamını görebilirsiniz. # 5. İndeksler --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx') ``` ## 5.1. İndeks nedir? ```python df ``` ![](/imgs/df_index.PNG) Sol tarafta bir sütunmuş gibi görünen 0, 1, 2, ... değerleri indekstir. İndeksler, bir numaralandırma veya etiketleme mekanizmasıdır. Örneğin, bir liste içindeki elemanların her biri bir indeks değerine sahiptir ve bu indeksler kullanılarak elemanlara erişebiliriz. Örneğimizdeki veri çerçevesinde indeks, satırları etiketlemek veya numaralandırmak için kullanılır. Varsayılan olarak, veri çerçevesinin indeksi sıfırdan başlayan tam sayılarla oluşturulur. Bununla birlikte, indeksler benzersiz olmak zorunda değildir. Yani aynı indeks değeri birden fazla satıra karşılık gelebilir. ## 5.2. İndeks Ayarlama: set_index() ve index_col İndeks ayarlamayı iki şekilde yapabiliriz. Birincisi, `set_index()` fonksiyonunu kullanmaktır. Örneğimizdeki `Kod` sütununu indeks olarak ayarlamak istediğimizi varsayalım. ```python df = df.set_index('Kod') df ``` ![](/imgs/df_index_kod.PNG) Yukarıda `Kod` sütununu indeks olarak ayarladık. Ancak değişiklikleri yine aynı veri çerçevesine atadık. Bunu yapmak yerine `inplace` parametresini `True` olarak ayarlayabiliriz. ```python df.set_index('Kod', inplace=True) df ``` İkincisi ise veriyi içeri aktarma sırasında `index_col` ile indeks ayarlaması yapmaktır. ```python df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') df ``` ![](/imgs/df_index_col.PNG) İndeksin ne olduğunu aşağıdaki gibi kontrol edebiliriz. `name` ile indeksin `Kod` olarak ayarlandığını görebiliriz. ```python df.index ``` ![](/imgs/df_index_result.PNG) ## 5.3. İndeks Aracılığıyla Erişim: loc `loc` ile `THYAO` indeksine ulaşalım. Aslında burada `iloc` ile `loc`'un ayrımı daha net görmüş olacağız. ```python df.loc['THYAO'] ``` ![](/imgs/df_index_spesific.PNG) Aynı indeksin `Halka AçıklıkOranı (%)` değerine bakalım. 
```python df.loc['THYAO', 'Halka AçıklıkOranı (%)'] ``` Çıktıyı `50.4` olarak alacağız. Yeri gelmişken, `iloc` ile `loc`'un farkını aşağıdaki gibi gösterebiliriz. ```python df.iloc[0] ``` Yukarıda iloc `A1 Capital` indeksinin değerlerini sağlıklı bir şekilde verebilirken `loc`'u aynı şekilde kullandığımızda `KeyError` hatası alacağız. ```python df.loc[0] ``` ## 5.4. İndeks Sıfırlama: reset_index() İndeksi `Kod` olarak ayarlamıştık. Varsayılan indeks değerlerine aşağıdaki gibi dönebiliriz. ```python df.reset_index(inplace=True) df ``` ![](/imgs/df_index_default.PNG) ## 5.5. İndekslerin Sıralanması: sort_index() İndeksleri artan sırada olacak şekilde sıralayabiliriz. ```python df.sort_index() ``` ![](/imgs/df_index_sort.PNG) Eğer sıralamayı azalan sırada yapmak istersek `ascending` parametresini `False` yapmamız gerekiyor. ```python df.sort_index(ascending=False) ``` ![](/imgs/df_index_sort_asc_false.PNG) Eğer sıralamanın kalıcı olmasını istersek `inplace` parametresini `True` yapmalıyız. ```python df.sort_index(ascending=False, inplace=True) # ya da df.sort_index(inplace=True) ``` # 6. Filtreleme --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') ``` ## 6.1. Tekli Filtreleme: İç içe ve loc `Sektör` sütununun `Bankacılık` olduğu değerleri filtreleyelim. Filtrelemenin birden fazla yolu olabilir. İlki olan iç içe yöntemine bakalım. ```python df[df['Sektör'] == 'Bankacılık'] ``` ![](/imgs/df_filter_single.PNG) İkinci bir yol olarak `loc`'u kullanabiliriz. Hatta burada filtrelemenin yanında herhangi bir sütunu da seçebiliriz. Örneğin, `Halka AçıklıkOranı (%)` sütununu alalım. ```python df.loc[df['Sektör'] == 'Bankacılık', 'Halka AçıklıkOranı (%)'] ``` ![](/imgs/df_filter_single_spesific_column.PNG) ## 6.2. Çoklu Filtreleme: &, | ve isin() Birden fazla filtreleme yapmak istediğimizde mantıksal operatörleri kullanabiliriz. Örneğin, hem `Sektör` sütunundan `Bankacılık` değerini hem de `Halka AçıklıkOranı (%)` sütunundan 50'den büyük olanları alalım ve `Hisse Adı` sütunundaki değerleri getirelim. ```python df.loc[(df['Sektör'] == 'Bankacılık') & (df['Halka AçıklıkOranı (%)'] > 50), 'Hisse Adı'] ``` ![](/imgs/df_filter_and_condition.PNG) Burada, her bir filtreleme işlemini parantez içerisine aldık. Örnekte, ve anlamına gelen `&` operatörünü kullandık. Bir de veya anlamına gelen `|` operatörünü kullanalım. ```python df.loc[(df['Sektör'] == 'Bankacılık') | (df['Halka AçıklıkOranı (%)'] > 50), 'Hisse Adı'] ``` ![](/imgs/df_filter_or_condition.PNG) Ve anlamına gelen `&` kullandığımız örneğe uymayan (tersi) değerleri getirelim. Bunun için `~` kullanmamız yeterli olacaktır. Yani, `Sektör` sütunu `Bankacılık` dışı olan ve `Halka AçıklıkOranı (%)` sütunu <=50 olacak ve `Hisse Adı` değerleri gelecek. ```python df.loc[~(df['Sektör'] == 'Bankacılık') & ~(df['Halka AçıklıkOranı (%)'] > 50), 'Hisse Adı'] ``` ![](/imgs/df_filter_and_condition_tilda.PNG) Eğer yukarıdaki ifadeyi `~` her iki filtreyi de dışarıdan kapsayacak şekilde yazarsak `Sektör` sütunu `Bankacılık` olan ve `Halka AçıklıkOranı (%)` sütunu >50 olanları ilk başta alıp sonra bunun dışında kalanları alacak ve `Hisse Adı` değerlerini getirecek. İlk koşula uyan bir tek AKBNK var. Bunun dışında kalan da 508 hisse olacak. ```python df.loc[~((df['Sektör'] == 'Bankacılık') & (df['Halka AçıklıkOranı (%)'] > 50)), 'Hisse Adı'] ``` ![](/imgs/df_filter_and_condition_tilda_general.PNG) Alternatif bir yol olarak `isin()` kullanılabilir. 
```python df_sektor = df.loc[df['Sektör'].isin(['GYO','Bankacılık'])] df_sektor ``` ![](/imgs/df_filter_isin_sektor.PNG) Sadece `Hisse Adı` sütununu alalım. ```python df_sektor = df.loc[df['Sektör'].isin(['GYO','Bankacılık']), 'Hisse Adı'] df_sektor ``` ![](/imgs/df_filter_isin_sektor_single_column.PNG) Yukarıda yapılan işlemin karışık gelmemesi için parçalara ayırabiliriz. Böylece yaptığımız işlem daha net anlaşılabilir. ```python sektorler = ['GYO','Bankacılık'] sektorler_filtre = df['Sektör'].isin(sektorler) df_sektor = df.loc[sektorler_filtre, 'Sektör'] df_sektor ``` ![](/imgs/df_filter_isin_sektor_single_column.PNG) ## 6.3. String İçerenleri Filtreleme: str.contains() `Hisse Adı` sadece `Enerji` içerenleri filtreleyelim. Bunun için bir string'i içerip içermediği kontrolü yapmış olacağız. ```python df_filtre_enerji = df.loc[df['Hisse Adı'].str.contains('Enerji', na=False)] df_filtre_enerji ``` ![](/imgs/df_filter_enerji.PNG) İhtiyacımız olmamasına rağmen `na` parametresini `False` olacak şekilde ekledik. İlgili sütunda `NA / NaN` içerdiğini varsayalım. Bu durumda kodu çalıştırdığımızda `ValueError: Cannot mask with non-boolean array containing NA / NaN values` hatası alırdık. `na=False` olarak ayarlandığında, `contains()` fonksiyonu eksik değerleri içeren satırları dikkate almadan sadece `Enerji` kelimesini içeren satırları filtrelemek için kullanılır. Yani, `Hisse Adı` sütununda `Enerji` kelimesini içeren satırları seçerken eksik değerleri göz ardı eder. Sadece ilgilendiğimiz `Hisse Adı` sütununu alalım. ```python df_filtre_enerji = df.loc[df['Hisse Adı'].str.contains('Enerji', na=False), 'Hisse Adı'] df_filtre_enerji ``` ![](/imgs/df_filter_enerji_hisseadi.PNG) # 7. Sütun ve Satır Güncelleme --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') ``` ## 7.1. Sütun Güncelleme: columns, List Comprehension, str.replace(), rename Tüm sütunların isimlerini güncelleyelim. ```python df.columns = [ 'HisseAdi', 'Sektor', 'KapanisTL', 'PiyasaDegeriMnTL', 'PiyasaDegeriMnUSD', 'HalkaAciklikOraniYuzde', 'SermayeMnTL' ] df ``` ![](/imgs/df_columns_update.PNG) Tüm sütun isimlerini büyük harfe çevirelim. Bu işlemi tek tek yapmak yerine list comprehension yöntemi ile yapacağız. ```python df.columns = [sutun.upper() for sutun in df.columns] df ``` ![](/imgs/df_columns_update_listcomp_upper.PNG) Bu kod, bir Pandas DataFrame'in sütun isimlerini büyük harflere dönüştürmek için kullanılan bir dizi ifadedir. Kod, list comprehension yöntemini kullanarak, DataFrame'in sütunlarını tek tek dolaşarak her bir sütunun ismini büyük harflere dönüştürür ve bu dönüşüm sonucunda oluşan yeni sütun isimlerini DataFrame'in sütunlarına atar. USD içeren sütunları `$` ile değiştirelim. ```python df.columns = df.columns.str.replace('USD','$') df ``` ![](/imgs/df_columns_update_replace_usd.PNG) Hepsini tekrar list comprehension ile bu defa küçük yapalım. ```python df.columns = [sutun.lower() for sutun in df.columns] df ``` ![](/imgs/df_columns_update_listcomp_lower.PNG) Sütun güncellemeyi `rename()` ile de yapabiliriz. Değişikliklerin uygulanması için `inplace` parametresini de `True` yapalım. ```python df.rename(columns={ 'hisseadi':'HisseAdi', 'sektor':'Sektor', 'kapanistl':'KapanisTL', 'piyasadegerimntl':'PiyasaDegeriMnTL', 'piyasadegerimn$':'PiyasaDegeriMnUSD', 'halkaaciklikoraniyuzde':'HalkaAciklikOraniYuzde', 'sermayemntl':'SermayeMnTL' }, inplace=True) df ``` ![](/imgs/df_columns_update_rename.PNG) ## 7.2. 
Satır Güncelleme: loc, at, str.lower(), apply(), applymap(), lambda, map() ve replace() `A1CAP` etiketine sahip satırdaki bilgileri güncelleyelim. ```python df.loc['A1CAP'] = ['A1 Capital (Test)','Aracı Kurumlar (Test)',26.80,3618.0,138.7,25.9,135] df ``` ![](/imgs/df_row_update.PNG) Burada ilgili satırdaki bazı sütunlara denk gelen değerleri güncelledik. Eğer çok daha fazla sütun olsaydı tek tek hepsini yazmak zor olurdu. Bilgisini değiştirdiğimiz `Hisse Adı` ve `Sektör` sütunlarına ait değerleri eski haline getirelim. ```python df.loc['A1CAP', ['Hisse Adı','Sektör']] = ['A1 Capital','Aracı Kurumlar'] df ``` ![](/imgs/df_row_update_spesific.PNG) Eğer tek bir değeri güncellemek istersek bunu iki farklı yoldan yapabiliriz. Birincisi, her zaman kullandığımız `loc`; ikincisi ise `at` yöntemi. ```python df.loc['A1CAP', 'Hisse Adı'] = 'A1 Capital Test' # veya df.at['A1CAP', 'Hisse Adı'] = 'A1 Capital Test' df ``` ![](/imgs/df_row_update_loc_at.PNG) Bir filtreleme sonrası da tek bir hücre için güncelleme yapılabilir. ```python df.loc[df['Sektör'] == 'Bankacılık', 'Halka AçıklıkOranı (%)'] = 0 df.loc[df['Sektör'] == 'Bankacılık', 'Halka AçıklıkOranı (%)'] ``` ![](/imgs/df_rows_update_single_column.PNG) Çoklu satır güncellemesi yapmak istediğimiz zaman birkaç farklı yolu kullanabiliriz. Örneğin, `Hisse Adı` sütunundaki tüm değerleri küçük yapalım. Bunun için birincisi `str.lower()` kullanabiliriz. ```python df['Hisse Adı'] = df['Hisse Adı'].str.lower() df ``` ![](/imgs/df_rows_update_single_column_lower.PNG) İkinci bir yol olan `apply()` ile `Hisse Adı` sütunundaki tüm değerleri büyük harfli yapalım. Bunun için önce bir fonksiyon yazıp ardından bu fonksiyonu `apply()` ile uygulayacağız. ```python def hisse_adi_guncelle(hisse_adi): return hisse_adi.upper() df['Hisse Adı'] = df['Hisse Adı'].apply(hisse_adi_guncelle) df ``` ![](/imgs/df_rows_update_single_column_lower_apply.PNG) Üçüncü bir yol olan `apply()` ve `lambda` ile `Hisse Adı` sütunundaki tüm değerlerin yalnızca ilk harflerini büyük bırakalım. ```python df['Hisse Adı'] = df['Hisse Adı'].apply(lambda x: x.capitalize()) # veya df['Hisse Adı'] = df['Hisse Adı'].apply(lambda x: x.title()) # veya df['Hisse Adı'] = df['Hisse Adı'].apply(lambda x: x[0] + x[1:].lower()) df ``` ![](/imgs/df_rows_update_single_column_lower_apply_lambda.PNG) Dördüncü bir yol olan `applymap()` ve `lambda` ile `Hisse Adı` ve `Sektör` sütunlarındaki harfleri küçük yapalım. ```python df[['Hisse Adı','Sektör']] = df[['Hisse Adı','Sektör']].applymap(lambda x: x.lower()) df ``` ![](/imgs/df_rows_update_single_column_lower_applymap_lambda.PNG) Beşinci bir yol olan `map()` veya `replace()` ile seriler üzerinde güncelleme işlemleri yapabiliriz. ```python df['Hisse Adı'].map({'a1 capital test':'a1 capital'}) ``` ![](/imgs/df_rows_update_single_column_map.PNG) Ancak `map()` yönteminde eşleştirme sözlüğünde yer almayan değerler dönüşüm sırasında `NaN` olarak kabul edilir. Bu noktada `replace()` fonksiyonunu kullanabiliriz. ```python df['Hisse Adı'].replace({'a1 capital test':'a1 capital'}) ``` ![](/imgs/df_rows_update_single_column_replace.PNG) Değişiklikleri kaydetmek için yine aynı veri çerçevesine atayabiliriz. # 8. Sütun ve Satır Ekleme ve Kaldırma --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') ``` ## 8.1. Sütun Ekleme ve Kaldırma: str.split() ve drop() `Hisse Adı` sütunu ile `Sektör` sütununu yeni bir sütunda birleştirelim. 
```python df['HisseAdi@Sektor'] = df['Hisse Adı'] + '@' + df['Sektör'] df ``` ![](/imgs/df_new_column.PNG) `Hisse Adı` ve `Sektör` sütunlarına ihtiyacımız olmadığını düşünelim. Bunları `drop()` yardımıyla kaldırabiliriz. Değişiklikleri de aynı veri çerçevesine `inplace` parametresini `True` yapıp kaydedelim. ```python df.drop(columns=['Hisse Adı','Sektör'], inplace=True) df ``` ![](/imgs/df_drop_columns.PNG) Kaldırdığımız sütunları tekrar yerine koyalım. Bunun için `str.split()` fonksiyonunu kullanacağız. ```python df['HisseAdi@Sektor'].str.split('@') ``` ![](/imgs/df_split_column.PNG) Sonucu yeni sütunlar olarak genişletelim. Bunu, `expand` parametresi ile yapacağız. ```python df['HisseAdi@Sektor'].str.split('@', expand=True) ``` ![](/imgs/df_split_column_new.PNG) Yeni oluşan sütunları veri çerçevesine ekleyelim. ```python df[['Hisse Adı','Sektör']] = df['HisseAdi@Sektor'].str.split('@', expand=True) df ``` ![](/imgs/df_split_column_new_final.PNG) ## 8.2. Satır Ekleme ve Kaldırma: append(), concat() ve drop() Sadece `Hisse Adı` sütununa bir veri girişi yapalım. Bunu `append()` ile yapacağız. ```python df.append({'Hisse Adı':'TEST'}) ``` Eğer bu şekilde yaparsak `TypeError: Can only append a dict if ignore_index=True` hatasını alacağız. Bu hata, yalnızca bir sözlüğü veri çerçevesine ekleyebileceğimizi söylüyor. Bunu `ignore_index` parametresini `True` yaparak aşabiliriz. ```python df.append({'Hisse Adı':'TEST'}, ignore_index=True) ``` ![](/imgs/df_new_row.PNG) Görüldüğü üzere, diğerlerini `NaN` olarak ekledi. İki veri çerçevesini birleştirelim. Bunun için bir veri çerçevesi daha oluşturalım. İlk veri çerçevesini de ilk hali ile kullanalım. ```python df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') df2 = { 'Kod':['TST'], 'Hisse Adı':['TEST'], 'Sektör':['Bankacılık'], 'Kapanış(TL)':[0], 'Piyasa Değeri(mn TL)':[0], 'Piyasa Değeri(mn $)':[0], 'Halka AçıklıkOranı (%)':[0], 'Sermaye(mn TL)':[0], 'USDTRY':26 } df2 = pd.DataFrame(df2) df2.set_index('Kod', inplace=True) df2 ``` ![](/imgs/df2.PNG) İkinci veri çerçevesinde bir sütun fazla. Bu durumda birleştirme sırasında bunu görmezden geleceğiz. ```python df3 = df.append(df2, ignore_index=True) df3 ``` ![](/imgs/df3_append.PNG) Burada aslında çıkan uyarıları da dikkate almamız gerekiyor. Son kodu çalıştırdığımızda bize `FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.` şeklinde bir uyarı veriliyor. Bu uyarı, `append()` fonksiyonunun pandas'ın gelecekteki bir sürümünde kullanımdan kaldırılacağını ve bunun yerine `concat()` fonksiyonunu kullanmamız gerektiğini söylüyor. Biz de kullanalım. ```python df3 = pd.concat([df,df2], ignore_index=True) df3 ``` ![](/imgs/df3_concat.PNG) Yine aynı çıktıyı aldık. 509 numaralı indeksi kaldırmak istediğimizi varsayalım. Daha önce kullandığımız `drop()` fonksiyonunun içine `index` parametresini ekleyerek kaldırma işlemini gerçekleştirebiliriz. ```python df3.drop(index=509, inplace=True) df3 ``` ![](/imgs/df3_drop_index.PNG) Yukarıda sadece bir adet indeks belirtip onu kaldırdık. Sadece `Aracı Kurumlar` içeren satırları indeks ile kaldırmak istediğimizi varsayalım. Önce koşulu belirteceğiz ardından da bu koşulun indekslerini alacağız. ```python df3.drop(index=df3[df3['Sektör'] == 'Aracı Kurumlar'].index, inplace=True) df3 ``` ![](/imgs/df3_drop_index_spesific.PNG) # 9. Sıralama --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') ``` ## 9.1. 
Tekli Sıralama: sort_values() Veri çerçevesini kapanış fiyatlarına göre sıralayalım. ```python df.sort_values(by='Piyasa Değeri(mn $)', inplace=True) df ``` ![](/imgs/df_sort_values.PNG) Yukarıda küçükten büyüğe doğru sıraladık. Şimdi ise büyükten küçüğe doğru sıralayalım. ```python df.sort_values(by='Piyasa Değeri(mn $)', ascending=False, inplace=True) df ``` ![](/imgs/df_sort_values_asc_false.PNG) ## 9.2. Çoklu Sıralama: sort_values() `Sektör` sütununa göre artan ve `Piyasa Değeri(mn $)` sütununa göre azalan şekilde sıralayalım. ```python df.sort_values(by=['Sektör','Piyasa Değeri(mn $)'], ascending=[True, False], inplace=True) df ``` ![](/imgs/df_sort_values_multiple.PNG) ## 9.3. İndekse Göre Sıralama: sort_index() İndekse göre artan bir şekilde sıralayabiliriz. ```python df.sort_index(inplace=True) df ``` ![](/imgs/df_sort_index.PNG) İndekse göre azalan bir şekilde de sıralayabiliriz. ```python df.sort_index(ascending=False, inplace=True) df ``` ![](/imgs/df_sort_index_asc_false.PNG) ## 9.4. Serilerin Sıralanması: sort_values() `Sektör` sütununu alıp seri olacak şekilde bir sıralama yapabiliriz. ```python df['Sektör'].sort_values() ``` ![](/imgs/df_sort_values_series.PNG) ## 9.5. En Büyüklerin Sıralanması: nlargest() `Piyasa Değeri(mn $)` sütununa göre piyasa değeri $ cinsinden en yüksek 10'a bakalım. ```python df.nlargest(10, 'Piyasa Değeri(mn $)') ``` ![](/imgs/df_nlargest.PNG) ## 9.6. En Küçüklerin Sıralanması: nsmallest() `Piyasa Değeri(mn $)` sütununa göre piyasa değeri $ cinsinden en düşük 10'a bakalım. ```python df.nsmallest(10, 'Piyasa Değeri(mn $)') ``` ![](/imgs/df_nsmallest.PNG) # 10. Gruplama ve Özetleme --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') ``` ## 10.1. Tekli Sütunun Bir İstatistik Değeri: median() `Piyasa Değeri(mn $)` sütununun medyan değerine bakalım. ```python df['Piyasa Değeri(mn $)'].median() ``` Piyasa değerinin `107.3` milyon $ olduğu öğrendik. ## 10.2. Çoklu Sütunların Bir İstatistik Değeri: median() `Piyasa Değeri(mn TL)` ve `Piyasa Değeri(mn $)` sütunlarının medyan değerine bakalım. ```python df[['Piyasa Değeri(mn TL)','Piyasa Değeri(mn $)']].median() ``` ![](/imgs/df_multiple_columns_median.PNG) ## 10.3. İstatistiksel Özet: describe() Sayısal veri tipine sahip sütunların istatistiksel özetlerine bakalım. İstatistiksel özet: * count: Sütundaki non-null (boş olmayan) değerlerin sayısı. * mean: Sütundaki değerlerin ortalaması. * std: Sütundaki değerlerin standart sapması. * min: Sütundaki en küçük değer. * 25%: Alt çeyrek yüzdesi, sütundaki değerlerin %25'inin altında olan değer. * 50%: Medyan veya ortanca, sütundaki değerlerin yarısından küçük ve yarısından büyük olan değer. * 75%: Üst çeyrek yüzdesi, sütundaki değerlerin %75'inin altında olan değer. * max: Sütundaki en büyük değer. ```python df_istatistiksel_ozet = df.drop(['Hisse Adı','Sektör'], axis=1) df_istatistiksel_ozet.describe() ``` `axis=0`'da (varsayılan değer) işlemler satırlar boyunca yapılır. `axis=1`'de ise işlemler sütunlar boyunca yapılır. ![](/imgs/df_describe.PNG) ## 10.4. Değerlerin Saydırılması: value_counts() `Sektör` sütunundaki değerleri saydıralım. ```python df['Sektör'].value_counts() ``` ![](/imgs/df_value_counts.PNG) ## 10.5. Değerlerin Yüzdelere Ayrılması: normalize `Sektör` sütunundaki değerleri saydırmıştık. Bunların yüzde paylarını `normalize` parametresini `True` yaparak alabiliriz. ```python df['Sektör'].value_counts(normalize=True) ``` ![](/imgs/df_value_counts_normalize.PNG) ## 10.6. 
Gruplayarak Saydırma, Yüzde Alma ve İndeks İnceleme: groupby(), value_counts(), normalize ve loc Öncelikle `Halka AçıklıkOranı (%)` sütununa göre yeni bir sütun oluşturalım. 50'den büyüksek `>50`; küçük veya eşitse `<=50` yazsın. ```python df['HalkaAciklikOraniGrup'] = df['Halka AçıklıkOranı (%)'].apply(lambda x: '>50' if x > 50 else '<=50') df ``` ![](/imgs/df_new_group.PNG) Şimdi `Sektör` sütununa göre `HalkaAciklikOraniGrup` sütununu saydıralım. ```python df.groupby(['Sektör'])['HalkaAciklikOraniGrup'].value_counts() ``` ![](/imgs/df_groupby_value_counts.PNG) İstediğimizi elde ettik. Son olarak örneğin, `Teknoloji` sektörüne bakalım. ```python df.groupby(['Sektör'])['HalkaAciklikOraniGrup'].value_counts().loc['Teknoloji'] ``` ![](/imgs/df_groupby_value_counts_spesific.PNG) Görüldüğü üzere, ilgilendiğimiz sektördeki halka açıklık dağılımı bilgisine gruplandırılmış olarak ulaştık. Aynı bilgiye yüzde olarak da erişebiliriz. ```python df.groupby(['Sektör'])['HalkaAciklikOraniGrup'].value_counts(normalize=True).loc['Teknoloji'] ``` ![](/imgs/df_groupby_value_counts_spesific_pct.PNG) ## 10.7. Bir Gruba Göre Bir İstatistik: groupby() ve median() `Sektör` sütununa göre sektörlerin piyasa değerlerinin medyanını `Piyasa Değeri(mn $)` sütununu kullanarak alalım. ```python df.groupby(['Sektör'])['Piyasa Değeri(mn $)'].median() ``` ![](/imgs/df_groupby_median.PNG) ## 10.8. Bir Gruba Göre Birden Fazla İstatistik: groupby(), agg(), median() ve std() `Sektör` sütununa göre sektörlerin piyasa değerlerinin medyanını `Piyasa Değeri(mn $)` sütununu kullanarak alalım. Bunun yanına bir de standart sapma ekleyelim. ```python df.groupby(['Sektör'])['Piyasa Değeri(mn $)'].agg(['median','std']) ``` ![](/imgs/df_groupby_agg_median_std.PNG) Sütun isimlerini güncelleyebiliriz. ```python df.groupby(['Sektör'])['Piyasa Değeri(mn $)'].agg(Medyan='median',StandartSapma='std') ``` ![](/imgs/df_groupby_agg_median_std_update_columns.PNG) ## 10.9. Bir String İçeriğine Göre Bir İstatistik: groupby(), apply(), lambda, str.contains() ve sum() `Hisse Adı` sütununda `Enerji` içeren hisseleri `HalkaAciklikOraniGrup` sütununa göre saydıralım. ```python df.groupby(['HalkaAciklikOraniGrup']).apply(lambda x: x['Hisse Adı'].str.contains('Enerji').sum()) # veya df.groupby(['HalkaAciklikOraniGrup'])['Hisse Adı'].apply(lambda x: x.str.contains('Enerji').sum()) ``` ![](/imgs/df_groupby_apply_lambda_str_contains_sum.PNG) # 11. Kayıp Veri --- ```python import pandas as pd df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod') ``` ## 11.1. NaN Sayısını Öğrenme: isna() ve sum() Bazı sütunların bazı değerlerini `NaN` yapalım. ```python import numpy as np np.random.seed(34) random_satirlar = df.sample(n=200) df2 = df df2.loc[random_satirlar.index, ['Kapanış(TL)','Piyasa Değeri(mn $)']] = np.nan df2 ``` ![](/imgs/df2_nan.PNG) Her bir sütunda kaç adet `NaN` olduğunu bulabiliriz. ```python df2.isna().sum() ``` Eğer yukarıda bir `sum()` daha eklersek toplam `NaN` sayısını alırız. ```python df2.isna().sum().sum() ``` Bu da `400` değerini verecektir. ![](/imgs/df2_nan_sum.PNG) ## 11.2. NaN ve Temizliği: dropna() `dropna()` kullanarak `NaN` içeren satırları kaldırabiliriz. ```python df2.dropna() ``` ![](/imgs/df2_nan_drop.PNG) 509 satırlık veri çerçevesinin iki sütununa 200 adet `NaN` atamıştık. 200'ünü de kaldırıp 309 satırlık bir veri çerçevesi bıraktı. `dropna()`'i aşağıdaki gibi özelleştirerek de kullanabilirdik. 
```python df2.dropna(axis='index', how='all', subset=['Kapanış(TL)','Piyasa Değeri(mn $)']) ``` ![](/imgs/df2_nan_drop.PNG) Eksik değerlerin satırlarda bulunduğunu belirtmek için `axis='index'` parametresi kullanılır. `how='all'` parametresi, bir satır veya sütunda tüm değerlerin eksik olduğu durumu belirtir. `'all'` değeri, tüm değerlerin eksik olduğu satırları çıkarmak için kullanılır. Yani, bir satırdaki tüm belirtilen sütunlarda eksik değer varsa o satır veri çerçevesinden çıkarılır. `subset` parametresi ile eksik değerlerin kontrol edileceği sütunları belirttik. Sonuç olarak, `Kapanış(TL)` ve `Piyasa Değeri(mn $)` sütunlarında eksik değerleri olan satırları veri çerçevesinden çıkardık. ## 11.3. Kayıp Veriyi Anlatan Manuel Girilmiş String İfadeleri NaN Yapma: replace() Bazı sütunların bazı değerlerini `NaN` yapmak yerine `Veri Yok` yazdığımızı varsayalım. ```python import numpy as np np.random.seed(34) random_satirlar = df.sample(n=200) df2 = df df2.loc[random_satirlar.index, ['Kapanış(TL)','Piyasa Değeri(mn $)']] = 'Veri Yok' df2 ``` ![](/imgs/df2_veriyok.PNG) `Veri Yok` yazan satırları `replace()` ile `NaN` yapalım. ```python df2.replace(to_replace='Veri Yok', value=np.nan, inplace=True) df2 ``` ![](/imgs/df2_veriyok_nan.PNG) ## 11.4. NaN Değerleri String Bir İfadeye Çevirme: fillna() `NaN` içeren satırları belirlediğimiz bir string ifadeye çevirelim. ```python df2.fillna(value='VERI YOK') ``` ![](/imgs/df2_fillna.PNG) ## 11.5. NaN Değerleri Bir Önceki Değere Çevirme: fillna() ```python import numpy as np np.random.seed(34) random_satirlar = df.sample(n=200) df2 = df df2.loc[random_satirlar.index, ['Kapanış(TL)','Piyasa Değeri(mn $)']] = np.nan ``` Bu örnek için pek anlamlı olmasa da nasıl yapıldığını görmek için yapmış olalım. Bunun için `method` parametresini `pad` yapacağız. ```python df2.fillna(method = 'pad') ``` ![](/imgs/df2_fillna_pad.PNG) ## 11.6. NaN Değerleri Bir Sonraki Değere Çevirme: fillna() ```python import numpy as np np.random.seed(34) random_satirlar = df.sample(n=200) df2 = df df2.loc[random_satirlar.index, ['Kapanış(TL)','Piyasa Değeri(mn $)']] = np.nan ``` Bu örnek için pek anlamlı olmasa da nasıl yapıldığını görmek için yapmış olalım. Bunun için `method` parametresini `bfill` yapacağız. ```python df2.fillna(method = 'bfill') ``` ![](/imgs/df2_fillna_bfill.PNG) ## 11.7. NaN Değerleri Bir İstatistik Değerine Çevirme: fillna() ve mean() ```python import numpy as np np.random.seed(34) random_satirlar = df.sample(n=200) df2 = df df2.loc[random_satirlar.index, ['Kapanış(TL)','Piyasa Değeri(mn $)']] = np.nan ``` Bu örnek için pek anlamlı olmasa da nasıl yapıldığını görmek için yapmış olalım. İstatistik olarak ortalamayı kullanalım. ```python df2.fillna(value=df2['Kapanış(TL)'].mean()) ``` ![](/imgs/df2_fillna_mean.PNG) ## 11.8. NaN Değerlerinin Interpolasyon Tahmini: fillna() ve interpolate() ```python import numpy as np np.random.seed(34) random_satirlar = df.sample(n=200) df2 = df df2.loc[random_satirlar.index, ['Kapanış(TL)','Piyasa Değeri(mn $)']] = np.nan ``` Bu örnek için pek anlamlı olmasa da nasıl yapıldığını görmek için yapmış olalım. `method` parametresini `linear` yapacağız. ```python df2.interpolate(method='linear') ``` ![](/imgs/df2_interpolate_linear.PNG) # 12. Verilerin Dışarı Aktarılması --- ## 12.1. 
CSV: to_csv() ```python # En basit haliyle kaydetme df.to_csv('./data/temelozet_v2.csv') # İndeksleri çıkarma df.to_csv('./data/temelozet_v2.csv', index=False) # Türkçe karakterleri dikkate alma df.to_csv('./data/temelozet_v2.csv', index=False, encoding='utf-8') # Zip'li kaydetme zip_secenekler = dict(method='zip', archive_name='output.csv') df.to_csv('./data/output.zip', compression=zip_secenekler) # Farklı bir dosyaya kaydetme (yol-1) from pathlib import Path dosya_yolu = Path('./data/data_alt/temelozet_v2.csv') dosya_yolu.parent.mkdir(parents=True, exist_ok=True) df.to_csv(dosya_yolu) # Farklı bir dosyaya kaydetme (yol-2) import os os.makedirs('./data/data_alt', exist_ok=True) df.to_csv('./data/data_alt/temelozet_v2.csv') ``` ## 12.2. XLSX: to_excel() ```python # En basit haliyle kaydetme df.to_excel('./data/temelozet_v2.xlsx') # İndeksleri çıkarma df.to_excel('./data/temelozet_v2.xlsx', index=False) # Sheet ismini değiştirme df.to_excel('./data/temelozet_v2.xlsx', sheet_name='IsYatirim') ```
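As a small supplement to the export examples above (a hedged, self-contained sketch with English comments; the file name follows the tutorial's `./data/` convention and is otherwise arbitrary), the frame can be exported to CSV, read back with the same `Kod` index used throughout the tutorial, and checked for a lossless round trip:

```python
import pandas as pd

df = pd.read_excel('./data/temelozet.xlsx', index_col='Kod')

# Export to CSV, then re-import using the same index column.
df.to_csv('./data/temelozet_v2.csv', encoding='utf-8')
df_geri = pd.read_csv('./data/temelozet_v2.csv', index_col='Kod')

# The shape and column names should match the original frame.
print(df.shape == df_geri.shape)                  # True
print(list(df.columns) == list(df_geri.columns))  # True
```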
49
0
zhangwenboi/daimaiqr
https://github.com/zhangwenboi/daimaiqr
Ticket-grabbing assistant: copy the Damai session you want to grab and convert it into a QR code, then scan the code with the Damai app to enter.
# Damai Ticket-Grabbing Assistant

## The latest tutorial has been updated

It has fallen through again; the group is currently discussing auto-clickers and slider captchas...

[Use it directly](http://wen_wen_okok.gitee.io/daimai_qr)

## A technical discussion group has been created, now with 300+ members, all technical experts; add me on WeChat and I'll invite you

The latest approaches are posted in the group first; they are all worked out by the members....

```
momoppp666
```

~~OK, I'm back with another update: after generating the code, just scan it to log in....~~

~~Updates paused for a few days; updates only in the group...~~

~~Early morning of 2023/7/27: the new approach has finished testing and has been released...~~

~~Tap "Share" to copy the link out, then convert it with the transcoding tool.~~

~~9 a.m. on 2023/7/26: the latest bp approach appeared in the group. No idea how long it will survive.~~

~~10 p.m. on 2023/7/25: without bp there is no effective method; if you are capable, please join the group to discuss the latest approach...~~

~~4 p.m. on 2023/7/25: the group is working on a new method...~~

~~3 p.m. on 2023/7/25: Damai pushed another hot update; bp can no longer be used.~~

~~10 a.m. on 2023/7/25: the project has been updated and works normally when scanning with Taobao!~~

~~Because Damai pushed a hot update at noon on 2023/7/24, you now need to scan the QR code with Taobao.~~

~~Copy the Damai session you want to grab, use this tool to convert it into a QR code, then scan it with the Damai app to enter. Android is recommended; on iOS you have to select the attendee.~~

~~Scan the code about ten minutes in advance; when the time comes, just tap confirm and submit the order.~~

## Advice

This tool only generates bp links; if you are looking for a fully automatic solution, forget it.
Fully automatic solutions are all paid. If you have the time and like to dig into things, you can try building one yourself.
Only link conversion is provided; for the ticket-grabbing group, proxy purchasing, or ticket sourcing, +v: (Moutai and digital goods are also bought back).
56
5
LWL-cpu/SCPRG-master
https://github.com/LWL-cpu/SCPRG-master
The code implementation of "Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance" in the Findings of ACL 2023.
# SCPRG

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/enhancing-document-level-event-argument/event-argument-extraction-on-wikievents)](https://paperswithcode.com/sota/event-argument-extraction-on-wikievents?p=enhancing-document-level-event-argument)

Source code for the Findings of ACL 2023 paper: [Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance](https://aclanthology.org/2023.findings-acl.817). Our code is based on TSAR (A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction), available [here](https://github.com/RunxinXu/TSAR), and we thank its authors for their implementation.

## 🔥 Introduction

Document-level event argument extraction poses new challenges of long input and cross-sentence inference compared to its sentence-level counterpart. However, most prior works focus on capturing the relations between candidate arguments and the event trigger in each event, ignoring two crucial points: a) non-argument contextual clue information; b) the relevance among argument roles. In this paper, we propose a SCPRG (Span-trigger-based Contextual Pooling and latent Role Guidance) model, which contains two novel and effective modules for the above problem. The Span-Trigger-based Contextual Pooling (STCP) adaptively selects and aggregates the information of non-argument clue words based on the context attention weights of specific argument-trigger pairs from the pre-trained model. The Role-based Latent Information Guidance (RLIG) module constructs latent role representations, makes them interact through role-interactive encoding to capture semantic relevance, and merges them into candidate arguments. Both STCP and RLIG introduce no more than 1% new parameters compared with the base model and can be easily applied to other event extraction models, as they are compact and transplantable. Experiments on two public datasets show that our SCPRG outperforms previous state-of-the-art methods, with 1.13 F1 and 2.64 F1 improvements on RAMS and WikiEvents respectively. Further analyses illustrate the interpretability of our model. You can refer to our [paper](https://aclanthology.org/2023.findings-acl.817) for more details.

<div align=center>
<img width="800" height="350" src="./model.png"/>
</div>

## 🚀 How to use our code?

### 1. Dependencies

- pytorch==1.9.0
- transformers==4.8.1
- datasets==1.8.0
- tqdm==4.49.0
- spacy==3.2.4
- opt_einsum
- wandb

For the usage of spacy, the following command could be helpful.

```bash
>> pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.2.0/en_core_web_sm-3.2.0.tar.gz
```

### 2. Data Preprocessing

You can first download the datasets and some scripts [here](https://drive.google.com/file/d/1euuD7ST94b5smaUFo6ROLW_ZasHwDpib/view?usp=sharing). You only need to unzip the data.zip. Then go to the [data/wikievents](./data/wikievents) folder and run the following command, which converts the data formats.

```bash
>> python transfer.py
```

### 3. Training and Evaluation

The training scripts are provided.

```bash
>> bash run_rams_base.sh
>> bash run_rams_large.sh
>> bash run_wikievents_base.sh
>> bash run_wikievents_large.sh
```

You can change the settings in the corresponding scripts. And you can evaluate the model with the following scripts.

```bash
>> bash evaluate_rams.sh
>> bash evaluate_wikievent.sh
```

You can download our best model checkpoint [here](https://drive.google.com/drive/folders/1hUovlrl5aRi8b84KhHS5DOg0tzT_1JyB?usp=sharing).
If you have any questions, please contact us via [email protected]. Thanks!

## 🌝 Citation

If you use this work or code, please kindly cite the following paper:

```bib
@inproceedings{liu2023enhancing,
  title={Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance},
  author={Liu, Wanlong and Cheng, Shaohuan and Zeng, Dingyi and Hong, Qu},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2023},
  pages={12908--12922},
  year={2023}
}
```
88
1
LyleMi/ja3proxy
https://github.com/LyleMi/ja3proxy
Customizing TLS (JA3) Fingerprints through HTTP Proxy
# JA3Proxy

Customizing TLS (JA3) Fingerprints through HTTP Proxy

## Usage

```bash
git clone https://github.com/lylemi/ja3proxy
cd ja3proxy
make
./ja3proxy -port 8080 -client 360Browser -version 7.5
curl -v -k --proxy http://localhost:8080 https://www.example.com
```

(A hedged Python equivalent of the curl request is sketched at the end of this README.)

### Predefined clients and versions

Please note that certain preconfigured fingerprints can significantly alter the application-layer interactions. If the corresponding configuration is not present on the client side, it may result in connection errors. For example, newer versions of Chrome require the server to use HTTP/2. If you are testing with tools like curl, you should include the `--http2` parameter to accommodate the corresponding behavior.

| Client | Version |
| ------ | ------- |
| Golang | 0 |
| Firefox | 55 |
| Firefox | 56 |
| Firefox | 63 |
| Firefox | 99 |
| Firefox | 105 |
| Chrome | 58 |
| Chrome | 62 |
| Chrome | 70 |
| Chrome | 96 |
| Chrome | 102 |
| Chrome | 106 |
| iOS | 12.1 |
| iOS | 13 |
| iOS | 14 |
| Android | 11 |
| Edge | 85 |
| Edge | 106 |
| Safari | 16.0 |
| 360Browser | 7.5 |
| QQBrowser | 11.1 |

> for the full list, see: https://github.com/refraction-networking/utls/blob/master/u_common.go

## Contribution

If you have any ideas or suggestions, please feel free to submit a pull request. We appreciate any contributions.

## Contact

If you have any questions or suggestions, please feel free to contact us.
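For reference, here is a minimal Python sketch of the same request flow as the curl example in the Usage section. It is an illustrative snippet, not part of ja3proxy: it assumes the proxy is already running on `localhost:8080` and that the third-party `requests` package is installed. Certificate verification is disabled for the same reason curl needs `-k` (the proxy re-terminates TLS with its own certificate).

```python
import requests

# Route HTTPS traffic through the locally running ja3proxy instance.
proxies = {"https": "http://localhost:8080"}

# verify=False corresponds to curl's -k flag.
resp = requests.get("https://www.example.com", proxies=proxies, verify=False, timeout=10)
print(resp.status_code)
```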
28
4
Kuhlman-Lab/PIPPack
https://github.com/Kuhlman-Lab/PIPPack
Implementation of Protein Invariant Point Packer (PIPPack)
# PIPPack

Implementation of Protein Invariant Point Packer (PIPPack)

PIPPack is a graph neural network (GNN) that utilizes geometry-aware invariant point message passing (IPMP) updates and recycling to rapidly generate accurate protein side chains.

![PIPPack Architecture](./images/pippack_architecture.png)

## Quickstart

To get started right in your browser, click this button to open the PIPPack notebook in Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Kuhlman-Lab/PIPPack/blob/main/notebooks/PIPPack.ipynb)

Note that this notebook is still a WIP, with a number of features still to be implemented.

## Getting started

To build the environment from scratch:

```
# Create and activate the pippack environment
conda create -n pippack
conda activate pippack

# Install PyTorch (see https://pytorch.org/get-started/locally/)
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

# Install Lightning (see https://lightning.ai/docs/pytorch/stable/starter/installation.html)
conda install lightning=2.0.1 -c conda-forge

# Pip installs:
# - PyTorch Geometric (see https://pytorch-geometric.readthedocs.io/en/latest/install/installation.html)
# - BioPython (see https://biopython.org/wiki/Download)
# - Hydra (see https://hydra.cc/docs/intro/#installation)
python -m pip install torch-geometric biopython hydra-core -U
```

Alternatively, you can use the environment file `env/pippack_env.yaml` to build the environment:

```
# Build pippack environment from yaml file
conda env create -f env/pippack_env.yaml
```
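Once the environment is built, a quick sanity check (a hedged convenience snippet, not part of the official instructions) can confirm that the core dependencies from the commands above import cleanly:

```python
# Minimal environment check; assumes the "pippack" conda env created above is active.
import torch
import torch_geometric
import lightning
import Bio      # provided by the biopython package
import hydra

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torch_geometric:", torch_geometric.__version__)
print("lightning:", lightning.__version__)
```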
11
2
DiasEllen26/template-readme
https://github.com/DiasEllen26/template-readme
Ready-made templates for GitHub READMEs
# GitHub Templates 🚀

[![License](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)

Welcome to **GitHub Templates**, where fun meets productivity! Here you will find a collection of amazing templates for GitHub, full of emojis, status cards, language icons, and contact information. Get ready to make your profiles and repositories even more impressive! 😎

## Menu 🚀

- [Profile Templates](https://github.com/DiasEllen26/template-readme/tree/main/perfil)
- [Repository Templates](https://github.com/DiasEllen26/template-readme/tree/main/repositorio)
- [Status](https://github.com/DiasEllen26/template-readme/blob/main/cards/status.md)
- [Languages](https://github.com/DiasEllen26/template-readme/blob/main/cards/linguagem.md)
- [Contact](https://github.com/DiasEllen26/template-readme/blob/main/icones/sociais.md)
- [Skills](https://github.com/DiasEllen26/template-readme/blob/main/icones/skills.md)

## Profile Templates 🙋‍♂️

You are unique and your GitHub profile should be too! Explore our directory of [profile templates](https://github.com/DiasEllen26/template-readme/tree/main/perfil) to find amazing examples that will help you stand out from the crowd. Show the world your skills, featured projects, and much more!

## Repository Templates 📚

Tired of boring, monotonous READMEs? Take a look at our [repository templates](https://github.com/DiasEllen26/template-readme/tree/main/repositorio) and let your projects shine! Our templates offer a flexible and fun structure for documenting your project, including sections for description, installation, usage, contribution, and license.

## Status ✨

Let the world know how your project is doing! Add status cards to report build state, test coverage, code analysis, and much more. Status cards are a fun, visual way to provide important information about your project.

## Languages 🚀

Show off your programming skills in style! Use our language icons to highlight the technologies involved in your projects. These icons are widely recognized by the developer community and add a special touch to your README.

---

## Contribution 🤝

This is an open-source project and we would love to receive contributions from the developer community! Feel free to fork this repository, work on improvements, and submit pull requests for review. If you find problems or have suggestions, open an issue and we will be happy to discuss them.

Remember to follow the project's contribution guidelines and respect the code of conduct. Join us to make this project even more amazing!

---

Enjoy the templates and have fun creating amazing READMEs! 😄✨
12
0
biu9/cc98-summary
https://github.com/biu9/cc98-summary
null
## cc98-summary

A cyber fortune-teller that summarizes CC98 forum posts, built on Azure OpenAI GPT-3.5.

It connects to the CC98 login center and fetches posts through the cc98-api.

It must be run on the campus network (webvpn does not work).

❗The summaries produced by this project are for reference only; they do not represent the author's views, and no responsibility is taken for their accuracy.

### Tech stack

- next.js 13
- react-oidc-context
- tailwind css
- material UI
- [commit convention](https://www.conventionalcommits.org/en/v1.0.0/)

### Demo screenshots

![20230724000747](https://typora-1309407228.cos.ap-shanghai.myqcloud.com/20230724000747.png)

![20230724000936](https://typora-1309407228.cos.ap-shanghai.myqcloud.com/20230724000936.png)

![20230724001057](https://typora-1309407228.cos.ap-shanghai.myqcloud.com/20230724001057.png)

### TODO

- [x] Use regex matching to convert markdown-formatted posts into plain strings
- [x] Integrate Azure OpenAI GPT-3.5
- [ ] Add board filtering (e.g. 心灵)
- [ ] Add keyword filtering (e.g. 回忆卷)
- [ ] Run under webvpn
- [x] Publish on Cloudflare Pages
- [x] Add fetching of all posts & optimize concurrent requests
- [x] Concurrency-control util function
- [ ] Unit tests
- [x] Move API requests to a serverless API
- [ ] Come up with a better prompt
11
0
TheMightyGit/rssole
https://github.com/TheMightyGit/rssole
An RSS Reader inspired by the late Google Reader
![badge](./badge.svg) ![workflow status](https://github.com/TheMightyGit/rssole/actions/workflows/build.yml/badge.svg) # rssole (aka rissole) An RSS Reader inspired by the late Google Reader. Runs on your local machine or local network serving your RSS feeds via a clean responsive web interface. ![Screenshot 2023-08-03 at 09 41 52](https://github.com/TheMightyGit/rssole/assets/888751/bf202040-2976-4570-8c2e-f6c21d61613e) A single executable with a single config file that can largely be configured within the web UI. Its greatest feature is the lack of excess features. It tries to do a simple job well and not get in the way. ## Background I really miss Google Reader. Recently I noticed I'd gone back to an old habbit of jumping between various sites to scan their headlines, maintaining that sitelist purely in my head. So I looked at a few of the well knows RSS readers out there and nothing really grabbed me, either I didn't like the UI, or the install process seemed overly complicated, or there were just too many features, or ads. I like things simple. So I made this non-SaaS ode to Google Reader so I can triage my incoming information in one place with one interface in a way I like. At heart this is a very self serving project solely based around my needs, and because of that it's something I use constantly. Hopefully it's of use to some other people, or you can build upon it (MIT license, do what you want to it - make it comfortable for you). The name is supposed to be a pun on 'rissole'. As well as 'rissole' containing the letters R S and S, a rissole is a "*a compressed mixture of meat and spices, coated in breadcrumbs and fried*" and that struck me as similar to the role of an RSS reader (compressing the mixed meat of the internet into a handy faceful). ## Pre-Built Binaries and Packages Check out the [Releases](https://github.com/TheMightyGit/rssole/releases/) section in github, there should be a good selection of pre-built binaries and packages for various platforms. ## Installing via Brew ```console $ brew install themightygit/rssole/rssole ``` ## Installing via Go You can install the binary with go install: ```console $ go install github.com/TheMightyGit/rssole/cmd/rssole@latest ``` ## Building To build for your local architecture/OS... ```console $ go build ./cmd/... ``` It should also cross build for all the usual golang targets fine as well (as no CGO is used)... ```console $ GOOS=linux GOARCH=amd64 go build ./cmd/... $ GOOS=linux GOARCH=arm64 go build ./cmd/... $ GOOS=darwin GOARCH=amd64 go build ./cmd/... $ GOOS=darwin GOARCH=arm64 go build ./cmd/... $ GOOS=windows GOARCH=amd64 go build ./cmd/... $ GOOS=windows GOARCH=arm64 go build ./cmd/... ``` ...but I only regularly test on `darwin/amd64` and `linux/amd64`. I've seen it run on `windows/amd64`, but it's not something I try regularly. ### Smallest Binary Go binaries can be a tad chunky, so if you're really space constrained then... ```console $ go build -ldflags "-s -w" ./cmd/... $ upx rssole ``` ## Running ### Command Line If you built locally then it should be in the current directory: ```console $ ./rssole ``` If you used `go install` or brew then it should be on your path already: ```console $ rssole ``` ### GUI Double click on the file, I guess. If your system has restrictions on which binaries it will run then try compiling locally instead of using the pre-built binaries. ## Now read your feeds with your browser Now open your browser on `<hostname/ip>:8090` e.g. 
http://localhost:8090 ## Network Options By default it binds to `0.0.0.0:8090`, so it will be available on all network adaptors on your host. You can change this in the `feeds.json` config file. I run rssole within a private network so this is good enough for me so that I can run it once but access it from all my devices. If you run this on an alien network then someone else can mess with the UI (there's no protection at all on it) - change the `listen` value in `feeds.json` to `127.0.0.1:8090` if you only want it to serve locally. If you want to protect rssole behind a username and password or encryption (because you want rssole wide open on the net so you can use it from anywhere) then you'll need a web proxy that can be configured to sit in front of it to provide that protection. I'm highly unlikely to add username/password or encryption directly to rssole as I don't need it. Maybe someone will create a docker image that autoconfigures all of that... maybe that someone is you? ## Config ### Arguments ```console $ ./rssole -h Usage of ./rssole: -c string config filename (default "feeds.json") -r string readcache location (default "readcache.json") ``` ### `feeds.json` There are two types of feed definition... - Regular RSS URLs. - Scrape from website (for those pesky sites that have no RSS feed). - Scraping uses css selectors and is not well documented yet. Use `category` to group similar feeds together. ```json { "config": { "listen": "0.0.0.0:8090", "update_seconds": 300 }, "feeds": [ {"url":"https://github.com/TheMightyGit/rssole/releases.atom", "category":"Github Releases"}, {"url":"https://news.ycombinator.com/rss", "category":"Nerd"}, {"url":"http://feeds.bbci.co.uk/news/rss.xml", "category":"News"}, { "url":"https://www.pcgamer.com/uk/news/", "category":"Games", "name":"PCGamer News", "scrape": { "urls": [ "https://www.pcgamer.com/uk/news/", "https://www.pcgamer.com/uk/news/page/2/", "https://www.pcgamer.com/uk/news/page/3/" ], "item": ".listingResult", "title": ".article-name", "link": ".article-link" } } ] } ``` ## Key Dependencies I haven't had to implement anything actually difficult, I just do a bit of plumbing. All the difficult stuff has been done for me by these projects... - github.com/mmcdole/gofeed - for reading all sorts of RSS formats. - github.com/andybalholm/cascadia - for css selectors during website scrapes. - github.com/k3a/html2text - for making a plain text summary of html. - HTMX - for the javascript framework (a b/e engineers delight). - Bootstrap 5 - for HTML niceness because I know it slightly better than the alternatives.
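Referring back to the Config section above, here is a small illustrative sketch (not part of rssole; Python is used purely as a convenient scripting language) that writes a starter `feeds.json` matching the documented structure, using entries taken from the example config:

```python
import json

# Starter config mirroring the structure documented in the Config section.
config = {
    "config": {"listen": "127.0.0.1:8090", "update_seconds": 300},
    "feeds": [
        {"url": "https://news.ycombinator.com/rss", "category": "Nerd"},
        {"url": "http://feeds.bbci.co.uk/news/rss.xml", "category": "News"},
    ],
}

with open("feeds.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)
```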
30
2
previoustube/previoustube
https://github.com/previoustube/previoustube
UNOFFICIAL reverse-engineered open source firmware for the Rotrics Nextube clock
# PreviousTube UNOFFICIAL reverse-engineered open source firmware for the Rotrics Nextube clock. ## Feature Status: Incomplete and Unusable! The *only* reason you would install this is to contribute *code* to the effort. Much later this may be helpful to others. ## Hardware Notes The core of the device is an ESP32-WROVER-E with 16MB of Flash and 8MB of PSRAM. This is capable of WiFi and Bluetooth. This is connected via SPI to six ST7735-based 16-bit color LCD displays, three touchpads, a speaker, and an external RTC chip (with battery), and six WS2812 (aka Neopixel)-compatible RGB LEDs. Flashing can be done using the built-in USB to Serial adapter. ## Reverse Engineering Status: | Part | Model | Works? | Pins | Notes | |:------------|:------------------------------|:-------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------| | CPU: | ESP32-WROVER-E | :heavy_check_mark: | | 16MB Flash, 8MB PSRAM | | Displays | Unknown ST7735-based | :heavy_check_mark: | Backlight PWM GPIO19, SPI SCK GPIO12, SPI MOSI GPIO13, DC GPIO14, Reset GPIO27, LCD1 CS GPIO33, LCD2 CS GPIO26, LCD3 CS GPIO21, LCD4 CS GPIO0, LCD5 CS GPIO5, LCD6 CS GPIO18 | Seems capable of up to 60fps per display, 30fps overall, PWM backlight | | LEDs | Unknown WS2812-compatible RGB | :heavy_check_mark: | Output GPIO32 | Updated from one pin using WS2812 "Neopixel" protocol | | Touch Pads | 3x metal pins on surface | :heavy_check_mark: | GPIO2, GPIO4, GPIO15 | Connected to standard ESP32 touch input peripheral | | RTC (Clock) | Unconfirmed PCF8563 | :x: | i2c SCL GPIO22, i2c SDA GPIO23 | Probably connected via i2c | | Speaker | Unconfirmed LTK8002D amp | :x: | Probably DAC on pin 25 | Untested | | WiFi | ESP32 Built-in | :heavy_check_mark: | n/a | | All on Hardware Rev "1.31 2022/01/19" according to the PCB. ## Building 1. Install ESP-IDF with the official instructions: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/get-started/linux-macos-setup.html 2. Activate ESP-IDF environment: `source <path-to-esp-idf>/esp-idf/export.sh` 3. `idf.py build` ## Workflow I use CLion with the ESP-IDF instructions https://www.jetbrains.com/help/clion/esp-idf.html and use "idf.py monitor" for logs. For faster iteration you can comment out 'FLASH_IN_PROJECT' in CMakeLists.txt to avoid flashing the art assets over and over if you have already flashed once and they haven't changed.
11
0
Pranav-chib/End-to-End-Autonomous-Driving
https://github.com/Pranav-chib/End-to-End-Autonomous-Driving
null
# <p align=center>`End-to-End Autonomous Driving`<br> End-to-End autonomous driving is a promising paradigm as it circumvents the drawbacks associated with modular systems, such as their overwhelming complexity and propensity for error propagation. Autonomous driving transcends conventional traffic patterns by proactively recognizing critical events in advance, ensuring passengers’ safety and providing them with comfortable transportation, particularly in highly stochastic and variable traffic settings. </p> <p align="center"> <img src="/Learning3_Methods.gif" width="500" height="500"/> <p> <hr /> # <p align=center>[Recent Advancements in End-to-End Autonomous Driving using Deep Learning: A Survey](http://arxiv.org/abs/2307.04370) Authors: [Pranav Singh Chib](https://github.com/Pranav-chib), [Pravendra Singh](https://scholar.google.com/citations?user=YwDTxJMAAAAJ&hl=en)</p> Modular architecture is a widely used approach in autonomous driving systems, which divides the driving pipeline into discrete sub-tasks. This architecture relies on individual sensors and algorithms to process data and generate control outputs. In contrast, the End-to-End autonomous driving approach streamlines the system, improving efficiency and robustness by directly mapping sensory input to control outputs. The benefits of End-to-End autonomous driving have garnered significant attention in the research community. This repo contains a curated list of resources on End-to-End Autonomous Driving, arranged chronologically. We regularly update it with the latest papers and their corresponding open-source implementations. ## Table of Contents - [LEARNING APPROACHES](#LEARNING-APPROACHES) - [EXPLAINABILITY](#EXPLAINABILITY) - [EVALUATION](#EVALUATION) - [SAFETY](#SAFETY) - [CITATION](#Citation) <hr /> # LEARNING APPROACHES The following are the different learning approaches of End-to-End Driving - [Imitation learning](#Imitation-learning)<br> - [Behavioural cloning](#Behavioural-cloning)<br> - [Reinforcement learning](#Reinforcement-learning)<br> - [Multi-task learning](#Multi-task-learning)<br> - [Knowledge Distillation](#Knowledge-Distillation)<br> - [Other Learning](#Other-Learning) ## Imitation learning [**Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving.**](https://arxiv.org/abs/2305.0624) [CVPR2023] <br> Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, Hongyang Li <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/ThinkTwice) [**Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling**](https://openreview.net/forum?id=X5SUR7g2vVw) [ICLR2023] <br> Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/PPGeo) [**Hidden Biases of End-to-End Driving Models**](https://arxiv.org/abs/2306.07957) [ICCV2023] <br> Bernhard Jaeger, Kashyap Chitta, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/carla_garage) [**Scaling Self-Supervised End-to-End Driving with Multi-View Attention Learning**](https://arxiv.org/abs/2302.03198) [arxiv2023] <br> Yi Xiao, Felipe Codevilla, Diego Porres, Antonio M. 
Lopez<br> [**Learning from All Vehicles**](http://arxiv.org/pdf/1709.04622v4) [CVPR2022] <br> Dian Chen, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/LAV.git) [**PlanT: Explainable Planning Transformers via Object-Level Representations**](https://arxiv.org/abs/2210.14222) [CoRL2022] <br> Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/plant) [**Multi-Modal Fusion Transformer for End-to-End Autonomous Driving**](https://arxiv.org/abs/2104.09224) [CVPR2021] <br> Aditya Prakash, Kashyap Chitta, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**Learning by Watching**](https://arxiv.org/abs/2106.05966) [CVPR2021] <br> Jimuyang Zhang, Eshed Ohn-Bar <br> [**End-to-End Urban Driving by Imitating a Reinforcement Learning Coach**](https://arxiv.org/abs/2108.08265) [ICCV2021] <br> Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, Luc Van Gool <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/zhejz/carla-roach.git) [**Learning by Cheating**](http://arxiv.org/pdf/2107.00123v1) [CoRL2020] <br> Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/LearningByCheating.git) [**SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning**](https://arxiv.org/abs/1912.02973) [[CoRL2020]] <br> Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/twsq/sam-driving.git) [**Urban Driving with Conditional Imitation Learning**](http://arxiv.org/pdf/1912.00177v2) [ICRA2020] <br> Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall <br> [**Multimodal End-to-End Autonomous Driving**](https://ieeexplore.ieee.org/abstract/document/9165167) [TITS2020] <br> Yi Xiao, Felipe Codevilla, Akhil Gurram, Onay Urfalioglu, Antonio M. 
López <br> [**Learning to Drive from Simulation without Real World Labels**](https://arxiv.org/abs/1812.03823) [ICRA2019] <br> Alex Bewley, Jessica Rigley, Yuxuan Liu, Jeffrey Hawke, Richard Shen, Vinh-Dieu Lam, Alex Kendall <br> ## Behavioural cloning [**TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving**](https://arxiv.org/abs/2205.15997) [TPAMI2022] <br> Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline**](https://arxiv.org/abs/2206.08129) [NeurIPS2022] <br>Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, Yu Qiao<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/TCP) [**KING: Generating Safety-Critical Driving Scenarios for Robust Imitation via Kinematics Gradients**](https://arxiv.org/abs/2204.13683) [ECCV2022] <br> Niklas Hanselmann, Katrin Renz, Kashyap Chitta, Apratim Bhattacharyya, Andreas Geiger <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**Learning to Drive by Watching YouTube Videos: Action-Conditioned Contrastive Policy Pretraining**](https://arxiv.org/abs/2204.02393) [ECCV2022] <br> Qihang Zhang, Zhenghao Peng, Bolei Zhou <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/metadriverse/ACO) [**NEAT: Neural Attention Fields for End-to-End Autonomous Driving**](https://arxiv.org/abs/2109.04456) [ICCV2021] <br> Kashyap Chitta, Aditya Prakash, Andreas Geiger <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/neat.git) [**Learning Situational Driving**](http://arxiv.org/pdf/1811.07868v2) [CVPR2020] <br> Eshed Ohn-Bar, Aditya Prakash, Aseem Behl, Kashyap Chitta, Andreas Geiger <br> [**Exploring the Limitations of Behavior Cloning for Autonomous Driving**](https://arxiv.org/abs/1904.08980) [ICCV2019] <br> Felipe Codevilla, Eder Santana, Antonio M. López, Adrien Gaidon <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/felipecode/coiltraine.git) ## Reinforcement learning [**Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization**](https://arxiv.org/abs/2202.10341#:~:text=HACO%20can%20train%20agents%20to,baselines%20with%20a%20large%20margin.) 
[ICLR2022] <br> Quanyi Li, Zhenghao Peng, Bolei Zhou <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/decisionforce/HACO) [**End-to-End Urban Driving by Imitating a Reinforcement Learning Coach**](https://arxiv.org/abs/2108.08265) [ICCV2021] <br> Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, Luc Van Gool <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/zhejz/carla-roach.git) [**Learning To Drive From a World on Rails**](http://arxiv.org/pdf/2105.00636v3) [ICCV2021]<br> Dian Chen, Vladlen Koltun, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/WorldOnRails.git) [**End-to-End Model-Free Reinforcement Learning for Urban Driving Using Implicit Affordances**](https://openaccess.thecvf.com/content_CVPR_2020/html/Toromanoff_End-to-End_Model-Free_Reinforcement_Learning_for_Urban_Driving_Using_Implicit_Affordances_CVPR_2020_paper.html) [CVPR2020] <br> Marin Toromanoff, Emilie Wirbel, Fabien Moutarde<br> [**Learning to drive in a day**](https://arxiv.org/abs/1807.00412) [ICRA2019] <br> Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, Amar Shah<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/r7vme/learning-to-drive-in-a-day) ## Multi-task learning [**Planning-oriented Autonomous Driving**](https://arxiv.org/abs/2212.10156) :trophy:Best Paper [CVPR2023] <br> Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/UniAD) [**ReasonNet: End-to-End Driving with Temporal and Global Reasoning**](https://arxiv.org/abs/2305.10507) [CVPR2023] <br> Hao Shao, Letian Wang, Ruobing Chen, Steven L. 
Waslander, Hongsheng Li, Yu Liu<br> [**Coaching a Teachable Student**](https://openaccess.thecvf.com/content/CVPR2023/html/Zhang_Coaching_a_Teachable_Student_CVPR_2023_paper.html) [CVPR2023] <br> Jimuyang Zhang, Zanming Huang, Eshed Ohn-Bar <br> [**Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving.**](https://arxiv.org/abs/2305.0624) [CVPR2023] <br> Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, Hongyang Li <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/ThinkTwice) [**Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer**](https://arxiv.org/abs/2207.14024) [CoRL2022] <br> Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/opendilab/InterFuser) [**SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning**](https://arxiv.org/abs/1912.02973) [[CoRL2020]] <br> Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/twsq/sam-driving.git) [**Urban Driving with Conditional Imitation Learning**](http://arxiv.org/pdf/1912.00177v2) [ICRA2020] <br> Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall <br> ## Knowledge Distillation [**Learning from All Vehicles**](http://arxiv.org/pdf/1709.04622v4) [CVPR2022] <br> Dian Chen, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/LAV.git) [**End-to-End Urban Driving by Imitating a Reinforcement Learning Coach**](https://arxiv.org/abs/2108.08265) [ICCV2021] <br> Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, Luc Van Gool <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/zhejz/carla-roach.git) [**Learning To Drive From a World on Rails**](http://arxiv.org/pdf/2105.00636v3) [ICCV2021]<br> Dian Chen, Vladlen Koltun, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/WorldOnRails.git) [**Learning by Cheating**](http://arxiv.org/pdf/2107.00123v1) [CoRL2020] <br> Dian Chen, Brady Zhou, Vladlen Koltun, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/LearningByCheating.git) [**SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning**](https://arxiv.org/abs/1912.02973) [[CoRL2020]] <br> Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/twsq/sam-driving.git) ## Other Learning [**ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning**](https://arxiv.org/abs/2207.07601) [ECCV2022] <br> Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, Dacheng Tao<br> 
[![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/ST-P3) [🔼 Back to top](#Table-of-Contents) <hr /> # EXPLAINABILITY - [Post-hoc saliency methods]() - [Counterfactual explanation]() ## Post-hoc saliency methods ## Attention [**Planning-oriented Autonomous Driving**](https://arxiv.org/abs/2212.10156) :trophy:Best Paper [CVPR2023] <br> Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/UniAD) [**Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling**](https://openreview.net/forum?id=X5SUR7g2vVw) [ICLR2023] <br> Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao <br> [**Scaling Self-Supervised End-to-End Driving with Multi-View Attention Learning**](https://arxiv.org/abs/2302.03198) [arxiv2023] <br> Yi Xiao, Felipe Codevilla, Diego Porres, Antonio M. Lopez<br> [**TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving**](https://arxiv.org/abs/2205.15997) [TPAMI2022] <br> Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**PlanT: Explainable Planning Transformers via Object-Level Representations**](https://arxiv.org/abs/2210.14222) [CoRL2022] <br> Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/plant) [**Multi-Modal Fusion Transformer for End-to-End Autonomous Driving**](https://arxiv.org/abs/2104.09224) [CVPR2021] <br> Aditya Prakash, Kashyap Chitta, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**NEAT: Neural Attention Fields for End-to-End Autonomous Driving**](https://arxiv.org/abs/2109.04456) [ICCV2021] <br> Kashyap Chitta, Aditya Prakash, Andreas Geiger <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/neat.git) ## Semantic representation and Auxiliary output [**Learning from All Vehicles**](http://arxiv.org/pdf/1709.04622v4) [CVPR2022] <br> Dian Chen, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/LAV.git) [**TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving**](https://arxiv.org/abs/2205.15997) [TPAMI2022] <br> Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning**](https://arxiv.org/abs/2207.07601) [ECCV2022] <br> Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, Dacheng 
Tao<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/ST-P3) ## Counterfactual explanation ## Attention [**Planning-oriented Autonomous Driving**](https://arxiv.org/abs/2212.10156) :trophy:Best Paper [CVPR2023] <br> Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/UniAD) [**Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer**](https://arxiv.org/abs/2207.14024) [CoRL2022] <br> Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/opendilab/InterFuser) [**PlanT: Explainable Planning Transformers via Object-Level Representations**](https://arxiv.org/abs/2210.14222) [CoRL2022] <br> Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/plant) [**NEAT: Neural Attention Fields for End-to-End Autonomous Driving**](https://arxiv.org/abs/2109.04456) [ICCV2021] <br> Kashyap Chitta, Aditya Prakash, Andreas Geiger <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/neat.git) ## Semantic representation and Auxiliary output [**Hidden Biases of End-to-End Driving Models**](https://arxiv.org/abs/2306.07957) [arXiv2023] <br> Bernhard Jaeger, Kashyap Chitta, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/carla_garage) [**TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving**](https://arxiv.org/abs/2205.15997) [TPAMI2022] <br> Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer**](https://arxiv.org/abs/2207.14024) [CoRL2022] <br> Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/opendilab/InterFuser) [**Learning Situational Driving**](http://arxiv.org/pdf/1811.07868v2) [CVPR2020] <br> Eshed Ohn-Bar, Aditya Prakash, Aseem Behl, Kashyap Chitta, Andreas Geiger <br> [🔼 Back to top](#Table-of-Contents) <hr /> # EVALUATION ## Open Loop - [**nuScenes**](https://www.nuscenes.org/nuscenes) - [**KITTI**](https://www.cvlibs.net/datasets/kitti/) - [**Argoverse 1 & 2**](https://www.argoverse.org/av2.html) ## Close Loop - [**CARLA Autonomous Driving Leaderboard**](https://leaderboard.carla.org/) - [**nuPlan**](https://www.nuscenes.org/nuplan?externalData=all&mapData=all&modalities=Any) <hr /> # SAFETY - [Training on Critical Scenarios](#Training-on-Critical-Scenarios) - [Safety Constraints 
Integration](#Safety-Constraints-Integration) - [Additional Safety Modules](#Additional-Safety-Modules) ## Training on Critical Scenarios unprotected turnings at intersections, pedestrians emerging from occluded regions, aggressive lane-changing, and other safety heuristics. [**KING: Generating Safety-Critical Driving Scenarios for Robust Imitation via Kinematics Gradients**](https://arxiv.org/abs/2204.13683) [ECCV2022] <br> Niklas Hanselmann, Katrin Renz, Kashyap Chitta, Apratim Bhattacharyya, Andreas Geiger <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**Learning from All Vehicles**](http://arxiv.org/pdf/1709.04622v4) [CVPR2022] <br> Dian Chen, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/LAV.git) [**Multi-Modal Fusion Transformer for End-to-End Autonomous Driving**](https://arxiv.org/abs/2104.09224) [CVPR2021] <br> Aditya Prakash, Kashyap Chitta, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) ## Safety Constraints Integration safety cost function, avoiding unsafe maneuvers and collision avoidance strategies. [**Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving.**](https://arxiv.org/abs/2305.0624) [CVPR2023] <br> Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, Hongyang Li <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/ThinkTwice) [**Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling**](https://openreview.net/forum?id=X5SUR7g2vVw) [ICLR2023] <br> Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao <br> [**TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving**](https://arxiv.org/abs/2205.15997) [TPAMI2022] <br> Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/transfuser.git) [**Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization**](https://arxiv.org/abs/2202.10341#:~:text=HACO%20can%20train%20agents%20to,baselines%20with%20a%20large%20margin.) 
[ICLR2022] <br> Quanyi Li, Zhenghao Peng, Bolei Zhou <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/decisionforce/HACO) [**Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer**](https://arxiv.org/abs/2207.14024) [CoRL2022] <br> Hao Shao, Letian Wang, RuoBing Chen, Hongsheng Li, Yu Liu<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/opendilab/InterFuser) [**ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning**](https://arxiv.org/abs/2207.07601) [ECCV2022] <br> Shengchao Hu, Li Chen, Penghao Wu, Hongyang Li, Junchi Yan, Dacheng Tao<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/ST-P3) [**Learning To Drive From a World on Rails**](http://arxiv.org/pdf/2105.00636v3) [ICCV2021]<br> Dian Chen, Vladlen Koltun, Philipp Krähenbühl <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/dotchen/WorldOnRails.git) [**SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning**](https://arxiv.org/abs/1912.02973) [[CoRL2020]] <br> Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/twsq/sam-driving.git) ## Additional Safety Modules Preventing deviations from safe operation. [**Planning-oriented Autonomous Driving**](https://arxiv.org/abs/2212.10156) :trophy:Best Paper [CVPR2023] <br> Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, Hongyang Li <br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/UniAD) [**PlanT: Explainable Planning Transformers via Object-Level Representations**](https://arxiv.org/abs/2210.14222) [CoRL2022] <br> Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/autonomousvision/plant) [**Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline**](https://arxiv.org/abs/2206.08129) [NeurIPS2022] <br>Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, Yu Qiao<br> [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/OpenDriveLab/TCP) <hr /> # Citation If you find the listing and survey useful for your work, please cite the paper: ``` @article{chib2023recent, title={Recent Advancements in End-to-End Autonomous Driving using Deep Learning: A Survey}, author={Pranav Singh Chib and Pravendra Singh}, year={2023}, eprint={2307.04370}, archivePrefix={arXiv}, primaryClass={cs.RO} } ``` [🔼 Back to top](#Table-of-Contents)
11
1
Mizogg/GUI_keyhunt
https://github.com/Mizogg/GUI_keyhunt
🐍GUI Keyhunt🐍PyQt5 GUI display for running albertobsd/keyhunt
# KeyHunt GUI

This is a graphical user interface (GUI) for the KeyHunt program developed by albertobsd/keyhunt (https://github.com/albertobsd/keyhunt). KeyHunt is a tool used to hunt for private keys of cryptocurrencies that use the secp256k1 elliptic curve.

https://github.com/Mizogg/GUI_keyhunt/assets/88630056/52f3713f-82c0-4f2e-8f4e-937b3672e6c4

## New Version

A video of the new version is available at https://mizogg.co.uk/keyhunt/

## Features

- Scans for private keys using different modes (address, xpoint, rmd160, bsgs, vanity).
- Supports CPU parallelization with an adjustable thread count.
- Allows customization of search parameters such as key space, move mode, look, stride, and K value.
- Supports different cryptocurrencies (BTC and ETH).
- Provides an option to enable endomorphism search (only for address, rmd160, and vanity modes).
- Enables vanity address search with a specified prefix.
- Matrix screen option for a hacker-style experience (may affect performance).

# Prerequisites

- Python 3.x
- PyQt5 library
- keyhunt command-line tool (make sure it is installed and accessible via the command line)

# Installation and Usage

Clone or download the repository to your local machine. Install the required dependencies using the following command:

```
pip install PyQt5
```

Run the program using the following command:

```
python GUI_keyhunt.py
```

Configure the desired search parameters and click the "Start scanning" button to initiate the keyhunt process. The program will display the output in the console window. Use the provided options to customize the search and control the scanning process.

## Contributing

Contributions to the KeyHunt GUI are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.

## License

This program is licensed under the MIT License. See the LICENSE file for more information.

## Acknowledgements

This GUI is based on the KeyHunt program developed by Alberto (albertobsd). For more information about KeyHunt, contact [email protected] or visit https://albertobsd.dev/. For more information about the GUI KeyHunt tool, visit github.com/Mizogg or mizogg.co.uk
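For readers curious how a PyQt5 front-end typically drives a command-line tool like keyhunt, here is a minimal, hypothetical sketch of the general pattern (a QProcess streaming a child process's output into a text widget). This is not the actual GUI_keyhunt code, and the keyhunt flags shown are placeholders:

```python
# Hypothetical sketch of wrapping a CLI scanner with PyQt5's QProcess.
# Not the real GUI_keyhunt implementation; the keyhunt flags below are placeholders.
import sys

from PyQt5.QtCore import QProcess
from PyQt5.QtWidgets import (QApplication, QMainWindow, QPlainTextEdit,
                             QPushButton, QVBoxLayout, QWidget)


class ScannerWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.output = QPlainTextEdit()
        self.output.setReadOnly(True)                 # console-style output view
        self.start_button = QPushButton("Start scanning")
        self.start_button.clicked.connect(self.run_scan)

        layout = QVBoxLayout()
        layout.addWidget(self.output)
        layout.addWidget(self.start_button)
        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

        self.proc = QProcess(self)                    # child process running the CLI tool
        self.proc.readyReadStandardOutput.connect(self.append_output)

    def run_scan(self):
        # Placeholder arguments; consult `keyhunt -h` for the real options.
        self.proc.start("keyhunt", ["-m", "address", "-t", "4", "-f", "targets.txt"])

    def append_output(self):
        data = bytes(self.proc.readAllStandardOutput()).decode(errors="replace")
        self.output.appendPlainText(data)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = ScannerWindow()
    window.show()
    sys.exit(app.exec_())
```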
17
4
verytinydever/text-to-speech
https://github.com/verytinydever/text-to-speech
null
# text-to-speech in python
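The README does not yet document how this repository performs text-to-speech. Purely as an illustration of the topic, here is a minimal snippet using the pyttsx3 package (an assumption; it may not be what this project actually uses):

```python
# Illustrative only: offline text-to-speech with pyttsx3 (pip install pyttsx3).
# The repository's actual approach is not documented, so treat this as a generic example.
import pyttsx3

engine = pyttsx3.init()            # selects a platform backend (SAPI5, NSSpeechSynthesizer, espeak)
engine.setProperty("rate", 160)    # speaking rate in words per minute
engine.say("Hello from text to speech in Python.")
engine.runAndWait()                # blocks until the utterance has been spoken
```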
14
0
verytinydever/dockerized-postgres
https://github.com/verytinydever/dockerized-postgres
null
## Build postgres in docker > $ docker run --name docker-postgres -p 5432:5432 -e POSTGRES_PASSWORD=pass -d onjin/alpine-postgres ## Run to connect to postgres > $ docker run -it --link docker-postgres:postgres --rm onjin/alpine-postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
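Once the container from the first command is up, a quick way to sanity-check it from Python (assuming the psycopg2-binary package, which is not part of this repo, and the credentials shown above):

```python
# Connectivity check against the docker-postgres container started above.
# Assumes `pip install psycopg2-binary`; user, password and port match the `docker run` flags.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="pass",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])       # prints the PostgreSQL server version string
conn.close()
```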
14
0
ir33k/gmi100
https://github.com/ir33k/gmi100
Gemini CLI protocol client written in 100 lines of ANSI C
gmi100
======

Gemini CLI protocol client written in 100 lines of ANSI C.

![demo.gif](demo.gif)

Other similar Gemini client projects written in a few lines of code successfully show how simple the Gemini protocol is. This code is far from straightforward. But I had a different goal in mind. I tried to pack as much as possible into 100 lines of ANSI C. Initially I struggled to fit a simple TLS connection in such a small space, but eventually I ended up with a CLI client capable of efficient navigation between capsules of Gemini space 🚀

[Discussion on Hacker News][3]

Build, run and usage
--------------------

Run the `build` script or use any C compiler and link with [OpenSSL][0].

    $ ./build       # Compile on Linux
    $ ./gmi100      # Run with default "less -XI" pager
    $ ./gmi100 more # Run using "more" pager
    $ ./gmi100 cat  # Run using "cat" as pager
    gmi100> gemini.circumlunar.space

At the `gmi100>` prompt you can take a few actions:

1. Type a Gemini URL to visit a specific site.
2. Type the number of a link on the current capsule, for example: `12`.
3. Type `q` to quit.
4. Type `r` to refresh the current capsule.
5. Type `u` to go "up" in the URL directory path.
6. Type `b` to go back in browsing history.
7. Type `c` to print the current capsule URI.
8. Type `?` to search; geminispace.info/search is used by default.
9. Type a shell command prefixed with `!` to run it on the current capsule.

Each time you navigate to a `text` document, the pager program will be run with that file. By default `less -XI` is used, but you can provide any other pager as the first program argument. If your pager is interactive, like less, then you have to exit from that pager in order to go back to the gmi100 prompt and navigate to another capsule.

When a non-`text` file is visited, like an image or music, nothing will be displayed, but a temporary file will be created. Then you can use any shell command to do something with it. For example you can visit a capsule with a video and open it with `mpv`:

    gmi100> gemini://tilde.team/~konomo/noocat.webm
    gmi100> !mpv

Or a similar example with an image or music. For example you can use the `xdg-open` or `open` command to open the file with the default program for the given MIME type.

    gmi100> gemini://158.nu/images/full/158/2022-03-13-0013_v1.jpg
    gmi100> !xdg-open

You can also use any program on regular text capsules. For example, you decided that your default pager is `cat`, but for some capsules you want to use `less`. Or you want to edit a given page in a text editor. In summary, you can open the currently loaded capsule as a file in any program as long as you don't navigate to another URI.

    gmi100> gemini.circumlunar.space
    gmi100> !less
    gmi100> !emacs
    gmi100> !firefox
    gmi100> !xdg-open

How browsing history works
--------------------------

Browsing history in gmi100 works differently than the regular "stack" way that is commonly used in browsers and other regular modern software. It is inspired by how Emacs handles undo history. That means that with the single "back" button you can go back and forward in browsing history. Also, with that you will never lose any page you visited from the history file, and I was able to write this implementation in only a few lines.

After you run the program it will open or create the history file .gmi100. Then every page you visit that is not a redirection to another page and doesn't ask you for input will be appended at the end of the history file. The file is never cleaned up by the program itself, to make history persistent between sessions, but that means cleaning up browsing history is your responsibility. This also gives you control over the history file content. You can, for example, append some links that you want to visit in the next session, to have easier access to them just by running the program and pressing "b", which will navigate to the last link from the history file.

During a browsing session, typing "b" in the program prompt for the first time will result in navigation to the last link in the history file. Then if you type "b" again it will open the second to last link from the history. But it will also append that link at the end. You can input "b" multiple times and it will always go back by one link in history and append that link at the end of the history file at the same time. Only if you decide to navigate to another page, by typing a URL or choosing a link number, will you break that cycle. Then the history "pointer" will go back to the very bottom of the history file. Example (a small Python sketch of this behaviour is included at the end of this README):

    gmi100 session        pos  .gmi100 history file content
    ==================    ===  ===============================
    gmi100>                    <EMPTY HISTORY FILE>

    gmi100> tilde.pink    >>>  tilde.pink

    gmi100> 2                  tilde.pink
                          >>>  tilde.pink/documentation.gmi

    gmi100> 2                  tilde.pink
                               tilde.pink/documentation.gmi
                          >>>  tilde.pink/docs/gemini.gmi

    gmi100> b                  tilde.pink
                          >>>  tilde.pink/documentation.gmi
                               tilde.pink/docs/gemini.gmi
                               tilde.pink/documentation.gmi

    gmi100> b             >>>  tilde.pink
                               tilde.pink/documentation.gmi
                               tilde.pink/docs/gemini.gmi
                               tilde.pink/documentation.gmi
                               tilde.pink

    gmi100> 3                  tilde.pink
                               tilde.pink/documentation.gmi
                               tilde.pink/docs/gemini.gmi
                               tilde.pink/documentation.gmi
                               tilde.pink
                          >>>  gemini.circumlunar.space/

Devlog
------

### 2023.07.11 Initial motivation and thoughts

Authors of the Gemini protocol claim that it should be possible to write a Gemini client in a modern language [in less than 100 lines of code][1]. There are a few projects that do that in programming languages with garbage collectors, built-in dynamic data structures and useful standard libraries for string manipulation, parsing URLs etc. Intuition suggests that such an achievement is not possible in plain C. Even tho I decided to start this silly project and see how far I can go with just ANSI C, std libraries and one dependency - OpenSSL.

It took me around 3 weeks of lazy, slow programming to get to this point, but the results exceeded my expectations. It turned out that it's not only achievable, but it's also possible to include many convenient features like persistent browsing history, links formatting, wrapping of lines, pagination and some error handling.

My goal was to write in the C89 standard, avoiding any dirty tricks that could buy me more lines, like defining imports and constant values in the compiler command or writing multiple things in a single line separated with semicolons. I think the final result can be called normal C code, but OFC it is very dense, hard to read and uses practices that are normally not recommended. Even tho, I call it a success.

I was not able to make better line wrapping work. Ideally lines should wrap at the last whitespace that fits within a defined boundary and respect wide characters. The best I could do in the given constraints was to do a hard line wrap after a defined number of bytes. Yes - bytes, so it is possible to split a wide character in half at the end of the line. It can ruin ASCII art that uses non-ASCII characters and sites written mainly without ASCII characters. This is the only thing that bothers me. Line wrapping itself is very necessary to make pagination, and pagination is necessary to make this program usable on terminals that do not support scrolling. Maybe it would be better to somehow integrate gmi100 with a pager like "less". Then I don't have to implement pagination and line wrapping at all. That would be great.

I'm very happy that I was able to make browsing history work using an external file and not an array like in most small implementations I have read. With that, this program is actually usable for me. I'm very happy about how the history works, which is out of the ordinary, but it allows me to have back and forward navigation with a single logic. With that I could fit 2 functionalities in a single implementation.

I'm also very happy about links formatting. Without this small adjustment of the output text I would not like to use this program for actual browsing of Gemini space.

I thought about adding a "default site" being the Gemini capsule that opens by default when you run the program. But that can be easily done with a small shell script or alias, so I'm not going to do it.

```sh
echo "some.default.page.com" | gmi100
```

It's amazing how much can fit in 100 lines of C.

### 2023.07.12 - v2.0 the pager

Removing manual line wrapping and pagination in favor of a pager program that can be changed at any time was a great idea. I love to navigate Gemini holes with `cat` as pager when I'm in Emacs and with `less -X` when in the terminal.

### 2023.07.12 Wed 19:48 - v2.1 SSL issues and other changes

After using gmi100 for some time I noticed that often you stumble upon a capsule by navigating directly to some distant path pointing at some gemlog entry. But then you want to visit the home page of this author. With the current setup you would have had to type the URL by hand if the visited page did not provide a handy "Go home" link. Then I recalled that many GUI browsers include "Up" and "Go home" buttons because you are able to easily modify the current URI to achieve such navigation. This was trivial to add in gmi100. It required only a single line that appends `../` to the current URI. I added only the "Up" functionality, as navigation to "Home" can be achieved by using "Up" a few times in a row, and I don't want to lose precious lines of code.

More than that, I changed the default pager to `less` as it provides the best experience in the terminal and this is what people will use most of the time, including me. For special cases in Emacs I can change the pager to `cat` with ease anyway.

Back to the main topic. I had troubles opening many pages from specific domains. All of those probably run on the same server. Some kind of SSL error, not very specific. I was able to open those pages with this simple line of code:

```sh
$ openssl s_client -crlf -ign_eof -quiet -connect senders.io:1965 <<< "gemini://senders.io:1965/gemlog/"
```

Which means that the servers work fine and there is something wrong in my code. I'm probably missing some SSL setting.

### 2023.07.13 Thu 04:56 - `SSL_ERROR_SSL` error fixed

I finally found it. I had to use `SSL_set_tlsext_host_name` before establishing the connection. I would not have been able to figure it out by myself. All thanks to the source code of the project [gplaces][2]. And yes, it's 5 am.

### 2023.07.18 Tue 16:42 - v3.0 I am complete! \m/

In v3 I completely redesigned core memory handling by switching to files only. With that, the program is now able to handle non-text capsules that contain images, music, videos and other files. In simpler words, the server response body is always stored as a temporary file. This file is then passed to the pager program if the MIME type is a text type. Otherwise nothing happens, but you can invoke any command on this file, so you can use `mpv` for media files or a PDF viewer for documents etc. This also opens a lot of other possibilities. For example you can easily open the currently loaded capsule in a different pager than the default, or in a text editor, or you can just use your system default program with `xdg-open`. And as long as you don't navigate to another capsule you can keep using different commands on that file.

I also added a few small useful commands, like easy searching with `?`.

I was trying really hard to also implement handling of local files with the `file://` prefix. But I would have to make the links parser somehow generic. Right now it depends on SSL functions. I don't see how to fit that in the current code structure.

I'm not planning any further development. I already achieved much more than I initially wanted. I'm calling this project complete.

> I am complete!
> Ha-aaaack
> Yes, you are hacked
> Overflow stack
> Now I'm complete
> And my log you debug
> This code will be mine
> #include in first line
> <you_brought_me_the_lib.h>
> And now your shell compile

### 2023.07.24 Mon 17:53 - Feedback from interwebs

During the [discussion on Hacker News][3] one user pointed out critical bugs and potential errors in the code. Corrections are committed. Everyone should be safe now from buffer underflow and memory leak, so please disperse as there is nothing to see here, and please don't tell the Rust community about it.

[0]: https://www.openssl.org/
[1]: https://gemini.circumlunar.space/docs/faq.gmi
[2]: https://github.com/dimkr/gplaces/blob/gemini/gplaces.c#L841
[3]: https://news.ycombinator.com/item?id=36786239
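As promised in the browsing-history section, here is a rough Python model of that Emacs-style behaviour: regular navigation appends the page and resets the pointer to the bottom of the file, while each `b` steps the pointer up one line and appends the opened link at the end. This is an illustration only; the real gmi100 implements this in ANSI C directly against the .gmi100 file.

```python
# Illustrative model of gmi100's history behaviour (the real tool is ANSI C).
class History:
    def __init__(self, lines=None):
        self.lines = list(lines or [])    # .gmi100 contents, oldest first
        self.pointer = len(self.lines)    # fresh session: the first 'b' opens the last line

    def visit(self, url):
        # Typing a URL or a link number: append it and reset the pointer to the bottom.
        self.lines.append(url)
        self.pointer = len(self.lines) - 1
        return url

    def back(self):
        # The 'b' command: step one line up, open it, and append that line at the
        # end of the file, so history is never lost.
        if self.pointer == 0:
            return None                   # already at the oldest entry
        self.pointer -= 1
        url = self.lines[self.pointer]
        self.lines.append(url)            # appended copy; the pointer stays where it is
        return url


# Reproducing the session from the example table above:
h = History()
h.visit("tilde.pink")
h.visit("tilde.pink/documentation.gmi")
h.visit("tilde.pink/docs/gemini.gmi")
assert h.back() == "tilde.pink/documentation.gmi"
assert h.back() == "tilde.pink"
h.visit("gemini.circumlunar.space/")
print(h.lines)   # six lines, ending with gemini.circumlunar.space/
```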
54
0
marc2332/ghboard
https://github.com/marc2332/ghboard
🦑 GitHub Dashboard
# ghboard

🦑 GitHub dashboard written in Rust🦀, made using [Dioxus SSR 🧬](https://dioxuslabs.com/), hosted on [Shuttle 🚀](https://www.shuttle.rs/) and powered by the [GitHub GraphQL API 🦑](https://docs.github.com/en/graphql).

[⚠️ Work in progress ⚠️]

### Usage

Just append your GitHub username to the end of the URL:

```
https://ghboard.shuttleapp.rs/user/<YOUR_GITHUB_USERNAME>
```

Example: [https://ghboard.shuttleapp.rs/user/marc2332](https://ghboard.shuttleapp.rs/user/marc2332)
34
0
verytinydever/automation-chrome
https://github.com/verytinydever/automation-chrome
null
# automation-chrome # test
12
0
Brunowilliang/my-finance
https://github.com/Brunowilliang/my-finance
null
## My Finance with React Native and Expo ### Technologies: <img src="https://img.shields.io/badge/React%20Native-0.71.8-0077B5?style=flat-square&logo=react&logoColor=white" alt="React Native" /> <img src="https://img.shields.io/badge/Typescript-4.9.4-0077B5?style=flat-square&logo=typescript&logoColor=white" alt="Typescript" /> <img src="https://img.shields.io/badge/Expo-48.0.15-0077B5?style=flat-square&logo=expo&logoColor=white" alt="Expo" /> <img src="https://img.shields.io/badge/Expo%20Router-1.5.3-0077B5?style=flat-square&logo=expo&logoColor=white" alt="Expo Router" /> <img src="https://img.shields.io/badge/Zustand-4.3.8-0077B5?style=flat-square&logo=zustand&logoColor=white" alt="Zustand" /> <img src="https://img.shields.io/badge/Pocketbase-0.16.0-0077B5?style=flat-square&logo=pocketbase&logoColor=white" alt="Pocketbase" /> # ### How to run: <!-- coloque o código abaixo --> ```bash # Clone this repository $ git clone # Go to the project folder $ cd my-finance # Install Dependencies $ yarn install # Run the application $ expo start # Enjoy! ``` # ### Let's talk? [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://twitter.com/brunowgarcia) [![Linkedin](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/brunowilliang/) [![Gmail](https://img.shields.io/badge/Mail-0077B5?style=for-the-badge&logo=gmail&logoColor=white)](mailto:[email protected]) # ### Author <img src="https://avatars.githubusercontent.com/brunowilliang" alt="Bruno Garcia" width="100px" /> <br/> [![Linkedin](https://img.shields.io/badge/Bruno-author-0077B5?style=for-the-badge&logo=github&logoColor=white)](https://github.com/Brunowilliang/)
37
5
ashawkey/grid_put
https://github.com/ashawkey/grid_put
An operation trying to do the opposite of F.grid_sample
# Grid Put An operation trying to do the opposite of `F.grid_sample()`. ### Install Assume `torch` already installed. ```bash pip install git+https://github.com/ashawkey/grid_put # or locally git clone https://github.com/ashawkey/grid_put cd grid_put pip install . ``` ### Usage ```python from grid_put import grid_put H, W # target grid shape coords # [N, 2], float grid coords in [-1, 1] values # [N, C], values to put into grid # mode: nearest, bilinear, bilinear-mipmap (default) out = grid_put((H, W), coords, values, mode='bilinear-mipmap') # [H, W, C] ``` ### Examples ```bash # extra dependency: pip install kiui python test.py --mode <random|grid> --ratio <float> ``` |mode-ratio | nearest | bilinear | bilinear-mipmap | |:-:|:-:|:-:|:-:| |grid-10%|![](assets/out_nearest_0.1_grid.png) | ![](assets/out_bilinear_0.1_grid.png) | ![](assets/out_bilinear-mipmap_0.1_grid.png) | |grid-90%|![](assets/out_nearest_0.9_grid.png) | ![](assets/out_bilinear_0.9_grid.png) | ![](assets/out_bilinear-mipmap_0.9_grid.png) | |random-10%|![](assets/out_nearest_0.1_random.png) | ![](assets/out_bilinear_0.1_random.png) | ![](assets/out_bilinear-mipmap_0.1_random.png) | |random-90%|![](assets/out_nearest_0.9_random.png) | ![](assets/out_bilinear_0.9_random.png) | ![](assets/out_bilinear-mipmap_0.9_random.png) |
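A runnable version of the usage block above, with concrete shapes (assuming torch and this package are installed; device and dtype requirements may differ, see test.py in the repo):

```python
# Scatter N point samples back onto a 256x256 grid, following the README's API.
import torch
from grid_put import grid_put

H, W, C, N = 256, 256, 3, 10_000

coords = torch.rand(N, 2) * 2 - 1                                # [N, 2], float grid coords in [-1, 1]
values = torch.rand(N, C)                                        # [N, C], values to put into the grid

out = grid_put((H, W), coords, values, mode='bilinear-mipmap')   # [H, W, C]
print(out.shape)                                                 # torch.Size([256, 256, 3])
```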
11
0
dgagn/diagflow.nvim
https://github.com/dgagn/diagflow.nvim
LSP diagnostics in virtual text at the top right of your screen
# diagflow.nvim **diagflow.nvim** is a Neovim plugin that provides a neat and distraction-free way to display LSP diagnostics. It shows diagnostics in virtual text at the top-right corner of your screen, only when the cursor is positioned over the problematic code or across an entire line, according to your preference. This provides a clean and focused coding environment. This approach to diagnostics management is inspired by the Helix editor. ## Example 1. Opening a file with multiple diagnostics but no issues under the cursor: ![nothing](./images/nothing.png) 2. An error under the cursor: ![error](./images/error.png) 3. A hint under the cursor: ![hint](./images/hint.png) ## Installation To install **diagflow.nvim**, use your preferred Neovim package manager. If you're using `packer.nvim`, add the following line to your plugin list: ```lua -- Packer use {'dgagn/diagflow.nvim'} ``` If you're using `lazy.nvim`, add the following line to your plugin list: ```lua -- Lazy { 'dgagn/diagflow.nvim', -- event = 'LspAttach', This is what I use personnally and it works great opts = {} } ``` ## Configuration **Note** if you are using the `opts` with `lazy.nvim`, you don't need to run the setup, it does it for you. The scope option determines the context of diagnostics display: 'cursor' (default) shows diagnostics only under the cursor, while 'line' shows diagnostics for the entire line where the cursor is positioned. ```lua require('diagflow').setup({ enable = true, max_width = 60, -- The maximum width of the diagnostic messages max_height = 10, -- the maximum height per diagnostics severity_colors = { -- The highlight groups to use for each diagnostic severity level error = "DiagnosticFloatingError", warning = "DiagnosticFloatingWarn", info = "DiagnosticFloatingInfo", hint = "DiagnosticFloatingHint", }, format = function(diagnostic) return diagnostic.message end, gap_size = 1, scope = 'cursor', -- 'cursor', 'line' this changes the scope, so instead of showing errors under the cursor, it shows errors on the entire line. padding_top = 0, padding_right = 0, text_align = 'right', -- 'left', 'right' placement = 'top', -- 'top', 'inline' inline_padding_left = 0, -- the padding left when the placement is inline update_event = { 'DiagnosticChanged', 'BufReadPost' }, -- the event that updates the diagnostics cache toggle_event = { }, -- if InsertEnter, can toggle the diagnostics on inserts show_sign = false, -- set to true if you want to render the diagnostic sign before the diagnostic message render_event = { 'DiagnosticChanged', 'CursorMoved' } }) ``` Or simply use the default configuration: ```lua require('diagflow').setup() ``` ## FAQ 1. How do I change the colors of the virtual text? You can set up custom colors by changing the highlight group in the configuration. For instance, in the default configuration, `:hi Hint` determines the color of the hints. You can change the hint color to blue with `:hi Hint guifg=blue`. 2. Can I still have the diagnostics inline? Yes, with the new option `placement`, you can set the diagnostics inline instead of at the top right. Here is a example : ![inline](./images/inline.png) Here is the example config used in this screenshot : ```lua { 'dgagn/diagflow.nvim', opts = { placement = 'inline', inline_padding_left = 3, }, } ``` 3. How can I disable the cursor when I enter insert mode and reenable it when I go in normal mode? ```lua { 'dgagn/diagflow.nvim', opts = { toggle_event = { 'InsertEnter' }, }, } ``` 4. Something doesn't update when X or Y happens. 
You can set up when the diagnostics cache is updated with this option:

```lua
{
    'dgagn/diagflow.nvim',
    opts = {
        update_event = { 'DiagnosticChanged', ... },
    },
}
```

5. I want to customize my diagnostic messages

You can set a diagnostic message by supplying the `format` option.

```lua
{
    'dgagn/diagflow.nvim',
    opts = {
        format = function(diagnostic)
            return '[LSP] ' .. diagnostic.message
        end
    },
}
```
88
3
0a00/hyprfiles
https://github.com/0a00/hyprfiles
hyprland configuration file
# hyprfiles

Hyprland configuration files

# Configuration overview

- Terminal: Alacritty, Wezterm
- Application launcher: Anyrun, Rofi
- Volume and brightness control: Avizo (ships with its own notification after changing volume or brightness)
- Notifications: Dunst
- Status bar: Waybar
- Wallpaper: Swww

See the configuration files for details.

# Screenshots

![Untitled](screenshot/8.png)
![Untitled](screenshot/9.png)
![Untitled](screenshot/10.png)
![Untitled](screenshot/1.png)
![Untitled](screenshot/2.png)
![Untitled](screenshot/3.png)
![Untitled](screenshot/4.png)
![Untitled](screenshot/5.png)
![Untitled](screenshot/6.png)
![Untitled](screenshot/7.png)
14
1
ByteBallet/TreePixel
https://github.com/ByteBallet/TreePixel
Open source of https://treepixel.vercel.app
# treepixel_v2 Nuxt3, Supabase, Bootstrap, Typescript, Vuetify
13
0
AntonioErdeljac/next13-ai-saas
https://github.com/AntonioErdeljac/next13-ai-saas
null
# Build a SaaS AI Platform with Next.js 13, React, Tailwind, Prisma, Stripe | Full Tutorial 2023

![Copy of Copy of Copy of Fullstack Twitter Clone](https://github.com/AntonioErdeljac/next13-ai-saas/assets/23248726/c47e604a-b50b-4eb0-b420-fda20908f522)

This is a repository for Build a SaaS AI Platform with Next.js 13, React, Tailwind, Prisma, Stripe | Full Tutorial 2023.

[VIDEO TUTORIAL](https://www.youtube.com/watch?v=ffJ38dBzrlY)

Features:

- Tailwind design
- Tailwind animations and effects
- Full responsiveness
- Clerk Authentication (Email, Google, 9+ Social Logins)
- Client form validation and handling using react-hook-form
- Server error handling using react-toast
- Image Generation Tool (Open AI)
- Video Generation Tool (Replicate AI)
- Conversation Generation Tool (Open AI)
- Music Generation Tool (Replicate AI)
- Page loading state
- Stripe monthly subscription
- Free tier with API limiting
- How to write POST, DELETE, and GET routes in route handlers (app/api)
- How to fetch data in server react components by directly accessing database (WITHOUT API! like Magic!)
- How to handle relations between Server and Child components!
- How to reuse layouts
- Folder structure in Next 13 App Router

### Prerequisites

**Node version 18.x.x**

### Cloning the repository

```shell
git clone https://github.com/AntonioErdeljac/next13-ai-saas.git
```

### Install packages

```shell
npm i
```

### Setup .env file

```js
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=
CLERK_SECRET_KEY=
NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up
NEXT_PUBLIC_CLERK_AFTER_SIGN_IN_URL=/dashboard
NEXT_PUBLIC_CLERK_AFTER_SIGN_UP_URL=/dashboard
OPENAI_API_KEY=
REPLICATE_API_TOKEN=
DATABASE_URL=
STRIPE_API_KEY=
STRIPE_WEBHOOK_SECRET=
NEXT_PUBLIC_APP_URL="http://localhost:3000"
```

### Setup Prisma

Add MySQL Database (I used PlanetScale)

```shell
npx prisma db push
```

### Start the app

```shell
npm run dev
```

## Available commands

Running commands with npm `npm run [command]`

| command | description                               |
| :------ | :---------------------------------------- |
| `dev`   | Starts a development instance of the app  |
488
158
aman0046/Which-companies-hires-offCampus-and-through-which-platform
https://github.com/aman0046/Which-companies-hires-offCampus-and-through-which-platform
null
# Which all companies hires OffCampus and through which program? 😇 ✅ ### 1) Goldman Sachs <img src="/assets/images/wtc1.png" width="60" height="35" align="center"> **➡️ Role**: Intern and Full time both \ **⭐️ Program**: Engineer Campus Hiring \ **🎯 Eligibility**: Final and pre final year students --- ### 2) DeShaw & Co **➡️ Role**: Intern (Female grads) \ **⭐️ Program**: Ascend Educare \ **🎯 Eligibility**: 2nd and 3rd year students --- ### 3) Uber **➡️ Role**: Intern and Full time \ **⭐️ Program**: HackTag \ **🎯 Eligibility**: Final and pre final year students --- ### 4) Cisco **➡️ Role**: Inten \ **⭐️ Program**: Ideathon \ **🎯 Eligibility**: Pre final year students --- ### 5) Microsoft **➡️ Role**: Intern and Full time \ **⭐️ Program**: Microsoft Engage \ **🎯 Eligibility**: 2nd and 3rd year students --- ### 6) Flipkart **➡️ Role**: Intern and Full time \ **⭐️ Program**: Flipkart Grid \ **🎯 Eligibility**: All years undergrads --- ### 7) GitHub **➡️ Role**: Intern \ **⭐️ Program**: Externship \ **🎯 Eligibility**: Final and pre final year students --- ### 8) American Express **➡️ Role**: Intern \ **⭐️ Program**: CodeStreet, Geek Goddess \ **🎯 Eligibility**: Final and pre final year students --- ### 9) J.P. Morgan **➡️ Role**: Intern and Full time \ **⭐️ Program**: Code for Good \ **🎯 Eligibility**: 2nd and 3rd year students --- ### 10) Lowe’s **➡️ Role**: Full time \ **⭐️ Program**: Lowe’s Hiring Challenge \ **🎯 Eligibility**: Final year students --- ### 11) Myntra **➡️ Role**: Intern and Full time \ **⭐️ Program**: HackerRamp \ **🎯 Eligibility**: Final and pre final year students --- ### 12) Code Nation (Trilogy) **➡️ Role**: Intern and Full time \ **⭐️ Program**: CodeAgon \ **🎯 Eligibility**: Final and pre final year students --- ### 13) Juspay **➡️ Role**: Intern and Full time \ **⭐️ Program**: Juspay Hiring Challenge \ **🎯 Eligibility**: Final and pre final year students --- ### 14) Intuit **➡️ Role**: Intern and Full time \ **⭐️ Program**: Hire through Referral only \ **🎯 Eligibility**: Final and pre final year students --- ### 15) Optum **➡️ Role**: Full time \ **⭐️ Program**: Stratethon \ **🎯 Eligibility**: All year students --- <img src="/assets/images/save.png" width="600" height="200"> **For any doubt, feel free to connect on Linkedin and Instagram** > [Linkedin](https://www.linkedin.com/in/amanchowdhury046/) \ [Instagram](https://www.instagram.com/aman_chowdhury_046/)
41
9