id: int64 (393k – 2.82B)
repo: stringclasses (68 values)
title: stringlengths (1 – 936)
body: stringlengths (0 – 256k)
labels: stringlengths (2 – 508)
priority: stringclasses (3 values)
severity: stringclasses (3 values)
2,706,407,990
transformers
Only Fine-tune the embeddings of the added special tokens
### Feature request Hi, I added some new special tokens to an LLM (specifically Qwen2-VL), and I want to fine-tune only the embeddings of these added tokens while keeping all other parameters (including the embeddings of the original tokens) frozen. Is there a built-in way to do this instead of fine-tuning the whole embedding matrix? ### Motivation If we want to maximally retain the original capabilities of the model while adding new tokens for certain scenarios, this is needed, especially when we don't have much data and do not want to alter the pretrained weights. Another question: if we have a considerable amount of data, is it recommended to fine-tune the whole embedding matrix or only the embeddings of the added tokens? ### Your contribution If it's a reasonable feature and not implemented yet, I'm happy to submit a PR.
Feature request
low
Minor
2,706,478,008
ollama
There are issues with the upgrade
### What is the issue? Why is the version always 0.3.14 during each upgrade, and why can't it be upgraded to the latest version? ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.4.3~0.4.5
bug
low
Minor
2,706,480,373
ollama
Classify tool call vs. content earlier and stream to user
https://github.com/ollama/ollama/issues/5796#issuecomment-2508374342
feature request
low
Major
2,706,486,084
PowerToys
Mouse Pointer Cross Hairs. Automatic disable
### Description of the new feature / enhancement When typing, temporarily disable the crosshairs. ### Scenario when this would be used? I find that when I am typing, I have to manually move the mouse crosshairs out of the way. I am aware that manually enabling/disabling them helps, but having the crosshairs automatically disable while keys are being typed and re-enable when typing stops would be valuable. ### Supporting information _No response_
Idea-Enhancement,Needs-Triage,Product-Mouse Utilities
low
Minor
2,706,504,763
flutter
[in_app_purchase_storekit] Comment in example/lib/main.dart may be incorrect.
[packages/packages/in_app_purchase/in_app_purchase_storekit/example/lib/main.dart at main · flutter/packages](https://github.com/flutter/packages/blob/main/packages/in_app_purchase/in_app_purchase_storekit/example/lib/main.dart#L17-L19) Current: ```dart // When using the Android plugin directly it is mandatory to register // the plugin as default instance as part of initializing the app. InAppPurchaseStoreKitPlatform.registerPlatform(); ``` Expected: ```dart // When using the StoreKit plugin directly it is mandatory to register // the plugin as default instance as part of initializing the app. InAppPurchaseStoreKitPlatform.registerPlatform(); ```
platform-ios,d: examples,p: in_app_purchase,package,P2,team-ios,triaged-ios
low
Minor
2,706,507,916
next.js
After upgrading to Next.js 15, the custom server cannot be started in the production environment
### Verify canary release - [X] I verified that the issue exists in the latest Next.js canary release ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 23.0.0: Fri Sep 15 14:42:57 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T8112 Available memory (MB): 16384 Available CPU cores: 8 Binaries: Node: 20.11.1 npm: 10.2.4 Yarn: 1.22.21 pnpm: N/A Relevant Packages: next: 15.0.4-canary.32 eslint-config-next: 15.0.3 react: 19.0.0-rc-fb9a90fa48-20240614 react-dom: 19.0.0-rc-fb9a90fa48-20240614 typescript: 5.7.2 Next.js Config: output: N/A ``` ### Which example does this report relate to? https://github.com/vercel/next.js/tree/canary/examples/with-mobx ### What browser are you using? (if relevant) _No response_ ### How are you deploying your application? (if relevant) Other platform ### Describe the Bug Here are the errors on the console when I start the custom server: ``` # node[57308]: void node::SetupHooks(const FunctionCallbackInfo<v8::Value> &) at ../src/async_wrap.cc:164 # Assertion failed: env->async_hooks_init_function().IsEmpty() ----- Native stack trace ----- 1: 0x1000f159c node::Abort() [/usr/local/bin/node] 2: 0x1000f12ec node::PrintCaughtException(v8::Isolate*, v8::Local<v8::Context>, v8::TryCatch const&) [/usr/local/bin/node] 3: 0x100032c24 node::SetupHooks(v8::FunctionCallbackInfo<v8::Value> const&) [/usr/local/bin/node] 4: 0x1002e4494 v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, unsigned long*, int) [/usr/local/bin/node] 5: 0x1002e3b8c v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node] 6: 0x100b6cb24 Builtins_CEntry_Return1_ArgvOnStack_BuiltinExit [/usr/local/bin/node] 7: 0x100ae43e4 Builtins_InterpreterEntryTrampoline [/usr/local/bin/node] 8: 0x100ae1708 construct_stub_create_deopt_addr [/usr/local/bin/node] 9: 0x100c205cc Builtins_ConstructHandler [/usr/local/bin/node] 10: 0x100ae43e4 Builtins_InterpreterEntryTrampoline [/usr/local/bin/node] 11: 0x10602c078 12: 0x100ae43e4 Builtins_InterpreterEntryTrampoline [/usr/local/bin/node] 13: 0x10602c078 14: 0x100ae43e4 Builtins_InterpreterEntryTrampoline [/usr/local/bin/node] 15: 0x100b1b210 Builtins_AsyncFunctionAwaitResolveClosure [/usr/local/bin/node] 16: 0x100bc8fb8 Builtins_PromiseFulfillReactionJob [/usr/local/bin/node] 17: 0x100b0ab94 Builtins_RunMicrotasks [/usr/local/bin/node] 18: 0x100ae23f4 Builtins_JSRunMicrotasksEntry [/usr/local/bin/node] 19: 0x1003b84d0 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/usr/local/bin/node] 20: 0x1003b89bc v8::internal::(anonymous namespace)::InvokeWithTryCatch(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/usr/local/bin/node] 21: 0x1003b8b98 v8::internal::Execution::TryRunMicrotasks(v8::internal::Isolate*, v8::internal::MicrotaskQueue*) [/usr/local/bin/node] 22: 0x1003dfd64 v8::internal::MicrotaskQueue::RunMicrotasks(v8::internal::Isolate*) [/usr/local/bin/node] 23: 0x1003e0500 v8::internal::MicrotaskQueue::PerformCheckpoint(v8::Isolate*) [/usr/local/bin/node] 24: 0x100020c64 node::InternalCallbackScope::Close() [/usr/local/bin/node] 25: 0x1000207c4 node::InternalCallbackScope::~InternalCallbackScope() 
[/usr/local/bin/node] 26: 0x1000f559c node::fs::FileHandle::CloseReq::Resolve() [/usr/local/bin/node] 27: 0x10010ecdc node::fs::FileHandle::ClosePromise()::$_0::__invoke(uv_fs_s*) [/usr/local/bin/node] 28: 0x1000ec4e8 node::MakeLibuvRequestCallback<uv_fs_s, void (*)(uv_fs_s*)>::Wrapper(uv_fs_s*) [/usr/local/bin/node] 29: 0x100abe39c uv__work_done [/usr/local/bin/node] 30: 0x100ac1dec uv__async_io [/usr/local/bin/node] 31: 0x100ad3ec4 uv__io_poll [/usr/local/bin/node] 32: 0x100ac23b0 uv_run [/usr/local/bin/node] 33: 0x100021754 node::SpinEventLoopInternal(node::Environment*) [/usr/local/bin/node] 34: 0x100131c6c node::NodeMainInstance::Run(node::ExitCode*, node::Environment*) [/usr/local/bin/node] 35: 0x100131a08 node::NodeMainInstance::Run() [/usr/local/bin/node] 36: 0x1000bb718 node::Start(int, char**) [/usr/local/bin/node] 37: 0x189209058 start [/usr/lib/dyld] ----- JavaScript stack trace ----- 1: l (/Users/hujimin/tt-web-site-application/build/trip/server/chunks/446.js:1:942945) 2: 10053 (/Users/hujimin/tt-web-site-application/build/trip/server/chunks/446.js:1:944017) 3: t (/Users/hujimin/tt-web-site-application/build/trip/server/webpack-runtime.js:1:143) 4: 1792 (/Users/hujimin/tt-web-site-application/build/trip/server/chunks/446.js:1:956702) 5: t (/Users/hujimin/tt-web-site-application/build/trip/server/webpack-runtime.js:1:143) 6: /Users/hujimin/tt-web-site-application/node_modules/next/dist/server/next-server.js:464:31 [1] 57295 abort sudo npm run start ``` I found the source code related to node_modules/next/dist/server/next server. js: 464:31 as follows: <img width="644" alt="image" src="https://github.com/user-attachments/assets/5003d7c1-ca42-42e0-be4f-0b5f1eb11b4c"> The unstable_preloadEntries method is called in the constructor of NextNodeServer: <img width="703" alt="image" src="https://github.com/user-attachments/assets/ad0a2095-096a-4423-bc5c-d98695d2d7fb"> ### Expected Behavior The application can start normally in the production environment ### To Reproduce My custom server code : ```javascript // server/index.js const next = require('next'); const _path = require('path'); const express = require('express'); const distDir = 'build/trip' distDir = _path.join('../', distDir); const next_app = next({ dev: false, distDir, }) const start = async () => { await next_app.prepare() server.listen(8080, () => {}) } start() ``` My build command: ```bash next build ``` My startup command: ```bash server/index.js ```
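The reproduction snippet above appears incomplete (`distDir` is reassigned after being declared `const`, and `server` is never created even though `express` is required). A minimal corrected sketch of this kind of custom server, assuming an Express app and the same `build/trip` dist directory (names are illustrative), is shown below; it does not by itself address the `SetupHooks` assertion:

```ts
// server/index.ts — hypothetical corrected version of the reproduction above
import path from 'path';
import express from 'express';
import next from 'next';

const distDir = path.join('../', 'build/trip');
const app = next({ dev: false, distDir });
const handle = app.getRequestHandler();

const start = async () => {
  await app.prepare();
  const server = express();
  // Hand every request over to Next.js
  server.all('*', (req, res) => handle(req, res));
  server.listen(8080, () => console.log('listening on :8080'));
};

start();
```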
examples
low
Critical
2,706,513,655
ollama
Add tests for openai response logic - potentially refactor middleware
null
feature request
low
Minor
2,706,582,015
go
crypto/tls: TestFIPSCertAlgs failures
``` #!watchflakes default <- pkg == "crypto/tls" && test == "TestFIPSCertAlgs" ``` Issue created automatically to collect these failures. Example ([log](https://ci.chromium.org/b/8729882779692073953)): === RUN TestFIPSCertAlgs panic: test timed out after 3m0s running tests: TestFIPSCertAlgs (2m58s) goroutine 255 gp=0x9dc67e8 m=9 mp=0x9e80808 [running]: panic({0x83eebe0, 0x9c28050}) /home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:806 +0x144 fp=0x9c54784 sp=0x9c54730 pc=0x80bacc4 testing.(*M).startAlarm.func1() /home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:2456 +0x382 fp=0x9c547f0 sp=0x9c54784 pc=0x815bd72 ... runtime.gopark(0x846f384, 0x9cda4e0, 0x1b, 0xa, 0x0) /home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:435 +0x10c fp=0x9d62f8c sp=0x9d62f78 pc=0x80bb16c runtime.gcBgMarkWorker(0x9c90b00) /home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1423 +0xfd fp=0x9d62fe8 sp=0x9d62f8c pc=0x806104d runtime.gcBgMarkStartWorkers.gowrap1() /home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1339 +0x27 fp=0x9d62ff0 sp=0x9d62fe8 pc=0x8060f37 runtime.goexit({}) /home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_386.s:1393 +0x1 fp=0x9d62ff4 sp=0x9d62ff0 pc=0x80c0ce1 created by runtime.gcBgMarkStartWorkers in goroutine 25 /home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mgc.go:1339 +0x12c — [watchflakes](https://go.dev/wiki/Watchflakes)
NeedsInvestigation
low
Critical
2,706,584,344
next.js
Cannot use Next.js as a transitive dep to build an application
### Link to the code that reproduces this issue https://github.com/Ethan-Arrowood/next-issue-repro ### To Reproduce Follow readme in https://github.com/Ethan-Arrowood/next-issue-repro Or: 1. Generate a new nextjs app, `npx create-next-app`. Called it `my-app` 2. As a sibling to the next app created in step 1, create a new directory `next-transitive` 3. Add a package.json with the content: `{ "dependencies": { "next-app": "file:../my-app" } }` 4. cd into `next-transitive` and run `npm install --install-links` (the option is so that the file: is actually installed instead of symlinked) 5. cd into `node_modules/next-app` and try to run `npx next build` or `npm run build`. They will both fail with the following: ``` > [email protected] build > next build ▲ Next.js 15.0.3 Creating an optimized production build ... Failed to compile. ./app/page.js Module parse failed: Unexpected token (6:4) File was processed with these loaders: * ../next/dist/build/webpack/loaders/next-flight-loader/index.js * ../next/dist/build/webpack/loaders/next-swc-loader.js You may need an additional loader to handle the result of these loaders. | export default function Home() { | return ( > <div className={styles.page}> | <main className={styles.main}> | <Image Import trace for requested module: ./app/page.js > Build failed because of webpack errors ``` ### Current vs. Expected behavior This behavior seems to exist for a few major versions. My assumption is that the build process is not resolving its own dependencies correctly. ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6030 Available memory (MB): 18432 Available CPU cores: 12 Binaries: Node: 20.16.0 npm: 10.8.1 Yarn: N/A pnpm: N/A Relevant Packages: next: 15.0.3 // Latest available version is detected (15.0.3). eslint-config-next: N/A react: 19.0.0-rc-66855b96-20241106 react-dom: 19.0.0-rc-66855b96-20241106 typescript: N/A Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Developer Experience, Script (next/script), SWC, Webpack ### Which stage(s) are affected? (Select all that apply) next build (local) ### Additional context _No response_
bug,SWC,Webpack,Script (next/script)
low
Critical
2,706,594,838
godot
Entire scenes disappearing with MissingNode error
### Tested versions Godot v4.3.stable ### System information Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 2700X Eight-Core Processor (16 Threads) ### Issue description Seemingly for no reason at all, random scenes in my project completely disappear and turn into MissingNode. If it weren't for git, there would be no way to get the scene back. This is very bad. It has happened to me twice, far apart in time. I'm just reporting it because no one has an answer to it online. ![Image](https://github.com/user-attachments/assets/6cb993a0-8931-4814-ad5c-031095bdaf8b) ### Steps to reproduce I have no idea how to reproduce it; it seems to happen randomly. My project is connected to GitHub Desktop and it is a larger project. ### Minimal reproduction project (MRP) N/A
bug,topic:editor,needs testing
low
Critical
2,706,605,085
deno
`deno task`: cannot run tsx
Version: Deno 2.1.2 This is the same error as #20625, but not in the same context. I'm trying to create an alias for `npx tsx --test` (to test the codebase on Node.js), but for some reason, when it is executed through the task runner, there is a weird error (see the output below). ```json { "tasks": { "test:node": "npx tsx --help" } } ``` ``` Task test:node npx tsx --help tsx Node.js runtime enhanced with esbuild for loading TypeScript & ESM Usage: tsx [flags...] [script path] tsx <command> Commands: watch Run the script and watch for changes Flags: -h, --help Show help --no-cache Disable caching --tsconfig <string> Custom tsconfig.json path -v, --version Show version --------------------------------------------- error: unexpected argument '--require' found tip: to pass '--require' as a value, use '-- --require' Usage: deno run [OPTIONS] [SCRIPT_ARG]... ``` There's a "deno run" mention, so I assume some kind of shim aliases/replaces tsx. If executed directly in bash, the command works as expected.
bug,node compat,task runner
low
Critical
2,706,612,570
rust
Unexpected warning when doc string invokes a macro which is defined within the same module
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. --> This is a follow-up of https://github.com/rust-lang/rust/issues/124535#issuecomment-2466005846. TLDR: it looks like https://github.com/rust-lang/rust/issues/124535 might have not been completely resolved, or at least there are some undetected edge cases in https://github.com/rust-lang/rust/pull/125741/commits/c4c7859e40efcfff640af442fb5d1fab3718d374... ### Issue Description In the minimal reproduction ([rami3l/repro_rust_124535](https://github.com/rami3l/repro_rust_124535)), we have: ```rust // lib.rs macro_rules! pm_mods { ( $( $vis:vis $mod:ident; )+ ) => { $( $vis mod $mod; pub use self::$mod::$mod; )+ } } pm_mods! { dnf; } ``` ```rust // dnf.rs #![doc = doc_self!()] macro_rules! doc_self { () => { "The Dandified YUM." }; } use doc_self; // <- Please note that the suggested fix has already been applied. #[doc = doc_self!()] pub fn dnf() {} ``` **I expected to see this happen**: ✅ **Instead, this happened**: ```console > cargo check Checking repro_rust_124535 v0.1.0 (/path/censored) warning: cannot find macro `doc_self` in this scope --> src/dnf.rs:1:10 | 1 | #![doc = doc_self!()] | ^^^^^^^^ | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #124535 <https://github.com/rust-lang/rust/issues/124535> = help: import `macro_rules` with `use` to make it callable above its definition = note: `#[warn(out_of_scope_macro_calls)]` on by default warning: `repro_rust_124535` (lib) generated 1 warning Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.06s ``` Also, it's worth noticing that after inlining the macro invocation in `lib.rs`, i.e.: ```rust // lib.rs mod dnf; pub use self::dnf::dnf; ``` ... the error seems gone. ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.83.0 (90b35a623 2024-11-26) binary: rustc commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf commit-date: 2024-11-26 host: aarch64-apple-darwin release: 1.83.0 LLVM version: 19.1.1 ```
A-resolve,A-macros,T-compiler,C-bug
low
Critical
2,706,614,268
PowerToys
Chinese Pinyin (拼音) search doesn't work in PowerToys Run when it corresponds to more than two (or maybe three) characters
### Microsoft PowerToys version 0.86.0 ### Installation method PowerToys auto-update, GitHub ### Running as admin No ### Area(s) with issue? PowerToys Run ### Steps to reproduce ![Image](https://github.com/user-attachments/assets/65fc58a4-bf74-40de-a52f-08939e6728b1) ![Image](https://github.com/user-attachments/assets/299fdd57-3fdd-492f-a843-409f5781bcef) ![Image](https://github.com/user-attachments/assets/d9f02cb4-c388-4c7c-b1d3-da136d36ddc6) ![Image](https://github.com/user-attachments/assets/8eb5a110-2603-4039-ae6d-abaaaac4a3d5) ![Image](https://github.com/user-attachments/assets/7aed0cab-c95c-4112-8301-74bdb5108a66) ![Image](https://github.com/user-attachments/assets/c06ee60d-e4b4-4193-973b-f0e08cb617fd) ![Image](https://github.com/user-attachments/assets/a772c46e-dd2b-4cb3-9f8e-e658ab91e372) ![Image](https://github.com/user-attachments/assets/4a9dd1d7-16be-4ad5-b841-91da8f0b56f1) ![Image](https://github.com/user-attachments/assets/ca8390cd-b97b-4cea-9e34-ae3fb9447ad9) ### ✔️ Expected Behavior baidufanyi -> `百度翻译` bdfy -> `百度翻译` baiduwangpan -> `百度网盘` bdwp -> `百度网盘` bd -> `百度翻译` and `百度网盘` baidu -> `百度翻译` and `百度网盘` ### ❌ Actual Behavior Work as Expected: - [x] bd -> `百度网盘` - [x] baidu -> `百度网盘` Not as Expected: - [ ] baidufanyi -> `百度翻译` - [ ] bdfy -> `百度翻译` - [ ] baiduwangpan -> `百度网盘` - [ ] bdwp -> `百度网盘` - [ ] bd -> `百度翻译` - [ ] baidu -> `百度翻译` ### Other Software _No response_
Issue-Bug,Product-PowerToys Run,Area-Localization,Needs-Triage
low
Minor
2,706,616,734
awesome-mac
🎉 Add Tran
### 🪩 Provide a link to the proposed addition https://github.com/Borber/Tran ### 😳 Explain why it should be added Tran: a simple, fast, select-to-translate tool. ### 📖 Additional context Cross-platform. ### 🧨 Issue Checklist - [X] I have checked for other similar issues - [X] I have explained why this change is important - [X] I have added necessary documentation (if appropriate)
addition
low
Minor
2,706,633,790
awesome-mac
🎉 Add Gopeed
### 🪩 Provide a link to the proposed addition https://github.com/GopeedLab/gopeed ### 😳 Explain why it should be added Gopeed (full name Go Speed) is a high-speed downloader developed with Golang + Flutter. It supports the HTTP, BitTorrent, and Magnet protocols and runs on all platforms. In addition to basic download functions, Gopeed is also a highly customizable downloader that supports adding more features through API integration or by installing and developing extensions. ### 📖 Additional context Cross-platform and simple. ### 🧨 Issue Checklist - [X] I have checked for other similar issues - [X] I have explained why this change is important - [X] I have added necessary documentation (if appropriate)
addition
low
Minor
2,706,690,273
deno
SyntaxError: Unexpected identifier 'parentPort' in worker_threads
Version: Deno 2.1.2, Windows 10 x64 I wrote a program using Node.js, TypeScript and a lib called [microjob](https://github.com/wilk). the program code below: ```TS export class EnvChecker { static async checkRuntimes() { // 首先通过进程命令行获取运行时类型(node/tsx/deno/bun...) const willCheck = process.env.EC_ON_LAUNCH || false if (willCheck) { try { // start the worker pool first await start({ maxWorkers: 1 }) // this function will be executed in another thread const res = await job(() => { console.log(`run time is: ${process.argv[0]}`) // doing some other tasks, skip here return 'Env check done.' }) console.log(res) } catch (err) { console.error(err) } finally { await stop() // shutdown worker pool } } } } EnvChecker.checkRuntimes() ``` When I execute it use deno, got error: ```console PS F:\my-program> npm run dev:deno > deno run --watch --inspect --allow-all src/main.ts Watcher Process started. Debugger listening on ws://127.0.0.1:9229/ws/3426a06b-7d14-4a71-bdca-e45de5e07894 Visit chrome://inspect to connect to the debugger. Debugger listening on ws://127.0.0.1:9229/ws/0ef0d61e-7b4c-4b57-94bc-2633e8907f82 Visit chrome://inspect to connect to the debugger. error: Uncaught (in worker "[worker eval]") SyntaxError: Unexpected identifier 'parentPort' at <anonymous> (data:text/javascript,const { parentPort }......error(err) } }}):1:49) 2024/11/30 13:16:01 Web app > Main Process launched, name: Mot&Mot 2024/11/30 13:16:01 Web app > Server is running on port: 55501 2024/11/30 13:16:01 Web app > Process(Whole App/Service) instance xid(random): ct59v4cn97dj54118l2g, OS process id: 12944 Debugger listening on ws://127.0.0.1:9229/ws/955a666c-c547-46c9-a6f8-7fea6490b9d5 Visit chrome://inspect to connect to the debugger. error: Uncaught (in worker "[worker eval]") SyntaxError: Unexpected identifier 'parentPort' at <anonymous> (data:text/javascript,const { parentPort }......error(err) } }}):1:49) [Object: null prototype] { message: "Uncaught SyntaxError: Unexpected identifier 'parentPort'", fileName: "data:text/javascript,const { parentPort } = require('worker_threads')parentPort.on('message', async worker => { const response = { error: null, data: null } try { eval(worker) // __executor__ is defined in worker response.data = await __executor__() parentPort.postMessage(response) } catch (err) { response.data = null response.error = { message: err.message, stack: err.stack } try { parentPort.postMessage(response) } catch (err) { console.error(err) } }})", lineNumber: 1, columnNumber: 49 } ``` Meanwhile, this program works fine in node/tsx and bun: ```console PS F:\my-program> npm run dev:tsx > npx tsx watch --inspect --env-file=.env src/main.ts Debugger listening on ws://127.0.0.1:9229/085d4c46-186c-43c4-9676-95fc522e81c4 For help, see: https://nodejs.org/en/docs/inspector run time is: C:\Apps\Nvm\Node\node.exe Env check done. ``` ```console PS F:\my-program> npm run dev:bun > bun run --watch --inspect src/main.ts --------------------- Bun Inspector --------------------- Listening: ws://localhost:6499/m4h84jh099g Inspect in browser: https://debug.bun.sh/#localhost:6499/m4h84jh099g --------------------- Bun Inspector --------------------- run time is: C:\Users\jason\scoop\apps\bun\1.1.38\bun.exe Env check done. ```
bug,node compat
low
Critical
2,706,779,259
ui
[bug]: ECONNRESET Error When Running `npx shadcn@latest add --all`
### Describe the bug Hello, I'm encountering an issue while trying to use the `shadcn` CLI tool to add all components to my project. The command fails with an `ECONNRESET` error when attempting to fetch certain resources from https://ui.shadcn.com. Here are the details: **What I Have Tried:** - Checked internet connectivity (it's stable). - Cleared npm cache using npm cache clean --force. - Updated Node.js and npm to the latest versions. - Tried using a different network (e.g., mobile hotspot). **Environment:** - Node.js version: v22.11.0 (LTS) - npm version: 10.9.0 - Operating System: Windows 11 Pro **Additional Information:** - I’ve attached screenshots of the terminal errors for reference. - Please let me know if this is an issue with the shadcn server or if there are additional steps I should follow. **Suggested Note for the Issue:** > I am encountering the `ECONNRESET `error with different components each time I run the command. It seems the issue affects all components fetched via `npx shadcn@latest add --all`. This could indicate a problem with the network/resource fetching mechanism rather than any specific component. Thank you! **Attachments:** ![Screenshot 2024-11-30 115133](https://github.com/user-attachments/assets/783fb442-c22b-428b-b2da-3d43a212ab65) ### Affected component/components potentially all. ### How to reproduce 1. Run the following command in the terminal: ``` npx shadcn@latest add --all ``` 2. The command fails with the following error: ``` Something went wrong. Please check the error below for more details. request to https://ui.shadcn.com/r/styles/new-york/... failed, reason: read ECONNRESET ``` ### Codesandbox/StackBlitz link https://github-production-user-asset-6210df.s3.amazonaws.com/88278303/391216641-783fb442-c22b-428b-b2da-3d43a212ab65.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20241130%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241130T062805Z&X-Amz-Expires=300&X-Amz-Signature=9212cc08d79e91503bce5c075265c51ba2d70ee4e05fb1c8e8a60f16cd228bf6&X-Amz-SignedHeaders=host ### Logs _No response_ ### System Info ```bash Microsoft Windows 11 Pro(24H2), Node.js v22.11.0, npm v10.9.0, shadcn CLI v2.1.6 ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,706,839,005
flutter
[desktop]: Screen doesn't scroll with the trackpad if the pointer is placed on the scrollbar and you then try to scroll.
### Steps to reproduce 1. Run the sample 2. Move the cursor to the edge to show the dragbar (over the dragbar) 3. Scroll on touchpad ### Expected results Widget also scrolls. ### Actual results The widget does not scroll. Windows and MacOS have this issue. I have no Linux touchpad to test. ### Code sample <details open><summary>Code sample</summary> ```dart import 'package:flutter/material.dart'; /// Flutter code sample for [SingleChildScrollView]. void main() => runApp(const SingleChildScrollViewExampleApp()); class SingleChildScrollViewExampleApp extends StatelessWidget { const SingleChildScrollViewExampleApp({super.key}); @override Widget build(BuildContext context) { return const MaterialApp( home: SingleChildScrollViewExample(), ); } } class SingleChildScrollViewExample extends StatelessWidget { const SingleChildScrollViewExample({super.key}); @override Widget build(BuildContext context) { return DefaultTextStyle( style: Theme.of(context).textTheme.bodyMedium!, child: LayoutBuilder( builder: (BuildContext context, BoxConstraints viewportConstraints) { return SingleChildScrollView( child: ConstrainedBox( constraints: BoxConstraints( minHeight: viewportConstraints.maxHeight, ), child: Column( mainAxisSize: MainAxisSize.min, mainAxisAlignment: MainAxisAlignment.spaceAround, children: List.generate(10, (int i) { return Container( color: i.isEven ? const Color(0xffeeee00) : const Color(0xff008000), height: 120.0, alignment: Alignment.center, child: Text('List item $i'), ); }), ), ), ); }, ), ); } } ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [√] Flutter (Channel stable, 3.24.5, on Microsoft Windows [Version 10.0.22631.4460], locale en-US) • Flutter version 3.24.5 on channel stable at D:\DevEnv\flutter\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision dec2ee5c1f (2 weeks ago), 2024-11-13 11:13:06 -0800 • Engine revision a18df97ca5 • Dart version 3.5.4 • DevTools version 2.37.3 [√] Windows Version (Installed version of Windows is version 10 or higher) [X] Android toolchain - develop for Android devices X Unable to locate Android SDK. Install Android Studio from: https://developer.android.com/studio/index.html On first launch it will assist you in installing the Android SDK components. (or visit https://flutter.dev/to/windows-android-setup for detailed instructions). If the Android SDK has been installed to a custom location, please use `flutter config --android-sdk` to update to that location. [√] Chrome - develop for the web • Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe [√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.7.0) • Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community • Visual Studio Community 2022 version 17.7.34003.232 • Windows 10 SDK version 10.0.22621.0 [!] Android Studio (not installed) • Android Studio not found; download from https://developer.android.com/studio/index.html (or visit https://flutter.dev/to/windows-android-setup for detailed instructions). [!] Proxy Configuration • HTTP_PROXY is set ! 
NO_PROXY is not set [√] Connected device (3 available) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4460] • Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86 • Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.80 [√] Network resources • All expected network resources are available. ! Doctor found issues in 3 categories. ``` </details>
framework,f: scrolling,a: desktop,has reproducible steps,P3,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27
low
Major
2,706,867,360
flutter
Improve documentation of `TextField`'s `decoration` property interplay with `Theme.inputDecorationTheme`
### Use case The [current documentation of flutter's `TextField`](https://api.flutter.dev/flutter/material/TextField-class.html) contains no mention of the fact that any explicit `decoration` is merged with whatever additional decorations are defined in the surrounding theme, making it unnecessarily difficult to debug why e.g. trying to remove a field's border (if defined in the theme) is impossible without wrapping the widget with an additional `Theme` containing explicit `InputBorder.none` values. ### Proposal As a bare minimum, some form of notice should be added to the `decoration` property informing about the potential need for inserting an applied-property-clearing `Theme` wrapper surrounding the widget. Ideally though, users should have a less verbose way of specifying whether to merge or fully replace the active `InputDecorationTheme` either when creating, or when applying an `InputDecoration`, e.g. with a `bool? mergeWithTheme` on all constructors, while applying more intuitive defaults (looking at [the `InputDecoration.collapsed()` named constructor](https://api.flutter.dev/flutter/material/InputDecoration/InputDecoration.collapsed.html), that would be a `false` rather than a `true`) EDIT: Note, I got started creating a small example on https://dartpad.dev to illustrate the issue, but writing code containing a textfield over there is an absolute nightmare once "Run" is executed due to the textfield acting as a cursor magnet making further code edits an incredibly frustrating experience :(
c: new feature,framework,f: material design,d: api docs,a: quality,c: proposal,P2,team-text-input,triaged-text-input
low
Critical
2,706,872,980
rust
After building from https://static.rust-lang.org/dist/rustc-1.83.0-src.tar.xz, rustc --version prints `-nightly`
Upacking https://static.rust-lang.org/dist/rustc-1.83.0-src.tar.xz and executing `./configure --enable-optimize-llvm --enable-extended --llvm-root=/usr/local --enable-profiler --enable-llvm-link-shared --enable-sanitizers --disable-docs --target=x86_64-unknown-linux-gnu && x.py install && rustc --version` produces the configuration file below, and prints > rustc 1.83.0-nightly (90b35a623 2024-11-26) (built from a source tarball) * The version string should not include `-nightly` <details> <summary>Produced config.toml</summary> ```toml # Use different pre-set defaults than the global defaults. # # See `src/bootstrap/defaults` for more information. # Note that this has no default value (x.py uses the defaults in `config.example.toml`). profile = 'dist' [llvm] # Indicates whether the LLVM build is a Release or Debug build optimize = true # Whether to build LLVM as a dynamically linked library (as opposed to statically linked). # Under the hood, this passes `--shared` to llvm-config. # NOTE: To avoid performing LTO multiple times, we suggest setting this to `true` when `thin-lto` is enabled. link-shared = true [build] # Which triples to build libraries (core/alloc/std/test/proc_macro) for. Each of these triples will # be bootstrapped from the build triple themselves. In other words, this is the list of triples for # which to build a library that can CROSS-COMPILE to that triple. # # Defaults to `host`. If you set this explicitly, you likely want to add all # host triples to this list as well in order for those host toolchains to be # able to compile programs for their native target. target = ['x86_64-unknown-linux-gnu'] # Whether to build documentation by default. If false, rustdoc and # friends will still be compiled but they will not be used to generate any # documentation. # # You can still build documentation when this is disabled by explicitly passing paths, # e.g. `x doc library`. docs = false # Enable a build of the extended Rust tool set which is not only the compiler # but also tools such as Cargo. This will also produce "combined installers" # which are used to install Rust and Cargo together. # The `tools` (check `config.example.toml` to see its default value) option specifies # which tools should be built if `extended = true`. # # This is disabled by default. extended = true # Build the sanitizer runtimes sanitizers = true # Build the profiler runtime (required when compiling with options that depend # on this runtime, such as `-C profile-generate` or `-C instrument-coverage`). profiler = true # Arguments passed to the `./configure` script, used during distcheck. You # probably won't fill this in but rather it's filled in by the `./configure` # script. Useful for debugging. configure-args = ['--enable-optimize-llvm', '--enable-extended', '--llvm-root=/usr/local', '--enable-profiler', '--enable-llvm-link-shared', '--enable-sanitizers', '--disable-docs', '--target=x86_64-unknown-linux-gnu'] [install] [rust] [target.x86_64-unknown-linux-gnu] # Path to the `llvm-config` binary of the installation of a custom LLVM to link # against. Note that if this is specified we don't compile LLVM at all for this # target. llvm-config = '/usr/local/bin/llvm-config' [dist] ``` </details>
T-bootstrap,C-bug
low
Critical
2,706,939,912
next.js
root not-found page not displayed when inside a route group
### Link to the code that reproduces this issue https://github.com/stefanprobst/issue-next-not-found-route-group ### To Reproduce 1. clone the [repo](https://github.com/stefanprobst/issue-next-not-found-route-group) 2. `pnpm install && pnpm build && pnpm start` 3. open http://localhost:3000/not-existing-route 4. see the built-in default not-found page being displayed, not the one from `/app/(app)/not-found.tsx` ### Current vs. Expected behavior the custom root not-found page is not shown for e.g. http://localhost:3000/not-existing-route because it lives in a route group (instead the built-in default 404 page is shown): ``` └── app └── (app) ├── layout.tsx ├── not-found.tsx └── page.tsx ``` it only works correctly when getting rid of the route group (check out the [no-route-group branch](https://github.com/stefanprobst/issue-next-not-found-route-group/tree/no-route-group)) ``` └── app ├── layout.tsx ├── not-found.tsx └── page.tsx ``` ### Provide environment information ```bash Operating System: Platform: linux Arch: x64 Version: #49-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 4 02:06:24 UTC 2024 Available memory (MB): 32041 Available CPU cores: 4 Binaries: Node: 22.11.0 npm: 10.9.0 Yarn: N/A pnpm: 9.14.2 Relevant Packages: next: 15.0.4-canary.32 // Latest available version is detected (15.0.4-canary.32). eslint-config-next: N/A react: 19.0.0-rc-b01722d5-20241114 react-dom: 19.0.0-rc-b01722d5-20241114 typescript: 5.7.2 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Navigation ### Which stage(s) are affected? (Select all that apply) next dev (local), next start (local) ### Additional context _No response_
bug,Navigation
low
Minor
2,706,941,447
kubernetes
possibility of resource reservation features
### What would you like to be added? What about the possibility of resource reservation? For example, in an online cluster with rescheduling capabilities, or when certain nodes (due to node maintenance or eviction from the cluster) need to be taken offline for some operational tasks. Suppose we have an online cluster with hundreds of nodes, and we use node labels to differentiate resource pools for different business units. If today I need to decommission a specific node or several nodes within a particular resource pool, I would need to "migrate" the pods on those nodes (i.e., evict the pods so they can be rescheduled to other nodes in different pools). We want to safely migrate the online services, so we are looking for a mechanism similar to "resource reservation." We want to preemptively reserve certain resources on the nodes before the pods are created, ensuring that our high-priority pods can definitely be scheduled successfully. ### Why is this needed? For some high-priority pods, we want them to remain intact during node maintenance, which is very important to us. If we cannot preempt them, we will not evict them. I know this seems to be a very complicated problem in a large cluster, and I may not have considered it comprehensively. Or if there is already a similar feature, that would be great.
sig/scheduling,sig/node,kind/feature,needs-triage
medium
Major
2,706,991,299
next.js
Cache control header is missing in response
### Link to the code that reproduces this issue https://github.com/zeckaissue/nextjs-missing-cache-control ### To Reproduce I created this repository as a working base to reproduce this bug locally. Unfortunately, I haven't been able to reproduce the bug locally so far. Because this bug is very random, it’s difficult for me to find the right way to reproduce it. I’m a bit lost on how to debug this issue... ### Current vs. Expected behavior Sometimes, mainly just after a deployment, the `cache-control` header is not included in the response. <details> <summary>Here is two response header of the same page:</summary> #### With the bug (Missing cache-control value) ![image](https://github.com/user-attachments/assets/efeae13c-e398-4029-9eda-bc9d831e379e) #### Without the bug ![image](https://github.com/user-attachments/assets/30a97676-b0a7-4319-83a3-c0970814d078) </details> ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000 Available memory (MB): 32768 Available CPU cores: 10 Binaries: Node: 20.17.0 npm: 10.8.2 Yarn: 1.22.22 pnpm: 9.5.0 Relevant Packages: next: 14.2.16 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended! eslint-config-next: 14.2.16 react: 18.2.0 react-dom: 18.2.0 typescript: 5.5.3 Next.js Config: output: standalone ``` ### Which area(s) are affected? (Select all that apply) Not sure ### Which stage(s) are affected? (Select all that apply) Other (Deployed) ### Additional context - I’m working on an App Router. - I have a custom Redis cache handler. - My app is deployed on Azure App Service and is behind Azure Front Door. ### Why it's an issue for me ? When the response doesn’t include a `Cache-Control` header, Azure Front Door keeps the response cached for an indefinite duration. As a result, even if the content changes, the page is not refreshed. ### Workaround used Before [v14.2.10](https://github.com/vercel/next.js/releases/tag/v14.2.10) I used a workaround by setting a custom response header through `next.config.js` as a fallback. With this configuration, when the bug occurred, instead of omitting the `Cache-Control `header, Next.js would return the fallback value (in this case, max-age=60). **Before [v14.2.10](https://github.com/vercel/next.js/releases/tag/v14.2.10) , the behavior was as follows:** If the page had its own`revalidate` value, the `Cache-Control` header was taken from this revalidate value. Otherwise, the value was taken from the Next.js configuration (next.config.js). So the next.js **So, the header configuration in `next.config.js` was not prioritized over the `revalidate` value defined individually for each route.** But this behaviour seems to be consider as a bug from nextjs team (https://github.com/vercel/next.js/issues/22319) (#69802) So after this [pr on v14.2.10](https://github.com/vercel/next.js/pull/69802) my workaround was not working anymore because all page get the same Cache-Control header. Here the next config workaround ```js const nextConfig = { headers: async () => { return [ { source: '/(.*)', headers: [ { key: 'Cache-Control', value: 'public, s-maxage=60, stale-while-revalidate=60', }, ], }, ]; }, } ``` As a potential workaround, I am looking for a way to modify the response just before it is sent to the client. This would allow me to add a Cache-Control header if one is not already present. 
I have opened a discussion about this: https://github.com/vercel/next.js/discussions/72240 Has anyone else encountered the same issue? Do you have any suggestions to help me debug it better and find potential solutions? Maybe related to: - https://github.com/vercel/next.js/issues/70213 - https://github.com/vercel/next.js/issues/22319
bug
low
Critical
2,707,012,379
react
[React 19] useTranslations Hook Causes "Expected a Suspended Thenable" Error in Async React Components Requiring Client-Side Rendering
### Bug Report **React version:** 19 --- ### Description of the Bug When creating an **async React component** and calling the `useTranslations` hook, which requires client-side rendering, React does not provide an error message indicating the issue. Instead, it throws an **internal error** with the following message: ``` Internal error: Error: Expected a suspended thenable. This is a bug in React. Please file an issue. at getSuspendedThenable ... at retryTask ... at performWork ... ``` It appears that React is not correctly handling the combination of `useTranslations` (a client-side rendering hook) inside an asynchronous component. This leads to React attempting to suspend improperly, resulting in the error. --- ### Steps to Reproduce 1. Create an **async React component**. 2. Use the `useTranslations` hook inside the component. 3. Attempt to render the component in a server-side context or without proper client-side rendering setup. 4. Observe the error described above. --- ### Link to Code Example Here is a minimal example demonstrating the issue: ```jsx import { useTranslations } from 'next-intl'; const AsyncComponent = async () => { const t = useTranslations(); // Requires client-side rendering return <div>{t('exampleKey')}</div>; }; export default AsyncComponent; ``` --- ### The Current Behavior Instead of providing a clear error indicating the need for client-side rendering or proper handling of hooks within async components, React throws the following error: ``` Internal error: Error: Expected a suspended thenable. This is a bug in React. Please file an issue. at getSuspendedThenable ... at retryTask ... at performWork ... ``` Additionally, errors such as `Invalid state: ReadableStream is already closed` and `failed to pipe response` are observed, leading to significant debugging overhead. --- ### The Expected Behavior React should: 1. Provide a **clear error message** that explains why `useTranslations` cannot be used in this context (e.g., "useTranslations must be called in a client-side rendering context"). 2. Avoid internal errors such as "Expected a suspended thenable." This would help developers quickly identify and fix the issue without requiring extensive debugging of React internals.
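For reference, next-intl documents a server-side API for exactly this case; a minimal sketch of that pattern (assuming the `next-intl/server` package and an App Router Server Component) avoids calling the client-oriented hook inside an async component:

```tsx
// Hypothetical rewrite of the example above using next-intl's server API,
// so no hook is called inside an async Server Component.
import { getTranslations } from 'next-intl/server';

const AsyncComponent = async () => {
  const t = await getTranslations(); // async, server-only counterpart of useTranslations
  return <div>{t('exampleKey')}</div>;
};

export default AsyncComponent;
```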
Status: Unconfirmed
low
Critical
2,707,016,436
rust
Tracking Issue for const_swap_nonoverlapping
<!-- Thank you for creating a tracking issue! Tracking issues are for tracking a feature from implementation to stabilization. Make sure to include the relevant RFC for the feature if it has one. If the new feature is small, it may be fine to skip the RFC process. In that case, you can use `issue = "none"` in your initial implementation PR. The reviewer will ask you to open a tracking issue if they agree your feature can be added without an RFC. --> Feature gate: `#![feature(const_swap_nonoverlapping)]` This is a tracking issue for making `swap_nonoverlapping` a `const fn`. <!-- Include a short description of the feature. --> ### Public API ```rust mod ptr { pub const unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize); } ``` ### Steps / History <!-- For larger features, more steps might be involved. If the feature is changed later, please add those PRs here as well. --> - [x] Split out of https://github.com/rust-lang/rust/issues/83163 - [ ] Resolve blocking issues - [ ] Final comment period (FCP)[^1] - [ ] Stabilization PR <!-- Once the feature has gone through a few release cycles and there are no unresolved questions left, the feature might be ready for stabilization. If this feature didn't go through the RFC process, a final comment period (FCP) is always needed before stabilization. This works as follows: A library API team member can kick off the stabilization process, at which point the rfcbot will ask all the team members to verify they agree with stabilization. Once enough members agree and there are no concerns, the final comment period begins: this issue will be marked as such and will be listed in the next This Week in Rust newsletter. If no blocking concerns are raised in that period of 10 days, a stabilization PR can be opened by anyone. --> ### Blocking Issues `ptr::swap_nonoverlapping` has a limitation currently where it can fail when the data-to-swap contains pointers that cross the "element boundary" of such a swap (i.e., `count > 0` and the pointer straddles the boundary between two `T`). Here's an example of code that unexpectedly fails: ```rust const { let mut ptr1 = &1; let mut ptr2 = &666; // Swap ptr1 and ptr2, bytewise. unsafe { ptr::swap_nonoverlapping( ptr::from_mut(&mut ptr1).cast::<u8>(), ptr::from_mut(&mut ptr2).cast::<u8>(), mem::size_of::<&i32>(), ); } // Make sure they still work. assert!(*ptr1 == 666); assert!(*ptr2 == 1); }; ``` The proper way to fix this is to implement https://github.com/rust-lang/const-eval/issues/72. [^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
T-libs-api,C-tracking-issue,I-lang-nominated
low
Minor
2,707,027,881
material-ui
[Autocomplete] Provide more details in the `details` param of `onChange` prop
### Steps to reproduce Steps: 1. Add an `Autocomplete` component with `multiple={true}` and `disableClearable={false}`. 2. Add values 3. Press the clear button ### Current behavior When `onChange` is called you get: • `event = {...}` • `value = []` • `reason = 'clear'` • `details = undefined` ### Expected behavior When `onChange` is called you get: • `event = {...}` • `value = []` • `reason = 'clear'` • `details = { option: [...clearedValues] }` ### Context When you have an `Autocomplete` with `multiple` and not `disableClearable` and you: 1. Select an option (op5) you get: • `event = {...}` • `value = [op1, op2, op5]` • `reason = 'selectOption'` • `details = { option: op5 }` 2. Remove an option (op5) you get: • `event = {...}` • `value = [op1, op2]` • `reason = 'removeOption'` • `details = { option: op5 }` ### Your environment <details> <summary><code>npx @mui/envinfo</code></summary> ``` System: OS: Windows 10 10.0.19045 Binaries: Node: 18.20.3 - C:\Program Files\nodejs\node.EXE npm: 10.9.0 - C:\Program Files\nodejs\npm.CMD pnpm: 8.14.0 - C:\Program Files\nodejs\pnpm.CMD Browsers: Chrome: Not Found Edge: Chromium (131.0.2903.51) npmPackages: @emotion/react: ^11.13.3 => 11.13.3 @emotion/styled: ^11.13.0 => 11.13.0 @mui/base: 5.0.0-beta.40 @mui/core-downloads-tracker: 6.1.7 @mui/icons-material: ^6.1.7 => 6.1.7 @mui/lab: 5.0.0-alpha.173 @mui/material: ^6.1.7 => 6.1.7 @mui/private-theming: 5.16.6 @mui/styled-engine: 5.16.6 @mui/system: 5.16.7 @mui/types: 7.2.19 @mui/utils: 5.16.6 @types/react: 18.3.12 react: ^18.3.1 => 18.3.1 react-dom: ^18.3.1 => 18.3.1 typescript: 4.9.5 ``` </details> I use Chrome: Version 131.0.6778.86 (Official Build) (64-bit) **Search keywords**: multiple onChange reason clear details disableClearable
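Until `details` carries the cleared options, one possible workaround (a sketch assuming MUI v6, string options, and React state holding the previous selection) is to diff against the prior value when `reason === 'clear'`:

```tsx
// Hypothetical workaround: recover the cleared options from the previous
// selection, since `details` is currently undefined when reason === 'clear'.
import * as React from 'react';
import { Autocomplete, TextField } from '@mui/material';

const options = ['op1', 'op2', 'op5'];

export default function ClearAwareAutocomplete() {
  const [selected, setSelected] = React.useState<string[]>([]);

  return (
    <Autocomplete
      multiple
      options={options}
      value={selected}
      onChange={(event, value, reason) => {
        if (reason === 'clear') {
          // The state still holds the options that were just cleared.
          console.log('cleared options:', selected);
        }
        setSelected(value);
      }}
      renderInput={(params) => <TextField {...params} label="Options" />}
    />
  );
}
```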
waiting for 👍,package: material-ui,component: autocomplete,enhancement
low
Minor
2,707,039,985
ollama
Remove macOS menu bar icon
Can we please get the ability to turn off the macOS menu bar icon?
feature request
low
Major
2,707,080,009
deno
Not implemented: BrotliDecompress.prototype.constructor
stderr: sharp: Installation error: Not implemented: BrotliDecompress.prototype.constructor gyp info it worked if it ends with ok gyp info using [email protected] gyp info using [email protected] | linux | x64 gyp info find Python using Python version 3.12.7 found at "/usr/bin/python3" gyp info spawn /usr/bin/python3 gyp info spawn args [ gyp info spawn args '/usr/share/nodejs/node-gyp/gyp/gyp_main.py', gyp info spawn args 'binding.gyp', gyp info spawn args '-f', gyp info spawn args 'make', gyp info spawn args '-I', gyp info spawn args '/home/test/deno/node_modules/.deno/[email protected]/node_modules/sharp/build/config.gypi', gyp info spawn args '-I', gyp info spawn args '/usr/share/nodejs/node-gyp/addon.gypi', gyp info spawn args '-I', gyp info spawn args '/usr/include/nodejs/common.gypi', gyp info spawn args '-Dlibrary=shared_library', gyp info spawn args '-Dvisibility=default', gyp info spawn args '-Dnode_root_dir=/usr/include/nodejs', gyp info spawn args '-Dnode_gyp_dir=/usr/share/nodejs/node-gyp', gyp info spawn args '-Dnode_lib_file=/usr/include/nodejs/<(target_arch)/node.lib', gyp info spawn args '-Dmodule_root_dir=/home/test/deno/node_modules/.deno/[email protected]/node_modules/sharp', gyp info spawn args '-Dnode_engine=v8', gyp info spawn args '--depth=.', gyp info spawn args '--no-parallel', gyp info spawn args '--generator-output', gyp info spawn args 'build', gyp info spawn args '-Goutput_dir=.' gyp info spawn args ] <string>:108: SyntaxWarning: invalid escape sequence '\/' gyp info spawn make gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ] ../src/common.cc:13:10: fatal error: vips/vips8: No such file or directory 13 | #include <vips/vips8> | ^~~~~~~~~~~~
bug,node compat
low
Critical
2,707,121,419
deno
Upgrading to Deno 2.1.0 causes mismatched versions in some node packages
Version: Deno 2.1.0 For my project, I am using the following stack: - `@trpc/server` v11.0.0 - `@trpc/client` v11.0.0 - `@trpc/react-query` v11.0.0 - `@tanstack/react-query` v5.62.0 Upgrading from Deno 2.0.6 to Deno 2.1.0 causes the following mismatched version error 👇 ``` Type 'import("file:///Users/x/Documents/perso/stay/$node_modules/.deno/@[email protected]/$node_modules/@tanstack/query-core/build/modern/hydration-OMuWWX9N").b' is not assignable to type 'import("file:///Users/x/Documents/perso/stay/$node_modules/.deno/@[email protected]/$node_modules/@tanstack/query-core/build/modern/hydration-BOBMySlm").b'. Property '#private' in type 'QueryClient' refers to a different member that cannot be accessed from within type 'QueryClient'.deno-ts(2322) context.d.ts(56, 5): The expected type comes from property 'queryClient' which is declared here on type 'IntrinsicAttributes & TRPCProviderProps<BuiltRouter<{ ctx: { currentUser: { id: string; firstName: string; lastName: string | null; email: string; encryptedPassword: string; } | null; }; meta: object; errorShape: DefaultErrorShape; transformer: true; }, { ...; }>, unknown>' ``` <img width="965" alt="image" src="https://github.com/user-attachments/assets/58688820-edb7-41af-96f2-e7e5e825897d"> Looking at the [Deno 2.1.0 release](https://github.com/denoland/deno/releases/tag/v2.1.0), I couldn't find the commit that makes this error appears
bug,needs investigation,node compat,node resolution
low
Critical
2,707,185,667
yt-dlp
Feature for Subtitle Translation embedding with Video
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm requesting a feature unrelated to a specific site - [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) ### Provide a description that is worded well enough to be understood - [ ] 1. If we download YT Videos from Korean Auto Generated or French Subtitle, it should be translated to another language like English etc. - [ ] 2. When I downloaded YT Video which has Thai ( Auto - Generated ) Subtitle and I will not see this Subtitle in MX Player. And make sure Auto - Generated can be Translated based on Selected language. - [ ] 3. Use Translation to get proper Subtitles in English or another language. - [ ] 4. Video and Audio quality selection should have grid for better visibility, but in List it made it hard for me to read the size that I want to choose also they are big text without spacing and sizes. It took time for me. - [ ] 5. Show bit rates Video and Audio quality selection in mb and kb to check Video and Audio quality. - [ ] 6. Skip any error coming from Subtitle "en-GB" etc ### Provide verbose output that clearly demonstrates the problem - [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell Nil ```
enhancement,incomplete,triage,core:post-processor
low
Critical
2,707,195,523
flutter
[Windows]: Blank content when using FancyZones
### Steps to reproduce 1. Install Microsoft's FancyZones, setup any kind of zone, 2. Run a Flutter-based app (in my case it's Rustdesk but demo is done with Flutter Simple Demo, timecode: 0:12) 3. Bind app to a zone you created (timecode: 0:15) 4. Restart the app 5. Graphics get corrupted, no content is showing (0:22) 6. Move the app out of the zone and restart the app (timecode: 0:28) 7. Everything is fine again. Reference video: https://www.youtube.com/watch?v=hr3hcWKMIhY ### Expected results Everything shows up correctly, no blanking. ### Actual results Content is blank, application is unusable. ### Code sample There's no way for me to provide this piece, I'm not a developer of neither of the apps. ### Screenshots or Video <details open> <summary>Video demonstration</summary> https://www.youtube.com/watch?v=hr3hcWKMIhY </details> ### Logs _No response_ ### Flutter Doctor output There's no way for me to provide this piece, I'm not a developer of neither of the apps.
platform-windows,a: desktop,P3,team-windows,triaged-windows
low
Major
2,707,201,686
deno
require not defined error when cjs module in npm package has no file extension
Version: Deno 2.1.0 Last working version `2.0.6`. ```console $ docker run --rm -it denoland/deno:alpine-2.0.6 run -A npm:@informalsystems/[email protected] --version Warning The following packages contained npm lifecycle scripts (preinstall/install/postinstall) that were not executed: ┠─ npm:[email protected] ┃ ┠─ This may cause the packages to not work correctly. ┠─ Lifecycle scripts are only supported when using a `node_modules` directory. ┠─ Enable it in your deno config file: ┖─ "nodeModulesDir": "auto 0.22.4 ``` But for `>=2.1.0`, I am getting error for commonjs. ```console $ docker run --rm -it denoland/deno:alpine-2.1.0 run -A npm:@informalsystems/[email protected] --version Warning The following packages contained npm lifecycle scripts (preinstall/install/postinstall) that were not executed: ┠─ npm:[email protected] ┃ ┠─ This may cause the packages to not work correctly. ┠─ Lifecycle scripts are only supported when using a `node_modules` directory. ┠─ Enable it in your deno config file: ┖─ "nodeModulesDir": "auto" error: Uncaught (in promise) ReferenceError: require is not defined at file:///deno-dir/npm/registry.npmjs.org/yargs/17.7.2/yargs:3:69 at loadESMFromCJS (node:module:761:21) at Module._compile (node:module:722:12) at Object.Module._extensions..js.Module._extensions..ts.Module._extensions..jsx.Module._extensions..tsx (node:module:754:10) at Module.load (node:module:662:32) at Function.Module._load (node:module:534:12) at Module.require (node:module:681:19) at require (node:module:797:16) at Object.<anonymous> (file:///deno-dir/npm/registry.npmjs.org/@informalsystems/quint/0.22.4/dist/src/cli.js:16:33) at Object.<anonymous> (file:///deno-dir/npm/registry.npmjs.org/@informalsystems/quint/0.22.4/dist/src/cli.js:356:4) info: Deno supports CommonJS modules in .cjs files, or when the closest package.json has a "type": "commonjs" option. hint: Rewrite this module to ESM, or change the file extension to .cjs, or add package.json next to the file with "type": "commonjs" option. docs: https://docs.deno.com/go/commonjs $ docker run --rm -it denoland/deno:alpine-2.1.1 run -A npm:@informalsystems/[email protected] --version < same error as before > ``` In `2.1.2`, I do get a suggestion for `--unstable-detect-cjs`. ```console $ docker run --rm -it denoland/deno:alpine-2.1.2 run -A npm:@informalsystems/[email protected] --version Warning The following packages contained npm lifecycle scripts (preinstall/install/postinstall) that were not executed: ┠─ npm:[email protected] ┃ ┠─ This may cause the packages to not work correctly. ┠─ Lifecycle scripts are only supported when using a `node_modules` directory. ┠─ Enable it in your deno config file: ┖─ "nodeModulesDir": "auto" error: Uncaught (in promise) ReferenceError: require is not defined at file:///deno-dir/npm/registry.npmjs.org/yargs/17.7.2/yargs:3:69 at loadESMFromCJS (node:module:777:21) at Module._compile (node:module:722:12) at loadMaybeCjs (node:module:770:10) at Object.Module._extensions..js (node:module:761:12) at Module.load (node:module:662:32) at Function.Module._load (node:module:534:12) at Module.require (node:module:681:19) at require (node:module:812:16) at Object.<anonymous> (file:///deno-dir/npm/registry.npmjs.org/@informalsystems/quint/0.22.4/dist/src/cli.js:16:33) info: Deno supports CommonJS modules in .cjs files, or when the closest package.json has a "type": "commonjs" option. 
hint: Rewrite this module to ESM, or change the file extension to .cjs, or add package.json next to the file with "type": "commonjs" option, or pass --unstable-detect-cjs flag to detect CommonJS when loading. docs: https://docs.deno.com/go/commonjs ``` But it still doesn't work with `--unstable-detect-cjs` flag. ```console $ docker run --rm -it denoland/deno:alpine-2.1.2 run --unstable-detect-cjs -A npm:@informalsystems/[email protected] --version ... < still same error > ```
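For reference, the workaround the warning itself points at is enabling a local `node_modules` directory in the Deno config; a minimal sketch is below (whether this also avoids the `require is not defined` error for this particular package is not verified here):

```jsonc
// deno.json - enable a node_modules directory so lifecycle scripts and
// extensionless CommonJS entry points are handled the npm way.
{
  "nodeModulesDir": "auto"
}
```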
bug
low
Critical
2,707,256,628
ollama
jina-clip-v2
It does have some interesting features for embeddings. https://huggingface.co/jinaai/jina-clip-v2
model request
low
Minor
2,707,268,922
flutter
[in_app_purchase_storekit] Add `AppStore.sync()` for StoreKit2 wrapper
### Use case I would like to request the addition of `AppStore.sync()` to allow users to manually restore their purchases. According to the documentation: > Include some mechanism in your app, such as a Restore Purchases button, to let users restore their in-app purchases. In rare cases when a user suspects the app isn’t showing all the transactions, call sync(). By calling sync(), you force the app to obtain transaction information and subscription status from the App Store. For details, see: - [sync() | Apple Developer Documentation](https://developer.apple.com/documentation/storekit/appstore/3791906-sync) - [Meet StoreKit 2 - WWDC21 - Videos - Apple Developer](https://developer.apple.com/videos/play/wwdc2021/10114?time=1316) ### Proposal Add `sync()` (Wrapper for StoreKit2's [`sync()`](https://developer.apple.com/documentation/storekit/appstore/3791906-sync)) to [`AppStore`](https://pub.dev/documentation/in_app_purchase_storekit/latest/store_kit_2_wrappers/AppStore-class.html) class.
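To make the proposal concrete, here is a hedged sketch of how the wrapper might be used from a "Restore Purchases" button. `sync()` does not exist in the plugin yet, the instance-style call is an assumption, and the import path is inferred from the docs URL above:

```dart
import 'package:flutter/material.dart';
import 'package:in_app_purchase_storekit/store_kit_2_wrappers.dart';

// Hypothetical: AppStore.sync() is the requested addition, not an existing API.
Widget restorePurchasesButton() {
  return ElevatedButton(
    onPressed: () async {
      // Would ask StoreKit 2 to refresh transaction and subscription status.
      await AppStore().sync();
    },
    child: const Text('Restore Purchases'),
  );
}
```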
c: new feature,platform-ios,p: in_app_purchase,package,c: proposal,P2,team-ios,triaged-ios
low
Minor
2,707,292,867
vscode
When using alt+up/down arrow to move code, indentation does not apply
Before: ![Image](https://github.com/user-attachments/assets/10e8420d-dc13-46b2-98d4-988e5fcdbc4b) After pressing **alt+up**: ![Image](https://github.com/user-attachments/assets/fa0d3980-23ac-4581-8090-d77ac731f961) As I recall, the moved line used to have its indentation adjusted to match the parent loop, but it is no longer re-indented as intended.
bug,editor-autoindent
low
Critical
2,707,301,606
node
Proposal: Add a method to check if a string is a OneByteString
### What is the problem this feature will solve? In the Buffer module, we have a series of methods for handling string writing and reading, such as `buffer.write("string", 'latin1')` and `buffer.write("string", 'utf16le')`. However, in some situations, we don't know the actual encoding of the string without checking every character. Checking the encoding will introduce overhead, especially when the string is large since SIMD is not accessible on the JavaScript side. In some string processing programs like the serialize framework (https://fury.apache.org/), high performance in string processing is highly beneficial for such programs. ### What is the feature you are proposing to solve the problem? Add `isOneByteString` function on the javascript side. ```javascript function isOneByteString(str) { if (typeof str !== "string") { return null; } return getIsOneByte(str); } ``` Add the getIsOneByte function on the C++ side. There is `GetIsOneByteSlow` for the slow mode, which is used when the place where it is being used cannot be compiled by TurboFan. And there is `GetIsOneByteFast` for the fast mode. This function is only applicable when the input string is a `FastOneByteString`, and in such a case, it will return true directly. ```c++ void GetIsOneByteSlow( const v8::FunctionCallbackInfo<v8::Value>& info) { DCHECK(ValidateCallbackInfo(info)); if (info.Length() != 1 || !info[0]->IsString()) { info.GetIsolate()->ThrowError( "isOneByteString() requires a single string argument."); return; } bool is_one_byte = Utils::OpenDirectHandle(*info[0].As<v8::String>()) ->IsOneByteRepresentation(); info.GetReturnValue().Set(is_one_byte); } bool GetIsOneByteFast(v8::Local<v8::Value> receiver, const v8::FastOneByteString &source) { return true; } ``` ### What alternatives have you considered? _No response_
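For context, a sketch of how the proposed API could be used to choose a Buffer encoding without scanning the string in JavaScript. `isOneByteString` is the proposed function, not an existing Node.js API; the regex fallback below is only there to make the sketch runnable today and is far slower than the native check would be:

```js
const { Buffer } = require('node:buffer');

// Fallback until the proposed native check exists: scan in JS.
const isOneByteString =
  globalThis.isOneByteString ?? ((s) => !/[\u0100-\uffff]/.test(s));

// Pick the cheapest encoding based on the (proposed) one-byte check.
function writeString(buf, offset, str) {
  const encoding = isOneByteString(str) ? 'latin1' : 'utf16le';
  return buf.write(str, offset, encoding);
}

const buf = Buffer.alloc(64);
console.log(writeString(buf, 0, 'hello')); // 5 bytes written
```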
feature request
low
Critical
2,707,367,869
vscode
Error when opening notebook files in tests
I have written an extension that automatically opens a notebook file. The relevant code for that is below: ```typescript const p = path.join(__dirname, "simple-notebook.ipynb"); const u = vscode.Uri.file(p); const nb = await vscode.workspace.openNotebookDocument(u); await vscode.window.showNotebookDocument(nb); ``` When I run the extension in debug mode, this works just fine. When I try to test this logic via the API, the following error occurs: ``` Error: Notebook Editor creation failure for document {"$mid":1,"external":"file:///c%3A/Users/.../simple-notebook.ipynb","path":"/c:/Users/.../simple-notebook.ipynb","scheme":"file"} ```
info-needed
low
Critical
2,707,381,023
go
crypto/x509: certificate with empty Authority Key Identifier extension considered invalid
### Go version go version go1.18.1 linux/amd64 ### Output of `go env` in your module/workspace: ```shell GO111MODULE="" GOARCH="amd64" GOBIN="" GOCACHE="/home/liu/.cache/go-build" GOENV="/home/liu/.config/go/env" GOEXE="" GOEXPERIMENT="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOINSECURE="" GOMODCACHE="/home/liu/go/pkg/mod" GONOPROXY="" GONOSUMDB="" GOOS="linux" GOPATH="/home/liu/go" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/usr/lib/go-1.18" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/usr/lib/go-1.18/pkg/tool/linux_amd64" GOVCS="" GOVERSION="go1.18.1" GCCGO="gccgo" GOAMD64="v1" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/dev/null" GOWORK="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build3649475886=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? I used crypto/x509 of golang to convert the der certificate to a pem certificate. For my test case, there was an Authority Key Identifier extension with an empty value. ### What did you see happen? Golang considered it an invalid Authority Key Identifier extension, but openssl and gnutls did not. ### What did you expect to see? According to rfc5280, the keyIdentifier, authorityCertIssuer, and authorityCertSerialNumber of the Authority Key Identifier extension are all OPTIONAL
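For illustration, a minimal sketch of the DER-to-PEM conversion described above (the file name and the use of `ParseCertificate` are assumptions about the reporter's setup); the parse step is where Go rejects the certificate that openssl and gnutls accept:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
)

func main() {
	der, err := os.ReadFile("cert.der")
	if err != nil {
		log.Fatal(err)
	}
	// Fails for the test certificate with the empty Authority Key Identifier.
	if _, err := x509.ParseCertificate(der); err != nil {
		log.Fatalf("parse: %v", err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```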
NeedsInvestigation
low
Critical
2,707,387,442
storybook
[Bug]: Incompatibility between storybook and vue devtools...?
### Describe the bug I am getting a strange error in the console when trying to configure Storybook to use Pinia within a Vue & Vite project.... I have a simple reproduction case at https://github.com/eric-g-97477-vue/dev-tools-plugin-test Either I have missed something in the configuration or there is a bug in either storybook or the vue devtools. I am not sure which is the case between those three options. ### Reproduction link https://github.com/eric-g-97477-vue/dev-tools-plugin-test ### Reproduction steps To create this, I did the following... (1) I created the default vue Project ``` $ npm create vue@latest 10:25:02 > npx > create-vue Vue.js - The Progressive JavaScript Framework ✔ Project name: … devtools_test ✔ Add TypeScript? … No / Yes ✔ Add JSX Support? … No / Yes ✔ Add Vue Router for Single Page Application development? … No / Yes ✔ Add Pinia for state management? … No / Yes ✔ Add Vitest for Unit Testing? … No / Yes ✔ Add an End-to-End Testing Solution? › No ✔ Add ESLint for code quality? › No Scaffolding project in /Users/eric.g/depot/vue/devtools_test... Done. Now run: cd devtools_test npm install npm run dev ``` (2) I initialized the project with `npm install`. (3) I ran `npx storybook@latest init` (4) I then setup storybook to be able to use Pinia. Based on https://storybook.js.org/docs/get-started/frameworks/vue3-vite, I added ``` import { setup, type Preview } from '@storybook/vue3'; import { createPinia } from 'pinia'; setup((app) => { app.use(createPinia()); }); ``` to `.storybook/preview.js`. (5) I then run `npm run storybook` and the below error appears in the console when I click around using the stories for button, header, or page. ``` index.js?v=eab36b85:2603 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'app') at Object.get (index.js?v=eab36b85:2603:57) at index.js?v=eab36b85:4507:13 at index.mjs?v=eab36b85:48:66 at index.mjs?v=eab36b85:48:56 ``` ### System ```bash Storybook Environment Info: System: OS: macOS 14.7 CPU: (16) arm64 Apple M3 Max Shell: 3.7.1 - /opt/homebrew/bin/fish Binaries: Node: 20.17.0 - ~/.nvm/versions/node/v20.17.0/bin/node Yarn: 1.22.22 - ~/.nvm/versions/node/v20.17.0/bin/yarn npm: 10.9.0 - ~/depot/myproject/node_modules/.bin/npm <----- active pnpm: 9.14.4 - /opt/homebrew/bin/pnpm Browsers: Chrome: 131.0.6778.86 Safari: 18.0.1 npmPackages: @storybook/addon-designs: 8.0.4 => 8.0.4 @storybook/addon-essentials: 8.4.5 => 8.4.5 @storybook/addon-interactions: 8.4.5 => 8.4.5 @storybook/addon-links: 8.4.5 => 8.4.5 @storybook/addon-themes: 8.4.5 => 8.4.5 @storybook/blocks: 8.4.5 => 8.4.5 @storybook/manager-api: 8.4.5 => 8.4.5 @storybook/test: 8.4.5 => 8.4.5 @storybook/theming: 8.4.5 => 8.4.5 @storybook/vue3: 8.4.5 => 8.4.5 @storybook/vue3-vite: 8.4.5 => 8.4.5 eslint-plugin-storybook: 0.11.1 => 0.11.1 storybook: 8.4.5 => 8.4.5 ``` ### Additional context _No response_
bug,vue3
low
Critical
2,707,391,761
next.js
Bug: Mangling of Constants Causing Runtime Errors in GraphQL-based Packages with Turbopack
### Link to the code that reproduces this issue https://github.com/0no-co/graphql.web/pull/43 ### To Reproduce 1. Install and configure Next.js 15 with Turbopack and a GraphQL-based package (e.g., `gql.tada`). 2. Start the application using Turbopack in development mode. 3. Encounter runtime errors where exported constants (e.g., `Kind`) are incorrectly referenced (e.g., as `e` instead of `e1`). ### Current vs. Expected behavior **Current behavior**: Some exported constants in `graphql.web`, such as `Kind` as `e`, are incorrectly referenced during the bundling process in Turbopack, leading to references like `e` instead of the intended `e1`. This causes runtime errors because the application expects the correct constant name (e.g., `e1` for `Kind`), but Turbopack provides an incorrect reference. **Expected behavior**: The exported constants should be preserved with their correct names during bundling. For example, `Kind` should not be mangled to `e` or any other incorrect name, and it should maintain its correct reference throughout the build and runtime. Example Code: https://github.com/0no-co/graphql.web/pull/43 ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:45 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8112 Available memory (MB): 16384 Available CPU cores: 8 Binaries: Node: 20.5.0 npm: 9.8.0 Yarn: N/A pnpm: 9.7.1 Relevant Packages: next: 15.0.3 // Latest available version is detected (15.0.3). react: 19.0.0-rc-66855b96-20241106 react-dom: 19.0.0-rc-66855b96-20241106 gql.tada: 1.8.10 graphql: 16.9.0 urql: 4.2.1 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Turbopack ### Which stage(s) are affected? (Select all that apply) next dev (local) ### Additional context _No response_
bug,Turbopack,linear: turbopack
low
Critical
2,707,393,362
ui
[bug]: "The component was not found. It may not exist at the registry. Please make sure it is a valid component."
### Describe the bug I was putting my v0 code into Cursor AI and it gave me this error after going through the whole procedure. I have searched for how to fix this, but none of the suggestions have helped. If someone could give me a few minutes of their time, I would be really thankful. ![image](https://github.com/user-attachments/assets/6195cb76-8306-49b0-a6f4-4cf642865a6c) ### Affected component/components styles ### How to reproduce npx shadcn@latest and add the v0 codebase ### Codesandbox/StackBlitz link _No response_ ### Logs _No response_ ### System Info ```bash Windows 10 ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,707,408,104
next.js
Source map warning on Next 15.0.3 with turbopack, Windows, Firefox
### Link to the code that reproduces this issue https://github.com/altbdoor/nextjs-sourcemap-warning-firefox-reproduction ### To Reproduce 1. Create a base reproduction template with `npx create-next-app -e reproduction-template` 2. Name the project as `test-ground` 3. `cd test-ground` 4. `npm run dev -- --turbo` 5. Open http://localhost:3000 in Windows Firefox 133.0 (64-bit) 6. Get warning on source map error ### Current vs. Expected behavior ``` Source map error: Error: Invalid URL: file://C%3A/Users/carbon/projects/test-ground/node_modules/next/src/client/app-bootstrap.ts Stack in the worker:URLImpl@resource://devtools/client/shared/vendor/whatwg-url.js:22:13 setup@resource://devtools/client/shared/vendor/whatwg-url.js:537:14 URL@resource://devtools/client/shared/vendor/whatwg-url.js:246:18 createSafeHandler/<@resource://devtools/client/shared/vendor/source-map/lib/util.js:181:17 computeSourceURL@resource://devtools/client/shared/vendor/source-map/lib/util.js:437:22 BasicSourceMapConsumer/</that._absoluteSources<@resource://devtools/client/shared/vendor/source-map/lib/source-map-consumer.js:213:23 BasicSourceMapConsumer/<@resource://devtools/client/shared/vendor/source-map/lib/source-map-consumer.js:212:33 Resource URL: http://localhost:3000/_next/static/chunks/node_modules_next_dist_client_239c40._.js Source Map URL: node_modules_next_dist_client_239c40._.js.map ``` <details><summary>Show warning image</summary> ![image](https://github.com/user-attachments/assets/cdec58ab-8a30-43c3-bff8-56aa7a179175) </details> Should not have the warning, I suppose? ### Provide environment information ```bash Operating System: Platform: win32 Arch: x64 Version: Windows 11 Pro Available memory (MB): 16206 Available CPU cores: 12 Binaries: Node: 22.11.0 npm: 10.9.0 Yarn: N/A pnpm: N/A Relevant Packages: next: 15.0.4-canary.32 // Latest available version is detected (15.0.4-canary.32). eslint-config-next: N/A react: 19.0.0-rc-b01722d5-20241114 react-dom: 19.0.0-rc-b01722d5-20241114 typescript: 5.3.3 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Turbopack ### Which stage(s) are affected? (Select all that apply) next dev (local) ### Additional context This warning **did not** appear for: - Windows Edge 131.0.2903.70 (Official build) (64-bit), all `npm run` commands - Windows Firefox 133.0 (64-bit), `npm run dev` without turbopack - Windows Firefox 133.0 (64-bit), `npm run start` after build Edit: tried it on Windows Firefox private mode with all extensions disabled, and I still see the warning.
please add a complete reproduction,Turbopack,linear: turbopack
medium
Critical
2,707,498,853
rust
Scrutinee dropped after if-let body
Objects inside block expressions in the scrutinee of an if-let statement are not dropped before the body. This is inconsistent both with declaring a temporary variable, e.g. `x` in test #2, and let-else behavior in test #3. I tried this code: ```rust fn main() { struct S(i32); impl Drop for S { fn drop(&mut self) { println!("Dropped S {}", self.0) } } println!("--- test 1"); { println!("Before if-let"); if let _ = { S(1).0 } { println!("Inside body"); } } println!("--- test 2"); { println!("Before if-let"); let x = { S(2).0 }; if let _ = x { println!("Inside body"); } } println!("--- test 3"); { println!("Before let-else"); let _ = ({ S(3).0 }) else { unreachable!() }; println!("After let-else"); } } ``` I expected to see this happen: ``` --- test 1 Before if-let Dropped S 1 Inside body --- test 2 Before if-let Dropped S 2 Inside body --- test 3 Before let-else Dropped S 3 After let-else ``` Instead, this happened: ``` --- test 1 Before if-let Inside body Dropped S 1 --- test 2 Before if-let Dropped S 2 Inside body --- test 3 Before let-else Dropped S 3 After let-else ``` ### Meta Same behavior on both stable and nightly. `rustc --version --verbose`: ``` rustc 1.82.0 (f6e511eec 2024-10-15) binary: rustc commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14 commit-date: 2024-10-15 host: x86_64-unknown-linux-gnu release: 1.82.0 LLVM version: 19.1.1 ```
A-lifetimes,T-lang,C-discussion
low
Major
2,707,584,111
rust
Tracking issue for release notes of #132056: Stabilize `#[diagnostic::do_not_recommend]`
This issue tracks the release notes text for #132056. ### Steps - [ ] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ````markdown # Language - [Stabilize `#[diagnostic::do_not_recommend]`](https://github.com/rust-lang/rust/pull/132056) ```` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ````markdown ```` cc @weiznich, @compiler-errors -- origin issue/PR authors and assignees for starting to draft text
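For reference when drafting the text, a minimal sketch of the attribute being stabilized (the trait and impl are illustrative, not taken from the PR): it is placed on a trait implementation so that trait-resolution diagnostics stop suggesting that impl.

```rust
pub trait IntoPayload {
    fn into_payload(self) -> Vec<u8>;
}

// Hide this blanket impl from "the following implementations were found" /
// "required for ..." style suggestions in error messages.
#[diagnostic::do_not_recommend]
impl<T: AsRef<[u8]>> IntoPayload for T {
    fn into_payload(self) -> Vec<u8> {
        self.as_ref().to_vec()
    }
}
```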
T-lang,relnotes,relnotes-tracking-issue
low
Critical
2,707,589,853
rust
Tracking issue for release notes of #132397: Make missing_abi lint warn-by-default.
This issue tracks the release notes text for #132397. ### Steps - [x] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ````markdown # Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...) - [Make missing_abi lint warn-by-default.](https://github.com/rust-lang/rust/pull/132397) ```` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ````markdown Omitting the ABI in extern blocks and functions (e.g. `extern {}` and `extern fn`) will now result in a warning. Omitting the ABI after the `extern` keyword has always implicitly resulted in the `"C"` ABI. It is now recommended to explicitly specify the `"C"` ABI (e.g. `extern "C" {}` and `extern "C" fn`). ```` cc @m-ou-se, @traviscross -- origin issue/PR authors and assignees for starting to draft text
T-lang,relnotes,relnotes-tracking-issue
low
Minor
2,707,591,680
godot
Vulkan API Error: err != VK_SUCCESS in fence_wait and command_queue_execute_and_present
### Tested versions Godot_v4.3-stable ### System information Debian 12 - Godot_v4.3-stable - Vulkan (1.2.175) forward - dedicated NVIDIA GeForce GTX 710 ### Issue description When running a project in Godot using Vulkan, the following errors appear in the console: ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED at: fence_wait (drivers/vulkan/rendering_device_driver_vulkan.cpp:2066) ERROR: Condition "err != VK_SUCCESS" is true. Returning: FAILED at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266) ### Steps to reproduce 1. Open the project in Godot. 2. Start the project and render a scene. 3. After a while, observe the console where the error messages appear during rendering. (Note that the errors only start appearing after the project has been running for a while.) ### Minimal reproduction project (MRP) This issue occurs in all my projects, so it is not specific to a particular setup or scene. You can replicate the problem by creating a new project with default settings, using Vulkan as the rendering backend, and running it for a while until the error appears during scene rendering.
bug,platform:linuxbsd,topic:rendering,needs testing
low
Critical
2,707,632,763
flutter
[Android] CJK IME candidate window position error with external keyboard on Android
### Steps to reproduce 1. Connect a physical keyboard (e.g., a Bluetooth keyboard) to your Android device. 2. Switch to a CJK input method (e.g., Chinese) and start typing to trigger candidate words. ### Expected results The candidate window should be displayed correctly near the cursor to ensure a seamless user experience. With the increasing popularity of Android tablets, many of which are commonly used with physical keyboards, it has become essential to address this issue. Correctly positioning the candidate window is critical for maintaining usability and consistency across devices, especially for users relying on CJK input methods. Fixing this bug will significantly improve the input experience for a growing segment of Android users. ### Actual results On Android devices, the candidate window for the input method is incorrectly displayed in the lower-left corner of the screen. In contrast, on iOS and desktop devices, the candidate window appears in the correct position. I used Gboard as the input method on Android and tested other input methods as well, but the issue persisted. This suggests that the problem is not related to a specific input method. ### Code sample <details open><summary>Code sample</summary> This is a very simple example that contains only a textfiled. ```dart import 'package:flutter/material.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( body: Center( child: TextField(), ), ), ); } } ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> ## Android (incorrect position) https://github.com/user-attachments/assets/a8cd4819-7af1-4910-8d80-4a6400d13d1f ## ios (correct position) https://github.com/user-attachments/assets/0876ab3b-bfae-4fbb-ab30-4630f2093074 </details> ### Logs _No response_ ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [✓] Flutter (Channel beta, 3.27.0-0.2.pre, on macOS 15.1.1 24B91 darwin-arm64, locale zh-Hans-CN) • Flutter version 3.27.0-0.2.pre on channel beta at /Users/alex/development/flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision fc011960a2 (2 weeks ago), 2024-11-14 12:19:18 -0800 • Engine revision 397deba30f • Dart version 3.6.0 (build 3.6.0-334.4.beta) • DevTools version 2.40.1 • Pub download mirror https://pub.flutter-io.cn • Flutter download mirror https://storage.flutter-io.cn [✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0) • Android SDK at /Users/alex/development/Android • Platform android-35, build-tools 35.0.0 • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 16.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 16B40 • CocoaPods version 1.16.2 [✗] Chrome - develop for the web (Cannot find Chrome executable at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome) ! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable. 
[✓] Android Studio (version 2024.2) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11) [✓] IntelliJ IDEA Ultimate Edition (version 2024.3) • IntelliJ at /Applications/IntelliJ IDEA Ultimate.app • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart [✓] VS Code (version 1.95.3) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension can be installed from: 🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter [✓] Connected device (3 available) • 晦气4 (mobile) • 00008103-000A488102D9001E • ios • iOS 18.2 22C5142a • macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64 ! Error: Browsing on the local area network for iPhone SE. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac. The device must be opted into Developer Mode to connect wirelessly. (code -27) [!] Network resources ✗ A cryptographic error occurred while checking "https://cocoapods.org/": Connection terminated during handshake You may be experiencing a man-in-the-middle attack, your network may be compromised, or you may have malware installed on your computer. ! Doctor found issues in 2 categories. ``` </details>
a: text input,platform-android,a: internationalization,P2,team-text-input,triaged-text-input
low
Critical
2,707,771,502
ollama
Audio to audio models
Hi, are there any plans to add audio-to-audio support? There are a couple of open-source models which provide that.
feature request
low
Minor
2,707,783,663
rust
Unnecessarily complex suggestion when using extraneous reference type
### Code ```Rust fn main() { let a: Vec<i32> = Vec::new(); let ref_a = &a; let mut b: Vec<i32> = Vec::new(); b.extend(&ref_a); } ``` ### Current output ```Shell error[E0277]: `&&Vec<i32>` is not an iterator --> src/main.rs:5:14 | 5 | b.extend(&ref_a); | ------ ^^^^^^ `&&Vec<i32>` is not an iterator | | | required by a bound introduced by this call | = help: the trait `Iterator` is not implemented for `&&Vec<i32>`, which is required by `&&Vec<i32>: IntoIterator` = note: required for `&&Vec<i32>` to implement `IntoIterator` note: required by a bound in `extend` --> /playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/collect.rs:450:18 | 450 | fn extend<T: IntoIterator<Item = A>>(&mut self, iter: T); | ^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `Extend::extend` help: consider dereferencing here | 5 | b.extend(&*ref_a); | + For more information about this error, try `rustc --explain E0277`. ``` ### Desired output ```Shell help: consider removing the borrow ``` ### Rationale and extra context The part that says: ``` help: consider dereferencing here | 5 | b.extend(&*ref_a); ``` seems overly complicated, since a simpler solution is to just remove the borrow. It's possible there are cases when such a reborrow is needed, but when it is not, the simpler suggestion would be welcome. Note that the message `help: consider removing the borrow` already exists in other situations (e.g. passing an `&i32` argument to a function that expects a `i32`). ### Other cases ```Rust ``` ### Rust Version ```Shell rustc 1.82.0 (f6e511eec 2024-10-15) binary: rustc commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14 commit-date: 2024-10-15 host: x86_64-unknown-linux-gnu release: 1.82.0 LLVM version: 19.1.1 ``` ### Anything else? _No response_
A-diagnostics,T-compiler,A-suggestion-diagnostics
low
Critical
2,707,932,696
godot
"Make Local" will still reference original until editor restart.
### Tested versions - Reproducible in v4.3.stable.official [77dcf97d8], v4.4.dev5.official [9e6098432] ### System information Windows 11 Vulkan forward + dedicated Nvidia GPU 3080 ### Issue description When a scene is made local with "Make Local", resources attached to the scene are not made unique until after an editor restart. https://github.com/user-attachments/assets/3f9fe79b-0f98-4606-bc6b-2a4d465a6aac ### Steps to reproduce Add a scene into the scene tree, create a duplicate or drag in an additional instance of the scene, make one of the scenes local with "Make Local", and SAVE THE SCENE. Go into the inspector panel and change a property, for example the albedo color, and observe that the scene that was not made local and still references the original scene and resources also changes. Restart the editor to get the first scene to reference the original again. ### Minimal reproduction project (MRP) [MRP](https://share.childlearning.club/s/RgpoM2dxcPkmpG7) --- - NOTE: An editor restart can be avoided by going into the inspector, first making the mesh unique, and then selecting "Make Unique" on the surface material before making any changes to the albedo color. In that case everything works the way I would expect "Make Local" to work, or at least gives the same result as "Make Local" followed by an editor restart. To be clear, I don't know whether "Make Local" is supposed to make sub-resources unique; it just appears to be what it is doing. And if that is what it is intended to do, it is not doing it until after the editor restart. - EDIT: Just found that the same behavior exists when selecting a scene from the scene tree and copying its surface material over to the surface material override. All scenes that reference the original scene show a change when this scene's surface material override's albedo color is changed, until an editor restart, after which they revert back to the original.
bug,topic:editor
low
Minor
2,707,933,645
tauri
[feat] Include local NSH files in windows NSIS hooks
### Describe the problem Would be awesome to be able to use the !include feature in our `hooks.nsi` file so that we can abstract our nsi/nsh files. ### Describe the solution you'd like Not sure if this is already possible but attempting to do `!include "Example.nsh"` doesn't work neither does prepending "./" or ".\". Perhaps we need the `windows.nsis.installerHooks` to take in an array instead of a string so we can register other nsh/nsi files in there as well? ### Alternatives considered Currently I just paste everything into my .nsi file which is just messy but does work. ### Additional context The context is that I am working on an image viewer and using the `FileAssociation.nsh` script from the NSIS documentation: https://nsis.sourceforge.io/File_Association
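To make the request concrete, here is a hypothetical shape of the configuration if `installerHooks` accepted an array. This is the proposal, not current Tauri behaviour, and the file names and exact config path are illustrative only:

```jsonc
// tauri.conf.json (hypothetical) - register extra NSIS sources alongside the hooks file.
{
  "bundle": {
    "windows": {
      "nsis": {
        "installerHooks": ["./installer/hooks.nsi", "./installer/FileAssociation.nsh"]
      }
    }
  }
}
```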
type: feature request
low
Minor
2,708,035,370
yt-dlp
Add support for tvmonaco.com
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting a new site support request - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required ### Region France ### Example URLs https://videos.tvmonaco.com/content/la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc ### Provide a description that is worded well enough to be understood I can't download videos from tvmonaco.com: ERROR: Unsupported URL: https://videos.tvmonaco.com/content/la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', 'https://videos.tvmonaco.com/content/la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] [00dcde728] [debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.6.52-gentoo-x86_64-Intel-R-_Core-TM-_i5-3550_CPU_@_3.30GHz-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40) [debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1 [debug] Optional libraries: brotli-1.1.0, certifi-3024.7.22, pycrypto-3.21.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.2.3 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests [debug] Loaded 1837 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected]) [generic] Extracting URL: https://videos.tvmonaco.com/content/la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc [generic] la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc: Downloading webpage WARNING: [generic] Falling back on generic information extractor [generic] la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc: Extracting information [debug] Looking for embeds ERROR: Unsupported URL: 
https://videos.tvmonaco.com/content/la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc Traceback (most recent call last): File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1624, in wrapper return func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 1759, in __extract_info ie_result = ie.extract(url) ^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract ie_result = self._real_extract(url) ^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/yt_dlp/extractor/generic.py", line 2553, in _real_extract raise UnsupportedError(url) yt_dlp.utils.UnsupportedError: Unsupported URL: https://videos.tvmonaco.com/content/la-grande-traversee-des-alpes-ep01-la-premiere-semaine-de-marche-la-region-du-mont-blanc ```
site-request,triage
low
Critical
2,708,045,555
godot
Overriding virtual properties in C# leads to errors in the generated code.
### Tested versions 4.2 and 4.3 ### System information Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 Ti (NVIDIA; 32.0.15.6603) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads) ### Issue description Overriding virtual properties in C# leads to errors in the generated code. ### Steps to reproduce Uncomment override code in SimpleOverride.cs ### Minimal reproduction project (MRP) [Scripts.zip](https://github.com/user-attachments/files/17966933/Scripts.zip) Included just a simple scripts folder for a fresh Godot project.
needs testing,topic:dotnet
low
Critical
2,708,078,867
pytorch
F.cross_entropy unexpectedly slower than F.log_softmax + torch.gather
I’m working on some training code that computes the total log probabilities of prediction sequences (i.e. outputs from a language model). I had previously implemented this using `F.log_softmax` followed by `torch.gather`, but realized this can also be done with `F.cross_entropy` with `reduction='none'` followed by a sum operation over the last dimension. I rewrote my code using `F.cross_entropy` thinking it would speed up my code since it’s a single builtin function instead of two but to my surprise, training is now significantly slower. Here’s a hacky microbenchmark script capturing what I’m talking about: ``` import torch import torch.nn.functional as F import time torch.manual_seed(42) batch_size, seq_len, vocab_size = 2, 512, 50257 logits = torch.randn(batch_size, seq_len, vocab_size) labels = torch.randint(0, vocab_size, (batch_size, seq_len)) labels[labels % 3 == 0] = -100 labels_safe = labels.detach() labels_safe[labels_safe == -100] = 0 logits = logits.to("cuda") labels = labels.to("cuda") labels_safe = labels_safe.to("cuda") def benchmark(func, num_iters=20): torch.cuda.synchronize() start = time.perf_counter() for _ in range(num_iters): value = func() torch.cuda.synchronize() end = time.perf_counter() return (end - start) / num_iters, value def original_method(): token_logprobs = F.log_softmax(logits, dim=-1) gathered_logprobs = torch.gather( token_logprobs, 2, labels_safe.unsqueeze(-1) ).squeeze(-1) mask = (labels_safe != -100).float() loss = (gathered_logprobs * mask).sum(dim=-1) return -loss def new_method(): token_loss = F.cross_entropy( logits.permute(0, 2, 1), labels, reduction="none", ) loss = token_loss.sum(dim=-1) return loss num_iters = 20 original_time, original_value = benchmark(original_method, num_iters) new_time, new_value = benchmark(new_method, num_iters) print(f"Original Average Time: {original_time * 1e3:.3f} ms") print(f"Original Value: {original_value.tolist()}") print(f"New Average Time: {new_time * 1e3:.3f} ms") print(f"New Value: {new_value.tolist()}") ``` Outputs on 1x A100 GPU: ``` Original Average Time: 2.653 ms Original Value: [5802.1328125, 5825.033203125] New Average Time: 76.821 ms New Value: [5802.1328125, 5825.0341796875] ``` I thought it might be due to the `reduction='none'`, but commenting it out doesn’t lead to any noticeable speedup. I'm on version 2.5.0. Does anyone know what might be happening here? cc @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
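For comparison, another variant that may be worth timing (this is an assumption about the cause, not a confirmed fix): flattening to `(N*T, C)` instead of permuting to `(N, C, T)` keeps the same semantics, since cross-entropy with `ignore_index=-100` contributes zero loss at the ignored positions, and often takes a different code path than the channel-dim layout:

```python
def flattened_method():
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab_size),   # (batch*seq, vocab)
        labels.reshape(-1),               # (batch*seq,)
        reduction="none",
        ignore_index=-100,                # ignored positions contribute 0 loss
    )
    return token_loss.reshape(batch_size, seq_len).sum(dim=-1)
```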
module: performance,module: nn,triaged
low
Major
2,708,094,785
go
cmd/internal/testdir: Test/fixedbugs/issue70189.go failures
``` #!watchflakes default <- pkg == "cmd/internal/testdir" && test == "Test/fixedbugs/issue70189.go" ``` Issue created automatically to collect these failures. Example ([log](https://ci.chromium.org/b/8729828567881424129)): === RUN Test/fixedbugs/issue70189.go === PAUSE Test/fixedbugs/issue70189.go === CONT Test/fixedbugs/issue70189.go testdir_test.go:147: exit status 1 go: error obtaining buildID for go tool link: fork/exec /home/swarming/.swarming/w/ir/x/w/goroot/pkg/tool/netbsd_arm/link: bad file descriptor --- FAIL: Test/fixedbugs/issue70189.go (789.44s) — [watchflakes](https://go.dev/wiki/Watchflakes)
NeedsInvestigation
low
Critical
2,708,115,612
PowerToys
App picker
### Description of the new feature / enhancement Could you add a feature that forces an app picker panel similar to Android's? ![Image](https://github.com/user-attachments/assets/ea5f6291-34c0-4d5f-a918-92bc588a58b4) Something similar to this app: https://switchbar.com/ ### Scenario when this would be used? Let's say you click a link from an app and Chrome is your default browser. If you have 3 Chrome profiles and one of them is already open, the link always goes to the last profile used. But if you go through the Windows default-app selector, you end up on the Chrome home page instead of being able to select the profile you need. By the way, this picker always appears the first time after you install a new browser, but after some use it disappears without notice. ![Image](https://github.com/user-attachments/assets/5a153ad7-8f79-4396-ba9c-844a2969b49d) ### Supporting information _No response_
Idea-New PowerToy,Needs-Triage
low
Minor
2,708,191,302
excalidraw
Lock position of elements
I often use Excalidraw with a digital pen, and I’ve noticed that accidentally moving elements is a recurring issue. Whenever I touch elements on the interface with the pen, they tend to move unintentionally. To address this, I suggest implementing a "Lock Position" option that prevents elements from being accidentally moved while still allowing other interactions. This option could be added to the context menu (right-click menu), alongside the existing "Lock" feature. Unlike the current "Lock" option—which completely disables interaction with the element—this new feature would allow users to: - Edit text blocks without risk of moving the element. - Add text to elements or other elements in a natural workflow. - Resize small elements intuitively without unintentional displacement. ![Image](https://github.com/user-attachments/assets/4540a0bd-3217-4c01-b5d0-585afdb653ee)
enhancement
low
Minor
2,708,275,375
PowerToys
Mouse Without Borders copy folder
### Description of the new feature / enhancement Mouse Without Borders tells me that I can't copy a folder directly; I have to package it into a zip file first and then copy that. Why doesn't PowerToys do this automatically? It could offer an option that packages the folder, sends it, and unzips it into a specified directory. In addition, the 100 MB file copy size limit should not be fixed; it could be determined by the user. Transferring 100 MB over the Internet is indeed not small, but on a Gigabit LAN it takes less than a second, so larger files are acceptable. ### Scenario when this would be used? This would improve the efficiency of file transfer and avoid manual packaging/decompression. Considering how common Gigabit LAN hardware is nowadays, I think it is reasonable to unlock the file size limit. ### Supporting information _No response_
Idea-Enhancement,Needs-Triage,Product-Mouse Without Borders
low
Minor
2,708,279,034
electron
`BaseWindow.fromWebContents()`
### Preflight Checklist - [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success. ### Problem Description While it is possible to determine the `BrowserWindow` from a `WebContents` instance using `BrowserWindow.fromWebContents()`, there is not a similar method for `BaseWindow`. ### Proposed Solution New static method: `BaseWindow.fromWebContents()` ### Alternatives Considered ```TS export function baseWindowFromWebContents( wc: Electron.WebContents ): Electron.BaseWindow | null { const allWindows = BaseWindow.getAllWindows() for (const win of allWindows) { if ( win.contentView.children.some( // This should probably handle nested views (child) => child instanceof WebContentsView && child.webContents === wc ) ) { return win } } return null } ``` ### Additional Information Assuming that `BrowserWindow` is a subclass of `BaseWindow`, this method should also work for `BrowserWindow.webContents` instances.
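For completeness, a sketch of how the proposed static method would read at a call site (hypothetical, since `BaseWindow.fromWebContents()` does not exist yet):

```TS
import { BaseWindow, ipcMain } from 'electron'

// Hypothetical: BaseWindow.fromWebContents() is the requested API.
ipcMain.on('close-window', (event) => {
  const win = BaseWindow.fromWebContents(event.sender)
  win?.close()
})
```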
enhancement :sparkles:
low
Major
2,708,292,027
vscode
Code-Snippet file has no icon
Does this issue occur when all extensions are disabled?: Yes - VS Code Version: 1.95.3 - OS Version: Windows 11 The new snippet language added in #189176 doesn't have a default icon. Steps to Reproduce: 1. Create a `snippets.code-snippets` file 2. It has no JSON icon ❌; it should get the same icon as `json`/`jsonc`/`jsonl` ![Image](https://github.com/user-attachments/assets/21cc7be0-d3e4-43ee-a609-a514795f185c)
feature-request,snippets
low
Critical
2,708,295,539
vscode
PowerShell integrated terminal is launched in the system directory when cwd contains square brackets
Does this issue occur when all extensions are disabled?: Yes - VS Code Version: 1.95.3 - OS Version: Windows 10 22H2 Steps to Reproduce: 1. Create a directory `D:\New[folder]`. 2. Open the directory with VSCode. 3. Create a new terminal. 4. The terminal output displays `PS C:\Windows\System32\WindowsPowerShell\v1.0>` instead of `PS D:\New[folder]>`. I know this was already reported in #70432 and is arguably an external issue, but I believe it can be worked around in VSCode itself. As far as I can see, the PowerShell terminal is created by running the following command ([source](https://github.com/microsoft/vscode/blob/df74071da5ba92662305adf50ed926370f150d7b/src/vs/platform/terminal/node/terminalEnvironment.ts#L340)): `C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe '-noexit' '-command' 'try { . "<path-to-the-vscode>\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1" } catch {}'` To guarantee the expected current directory, `Set-Location -LiteralPath '<cwd>'` can be called. Note that each single quote in `<cwd>` must be escaped with two single quotes (`'` -> `''`). Also, `'<cwd>'` should be used instead of `"<cwd>"` to avoid having to escape the other special characters used in PowerShell strings.
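Concretely, for the repro directory the resulting `-command` payload could look like this (a sketch following the quoting rules above, not the exact code VS Code generates):

```powershell
# Prepend a Set-Location to the existing payload, escaping any single quotes
# in the cwd by doubling them.
Set-Location -LiteralPath 'D:\New[folder]'; try { . "<path-to-the-vscode>\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1" } catch {}
```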
bug,confirmed,terminal-shell-pwsh
low
Critical
2,708,307,216
pytorch
Should coordinator_rank in class _DistWrapper be the global_rank instead of local rank in its process group?
### 🐛 Describe the bug For the class `_DistWrapper` in dcp, its `coordinator_rank` has been set to local rank of its process group: https://github.com/pytorch/pytorch/blob/c2fa544472f5fdc9825901e63a40d5ed840ca61e/torch/distributed/checkpoint/utils.py#L71 However, in its method of `gather_object()`, the `coordinator_rank` has been used as `destination` of `dist.gather_object`, which should be the global_rank instead of local_rank: ``` def gather_object(self, object: T) -> Optional[List[T]]: """Implement functionality similar to c10d::gather_object but without distributed enabled.""" if self.use_dist: gather_objs = ( cast(List[T], [None] * dist.get_world_size(self.group)) if self.is_coordinator else None ) dist.gather_object( obj=object, object_gather_list=gather_objs if self.is_coordinator else None, dst=self.coordinator_rank, group=self.group, ) result = gather_objs else: result = [object] return result ``` ### Versions 2.5.1 cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @pradeepfn
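If the intent is indeed "rank 0 of the group", a minimal sketch of the translation that appears to be missing (assuming the coordinator should be identified by its global rank when passed as `dst`):

```python
import torch.distributed as dist

def global_coordinator_rank(group, local_coordinator_rank: int = 0) -> int:
    # Map the group-local coordinator rank to the global rank expected by
    # collectives such as dist.gather_object(..., dst=...).
    return dist.get_global_rank(group, local_coordinator_rank)
```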
oncall: distributed,triaged,oncall: distributed checkpointing
low
Critical
2,708,309,624
transformers
Unexpected output of _flash_attention_forward() for cross attention
### System Info My environment: > transformers 4.44.1 > flash-attn 2.6.3 ### Who can help? _No response_ ### Information - [ ] My own modified scripts ### Tasks - [ ] My own task or dataset (give details below) ### Reproduction ```py import torch from transformers.modeling_flash_attention_utils import _flash_attention_forward query = torch.randn(4, 20,1, 32).cuda().half() key = torch.randn(4, 10, 1, 32).cuda().half() value = key unmasked = _flash_attention_forward(query,key,value, attention_mask=None, query_length=20, is_causal=False) masked = _flash_attention_forward(query,key,value, attention_mask=torch.ones((4, 10)).cuda().bool(), query_length=20, is_causal=False) breakpoint() ``` ### Expected behavior In my understanding, the attention_mask has a size of `(batch_size, seq_len)` where 1 stands for the position of non-padding tokens, so an all-one mask should lead to the same result as no mask provided. However, the outputs are significantly different. ``` (Pdb) print(unmasked, masked) tensor([[[[ 0.4114, -0.3369, 0.6221, ..., -0.4475, -0.2361, 0.3022]], [[ 0.2480, -0.1396, 0.1614, ..., -0.0728, -0.2788, 0.3950]], [[ 0.3828, -0.1323, 0.2101, ..., -0.4751, -0.0179, 0.3181]], ..., [[ 0.2654, -0.3137, 0.1637, ..., 0.3464, -0.6318, 0.4377]], [[ 0.5464, -0.2251, 0.4897, ..., -0.3184, -0.1769, 0.3203]], [[-0.1514, 0.3037, -0.1609, ..., -0.4651, -0.1842, 0.3386]]], [[[ 0.1772, 0.3240, -1.1143, ..., 0.1444, 0.5684, 0.3770]], [[ 0.4187, 0.2264, 0.2446, ..., 0.7036, 0.3003, 0.2981]], [[ 0.1241, 0.1919, -0.5239, ..., -0.1606, 0.5210, -0.1896]], ..., [[ 0.5225, -0.2333, 0.1004, ..., 0.0297, 1.0059, -0.1329]], [[ 0.4304, 0.4819, 0.1232, ..., 0.5234, 0.5210, 0.2379]], [[ 0.5361, -0.0976, -0.3975, ..., 0.2217, 0.8481, 0.0780]]], [[[-0.6426, -0.1761, 0.3420, ..., 0.4404, 0.5273, 0.0485]], [[-0.2313, 0.5249, 0.8975, ..., 0.2517, 0.2163, 0.3628]], [[-0.9180, -0.7173, -0.3291, ..., 0.0781, 1.0693, -0.5142]], ..., [[-0.8945, -0.1444, -0.0460, ..., 0.2571, 0.8721, -0.0226]], [[-0.6978, -0.7417, 0.2061, ..., 0.2173, 0.2798, -0.2246]], [[-0.3818, -0.7246, 0.7720, ..., -0.3567, 0.0623, -0.0179]]], [[[-0.5347, -0.6885, -1.3604, ..., -1.3672, -1.1768, -1.2275]], [[-0.2400, -0.5176, -0.4875, ..., -0.2822, 0.1527, -0.0917]], [[-0.1940, -0.1766, -0.8022, ..., -0.3743, -0.2607, 0.1602]], ..., [[-0.2346, -0.4260, -0.2166, ..., 0.1776, -0.2793, -0.8052]], [[-0.3430, -0.9839, 0.3735, ..., 0.3267, 0.4268, -0.5464]], [[-0.3293, -0.0431, -1.1631, ..., -0.5742, -1.3242, -1.2441]]]], device='cuda:0', dtype=torch.float16) tensor([[[[ 0.4114, -0.3369, 0.6221, ..., -0.4475, -0.2361, 0.3022]], [[ 0.2480, -0.1396, 0.1614, ..., -0.0728, -0.2788, 0.3950]], [[ 0.3828, -0.1323, 0.2101, ..., -0.4751, -0.0179, 0.3181]], ..., [[-0.0500, -0.0776, 0.3552, ..., 0.6475, 0.1764, -0.2125]], [[ 0.7197, 0.4253, 0.0373, ..., 0.7168, 0.5254, 0.2496]], [[ 0.0498, 0.1896, -0.1191, ..., 0.1239, 0.5039, -0.0274]]], [[[-0.6572, -0.5571, -0.0978, ..., 0.2294, 0.8623, -0.4048]], [[-0.5635, -0.4026, 0.1295, ..., 0.1743, 0.6333, -0.0356]], [[-0.8359, -0.7275, 0.3054, ..., -0.2150, 0.5693, -0.5825]], ..., [[-0.9004, -0.7935, 0.1372, ..., -0.1024, 0.6543, 0.3892]], [[-0.1636, 0.1940, -0.9355, ..., -0.2068, -0.7847, -0.1024]], [[-0.1720, -0.6123, -0.6470, ..., -0.3550, 0.4495, -0.0429]]], [[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], ..., [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 
0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]], [[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], ..., [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]]], device='cuda:0', dtype=torch.float16) ``` Is this due to a bug? Or I just have some misunderstandings about this function?
bug
low
Critical
2,708,317,719
transformers
Make it possible to save and evaluate checkpoint on CTRL+C / `KeyboardInterrupt` with Hugging Face Trainer
### Feature request I would like to request that one or more optional flags be added to the Hugging Face Trainer, perhaps named `save_on_exit`/`save_on_interrupt` and `eval_on_exit`/`eval_on_interrupt`, to ensure that a checkpoint is always saved upon CTRL+C or perhaps even a kill command sent via `wandb`. ### Motivation Extremely often I will be training a model but then need to utilise the GPU on which I am conducting the training, and so I need to pause my training. For example, over the past month, I have been training models on my PC but often I have needed to use my GPU, and so I have exited my training and then resumed it at night. Because, at present, it is not possible to have the Hugging Face Trainer save a checkpoint upon receiving a `KeyboardInterrupt`, I end up needing to save checkpoints at excessively short intervals to minimise lost progress if I quit training at any arbitrary point. This invariably still results in some progress being lost, and it also causes a lot of write wear on my SSDs, which, like all hard drives, have a limited write lifetime. The wear can in fact add up to quite a lot of writing if you are saving multigigabyte models. By allowing progress to be saved automatically upon exit, I can be assured that, barring an unexpected system crash or repeated CTRL+Cs being sent, my progress will always be saved, and so I do not need to save and evaluate checkpoints so frequently. ### Your contribution N/A
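For reference, a possible stopgap under the current API (a sketch only; it assumes a `trainer` has already been constructed, and optimizer/scheduler state is still only captured by the last regular checkpoint):

```python
try:
    trainer.train()
except KeyboardInterrupt:
    # Save whatever progress exists before exiting.
    trainer.save_model("./interrupted-checkpoint")
    trainer.save_state()  # writes trainer_state.json to the output dir
```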
Feature request
low
Critical
2,708,331,873
godot
Multiple OpenXR action sets in the same map causes bindings to not function properly
### Tested versions Reproducible in v4.3.stable.official [77dcf97d8]; currently untested in any 4.4 dev or beta builds ### System information Godot v4.3.stable - Windows 10.0.26100 - Vulkan (Mobile) - integrated Intel(R) UHD Graphics 730 (Intel Corporation; 31.0.101.4502) - 13th Gen Intel(R) Core(TM) i5-13400 (16 Threads) ### Issue description When using two action sets in the same action map, the two sets do not function as expected. As a baseline bug, the first thing I noticed was that I couldn't change the set priority on either set: changing the value and then saving would simply reset both priorities back to 0. Then, in practice, my XR controllers connected to my script via button_pressed and button_released would fire an absurd number of times per second, and I don't believe button_released was firing repeatedly like this, just button_pressed. Also, in one of the two sets (in _ready() I would toggle off one of the two sets to avoid issues) get_vector2 wasn't working at all. It DID work in one of the two, but button_pressed didn't work regardless of which set was used. Here is the set that wouldn't work for either: ![Image](https://github.com/user-attachments/assets/f33e1212-fc55-407a-8c3a-44564b0841d8) Here is the set which worked for joystick input, but not for the squeeze: ![Image](https://github.com/user-attachments/assets/bcbf772f-da61-4d06-8296-ed5236be0576) Here is the config under the touch controller profile: ![Image](https://github.com/user-attachments/assets/f24241f3-832e-4935-b4b9-b96955488b67) I am happy to provide any other context upon request. ### Steps to reproduce Create an XR project which has two action sets in the default map. Connect to the XR controllers' button_pressed and button_released signals from a script and use them with the squeeze button. ![Image](https://github.com/user-attachments/assets/d526f58d-67e8-4ab2-8f0a-c4c8d7641be3) See what happens. ### Minimal reproduction project (MRP) None, I'm afraid; I spent 20 minutes trying to make one, only for it to open the project in my headset and then report that it failed to initialize. Sorry.
bug,topic:xr,topic:input
low
Critical
2,708,421,059
terminal
Preview: Inconsistent Behavior When Writing to Stdout While Reading From Stdin
### Windows Terminal version 1.22.3232.0 ### Windows build number 10.0.19045.5131 ### Other Software WSL seems to work fine, but other hosted terminals have this issue. To be clear, this only happens with the preview version of WT; the stable version works fine. ### Steps to reproduce Create any program that begins reading from stdin and then, after reading has begun, writes to stdout. Here is an example program in Go:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func DoScan() {
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Split(bufio.ScanRunes)
	for scanner.Scan() {
		fmt.Print(scanner.Text())
	}
}

func main() {
	go DoScan()
	time.Sleep(time.Second) // give the reader goroutine time to start scanning stdin
	fmt.Print("Here is a prompt! >")

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
	<-sigs
}
```

### Expected Behavior I expected that my typed keys would appear after the caret of the prompt output. That is what happens in the stable version of WT, and in WSL even in the preview version. ### Actual Behavior My typed keys appear at the beginning of the line, overwriting the prompt.
Issue-Bug,Product-Terminal,Needs-Tag-Fix,Area-CookedRead
low
Major
2,708,429,589
rust
Running `./x test tests/debuginfo` triggers recompiles for debugger visualizer script changes
After making changes to `src\etc\lldb_commands`, re-running the debuginfo tests triggers a 10-15 minute compile to, in effect, replace 1 file that's read and executed at runtime. Might also be worth adding a CI rule to allow for using everything prebuilt and just replacing the debugger visualizer files to speed things up. That would be especially helpful since running the debuginfo test suite locally is [somewhat broken](https://github.com/rust-lang/rust/issues/126092#issuecomment-2278943683).
A-testsuite,A-debuginfo,T-bootstrap,C-bug,A-compiletest
low
Critical
2,708,595,814
pytorch
Compiled and masked MHA running with DDP and `no_grad()` leads to Graph break due to unsupported builtin torch._C._distributed_c10d.PyCapsule._broadcast_coalesced
### 🐛 Describe the bug As titled, calling `torch.nn.MultiheadAttention` with `attn_mask` within a compiled model wrapped in DDP and `no_grad()` leads to the graph break warning. I track the issue down to this gist: https://gist.github.com/EIFY/dee8c28e1f5c7d78388020abd36a627e ```sh cd ~/Downloads/dee8c28e1f5c7d78388020abd36a627e torchrun mha_mask_ddp.py --multiprocessing-distributed --compiled --mask Use GPU: 2 for training Use GPU: 5 for training Use GPU: 6 for training Use GPU: 3 for training Use GPU: 7 for training Use GPU: 4 for training Use GPU: 1 for training Use GPU: 0 for training Train-like step with gradient: # repeated total of 8 times # (...) Eval-like step without gradient: # repeated and interleaved total of 8 times /home/ubuntu/.local/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:725: UserWarning: Graph break due to unsupported builtin torch._C._distributed_c10d.PyCapsule._broadcast_coalesced. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph. torch._dynamo.utils.warn_once(msg) # (...) # No such warnings unless both --compiled and --mask flags are set: torchrun mha_mask_ddp.py --multiprocessing-distributed --compiled torchrun mha_mask_ddp.py --multiprocessing-distributed --mask torchrun mha_mask_ddp.py --multiprocessing-distributed ``` The warning is very specific and goes away if I call MHA without `attn_mask` or avoid compiled model for the eval-like step with the `no_grad()` context. I don't know how serious this is since it's just a warning, but it seems unintended regardless. ### Versions The output below shows PyTorch version: 2.3.1 and torchvision==0.18.1, but I am actually running ``` $ pip freeze (...) torch==2.5.1 torchaudio==2.5.1 torchvision==0.20.1 (...) ``` I don't know why `python3 -mpip list --format=freeze` finds the old packages. ``` Collecting environment information... 
PyTorch version: 2.3.1 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.4.131 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 550.90.12 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 124 On-line CPU(s) list: 0-123 Vendor ID: AuthenticAMD Model name: AMD EPYC 7542 32-Core Processor CPU family: 23 Model: 49 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 124 Stepping: 0 BogoMIPS: 5800.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 7.8 MiB (124 instances) L1i cache: 7.8 MiB (124 instances) L2 cache: 62 MiB (124 instances) L3 cache: 1.9 GiB (124 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 32-63,96-123 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] flake8==4.0.1 [pip3] numpy==1.21.5 [pip3] optree==0.12.1 [pip3] torch==2.3.1 [pip3] torchvision==0.18.1 [pip3] triton==2.3.1 [conda] Could not collect ``` cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @drisspg
oncall: distributed,module: nn,triaged
low
Critical
2,708,599,818
vscode
Synchronize Text Selection Highlights Across Split Panes in the Same File or unsaved tab code
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> Currently, Visual Studio Code highlights occurrences of the selected text in the same pane when working on a file. However, when the same file is open in multiple split panes, the selection highlight does not synchronize across panes. This makes it harder to visually track corresponding code segments in split views, especially when comparing or editing similar code sections. I propose adding a feature that allows synchronized highlighting of selected text or lines across split panes for the same file. This enhancement would improve productivity and code comparison tasks. Proposed Behavior: - When a user selects a piece of text in one pane, matching occurrences of the selection should be highlighted in all open panes of the same file. - The feature should work for: - Line selection: Highlight the same line in both panes. - Text selection: Highlight the same string in both panes. Use Cases: - Developers comparing or editing sections of the same file in split panes can easily see corresponding code without manually selecting it again. - Simplifies visual tracking of repeated patterns or function calls in large files.
feature-request,editor-core
low
Minor
2,708,603,790
angular
Route Resolvers: allow for async (non blocking) resolve of data
### Which @angular/* package(s) are relevant/related to the feature request? router ### Description For my app I need to fetch data and I like to do this in my Router config, using Route Resolver functions. I prefer to load my core data via a resolver in the root, in a blocking way because it is required to render the home page. However, some other data may be resolved in a lazy fashion. Currently, I can't do that in a Route resolver, I have to load it as part of a (container/page) component (either via effects or in the ngOninit). ### Proposed solution Can we expand on the resolver config to allow for an Options object to be added so we can tell the Router to load the data blocking or non blocking? ``` { path: '', component: AuthenticatedLayoutComponent, canActivate: [AuthGuard], data: { resolveAsync: ['tvs'] }, resolve: { facility: facilityResolver, resident: residentResolver, calenderEvents: CalendarEventsResolver, tvs: tvsResolver({ runAsync: true } ), <======== Change homeCards: HomeCardsResolver, whoIsListening: whoIsListeningResolver, }, ``` ### Alternatives considered So I did a little work around, so I can configure my resolver to load async or sync: ``` // ROUTE SEGMENT { path: '', component: AuthenticatedLayoutComponent, canActivate: [AuthGuard], data: { resolveAsync: ['tvs'] }, resolve: { facility: facilityResolver, resident: residentResolver, calenderEvents: CalendarEventsResolver, tvs: tvsResolver, homeCards: HomeCardsResolver, whoIsListening: whoIsListeningResolver, }, ``` I used `data` to pass in a variable to tell my `tvsResolver` if it should load sync or async. Then in the resolver: ``` export const tvsResolver: ResolveFn<any> = // (route: ActivatedRouteSnapshot, state: RouterStateSnapshot) => { (route: ActivatedRouteSnapshot, state: RouterStateSnapshot) => { const loadAsync = Array.isArray(route.data.resolveAsync) && route.data.resolveAsync.includes('tvs'); .... if (loadAsync) { // load Non Blocking const sub: Subscription = deviceService.getTvs().subscribe(() => sub.unsubscribe()); } else { // load Blocking return deviceService.getTvs(); } } ``` However I think this could be done in a better way configuring this in the Router segment.
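One non-blocking pattern that already works with the current resolver API is to resolve the Observable itself rather than its value, so navigation is not held up and the component subscribes to the inner stream when it needs the data. A rough sketch (DeviceService.getTvs() is the service call from the snippets above; the resolver name and import path are made up):

```typescript
import { inject } from '@angular/core';
import { ResolveFn } from '@angular/router';
import { Observable, of } from 'rxjs';
import { DeviceService } from './device.service'; // path assumed; service from the snippets above

export const tvsAsyncResolver: ResolveFn<Observable<unknown>> = () => {
  // Resolve immediately with the cold stream instead of its value, so the
  // navigation completes without waiting for the HTTP call.
  const deviceService = inject(DeviceService);
  return of(deviceService.getTvs());
};

// In the page component, subscribe to the inner stream when the data is needed:
//   this.route.data
//     .pipe(switchMap((d) => d['tvs'] as Observable<unknown>))
//     .subscribe((tvs) => { ... });
```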
area: router,cross-cutting: signals
low
Minor
2,708,665,053
pytorch
Adding to Private Type Stubs
### 🚀 The feature, motivation and pitch In torch/nn/functional, many calls are made to functions from torch._C._nn, but the type stubs in torch/_C/_nn.pyi (once generated) are incomplete, and many of these calls are not recognized as valid by type-checking software. I propose adding to these private type stubs. ### Alternatives Based on the information in native_functions.yaml, it seems possible to automatically generate these type stubs. There has been progress on this in tools/pyi/gen_pyi.py, but it doesn't seem complete, considering the missing pyi information in torch/_C/_nn.pyi. It seems possible to set up an automatic conversion between the C++ type hints and Python type hints, which could then be converted to individual .pyi files depending on the python_module attribute. Example:

```yaml
# cross_entropy in native_functions.yaml
- func: cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor
  python_module: nn
```

could be translated to

```python
# in torch/_C/_nn.pyi.in
# autotranslated from above
def cross_entropy_loss(
    self: Tensor,
    target: Tensor,
    weight: Optional[Tensor] = None,
    reduction: int = Mean,  # would have to have a lookup for this
    ignore_index: int = -100,  # would have to have mappings from some C++ types to python types
    label_smoothing: float = 0.0,
) -> Tensor: ...
```

### Additional context I'd be happy to work on adding to the private type stubs or adding to tools/pyi/gen_pyi.py cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @malfet @xuzhao9 @gramster
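As a purely illustrative sketch of the kind of mapping such a generator would need; the helper and table below are made up for this issue and are not part of gen_pyi.py:

```python
# Hypothetical translation helper: map native_functions.yaml argument strings
# (C++-style types) to Python annotations for a .pyi stub.
CPP_TO_PY = {
    "Tensor": "Tensor",
    "Tensor?": "Optional[Tensor]",
    "int": "int",
    "SymInt": "int",
    "float": "float",
    "bool": "bool",
}


def translate_arg(cpp_arg: str) -> str:
    """Turn e.g. 'Tensor? weight=None' into 'weight: Optional[Tensor] = None'."""
    type_and_name, _, default = cpp_arg.partition("=")
    cpp_type, name = type_and_name.split()
    py_type = CPP_TO_PY.get(cpp_type, "Any")
    return f"{name}: {py_type}" + (f" = {default}" if default else "")


print(translate_arg("Tensor? weight=None"))       # weight: Optional[Tensor] = None
print(translate_arg("SymInt ignore_index=-100"))  # ignore_index: int = -100
```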
module: nn,module: typing,triaged
low
Minor
2,708,681,516
flutter
flutter create command does not populate iOS Team Identifier
### Steps to reproduce 1. on a Mac 2. on command line, create a new project `flutter create testproject` 3. open project in vscode 4. select physical iOS device for debugging 5. run project ### Expected results Project should build, sign, deploy, and run on an iOS device ### Actual results Build output ``` Launching lib/main.dart on Gadget Boys iPhone in debug mode... Developer identity "REDACTED" selected for iOS code signing! Xcode build done. 0.9s Failed to build iOS app Could not build the precompiled application for the device. Error (Xcode): Signing for "Runner" requires a development team. Select a development team in the Signing & Capabilities editor. /Users/adam/Projects/testlist/ios/Runner.xcodeproj ``` ### Code sample I have tracked down the exact cause of the issue. The flutter iOS Runner swift project template looks like this... ``` {{#hasIosDevelopmentTeam}} DEVELOPMENT_TEAM = {{iosDevelopmentTeam}}; {{/hasIosDevelopmentTeam}} ``` Which appears to show that the "development team" _should be_ getting populated with a valid value. Tracking the code in flutter tools, I can see the `iosDevelopmentTeam` template token is populated by this code https://github.com/flutter/flutter/blob/a0ba2decab156c88708c4261d40660ab8f60da5f/packages/flutter_tools/lib/src/ios/code_signing.dart#L155 There is plenty of logging to debug what is going on at runtime, so I ran the template generator with verbose logging... `flutter create --verbose`, outputs the following ``` [ +5 ms] executing: security find-identity -p codesigning -v [ +90 ms] 1) REDACTED "Apple Distribution: REDACTED" 2) REDACTED "Apple Development: REDACTED" 3) REDACTED "Apple Development: REDACTED" 3 valid identities found [ +1 ms] Developer identity "REDACTED" selected for iOS code signing [ +13 ms] -----BEGIN CERTIFICATE----- REDACTED -----END CERTIFICATE----- [ ] executing: openssl x509 -subject [ +52 ms] Creating project testlist... ``` So, the template generator has indeed detected my signing certificate and team... why isn't it getting populated? The only code straight after this logging, is a call to openssl to decode the signing certificate... then a regex to parse its output... but that bit doesnt have any logging unfortunately. ``` final Process opensslProcess = await processUtils.start( const <String>['openssl', 'x509', '-subject']); await ProcessUtils.writeToStdinGuarded( stdin: opensslProcess.stdin, content: signingCertificateStdout, onError: (Object? error, _) { throw Exception('Unexpected error when writing to openssl: $error'); }, ); await opensslProcess.stdin.close(); final String opensslOutput = await utf8.decodeStream(opensslProcess.stdout); // Fire and forget discard of the stderr stream so we don't hold onto resources. // Don't care about the result. unawaited(opensslProcess.stderr.drain<String?>()); if (await opensslProcess.exitCode != 0) { return null; } return _certificateOrganizationalUnitExtractionPattern.firstMatch(opensslOutput)?.group(1); ``` So, I run the same openssl command, using the public key output that the flutter templator logged, and I get ``` adam@Mac ~ % openssl x509 -subject < test.txt subject=UID = REDACTED, CN = REDACTED, OU = 73FD8HDG96, O = REDACTED, C = REDACTED -----BEGIN CERTIFICATE----- REDACTED -----END CERTIFICATE----- ``` ok, let's take a look a the regex to see if it matched my certificate subject... ``` final RegExp _certificateOrganizationalUnitExtractionPattern = RegExp(r'OU=([a-zA-Z0-9]+)'); ``` Aha! 
The regex isn't expecting spaces around the tokens in the subject key/value pairs. I edited my copy of `code_signing.dart` to make the regex resilient to spaces... ``` final RegExp _certificateOrganizationalUnitExtractionPattern = RegExp(r'OU[\s]*=[\s]*([a-zA-Z0-9]+)'); ``` ...re run the template builder ``` > flutter create testproject ``` Check the ios project output ``` 97C147061CF9000F007C117D /* Debug */ = { isa = XCBuildConfiguration; baseConfigurationReference = 9740EEB21CF90195004384FC /* Debug.xcconfig */; buildSettings = { ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; CLANG_ENABLE_MODULES = YES; CURRENT_PROJECT_VERSION = "$(FLUTTER_BUILD_NUMBER)"; DEVELOPMENT_TEAM = REDACTED; ENABLE_BITCODE = NO; INFOPLIST_FILE = Runner/Info.plist; ``` Yay!!! My dev team has been populated Hit F5... and it signs and runs on my device! ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console Doctor summary (to see all details, run flutter doctor -v): [✓] Flutter (Channel stable, 3.24.5, on macOS 15.1.1 24B2091 darwin-arm64, locale en-NZ) [✓] Android toolchain - develop for Android devices (Android SDK version 32.0.0) [✓] Xcode - develop for iOS and macOS (Xcode 16.1) [✓] Chrome - develop for the web [✓] Android Studio (version 2023.2) [✓] VS Code (version 1.95.3) [✓] Connected device (4 available) ! Error: Browsing on the local area network for Adam’s Apple Watch. Ensure the device is unlocked and discoverable via Bluetooth. (code -27) [✓] Network resources • No issues found! ``` </details>
platform-ios,tool,P2,has partial patch,team-ios,triaged-ios
low
Critical
2,708,840,188
next.js
wrong resources URL keep reloading multiple times
### Link to the code that reproduces this issue https://github.com/lior-amsalem/nextjs-refresh-bug ### To Reproduce 1. git clone this project: `git clone https://github.com/lior-amsalem/nextjs-refresh-bug` 2. npm i 3. npm run dev 4. open the console and see this: <img width="755" alt="image" src="https://github.com/user-attachments/assets/a69be4fd-37e4-481c-9c97-9c886cf6ede8"> this happens if the resource is not accessible; for some reason Next.js keeps trying to reload it ### Current vs. Expected behavior The image path is wrong and Next.js keeps trying to reload it. Expected behaviour: do not reload it again and again. ### Provide environment information ```bash sonoma v22.5.1 nextjs 15 "next": "15.0.3", "react": "19.0.0-rc-66855b96-20241106", "react-dom": "19.0.0-rc-66855b96-20241106" ``` ### Which area(s) are affected? (Select all that apply) Not sure, Developer Experience ### Which stage(s) are affected? (Select all that apply) next dev (local) ### Additional context _No response_
bug
low
Critical
2,708,890,711
tauri
[feat] provide a way to set webview locale
### Describe the problem Edge WebView2 does not always properly detect the system locale and defaults to en-US, which leads to all dates and other region-formatted strings being incorrect. There is also a use case for having the system locale be different from the preferred locale of the Tauri app. ### Describe the solution you'd like It would be good to have an API to directly control the webview locale. On Edge WebView2 there is a language option that can be specified: https://learn.microsoft.com/en-us/microsoft-edge/webview2/reference/win32/icorewebview2environmentoptions?view=webview2-1.0.2903.40#put_language ### Alternatives considered _No response_ ### Additional context _No response_
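To make the request concrete, one possible shape (purely hypothetical; the `language` builder method below does not exist in current Tauri releases) would be a per-window option that Tauri forwards to the underlying webview, e.g. WebView2's put_Language:

```rust
// Hypothetical API sketch; `.language(..)` is invented for illustration only.
fn main() {
    tauri::Builder::default()
        .setup(|app| {
            let _window = tauri::WindowBuilder::new(app, "main", tauri::WindowUrl::default())
                .language("de-DE") // would map to ICoreWebView2EnvironmentOptions::put_Language on Windows
                .build()?;
            Ok(())
        })
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```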
type: feature request
low
Minor
2,708,903,950
PowerToys
FancyZones can only snap the windows of apps that run as admin, by holding down mouse's secondary key.
### Microsoft PowerToys version 0.86.0 ### Installation method PowerToys auto-update ### Running as admin Yes ### Area(s) with issue? FancyZones ### Steps to reproduce (This issue did not firstly appear in version 0.86.0 but it definitely was not there some versions ago.) Using FancyZones, try to manage (i.e. snap) a window of an app that is not running as admin, while pressing the mouse's secondary key. You will see that the zones will not appear at all (actually if you look closely you will notice that they appear very briefly only when you click the secondary mouse key - that is easier to notice if you press the secondary mouse key repeatedly). The feature of FancyZones that allows to snap windows in the predefined zones just by moving the window while keeping pressed the secondary mouse key, only works with the windows of apps that are running as admin. Holding down the shift key allows for snapping all the windows, even the ones that belong to apps that are not running as admin ### ✔️ Expected Behavior The feature of FancyZones that allows to manage (i.e. snap) windows in the predefined zones just by moving the window while keeping pressed the secondary mouse key, should work even with the windows of apps that are NOT running as admin. ### ❌ Actual Behavior The feature of FancyZones that allows to manage (i.e. snap) windows in the predefined zones just by moving the window while keeping pressed the secondary mouse key, only works with the windows of apps that are running as admin. Holding down the shift key allows for snapping all the windows, even the ones that belong to apps that are not running as admin ### Other Software N / A
Issue-Bug,FancyZones-Dragging&UI,Product-FancyZones,Needs-Triage
low
Minor
2,708,907,219
deno
Add `hidden` and `internal` task options
The recent "Task dependencies" feature is great task composition method but I think there is 2 missing options. - `hidden`: that allow to hide a task from tasks list but kept callable (eg: when it is only used as dependency or for legacy purposes, ...) - `iternal`: that allow to hide a task from tasks list and not callable (eg: task used as dependency only, part of a command that was split, framework required logic, ...) ```json { "tasks": { "build:css_legacy": { "command": "...", "hidden": true }, "build:css": { "command": "...", "hidden": true }, "build:assets": { "command": "...", "hidden": true },, "framework_build_config": { "command": "...", "internal": true }, "build": { "command": "...", "dependencies": ["framework_build_config", "build:css", "build:html"] }, "long_command": { "command": "...", "dependencies": ["long_command_part:a", "long_command_part:b", "long_command_part:c"] }, "long_command_part:a": { "command": "...", "internal": true }, "long_command_part:b": { "command": "...", "internal": true }, "long_command_part:c": { "command": "...", "internal": true }, } } ``` Give the following output: ```sh deno task - build ... - long_command ... ``` And the following behaviour: ```sh deno task build:css # ok deno task framework_build_config > Task not found: framework_build_config # //or more specific message like > Task is internal: framework_build_config ```
suggestion,task runner
low
Minor
2,708,934,957
rust
std assumes that accessing an already-initialized TLS variable is async-signal-safe
TLS accesses are not documented to be async-signal-safe, but they mostly are – except for the first access to a variable. We rely on that: we install signal handlers that access thread-local variables, but we ensure they have been accessed before setting up the signal handler. This would be a good thing to discuss with the runtimes we depend on, since hopefully they can provide this as an actual guarantee, or else we'll learn that we can't rely on it. @joboet @the8472 in terms of interaction with POSIX, what exactly are we relying on here? Or are the kind of TLS variables we are using not part of POSIX? Is it a libc thing, a linker thing, or something else?
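A minimal sketch of the pattern described above (illustrative only, not the actual std code): the thread-local is touched once on the normal path so that its lazy initialization has already happened before any signal handler can read it.

```rust
use std::cell::Cell;

thread_local! {
    // Lazily initialized on first access; subsequent accesses are plain reads.
    static STACK_GUARD: Cell<usize> = Cell::new(0);
}

fn install_signal_handler() {
    // Perform the first access here, on the ordinary code path, where lazy
    // initialization (which may allocate and is not async-signal-safe) is fine...
    STACK_GUARD.with(|g| g.set(0xdead_beef));

    // ...then register the handler (the sigaction call is elided here). When the
    // handler later does STACK_GUARD.with(..), it only reads an already-initialized
    // slot, which is the property the discussion relies on.
}

fn main() {
    install_signal_handler();
}
```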
A-runtime,A-thread-locals,C-bug,T-libs
low
Major
2,709,002,764
opencv
discrepancy in constants defined between the core modules and the Objective-C (objc) bindings in OpenCV 5.x
### System Information OpenCV version: 5.x Operating System / Platform: Ubuntu 22.04 Compiler & compiler version: GCC 11.3.0 OpenCV Python version: 5.x Operating System / Platform: Windows 11 Python version: 3.11.4 OpenCV version: 5.x Operating System / Platform: macOS 13 Ventura Development Environment: Xcode 14.0, Swift 5.7 ### Detailed description Description of the Bug When using OpenCV 5.x Objective-C bindings, the matrix operations fail due to mismatched constant definitions in CvTypeExt.swift and the core OpenCV module. The Objective-C binding uses CV_CN_SHIFT = 3, but the core module defines CV_CN_SHIFT = 5. This causes incorrect matrix depth calculations. Observed Behavior The code crashes or produces an incorrect matrix with fewer channels than expected. Expected Behavior The matrix should be created with a depth supporting 128 channels, as defined by the core module constants. Error Log ''' [ERROR]: Assertion failed in core.cpp, line 123: invalid depth for 128-channel matrix ''' This issue seems to be caused by outdated constants in CvTypeExt.swift. The constants CV_CN_SHIFT and CV_DEPTH_MAX in the Objective-C bindings need to match those in the core module. ### Steps to reproduce Setup Environment: OpenCV version: 5.x Platform: macOS 13 Ventura Compiler: Xcode 14.0, Swift 5.7 Code to Reproduce: ''' #import <opencv2/opencv.hpp> #import <iostream> using namespace cv; int main() { // Attempt to create a matrix with 128 channels try { Mat matrix = Mat::zeros(3, 3, CV_8UC128); // This will fail due to mismatched constants std::cout << "Matrix created successfully!" << std::endl; std::cout << "Matrix depth: " << matrix.depth() << std::endl; std::cout << "Matrix type: " << matrix.type() << std::endl; } catch (const cv::Exception& e) { std::cerr << "OpenCV Exception: " << e.what() << std::endl; } return 0; } ''' Expected Outcome: The matrix should be created successfully, supporting 128 channels, with the depth consistent with the CV_CN_SHIFT defined in the core module. Observed Outcome: The code fails with an assertion error or unexpected behavior due to the mismatch between CV_CN_SHIFT and CV_CN_MAX in the Objective-C bindings and the core module. No related data files (images, ONNX, etc.) are needed to reproduce this issue. The problem lies in the mismatched constant definitions in the Objective-C bindings, and the provided code snippet is self-contained. ### Issue submission checklist - [X] I report the issue, it's not a question - [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution - [X] I updated to the latest OpenCV version and the issue is still there - [X] There is reproducer code and related data files (videos, images, onnx, etc)
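A small arithmetic illustration of why the shift mismatch matters (this is standalone code using the same formula as OpenCV's CV_MAKETYPE, not OpenCV itself; the shift values 5 and 3 are taken from the report):

```cpp
#include <cstdio>

// Same packing as OpenCV's CV_MAKETYPE, parameterised on CV_CN_SHIFT.
constexpr int make_type(int depth, int channels, int cn_shift) {
    const int depth_mask = (1 << cn_shift) - 1;  // CV_MAT_DEPTH_MASK
    return (depth & depth_mask) + ((channels - 1) << cn_shift);
}

int main() {
    constexpr int CV_8U = 0;
    // Core module (CV_CN_SHIFT == 5) vs. objc bindings (CV_CN_SHIFT == 3):
    std::printf("core:     CV_8UC128 = %d\n", make_type(CV_8U, 128, 5));  // 4064
    std::printf("bindings: CV_8UC128 = %d\n", make_type(CV_8U, 128, 3));  // 1016
    // The two sides compute different type codes for the same request, so a
    // type created through the bindings is decoded with the wrong depth/channel
    // pair by the core library, matching the failed assertion in the report.
    return 0;
}
```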
bug,platform: ios/osx,category: swift/objc bindings
low
Critical
2,709,012,600
rust
Support non-UTF-8 in environment variable dependency tracking
[`ParseSess::env_depinfo`](https://github.com/rust-lang/rust/blob/6c76ed5503966c39381fac64eb905ac45e346695/compiler/rustc_session/src/parse.rs#L226) stores `Symbol` instead of `OsStr`. While we're at it, might make sense to actually not store the environment variable value at all, and instead fetch it inside `write_out_deps`? (which assumes that the variable won't change during execution of the compiler, but it probably won't). See also https://github.com/rust-lang/rust/pull/130883#discussion_r1864384740.
T-compiler,C-discussion
low
Minor
2,709,018,916
opencv
How do I get the address of cv::cuda::GpuMat?
### System Information Calling cudaMemcpyAsync fails with **invalid argument** ### Detailed description When I use cudaMemcpyAsync to copy memory DeviceToDevice, the second parameter needs to be the **device address of blob_img_GPU**. How should I specify it? In my code I use blob_img_GPU.ptr(), but it fails: when I run the code I get an "invalid argument" error. ### Steps to reproduce

```cpp
for (int batch_idx = 0; batch_idx < _batch_size; ++batch_idx) {
    letterbox(images[batch_idx], blob_imgs_host[batch_idx], size);
    cv::cuda::GpuMat blob_img_GPU;
    blob_img_GPU.upload(blob_imgs_host[batch_idx]);
    cout << "pointer_host:" << blob_imgs_host[batch_idx].ptr<float>() << endl;
    CHECK(cudaMemcpyAsync(
        (char *)_device_ptrs[0] + batch_idx * total_elements * blob_img_GPU.elemSize(),
        blob_img_GPU.ptr<float>(),
        total_elements * blob_img_GPU.elemSize(),
        cudaMemcpyDeviceToDevice,
        _stream));
}
```

### Issue submission checklist - [X] I report the issue, it's not a question - [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution - [ ] I updated to the latest OpenCV version and the issue is still there - [ ] There is reproducer code and related data files (videos, images, onnx, etc)
question (invalid tracker),category: gpu/cuda (contrib)
low
Critical
2,709,087,334
kubernetes
Improper Permissions on ConfigMap and Secret Mounts
### What would you like to be added? Hey, I am opening this issue publicly as it was approved by a member of the Kubernetes staff on HackerOne (#2867563) to be discussed with the larger community. Since addressing it would break existing users, it may require a Kubernetes Enhancement Proposal (KEP). The staff member also agreed that it is a poor default. # Description Kubernetes mounts ConfigMaps and Secrets into containers with default file permissions of -rw-r--r-- (644), and the permissions on the mounted ConfigMap and Secret directories are set to -rwxrwxrwx (777). This configuration allows any user in the container, not just the root user, to read these files, exposing sensitive data and violating the principle of least privilege. In multi-user containers, any non-root user can access these ConfigMaps and Secrets, further increasing the risk of unauthorized data exposure. To prevent this, users can use the defaultMode parameter to set stricter permissions, such as 600/750, ensuring only the root user has access. The current default behavior conflicts with the principle of least privilege, making manual adjustment necessary to protect sensitive data. # Steps To Reproduce: 1. Create a Kubernetes ConfigMap or Secret. 2. Mount the ConfigMap or Secret into a container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-mounted-pod
spec:
  containers:
    - name: my-app
      image: nginx:latest
      securityContext:
        runAsUser: 1000
      volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secret"
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: my-secret
```

3. Run the container with multiple users, including a non-root user (e.g., user ID 1000). 4. Observe that the default permissions are -rw-r--r-- and that any non-root user can read the mounted data. ### Why is this needed? To mitigate this risk, we need to ensure that the directories and mounted files of ConfigMaps/Secrets have permissions set to 750. This setting allows only the root user and the group owner to read and execute, while preventing access by other users. This approach limits read access to the root/owner only, thereby following the principle of least privilege and reducing unauthorized exposure of sensitive data.
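For reference, the manual mitigation mentioned above is applied per volume with the existing defaultMode field: a sketch based on the pod spec in the reproduction (0400 gives owner-only read; YAML treats a leading 0 as octal, JSON needs the decimal value 256):

```yaml
  volumes:
    - name: secret-volume
      secret:
        secretName: my-secret
        # Restrict the projected files to owner-only read instead of the 0644 default.
        defaultMode: 0400
```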
sig/storage,sig/node,kind/feature,sig/security,needs-triage
low
Major
2,709,091,011
godot
Visual Shaders Texture2DParameter can't be turned into constant (Unlike other Parameter nodes)
### Tested versions 4.3 Stable ### System information Windows 10 x64 (But not related to the issue) ### Issue description ![Image](https://github.com/user-attachments/assets/3707bfab-3e93-4f13-83a7-e1b091339875) ![Image](https://github.com/user-attachments/assets/a14f85de-a231-487e-b35b-486be46c9e52) ![Image](https://github.com/user-attachments/assets/dbf4da52-588c-4fbc-ade2-d77d6932a132) 1. Currently, Texture2DParameter is required in Visual Shaders to properly set Filtering, Texture Repeat, and more. There are no ways to convert it into Constant (even if the workflow doesn't require the user to change it later, and it should be permanent for all instances of the shader). 2. Texture node by default Does Not have features to Repeat or Filter the stored texture. It ignores Repeat settings of the node where used. 3. All other Parameter nodes can be converter into their Constant versions. ### Steps to reproduce Make Texture2DParameter, try to make it a Constant ### Minimal reproduction project (MRP) N/A
enhancement,topic:editor,topic:shaders
low
Minor
2,709,095,182
pytorch
Nearest-neighbour upsampling with `interpolate` silently fails on CUDA when output size exceeds 2^31
### 🐛 Describe the bug Upscaling a tensor where the result size would exceed 2^31 (considering one of the upscaled dimensions) results in zero-valued tensors and no error message, if memory format is `channels_last`. **Reproduction** Consider the following **working snippet**. The output is all ones. ```python import torch x = torch.ones((16, 256, 512, 512), dtype=torch.bfloat16).cuda().to(memory_format=torch.channels_last) out = torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest') assert (out[0] == out[-1]).all() ``` However, **the following snippet fails** - the only difference is that the first dimension changed to `17`, which results in `17*256*512*512*2` being larger than `2^31`: ```python import torch x = torch.ones((17, 256, 512, 512), dtype=torch.bfloat16).cuda().to(memory_format=torch.channels_last) out = torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest') assert (out[0] == out[-1]).all() ``` Furthermore, the final tensor is all zeroes instead of the expected ones: ``` print(out[-1].abs().sum()) ``` > `tensor(0., device='cuda:0', dtype=torch.bfloat16)` **Workaround** Make the input tensor contiguous, as in #81665. **Related issues** - https://github.com/pytorch/pytorch/issues/129118 - https://github.com/pytorch/pytorch/issues/81665 **Additional considerations** - The fact that the failure is silent makes this hard to identify, track and debug. - The CPU implementation works correctly. ### Versions Collecting environment information... PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.35 Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.4.99 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA RTX 6000 Ada Generation GPU 1: NVIDIA GeForce RTX 3090 Ti Nvidia driver version: 550.107.02 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 36 On-line CPU(s) list: 0-35 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 18 Socket(s): 1 Stepping: 7 CPU max MHz: 4800.0000 CPU min MHz: 1200.0000 BogoMIPS: 6000.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d 
arch_capabilities Virtualization: VT-x L1d cache: 576 KiB (18 instances) L1i cache: 576 KiB (18 instances) L2 cache: 18 MiB (18 instances) L3 cache: 24.8 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-35 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Versions of relevant libraries: [pip3] clip-anytorch==2.5.2 [pip3] flake8==7.0.0 [pip3] mypy-extensions==1.0.0 [pip3] mypy-protobuf==3.4.0 [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu11==11.10.3.66 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu11==11.7.101 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu11==11.7.99 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu11==11.7.99 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu11==8.5.0.96 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu11==10.9.0.58 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu11==10.2.10.91 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu11==11.4.0.1 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu11==11.7.4.91 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu11==2.14.3 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu11==11.7.91 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] open_clip_torch==2.24.0 [pip3] pytorch-model-summary==0.1.2 [pip3] rotary-embedding-torch==0.6.4 [pip3] torch==2.5.1 [pip3] torch-fidelity==0.3.0 [pip3] torchaudio==2.5.1 [pip3] torchdiffeq==0.2.3 [pip3] torchinfo==1.8.0 [pip3] torchmetrics==1.4.1 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.20.1 [pip3] triton==3.1.0 [conda] clip-anytorch 2.5.2 pypi_0 pypi [conda] cuda-cudart 12.4.99 hd3aeb46_0 conda-forge [conda] cuda-cudart-dev 12.4.99 hd3aeb46_0 conda-forge [conda] cuda-cudart-dev_linux-64 12.4.99 h59595ed_0 conda-forge [conda] cuda-cudart-static 12.4.99 hd3aeb46_0 conda-forge [conda] cuda-cudart-static_linux-64 12.4.99 h59595ed_0 conda-forge [conda] cuda-cudart_linux-64 12.4.99 h59595ed_0 conda-forge [conda] cuda-nvrtc 12.4.99 hd3aeb46_0 conda-forge [conda] cuda-nvtx 12.4.99 h59595ed_0 conda-forge [conda] cudnn 8.9.7.29 h092f7fd_3 conda-forge [conda] libcublas 12.4.5.8 0 nvidia [conda] libcublas-dev 12.4.5.8 0 nvidia [conda] libcufft 11.2.0.44 hd3aeb46_0 conda-forge [conda] libcurand 10.3.5.119 hd3aeb46_0 conda-forge [conda] libcusolver 11.6.0.99 hd3aeb46_0 conda-forge [conda] libcusparse 12.3.0.142 hd3aeb46_0 conda-forge [conda] libnvjitlink 12.4.99 hd3aeb46_0 conda-forge [conda] nccl 2.20.5.1 h3a97aeb_0 conda-forge [conda] numpy 1.26.4 pypi_0 pypi [conda] nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi [conda] nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi [conda] 
nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu11 10.2.10.91 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-nccl-cu11 2.14.3 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu11 11.7.91 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] open-clip-torch 2.24.0 dev_0 <develop> [conda] pytorch-model-summary 0.1.2 pypi_0 pypi [conda] rotary-embedding-torch 0.6.4 pypi_0 pypi [conda] torch 2.5.1 pypi_0 pypi [conda] torch-fidelity 0.3.0 pypi_0 pypi [conda] torchaudio 2.5.1 pypi_0 pypi [conda] torchdiffeq 0.2.3 pypi_0 pypi [conda] torchinfo 1.8.0 pypi_0 pypi [conda] torchmetrics 1.4.1 pypi_0 pypi [conda] torchsummary 1.5.1 pypi_0 pypi [conda] torchvision 0.20.1 pypi_0 pypi [conda] triton 3.1.0 pypi_0 pypi cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @eqy
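For completeness, the contiguous-input workaround referenced above, applied to the failing snippet (this simply restates what the report already describes):

```python
import torch

x = torch.ones((17, 256, 512, 512), dtype=torch.bfloat16).cuda().to(memory_format=torch.channels_last)

# Workaround from the report: upsample a contiguous copy, then restore the
# channels_last layout afterwards.
out = torch.nn.functional.interpolate(x.contiguous(), scale_factor=2, mode='nearest')
out = out.to(memory_format=torch.channels_last)

assert (out[0] == out[-1]).all()
```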
high priority,module: nn,module: cuda,triaged,module: 64-bit
low
Critical
2,709,109,527
go
x/crypto/x509roots: apply constraints with CertPool.AddCertWithConstraint
Now that #57178 has landed, we should use CertPool.AddCertWithConstraint to apply nss.Constraint values. In particular, the Entrust root now has [a Distrust After of November 30, 2024 that we need to apply](https://sslmate.com/blog/post/entrust_distrust_more_disruptive_than_intended). /cc @golang/security
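Roughly, the wiring could look like the sketch below. The AddCertWithConstraint signature is the one added by #57178; the distrust-after check (comparing the leaf's NotBefore against the cutoff) mirrors how NSS-style "Distrust After" is usually described, but treat that detail as an assumption:

```go
package main

import (
	"crypto/x509"
	"errors"
	"time"
)

// addDistrustAfter adds root to pool and rejects chains whose leaf certificate
// was issued after cutoff (an NSS-style "Distrust After" constraint).
func addDistrustAfter(pool *x509.CertPool, root *x509.Certificate, cutoff time.Time) {
	pool.AddCertWithConstraint(root, func(chain []*x509.Certificate) error {
		if len(chain) > 0 && chain[0].NotBefore.After(cutoff) {
			return errors.New("root distrusted for certificates issued after cutoff")
		}
		return nil
	})
}

func main() {
	_ = addDistrustAfter // sketch only; real roots and cutoffs come from the NSS bundle
}
```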
Security,NeedsFix
low
Minor
2,709,126,426
tauri
In the Tauri project development, how to implement the name change of the Tauri menu?
In Tauri project development, how can I change the displayed name of a menu item? This is mainly for multi-language support (for example, showing "Copy" in English and its Chinese translation): only the display name should change, while the shortcut key and its event handling stay the same. Thank you. The current project is using Tauri 1.x.
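In Tauri 1.x the usual approach is to keep stable menu item IDs and update only the title at runtime through the window's menu handle. A minimal sketch, assuming a CustomMenuItem that was registered with the id "copy" and a main window labelled "main":

```rust
use tauri::Manager;

// Relabel an existing menu item (e.g. when the UI language changes) without
// touching its id, accelerator, or the event it emits.
fn relabel_copy_item(app: &tauri::AppHandle, title: &str) -> tauri::Result<()> {
    if let Some(window) = app.get_window("main") {
        window.menu_handle().get_item("copy").set_title(title)?;
    }
    Ok(())
}
```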
type: documentation
low
Minor
2,709,153,300
pytorch
Optimizers' `differentiable` flag doesn't work
### 🐛 Describe the bug #### 🐛 Bug description The use of the `differentiable` flag in the optimizers such as SGD and Adam leads to the following exception: ```bash Traceback (most recent call last): File "/home/haug/test.py", line 15, in <module> optimizer.step() File "/home/haug/.venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 487, in wrapper out = func(*args, **kwargs) File "/home/haug/.venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 91, in _use_grad ret = func(self, *args, **kwargs) File "/home/haug/.venv/lib/python3.10/site-packages/torch/optim/sgd.py", line 123, in step sgd( File "/home/haug/.venv/lib/python3.10/site-packages/torch/optim/sgd.py", line 298, in sgd func( File "/home/haug/.venv/lib/python3.10/site-packages/torch/optim/sgd.py", line 351, in _single_tensor_sgd param.add_(grad, alpha=-lr) RuntimeError: a leaf Variable that requires grad is being used in an in-place operation. ``` #### ⚗️ Minimal working example ```python import torch model = torch.nn.Linear(1, 1) criterion = torch.nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.01, differentiable=True) x = torch.full((1, 1), 1.0) y = 2 * x y_pred = model(x) loss = criterion(y_pred, y) loss.backward() optimizer.step() ``` #### 📝 Related content - https://github.com/pytorch/pytorch/issues/116490 - https://discuss.pytorch.org/t/differentiable-optimizer-not-working-for-simple-example/207483 ### 🔖 Versions System 1: ``` PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.5.119 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: Quadro RTX 4000 Nvidia driver version: 525.147.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 20 On-line CPU(s) list: 0-19 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz CPU family: 6 Model: 165 Thread(s) per core: 2 Core(s) per socket: 10 Socket(s): 1 Stepping: 5 CPU max MHz: 5300.0000 CPU min MHz: 800.0000 BogoMIPS: 7399.70 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 320 KiB (10 instances) L1i cache: 320 KiB (10 instances) L2 cache: 2.5 MiB (10 instances) L3 cache: 20 MiB (1 instance) NUMA node(s): 1 NUMA node0 
CPU(s): 0-19 Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Mitigation; Microcode Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.1.3 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] torch==2.5.1 [pip3] triton==3.1.0 [conda] Could not collect ``` System 2: ```bash PyTorch version: 2.5.1+cpu Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz CPU family: 6 Model: 142 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 Stepping: 10 CPU max MHz: 4000.0000 CPU min MHz: 400.0000 BogoMIPS: 3999.93 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 128 KiB (4 instances) L1i cache: 128 KiB (4 instances) L2 cache: 1 MiB (4 instances) L3 cache: 8 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-7 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable Vulnerability Mds: Mitigation; Clear CPU buffers; SMT 
vulnerable Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Mitigation; Microcode Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.3 [pip3] torch==2.5.1+cpu [conda] Could not collect ``` cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
module: optimizer,triaged,actionable
medium
Critical
2,709,193,011
flutter
Custom semantics actions not working on Mac OS
Hi, Adding custom semantics actions has no affect on Mac OS: ```import 'package:flutter/material.dart'; import 'package:flutter/rendering.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({super.key}); // This widget is the root of your application. @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData( // This is the theme of your application. // // TRY THIS: Try running your application with "flutter run". You'll see // the application has a purple toolbar. Then, without quitting the app, // try changing the seedColor in the colorScheme below to Colors.green // and then invoke "hot reload" (save your changes or press the "hot // reload" button in a Flutter-supported IDE, or press "r" if you used // the command line to start the app). // // Notice that the counter didn't reset back to zero; the application // state is not lost during the reload. To reset the state, use hot // restart instead. // // This works for code too, not just values: Most code changes can be // tested with just a hot reload. colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple), useMaterial3: true, ), home: const MyHomePage(title: 'Flutter Demo Home Page'), ); } } class MyHomePage extends StatefulWidget { const MyHomePage({super.key, required this.title}); // This widget is the home page of your application. It is stateful, meaning // that it has a State object (defined below) that contains fields that affect // how it looks. // This class is the configuration for the state. It holds the values (in this // case the title) provided by the parent (in this case the App widget) and // used by the build method of the State. Fields in a Widget subclass are // always marked "final". final String title; @override State<MyHomePage> createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { int _counter = 0; void _incrementCounter() { setState(() { // This call to setState tells the Flutter framework that something has // changed in this State, which causes it to rerun the build method below // so that the display can reflect the updated values. If we changed // _counter without calling setState(), then the build method would not be // called again, and so nothing would appear to happen. _counter++; }); } @override Widget build(BuildContext context) { // This method is rerun every time setState is called, for instance as done // by the _incrementCounter method above. // // The Flutter framework has been optimized to make rerunning build methods // fast, so that you can just rebuild anything that needs updating rather // than having to individually change instances of widgets. return Scaffold( appBar: AppBar( // TRY THIS: Try changing the color here to a specific color (to // Colors.amber, perhaps?) and trigger a hot reload to see the AppBar // change color while the other colors stay the same. backgroundColor: Theme.of(context).colorScheme.inversePrimary, // Here we take the value from the MyHomePage object that was created by // the App.build method, and use it to set our appbar title. title: Text(widget.title), ), body: Center( // Center is a layout widget. It takes a single child and positions it // in the middle of the parent. child: Column( // Column is also a layout widget. It takes a list of children and // arranges them vertically. By default, it sizes itself to fit its // children horizontally, and tries to be as tall as its parent. 
// // Column has various properties to control how it sizes itself and // how it positions its children. Here we use mainAxisAlignment to // center the children vertically; the main axis here is the vertical // axis because Columns are vertical (the cross axis would be // horizontal). // // TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint" // action in the IDE, or press "p" in the console), to see the // wireframe for each widget. mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ const Text( 'You have pushed the button this many times:', ), Text( '$_counter', style: Theme.of(context).textTheme.headlineMedium, ), ], ), ), floatingActionButton: Semantics( customSemanticsActions: { CustomSemanticsAction(label: 'Increase'): () => setState( () => _counter++, ), CustomSemanticsAction(label: 'Decrease'): () => setState( () => _counter--, ) }, child: FloatingActionButton( onPressed: _incrementCounter, tooltip: 'Increment', child: const Icon( Icons.add, semanticLabel: 'Increase', ), ), ), // This trailing comma makes auto-formatting nicer for build methods. ); } } ``` If you run that sample with VoiceOver enabled, focus the button by holding down either capslock or control and option with the right arrow key, then view the actions list with VO command space, only the default "Press" action is visible. I haven't been able to check iOS yet, but I know they work on Android at least.
a: accessibility,platform-mac,a: desktop,has reproducible steps,P2,team-macos,triaged-macos,found in release: 3.24,found in release: 3.27
low
Critical
2,709,200,867
vscode
Git - Built-in Git extension does not respect `includeIf` in a symbolic link directory
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: 1.95.1 - OS Version: 6.12.1.zen - Git Version: 2.47.1 Steps to Reproduce: 1. Make a directory named `git-includeIf` and initialize Git 2. Make a symbolic link that targets it (`ln -s git-includeIf git-includeIf-ln`) 3. Have Git containing this configurations ``` # ~/.gitconfig [includeIf "gitdir:**/git-includeIf-ln/"] path = ~/git-includeIf.inc # ~/git-includeIf.inc [user] name = John Doe email = [email protected] ``` 4. Open the `git-includeIf-ln` with VSCode 5. Create any file 6. Source Control -> Stage Changes -> Commit (with any message) 7. See that it failed 8. Open Terminal -> `git commit` (with any message) 9. See that it succeed with VSCode used as editor properly <details><summary>Logs</summary> <pre><code> 2024-12-01 20:29:56.116 [info] > git rev-parse --show-toplevel [1ms] 2024-12-01 20:29:56.122 [info] > git rev-parse --path-format=relative --show-toplevel [1ms] 2024-12-01 20:29:56.129 [info] > git rev-parse --git-dir --git-common-dir [1ms] 2024-12-01 20:29:56.148 [info] [Model][openRepository] Opened repository: /home/nexus/git-includeIf-ln 2024-12-01 20:29:56.156 [info] > git fetch [9ms] 2024-12-01 20:29:56.168 [info] > git config --get commit.template [13ms] 2024-12-01 20:29:56.169 [info] > git config --get commit.template [2ms] 2024-12-01 20:29:56.175 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:29:56.181 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:29:56.181 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [6ms] 2024-12-01 20:29:56.181 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:29:56.197 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [7ms] 2024-12-01 20:29:56.197 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:29:56.198 [error] [GitHistoryProvider][resolveHEADMergeBase] Failed to resolve merge base for master: Error: No such branch: master. 
2024-12-01 20:29:56.199 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [19ms] 2024-12-01 20:29:56.199 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:29:56.202 [info] > git config --get commit.template [5ms] 2024-12-01 20:29:56.208 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:29:56.209 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [1ms] 2024-12-01 20:29:56.209 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:29:56.221 [info] > git status -z -uall [6ms] 2024-12-01 20:29:56.222 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [2ms] 2024-12-01 20:29:57.241 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:29:57.242 [info] > git config --get commit.template [7ms] 2024-12-01 20:29:57.242 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [1ms] 2024-12-01 20:29:57.242 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:29:57.254 [info] > git status -z -uall [6ms] 2024-12-01 20:29:57.255 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [2ms] 2024-12-01 20:30:02.264 [info] > git config --get commit.template [1ms] 2024-12-01 20:30:02.272 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:30:02.273 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [1ms] 2024-12-01 20:30:02.273 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:30:02.285 [info] > git status -z -uall [6ms] 2024-12-01 20:30:02.287 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [1ms] 2024-12-01 20:30:09.486 [info] > git show --textconv :any.file [7ms] 2024-12-01 20:30:09.488 [info] > git ls-files --stage -- /home/nexus/git-includeIf-ln/any.file [3ms] 2024-12-01 20:30:09.732 [info] > git check-ignore -v -z --stdin [1ms] 2024-12-01 20:30:10.318 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:30:10.319 [info] > git config --get commit.template [8ms] 2024-12-01 20:30:10.320 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [2ms] 2024-12-01 20:30:10.320 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:30:10.337 [info] > git status -z -uall [9ms] 2024-12-01 20:30:10.339 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [3ms] 2024-12-01 20:30:10.884 [info] > git log --format=%H%n%aN%n%aE%n%at%n%ct%n%P%n%D%n%B -z --shortstat --diff-merges=first-parent -n50 --skip=0 --topo-order --decorate=full --stdin [2ms] 2024-12-01 20:30:10.884 [info] fatal: bad revision 'refs/heads/master' 2024-12-01 20:30:10.885 [error] [GitHistoryProvider][provideHistoryItems] Failed to get history items with options '{"historyItemRefs":["refs/heads/master"],"limit":50,"skip":0}': Failed to execute git { 
"exitCode": 128, "gitCommand": "log", "stdout": "", "stderr": "fatal: bad revision 'refs/heads/master'\n" } 2024-12-01 20:30:39.147 [info] > git add -A -- . [2ms] 2024-12-01 20:30:39.154 [info] > git -c user.useConfigOnly=true commit --quiet --allow-empty-message --file - [2ms] 2024-12-01 20:30:39.154 [info] Author identity unknown *** Please tell me who you are. Run git config --global user.email "[email protected]" git config --global user.name "Your Name" to set your account's default identity. Omit --global to set the identity only in this repository. fatal: no email was given and auto-detection is disabled 2024-12-01 20:30:39.162 [info] > git config --get-all user.name [3ms] 2024-12-01 20:30:39.176 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:30:39.177 [info] > git config --get commit.template [9ms] 2024-12-01 20:30:39.177 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [1ms] 2024-12-01 20:30:39.178 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:30:39.191 [info] > git status -z -uall [7ms] 2024-12-01 20:30:39.192 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [2ms] 2024-12-01 20:30:40.257 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:30:40.258 [info] > git config --get commit.template [6ms] 2024-12-01 20:30:40.258 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [2ms] 2024-12-01 20:30:40.258 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:30:40.272 [info] > git status -z -uall [6ms] 2024-12-01 20:30:40.274 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [2ms] 2024-12-01 20:30:40.353 [info] > git ls-files --stage -- /home/nexus/git-includeIf-ln/any.file [1ms] 2024-12-01 20:30:40.361 [info] > git cat-file -s e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 [1ms] 2024-12-01 20:30:40.385 [info] > git show --textconv :any.file [11ms] 2024-12-01 20:30:52.772 [info] > git rev-parse --show-toplevel [2ms] 2024-12-01 20:30:52.772 [info] fatal: this operation must be run in a work tree 2024-12-01 20:30:53.630 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:30:53.631 [info] > git config --get commit.template [7ms] 2024-12-01 20:30:53.631 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [1ms] 2024-12-01 20:30:53.631 [warning] [Git][getBranch] No such branch: master 2024-12-01 20:30:53.646 [info] > git status -z -uall [9ms] 2024-12-01 20:30:53.647 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [2ms] 2024-12-01 20:30:58.770 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:30:58.770 [info] > git config --get commit.template [7ms] 2024-12-01 20:30:58.771 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [1ms] 2024-12-01 20:30:58.782 [info] > git status -z -uall [6ms] 2024-12-01 20:30:58.784 [info] > git for-each-ref --sort -committerdate --format 
%(refname) %(objectname) %(*objectname) [2ms] 2024-12-01 20:30:58.810 [info] > git log --format=%H%n%aN%n%aE%n%at%n%ct%n%P%n%D%n%B -z --shortstat --diff-merges=first-parent -n50 --skip=0 --topo-order --decorate=full --stdin [3ms] 2024-12-01 20:30:58.886 [info] > git show --textconv :any.file [7ms] 2024-12-01 20:30:58.887 [info] > git ls-files --stage -- /home/nexus/git-includeIf-ln/any.file [2ms] 2024-12-01 20:30:58.894 [info] > git cat-file -s e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 [1ms] 2024-12-01 20:30:59.816 [info] [Git][getRemotes] No remotes found in the git config file 2024-12-01 20:30:59.822 [info] > git config --get commit.template [6ms] 2024-12-01 20:30:59.823 [info] > git for-each-ref --format=%(refname)%00%(upstream:short)%00%(objectname)%00%(upstream:track)%00%(upstream:remotename)%00%(upstream:remoteref) refs/heads/master refs/remotes/master [1ms] 2024-12-01 20:30:59.836 [info] > git status -z -uall [7ms] 2024-12-01 20:30:59.837 [info] > git for-each-ref --sort -committerdate --format %(refname) %(objectname) %(*objectname) [2ms] 2024-12-01 20:30:59.917 [info] > git ls-files --stage -- /home/nexus/git-includeIf-ln/any.file [1ms] 2024-12-01 20:30:59.925 [info] > git cat-file -s e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 [2ms] 2024-12-01 20:30:59.932 [info] > git show --textconv :any.file [1ms] </code></pre> </details>
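To help narrow this down, here is a small terminal check (using the same paths as above) that contrasts the symlinked path with its resolved target. If the extension resolves the symlink before invoking Git, the second result would match what VS Code sees; this is a hypothesis, not a confirmed root cause:

```sh
# From the symlinked checkout: the includeIf pattern should match
git -C ~/git-includeIf-ln config --get user.email    # -> [email protected]

# From the resolved target: the pattern **/git-includeIf-ln/ no longer matches
git -C "$(realpath ~/git-includeIf-ln)" config --get user.email    # -> (empty)
```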
bug,git
low
Critical
2,709,267,748
TypeScript
Flattening types in generated .d.ts files for libraries
### 🔍 Search Terms "delcaration readable" "declaration pretty" ### ✅ Viability Checklist - [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code - [x] This wouldn't change the runtime behavior of existing JavaScript code - [x] This could be implemented without emitting different JS based on the types of the expressions - [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.) - [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types - [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals ### ⭐ Suggestion Allow library authors to add the `inline` modifier on a type declaration which makes the compiler flatten the type in generated code. An `inline` modifier for type declarations ```ts // Strawman syntax // Instead of this type Foo = Bar & Baz | Omit<Bat, "x" | "y"> // The library author can write this inline type Foo = Bar & Baz | Omit<Bat, "x" | "y"> type Bar = { foo: "b" }; type Baz = { bar: "c" }; type Bat = { x: "x"; y: "y"; z: "z"; a: "a" }; ``` The compiler would mostly ignore the `inline` directive (it would behave similarly to the `Prettify` [helper type](https://www.totaltypescript.com/concepts/the-prettify-helper), however, when building the declaration files, the compiler will generate the flattened version like this ```ts type Foo = { foo: "b"; bar: "c"; } | { z: "z"; a: "a"; } ``` Alternatively, instead of a modifier on a declaration, it can be a modifier we can add on types ```ts type Foo = Inline<Bar> & Inline<Baz> | Inline<Omit<Bat, "x" | "y">> ``` Honestly, I'm not sure of the full semantics of `inline`. My mental intuition is based on the `Prettify` type helper. ```ts type Prettify<T> = { [K in keyof T]: T[K]; } & unknown; ``` Alternatively, a tsconfig.json setting might also work. Although I'm sure to have the best experience, the compiler might need some feedback from the library author too. Sorry if this isn't fully fleshed out, but I'm willing to work on a full proposal in case the team is open to the idea. ### 📃 Motivating Example We have a lot of computed types in our design system library. Most (React) components can receive Theme defined props. Here's a verbatim example which is our `Box` component, which is a design system aware `div`. ```ts export type BoxProps = AsChild & ThemeProps & styles.SprinklesProps & React.HTMLAttributes<HTMLDivElement>; ``` This describes the type perfectly, but it leads to very poor DX for the callers. When someone Cmd-Clicks `Box`, they see a type which isn't really helpful when trying to figure out the API of `Box`. What I'd like to see in the generated declarations would be something like the following ![Image](https://github.com/user-attachments/assets/b7bf04fc-cd9a-46a1-aaa6-f33bfff1ee09) ### 💻 Use Cases 1. What do you want to use this for? Improving the UX of our library consumers when there are computed types in the public API. 2. What shortcomings exist with current approaches? It's hard to understand the API of our library because cmputed types obscure the actual properties that are accepted by the types 3. What workarounds are you using in the meantime? We wrap certain public API types with the `Prettify` helper type so that the users can hover over certain types to received a flattened version of the type
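For completeness, the workaround mentioned in point 3 looks roughly like this when applied to the motivating example (the surrounding types are the ones assumed above). It improves hover/quick info, but the emitted declaration still references the helper alias rather than a flattened object type, which is exactly the gap this suggestion is about:

```ts
type Prettify<T> = { [K in keyof T]: T[K] } & unknown;

// Hovering on BoxProps in an editor shows the expanded shape, but the
// generated .d.ts still contains `Prettify<...>` rather than the flattened type.
export type BoxProps = Prettify<
  AsChild & ThemeProps & styles.SprinklesProps & React.HTMLAttributes<HTMLDivElement>
>;
```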
Suggestion,Awaiting More Feedback
low
Minor
2,709,304,618
godot
Changing scene twice in `_ready` fails to call `_ready` on final scene
### Tested versions

Reproducible in 4.3-stable

### System information

Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (8 Threads)

### Issue description

Create a series of 3 scenes with attached scripts. Scene 1 contains a call to change the scene to scene 2 in its `_ready` method:

```gdscript
func _ready():
	print("First scene")
	get_tree().change_scene_to_file("res://SecondScene.tscn")
```

Scene 2 then does a similar operation in `_ready`, changing the scene to scene 3:

```gdscript
func _ready():
	print("Second scene")
	# await get_tree().create_timer(0.01).timeout # Uncommenting this line fixes the issue /\
	get_tree().change_scene_to_file("res://ThirdScene.tscn")
```

This (as far as I am aware) should result in scene 3 being the currently active scene; however, a script attached to scene 3 does not have its `_ready` method called once these transitions occur. Scene 3's `_init` method is successfully called, though.

### Steps to reproduce

Run the attached project and note that the message "Third scene ready" is not printed. Uncommenting the `await` call in `second_scene.gd` produces the correct behaviour.

### Minimal reproduction project (MRP)

[scene_change_bug.zip](https://github.com/user-attachments/files/17969295/scene_change_bug.zip)
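A timer-free variant of the same workaround, in case it helps triage: waiting a single frame before requesting the second change also appears to avoid the problem (a sketch, not verified against this exact MRP):

```gdscript
# second_scene.gd — alternative workaround sketch
func _ready():
	print("Second scene")
	# Let the first change_scene_to_file() finish before requesting another one.
	await get_tree().process_frame
	get_tree().change_scene_to_file("res://ThirdScene.tscn")
```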
bug,topic:core
low
Critical
2,709,313,077
PowerToys
Make a tool to disable/reenable mouse devices on standby/wakeup
### Description of the new feature / enhancement

A PowerToy that disables mouse devices when standby is initiated and re-enables them when the computer resumes. It would be a good workaround for the issue of Windows 11 and Modern Standby reacting to mouse events.

### Scenario when this would be used?

Since Windows 11 and Modern Standby, there is no way to tell Windows not to wake the computer on mouse events (`powercfg /DEVICEDISABLEWAKE` and the checkbox in Device Manager are ignored).

That is a common annoyance: the computer often wakes up immediately after standby (as your hand releases the mouse), and laptops may wake up unexpectedly in bags and overheat. So unless the Windows team plans to fix that problem very soon, a tool as described above would be nice.

### Supporting information

See https://www.reddit.com/r/thinkpad/comments/sbql0n/disable_usb_mouse_from_waking_up_device_from_sleep/ for more discussion on the subject (in particular, people on Windows 11, like me, reporting that the trivial solution does not work).
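For reference, a rough sketch of what the proposed tool would automate: manually disabling and re-enabling the mouse-class devices from an elevated PowerShell session. The device selection and the hooking into sleep/resume events are assumptions, not a finished design:

```powershell
# Sketch only: run elevated; selects every working mouse-class device.
$mice = Get-PnpDevice -Class Mouse -Status OK

# Before entering standby:
$mice | Disable-PnpDevice -Confirm:$false

# After resume:
$mice | Enable-PnpDevice -Confirm:$false
```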
Idea-New PowerToy,Needs-Triage,Product-Mouse Utilities
low
Minor
2,709,502,538
TypeScript
Generic function with a function as argument: autocomplete broken unless the generic-typed parameter of the argument function is its first parameter
### 🔎 Search Terms typescript generic function no autocomplete function as argument ### 🕗 Version & Regression Information Relevant versions are 5.4.5 and above, tested in all versions from 5.4.5 to 5.8.0 (Nightly) inclusive, I was able to reproduce the bug in all versions. ### ⏯ Playground Link https://www.typescriptlang.org/play/?ts=5.7.2#code/IYZwngdgxgBAZgV2gFwJYHsIwE4FMDmqIyu2ACgDYKEQBCwAJgDwDyADsjLgB4kQMgYAJVxR02ZsWyoI+ADQxgEMAD4VACijAKFAEbAoAawBcMdcDZtTSsAvQcMEEKfbIAlDAC8KmADd0qAx2HM4wAHLoAJIQcKSsHCpupmTY6AC2RLhM-oE+AN4AvgBQRXiExKSU1DL0DOrmlsHIoXkwIOm4yAAWMvimUr0wBR7eMIUKeUUwMAD0MzC6uBToAO6KCMjoYmlsFJ24MEQLqYa4EAq4vmcw3ejUXTddB8hgbAdHbKlv2BRgftqBYAkBhTNodIrDADcJSKc0OEBIeGIvV+CmAGy26V2+xgK3EhkEqDgMAARPY0JgQCTFNh8Ag0mdONtcII4KhsMQFHBxFxuMAdntjEVQJBYIgUI4cAQiIiqjQAOLodDMVy8vgCYSicSSZDSWRo5RqTTaPQGExmcmOUKuNGWazKEY+HJBGDk0IRaKxbDxZCJZKpDIgLLO-LFUrSirkKgKpV1dRu0ytdoM7q9fq6wYFW1WRQOryhiaguGLZZrdGbbbYki4-EgUHJ3AQtyQoA ### 💻 Code ```ts async function registerPluginBad<Opt extends Record<string, any>>(callback: (app: any, options: Opt) => void, opts: NoInfer<Opt>): Promise<void> {} registerPluginBad((app, opts: { something: string }) => {}, { // below autocomplete is broken, even though the type is properly validated some }); // interestingly, autocomplete works if "options" argument comes first, for example: async function registerPluginGood<Opt extends Record<string, any>>(callback: (options: Opt, app: any) => void, opts: NoInfer<Opt>): Promise<void> {} registerPluginGood((opts: { something: string }, app: any) => {}, { // below autocomplete works some }); ``` ### 🙁 Actual behavior This one has to be tested in the playground: registerPluginBad - second argument does not autocomplete, however type is properly validated registerPluginGood - exact same function, except the first argument "callback" arguments have changed the order, which for some reason fixes the autocomplete ### 🙂 Expected behavior For both functions, autocomplete should work. Argument order should not matter. Both do not infer from "opts", so any inference should be from "options" argument of the callback. ### Additional information about the issue _No response_
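One possible workaround (my own suggestion, not verified across all listed versions): passing the type argument explicitly pins `Opt` up front, so the object literal gets its contextual type regardless of parameter order and completions should work again:

```ts
// Same callback-first signature as registerPluginBad above.
registerPluginBad<{ something: string }>(
  (app, opts) => {},
  {
    something: "value", // typing here now offers "something" in autocomplete
  },
);
```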
Suggestion,Help Wanted,Experience Enhancement
low
Critical
2,709,503,895
go
proposal: require more information for proposals
### Proposal Details

The current [proposal process](https://github.com/golang/proposal) for non-spec changes is fairly lightweight for the proposer: there is a single text box labeled "Proposal Details". This has led to quite a large number of proposals: at the time of filing, [743 open proposals marked incoming](https://github.com/orgs/golang/projects/17), with a total of 636 filed this year so far, an average of 1.89 per day.

A recurring pattern is that proposals lack detail: they often just vaguely describe a desired change. This results in repetitive back and forth as we try to narrow the proposal down to a specific API change that can be evaluated, and to surface its motivations.

I propose a 3-field proposal form that covers the majority of proposals we see:

* **Proposal summary:** in words, describe the proposal.
* **Motivation:**
  * Describe why the change should be made.
  * If this is already possible in a third-party library, why it should be in the standard library.
  * If this has previously been proposed and declined, what significant new information justifies revisiting the decision.
  * Include references to external specs, code searches for frequent patterns, etc.
* **Change details:**
  * For API changes, the desired new API in godoc format, including comments.
  * For behavioural changes, the updated godoc describing the new behaviour.

cc @golang/proposal-review
Proposal
low
Major
2,709,527,675
TypeScript
should error on assignment that shadows a prototype method
### 🔎 Search Terms own property, shadow, prototype, method, field, inheritance ### 🕗 Version & Regression Information - This is the behavior in every version I tried, and I reviewed - https://github.com/microsoft/TypeScript/issues/13141 - [MDN docs on classes](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes) - [TS docs on classes](https://www.typescriptlang.org/docs/handbook/2/classes.html) ### ⏯ Playground Link https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241201#code/MYGwhgzhAEBiD29oG8BQ0PWPAdhALgE4Cuw+8hAFAJQrqYP4AWAlhAHQAOh85+AnpwCmAWSHN4AE2gBeaDVkA+Og1WZseeCCHsQ8AOaUA5BCZhJ8AO4xuvXoKHQAtuKZSj1ANz01AXx-Q-gy2fA5iEpIKaGoYGhBaOnqGRi4R0LjQIfbCHt4M-r5AA ### 💻 Code ```ts class Foo { constructor() { this.prototypeMethod = () => { console.log('shadows prototype method'); } } prototypeMethod() { console.log('method on prototype'); } } ``` ### 🙁 Actual behavior no error ### 🙂 Expected behavior error - something like "Type 'Foo' has no instance property named 'prototypeMethod'. If you meant to reassign the method 'prototypeMethod', it must be a function-valued instance property instead of a prototype method.". I expect an error here because the code is effectively equivalent to ```ts class Foo { prototypeMethod = () => { console.log('shadows prototype method'); }; prototypeMethod() { console.log('method on prototype'); } } ``` which [is a TS error](https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241201#code/MYGwhgzhAEBiD29oG8BQ0PQA4Cd4BcCBPLAUwFlT8ALeAE2gF5oAKASiYD4V1M-h4AOwjwQpAHQh4AcxYByCNTB14Adxi4CxMtAC2VWnTlsA3LwwBfM302F8JCgfrseffkJFjJM+fpr1oIWw8Owdja0wLVCjUAWF8aAAzRCZoQVJVOER2M2T4cVttR386HKA) (see https://github.com/microsoft/TypeScript/issues/13141). Furthermore, it breaks error reporting in subclasses. The following code correctly reports a TS error due to the extended class not being able to override an instance property with a method: ([playground](https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241201#code/MYGwhgzhAEBiD29oG8BQ0PQJYDsIBcwdgBTABQCd4AHEi-AT2gF5oAKAShYD4V1MBweHnggSAOhDwA5mwDkAMwCuxfFmEBaAG5gQSkgBNseQsRLRqVWvQZyOAbn4YAvqlepQkGACEwFaCQAHvgkOAYwCEhoAgD0MdAAKgDK0ACiAErpAPLpAFzQAMLgUNBykXLQBiQKuCQwuAREpNAAtiQtAEZ0FlZ0jKUNpqSUNH22ADTQHUr4AcGhVUaeJXK+FBVVNTh12LOQxo1mre1d-sqq6jjibABMACw3AKwcTgdD5L02nHwCgsIQogkUlkcngWjoFCwBiqOGgAHcsPgABbHZHwAx2V6udxCExTFjQbZw6BrTiODriQZND6jL4OaBxHq4fAwRQqYBqTQ6PSGN7Unq0xhyIA)) ```ts class Foo { instanceProperty = () => { console.log('function-valued instance property'); } } class Bar extends Foo { // TS ERROR: Class 'Foo' defines instance member property 'instanceProperty', but extended class 'Bar' defines it as instance member function.(2425) instanceProperty() { console.log('overridden with method') } } const b = new Bar(); b.instanceProperty(); // prints 'function-valued instance property' ``` However, this does not ([playground](https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241201#code/MYGwhgzhAEBiD29oG8BQ0PWPAdhALgE4Cuw+8hAFAJQrqYP4AWAlhAHQC2Aps-ACbQAvNBrCAfHQbTM2PPBDd2IeAHNKAcgBmxHGRa4AtADcwIYt0Es8+MHu7QADoXiPuhfAE8N1ejIC+ftCBDDx8-GJoMhhyEApKKuoaYUwCPgDcQYGo2aCQMABCYITQ3AAe+Nw4-DAISFGYAPSN0DhI7i4lTO4O0ABclEEpApFBDLHxymqa8MYdLPz8VdAA7izM0MP8Plk5qKix+NAARsKt3CvQRVTUmcdcvKkRt9DNToTW+DDauvpGpuZLNBrAQ7MAHM5XO4vBogA)) ```ts class Foo { constructor() { this.method = () => { console.log('function-valued instance property') } } method() { console.log('method'); } } class Bar extends Foo { // no error here :( method() { console.log('overridden with method') } } const b = new Bar(); b.method(); // prints 
'function-valued instance property' ``` ### Additional information about the issue inspired by https://github.com/typescript-eslint/typescript-eslint/issues/10427 cc @miguel-leon
Suggestion,Awaiting More Feedback
low
Critical
2,709,581,688
pytorch
StrideAPI caused regression in channels-last logic
### 🐛 Describe the bug

https://github.com/pytorch/pytorch/pull/128393 caused numerous performance/correctness regressions for ops that had a special codepath for channels-last; see:

- https://github.com/pytorch/pytorch/issues/141471
- https://github.com/pytorch/pytorch/issues/140902
- https://github.com/pytorch/pytorch/issues/137800
- https://github.com/pytorch/pytorch/issues/142344

It would be nice to have better testing for those cases.

### Versions

2.5.0, 2.5.1, nightly

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @kulinseth @albanD @DenisVieriu97 @jhavukainen
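To make "better testing" concrete, a rough sketch of the kind of channels-last check that could be parametrized over the affected ops (the op, shapes, and the layout-preservation expectation are placeholders, not a statement about any specific regression):

```python
import torch
import torch.nn.functional as F

# Sketch: compare a channels-last input against the contiguous reference and,
# where the op is expected to preserve the layout, assert that as well.
x = torch.randn(2, 3, 8, 8).to(memory_format=torch.channels_last)
out = F.avg_pool2d(x, kernel_size=2)
ref = F.avg_pool2d(x.contiguous(), kernel_size=2)

torch.testing.assert_close(out, ref)
# Only meaningful for ops documented to propagate channels-last:
assert out.is_contiguous(memory_format=torch.channels_last)
```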
high priority,triaged,module: regression,module: mps
low
Critical
2,709,584,057
vscode
Checked Render Whitespace is not applied when VS Code is initialised at start
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes/No <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: 1.82.2 (Universal) - OS Version: Darwin arm64 23.4.0 Steps to Reproduce: 1. View -> Appearance 2. Validate the Render Whitespace check 3. Validate the whitespace is being shown or not
bug,editor-render-whitespace
low
Critical
2,709,590,621
deno
`node:worker_threads` doesn't load passed-in text as CommonJS
Deno 2.1.2

```ts
const { Worker } = require("worker_threads");
const w = new Worker(`const self = require("worker_threads");`, { eval: true });
```

```
error: Uncaught (in worker "[worker eval]") (in promise) ReferenceError: require is not defined
const self = require("worker_threads");
             ^
    at data:text/javascript,const self = require("worker_threads");:1:14
info: Deno supports CommonJS modules in .cjs files, or when the closest package.json has a "type": "commonjs" option.
hint: Rewrite this module to ESM, or change the file extension to .cjs, or add package.json next to the file with "type": "commonjs" option, or pass --unstable-detect-cjs flag to detect CommonJS when loading.
docs: https://docs.deno.com/go/commonjs
```

`--unstable-detect-cjs` doesn't help. In my original project, the worker is in a .cjs file (a third-party lib) and I import it from ESM. I guess it would work if everything was CJS. `import("node:worker_threads")` (with the `node:` prefix) works, while the script that created the worker can use `require("worker_threads")`.
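A sketch of a possible workaround based on the observation above, in case it helps triage: since the eval'd string is loaded as an ESM `data:` URL, writing it as ESM with a dynamic `import()` avoids the missing `require`. Untested beyond the pattern described in this report:

```ts
const { Worker } = require("worker_threads");

new Worker(
  `const self = await import("node:worker_threads");
   console.log("worker loaded, parentPort is", typeof self.parentPort);`,
  { eval: true },
);
```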
bug,node compat
low
Critical
2,709,609,398
pytorch
Invalid `RPATH` for `_C.so` for in-tree installation: assumption `"$ORIGIN/../.." -> site-packages` fails
### 🐛 Describe the bug For in-tree installation, there is a `pytorch-xxx.pth` which contains: https://github.com/pytorch/pytorch/blob/5deca07c0dcf1482eba99bf93b805cf1cc41ad6c/tools/nightly.py#L762-L771 points to the git repository. The `torch` package is not installed in the `site-packages` directory. ```console $ make setup-env-cuda PYTHON="/home/linuxbrew/.linuxbrew/bin/python3.12" $ source venv/bin/activate $ patchelf --print-rpath torch/_C.*.so | tr ':' '\n' $ORIGIN/../../nvidia/cublas/lib $ORIGIN/../../nvidia/cuda_cupti/lib $ORIGIN/../../nvidia/cuda_nvrtc/lib $ORIGIN/../../nvidia/cuda_runtime/lib $ORIGIN/../../nvidia/cudnn/lib $ORIGIN/../../nvidia/cufft/lib $ORIGIN/../../nvidia/curand/lib $ORIGIN/../../nvidia/cusolver/lib $ORIGIN/../../nvidia/cusparse/lib $ORIGIN/../../cusparselt/lib $ORIGIN/../../nvidia/nccl/lib $ORIGIN/../../nvidia/nvtx/lib $ORIGIN $ORIGIN/lib ``` For CUDA builds, the PyPI wheels require the `torch` package to be installed in the `site-packages` directory: ```console $ python3 -c 'import torch' Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/PanXuehai/Projects/pytorch/torch/__init__.py", line 378, in <module> from torch._C import * # noqa: F403 ^^^^^^^^^^^^^^^^^^^^^^ ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory ``` https://github.com/pytorch/pytorch/blob/5deca07c0dcf1482eba99bf93b805cf1cc41ad6c/.ci/manywheel/build_cuda.sh#L178-L195 https://github.com/pytorch/pytorch/blob/5deca07c0dcf1482eba99bf93b805cf1cc41ad6c/.ci/manywheel/build_xpu.sh#L90-L95 Maybe related: - #138506 ### Versions nightly cc @seemethere @malfet @osalpekar @atalman
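A hedged workaround sketch for local development until the RPATH handling is fixed: point the dynamic loader at the venv's `site-packages`, where the NVIDIA wheels actually live. The exact library subdirectories and the `--add-rpath` flag (which needs a reasonably recent patchelf) are assumptions for illustration:

```bash
# Option 1: per-shell, no file modification
SITE="$(python -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])')"
export LD_LIBRARY_PATH="$SITE/nvidia/cudnn/lib:$SITE/nvidia/cublas/lib:$LD_LIBRARY_PATH"

# Option 2: bake the path into the extension module instead
patchelf --add-rpath "$SITE/nvidia/cudnn/lib" torch/_C.*.so
```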
module: binaries,triaged,topic: build,topic: binaries
low
Critical
2,709,611,040
vscode
feature request: vscode://file scheme url add support for tilde based links
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> Maybe I'm missing something.. but I wish to be able to open file in vscode with a tilde in the path. That way a server or a greasemonkey JS scriplet can add a generic file opener button currently this works ``` vscode://file/Users/t/foo.txt ``` but not ``` vscode://file/~/foo.txt ``` It says path `/~/foo.txt` doesn't exist. Neither ``` vscode://file~/foo.txt ``` Nothing happens with this whether the file exists or not. The support was added long back.. https://github.com/microsoft/vscode/pull/39122 https://github.com/microsoft/vscode-docs/issues/5215
feature-request,workbench-os-integration
low
Minor
2,709,649,364
godot
Black background when trying to use per-pixel transparency
### Tested versions I am using, and encountered the recurring issue on the official Godot 4.3.stable build, on compatibility mode. ### System information Godot v4.3.stable - Windows Version 10.0.22631 Build 22631 - NVIDIA GeForce RTX 3070 Laptop GPU ### Issue description My native resolution is 1920x1080, and when I set this as the window resolution in project settings, my transparent background turns black. Attempting to screenshot with PrtSc reverts it back to a transparent state, but returns to black afterwards, so I can't attach any screenshots. Changing the resolution to 1920x1079 fixes the issue, but I'm not sure how sustainable it'll be in the long run when adapting to other resolutions since the one pixel gap might become more noticeable. The project setting window mode is set to Windowed, with Borderless enabled, since fullscreen seems to make the transparent background black. ### Steps to reproduce In Project Settings > Display > Window > Size, set the Viewport Width and height to 1920 and 1080 respectively, then turn on Borderless, and set the Mode to Windowed. In the same area, enable Transparent, as well as check "Allowed" under Per Pixel Transparency. Then enable Transparent Background in Rendering > Viewport. Running the main scene shows a black background in place of the intended transparent background. Reducing the resolution to 1280x720, or comically 1920x1079, restores the intended transparent background. I assume the fact that my native resolution is 1920x1080 plays an important role in finding the solution. I'm relatively new to Godot, and using game engines in general, but I hope I provided enough information. ### Minimal reproduction project (MRP) [planner-project2.zip](https://github.com/user-attachments/files/17969954/planner-project2.zip)
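For easier reproduction, the settings described above should correspond roughly to this `project.godot` excerpt (setting names are from 4.3; treat it as an approximation of the described configuration rather than an excerpt from the MRP):

```ini
[display]

window/size/viewport_width=1920
window/size/viewport_height=1080
window/size/mode=0
window/size/borderless=true
window/size/transparent=true
window/per_pixel_transparency/allowed=true

[rendering]

viewport/transparent_background=true
```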
bug,topic:rendering,needs testing
low
Minor