id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,686,122,133 | neovim | signcolumn: toggle alignment of signs | ### Problem
By default, signs in the signcolumn appear to be left justified. In other words, the first sign shows up on the far left, with additional signs added to the right of that, up to the maximum allowed.
However, when the signcolumn is wide, this can lead to significant visual separation between the sign and the line numbers/text when there aren't many signs on a line. For example, consider the lonely `W` at the top left: notice the big space between it and the line number:

### Expected behavior
It would be nice if this could be configured. For example, `signalign = "left" | "right"`, where `"left"` matches the current default behavior (left-alignment, new signs added to the right) and `"right"` would produce right-alignment with new signs added to the left. | enhancement,documentation,column | low | Major |
2,686,171,676 | vscode | Settings Defaults gets very confused when updating extensions. |
Type: <b>Bug</b>
If the settings schema for an extension has changed during an update, the change is not correctly detected by the VS Code Settings editor.
## Steps to Reproduce
1. Install the Code Spell Checker extension version `v3.0.1` (the old version).
2. Restart VS Code -- this is necessary if the extension was already installed.
3. Go to the extension's settings.
4. Notice the list of settings (59) -- keep the Settings editor open.
5. Update the extension to v4.0.21.
6. Restart extensions as instructed.
7. Notice the list of settings (59) -- which is incorrect.
8. Closing the Settings editor and reopening it doesn't make a difference; it still shows stale settings.
9. Even restarting the Extension Host does not work.
10. The only option is to restart VS Code.
11. Restart VS Code.
12. Go to the Code Spell Checker settings. Notice that there are now 77.

VS Code version: Code 1.95.3 (Universal) (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Darwin arm64 24.1.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M3 Pro (12 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 4, 3|
|Memory (System)|36.00GB (0.47GB free)|
|Process Argv|--crash-reporter-id 21ae0ece-62a2-4681-a5cf-c687ea23e753|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (95)</summary>
Extension|Author (truncated)|Version
---|---|---
terraform|4op|0.2.5
ada|Ada|26.0.202411173
commit-message-editor|ada|0.25.0
tsl-problem-matcher|amo|0.6.2
alignment|ann|0.3.0
cpupro|ant|0.1.1
vscode-zipfs|arc|3.0.0
astro-vscode|ast|2.15.4
markdown-mermaid|bie|1.27.0
mermaid-markdown-syntax-highlighting|bpr|1.7.0
npm-intellisense|chr|1.4.5
codesandbox-projects|Cod|0.2.142
jison-syntax-highlight|cru|0.1.1
scala|dal|0.0.5
vscode-jq-playground|dav|4.3.5
vscode-eslint|dba|3.0.13
vscode-wasm|dts|1.4.1
gitlens|eam|16.0.3
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
linter-gfortran|for|3.2.0
copilot|Git|1.245.0
copilot-chat|Git|0.22.4
remotehub|Git|0.64.0
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.100.3
gitpod-desktop|git|0.0.180
yaml-plus-json|hil|1.12.2
mediawiki|jak|2.1.0
latex-workshop|Jam|10.5.6
svg|joc|1.5.4
jq-syntax-highlighting|jq-|0.0.2
vscode-tree-sitter-query|jri|0.0.6
language-haskell|jus|3.6.0
vscode-cfml|Kam|0.5.4
bison|lun|0.1.0
Kotlin|mat|1.7.1
Lisp|mat|0.1.12
rainbow-csv|mec|3.13.0
dotenv|mik|1.0.1
vscode-apache|mrm|1.2.0
vscode-puglint|mrm|2.3.0
vscode-azureresourcegroups|ms-|0.9.9
vscode-docker|ms-|1.29.3
vscode-language-pack-de|MS-|1.95.2024103009
vscode-dotnet-runtime|ms-|2.2.3
al|ms-|14.1.1180850
playwright|ms-|1.1.12
black-formatter|ms-|2024.4.0
debugpy|ms-|2024.12.0
isort|ms-|2023.10.1
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.3
jupyter|ms-|2024.10.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.391.0
remote-ssh|ms-|0.115.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
azure-account|ms-|0.12.0
azure-repos|ms-|0.40.0
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
live-server|ms-|0.4.15
powershell|ms-|2024.4.0
remote-explorer|ms-|0.4.3
remote-repositories|ms-|0.42.0
test-adapter-converter|ms-|0.2.1
vscode-js-profile-flame|ms-|1.0.9
vsliveshare|ms-|1.0.5941
vetur|oct|0.37.3
common-lisp|qin|1.2.11
vscode-yaml|red|1.15.0
rust-analyzer|rus|0.3.2188
es6-mocha-snippets|spo|0.2.2
avro|str|0.5.0
code-spell-checker|str|4.0.21
hunspell|str|1.0.4
iconfont-preview|stx|0.0.5
svelte-vscode|sve|109.3.2
even-better-toml|tam|0.19.2
msbuild-project-tools|tin|0.6.6
es6-string-html|Tob|2.16.0
vscode-mermaid-editor|tom|0.19.1
vscode-mdx|uni|1.8.11
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
debug|web|0.27.0
php-debug|xde|1.35.0
php-intellisense|zob|1.3.3
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
vscrp:30673768
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31185842
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | bug,settings-editor | low | Critical |
2,686,174,929 | tauri | [bug] could not find native static library `ring_core_0_17_8_` when build android apk | ### Describe the bug
error: could not find native static library `ring_core_0_17_8_`, perhaps an -L flag is missing?
error: could not compile `ring` (lib) due to 1 previous error
### Reproduction
pnpm tauri android build --apk
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 130.0.2849.80
✔ MSVC: Visual Studio Build Tools 2019
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 18.18.0
- pnpm: 8.9.0
- yarn: 1.22.4
- npm: 9.8.1
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
```text
error: could not find native static library `ring_core_0_17_8_`, perhaps an -L flag is missing?
warning: `ring` (lib) generated 1 warning
error: could not compile `ring` (lib) due to 1 previous error; 1 warning emitted
Caused by:
.cargo\registry\src\rsproxy.cn-0dccff568467c15b\ring-0.17.8&& set CARGO_PKG_AUTHORS="Brian Smith <[email protected]>"&& set CARGO_PKG_DESCRIPTION="Safe, fast, small crypto using Rust."&& set CARGO_PKG_HOMEPAGE=""&& set CARGO_PKG_LICENSE=""&& set CARGO_PKG_LICENSE_FILE=LICENSE&& set CARGO_PKG_NAME=ring&& set CARGO_PKG_README=README.md&& set CARGO_PKG_REPOSITORY=https://github.com/briansmith/ring&& set CARGO_PKG_RUST_VERSION=1.61.0&& set CARGO_PKG_VERSION=0.17.8&& set CARGO_PKG_VERSION_MAJOR=0&& set CARGO_PKG_VERSION_MINOR=17&& set CARGO_PKG_VERSION_PATCH=8&& set CARGO_PKG_VERSION_PRE=""&& set OUT_DIR=E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\build\ring-83d2c80ee44a91e6\out&& set PATH="E:\WorkSpace\RProject\hote\src-tauri\target\release\deps;E:\WorkSpace\RProject\hote\node_modules\.bin;C:\Users\wgtam\AppData\Roaming\npm\node_modules\pnpm\dist\node-gyp-bin;C:\Program Files\Scripts\;C:\Program Files\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\ProgramData\chocolatey\bin;C:\Program Files\Git\cmd;C:\Program Files\Docker\Docker\resources\bin;D:\Program\nodejs\;C:\Program Files\dotnet\;C:\Program Files\Pandoc\;C:\Program Files (x86)\NetSarang\Xshell 8\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;D:\Program\MiniConda3;D:\Program\MiniConda3\Library\mingw-w64\bin;D:\Program\MiniConda3\Library\usr\bin;D:\Program\MiniConda3\Library\bin;D:\Program\MiniConda3\Scripts;C:\Users\wgtam\AppData\Local\Programs\Python\Python37\Scripts\;C:\Users\wgtam\AppData\Local\Programs\Python\Python37\;D:\Program\Python37\Scripts\;D:\Program\Python37\;D:\Program\python\Scripts\;D:\Program\python\;C:\Users\wgtam\.cargo\bin;C:\Users\wgtam\AppData\Local\Microsoft\WindowsApps;D:\Program\Microsoft VS Code\bin;C:\Users\wgtam\AppData\Local\Programs\Fiddler;E:\Mysql\MySQL Server 8.0\bin;C:\Users\wgtam\AppData\Roaming\npm;C:\Users\wgtam\.dotnet\tools;C:\Program Files\dotnet\sdk;E:\tools\Godot_v4.2.1-stable_mono_win64\GodotSharp\Tools\nupkgs;E:\apache-maven-3.9.6\bin;D:\Program\IntelliJ IDEA
2024.1.2\bin;;D:\Android\SDK\build-tools;D:\Android\SDK\tools;D:\Android\SDK\platform-tools;D:\Android\Studio\jbr\bin;"&& set RING_CORE_PREFIX=ring_core_0_17_8_&& C:\Users\wgtam\.rustup\toolchains\stable-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name ring --edition=2021 C:\Users\wgtam\.cargo\registry\src\rsproxy.cn-0dccff568467c15b\ring-0.17.8\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=196 --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"alloc\"" --cfg "feature=\"default\"" --cfg "feature=\"dev_urandom_fallback\"" --check-cfg cfg(docsrs) --check-cfg "cfg(feature, values(\"alloc\", \"default\", \"dev_urandom_fallback\", \"less-safe-getrandom-custom-or-rdrand\", \"slow_tests\", \"std\", \"test_logging\", \"unstable-testing-arm-no-hw\", \"unstable-testing-arm-no-neon\", \"wasm32_unknown_unknown_js\"))" -C metadata=12fd6bfa9ddd151b -C extra-filename=-12fd6bfa9ddd151b --out-dir E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\deps --target aarch64-linux-android -C linker=D:\Android\SDK\ndk\28.0.12674087\toolchains/llvm/prebuilt/windows-x86_64\bin\aarch64-linux-android24-clang.cmd -C strip=debuginfo -L dependency=E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\deps -L dependency=E:\WorkSpace\RProject\hote\src-tauri\target\release\deps --extern cfg_if=E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\deps\libcfg_if-c691cec757294987.rmeta --extern getrandom=E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\deps\libgetrandom-9247e49a06156adc.rmeta --extern libc=E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\deps\liblibc-5fe46d5f133ea63c.rmeta --extern spin=E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\deps\libspin-eb62214d0b0642b5.rmeta --extern untrusted=E:\WorkSpace\RProject\hote\src-tauri\target\aarch64-linux-android\release\deps\libuntrusted-b6fc3769d2c36d2a.rmeta --cap-lints warn -Clink-arg=-landroid -Clink-arg=-llog -Clink-arg=-lOpenSLES -l static=ring_core_0_17_8_` (exit code: 1)
warning: build failed, waiting for other jobs to finish...
`Failed to run `cargo build`: command ["cargo", "build", "-vv", "--package", "hote", "--manifest-path", "E:\\WorkSpace\\RProject\\hote\\src-tauri\\Cargo.toml", "--target", "aarch64-linux-android",
"--features", "tauri/custom-protocol tauri/rustls-tls", "--lib", "--release"] exited with code 101
Error [tauri_cli_node] `Failed to run `cargo build`: command ["cargo", "build", "-vv", "--package", "hote", "--manifest-path", "E:\\WorkSpace\\RProject\\hote\\src-tauri\\Cargo.toml", "--target", "aarch64-linux-android", "--features", "tauri/custom-protocol tauri/rustls-tls", "--lib", "--release"] exited with code 101
```
### Additional context
_No response_ | type: bug,status: needs triage,platform: Android | low | Critical |
2,686,198,133 | rust | Tracking issue for release notes of #130843: Tracking Issue for `#![feature(const_float_methods)]` |
This issue tracks the release notes text for #130843.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Const Stabilized APIs
- [`<float>::recip`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.recip)
- [`<float>::to_degrees`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.to_degrees)
- [`<float>::to_radians`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.to_radians)
- [`<float>::max`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.max)
- [`<float>::min`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.min)
- [`<float>::clamp`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.clamp)
- [`<float>::abs`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.abs)
- [`<float>::signum`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.signum)
- [`<float>::copysign`](https://doc.rust-lang.org/stable/std/primitive.f32.html#method.copysign)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @eduardosm -- origin issue/PR authors and assignees for starting to draft text
| T-lang,T-libs-api,relnotes,A-const-eval,WG-const-eval,relnotes-tracking-issue | low | Minor |
2,686,205,547 | rust | Tracking issue for release notes of #132611: Add `AsyncFn*` to to the prelude in all editions |
This issue tracks the release notes text for #132611.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Libraries
- [Add `AsyncFn*` to the prelude in all editions.](https://github.com/rust-lang/rust/pull/132611)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @compiler-errors, @ibraheemdev -- origin issue/PR authors and assignees for starting to draft text
| T-lang,T-libs-api,relnotes,A-async-await,relnotes-tracking-issue | low | Critical |
2,686,267,818 | godot | KTX texture is imported darker than it is - sRGB to linear | ### Tested versions
- Reproducible in 4.3-stable
The related code was introduced in August 2023, so it might be happening in 4.2.x as well.
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.22631 - Vulkan (Mobile)
### Issue description
I noticed some glTF/GLB files were darker than I was used to seeing them. Then I realized it was related to the texture and the `KTX` format.
I've isolated the texture, and the effect is evident when I just import the texture.
Original texture

Darker texture

After some research, I found that commenting out the `srgb-to-linear` conversion makes it work fine:

https://github.com/godotengine/godot/blob/0c45ace151f25de2ca54fe7a46b6f077be32ba6f/modules/ktx/texture_loader_ktx.cpp#L500
### Steps to reproduce
1. Open an empty project and copy the texture into the project folder
2. Double-click the texture resource; you'll notice it appears darker than it is
### Minimal reproduction project (MRP)
Example texture
[QEbvwX_0.zip](https://github.com/user-attachments/files/17879545/QEbvwX_0.zip)
| bug,topic:import | low | Minor |
2,686,280,249 | rust | Compile time error, for structs with lifetime and trait functions that have a where clause | If two traits are dependent on each other and the implementing struct has lifetimes, the compiler cannot build the source code. The two traits work for all structs that do not have a lifetime.
However, you can avoid this error by commenting out the where clause from TPrivateFoo::execute().
I tried this code:
```rust
struct DData {
counter: u32,
}
pub trait TFoo: TPrivateFoo {
type NativeStruct;
}
pub trait TPrivateFoo {
fn execute(&self, root: &<Self as TFoo>::NativeStruct)
where
Self: TFoo;
}
struct DSuccessBecauseNoLifetime {
title: String,
}
impl TFoo for DSuccessBecauseNoLifetime {
type NativeStruct = DData;
}
impl TPrivateFoo for DSuccessBecauseNoLifetime {
fn execute(&self, root: &<Self as TFoo>::NativeStruct)
where
Self: TFoo,
{
println!("{}", root.counter);
}
}
struct DSuccessBecauseCommentedOutWhereClause<'a> {
title: &'a str,
}
impl TFoo for DSuccessBecauseCommentedOutWhereClause<'_> {
type NativeStruct = DData;
}
impl TPrivateFoo for DSuccessBecauseCommentedOutWhereClause<'_> {
fn execute(&self, root: &<Self as TFoo>::NativeStruct)
//where
// Self: TFoo,
{
println!("{}", root.counter);
}
}
struct DErrorBecauseOfLifetime<'a> {
title: &'a str,
}
impl TFoo for DErrorBecauseOfLifetime<'_> {
type NativeStruct = DData;
}
impl TPrivateFoo for DErrorBecauseOfLifetime<'_> {
fn execute(&self, root: &<Self as TFoo>::NativeStruct)
// ####### Compile error #######
where
Self: TFoo,
// #############################
{
println!("{}", root.counter);
}
}
```
I expected to see this happen:
The source code compiles successfully.
Instead, a compile-time error occurred:
```
error[E0609]: no field `counter` on type `&<DErrorBecauseOfLifetime<'_> as TFoo>::NativeStruct`
--> src\test.rs:57:25
|
57 | println!("{:?}", root.counter);
| ^^^^^^^ unknown field
```
### Meta
Bug **exists** in both stable and nightly builds.
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-pc-windows-msvc
release: 1.82.0
LLVM version: 19.1.1
``` | C-discussion,T-types | low | Critical |
2,686,362,258 | react | Bug: startTransition is causing TypeError when used directly from react |
React version: 18.3.1
## Steps To Reproduce
Getting this error message
`TypeError: Cannot read property 'add' of undefined`
1. Run the code example https://snack.expo.dev/@raajnadar/swr-error-in-expo-sdk-52
2. Run android or ios, the web version works without the error
3. Click the "Call the API button" button inside the mobile app
4. Check the logs tab, when switched to axios there is no issue
5. This happens with `swr` because it uses startTransition internally
Link to code example:
Import like this
```
export const startTransition: (scope: TransitionFunction) => void =
IS_REACT_LEGACY
? cb => {
cb()
}
: React.startTransition
```
And use like this
```
startTransition(() =>
setState({ data, isMutating: false, error: undefined })
)
```
Code from this repository https://github.com/vercel/swr/blob/1585a3e37d90ad0df8097b099db38f1afb43c95d/src/mutation/state.ts#L5-L10
## The current behavior
Getting this error when `startTransition` is used from React directly
`TypeError: Cannot read property 'add' of undefined`
On Expo snack the error says
`Error: "Cannot read property 'add' of undefined" in TypeError: Cannot read property 'add' of undefined << at requestUpdateLane (/data/user/0/host.exp.exponent/files/.expo-internal/5cb1b0c52b8fcab94364327c83b808ee:18892:43) << at dispatchSetState (/data/user/0/host.exp.exponent/files/.expo-internal/5cb1b0c52b8fcab94364327c83b808ee:16726:33) << at anonymous (swr.mutation:12:14616)`
## The expected behavior
There is no TypeError.
When I tried using `startTransition` from the `useTransition` hook, the problem was solved. | Status: Unconfirmed,Resolution: Needs More Information | low | Critical |
2,686,364,567 | rust | Const generic is incorrectly inferred |
I tried this code:
```rust
#![allow(unused)]
fn weird<const N: usize>()
where
[f32; N]: SomeArray,
{
let input: [f32; 1] = get_array();
}
fn get_array<const D: usize>() -> [f32; D]
where
[f32; D]: SomeArray,
{
return [0.0; D];
}
trait SomeArray {}
impl SomeArray for [f32; 1] {}
impl SomeArray for [f32; 2] {}
fn main() {}
```
I expected to see this happen:
I expect the code to compile with no errors and no warnings.
Both functions have a const generic parameter with the same constraint applied, but these parameters should be completely independent. I made sure to give them different names (`N` and `D`) to make the distinction more obvious.
I expect the `get_array` call inside `weird` to infer `D=1`, no matter what `N` is monomorphized to.
Instead, this happened:
```
error[E0308]: mismatched types
--> src/main.rs:5:27
|
5 | let input: [f32; 1] = get_array();
| -------- ^^^^^^^^^^^ expected `1`, found `N`
| |
| expected due to this
|
= note: expected array `[f32; 1]`
found array `[f32; N]`
For more information about this error, try `rustc --explain E0308`.
```
Note that the error disappears if **any single one** of these changes is applied:
* The `[f32; N]: SomeArray` constraint on `weird` is removed
* The `[f32; D]: SomeArray` constraint on `get_array` is removed
* The call to `get_array()` is turbofished to `get_array::<1>()`
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
rustc 1.85.0-nightly (a47555110 2024-11-22)
binary: rustc
commit-hash: a47555110cf09b3ed59811d9b02235443e76a595
commit-date: 2024-11-22
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
```
The behaviour on nightly 2024-11-22 is the same as on 1.82.0
`RUST_BACKTRACE=1` has no effect on the compiler output.
| T-compiler,C-bug,T-types | low | Critical |
2,686,443,354 | electron | App exits silently when unsupported `productName` is used for a platform while it works for another platform. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
When an unsupported `productName` is used in `package.json` for a platform, the app exits silently instead of warning or raising an error. A user unaware of the `productName` problem can spend days trying to find out why the app state never becomes ready and the window is never created.
For example, in my case I used the app name `App Name | Organisation Name`. It worked perfectly on macOS, but when I started testing on Windows the app state never became ready, so the browser window was not created. Windows does not support this list of reserved characters: <, >, :, ", /, `\`, |, ?, *. If a simple error had been shown, the user would not have to spend days figuring this out; since it worked on macOS I never suspected an issue in package.json and kept debugging other files.
### Proposed Solution
Please add a check that the `productName` set in `package.json` is actually supported by the platform, and if it is not, warn or raise an error so the user can correct it.
For example, Windows does not support an app name that includes any of these reserved characters: <, >, :, ", /, `\`, |, ?, *
### Alternatives Considered
The proposed solution is straightforward and easy to implement.
### Additional Information
Let me know if I should go ahead, fix it and raise a PR. Thanks | enhancement :sparkles: | low | Critical |
2,686,506,631 | PowerToys | Mouse Without Borders for just shared clipboard | ### Description of the new feature / enhancement
It would be nice if Mouse Without Borders could be used for just a shared clipboard, without sharing the mouse between the two computers. This would just involve adding a switch in the MWB menu to disable the shared mouse feature.
### Scenario when this would be used?
This would be helpful for use with remote desktops. For instance, I use Sunshine/Moonlight as my remote desktop software; however, it lacks a shared clipboard. Using MWB while streaming has the weird effect of having the mouse loop across the screen.
I am sure there are other use cases for this, but I can't think of any others off the top of my head.
### Supporting information
https://www.reddit.com/r/MoonlightStreaming/comments/15k3mrq/copy_and_paste_functions/
https://www.reddit.com/r/linux_gaming/comments/1crvbt7/gaming_remote_desktop_with_clipboard_access/
| Needs-Triage | low | Minor |
2,686,532,816 | flutter | [WEB][WASM] Google fonts provided as assets are not loaded in WEB WASM builds | ### Steps to reproduce
In Flutter WASM builds the Google Fonts are not loaded when using them as implicit assets.
This is one of three issues discussed with @kevmoo in a video meeting on Nov 18, 2024. A reproduction sample and an issue in the Flutter repo were requested by Kevin during the meeting.
The issue can be demonstrated by using the sample repo: https://github.com/rydmike/wasm_fonts_issue
It needs a repo since the font assets are needed to demonstrate the issue.
### Expected results
Google Fonts are expected to load and be used in a Flutter web WASM build using the same code and configuration as native Flutter VM builds and web JS builds, when the Google Fonts used in the app are bundled as assets.
Build the sample counter application as WEB JS build and VM build, in this example macOS build was used.
| WEB JS BUILD | VM MacOS Build |
|--------------|-----------------|
|  |  |
In both cases the example fonts are loaded from assets correctly.
### Actual results
Build the exact same application as a WASM build, using flag `--wasm`:
The fonts are **NOT** loaded:

### Code sample
See repo: https://github.com/rydmike/wasm_fonts_issue
The app uses the Google Fonts feature to force loading used fonts as assets:
```dart
// Only use Google fonts via asset provided fonts.
GoogleFonts.config.allowRuntimeFetching = false;
```
## Workaround
If you list the individual fonts in the `pubspec.yaml` file using "the classic way" and use them as assets that way, the fonts will be loaded and used in the Flutter WASM build too.
Uncomment the fonts in the project `pubspec.yaml`:
```yaml
# fonts:
# - family: Rancho
# fonts:
# - asset: assets/google_fonts/Rancho-Regular.ttf
# - family: FiraMono
# fonts:
# - asset: assets/google_fonts/FiraMono-Regular.ttf
```
So it becomes:
```yaml
fonts:
- family: Rancho
fonts:
- asset: assets/google_fonts/Rancho-Regular.ttf
- family: FiraMono
fonts:
- asset: assets/google_fonts/FiraMono-Regular.ttf
```
Build WEB `--wasm` again (a hot restart was enough in my case); the fonts are **now** loaded and used correctly.

But the way mentioned in Google Fonts docs here:
https://pub.dev/packages/google_fonts#bundling-fonts-when-releasing
to only put them in assets and have them loaded, does not work.
As shown, this does work in both VM and Web JS builds; it only fails in the WASM build.
So the WASM build does something different that causes the fonts not to load in a setup that works in Web JS and VM builds. This was unexpected and required a lot of troubleshooting to find the simple workaround presented above. WASM builds are expected to behave the same way as VM and JS builds when loading Google font assets.
### Flutter version
Used Flutter version: Channel master
**Channel master, 3.27.0-1.0.pre.621**
It reproduces on beta and stable 3.24.5 too.
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.27.0-1.0.pre.621, on macOS 15.1.1 24B91 darwin-arm64, locale en-US)
• Flutter version 3.27.0-1.0.pre.621 on channel master at /Users/rydmike/fvm/versions/master
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision da188452a6 (55 minutes ago), 2024-11-23 19:55:24 +0100
• Engine revision b382d17a27
• Dart version 3.7.0 (build 3.7.0-183.0.dev)
• DevTools version 2.41.0-dev.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/rydmike/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
This is the JDK bundled with the latest Android Studio installation on this machine.
To manually set the JDK path, use: `flutter config --jdk-dir="path/to/jdk"`.
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
• Dart plugin can be installed from:
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] IntelliJ IDEA Community Edition (version 2024.2.4)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 82.1.3
• Dart plugin version 242.22855.32
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (4 available)
• MrPinkPro (mobile) • 74120d6ef6769c3a2e53d61051da0147d0279996 • ios • iOS 17.7.2 21H221
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86
[✓] Network resources
• All expected network resources are available.
```
</details>
| engine,platform-web,has reproducible steps,P1,e: wasm,e: web_skwasm,team-web,triaged-web,found in release: 3.24,found in release: 3.27 | medium | Minor |
2,686,607,250 | pytorch | Numerical differences on bf16/fp32 division and fp32 torch.cumsum | ### 🐛 Describe the bug
With the following code that does a simple division, I would expect there's no numerical difference larger than the default atol and rtol between eager mode and compiled mode, but there is
```
import torch
def division(Y, y):
return (Y / y.to(Y.dtype).unsqueeze(-1))
Y, y = torch.randn(size=(1, 6, 128, 4, 32), dtype=torch.bfloat16, device='cuda'), torch.randn(size=(1, 6, 128, 4), dtype=torch.float32, device='cuda')
print("Y.mean()", Y.mean())
print("Y.std()", Y.std())
print("y.mean()", y.mean())
print("y.std()", y.std())
out_eager = division(Y, y)
out_compiled = torch.compile(division)(Y, y)
print(torch.allclose(out_eager, out_compiled))
print("diff", (out_eager - out_compiled).abs().max())
print("torch.version", torch.__version__)
```
This gives the following output
```
Y.mean() tensor(0.0010, device='cuda:0', dtype=torch.bfloat16)
Y.std() tensor(1., device='cuda:0', dtype=torch.bfloat16)
y.mean() tensor(-0.0218, device='cuda:0')
y.std() tensor(0.9883, device='cuda:0')
False
diff tensor(8., device='cuda:0', dtype=torch.bfloat16)
torch.version 2.4.1+cu121
```
However, I would expect the output from eager mode to match that from the compiled mode.
If I change the division to the following,
```
def division(Y, y):
return (Y / y.unsqueeze(-1)).to(Y.dtype)
```
the eager mode output and compiled mode output match, which suggests the bf16 division is not handled the same way between eager mode and compiled mode?
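For completeness, a small verification of that variant (an illustrative addition, not part of the original report; it assumes the same CUDA setup as the repro above, and `division_fp32` is just a name chosen here):
```python
import torch

def division_fp32(Y, y):
    # Divide in float32 first, then cast the result back to Y's dtype (bf16).
    return (Y / y.unsqueeze(-1)).to(Y.dtype)

Y = torch.randn(size=(1, 6, 128, 4, 32), dtype=torch.bfloat16, device="cuda")
y = torch.randn(size=(1, 6, 128, 4), dtype=torch.float32, device="cuda")
out_eager = division_fp32(Y, y)
out_compiled = torch.compile(division_fp32)(Y, y)
print(torch.allclose(out_eager, out_compiled))   # reportedly True with this variant
print("diff", (out_eager - out_compiled).abs().max())
```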
----
Similarly, I would expect the following code to not produce numerical differences larger than the default atol and rtol, but there is
```
import torch
def fn(discount):
discount = discount.cumsum(2, dtype=torch.float32)
return discount
discount = torch.randn(size=(1, 6, 128, 4), dtype=torch.float32, device='cuda')
print("discount.mean()", discount.mean())
print("discount.std()", discount.std())
out_eager = fn(discount)
out_compiled = torch.compile(fn)(discount)
print(torch.allclose(out_eager, out_compiled))
print("diff:", torch.max(torch.abs(out_eager - out_compiled)))
```
will print
```
discount.mean() tensor(-0.0191, device='cuda:0')
discount.std() tensor(0.9815, device='cuda:0')
False
diff: tensor(7.6294e-06, device='cuda:0')
```
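As a follow-up, one could compare both results against a float64 reference to see which cumsum is closer to the exact value. The snippet below is an illustrative addition (not part of the original report) and assumes a CUDA device, as in the repro above:
```python
import torch

def fn(discount):
    return discount.cumsum(2, dtype=torch.float32)

discount = torch.randn(size=(1, 6, 128, 4), dtype=torch.float32, device="cuda")
ref = discount.double().cumsum(2)  # float64 reference for the same reduction
out_eager = fn(discount)
out_compiled = torch.compile(fn)(discount)
print("eager    vs ref:", (out_eager.double() - ref).abs().max())
print("compiled vs ref:", (out_compiled.double() - ref).abs().max())
```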
### Error logs
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 23.04 (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~23.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.37
Python version: 3.11.4 (main, Dec 7 2023, 15:43:41) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.37
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5955WX 16-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 43%
CPU max MHz: 7031.2500
CPU min MHz: 1800.0000
BogoMIPS: 7985.56
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] jax-triton==0.1.4
[pip3] jaxlib==0.4.26+cuda12.cudnn89
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] optree==0.11.0
[pip3] pytorch-lightning==2.3.0
[pip3] torch==2.4.1
[pip3] torchmetrics==1.4.0.post0
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor,module: pt2 accuracy | low | Critical |
2,686,665,158 | godot | Invalid string to numeric type yields undefined behavior. | ### Tested versions
- Reproducible in: 4.3, 3.x
### System information
Windows 11 - Godot 4.3.stable - Forward+ - Intel Iris Xe
### Issue description
Attempting to cast a string to a numeric type yields undefined behavior instead of an error.
### Steps to reproduce
Run the project; the output will print numbers parsed from the invalid string-to-numeric-type casts that do not reflect the string contents.
### Minimal reproduction project (MRP)
[NumericCastBug.zip](https://github.com/user-attachments/files/17882623/NumericCastBug.zip) | discussion,topic:core,needs testing | low | Critical |
2,686,667,162 | godot | LineEdit and SpinBox don't accept input from on-screen keyboards a majority of the time on Android Web | ### Tested versions
v4.3.stable.official [77dcf97d8]
v4.4.dev5.official [9e6098432]
### System information
Galaxy A52s 5G - One UI 6.1 - Samsung Keyboard 5.8.20.7 - GBoard 14.8.05.686567880-release-arm64-v8a - Android 14 Build/UP1A.231005.007 - Kernel 5.4.254 - Brave 1.73.89, Chromium 131.0.6778.69
### Issue description
On the web export of a Godot project opened in an Android browser, `LineEdit` and `SpinBox` elements either do not react to on-screen keyboard input or react to it erratically, dumping previously typed text when certain keys are pressed. The exact behavior is difficult to pin down, as the reaction to any keypress seems to depend on a number of factors such as previous input, the keyboard used, etc., but here are some of the observations I made:
- Input fields don't seem to react to GBoard (Google Keyboard) input
- The input does seem to get cached somewhere though, as it's dumped as soon as I switch to Samsung Keyboard and press a key
- Samsung Keyboard input sometimes works properly, but this seems to be temporary and I don't know the exact steps to make this happen
- Samsung Keyboard seems to not be able to type any digits, but all digits that were typed (regardless of being backspaced) are suddenly displayed if a latin character is entered, after which you can enter one digit properly before this behavior repeats
- Samsung Keyboard doesn't seem to be able to enter any special characters (hashmarks, curly braces etc.) what so ever, not even with the technique in the above point
- Turning off Predictive Text on Samsung Keyboard makes it completely unable to type, the same as GBoard
- Using a physical keyboard on Android Web works fine, but buggy behavior resumes when switching back to touch keyboard
- Using the on-screen keyboard on Windows works fine
https://github.com/user-attachments/assets/062a2c2f-aef1-48cf-8170-d7ac5f4b2db0
### Steps to reproduce
1. Download the MRP
2. Export it to web
3. Host the web export on a server
4. Access the game on an Android phone browser
5. Try to use any of the LineEdit/SpinBox inputs with different on-screen keyboards
Additionally you should also access the same export on a desktop browser and verify that inputs do work properly there
### Minimal reproduction project (MRP)
[line_edit_bug_web_android.zip](https://github.com/user-attachments/files/17882548/line_edit_bug_web_android.zip)
The MRP is also hosted for quick access from mobile (running on v4.3.stable.official): https://godot-bugtest-html.vercel.app/ | bug,platform:web,topic:input,topic:gui | low | Critical |
2,686,671,769 | vscode | Git vscode commands on toggled files in the scm view | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I would like to be able to use `git.clean`, `git.stage` and `git.unstage` while having files selected, but not opened, in the SCM view.
If I right-click the selected files and click Discard Changes or Stash, or press the icons shown when hovering over the files, it works fine, but calling the mentioned commands does nothing:

The command seems to work only if I open the tracked file in the SCM view, and it then runs only on that file, not taking the other highlighted ones into account.
I use the Vim plugin; in the file explorer, for example, I can highlight files with list.toggle and then delete them with deleteFile, and I would love to run these Git commands the same way without using the mouse.
| bug,git | low | Major |
2,686,764,419 | vscode | Settings Sync anomaly when syncing two devices from two networks |
Type: <b>Bug</b>
## The setup
I have two computers, connected to the internet through different ISPs (one wired, one cellular). I am running two instances of VS Code on each computer. I am logged in with the same account on all four instances, and settings syncing is turned on.
## The problem
When editing the settings either with the UI editor or with the `settings.json` file, the instances running on the same network have no problem syncing with each other. The instances on the other network will not sync, not even when forced with the "Settings Sync: Sync Now" command. When something is changed on an "outdated" instance, it reports a conflict; resolving it syncs the change so that other instances can pick it up, but only the "local" ones sync automatically and without conflict, and the instances on the other network become the new "outdated" ones.
## Example
Computer `A` and `B` are connected to different networks, and have the same account (Github) and profile (Default).
On each computer, there are two vscode instances: `A1`, `A2`, `B1`, `B2`. All extensions are disabled.
1) Changing a setting in `A1` will trigger a sync.
2) `A2` will detect and download the changes.
3) `B1` and `B2` will not:
```
Settings: No changes found during synchronizing settings.
```
4) Changing a setting in `B1` will trigger a sync and report a conflict:
```
Settings: Failed to synchronize as there is a new remote version available. Synchronizing again...
Settings: Detected conflicts while synchronizing settings.
```
5) After resolving the conflict on `B1` with either "Replace local" or "Replace remote", `B2` will sync and download the changes.
6) `A1` and `A2` will now act as `B1` and `B2` in step 3
## Notes
When the two computers are connected to the same network (either one), all four VS Code instances sync without a problem.
## System info
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Linux x64 6.11.6-300.fc41.x86_64
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz (4 x 3392)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|2, 2, 3|
|Memory (System)|15.57GB (6.34GB free)|
|Process Argv|--crash-reporter-id cb6a3ac5-b939-4ca1-92c6-fc345c0c601d|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|gnome|
|XDG_CURRENT_DESKTOP|GNOME|
|XDG_SESSION_DESKTOP|gnome|
|XDG_SESSION_TYPE|wayland|
</details>Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
9c06g630:31013171
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31185841
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,686,811,336 | angular | Automatic InjectionToken Name via Compiler | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
Currently, when creating an `InjectionToken`, a `desc` string param is required to describe the injection token for debugging purposes.
A popular practice for supplying this `desc` param is to copy the name of the variable, so that it becomes easy to rename this injection token via global replace:
```ts
const MY_TOKEN = new InjectionToken<...>("MY_TOKEN");
```
Manually specifying the `desc` can be verbose and tedious.
Besides, although not very important, it also causes some ugly formatting:
```ts
const MY_TOKEN_WITH_A_LONG_NAME = new InjectionToken<...>(
"MY_TOKEN_WITH_A_LONG_NAME"
);
const ANOTHER_TOKEN_WITH_A_LONG_NAME = new InjectionToken<...>(
"ANOTHER_TOKEN_WITH_A_LONG_NAME"
);
```
### Proposed solution
Since all signal APIs now have an optional `debugName` param, which will be automatically populated by the compiler with the variable name, I was wondering if this is also possible for `InjectionToken`:
Source:
```ts
const MY_TOKEN = new InjectionToken<...>();
const MY_TOKEN_WITH_OPTIONS = new InjectionToken<...>({ ... });
```
After compilation:
```js
const MY_TOKEN = new InjectionToken("MY_TOKEN");
const MY_TOKEN_WITH_OPTIONS = new InjectionToken("MY_TOKEN_WITH_OPTIONS", { ... });
```
### Alternatives considered
N/A | area: core | low | Critical |
2,686,853,625 | rust | GNU Hurd compilation failure: no field `st_fsid` on type `&stat64` | Compiling a project for the `i686-unknown-hurd-gnu` target using rustc 1.85.0-nightly (a47555110 2024-11-22) results in the following compilation error:
```
error[E0609]: no field `st_fsid` on type `&stat64`
--> /home/runner/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/os/hurd/fs.rs:301:36
|
301 | self.as_inner().as_inner().st_fsid as u64
| ^^^^^^^ unknown field
|
help: a field with a similar name exists
|
301 | self.as_inner().as_inner().st_uid as u64
|
```
We previously had a similar issue (see #123032). Maybe it's worth adding a CI check for this target, as was discussed there? | C-bug,T-libs,O-hurd | low | Critical |
2,686,870,781 | angular | Improve angular signals benchmark performance | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
According to public benchmarks, Angular's signals perform much worse than alternative ecosystem implementations.
> https://github.com/transitive-bullshit/js-reactivity-benchmark
This makes Angular look inferior from an outsider's viewpoint.
### Proposed solution
Unknown
### Alternatives considered
Unknown | area: core,core: reactivity | low | Major |
2,686,889,864 | vscode | SCM - Using shortcut keys to move the cursor between the inline diff editor and the original code editor. | #51549 、 #66518
`editor.action.dirtydiff.next` - `Alt`+`F3`
`editor.action.dirtydiff.previous` - `Shift`+`Alt`+`F3`
I hope to be able to use shortcut keys to focus on the open inline diff editor. Currently, the above two commands combined with `Esc` can only be used to open/close the inline diff editor.
If it is possible to use shortcut keys to switch the cursor into/out of the inline diff editor, that would be fantastic!!! (When using vim, if I can use shortcut keys to move the cursor into/out of the inline diff editor, it would allow for mouse-free navigation when reviewing changes).
I am creating this issue because it seems that currently I can only use the mouse to navigate through changes in the inline diff editor and cannot work mouse-free (because I cannot move the cursor into/out of the inline diff editor, or focus/unfocus it, using shortcut keys). | feature-request,scm | low | Minor |
2,687,037,185 | PowerToys | PowerToys Run doesn't have focus when called | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Call PowerToys Run with the preferred key combo. In my case I use 3 simultaneous taps on the touchpad
### ✔️ Expected Behavior
PowerToys Run search box is on focus and the user is able to write onto it
### ❌ Actual Behavior
I have to click on the search box to bring it into focus; otherwise I'm unable to type, and the Esc key doesn't work to dismiss it either
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,687,047,007 | PowerToys | Always On Top: Separate shortcut keys can be assigned for pinning and unpinning windows, instead of using the same shortcut key to toggle between the two. | ### Description of the new feature / enhancement
Separate shortcut keys can be assigned for pinning and unpinning windows, instead of using the same shortcut key to toggle between the two.
### Scenario when this would be used?
Sometimes, when I use the pin operation on multiple windows, my memory of whether a particular window is pinned becomes unclear. It would be great to have a dedicated shortcut key to unpin windows that were previously pinned but are no longer needed.
Currently, there is no such shortcut key, so I can only judge whether a window is pinned or unpinned by the sound prompt. However, for someone like me who is not good at distinguishing sounds, it is sometimes difficult to tell the difference (especially after frequently switching the pin status, I seem to get confused by the two sounds).
Therefore, having a clear shortcut key for unpinning a window is very important.
### Supporting information
_No response_ | Product-Always On Top,Needs-Triage | low | Minor |
2,687,106,847 | pytorch | It is apparently very difficult to use decompose correctly in TorchDispatchMode | ### 🐛 Describe the bug
Suppose you are implementing a TorchDispatchMode. In some circumstances, you may be dispatched to with a function that you cannot directly handle, but for which a CompositeImplicitAutograd decomposition exists. (In https://github.com/pytorch/pytorch/pull/138508 this was triggered by inference mode, which causes you always to dispatch BEFORE CompositeImplicitAutograd decomps happen). In this case, you may want to say "oh, please decompose and call back to me for the individual decomposed pieces."
I implemented this in a straightforward way in that PR but it does not work. Specifically, it is not safe to unconditionally call func.decompose because it will fail when passed torch.ops.prim.device.default. Furthermore, there are a few ways to spell the decompose that cause a Kineto failure in obscure internal situations (https://fb.workplace.com/groups/1075192433118967/permalink/1545856962719176/). Specifically in https://github.com/pytorch/pytorch/pull/138508/files/0bcf842e30417f82ed9e37e18f23d0df22e3839e the original PR author had split the TorchDispatchMode to have an inner dispatch mode which they activate before calling decompose. This apparently does not work. And it seems if you raise an exception from inside TorchDispatchMode, this can also cause Kineto corruption, potentially because Kineto cannot deal with reentrant RecordFunction entries.
See internal WP post for how to repro this situation.
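For illustration, here is a minimal sketch of the pattern described above (not the PR's code; the mode name `DecomposeFallbackMode` and the sample op are made up): ask `OpOverload.decompose` for a CompositeImplicitAutograd decomposition and fall back to the op itself when none exists. Note that the decomposed pieces do not re-enter the mode here, since the mode is popped inside `__torch_dispatch__`; re-entering it for those pieces is exactly the part reported above as fragile, and the issue also reports that even calling `func.decompose` can fail for ops like `torch.ops.prim.device.default`.
```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class DecomposeFallbackMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Ask for a CompositeImplicitAutograd decomposition; OpOverload.decompose
        # returns NotImplemented when no such kernel is registered.
        result = func.decompose(*args, **kwargs)
        if result is not NotImplemented:
            return result
        # No decomposition available: run the op directly (the mode is already
        # popped inside __torch_dispatch__, so this does not recurse).
        return func(*args, **kwargs)

with DecomposeFallbackMode():
    x = torch.randn(4, 4)
    # log_softmax has a CompositeImplicitAutograd kernel, so this call runs its
    # decomposition rather than the fused op.
    torch.log_softmax(x, dim=-1)
```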
### Versions
main
cc @Chillee @zou3519 @albanD @samdow @chauhang @penguinwu | triaged,module: dispatch,module: __torch_dispatch__,oncall: pt2 | low | Critical |
2,687,111,179 | rust | Tracking Issue for `lock_value_accessors` | Feature gate: `#![feature(lock_value_accessors)]`
This is a tracking issue for feature `lock_value_accessors`.
### Public API
```rust
impl<T> Mutex<T> {
pub fn get_cloned(&self) -> Result<T, PoisonError<()>> where T: Clone { ... }
pub fn set(&self, value: T) -> Result<(), PoisonError<T>> { ... }
pub fn replace(&self, value: T) -> LockResult<T> { ... }
}
impl<T> RwLock<T> {
pub fn get_cloned(&self) -> Result<T, PoisonError<()>> where T: Clone { ... }
pub fn set(&self, value: T) -> Result<(), PoisonError<T>> { ... }
pub fn replace(&self, value: T) -> LockResult<T> { ... }
}
```
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/485
- [x] Implementation: #133406
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- Whether we should check poisoning first and avoid unnecessary lock-acquire attempts.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Critical |
2,687,116,479 | pytorch | Heap-buffer-overflow in `_fft_c2r` | ### 🐛 Describe the bug
Under specific inputs, `_fft_c2r` triggered a crash.
```python
import torch
self = torch.full((3, 1, 3, 1,), 0.372049, dtype=torch.cfloat)
dim = [2]
normalization = 2
last_dim_size = 536870912
torch._fft_c2r(self, dim, normalization, last_dim_size)
```
In 2.5.0a0+git32f585d pytorch, Output:
```
Segmentation fault (core dumped)
```
In colab 2.5.1+cu121 pytorch, Output:
```
RuntimeError: in_size == signal_size[i + 1] || in_size == (signal_size[i + 1] / 2) + 1 INTERNAL ASSERT FAILED at "../aten/src/ATen/native/mkl/SpectralOps.cpp":466, please report a bug to PyTorch.
```
and 2.5.0a0+git32f585d pytorch ASAN report:
```
=================================================================
==1372420==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60e0002ccb48 at pc 0x7f40e5180042 bp 0x7ffcbc4e4080 sp 0x7ffcbc4e4070
READ of size 4 at 0x60e0002ccb48 thread T0
#0 0x7f40e5180041 in pocketfft::detail::general_c2r<float>(pocketfft::detail::cndarr<pocketfft::detail::cmplx<float> > const&, pocketfft::detail::ndarr<float>&, unsigned long, bool, float, unsigned long)::{lambda()#1}::operator()() const /mnt/pytorch-2.5.0/third_party/pocketfft/pocketfft_hdronly.h:3356
#1 0x7f40e5189f84 in void pocketfft::detail::threading::thread_map<pocketfft::detail::general_c2r<float>(pocketfft::detail::cndarr<pocketfft::detail::cmplx<float> > const&, pocketfft::detail::ndarr<float>&, unsigned long, bool, float, unsigned long)::{lambda()#1}>(unsigned long, pocketfft::detail::general_c2r<float>(pocketfft::detail::cndarr<pocketfft::detail::cmplx<float> > const&, pocketfft::detail::ndarr<float>&, unsigned long, bool, float, unsigned long)::{lambda()#1}) /mnt/pytorch-2.5.0/third_party/pocketfft/pocketfft_hdronly.h:792
#2 0x7f40e51807ad in void pocketfft::detail::general_c2r<float>(pocketfft::detail::cndarr<pocketfft::detail::cmplx<float> > const&, pocketfft::detail::ndarr<float>&, unsigned long, bool, float, unsigned long) /mnt/pytorch-2.5.0/third_party/pocketfft/pocketfft_hdronly.h:3302
#3 0x7f40e517a7cb in void pocketfft::detail::c2r<float>(std::vector<unsigned long, std::allocator<unsigned long> > const&, std::vector<long, std::allocator<long> > const&, std::vector<long, std::allocator<long> > const&, unsigned long, bool, std::complex<float> const*, float*, float, unsigned long) /mnt/pytorch-2.5.0/third_party/pocketfft/pocketfft_hdronly.h:3479
#4 0x7f40e5177097 in void pocketfft::detail::c2r<float>(std::vector<unsigned long, std::allocator<unsigned long> > const&, std::vector<long, std::allocator<long> > const&, std::vector<long, std::allocator<long> > const&, std::vector<unsigned long, std::allocator<unsigned long> > const&, bool, std::complex<float> const*, float*, float, unsigned long) /mnt/pytorch-2.5.0/third_party/pocketfft/pocketfft_hdronly.h:3489
#5 0x7f40e5167588 in at::native::_fft_c2r_mkl(at::Tensor const&, c10::ArrayRef<long>, long, long) /mnt/pytorch-2.5.0/aten/src/ATen/native/mkl/SpectralOps.cpp:268
#6 0x7f40e7b6a98a in wrapper_CPU___fft_c2r /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:6541
#7 0x7f40e7fb6b2d in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#8 0x7f40e7fb6b2d in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#9 0x7f40e799328c in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>&&, long&&, c10::SymInt&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#10 0x7f40e781c0d0 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#11 0x7f40e781c0d0 in at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#12 0x7f40e7594a75 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#13 0x7f40e7594a75 in at::_ops::_fft_c2r::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_4.cpp:2749
#14 0x7f40f06da110 in at::redispatch::_fft_c2r_symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:3737
#15 0x7f40f031cda2 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_4.cpp:858
#16 0x7f40f031d74d in _fft_c2r /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_4.cpp:859
#17 0x7f40f0602a10 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#18 0x7f40f0602a10 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
#19 0x7f40e799328c in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>&&, long&&, c10::SymInt&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#20 0x7f40e7593d68 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#21 0x7f40e7593d68 in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt)> const&, at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#22 0x7f40e7593d68 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt)>::call(at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#23 0x7f40e7593d68 in at::_ops::_fft_c2r::call(at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_4.cpp:2742
#24 0x7f412a9a1a71 in at::_fft_c2r_symint(at::Tensor const&, c10::ArrayRef<long>, long, c10::SymInt) /mnt/pytorch-2.5.0/build/aten/src/ATen/ops/_fft_c2r.h:38
#25 0x7f412a8807b4 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_2.cpp:3969
#26 0x7f412a881251 in THPVariable__fft_c2r /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_2.cpp:3971
#27 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#28 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#29 0x549ece in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:813
#30 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#31 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#32 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#33 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#34 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#35 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#36 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#37 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#38 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#39 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#40 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#41 0x7f41330a1d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#42 0x7f41330a1e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#43 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
0x60e0002ccb48 is located 0 bytes to the right of 72-byte region [0x60e0002ccb00,0x60e0002ccb48)
allocated by thread T0 here:
#0 0x7f413345457c in __interceptor_posix_memalign ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:226
#1 0x7f40d17601f1 in c10::alloc_cpu(unsigned long) /mnt/pytorch-2.5.0/c10/core/impl/alloc_cpu.cpp:116
#2 0x7f40d1690068 in c10::DefaultCPUAllocator::allocate(unsigned long) (/mnt/pytorch-2.5.0/torch/lib/libc10.so+0x9d068)
#3 0x7f40e267e966 in c10::StorageImpl::StorageImpl(c10::StorageImpl::use_byte_size_t, c10::SymInt const&, c10::Allocator*, bool) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x10cac966)
#4 0x7f40e268c914 in c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> > c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >::make<c10::StorageImpl::use_byte_size_t, unsigned long&, c10::Allocator*&, bool>(c10::StorageImpl::use_byte_size_t&&, unsigned long&, c10::Allocator*&, bool&&) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x10cba914)
#5 0x7f40e268b61f in c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> > c10::make_intrusive<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl>, c10::StorageImpl::use_byte_size_t, unsigned long&, c10::Allocator*&, bool>(c10::StorageImpl::use_byte_size_t&&, unsigned long&, c10::Allocator*&, bool&&) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x10cb961f)
#6 0x7f40e2688c91 in at::TensorBase at::detail::_empty_generic<long>(c10::ArrayRef<long>, c10::Allocator*, c10::DispatchKeySet, c10::ScalarType, std::optional<c10::MemoryFormat>) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x10cb6c91)
#7 0x7f40e267952e in at::detail::empty_generic(c10::ArrayRef<long>, c10::Allocator*, c10::DispatchKeySet, c10::ScalarType, std::optional<c10::MemoryFormat>) /mnt/pytorch-2.5.0/aten/src/ATen/EmptyTensor.cpp:203
#8 0x7f40e2679d21 in at::detail::empty_cpu(c10::ArrayRef<long>, c10::ScalarType, bool, std::optional<c10::MemoryFormat>) /mnt/pytorch-2.5.0/aten/src/ATen/EmptyTensor.cpp:260
#9 0x7f40e267a04a in at::detail::empty_cpu(c10::ArrayRef<long>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) /mnt/pytorch-2.5.0/aten/src/ATen/EmptyTensor.cpp:275
#10 0x7f40e4327aa0 in at::native::empty_cpu(c10::ArrayRef<long>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) /mnt/pytorch-2.5.0/aten/src/ATen/native/TensorFactories.cpp:249
#11 0x7f40e7b57f61 in wrapper_CPU_memory_format_empty /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:5260
#12 0x7f40e7fa8f05 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#13 0x7f40e7fa8f05 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#14 0x7f40e6d79ca7 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>&&, std::optional<c10::ScalarType>&&, std::optional<c10::Layout>&&, std::optional<c10::Device>&&, std::optional<bool>&&, std::optional<c10::MemoryFormat>&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#15 0x7f40e6aa507c in at::Tensor c10::KernelFunction::call<at::Tensor, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#16 0x7f40e6aa507c in at::Tensor c10::Dispatcher::redispatch<at::Tensor, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor (c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#17 0x7f40e6703c6a in c10::TypedOperatorHandle<at::Tensor (c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#18 0x7f40e6703c6a in at::_ops::empty_memory_format::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_2.cpp:3337
#19 0x7f40e7a5ec48 in empty_memory_format /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterBackendSelect.cpp:185
#20 0x7f40e7aab160 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#21 0x7f40e7aab160 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#22 0x7f40e6d79ca7 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>&&, std::optional<c10::ScalarType>&&, std::optional<c10::Layout>&&, std::optional<c10::Device>&&, std::optional<bool>&&, std::optional<c10::MemoryFormat>&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#23 0x7f40e6702623 in at::Tensor c10::KernelFunction::call<at::Tensor, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#24 0x7f40e6702623 in at::Tensor c10::Dispatcher::call<at::Tensor, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor (c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>)> const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#25 0x7f40e6702623 in c10::TypedOperatorHandle<at::Tensor (c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>)>::call(c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#26 0x7f40e6702623 in at::_ops::empty_memory_format::call(c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>, std::optional<c10::MemoryFormat>) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_2.cpp:3330
#27 0x7f40e286a151 in at::empty(c10::ArrayRef<long>, c10::TensorOptions, std::optional<c10::MemoryFormat>) /mnt/pytorch-2.5.0/build/aten/src/ATen/ops/empty.h:36
#28 0x7f40e43366c2 in at::native::full(c10::ArrayRef<long>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>) /mnt/pytorch-2.5.0/aten/src/ATen/native/TensorFactories.cpp:619
#29 0x7f40e82c3c1f in wrapper_CompositeExplicitAutograd__full /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:2400
#30 0x7f40e864003c in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#31 0x7f40e864003c in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#32 0x7f40e798e3d0 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>&&, c10::Scalar const&, std::optional<c10::ScalarType>&&, std::optional<c10::Layout>&&, std::optional<c10::Device>&&, std::optional<bool>&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#33 0x7f40e781825e in at::Tensor c10::KernelFunction::call<at::Tensor, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool> >(c10::OperatorHandle const&, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#34 0x7f40e781825e in at::Tensor c10::Dispatcher::redispatch<at::Tensor, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool> >(c10::TypedOperatorHandle<at::Tensor (c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>)> const&, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#35 0x7f40e7584056 in c10::TypedOperatorHandle<at::Tensor (c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>)>::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#36 0x7f40e7584056 in at::_ops::full::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_4.cpp:2549
#37 0x7f40e7a608ca in full /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterBackendSelect.cpp:252
#38 0x7f40e7ab230c in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#39 0x7f40e7ab230c in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#40 0x7f40e798e3d0 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, c10::ArrayRef<c10::SymInt>, c10::Scalar const&, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>&&, c10::Scalar const&, std::optional<c10::ScalarType>&&, std::optional<c10::Layout>&&, std::optional<c10::Device>&&, std::optional<bool>&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
SUMMARY: AddressSanitizer: heap-buffer-overflow /mnt/pytorch-2.5.0/third_party/pocketfft/pocketfft_hdronly.h:3356 in pocketfft::detail::general_c2r<float>(pocketfft::detail::cndarr<pocketfft::detail::cmplx<float> > const&, pocketfft::detail::ndarr<float>&, unsigned long, bool, float, unsigned long)::{lambda()#1}::operator()() const
Shadow bytes around the buggy address:
0x0c1c80051910: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c1c80051920: fd fd fd fd fa fa fa fa fa fa fa fa fd fd fd fd
0x0c1c80051930: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c1c80051940: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x0c1c80051950: fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa
=>0x0c1c80051960: 00 00 00 00 00 00 00 00 00[fa]fa fa fa fa fa fa
0x0c1c80051970: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c1c80051980: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c1c80051990: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c1c800519a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c1c800519b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==1372420==ABORTING
```
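For context, the assert in the 2.5.1 output above spells out the invariant `_fft_c2r` expects: the input size along `dim` must equal `last_dim_size` or `last_dim_size // 2 + 1` (here the input has size 3 along dim 2, while 536870912 // 2 + 1 == 268435457). For comparison, a minimal sketch of a call that satisfies the invariant (not part of the original report):
```python
import torch

# Half-spectrum length 5 matches last_dim_size // 2 + 1 (8 // 2 + 1 == 5),
# so this call is valid and returns a real tensor of length 8.
ok = torch.zeros(5, dtype=torch.cfloat)
out = torch._fft_c2r(ok, [0], 2, 8)  # dim=[0], normalization=2, last_dim_size=8
print(out.shape)  # torch.Size([8])
```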
### Versions
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pyp
cc @malfet | module: crash,module: error checking,triaged,module: edge cases,topic: fuzzer | low | Critical |
2,687,118,724 | pytorch | Segmentation fault (core dumped) in `_fft_r2c` | ### 🐛 Describe the bug
Under specific inputs, `_fft_r2c` triggered a crash.
```python
import torch
self = torch.full((10, 10, 10,), 9.0072e+15, dtype=torch.double)
dim = [4194304]
normalization = 4
onesided = False
torch._fft_r2c(self, dim, normalization, onesided)
```
Output:
```
Segmentation fault (core dumped)
```
ASAN report:
```
=================================================================
==1403775==ERROR: AddressSanitizer: SEGV on unknown address 0x61000220ef88 (pc 0x7f91a3c37f93 bp 0x7ffd781d05a0 sp 0x7ffd781d0200 T0)
==1403775==The signal is caused by a READ memory access.
#0 0x7f91a3c37f93 in at::native::_fft_r2c_mkl(at::Tensor const&, c10::ArrayRef<long>, long, bool) /mnt/pytorch-2.5.0/aten/src/ATen/native/mkl/SpectralOps.cpp:285
#1 0x7f91a663a691 in wrapper_CPU___fft_r2c /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:6527
#2 0x7f91a6a85ebe in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#3 0x7f91a6a85ebe in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#4 0x7f91a4fb3f68 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>&&, long&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#5 0x7f91a5d58e60 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, bool>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:105
#6 0x7f91a5d58e60 in at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, bool>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, bool)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#7 0x7f91a5a88e4b in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, bool)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#8 0x7f91a5a88e4b in at::_ops::_fft_r2c::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_3.cpp:3310
#9 0x7f91aec97302 in at::redispatch::_fft_r2c(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:3717
#10 0x7f91ae8b7adb in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_3.cpp:727
#11 0x7f91ae8b855b in _fft_r2c /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_3.cpp:728
#12 0x7f91aebbc6c9 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#13 0x7f91aebbc6c9 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
#14 0x7f91a4fb3f68 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>&&, long&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#15 0x7f91a5a8862e in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, bool>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:105
#16 0x7f91a5a8862e in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, c10::ArrayRef<long>, long, bool>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, bool)> const&, at::Tensor const&, c10::ArrayRef<long>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#17 0x7f91a5a8862e in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<long>, long, bool)>::call(at::Tensor const&, c10::ArrayRef<long>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#18 0x7f91a5a8862e in at::_ops::_fft_r2c::call(at::Tensor const&, c10::ArrayRef<long>, long, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_3.cpp:3303
#19 0x7f91e92d7be3 in at::_fft_r2c(at::Tensor const&, c10::ArrayRef<long>, long, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/ops/_fft_r2c.h:27
#20 0x7f91e91caa19 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_1.cpp:3710
#21 0x7f91e91cb38f in THPVariable__fft_r2c /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_1.cpp:3712
#22 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#23 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#24 0x549ece in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:813
#25 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#26 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#27 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#28 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#29 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#30 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#31 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#32 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#33 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#34 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#35 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#36 0x7f91f1b6fd8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#37 0x7f91f1b6fe3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#38 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /mnt/pytorch-2.5.0/aten/src/ATen/native/mkl/SpectralOps.cpp:285 in at::native::_fft_r2c_mkl(at::Tensor const&, c10::ArrayRef<long>, long, bool)
==1403775==ABORTING
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: error checking,triaged,module: edge cases,topic: fuzzer | low | Critical |
2,687,127,343 | pytorch | Segmentation fault (core dumped) in `_fft_c2c` | ### 🐛 Describe the bug
Under specific inputs, `_fft_c2c` triggered a crash.
```python
import torch
self = torch.full((1, 1, 1,), 0, dtype=torch.cdouble)
dim = [36028797018963968]
normalization = 1250999896764
forward = False
torch._fft_c2c(self, dim, normalization, forward)
```
ASAN Report:
```
AddressSanitizer:DEADLYSIGNAL
=================================================================
==1426160==ERROR: AddressSanitizer: SEGV on unknown address (pc 0x7fc91d93fa09 bp 0x7fff040b4a60 sp 0x7fff040b4970 T0)
==1426160==The signal is caused by a READ memory access.
==1426160==Hint: this fault was caused by a dereference of a high value address (see register values below). Dissassemble the provided pc to learn which register was used.
#0 0x7fc91d93fa09 in compute_fct<double> /mnt/pytorch-2.5.0/aten/src/ATen/native/mkl/SpectralOps.cpp:254
#1 0x7fc91d93937c in at::native::_fft_c2c_mkl(at::Tensor const&, c10::ArrayRef<long>, long, bool) /mnt/pytorch-2.5.0/aten/src/ATen/native/mkl/SpectralOps.cpp:321
#2 0x7fc92033aca5 in wrapper_CPU___fft_c2c /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:6555
#3 0x7fc9207878df in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#4 0x7fc9207878df in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#5 0x7fc91ecb3ce4 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, long&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#6 0x7fc91ea2cb85 in at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) const (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x1488ab85)
#7 0x7fc91e70cf81 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#8 0x7fc91e70cf81 in at::_ops::_fft_c2c::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_1.cpp:2963
#9 0x7fc927f3ea89 in at::redispatch::_fft_c2c_symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:3767
#10 0x7fc927bf6b19 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_1.cpp:615
#11 0x7fc927bf74b4 in _fft_c2c /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_1.cpp:616
#12 0x7fc927e8430a in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#13 0x7fc927e8430a in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
#14 0x7fc91ecb3ce4 in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, long&&, bool&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#15 0x7fc91e70c413 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#16 0x7fc91e70c413 in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:698
#17 0x7fc91e70c413 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:531
#18 0x7fc91e70c413 in at::_ops::_fft_c2c::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_1.cpp:2956
#19 0x7fc962e50fd1 in at::_fft_c2c_symint(at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, bool) (/mnt/pytorch-2.5.0/torch/lib/libtorch_python.so+0x22e9fd1)
#20 0x7fc962d26e51 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_0.cpp:3955
#21 0x7fc962d277c7 in THPVariable__fft_c2c /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/python_torch_functions_0.cpp:3957
#22 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#23 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#24 0x549ece in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:813
#25 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#26 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#27 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#28 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#29 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#30 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#31 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#32 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#33 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#34 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#35 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#36 0x7fc96b86dd8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#37 0x7fc96b86de3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#38 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /mnt/pytorch-2.5.0/aten/src/ATen/native/mkl/SpectralOps.cpp:254 in compute_fct<double>
==1426160==ABORTING
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: error checking,triaged,module: edge cases,topic: fuzzer | low | Critical |
2,687,166,262 | godot | set_block_signals for slider no effect | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.17763 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 (NVIDIA; 31.0.15.2849) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads)
### Issue description


### Steps to reproduce
just test the MRP
### Minimal reproduction project (MRP)
[ValueChangeTest.zip](https://github.com/user-attachments/files/17892119/ValueChangeTest.zip)
| bug,topic:gui | low | Minor |
2,687,167,246 | node | Unix Domain Sockets are not supported on Windows | ### Version
20.18.0
### Platform
```text
Microsoft Windows NT 10.0.26100.0 x64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Currently, Windows supports Unix domain sockets: .NET can connect to one, and ASP.NET Core can listen on one. But Node's net module does not seem to support connecting to a Windows Unix domain socket path.
```csharp
public class UnixDomainSocketsConnectionFactory(EndPoint endPoint) {
public async ValueTask<Stream> ConnectAsync(SocketsHttpConnectionContext _,
CancellationToken cancellationToken = default) {
var socket = new Socket(AddressFamily.Unix, SocketType.Stream, ProtocolType.Unspecified);
try {
await socket.ConnectAsync(endPoint, cancellationToken).ConfigureAwait(false);
return new NetworkStream(socket, true);
} catch {
socket.Dispose();
throw;
}
}
public static GrpcChannel CreateChannel(string socketPath) {
var udsEndPoint = new UnixDomainSocketEndPoint(socketPath);
var connectionFactory = new UnixDomainSocketsConnectionFactory(udsEndPoint);
var socketsHttpHandler = new SocketsHttpHandler {
ConnectCallback = connectionFactory.ConnectAsync
};
return GrpcChannel.ForAddress("http://localhost", new GrpcChannelOptions {
HttpHandler = socketsHttpHandler
});
}
}
```
```csharp
protected void Initialize(string token) {
var builder = WebApplication.CreateBuilder();
builder.WebHost.ConfigureKestrel(serverOptions => {
if (!Directory.Exists(MetaBoxStatic.UnixSocketPath)) {
Directory.CreateDirectory(MetaBoxStatic.UnixSocketPath);
}
var socketPath = Path.Combine(MetaBoxStatic.UnixSocketPath, $"MetaBox-{token}.tmp");
serverOptions.ListenUnixSocket(socketPath);
serverOptions.ConfigureEndpointDefaults(listenOptions => { listenOptions.Protocols = HttpProtocols.Http2; });
});
ConfigureBuilder(builder);
builder.Services.PostConfigureAll<HostOptions>(opts => opts.ShutdownTimeout = TimeSpan.FromSeconds(1));
WebApp = builder.Build();
ConfigureApp(WebApp);
}
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
The net module should connect to the Windows Unix domain socket, since the platform supports it.
### What do you see instead?
EACCESS / ENOENT
### Additional information
_No response_ | windows,libuv | low | Critical |
2,687,174,722 | flutter | Android, PhysicalKeyboardKey, usbHidUsage are wrong | ### Steps to reproduce
Physical Android device and keyboard are required. The simulator may not have this issue.
1. Connect an Android device through usb, and enable device debug, then connect `<ip>:<port>`
```
adb shell ip addr show wlan0
adb tcpip 5555
adb connect DEVICE_IP_ADDRESS:5555
```
2. Disconnect usb
3. Run the sample code `flutter run -d <ip>:<port>`
4. Connect a physical keyboard to the Android device
5. Click some keys
### Expected results
When clicking any key on the physical keyboard, the `usbHidUsage` in `PhysicalKeyboardKey` should be correct.
### Actual results
See the screenshot.
The correct `PhysicalKeyboardKey` should be `PhysicalKeyboardKey (usbHidUsage: "0x00070004", debugName: "Key A")` if I click `a`.
https://github.com/flutter/flutter/blob/97596e589561b508b7b15cd6c6f04ecf5a5e15e2/packages/flutter/lib/src/services/keyboard_key.g.dart#L3864
`event.physicalKey.usbHidUsage & 0xFFFF` should match the value given in the PDF referenced below.
https://github.com/flutter/flutter/blob/97596e589561b508b7b15cd6c6f04ecf5a5e15e2/packages/flutter/lib/src/services/keyboard_key.g.dart#L3543
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Keyboard Event Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const KeyboardEventPage(),
);
}
}
class KeyboardEventPage extends StatefulWidget {
const KeyboardEventPage({super.key});
@override
_KeyboardEventPageState createState() => _KeyboardEventPageState();
}
class _KeyboardEventPageState extends State<KeyboardEventPage> {
final FocusNode _focusNode = FocusNode();
@override
Widget build(BuildContext context) {
final child = Scaffold(
appBar: AppBar(
title: const Text('Keyboard Event Demo'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const SizedBox(height: 20),
],
),
),
);
return FocusScope(
autofocus: true,
child: Focus(
autofocus: true,
canRequestFocus: true,
focusNode: _focusNode,
onKeyEvent: (node, event) {
if (event is KeyDownEvent) {
debugPrint('Key down, logicalKey: ${event.logicalKey}');
debugPrint('Key down, physicalKey: ${event.physicalKey}');
}
return KeyEventResult.handled;
},
child: child,
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.5, on Ubuntu 22.04.5 LTS 6.8.0-49-generic, locale en_US.UTF-8)
• Flutter version 3.24.5 on channel stable at /home/username/workspace/devenv/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (10 days ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /home/username/Android/Sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /opt/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✓] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 14.0.0-1ubuntu1.1
• cmake version 3.30.3
• ninja version 1.10.1
• pkg-config version 0.29.2
[✓] Android Studio (version 2024.1)
• Android Studio at /opt/android-studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.3)
• VS Code at /usr/share/code
• Flutter extension version 3.100.0
[!] Proxy Configuration
• HTTP_PROXY is set
! NO_PROXY is not set
[✓] Connected device (3 available)
• V2241A (mobile) • 192.168.1.9:5555 • android-arm64 • Android 14 (API 34)
• Linux (desktop) • linux • linux-x64 • Ubuntu 22.04.5 LTS 6.8.0-49-generic
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.69
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| a: text input,platform-android,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | low | Critical |
2,687,182,792 | godot | CanvasItem.show_behind_parent does nothing if parent has Y sort enabled | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4070 (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 5800X 8-Core Processor (16 Threads)
### Issue description
The property `CanvasItem.show_behind_parent` does not cause the sprite to draw behind its parent if the parent has Y sort enabled, even if the child does not.
### Steps to reproduce
Steps to reproduce:
Add two `Sprite2D` nodes to a scene, one being a child of the other. (Give them graphics of course.)
Set `y_sort_enabled` on the parent node to `true`.
Set `show_behind_parent` on the child node to `true`.
Expected result:
The child sprite draws behind the parent sprite.
Observed result:
The child sprite draws in front of the parent sprite.
### Minimal reproduction project (MRP)
N/A | discussion,documentation,topic:2d | low | Minor |
2,687,193,856 | ui | [bug]: Combobox doesn't lock scroll on underlying page, causing it to scroll out of focus | ### Describe the bug
When the combobox is opened and the component itself doesn't scroll, scrolling moves the underlying page instead, so the combobox scrolls out of view. Depending on the z-index of the underlying components, it can end up scrolling on top of or underneath them.
https://github.com/user-attachments/assets/e9606a58-fd7c-4c24-9d3b-56775d7d07b1
### Affected component/components
Combobox
### How to reproduce
1. insert combobox component
2. expand the page so that it has a vertical scroll, 2x the screen height
3. open combobox component
4. scroll down
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Chrome, Windows
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,687,194,846 | tensorflow | Aborted (core dumped) in `tf.raw_ops.SparseConcat` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17.0 tf2.16.1
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Under specific inputs, `SparseConcat` triggered a crash.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
indices1 = tf.constant(2, shape=[3,3], dtype=tf.int64)
values1 = tf.constant("aaaabaaacaaadaaaeaaafaaagaaahaaaiaaajaaakaaalaaamaaanaaaoaaapaaaqaaaraaasaaataaauaaavaaawaaaxaaayaaazaabbaabcaabdaabeaabfaabgaabhaabiaabjaabkaablaabmaabnaaboaabpaabqaabraabsaabtaabuaabvaabwaabxaabyaabzaacbaaccaacdaaceaacfaacgaachaaciaacjaackaaclaacmaacnaacoaacpaacqaacraacsaactaacuaacvaacwaacxaacyaac",
shape=[3], dtype=tf.string)
shapes1 = tf.constant([5, 2, 2147483647], dtype=tf.int64)
indices2 = tf.constant(-2, shape=[4,3], dtype=tf.int64)
values2 = tf.constant(" ", shape=[4], dtype=tf.string)
shapes2 = tf.constant([5,1879048192,536870912], dtype=tf.int64)
concat_dim = 1
tf.raw_ops.SparseConcat(
indices=[indices1, indices2], values=[values1, values2], shapes=[shapes1, shapes2], concat_dim=concat_dim, name=None
)
```
### Relevant log output
```shell
2024-11-24 06:36:10.994508: F tensorflow/core/framework/tensor_shape.cc:607] Non-OK-status: RecomputeNumElements() status: INVALID_ARGUMENT: Shape [5,1879048194,2147483647] results in overflow when computing number of elements
Aborted (core dumped)
```
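For reference, the shape in the message, [5, 1879048194, 2147483647] (1879048194 = 2 + 1879048192 from concatenating along dim 1), has an element count that overflows int64. A quick check, not part of the original report:
```python
# 5 * (2 + 1879048192) * 2147483647 is roughly 2.0e19, which does not fit
# in a signed 64-bit integer (max 2**63 - 1, about 9.22e18).
n = 5 * (2 + 1879048192) * 2147483647
print(n > 2**63 - 1)  # True
```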
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,687,252,110 | pytorch | [Dynamo] Keep weak reference of parameters instead of accessing them by attributes | ### 🚀 The feature, motivation and pitch
For this code:
```python
import torch
class DummyModule(torch.nn.Module):
def __init__(self):
super(DummyModule, self).__init__()
self.a = torch.nn.ModuleDict(
{
"b": torch.nn.ModuleDict(
{
"c": torch.nn.ModuleDict(
{
"d": torch.nn.ModuleDict(
{
"e": torch.nn.Linear(10, 10, bias=False)
}
)
}
)
}
)
}
)
def forward(self, x):
return self.a.b.c.d.e(x)
model = DummyModule()
opt_model = torch.compile(model)
x = torch.randn(10, 10)
opt_model(x)
```
Dynamo will compile it to be:
```python
def __transformed_code_0_for_forward(self, x):
__temp_2, = __compiled_fn_1(self._modules['a']._modules['b']._modules['c'].
_modules['d']._modules['e']._parameters['weight'], x)
return __temp_2
```
Note that the tensor is accessed via a series of Python attribute accesses.
For real-world models, we can have hundreds of weight parameters, and every parameter needs several layers of attribute access. This overhead cannot be removed, even if we use cudagraph.
One potential optimization: at Dynamo compilation time, keep a weak reference to each parameter tensor, and at forward time use the saved tensors directly.
Or we can save all parameters in the top-level `self` object.
Ideal compiled bytecode from Dynamo:
```python
def __transformed_code_0_for_forward(self, x):
l_self_a_b_c_d_e_weight = self._all_states[0]
__temp_2, = __compiled_fn_1(l_self_a_b_c_d_e_weight, x)
return __temp_2
```
This can be even faster than PyTorch's eager mode: we move this work to compilation time and remove the overhead at runtime.
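As a rough illustration of the weak-reference idea (a hand-written sketch against the `DummyModule` defined above, not how Dynamo actually implements anything; `param_refs` and `fast_forward` are made up for illustration):
```python
import weakref
import torch

model = DummyModule()

# "Compile" time: record weak references to the parameters in the order the
# compiled function expects them, instead of remembering attribute paths.
param_refs = [weakref.ref(p) for p in model.parameters()]

def fast_forward(x):
    # Call time: resolve the weak reference directly, with no
    # self._modules['a']._modules['b']... attribute chain.
    weight = param_refs[0]()
    assert weight is not None, "parameter was garbage collected"
    return torch.nn.functional.linear(x, weight)

x = torch.randn(10, 10)
torch.testing.assert_close(fast_forward(x), model(x))
```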
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @anijain2305
### Alternatives
_No response_
### Additional context
_No response_ | triaged,oncall: pt2,module: dynamo,vllm-compile | low | Major |
2,687,254,602 | react-native | boxShadow on TextInput alternates visible/hidden with each key typed on Android | ### Description
Very simple - when typing in a TextInput on Android, each alternate keypress seems to enable/disable the boxShadow style on the TextInput. This behaviour is not seen on iOS; there, the boxShadow stays visible as expected.
This affects Expo 52/React Native 0.76.
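A hand-written approximation of the repro (the Snack linked under Reproducer is the authoritative one):
```jsx
import React from 'react';
import { TextInput } from 'react-native';

// Minimal sketch: a TextInput with a boxShadow style. On Android with RN 0.76,
// the shadow appears to toggle visible/hidden on each alternate keypress.
export default function Repro() {
  return (
    <TextInput
      placeholder="Type here"
      style={{
        margin: 24,
        padding: 12,
        borderRadius: 8,
        backgroundColor: 'white',
        boxShadow: '0 4px 12px rgba(0, 0, 0, 0.4)',
      }}
    />
  );
}
```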
### Steps to reproduce
Run the Snack in Android
### React Native Version
0.76.3
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz
Memory: 105.98 GB / 127.71 GB
Binaries:
Node:
version: 20.10.0
path: C:\Program Files (x86)\Nodist\bin\node.EXE
Yarn: Not Found
npm:
version: 10.2.3
path: C:\Program Files (x86)\Nodist\bin\npm.EXE
Watchman: Not Found
SDKs:
Android SDK: Not Found
Windows SDK: Not Found
IDEs:
Android Studio: AI-242.23339.11.2421.12550806
Visual Studio: Not Found
Languages:
Java: 17.0.13
Ruby: Not Found
npmPackages:
"@react-native-community/cli":
installed: 15.1.2
wanted: latest
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
No error - see the Snack.
```
### Reproducer
https://snack.expo.dev/@albertsokol/android-boxshadow-bug-on-textinput
### Screenshots and Videos
_No response_ | Stale,Component: TextInput,Platform: Android,Needs: Author Feedback,Needs: Version Info | low | Critical |
2,687,301,587 | deno | Deno is not able to run npm:web-ext to run firefox extension | Version: Deno 2.1.1
Deno is not able to run web-ext to launch a Firefox extension. It seems to fail during the connection retry process.
There is no problem when Deno runs web-ext to spin up a Chrome extension.
✔️ `deno run -A npm:web-ext run --target=chromium -s ./dist/chrome`
❌ `deno run -A npm:web-ext run -s ./dist/firefox`
Here's the detail logs for web ext runner to run firefox extension.
```
deno-webext-starter on main [!?] via 🦕 v2.1.1 via v23.3.0 on ☁️ took 10s
❯ deno run -A npm:web-ext run -s dist/firefox --verbose --no-config-discovery
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/program.js][info] Version:
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/program.js][debug] Not discovering config files
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/cmd/run.js][info] Running web extension from /<path>/deno-webext-starter/dist/firefox
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/util/manifest.js][debug] Validating manifest at /<path>/deno-webext-starter/dist/firefox/manifest.json
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/extension-runners/firefox-desktop.js][debug] Creating new Firefox profile
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/firefox/index.js][debug] Running Firefox with profile at /var/folders/5x/m56tp_7x1f54q12shdvx_jh80000gq/T/firefox-profileAcETEx/
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/firefox/index.js][debug] Executing Firefox binary: /Applications/Firefox.app/Contents/MacOS/firefox
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/firefox/index.js][debug] Firefox args: -start-debugger-server 61034 -foreground -no-remote -profile /var/folders/5x/m56tp_7x1f54q12shdvx_jh80000gq/T/firefox-profileAcETEx/
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/firefox/index.js][info] Use --verbose or --devtools to see logging
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/firefox/remote.js][debug] Connecting to the remote Firefox debugger
[/Users/<user>/Library/Caches/deno/npm/registry.npmjs.org/web-ext/8.3.0/lib/firefox/remote.js][debug] Connecting to Firefox on port 61034
error: Uncaught Error: connect ECONNREFUSED 127.0.0.1:61034
at __node_internal_captureLargerStackTrace (ext:deno_node/internal/errors.ts:93:9)
at __node_internal_exceptionWithHostPort (ext:deno_node/internal/errors.ts:217:10)
at TCPConnectWrap._afterConnect [as oncomplete] (node:net:172:16)
at TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
at ext:deno_node/internal_binding/tcp_wrap.ts:306:14
at eventLoopTick (ext:core/01_core.js:175:7)
```
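For context, web-ext's Firefox client retries refused TCP connections while Firefox starts up. A minimal sketch of that pattern (not web-ext's actual code) illustrates the difference: under Node the ECONNREFUSED reaches the `error` listener and is retried, while under `deno run -A` it seems to surface as an uncaught error:
```js
import net from "node:net";

// Sketch: connect to a port that is not listening yet and retry on refusal.
function connectWithRetry(port, retriesLeft = 5) {
  const socket = net.connect({ port, host: "127.0.0.1" }, () => {
    console.log("connected");
    socket.end();
  });
  socket.on("error", (err) => {
    if (err.code === "ECONNREFUSED" && retriesLeft > 0) {
      setTimeout(() => connectWithRetry(port, retriesLeft - 1), 250);
    } else {
      throw err;
    }
  });
}

connectWithRetry(61034);
```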
There is no problem when executing the web-ext script with npx:
```
deno-webext-starter on main [!?] via 🦕 v2.1.1 via v23.3.0 on ☁️
❯ npx web-ext run -s dist/firefox --verbose --no-config-discovery
(node:56043) ExperimentalWarning: CommonJS module /usr/local/lib/node_modules/npm/node_modules/debug/src/node.js is loading ES Module /usr/local/lib/node_modules/npm/node_modules/supports-color/index.js using require().
Support for loading ES Module in require() is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/program.js][info] Version:
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/program.js][debug] Not discovering config files
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/cmd/run.js][info] Running web extension from /<path>/deno-webext-starter/dist/firefox
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/util/manifest.js][debug] Validating manifest at /<path>/deno-webext-starter/dist/firefox/manifest.json
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/extension-runners/firefox-desktop.js][debug] Creating new Firefox profile
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/index.js][debug] Running Firefox with profile at /var/folders/5x/m56tp_7x1f54q12shdvx_jh80000gq/T/firefox-profilesoLrue/
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/index.js][debug] Executing Firefox binary: /Applications/Firefox.app/Contents/MacOS/firefox
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/index.js][debug] Firefox args: -start-debugger-server 60895 -foreground -no-remote -profile /var/folders/5x/m56tp_7x1f54q12shdvx_jh80000gq/T/firefox-profilesoLrue/
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/index.js][info] Use --verbose or --devtools to see logging
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to the remote Firefox debugger
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to Firefox on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Retrying Firefox (0); connection error: Error: connect ECONNREFUSED 127.0.0.1:60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to Firefox on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Retrying Firefox (1); connection error: Error: connect ECONNREFUSED 127.0.0.1:60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to Firefox on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Retrying Firefox (2); connection error: Error: connect ECONNREFUSED 127.0.0.1:60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to Firefox on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Retrying Firefox (3); connection error: Error: connect ECONNREFUSED 127.0.0.1:60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to Firefox on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Retrying Firefox (4); connection error: Error: connect ECONNREFUSED 127.0.0.1:60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to Firefox on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/index.js][debug] Firefox stdout: Started devtools server on 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Retrying Firefox (5); connection error: Error: connect ECONNREFUSED 127.0.0.1:60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connecting to Firefox on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] Connected to the remote Firefox debugger on port 60895
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/index.js][debug] Firefox stderr: UNSUPPORTED (log once): POSSIBLE ISSUE: unit 1 GLD_TEXTURE_INDEX_2D is unloadable and bound to sampler type (Float) - using zero texture because texture unloadable
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][debug] installTemporaryAddon: {"addon":{"id":"[email protected]","actor":false},"from":"server1.conn0.addonsActor2"}
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/firefox/remote.js][info] Installed /<path>/deno-webext-starter/dist/firefox as a temporary add-on
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/cmd/run.js][info] The extension will reload if any source file changes
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/util/file-filter.js][debug] Resolved path **/*.xpi with sourceDir /<path>/deno-webext-starter/dist/firefox to /<path>/deno-webext-starter/dist/firefox/**/*.xpi
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/util/file-filter.js][debug] Resolved path **/*.zip with sourceDir /<path>/deno-webext-starter/dist/firefox to /<path>/deno-webext-starter/dist/firefox/**/*.zip
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/util/file-filter.js][debug] Resolved path **/.* with sourceDir /<path>/deno-webext-starter/dist/firefox to /<path>/deno-webext-starter/dist/firefox/**/.*
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/util/file-filter.js][debug] Resolved path **/.*/**/* with sourceDir /<path>/deno-webext-starter/dist/firefox to /<path>/deno-webext-starter/dist/firefox/**/.*/**/*
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/util/file-filter.js][debug] Resolved path **/node_modules with sourceDir /<path>/deno-webext-starter/dist/firefox to /<path>/deno-webext-starter/dist/firefox/**/node_modules
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/util/file-filter.js][debug] Resolved path **/node_modules/**/* with sourceDir /<path>/deno-webext-starter/dist/firefox to /<path>/deno-webext-starter/dist/firefox/**/node_modules/**/*
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/watcher.js][debug] Watching /<path>/deno-webext-starter/dist/firefox for changes
[/<path>/deno-webext-starter/node_modules/.deno/[email protected]/node_modules/web-ext/lib/extension-runners/index.js][info] Press R to reload (and Ctrl-C to quit)
``` | bug,needs investigation,node compat | low | Critical |
2,687,342,610 | kubernetes | Introduce `ReevaluationHint` in QHint for a single node scheduling constraint optimization | /assign
/sig scheduling
/cc @kubernetes/sig-scheduling-misc @macsko @dom4ha
## Summary
Introduce a new return value `ReevaluationHint` in QHint so that we don't have to re-evaluate all nodes at the scheduling retries.
## Motivation
We can roughly divide our scheduling constraints into two groups: "single node constraint" or "cross-node (or node unrelated) constraint".
For example, a resource requirement is a "single node constraint": if node-a doesn't have enough CPU for pod-a, node-a is filtered out in the scheduling cycle; that is, node-a's filtering result is determined only by node-a's status.
And pod topology spread is a "cross-node constraint": if domain-a, which node-a belongs to, already has too many Pods matching the topology spread, node-a is filtered out; that is, node-a's filtering result is determined not only by node-a's status, but also by other nodes' status.
Let’s think about the requeueing scenarios of the resource fit plugin (single node constraints).
There are four registered events:
- Node/Add: Node is newly added. In this case, the Pod might be schedulable on the new node.
- Node/UpdateNodeAllocatable: Node got more resources. In this case, the Pod might be schedulable on the updated node.
- Pod/Delete: The running Pod is deleted. In this case, the Pod might be schedulable on the node where the deleted Pod was running on.
- Pod/UpdatePodScaleDown:
- The existing Pod is scaled down: In this case, the Pod might be schedulable on the node where the scaled-down pod is running on.
- The unschedulable Pod itself is scaled down: In this case, the Pod might be schedulable on any nodes.
In all cases, we can determine which node(s) to re-evaluate in the next scheduling cycles.
Currently, every scheduling cycle (basically) always evaluates all the nodes.
If we could reduce the number of nodes to re-evaluate, that could be a significant performance optimization for the second or later scheduling cycles.
## Goals
- Allow QHint to return which node(s) to re-evaluate, change the scheduling cycles to consider them, and improve the scheduling latency for Pods with a single node scheduling constraint.
## Non-Goals
- Cross-node scheduling constraints are out of scope for now. But see the `Future ideas` section at the bottom.
- Given that the only cross-node scheduling constraints in the default scheduler are PodAffinity and PodTopologySpread, we can still get a huge benefit with single-node scheduling constraint support alone.
## Proposal
### Design Details
#### QHint interface change
Change QHint interface to return which node(s) to re-evaluate.
```go
type ReevaluationHint struct {
// NodeNames has the names of nodes which should be re-evaluated in the next scheduling cycle.
// Empty means all nodes.
NodeNames sets.Set[string]
}
type QueueingHintFn func(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) (QueueingHint, ReevaluationHint, error)
```
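As a rough illustration (not actual plugin code; the `fwk` alias and the helper name are placeholders), a resource-fit-style QHint for the Pod/Delete event could use the new return value like this:
```go
// Sketch only: queue the pod and hint that only the node which hosted the
// deleted pod needs to be re-evaluated in the next scheduling cycle.
func isSchedulableAfterPodDeletion(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) (fwk.QueueingHint, fwk.ReevaluationHint, error) {
	deletedPod, ok := oldObj.(*v1.Pod)
	if !ok {
		return fwk.Queue, fwk.ReevaluationHint{}, fmt.Errorf("unexpected object type %T", oldObj)
	}
	if deletedPod.Spec.NodeName == "" {
		// The deleted pod was never bound, so no node got new free resources.
		return fwk.QueueSkip, fwk.ReevaluationHint{}, nil
	}
	return fwk.Queue, fwk.ReevaluationHint{
		NodeNames: sets.New(deletedPod.Spec.NodeName),
	}, nil
}
```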
#### Scheduling queue change
We need to run QHints for Pods that are already in backoffQ or activeQ,
otherwise some nodes that should be re-evaluated might be overlooked.
The queue would aggregate all ReevaluationHint(s) that are returned from QHints,
and store them in PodInfo or somewhere so that the next scheduling cycle can refer to them.
### User Stories
#### Story 1
There are 5k nodes in the cluster.
Pod-a is created, but scheduling for Pod-a fails because none of the Nodes has enough CPU for this Pod.
A moment later, one Pod on Node-a is removed and now Node-a may have enough CPU for Pod-a, thus the scheduler retries the scheduling for this Pod.
We currently (basically) re-evaluate all nodes in the next scheduling cycle.
But, after this enhancement, the QHint from the resource fit plugin returns Queue for this Pod/Delete event, and it also returns Node-a in ReevaluationHint.
And, at the next scheduling cycle, the other 4999 Nodes are pre-filtered out by ReevaluationHint, and the scheduling cycle only re-evaluates Node-a.
We can save the calculation for 4999 Nodes.
### Risks and Mitigations
#### QHint calculation cost and possible negative impact on the scheduling throughput
As mentioned earlier, the number of QHint runs would increase because we have to run QHints for Pods in activeQ/backoffQ as well.
As we experienced recently, the latency of event handling could impact the scheduling throughput negatively, mainly because of the shared lock in the queue.
We need to carefully evaluate if this enhancement really improves the throughput with some PoC.
## Future ideas
### Cross node scheduling constraint support
Actually, we might be able to do the same for some cross node constraints too.
We can add LabelSelector in ReevaluationHint.
```go
type ReevaluationHint struct {
// NodeNames has the names of nodes which should be re-evaluated in the next scheduling cycle.
// Empty means all nodes.
NodeNames sets.Set[string]
// LabelSelector has the selector for nodes which should be re-evaluated in the next scheduling cycle.
// Empty means all nodes.
LabelSelector *labels.Selector
}
```
And, for example, let’s say a Pod failed with PodTopologySpread because the spreading is like: zone-a:zone-b:zone-c = 2:3:4 (maxSkew: 1). In this case, PodTopologySpread rejects all nodes in zone-b and zone-c.
And, supposing a new matching Pod is added to zone-a, which makes zone-a:zone-b:zone-c = 3:3:4.
In this case, QHint from PodTopologySpread could return the labelselector for zone-a and zone-b.
But, the problem is that, then QHint has to know the current spreading situation (2:3:4) to return the label selector.
We currently have an internal policy where a QHint shouldn't look up resources other than the ones given via the QHint parameters (= a QHint should only use pod, oldObj, and newObj to compute its result), so that we can prevent QHints from becoming too complicated and slow.
So, if we want to do this, we probably have to loosen this policy.
That is why I don't want to include cross-node scheduling constraints in the first step.
We should first see whether this idea is really worthwhile with single-node constraints alone, and then we can discuss the rest later.
### Concurrent scheduling cycles
In the long term, this enhancement could be a first step for the concurrent scheduling cycles, our dream.
As long as the `NodeNames` sets do not overlap, we can run the evaluations for multiple Pods concurrently, since the scheduling results won't conflict. | sig/scheduling,kind/feature,needs-triage | medium | Critical |
2,687,407,812 | rust | Confusing diagnostic for stray lifetime in type | ### Code
```Rust
struct Bar;
pub struct Foo<'a>('a, Bar);
```
### Current output
```Shell
error: lifetime in trait object type must be followed by `+`
--> src/lib.rs:2:20
|
2 | pub struct Foo<'a>('a, Bar);
| ^^
error[E0224]: at least one trait is required for an object type
--> src/lib.rs:2:20
|
2 | pub struct Foo<'a>('a, Bar);
| ^^
```
### Rationale and extra context
[Playground link](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=8784d44966d845deb0f24c469eb86de8)
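For comparison, attaching the lifetime to a reference (what the code presumably meant) compiles fine:
```rust
struct Bar;
pub struct Foo<'a>(&'a Bar);
```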
### Rust Version
Tested on playground so latest stable and nightly at the time of writing (1.82.0 stable) | A-diagnostics,A-parser,T-compiler,D-confusing | low | Critical |
2,687,418,210 | godot | Adding a Window node disables the default cursor shape of sibling Control nodes | ### Tested versions
- Reproducible in: 4.3.stable
### System information
Windows 10 - Godot v4.3.stable.steam - Vulkan (Forward+)
### Issue description
Normally, changing the default cursor shape of a control node will change the cursor shape while hovering that node.
However, adding a Window node to the scene disables that behavior: hovering any control node only shows the default arrow cursor until the node is clicked, after which the nodes start behaving normally again.
This seems to be a focus issue, because setting the window to unfocusable will make the mouse cursor transform normally while hovering control nodes.
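For reference, a script-only setup that should be equivalent to the steps below (a rough sketch):
```gdscript
extends Node

func _ready() -> void:
    var button := Button.new()
    button.text = "Hover me"
    # Hovering should show the pointing-hand cursor.
    button.mouse_default_cursor_shape = Control.CURSOR_POINTING_HAND
    add_child(button)

    # Adding a Window makes the cursor stay the default arrow over the button
    # until the button is clicked once.
    var window := Window.new()
    window.size = Vector2i(300, 200)
    add_child(window)
```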
### Steps to reproduce
1. Add a button to the scene with the `Mouse > Default Cursor Shape` property set to `Pointing Hand`.
2. Run the game. Hover the button. The cursor should be a pointing hand.
3. Add a Window node to the scene.
4. Run the game. Hover the button. The cursor is now the default arrow head.
5. Click the button. The cursor now transforms into what it is supposed to be: a pointing hand.
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/17892685/MRP.zip)
| bug,confirmed,topic:gui | low | Major |
2,687,479,008 | rust | apple-m3 detected as the native CPU for nightly rustc on apple-m4 | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I ran the following command with the latest nightly toolchain on a MacBook Pro M4 Max device:
```bash
$ rustup run nightly rustc --print target-cpus | head -n2
```
I expected to see this happen:
The output is expected to be as follows, indicating that the detected CPU is `apple-m4`.
```
Available CPUs for this target:
native - Select the CPU of the current host (currently apple-m4).
```
Instead, this happened:
The output indicates that the detected CPU is `apple-m3`, not `apple-m4`.
```
Available CPUs for this target:
native - Select the CPU of the current host (currently apple-m3).
```
The latest nightly rustc does recognize `apple-m4` as a valid target-cpu, though:
```bash
$ rustup run nightly rustc --print=cfg -C target-cpu=apple-m4 | grep feature
target_feature="aes"
target_feature="bf16"
target_feature="bti"
...
target_feature="wfxt"
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
```bash
$ rustup run nightly rustc --version --verbose
rustc 1.85.0-nightly (a47555110 2024-11-22)
binary: rustc
commit-hash: a47555110cf09b3ed59811d9b02235443e76a595
commit-date: 2024-11-22
host: aarch64-apple-darwin
release: 1.85.0-nightly
LLVM version: 19.1.4
```
```
$ /usr/sbin/sysctl -n machdep.cpu.brand_string
Apple M4 Max
```
As was the case in the similar past issue, https://github.com/rust-lang/rust/issues/127448 (mis-detection of apple-m3 as apple-m2), this might be addressed by an upcoming update to upstream LLVM.
This fix (https://github.com/llvm/llvm-project/pull/106599) in LLVM 19.x might be relevant. | A-LLVM,T-compiler,O-AArch64,llvm-fixed-upstream,O-apple,C-external-bug | low | Critical |
2,687,482,361 | vscode | The font of global search bar and search bar should be a monospace font. | 
Can the search bar and the global search bar (and the results) be set to the same monospace font as the workspace?
This is because the content we enter in the search bar is also part of the code itself.
Cheers,
Lu | bug,search | low | Minor |
2,687,483,806 | ui | [feat]: Add "Copy Link" Variant to Button Component | ### Feature description
I'd like to request adding a new variant to the Button component called "Copy Link". This variant would let users copy a link by clicking the button, with an animation or visual change indicating the link has been copied successfully (for example, a temporary "Copied" state). This would improve usability and create a more familiar experience, since copying links is a common action.
Currently, the "Copy Link" buttons in the shadcn examples only have a hover effect and give no confirmation that the link was actually copied. Adding clear visual feedback (such as changing the button text to "Copied" or a short animation) once the link is copied would give users confidence that the action succeeded and enhance the overall user experience.
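Something along these lines is what I have in mind (a rough sketch, not a proposed API; the icon names and import path follow the usual shadcn setup):
```tsx
import * as React from "react"
import { Check, Link } from "lucide-react"
import { Button } from "@/components/ui/button"

// Rough sketch of a "Copy Link" button with copied feedback.
export function CopyLinkButton({ url }: { url: string }) {
  const [copied, setCopied] = React.useState(false)

  async function handleCopy() {
    await navigator.clipboard.writeText(url)
    setCopied(true)
    // Reset the visual state after a short delay.
    setTimeout(() => setCopied(false), 2000)
  }

  return (
    <Button variant="outline" size="sm" onClick={handleCopy}>
      {copied ? <Check className="mr-2 h-4 w-4" /> : <Link className="mr-2 h-4 w-4" />}
      {copied ? "Copied" : "Copy Link"}
    </Button>
  )
}
```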
### Affected component/components
Button
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,687,495,321 | storybook | [Bug]: Angular signal inputs that are aliased don't work properly | ### Describe the bug
Angular allows inputs to be aliased, for example when kebab-case attribute names are desired ([see here](https://angular.dev/guide/components/inputs#input-aliases)). When I do so and assign such an attribute a value in the `args` list, the component property doesn't seem to receive the expected `InputSignal<string>`; instead, the raw value appears to be assigned to it directly.
Related to #25784 and PR #26413
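For reference, the aliased input in question looks roughly like this (a minimal sketch; the linked reproduction below is the authoritative code):
```ts
import { Component, input } from '@angular/core';

@Component({
  selector: 'storybook-button',
  standalone: true,
  template: `<button [style.background-color]="backgroundColor()">{{ label() }}</button>`,
})
export class ButtonComponent {
  label = input<string>('Button');
  // Aliased signal input: the binding name differs from the property name.
  // Setting it via `args` is what breaks (ctx.backgroundColor is not a function).
  backgroundColor = input<string>('#ffffff', { alias: 'background-color' });
}
```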
### Reproduction link
https://stackblitz.com/edit/github-gh38gu
### Reproduction steps
1. Go to above link
2. Navigate to the story Example\Button\Works
3. Open Devtools' console which will **not** show errors about backgroundColor (which is aliased)
4. Navigate to the story Example\Button\Broken
5. In Devtools' console there will be error messages about backgroundColor only. The attribute foregroundColor is not mentioned and the only difference is it not being aliased in the component.
```
ERROR TypeError: ctx.backgroundColor is not a function
at ButtonComponent_Template (template.html:6:3)
```
The broken story defines both attributes in the args; that seems to "activate" the difference.
### System
```bash
Storybook Environment Info:
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.2.3 - /usr/local/bin/npm <----- active
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@storybook/addon-docs: ^8.5.0-alpha.10 => 8.5.0-alpha.10
@storybook/addon-essentials: ^8.5.0-alpha.10 => 8.5.0-alpha.10
@storybook/addon-interactions: ^8.5.0-alpha.10 => 8.5.0-alpha.10
@storybook/addon-onboarding: ^8.5.0-alpha.10 => 8.5.0-alpha.10
@storybook/angular: ^8.5.0-alpha.10 => 8.5.0-alpha.10
@storybook/blocks: ^8.5.0-alpha.10 => 8.5.0-alpha.10
@storybook/test: ^8.5.0-alpha.10 => 8.5.0-alpha.10
storybook: ^8.5.0-alpha.10 => 8.5.0-alpha.10
```
### Additional context
_No response_ | bug,help wanted,angular,sev:S3 | low | Critical |
2,687,557,133 | flutter | When using workspaces the code coverage is broken | ### Steps to reproduce
1. Create a project with at least one test e.g. in `<root>/demo/test/test.dart`
2. Run `cd demo && flutter test --coverage`
3. The generated `<root>/demo/coverage/lcov.info` is not empty
4. Now add a [workspace](https://flutter.dev/go/pub-workspace) (a minimal sketch of the layout is shown after these steps)
5. Run `flutter test demo/test --coverage`
6. The generated `<root>/coverage/lcov.info` is empty
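A minimal sketch of the workspace layout referred to in step 4 (paths and names are illustrative):
```yaml
# <root>/pubspec.yaml
name: _workspace_root
environment:
  sdk: ^3.5.0
workspace:
  - demo

# <root>/demo/pubspec.yaml keeps its existing content and additionally declares:
#   resolution: workspace
```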
### Expected results
I expect coverage data when I run the tests
### Actual results
The coverage report file is empty
### Code sample
See [my autologin_plugin commit](https://github.com/rekire/autologin_plugin/tree/2fdf7216f11860faf6f11ed4d0d96fb24690da39). I don't have a minimal sample right now. Check the CI output.
### Logs
<details open><summary>Logs</summary>
```console
$ flutter test autologin/test --coverage
00:01 +3: All tests passed!
$ du -k coverage/lcov.info
0 coverage/lcov.info
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.5, on macOS 15.1.1 24B91 darwin-arm64, locale de-DE)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] IntelliJ IDEA Community Edition (version 2024.3)
[✓] VS Code (version 1.92.2)
[✓] Connected device (3 available)
[✓] Network resources
• No issues found!
```
</details>
| a: tests,framework,has reproducible steps,team-tool,found in release: 3.24,found in release: 3.27 | low | Critical |
2,687,566,367 | rust | Tracking Issue for asm_experimental_reg | This tracks support for additional registers in architectures where inline assembly is already stable.
The feature gate for the issue is `#![feature(asm_experimental_reg)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
- [x] Implementation (https://github.com/rust-lang/rust/pull/131664)
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[nightly-style-procedure]: https://github.com/rust-lang/style-team/blob/main/nightly-style-procedure.md
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
### Implementation history
- s390x: vreg input/output https://github.com/rust-lang/rust/pull/131664
Other candidates:
- GPR pair on RISC-V (added in LLVM 20: https://github.com/llvm/llvm-project/commit/4615cc38f35d111f09073f51cc734e29c9211067), Arm, and s390x
- AArch64 SVE types
- RISC-V RVV types
@rustbot label +A-inline-assembly | A-inline-assembly,C-tracking-issue | low | Critical |
2,687,604,587 | stable-diffusion-webui | [Bug]:all I get is a grey box with nothing in it | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
No matter what I do, all I get is a grey box with nothing in it. The image starts to form, gets slightly clearer, and then vanishes into a solid grey box. Please help.
### Steps to reproduce the problem

Tried multiple prompts and negative prompts; it always comes out the same.
Tried multiple settings, samplers, and schedules.
### What should have happened?
I'm pretty sure it's supposed to give me a picture.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-11-24-11-19.json](https://github.com/user-attachments/files/17893015/sysinfo-2024-11-24-11-19.json)
### Console logs
```Shell
unfortunately I don't know how to do this
```
### Additional information
first time user | bug-report | low | Critical |
2,687,628,469 | flutter | Unexpected refresh rate behavior while playing videos. | ### What package does this bug report belong to?
video_player
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
_fe_analyzer_shared:
dependency: transitive
description:
name: _fe_analyzer_shared
sha256: f256b0c0ba6c7577c15e2e4e114755640a875e885099367bf6e012b19314c834
url: "https://pub.dev"
source: hosted
version: "72.0.0"
_macros:
dependency: transitive
description: dart
source: sdk
version: "0.3.2"
analyzer:
dependency: transitive
description:
name: analyzer
sha256: b652861553cd3990d8ed361f7979dc6d7053a9ac8843fa73820ab68ce5410139
url: "https://pub.dev"
source: hosted
version: "6.7.0"
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
build:
dependency: transitive
description:
name: build
sha256: "80184af8b6cb3e5c1c4ec6d8544d27711700bc3e6d2efad04238c7b5290889f0"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
build_config:
dependency: transitive
description:
name: build_config
sha256: bf80fcfb46a29945b423bd9aad884590fb1dc69b330a4d4700cac476af1708d1
url: "https://pub.dev"
source: hosted
version: "1.1.1"
build_daemon:
dependency: transitive
description:
name: build_daemon
sha256: "79b2aef6ac2ed00046867ed354c88778c9c0f029df8a20fe10b5436826721ef9"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
build_resolvers:
dependency: transitive
description:
name: build_resolvers
sha256: "339086358431fa15d7eca8b6a36e5d783728cf025e559b834f4609a1fcfb7b0a"
url: "https://pub.dev"
source: hosted
version: "2.4.2"
build_runner:
dependency: "direct dev"
description:
name: build_runner
sha256: "028819cfb90051c6b5440c7e574d1896f8037e3c96cf17aaeb054c9311cfbf4d"
url: "https://pub.dev"
source: hosted
version: "2.4.13"
build_runner_core:
dependency: transitive
description:
name: build_runner_core
sha256: f8126682b87a7282a339b871298cc12009cb67109cfa1614d6436fb0289193e0
url: "https://pub.dev"
source: hosted
version: "7.3.2"
built_collection:
dependency: transitive
description:
name: built_collection
sha256: "376e3dd27b51ea877c28d525560790aee2e6fbb5f20e2f85d5081027d94e2100"
url: "https://pub.dev"
source: hosted
version: "5.1.1"
built_value:
dependency: transitive
description:
name: built_value
sha256: c7913a9737ee4007efedaffc968c049fd0f3d0e49109e778edc10de9426005cb
url: "https://pub.dev"
source: hosted
version: "8.9.2"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
checked_yaml:
dependency: transitive
description:
name: checked_yaml
sha256: feb6bed21949061731a7a75fc5d2aa727cf160b91af9a3e464c5e3a32e28b5ff
url: "https://pub.dev"
source: hosted
version: "2.0.3"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
code_builder:
dependency: transitive
description:
name: code_builder
sha256: "0ec10bf4a89e4c613960bf1e8b42c64127021740fb21640c29c909826a5eea3e"
url: "https://pub.dev"
source: hosted
version: "4.10.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
convert:
dependency: transitive
description:
name: convert
sha256: b30acd5944035672bc15c6b7a8b47d773e41e2f17de064350988c5d02adb1c68
url: "https://pub.dev"
source: hosted
version: "3.1.2"
coverage:
dependency: transitive
description:
name: coverage
sha256: "4b03e11f6d5b8f6e5bb5e9f7889a56fe6c5cbe942da5378ea4d4d7f73ef9dfe5"
url: "https://pub.dev"
source: hosted
version: "1.11.0"
crypto:
dependency: transitive
description:
name: crypto
sha256: "1e445881f28f22d6140f181e07737b22f1e099a5e1ff94b0af2f9e4a463f4855"
url: "https://pub.dev"
source: hosted
version: "3.0.6"
csslib:
dependency: transitive
description:
name: csslib
sha256: "09bad715f418841f976c77db72d5398dc1253c21fb9c0c7f0b0b985860b2d58e"
url: "https://pub.dev"
source: hosted
version: "1.0.2"
dart_style:
dependency: transitive
description:
name: dart_style
sha256: "7856d364b589d1f08986e140938578ed36ed948581fbc3bc9aef1805039ac5ab"
url: "https://pub.dev"
source: hosted
version: "2.3.7"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
ffi:
dependency: transitive
description:
name: ffi
sha256: "16ed7b077ef01ad6170a3d0c57caa4a112a38d7a2ed5602e0aca9ca6f3d98da6"
url: "https://pub.dev"
source: hosted
version: "2.1.3"
file:
dependency: transitive
description:
name: file
sha256: "5fc22d7c25582e38ad9a8515372cd9a93834027aacf1801cf01164dac0ffa08c"
url: "https://pub.dev"
source: hosted
version: "7.0.0"
fixnum:
dependency: transitive
description:
name: fixnum
sha256: b6dc7065e46c974bc7c5f143080a6764ec7a4be6da1285ececdc37be96de53be
url: "https://pub.dev"
source: hosted
version: "1.1.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_driver:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
frontend_server_client:
dependency: transitive
description:
name: frontend_server_client
sha256: f64a0333a82f30b0cca061bc3d143813a486dc086b574bfb233b7c1372427694
url: "https://pub.dev"
source: hosted
version: "4.0.0"
fuchsia_remote_debug_protocol:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
glob:
dependency: transitive
description:
name: glob
sha256: "0e7014b3b7d4dac1ca4d6114f82bf1782ee86745b9b42a92c9289c23d8a0ab63"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
graphs:
dependency: transitive
description:
name: graphs
sha256: "741bbf84165310a68ff28fe9e727332eef1407342fca52759cb21ad8177bb8d0"
url: "https://pub.dev"
source: hosted
version: "2.3.2"
html:
dependency: transitive
description:
name: html
sha256: "1fc58edeaec4307368c60d59b7e15b9d658b57d7f3125098b6294153c75337ec"
url: "https://pub.dev"
source: hosted
version: "0.15.5"
http_multi_server:
dependency: transitive
description:
name: http_multi_server
sha256: "97486f20f9c2f7be8f514851703d0119c3596d14ea63227af6f7a481ef2b2f8b"
url: "https://pub.dev"
source: hosted
version: "3.2.1"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
integration_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
io:
dependency: transitive
description:
name: io
sha256: "2ec25704aba361659e10e3e5f5d672068d332fc8ac516421d483a11e5cbd061e"
url: "https://pub.dev"
source: hosted
version: "1.0.4"
js:
dependency: transitive
description:
name: js
sha256: c1b2e9b5ea78c45e1a0788d29606ba27dc5f71f019f32ca5140f61ef071838cf
url: "https://pub.dev"
source: hosted
version: "0.7.1"
json_annotation:
dependency: transitive
description:
name: json_annotation
sha256: "1ce844379ca14835a50d2f019a3099f419082cfdd231cd86a142af94dd5c6bb1"
url: "https://pub.dev"
source: hosted
version: "4.9.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
logging:
dependency: transitive
description:
name: logging
sha256: c8245ada5f1717ed44271ed1c26b8ce85ca3228fd2ffdb75468ab01979309d61
url: "https://pub.dev"
source: hosted
version: "1.3.0"
macros:
dependency: transitive
description:
name: macros
sha256: "0acaed5d6b7eab89f63350bccd82119e6c602df0f391260d0e32b5e23db79536"
url: "https://pub.dev"
source: hosted
version: "0.1.2-main.4"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
mime:
dependency: transitive
description:
name: mime
sha256: "41a20518f0cb1256669420fdba0cd90d21561e560ac240f26ef8322e45bb7ed6"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
node_preamble:
dependency: transitive
description:
name: node_preamble
sha256: "6e7eac89047ab8a8d26cf16127b5ed26de65209847630400f9aefd7cd5c730db"
url: "https://pub.dev"
source: hosted
version: "2.0.2"
package_config:
dependency: transitive
description:
name: package_config
sha256: "1c5b77ccc91e4823a5af61ee74e6b972db1ef98c2ff5a18d3161c982a55448bd"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
path_provider:
dependency: "direct dev"
description:
name: path_provider
sha256: "50c5dd5b6e1aaf6fb3a78b33f6aa3afca52bf903a8a5298f53101fdaee55bbcd"
url: "https://pub.dev"
source: hosted
version: "2.1.5"
path_provider_android:
dependency: transitive
description:
name: path_provider_android
sha256: c464428172cb986b758c6d1724c603097febb8fb855aa265aeecc9280c294d4a
url: "https://pub.dev"
source: hosted
version: "2.2.12"
path_provider_foundation:
dependency: transitive
description:
name: path_provider_foundation
sha256: f234384a3fdd67f989b4d54a5d73ca2a6c422fa55ae694381ae0f4375cd1ea16
url: "https://pub.dev"
source: hosted
version: "2.4.0"
path_provider_linux:
dependency: transitive
description:
name: path_provider_linux
sha256: f7a1fe3a634fe7734c8d3f2766ad746ae2a2884abe22e241a8b301bf5cac3279
url: "https://pub.dev"
source: hosted
version: "2.2.1"
path_provider_platform_interface:
dependency: transitive
description:
name: path_provider_platform_interface
sha256: "88f5779f72ba699763fa3a3b06aa4bf6de76c8e5de842cf6f29e2e06476c2334"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
path_provider_windows:
dependency: transitive
description:
name: path_provider_windows
sha256: bd6f00dbd873bfb70d0761682da2b3a2c2fccc2b9e84c495821639601d81afe7
url: "https://pub.dev"
source: hosted
version: "2.3.0"
platform:
dependency: transitive
description:
name: platform
sha256: "9b71283fc13df574056616011fb138fd3b793ea47cc509c189a6c3fa5f8a1a65"
url: "https://pub.dev"
source: hosted
version: "3.1.5"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
pool:
dependency: transitive
description:
name: pool
sha256: "20fe868b6314b322ea036ba325e6fc0711a22948856475e2c2b6306e8ab39c2a"
url: "https://pub.dev"
source: hosted
version: "1.5.1"
process:
dependency: transitive
description:
name: process
sha256: "21e54fd2faf1b5bdd5102afd25012184a6793927648ea81eea80552ac9405b32"
url: "https://pub.dev"
source: hosted
version: "5.0.2"
pub_semver:
dependency: transitive
description:
name: pub_semver
sha256: "40d3ab1bbd474c4c2328c91e3a7df8c6dd629b79ece4c4bd04bee496a224fb0c"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
pubspec_parse:
dependency: transitive
description:
name: pubspec_parse
sha256: c799b721d79eb6ee6fa56f00c04b472dcd44a30d258fac2174a6ec57302678f8
url: "https://pub.dev"
source: hosted
version: "1.3.0"
shelf:
dependency: transitive
description:
name: shelf
sha256: ad29c505aee705f41a4d8963641f91ac4cee3c8fad5947e033390a7bd8180fa4
url: "https://pub.dev"
source: hosted
version: "1.4.1"
shelf_packages_handler:
dependency: transitive
description:
name: shelf_packages_handler
sha256: "89f967eca29607c933ba9571d838be31d67f53f6e4ee15147d5dc2934fee1b1e"
url: "https://pub.dev"
source: hosted
version: "3.0.2"
shelf_static:
dependency: transitive
description:
name: shelf_static
sha256: c87c3875f91262785dade62d135760c2c69cb217ac759485334c5857ad89f6e3
url: "https://pub.dev"
source: hosted
version: "1.1.3"
shelf_web_socket:
dependency: transitive
description:
name: shelf_web_socket
sha256: cc36c297b52866d203dbf9332263c94becc2fe0ceaa9681d07b6ef9807023b67
url: "https://pub.dev"
source: hosted
version: "2.0.1"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_map_stack_trace:
dependency: transitive
description:
name: source_map_stack_trace
sha256: c0713a43e323c3302c2abe2a1cc89aa057a387101ebd280371d6a6c9fa68516b
url: "https://pub.dev"
source: hosted
version: "2.1.2"
source_maps:
dependency: transitive
description:
name: source_maps
sha256: "708b3f6b97248e5781f493b765c3337db11c5d2c81c3094f10904bfa8004c703"
url: "https://pub.dev"
source: hosted
version: "0.10.12"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: "14a00e794c7c11aa145a170587321aedce29769c08d7f58b1d141da75e3b1c6f"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
sync_http:
dependency: transitive
description:
name: sync_http
sha256: "7f0cd72eca000d2e026bcd6f990b81d0ca06022ef4e32fb257b30d3d1014a961"
url: "https://pub.dev"
source: hosted
version: "0.3.1"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test:
dependency: "direct dev"
description:
name: test
sha256: "7ee44229615f8f642b68120165ae4c2a75fe77ae2065b1e55ae4711f6cf0899e"
url: "https://pub.dev"
source: hosted
version: "1.25.7"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
test_core:
dependency: transitive
description:
name: test_core
sha256: "55ea5a652e38a1dfb32943a7973f3681a60f872f8c3a05a14664ad54ef9c6696"
url: "https://pub.dev"
source: hosted
version: "0.6.4"
timing:
dependency: transitive
description:
name: timing
sha256: "70a3b636575d4163c477e6de42f247a23b315ae20e86442bebe32d3cabf61c32"
url: "https://pub.dev"
source: hosted
version: "1.0.1"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
video_player:
dependency: "direct main"
description:
name: video_player
sha256: "4a8c3492d734f7c39c2588a3206707a05ee80cef52e8c7f3b2078d430c84bc17"
url: "https://pub.dev"
source: hosted
version: "2.9.2"
video_player_android:
dependency: transitive
description:
name: video_player_android
sha256: "391e092ba4abe2f93b3e625bd6b6a6ec7d7414279462c1c0ee42b5ab8d0a0898"
url: "https://pub.dev"
source: hosted
version: "2.7.16"
video_player_avfoundation:
dependency: transitive
description:
name: video_player_avfoundation
sha256: "0b146e5d82e886ff43e5a46c6bcbe390761b802864a6e2503eb612d69a405dfa"
url: "https://pub.dev"
source: hosted
version: "2.6.3"
video_player_platform_interface:
dependency: transitive
description:
name: video_player_platform_interface
sha256: "229d7642ccd9f3dc4aba169609dd6b5f3f443bb4cc15b82f7785fcada5af9bbb"
url: "https://pub.dev"
source: hosted
version: "6.2.3"
video_player_web:
dependency: transitive
description:
name: video_player_web
sha256: "881b375a934d8ebf868c7fb1423b2bfaa393a0a265fa3f733079a86536064a10"
url: "https://pub.dev"
source: hosted
version: "2.3.3"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
watcher:
dependency: transitive
description:
name: watcher
sha256: "3d2ad6751b3c16cf07c7fca317a1413b3f26530319181b37e3b9039b84fc01d8"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web_socket:
dependency: transitive
description:
name: web_socket
sha256: "3c12d96c0c9a4eec095246debcea7b86c0324f22df69893d538fcc6f1b8cce83"
url: "https://pub.dev"
source: hosted
version: "0.1.6"
web_socket_channel:
dependency: transitive
description:
name: web_socket_channel
sha256: "9f187088ed104edd8662ca07af4b124465893caf063ba29758f97af57e61da8f"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
webdriver:
dependency: transitive
description:
name: webdriver
sha256: "003d7da9519e1e5f329422b36c4dcdf18d7d2978d1ba099ea4e45ba490ed845e"
url: "https://pub.dev"
source: hosted
version: "3.0.3"
webkit_inspection_protocol:
dependency: transitive
description:
name: webkit_inspection_protocol
sha256: "87d3f2333bb240704cd3f1c6b5b7acd8a10e7f0bc28c28dcf14e782014f4a572"
url: "https://pub.dev"
source: hosted
version: "1.2.1"
xdg_directories:
dependency: transitive
description:
name: xdg_directories
sha256: "7a3f37b05d989967cdddcbb571f1ea834867ae2faa29725fd085180e0883aa15"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
yaml:
dependency: transitive
description:
name: yaml
sha256: "75769501ea3489fca56601ff33454fe45507ea3bfb014161abc3b43ae25989d5"
url: "https://pub.dev"
source: hosted
version: "3.1.2"
sdks:
dart: ">=3.5.0 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
1. Run the example app (https://github.com/flutter/packages/tree/main/packages/video_player/video_player/example) on a device with variable refresh rate, in release mode (I have encountered the issue on a Samsung Galaxy S24 Ultra running Android 14, One UI 6.1.1, but also on a Samsung Galaxy S21+, Android 14, One UI 6.1).
2. Enable "Show refresh rate" in the developer options and make sure the phone refresh rate is set to Adaptive.
3. Open the app. You'll see the refresh rate in the top left corner switch from 120 to a lower value (60, 48, 24, depending on the minimum the phone supports) as you navigate through the app.
4. Play any video inside the example app. While the video is playing, the refresh rate is forced to always be the highest supported (120 in my case). As soon as the video is paused, the refresh rate will return to being adaptive.
5. Download the video from the assets folder in the app (https://github.com/flutter/packages/blob/main/packages/video_player/video_player/example/assets/Butterfly-209.mp4) and play it with any native Android video player (on Samsung try the Gallery app). You will see the refresh rate switching as expected to a lower one, as expected.
### Expected results
On the Flutter app, while playing the video, the refresh rate should change based on the frame rate of the video, or at least drop to a lower value that does not affect playback (such as 60 Hz), matching the native behavior.
### Actual results
The refresh rate while playing the video is capped to the highest one the phone supports. This is an issue on apps that primarily stream videos because the battery life will be much worse if the refresh rate is locked to 120hz.
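For reference, the platform mechanism that native players appear to rely on is `Surface.setFrameRate()` (rough sketch; whether the player setup inside video_player_android propagates such a hint, or whether the Flutter rendering pipeline overrides it, is exactly what seems to cause the difference):
```kotlin
import android.view.Surface

// Sketch: hint the compositor about the content frame rate (API 30+),
// so an adaptive display can drop to a matching refresh rate.
fun hintContentFrameRate(surface: Surface, fps: Float) {
    surface.setFrameRate(fps, Surface.FRAME_RATE_COMPATIBILITY_FIXED_SOURCE)
}
```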
### Code sample
Code sample: https://github.com/flutter/packages/tree/main/packages/video_player/video_player/example
### Screenshots or Videos
Unfortunately, I cannot provide any video or photo demonstration because, when screen recording or screenshotting, the refresh rate is locked to 120 Hz. When using the app normally, the issue can be observed clearly. Also, I don't have another device on hand at the moment to film the screen.
### Logs
Not applicable.
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.26100.2448], locale ro-RO)
• Flutter version 3.24.3 on channel stable at C:\Tools\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (2 months ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\iamco\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[✗] Chrome - develop for the web (Cannot find Chrome executable at .\Google\Chrome\Application\chrome.exe)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.6)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.11.35431.28
• Windows 10 SDK version 10.0.22621.0
[✓] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[✓] VS Code (version 1.95.2)
• VS Code at C:\Users\iamco\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.100.0
[✓] Connected device (2 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.26100.2448]
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.51
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-android,p: video_player,package,has reproducible steps,P2,team-android,triaged-android,found in release: 3.24,found in release: 3.27 | low | Critical |
2,687,632,101 | PowerToys | Customizable Key Shortcuts (with sane defaults) to Switch enable/disable a Toy / feature | ### Description of the new feature / enhancement
As a user, I would like to have customizable key shortcuts (with sane defaults) to toggle (enable/disable) a Toy / feature ;-)
### Scenario when this would be used?
When users want to quickly turn on or off a feature / toy.
For example, I use Keyboard Manager, which I want _enabled_ when I type on my laptop's German keyboard, but _disabled_ when the laptop is docked.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage | low | Minor |
2,687,632,704 | godot | Incorrect results for 'is_valid_ip_address()' for compressed IPv6 addresses using '::' | ### Tested versions
4.4 dev
### System information
Godot v4.4.dev (0c45ace15) - Windows 10.0.18363 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6614) - Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz (8 threads)
### Issue description
The ```is_valid_ip_address()``` method has inconsistent behavior when validating IPv6 addresses with multiple occurrences of ```::```. According to the IPv6 standard, ```::``` can only appear once in an address to represent consecutive zero segments, but currently, for example, the method incorrectly validates addresses like ```2001:db8:::1``` and ```2001::1::1``` as valid. These padding styles should not be possible.
Below are some test cases showing that the method returns `true` when we expect `false`:

### Steps to reproduce
```gdscript
extends Node
func _ready() -> void:
print("\nTesting is_valid_ip_address() Method:\n")
print("Valid IPv4 Addresses:")
print("Input: ", "192.168.1.1", "\t -> Result:", "192.168.1.1".is_valid_ip_address()) # Expected: true
print("Input: ", "255.255.255.255", "\t -> Result:", "255.255.255.255".is_valid_ip_address()) # Expected: true (broadcast address)
print("\nInvalid IPv4 Addresses:")
print("Input: ", "256.256.256.256", "\t -> Result:", "256.256.256.256".is_valid_ip_address()) # Expected: false (out of range)
print("Input: ", "192.168.1", "\t -> Result:", "192.168.1".is_valid_ip_address()) # Expected: false (missing octet)
print("Input: ", "192.168.-1.1", "\t -> Result:", "192.168.-1.1".is_valid_ip_address()) # Expected: false (negative octet)
print("\nValid IPv6 Addresses:")
print("Input: ", "::1", "\t -> Result:", "::1".is_valid_ip_address()) # Expected: true (loopback address)
print("Input: ", "2001:db8::", "\t -> Result:", "2001:db8::".is_valid_ip_address()) # Expected: true (trailing `::`)
print("\nInvalid IPv6 Addresses:")
print("Input: ", "2001:db8:::1", "\t -> Result:", "2001:db8:::1".is_valid_ip_address()) # Expected: false (multiple `::`)
print("Input: ", "2001::1::1", "\t -> Result:", "2001::1::1".is_valid_ip_address()) # Expected: false (multiple `::`)
print("Input: ", "::ffff:999.999.999.999","\t -> Result:", "::ffff:999.999.999.999".is_valid_ip_address()) # Expected: false (invalid IPv4 in IPv6)
print("\nEdge Cases:")
print("Input: ", "", "\t -> Result:", "".is_valid_ip_address()) # Expected: false (empty string)
print("Input: ", ":1", "\t -> Result:", ":1".is_valid_ip_address()) # Expected: false (incomplete address)
print("Input: ", "12345::", "\t -> Result:", "12345::".is_valid_ip_address()) # Expected: false (invalid segment length)
```
### Minimal reproduction project (MRP)
N/A | bug,topic:network | low | Minor |
2,687,735,369 | deno | Not able to compile hello world app with unused dependency | deno 2.1.1 (stable, release, x86_64-unknown-linux-gnu)
v8 13.0.245.12-rusty
typescript 5.6.2
Getting
```
2.472 error: Writing deno compile executable to temporary file './build/client.tmp-c7ddbd1567062e51'
2.472
2.472 Caused by:
2.472 0: Building npm vfs.
2.472 1: No such file or directory (os error 2)
```
when trying to compile a tiny project within the official Docker image. The error is also not really helpful.
Steps to reproduce:
`client.ts`
```
// import Bottleneck from "bottleneck";
console.log("foo bar");
```
`deno.json`
```
{
"tasks": {
"build:client": "deno compile --allow-read --allow-write --output ./build/client ./client.ts"
},
"imports": {
"bottleneck": "npm:[email protected]"
},
"compilerOptions": {
"checkJs": true,
"strict": true
}
}
```
`Dockerfile`
```
FROM denoland/deno:alpine-2.1.1
COPY . /app
RUN deno --version && cd /app/ && ls -liah && deno run build:client
```
`deno.lock`
```
{
"version": "4",
"specifiers": {
"npm:[email protected]": "2.19.5"
},
"npm": {
"[email protected]": {
"integrity": "sha512-VHiNCbI1lKdl44tGrhNfU3lup0Tj/ZBMJB5/2ZbNXRCPuRCO7ed2mgcK4r17y+KB2EfuYuRaVlwNbAeaWGSpbw=="
}
},
"workspace": {
"dependencies": [
"npm:[email protected]"
]
}
}
```
The same project builds fine on my host running Arch Linux.
When I uncomment the `bottleneck` import, it builds within Docker without issues 🤔
Is that some ldd issue? Tried both Alpine and Debian base images, but the result is identical. | needs investigation,compile | low | Critical |
2,687,849,630 | react | [React 19] Can't use debounce for useCallback - Expected the first argument to be an inline function expressioneslint(react-compiler/react-compiler) | ## Summary
> "react-compiler-runtime": "19.0.0-beta-0dec889-20241115",
> "babel-plugin-react-compiler": "19.0.0-beta-0dec889-20241115",
> "eslint-plugin-react-compiler": "19.0.0-beta-0dec889-20241115",
> "eslint": "^8.56.0",
.eslintrc.cjs
```cjs
module.exports = {
root: true,
env: { browser: true, es2020: true },
extends: [
'eslint:recommended',
'plugin:@typescript-eslint/recommended',
'plugin:react-hooks/recommended',
],
ignorePatterns: ['dist', '.eslintrc.cjs'],
parser: '@typescript-eslint/parser',
plugins: ['react-refresh', 'react-compiler'],
rules: {
'react-refresh/only-export-components': [
'warn',
{ allowConstantExport: true },
],
"react-compiler/react-compiler": "error"
},
}
```
I use a debounce function to fetch from the API on each keystroke, and the following react-compiler ESLint error comes up:
> Expected the first argument to be an inline function expression eslint(react-compiler/react-compiler)
```ts
const debouncedSearch = useCallback(debounce(async (query: string) => {
  const response = await fetch(`api/search/quotes?query=${query}`);
  if (response.ok) {
    const { results } = await response.json();
    setResults(results);
  }
  else {
    setResults([]);
  }
}, 300), []);
```

How can I pass a function wrapped by another function as an argument to hooks without breaking this lint rule? Should I just not use `useCallback` in this case? | React 19 | medium | Critical |
2,687,855,956 | ui | [bug]: Dialog abstraction throws DialogContent requires a DialogTitle since last update | ### Describe the bug
# Current Behavior
Error message about using `@radix-ui/react-dialog` library for `shadcn components`
### Affected component/components
command
### How to reproduce
### ERROR
Dialog abstraction throws `DialogContent` requires a `DialogTitle` since last update

### Expected behavior
There is no error when using `shadcn components` associated with `@radix-ui/react-dialog` for example: [command](https://ui.shadcn.com/docs/components/command)
### My local solution
components/ui/dialog
```js
import * as DialogPrimitive from '@radix-ui/react-dialog'
const DialogContent = React.forwardRef<
....
>(({ className, children, ...props }, ref) => (
<DialogPortal>
....
<DialogPrimitive.Content
....
+ aria-describedby={undefined}
>
+ <DialogPrimitive.Title className='hidden'>visually hidden</DialogPrimitive.Title>
{children}
....
</DialogPrimitive.Content>
</DialogPortal>
))
```
### Related issues
[#4302](https://github.com/shadcn-ui/ui/issues/4302)
[#5474](https://github.com/shadcn-ui/ui/issues/5474)
[#5698](https://github.com/shadcn-ui/ui/issues/5698)
[#5746](https://github.com/shadcn-ui/ui/issues/5746)
### radix-ui issues
[#2986](https://github.com/radix-ui/primitives/issues/2986)
### Codesandbox/StackBlitz link
https://ui.shadcn.com/docs/components/command
### Logs
_No response_
### System Info
```bash
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
- node: 20.17.0
- pnpm: 9.10.0
- yarn: 1.22.22
- npm: 10.7.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,687,861,427 | neovim | completion: wildcards not escaped during env var expansion | ### Problem
Wildcards are not escaped when expanding environment variables on the cmdline. This differs from shell behavior, where a wildcard character inside an env var is treated as the literal character.
Front end frameworks like next.js are making directory names with wildcard characters [more common](https://nextjs.org/docs/app/building-your-application/routing/dynamic-routes). I found this bug when trying to use env vars to store paths for navigating around a large codebase.
I believe this is a separate issue to #24233 - I do not see the behavior described in that ticket.
From a security perspective, it seems like escaping the contents of external vars is safer.
### Steps to reproduce
```bash
export d='[dir]'
mkdir $d
echo hello > $d/file.txt
nvim --clean
:e $d/<tab>
```
Tab completion does not work as the path does not get converted to `\[dir]/` during variable expansion. Most other things seem to work fine:
- `:e $d/file.txt` opens the file
- `:e $d` opens netrw correctly
Changing the definition of `$d` to include `\` in the path fixes completion (and weirdly doesn't break `:e $d/file.txt` or `:e $d`), but obviously doesn't work when the variable is used outside neovim:
```bash
:q!
export d='\[dir]'
cd $d # error
nvim --clean
:e $d/<tab> # works
```
### Expected behavior
Wildcards are escaped during environment variable expansion. In the above example I would expect to see successful completion after hitting tab:
```
:e \[dir]/file.txt
```
### Nvim version (nvim -v)
v0.10.2
### Vim (not Nvim) behaves the same?
yes, vim 9.0
### Operating system/version
macOS 15.1.1
### Terminal name/version
wezterm 20240203-110809-5046fc22
### $TERM environment variable
xterm-256color
### Installation
homebrew | bug,bug-vim,completion,filesystem,cmdline-mode | low | Critical |
2,687,865,120 | storybook | [Bug]: npx sb init fails node lts (22.11.0) | ### Describe the bug
As title.
node --version
v22.11.0
Storybook version 8.4.5
### Reproduction link
-
### Reproduction steps
Use Node LTS (22.11.0)
npx sb init
Hangs indefinitely
### System
```bash
Can't. It hangs.
```
### Additional context
_No response_ | bug | low | Critical |
2,687,866,002 | go | cmd/go: TestScript/goline_order failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/goline_order"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730381456461888913)):
=== RUN TestScript/goline_order
=== PAUSE TestScript/goline_order
=== CONT TestScript/goline_order
script_test.go:139: 2024-11-24T14:29:31Z
script_test.go:141: $WORK=C:\Users\swarming\.swarming\w\ir\x\t\cmd-go-test-1453968191\tmpdir572423073\goline_order3281155740
script_test.go:163:
PATH=C:\Users\swarming\.swarming\w\ir\x\t\cmd-go-test-1453968191\tmpdir572423073\testbin;C:\Users\swarming\.swarming\w\ir\x\w\goroot\bin;C:\Users\swarming\.swarming\w\ir\x\w\goroot\bin;C:\Users\swarming\.swarming\w\ir\x\w\goroot\bin;C:\Users\swarming\.swarming\w\ir\cache\tools\bin;C:\Users\swarming\.swarming\w\ir\bbagent_utility_packages;C:\Users\swarming\.swarming\w\ir\bbagent_utility_packages\bin;C:\Users\swarming\.swarming\w\ir\cipd_bin_packages;C:\Users\swarming\.swarming\w\ir\cipd_bin_packages\bin;C:\Users\swarming\.swarming\w\ir\cipd_bin_packages\cpython3;C:\Users\swarming\.swarming\w\ir\cipd_bin_packages\cpython3\bin;C:\Users\swarming\.swarming\w\ir\cache\cipd_client;C:\Users\swarming\.swarming\w\ir\cache\cipd_client\bin;C:\Users\swarming\.swarming\cipd_cache\bin;C:\Python311\Scripts\;C:\Python311\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\ProgramData\chocolatey\bin;C:\Users\swarming\AppData\Local\Microsoft\WindowsApps;C:\Users\swarming\.swarming\w\ir\cache\tools\cc\windows\gcc64\bin
USERPROFILE=/no-home
CCACHE_DISABLE=1
GOARCH=arm64
...
m1
m
# go get -tags usem1 fixes the error. (0.103s)
> cp go.mod.orig go.mod
> go get -tags usem1
[stderr]
go: module ./m1 requires go >= 1.21.2; switching to go1.22.9
go: exec C:\Users\swarming\.swarming\w\ir\x\t\cmd-go-test-1453968191\tmpdir572423073\testbin\go.exe: fork/exec C:\Users\swarming\.swarming\w\ir\x\t\cmd-go-test-1453968191\tmpdir572423073\testbin\go.exe: The paging file is too small for this operation to complete.
script_test.go:163: FAIL: testdata\script\goline_order.txt:26: go get -tags usem1: exit status 1
--- FAIL: TestScript/goline_order (0.59s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,687,957,159 | go | debug/pe: TestDefaultLinkerDWARF failures | ```
#!watchflakes
default <- pkg == "debug/pe" && test == "TestDefaultLinkerDWARF"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730379045324304033)):
=== RUN TestDefaultLinkerDWARF
file_test.go:389: building test executable for linktype 1 failed: exit status 1 # command-line-arguments
C:\Users\swarming\.swarming\w\ir\x\w\goroot\pkg\tool\windows_arm64\link.exe: running gcc failed: exit status 1
C:\Users\swarming\.swarming\w\ir\cache\tools\cc\windows\gcc64\bin\gcc.exe -mconsole -Wl,--tsaware -Wl,--nxcompat -Wl,--major-os-version=6 -Wl,--minor-os-version=1 -Wl,--major-subsystem-version=6 -Wl,--minor-subsystem-version=1 -Wl,--dynamicbase -Wl,--high-entropy-va -o $WORK\b001\exe\a.out.exe -Qunused-arguments -Wl,--no-insert-timestamp C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\go.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000000.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000001.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000002.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000003.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000004.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000005.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000006.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000007.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000008.o C:\Users\swarming\.swarming\w\ir\x\t\go-link-2263877185\000009.o -O2 -g -O2 -g -Wl,--start-group -lmingwex -lmingw32 -Wl,--end-group -lkernel32
clang-14: error: linker command failed due to signal (use -v to see invocation)
--- FAIL: TestDefaultLinkerDWARF (6.25s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,687,957,183 | go | x/tools/internal/gocommand: unrecognized failures | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/internal/gocommand" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730377724509040097)):
FAIL golang.org/x/tools/internal/gocommand [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools | low | Critical |
2,687,973,667 | ollama | Instant closure when using shell input with piped output. | ### What is the issue?
When running `ollama run [model] | cat` or `ollama run [model] > [file]`, ollama now closes immediately and does not accept any manual input.
`ollama run [model]` still functions as expected.
While `cat | ollama run [model] ...` seems to be the workaround, it requires entering ^d to have the input processed, which closes the input stream and thus also ollama; that part is expected, since the input was explicitly closed.
Additionally, thanks to [pull 416](https://github.com/ollama/ollama/pull/416), this removes the ability to queue or follow up with further prompts.
This regression seems specific to 0.4.4.
After downgrading to 0.4.3 or 0.4.2, ollama functions as I expect.
I have not tested older versions and will stick to version 0.4.3 for the time being.
The purpose of the above syntax is to use the chat functionality to enter multiple prompts while processing the output with a further script.
Though this will be covered in a following feature request and is only tangentially related to this regression.
Thank you for your time, have a nice rest of your day!
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.4 | bug,needs more info | low | Major |
2,687,975,202 | godot | Touch-based navigation in the 3D editor has strange motion when zoomed out | ### Tested versions
- Reproducible in: 4.3.stable, 4.4.dev5
### System information
Godot v4.4.dev (fd6c4515f) - Android - Single-window, 1 monitor - OpenGL ES 3 (Compatibility) - Adreno (TM) 730 - (8 threads)
### Issue description
On Android, touch-based navigation in the 3D editor has its right stick motion act in a strange way depending on zoom. The more zoomed out you are, the more it will move around as you change the view direction. This results in "jelly"-looking motion.
Default zoom level:
https://github.com/user-attachments/assets/addffea8-2ab6-4f61-9216-847ab9a96b6a
When more zoomed out and going back to the origin using freelook:
https://github.com/user-attachments/assets/a57e5b10-34bb-4843-b1ee-9000e1108fda
This may occur because right stick motion doesn't actually enter freelook mode, but pans around the camera instead while compensating for the orbit position change manually (panning is smoothed by default, but orbiting isn't). If I set **Translation Inertia** to 0 in the Editor Settings, the problem goes away entirely.
### Steps to reproduce
- Use the right stick in a 3D scene in the Android editor to look around.
### Minimal reproduction project (MRP)
N/A | bug,platform:android,topic:editor,usability,topic:3d | low | Minor |
2,688,043,186 | go | cmd/compile/internal/devirtualize: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/devirtualize" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730373154373405105)):
FAIL cmd/compile/internal/devirtualize [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,688,043,199 | go | cmd/compile/internal/dwarfgen: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/dwarfgen" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730376055035370369)):
FAIL cmd/compile/internal/dwarfgen [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,688,043,218 | go | encoding/gob: TestLargeSlice/int8 failures | ```
#!watchflakes
default <- pkg == "encoding/gob" && test == "TestLargeSlice/int8"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730379045324304033)):
=== RUN TestLargeSlice/int8
=== PAUSE TestLargeSlice/int8
=== CONT TestLargeSlice/int8
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,688,101,249 | godot | Window "Warning Unable to load images" getting pseudolocalized | ### Tested versions
Reproducible: v4.4.dev5.official [9e6098432]
### System information
Godot v4.4.dev5 - Fedora Linux 41.20241124.0 (Silverblue) on Wayland - X11 display driver, Single-window, 1 monitor - OpenGL 3 (Compatibility) - AMD Radeon RX 570 Series (radeonsi, polaris10, LLVM 19.1.0, DRM 3.59, 6.11.8-300.fc41.x86_64) - Intel(R) Core(TM) i3-10100F CPU @ 3.60GHz (8 threads)
### Issue description
Same deal as #99582
When pseudolocalization is active, if you try to reimport a broken file (make a broken image file, choose "Reimport" in the context menu), a window with a warning will show up, with the only twist that it will be pseudolocalized.

### Steps to reproduce
1. Activate pseudolocalization in project settings.
2. Make broken image file.
3. Try to reimport it.
4. Warning window should show up, pseudolocalized.
### Minimal reproduction project (MRP)
No. | bug,topic:editor | low | Critical |
2,688,106,846 | godot | Using `--debug` on godot.exe (not console.exe) with bad/missing script causes project to fail loading and close editor without user feedback | ### Tested versions
Reproducible in Godot 4.3 Stable and Godot 4.4-dev5 with MRP
### System information
Godot v4.4.dev5 - Windows 6.2.9200 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 Ti Laptop GPU (NVIDIA; 32.0.15.6614) - 12th Gen Intel(R) Core(TM) i9-12900HK (20 threads)
### Issue description
Running godot.exe (instead of the console .exe) with --debug on a project that has script errors can render the project unloadable: the editor just closes after a long pause, with no user feedback.
This can cause a user to incorrectly suspect they have corrupted data or a project, causing them to waste time rebuilding assets by deleting their .godot cache folder, deleting their shader cache or other actions that will not reveal the cause.
Since I am not running with a console window, I would expect some sort of warning popup in the editor informing me that the debugger is triggering on a script error, instead of the editor pausing for a long time and then closing, which creates the impression of an editor crash.
### Steps to reproduce
- Run the godot.exe (not the console.exe) with --debug on the command line.
- Create a fresh project, and create a script with bad syntax that triggers an error.
- Put bad syntax at the top of the script, like 'class' instead of 'class_name', or disable an autoload script that would trigger other script errors, and save the project/script file.
- Close Godot.
- Open Godot again with --debug on the command line and try to open the project.
- The project will take a while to load, show a long pause and then close with no user feedback, implying corruption of the project or the editor.
### Minimal reproduction project (MRP)
[mrp_ProjectLoadFail.zip](https://github.com/user-attachments/files/17894186/mrp_ProjectLoadFail.zip)
| enhancement,platform:windows,discussion,topic:editor,usability | low | Critical |
2,688,119,451 | ant-design | Breadcrumb: please add back the original routes parameter | ### What problem does this feature solve?
I don't understand why the routes parameter had to be removed; it is a perfectly normal requirement and feature that many of our projects have been using just fine. In the 3.0-4.0 days the Breadcrumb.Item approach also worked well. I understand that you extended the component for different needs, but why does the path parameter get concatenated automatically? (Every one of our projects already builds the full path itself; I understand this may be a convenience for some users.) I know it can be handled with itemRender, but please don't make such a basic, original requirement this complicated. I just want basic route navigation.
### What does the proposed API look like?
Please add back the original routes parameter, or change path so that it is no longer concatenated automatically.
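For reference, the `itemRender` workaround mentioned above looks roughly like this (a sketch; the titles and hrefs are made-up examples):
```tsx
import { Breadcrumb } from 'antd';

// Every item already carries its full href, so no automatic `path` joining is wanted.
const items = [
  { title: 'Home', href: '/home' },
  { title: 'List', href: '/home/list' },
  { title: 'Detail' },
];

const App = () => (
  <Breadcrumb
    items={items}
    // Render the pre-built href directly instead of relying on concatenated paths.
    itemRender={(item) =>
      item.href ? <a href={item.href}>{item.title}</a> : <span>{item.title}</span>
    }
  />
);

export default App;
```
It works, but it is a lot of ceremony for what used to be a simple `routes` prop.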
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Minor |
2,688,129,485 | go | cmd/compile/internal/liveness: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/liveness" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730376055035370369)):
FAIL cmd/compile/internal/liveness [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,688,131,773 | material-ui | [docs-infra] Fix light/dark mode & RTL change speed regression | ### Steps to reproduce
The light/dark mode and RTL toggle is today x4 slower? than what good enough is at in production, and the RTL/LTR toggle in dev mode is x5-50 slower than what good enough is at? It used to be ok-ish:
## v4.12.4 - Dev: mode ~600ms, RTL: ~600ms, Prod: mode and RTL [~150ms](https://v4.mui.com/components/buttons/)
<img width="968" alt="v4 dark" src="https://github.com/user-attachments/assets/7a8f19cb-cf4a-412c-b97a-a7d7a37a2373">
## v5.0.6 - Dev: mode ~1,369ms, RTL, ~10,865ms, Prod: mode and RTL [~200ms](https://v5-0-6.mui.com/components/buttons/)
## v6.1.8 - Dev: mode ~2,253ms, RTL ~ 59,761ms, Prod: mode and RTL [~800ms](https://mui.com/material-ui/react-button/)
Material UI doesn't feel like a credible option for developers with this performance.
### Context
Same problem as #41117. Material UI is not usable like this, not truly. I mean, as a developer, this reinforces this story: I would only adopt Material UI for a new React project so that I can then contribute a fix to this. Otherwise, I wouldn't want to touch Material UI, there are better options out there nowadays, and it would only make sense to use the rich components of MUI anytime I can't find anything better.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
cc @romgrk as related to performance on styled engine. | bug 🐛,performance,priority: important,regression,scope: docs-infra | low | Major |
2,688,138,247 | godot | CPUParticles2D with scale max or min as a non-one number, align Y axis flag on, damping maximum set to a non-zero number, and gravity set to any value below the damping max causes particles to scale up exponentially | ### Tested versions
- Reproducible in 4.0.stable, 4.3.stable, 4.4.dev5
### System information
Windows 10.0.19045 - Vulkan (Forward+) - integrated Intel(R) Iris(R) Xe Graphics (Intel Corporation; 32.0.101.6078) - 13th Gen Intel(R) Core(TM) i5-1340P (16 Threads)
### Issue description
This is a very weird and niche bug I encountered when messing with particle systems. Leaving out any one of the settings mentioned in the title makes the problem disappear, but setting all of them as described causes the particles to unexpectedly and rapidly scale up to massive sizes, often larger than the furthest zoom-out can show. This can all be reproduced in the editor. I expected this setup to dampen the particles' initial velocity, slowing them from fast to stopped, and I had the minimum and maximum scale set higher than 1 to vary the particle size.
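For reference, the same setup done from code (a sketch, assuming Godot 4.x CPUParticles2D property names; the exact numbers only need to satisfy the conditions listed in the steps below):
```gdscript
# Sketch of the reproduction setup from a script attached to any 2D node.
var particles := CPUParticles2D.new()
particles.particle_flag_align_y = true   # Particle Flags > Align Y
particles.damping_max = 10.0             # non-zero damping maximum
particles.scale_amount_max = 2.0         # scale maximum above 1
particles.gravity = Vector2(0.0, 5.0)    # gravity below the damping maximum
add_child(particles)
particles.emitting = true
```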
### Steps to reproduce
1. Create a new scene
2. Add a CPUParticles2D node
3. Set Particle Flags > Align Y to on
4. Set damping maximum to a non-zero positive number
5. Set scale maximum to a positive number higher than one (or scale minimum to a positive number lower than one, this will cause particles to scale down exponentially, essentially the opposite effect as described in the title)
6. Set X or Y gravity to a px/s number lower than the damping maximum. (Setting both also has an odd effect)
### Minimal reproduction project (MRP)
[project.zip](https://github.com/user-attachments/files/17894381/project.zip)
| bug,topic:2d,topic:particles | low | Critical |
2,688,144,942 | godot | Android - InputEventScreenTouch is canceled when pressing with 2 fingers then dragging | ### Tested versions
-Reproducible in version: 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22621 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6603) - AMD Ryzen 9 3900X 12-Core Processor (24 Threads)
### Issue description
In a Camera3D node, I have attached a script to handle basic touch screen input for panning, zooming, and rotation. However, I encountered an issue with multi-touch gesture detection:
1. When I touch the screen with two fingers simultaneously and begin dragging with both fingers, the InputEventScreenDrag event is not called.
2. Upon further investigation, I observed that as soon as I start dragging, the InputEventScreenTouch event is triggered with event.canceled = true for both finger indices.
Interestingly, this behavior does not occur if I:
- Touch with one finger, drag, and then add a second finger. In this scenario, the InputEventScreenTouch events are not canceled, and multi-touch gestures work as expected.
**Expected Behavior**
Touch events should not be canceled when two fingers are placed on the screen simultaneously and begin dragging. This behavior disrupts the ability to seamlessly perform pinch-and-zoom gestures.
**Note:**
The "Enable Pan and Scale Gestures" Android project setting is enabled.
### Steps to reproduce
Create a Camera3D node and attached a script with the following code:
```gdscript
extends Camera3D
@export_range(0,10) var pan_speed: float = 1.0
@export var zoom_speed: float = 0.1
@export var rotation_speed: float = 1.0
@export var can_pan: bool
@export var can_zoom: bool
@export var can_rotate: bool
#To limit camera movement and zoom
@export var x_min: float = -10.0
@export var x_max: float = 10.0
@export var z_min: float = -10.0
@export var z_max: float = 10.0
@export var fov_min: float = 30
@export var fov_max: float = 60
var start_fov: float
var start_dist: float
var touch_points: Dictionary = {}
var start_angle: float
var current_angle: float
var button_states : Dictionary = {
    MOUSE_BUTTON_LEFT: false,
    MOUSE_BUTTON_RIGHT: false,
    MOUSE_BUTTON_WHEEL_DOWN: false,
    MOUSE_BUTTON_WHEEL_UP: false
}
var is_mobile := false
func _ready() -> void:
    start_fov = fov
    if OS.get_name() == "Android" or OS.get_name() == "iOS":
        is_mobile = true

func _input(event: InputEvent) -> void:
    if is_mobile:
        if event is InputEventScreenTouch:
            _handle_touch(event)
        elif event is InputEventScreenDrag:
            _handle_drag(event)
    else:
        if event is InputEventMouseButton:
            button_states[event.button_index] = event.pressed
            _handle_mouse_click(event)
        elif event is InputEventMouseMotion:
            _handle_mouse_drag(event)

func _handle_touch(event: InputEventScreenTouch) -> void:
    printt(event.pressed, event.index, event.canceled)
    if event.pressed:
        touch_points[event.index] = event.position
    else:
        touch_points.erase(event.index)
    if touch_points.size() == 2:
        var touch_point_positions = touch_points.values()
        start_dist = touch_point_positions[0].distance_to(touch_point_positions[1])
        start_angle = get_angle(touch_point_positions[0],touch_point_positions[1])
        start_fov = fov
    elif touch_points.size() < 2:
        start_dist = 0

func _handle_drag(event: InputEventScreenDrag):
    touch_points[event.index] = event.position
    if touch_points.size() == 1:
        if can_pan:
            h_offset -= event.relative.x * pan_speed / fov
            v_offset += event.relative.y * pan_speed / fov
    elif touch_points.size() == 2:
        var touch_point_positions = touch_points.values()
        var current_dist = touch_point_positions[0].distance_to(touch_point_positions[1])
        var current_angle = get_angle(touch_point_positions[0], touch_point_positions[1])
        var zoom_factor = start_dist / current_dist
        if can_zoom:
            fov = start_fov / zoom_factor
        if can_rotate:
            rotation.y -= (current_angle - start_angle) * rotation_speed
        start_angle = current_angle # Update the start_angle to the current_angle for the next drag event
        fov = clamp(fov,fov_min,fov_max)

func _handle_mouse_click(event: InputEventMouseButton) -> void:
    if button_states[MOUSE_BUTTON_WHEEL_DOWN]:
        fov = clamp(fov+2,fov_min,fov_max)
    elif button_states[MOUSE_BUTTON_WHEEL_UP]:
        fov = clamp(fov-2,fov_min,fov_max)

func _handle_mouse_drag(event: InputEventMouseMotion):
    if button_states[MOUSE_BUTTON_LEFT]:
        if can_pan:
            h_offset -= event.relative.x * pan_speed
            v_offset += event.relative.y * pan_speed

func get_angle(p1: Vector2, p2: Vector2) -> float:
    var delta = p2 - p1
    return fmod((atan2(delta.y, delta.x) + PI), (2 * PI))
```
### Minimal reproduction project (MRP)
N/A | bug,platform:android,topic:input | low | Minor |
2,688,196,555 | svelte | Using Vite inline styles causes hydration mismatch errors | ### Describe the bug
When using a CSS import with `?inline` in the URL and including it in the page using a `svelte:element` tag (to force a specific CSS file to be inlined in the page), a hydration error is created in Svelte 5 that did not exist in Svelte 4. This breaks the CSS, as the page hydration stops (the MRE doesn't significantly change as it's too minimal, but in the site I'm actually building, the styles mess up entirely!).
This appears to happen because the `svelte:element` node has no defined previous or next siblings.
### Reproduction
[See MRE repo](https://github.com/mashedkeyboard/test-hydration-inline-import) - it's essentially just this file:
https://github.com/mashedkeyboard/test-hydration-inline-import/blob/main/src/routes/%2Bpage.svelte
### Logs
No notable content in server console. Browser console as below:
```
Navigated to http://localhost:5173/
[vite] connecting... [client:495:8](http://localhost:5173/@vite/client)
[vite] connected. [client:614:14](http://localhost:5173/@vite/client)
Source map error: No sources are declared in this source map.
Resource URL: http://localhost:5173/node_modules/.vite/deps/chunk-UGBVNEQM.js?v=f8c6c012
Source Map URL: chunk-UGBVNEQM.js.map
Source map error: No sources are declared in this source map.
Resource URL: http://localhost:5173/node_modules/.vite/deps/svelte_legacy.js?v=f8c6c012
Source Map URL: svelte_legacy.js.map
Source map error: No sources are declared in this source map.
Resource URL: http://localhost:5173/node_modules/.vite/deps/svelte_internal_client.js?v=f8c6c012
Source Map URL: svelte_internal_client.js.map
Source map error: No sources are declared in this source map.
Resource URL: http://localhost:5173/node_modules/.vite/deps/svelte.js?v=f8c6c012
Source Map URL: svelte.js.map
[svelte] hydration_mismatch
Hydration failed because the initial UI does not match what was rendered on the server [client.js:2684:15](http://localhost:5173/node_modules/@sveltejs/kit/src/runtime/client/client.js?v=f8c6c012)
```
### System Info
```shell
System:
OS: Linux 6.1 Manjaro Linux
CPU: (12) x64 AMD Ryzen 5 3600X 6-Core Processor
Memory: 6.13 GB / 31.29 GB
Container: Yes
Shell: 3.7.1 - /bin/fish
Binaries:
Node: 18.20.4 - /bin/node
Yarn: 1.22.22 - /bin/yarn
npm: 10.8.3 - /bin/npm
pnpm: 8.7.5 - ~/.local/share/nvm/v18.12.0/bin/pnpm
Browsers:
Chrome: Linux
Chromium: 131.0.6778.69
npmPackages:
svelte: ^5.0.0 => 5.2.7
```
### Severity
blocking an upgrade | bug | low | Critical |
2,688,210,753 | vscode | Poor/pathological multi-cursor performance |
Type: <b>Performance Issue</b>
Using Windows-Alt-down shortcut (Command-Option?) on Mac to extend multi-cursor, after a dozen or so lines vs code totally freezes and must be killed (dialog prompts to reopen window), even though cursor limit is in the thousands (like, 10000).
Debugging, it seems that vs/editor/common/cursor/cursorMoveCommands.ts addCursorDown is generating duplicate cursors for each existing cursor for each down command. E.g. 1 -> 2 -> 4 -> 8 -> 16...after extending down to 4 cursors. I imagine this is easily reaching the cursor limit. It makes performing simple edits (delete indents) for more than just a handful of lines impossible.
https://github.com/microsoft/vscode/blob/8eb7fac5658846e35a0399dc65e9a0580d4e4ed7/src/vs/editor/common/cursor/cursorMoveCommands.ts#L17
```typescript
public static addCursorDown(viewModel: IViewModel, cursors: CursorState[], useLogicalLine: boolean): PartialCursorState[] {
    const result: PartialCursorState[] = [];
    let resultLen = 0;
    // multiple duplicate cursors added for each action?
    for (let i = 0, len = cursors.length; i < len; i++) {
        const cursor = cursors[i];
        result[resultLen++] = new CursorState(cursor.modelState, cursor.viewState);
        if (useLogicalLine) {
            result[resultLen++] = CursorState.fromModelState(MoveOperations.translateDown(viewModel.cursorConfig, viewModel.model, cursor.modelState));
        } else {
            result[resultLen++] = CursorState.fromViewState(MoveOperations.translateDown(viewModel.cursorConfig, viewModel, cursor.viewState));
        }
    }
    return result;
}
```



(the leaf calls here are to function `removeChild`)
VS Code version: Code 1.95.3 (Universal) (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Darwin x64 23.4.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (12 x 2600)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 3, 4|
|Memory (System)|32.00GB (1.84GB free)|
|Process Argv|--crash-reporter-id 129e9867-ec36-439b-a1f1-7c366a524771|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
7 197 22632 code main
0 66 22635 gpu-process
0 33 22636 utility-network-service
0 492 22638 window [1] (realm.ts — web)
0 131 22900 shared-process
0 0 23751 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
0 66 22901 fileWatcher [1]
0 295 23481 extensionHost [1]
0 98 23604 electron-nodejs (tsserver.js )
0 164 23605 electron-nodejs (tsserver.js )
0 66 23608 electron-nodejs (typingsInstaller.js typesMap.js )
0 131 23606 electron-nodejs (server.js )
0 66 23642 electron-nodejs (languageserver.js )
0 66 23643 electron-nodejs (languageserver.js )
0 131 23645 electron-nodejs (index.js )
0 33 23656 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Plugin).app/Contents/MacOS/Code Helper (Plugin) /Applications/Visual Studio Code.app/Contents/Resources/app/extensions/json-language-features/server/dist/node/jsonServerMain --node-ipc --clientProcessId=23481
0 131 23679 electron-nodejs (eslintServer.js )
0 131 23724 electron-nodejs (main-bundle.js )
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (realm.ts — web)
| Folder (web): 13737 files
| File types: json(4490) txt(2175) js(514) ts(404) properties(248)
| png(220) svg(200) dmg(180) map(179) sql(171)
| Conf files: dockerfile(41) package.json(31) tsconfig.json(17)
| makefile(6) settings.json(4) launch.json(1)
| gulp.js(1)
| Launch Configs: pwa-node(4);
```
</details>
<details><summary>Extensions (49)</summary>
Extension|Author (truncated)|Version
---|---|---
rust-bundle|1Yi|1.0.0
Bookmarks|ale|13.5.0
tsl-problem-matcher|amo|0.6.2
atlascode|atl|3.0.13
markdown-mermaid|bie|1.27.0
vscode-svgviewer|css|2.0.0
vscode-eslint|dba|3.0.10
javascript-ejs-support|Dig|1.3.3
rust-syntax|dus|0.6.1
gitlens|eam|16.0.3
shell-format|fox|7.2.5
gitlab-workflow|Git|5.18.1
vscode-graphql|Gra|0.12.1
vscode-graphql-execution|Gra|0.3.1
vscode-graphql-syntax|Gra|1.3.8
vscode-mocha-test-adapter|hbe|2.14.1
vscode-test-explorer|hbe|2.22.1
rainbow-csv|mec|3.13.0
vscode-docker|ms-|1.29.3
debugpy|ms-|2024.12.0
isort|ms-|2023.10.1
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.3
jupyter|ms-|2024.10.0
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.115.1
remote-ssh-edit|ms-|0.87.0
live-server|ms-|0.4.15
makefile-tools|ms-|0.11.13
remote-explorer|ms-|0.4.3
test-adapter-converter|ms-|0.2.1
vscode-js-profile-flame|ms-|1.0.9
sqltools|mtx|0.28.3
sqltools-driver-pg|mtx|0.5.4
java|red|1.36.0
vscode-xml|red|0.27.1
vscode-yaml|red|1.15.0
vscode-coverage-gutters|rya|2.12.0
semanticdiff|sem|0.9.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
quokka-vscode|Wal|1.0.667
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
9c06g630:31013171
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31185841
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | bug,freeze-slow-crash-leak,editor-multicursor | low | Critical |
2,688,293,278 | tauri | [bug] plugin: event.emit_to freezing app - because it's never get a response | ### Describe the bug
I have a APP with a Main WebViewWindow and a Backend WebViewWindow (hidden).
When ever I send a emit_to from or to the Backend the Main WebViewWindow will unresponsive.



### Reproduction
The main window sends a job to the backend window, which then reports its progress, sometimes very quickly, roughly as sketched below.
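A rough sketch of that pattern (window labels, event name and payload are placeholders, not the real app's names):
```ts
import { emitTo, listen } from '@tauri-apps/api/event';

// In the hidden "backend" window: report job progress to the main window,
// potentially many times per second.
async function runJob() {
  for (let i = 0; i <= 100; i++) {
    await emitTo('main', 'job-progress', { percent: i });
  }
}

// In the "main" window: listen for the progress updates.
const unlisten = await listen('job-progress', (event) => {
  console.log('progress', event.payload);
});
```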
### Expected behavior
No freezing WebViewWindows....
### Full `tauri info` output
```text
pnpm run tauri info
> [email protected] tauri C:\data\DeepRsNest
> tauri "info"
WARNING: Only one package manager should be used, but found bun and pnpm.
Please remove unused package manager lock files, will use bun for now!
[✔] Environment
- OS: Windows 10.0.26100 x86_64 (X64)
✔ WebView2: 131.0.2903.63
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.18.0
- pnpm: 9.13.2
- npm: 10.9.0
- bun: 1.1.34
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-stronghold 🦀: 2.0.1
- @tauri-apps/plugin-stronghold : 2.0.0
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : 2.0.0
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process : 2.0.0
- tauri-plugin-upload 🦀: 2.1.0
- @tauri-apps/plugin-upload : 2.1.0
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : 2.0.2
- tauri-plugin-store 🦀: 2.1.0
- @tauri-apps/plugin-store : 2.1.0
- tauri-plugin-dialog 🦀: 2.0.3
- @tauri-apps/plugin-dialog : 2.0.1
- tauri-plugin-log 🦀: 2.0.2
- @tauri-apps/plugin-log : 2.0.0
- tauri-plugin-http 🦀: 2.0.3
- @tauri-apps/plugin-http : 2.0.1
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
- tauri-plugin-single-instance 🦀: 2.0.1
- @tauri-apps/plugin-single-instance : not installed!
[-] App
- build-type: bundle
- CSP: default-src blob: data: filesystem: ws: wss: http: https: tauri: 'unsafe-eval' 'unsafe-inline' 'self' img-src: 'self'; connect-src ipc: http://ipc.localhost
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: SolidJS
- bundler: Vite
```
### Stack trace
```text
no stacktrace available in console....
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,688,304,621 | ollama | Feature request : Do not update ollama when OS is on limited data connextion | Ollama is a big software ( almost 1GB ) , when windows is on a limited internet connection ( I dont know if same option is available on other OSs) , do not update , or at least prompt for updating. | feature request | low | Minor |
2,688,372,778 | godot | Using change_scene_to_file during a Node's _init causes the previous scene's Nodes to be created in the new scene | ### Tested versions
4.3.dev5
### System information
Godot v4.4.dev5 - Windows 10.0.26100 - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - AMD Radeon RX 6700 XT (Advanced Micro Devices, Inc.; 32.0.12019.1028) - Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz (6 threads)
### Issue description
Changing the current scene in a node's _init will make the engine create the previous scene's remaining Nodes in the new scene that was changed to.
https://imgur.com/a/h2MsqV9
### Steps to reproduce
See Video
1. Create a scene with sub-nodes and a script that changes the current scene in _init
2. Observe issue
### Minimal reproduction project (MRP)
[99651.zip](https://github.com/user-attachments/files/17952724/99651.zip)
| bug,discussion,topic:core,needs testing | low | Minor |
2,688,378,335 | neovim | allow zero-width WinSeparator, for extra horizontal space | ### Problem
On my laptop I never have enough horizontal width. I'd like to remove vertical separator character used for `:vsplit`.
I'd imagine this might be useful for other separators, but I just care about `vert`.
I tried setting it to nothing with:
```
:set fillchars+=vert:
```
but get the error:
```
E1511: Wrong number of characters for field "vert:" fillchars+=vert:
```
### Expected behavior
Allow removing the separator with something like `:set fillchars+=vert:`. Or maybe `:set fillchars+=vert:<NULL>` to disambiguate it from being unset and/or the default value. | enhancement,ui,display,options | low | Critical |
2,688,457,330 | go | proposal: utf8: add ErrInvalid | ### Proposal Details
I propose the addition of the following sentinel error to the `utf8` package:
```go
// ErrInvalid indicates that invalid UTF-8 was encountered within a text string.
// Packages that return this error should usually wrap this error value within
// some local error type to provide further semantic context regarding where
// this error occurred.
var ErrInvalid = errors.New("invalid UTF-8")
```
Many higher-level formats are built on top of UTF-8 (e.g., XML, JSON, protobuf, etc.) where encountering a UTF-8 encoding problem is a possibility. In many cases such an error is not fatal and processing can still continue, such that a function that returns a typical `(T, error)` result may provide a sensible value while also returning an error that matches `utf8.ErrInvalid`. Even if it is fatal, it is often useful for metrics reporting purposes to specially identify invalid UTF-8 as a particular class of errors.
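For illustration, here is a sketch of how a caller could wrap and classify the proposed sentinel (this assumes the proposed `utf8.ErrInvalid` exists; the wrapping message is just an example):
```go
package main

import (
	"errors"
	"fmt"
	"unicode/utf8"
)

// parseName validates that a field is UTF-8 and wraps the proposed sentinel
// so that callers can classify the failure with errors.Is.
func parseName(b []byte) (string, error) {
	if !utf8.Valid(b) {
		return "", fmt.Errorf("parsing name field: %w", utf8.ErrInvalid)
	}
	return string(b), nil
}

func main() {
	_, err := parseName([]byte{0xff, 0xfe})
	if errors.Is(err, utf8.ErrInvalid) {
		fmt.Println("invalid UTF-8:", err)
	}
}
```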
This could replace internal error values used by the "encoding/json/v2" prototype and also within the protobuf module (e.g., golang/protobuf#1228). | Proposal | low | Critical |
2,688,461,695 | rust | ICE: `item_name: no name for DefPath` | <!--
[31mICE[0m: Rustc ./a.rs '' 'error: internal compiler error: compiler/rustc_middle/src/ty/mod.rs:1594:13: item_name: no name for DefPath { data: [DisambiguatedDefPathData { data: ValueNs("a"), disambiguator: 0 }, DisambiguatedDefPathData { data: TypeNs(""), disambiguator: 0 }], krate: crate0 }', 'error: internal compiler error: compiler/rustc_middle/src/ty/mod.rs:1594:13: item_name: no name for DefPath { data: [DisambiguatedDefPathData { data: ValueNs("a"), disambiguator: 0 }, DisambiguatedDefPathData { data: TypeNs(""), disambiguator: 0 }], krate: crate0 }'
File: /tmp/im2/a.rs
-->
snippet:
````rust
fn a(
_: impl Iterator<
Item = [(); {
match *todo!() { ! };
}],
>,
) {
}
````
Version information
````
rustc 1.85.0-nightly (481b5fadd 2024-11-24)
binary: rustc
commit-hash: 481b5fadd7994d0f04e9a6fe9ded3f22d6753825
commit-date: 2024-11-24
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/481b5fadd7994d0f04e9a6fe9ded3f22d6753825/compiler/rustc_middle/src/ty/mod.rs#L1588-L1600
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0658]: `!` patterns are experimental
--> /tmp/icemaker_global_tempdir.RFI67OtIoUCV/rustc_testrunner_tmpdir_reporting.BmpfyXLeFAgh/mvce.rs:4:37
|
4 | match *todo!() { ! };
| ^
|
= note: see issue #118155 <https://github.com/rust-lang/rust/issues/118155> for more information
= help: add `#![feature(never_patterns)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-24; consider upgrading it if it is out of date
error: internal compiler error: compiler/rustc_middle/src/ty/mod.rs:1594:13: item_name: no name for DefPath { data: [DisambiguatedDefPathData { data: ValueNs("a"), disambiguator: 0 }, DisambiguatedDefPathData { data: TypeNs(""), disambiguator: 0 }], krate: crate0 }
thread 'rustc' panicked at compiler/rustc_middle/src/ty/mod.rs:1594:13:
Box<dyn Any>
stack backtrace:
0: 0x7fada44bf8fa - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h99e2445636bcb92d
1: 0x7fada4c24d7c - core::fmt::write::h812019335d70dce0
2: 0x7fada6014b11 - std::io::Write::write_fmt::hcf100a634041ee1b
3: 0x7fada44bf752 - std::sys::backtrace::BacktraceLock::print::hcac021922af4bfb2
4: 0x7fada44c1c2a - std::panicking::default_hook::{{closure}}::h38cfa96ac70267dc
5: 0x7fada44c1a90 - std::panicking::default_hook::hb27c01f91a1d7781
6: 0x7fada35392a5 - std[1f3ac3280189331a]::panicking::update_hook::<alloc[4c85ded2b12904e4]::boxed::Box<rustc_driver_impl[23521148f3a5f4e2]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7fada44c2308 - std::panicking::rust_panic_with_hook::ha0a555d8ce7b1086
8: 0x7fada3573db1 - std[1f3ac3280189331a]::panicking::begin_panic::<rustc_errors[ccc3cc1d084bad89]::ExplicitBug>::{closure#0}
9: 0x7fada3566d56 - std[1f3ac3280189331a]::sys::backtrace::__rust_end_short_backtrace::<std[1f3ac3280189331a]::panicking::begin_panic<rustc_errors[ccc3cc1d084bad89]::ExplicitBug>::{closure#0}, !>
10: 0x7fada3562329 - std[1f3ac3280189331a]::panicking::begin_panic::<rustc_errors[ccc3cc1d084bad89]::ExplicitBug>
11: 0x7fada357dce1 - <rustc_errors[ccc3cc1d084bad89]::diagnostic::BugAbort as rustc_errors[ccc3cc1d084bad89]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7fada3bf4f03 - rustc_middle[b432c06521885074]::util::bug::opt_span_bug_fmt::<rustc_span[a0cbfa4ee9ea560a]::span_encoding::Span>::{closure#0}
13: 0x7fada3bdd18a - rustc_middle[b432c06521885074]::ty::context::tls::with_opt::<rustc_middle[b432c06521885074]::util::bug::opt_span_bug_fmt<rustc_span[a0cbfa4ee9ea560a]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x7fada3bdd01b - rustc_middle[b432c06521885074]::ty::context::tls::with_context_opt::<rustc_middle[b432c06521885074]::ty::context::tls::with_opt<rustc_middle[b432c06521885074]::util::bug::opt_span_bug_fmt<rustc_span[a0cbfa4ee9ea560a]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7fada1d08050 - rustc_middle[b432c06521885074]::util::bug::bug_fmt
16: 0x7fada501112f - <rustc_ast_lowering[79467ad6dbcc4c8f]::LoweringContext>::lower_ty_direct
17: 0x7fada500b9d8 - <rustc_ast_lowering[79467ad6dbcc4c8f]::LoweringContext>::lower_fn_decl
18: 0x7fada5ce742d - <rustc_ast_lowering[79467ad6dbcc4c8f]::LoweringContext>::lower_item_kind
19: 0x7fada4fd1c3d - <rustc_ast_lowering[79467ad6dbcc4c8f]::item::ItemLowerer>::lower_node
20: 0x7fada4fd0990 - rustc_ast_lowering[79467ad6dbcc4c8f]::lower_to_hir
21: 0x7fada5bdebb8 - rustc_query_impl[b696f309ae195481]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[b696f309ae195481]::query_impl::hir_crate::dynamic_query::{closure#2}::{closure#0}, rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 8usize]>>
22: 0x7fada5bddb35 - rustc_query_system[62b31e2f5f52cd09]::query::plumbing::try_execute_query::<rustc_query_impl[b696f309ae195481]::DynamicConfig<rustc_query_system[62b31e2f5f52cd09]::query::caches::SingleCache<rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[b696f309ae195481]::plumbing::QueryCtxt, false>
23: 0x7fada5bdd6dd - rustc_query_impl[b696f309ae195481]::query_impl::hir_crate::get_query_non_incr::__rust_end_short_backtrace
24: 0x7fada4e36bc2 - rustc_query_impl[b696f309ae195481]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[b696f309ae195481]::query_impl::hir_attrs::dynamic_query::{closure#2}::{closure#0}, rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 8usize]>>
25: 0x7fada4e361b7 - rustc_query_system[62b31e2f5f52cd09]::query::plumbing::try_execute_query::<rustc_query_impl[b696f309ae195481]::DynamicConfig<rustc_data_structures[6ee81a484f31d219]::vec_cache::VecCache<rustc_hir[8382c86485cd6313]::hir_id::OwnerId, rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 8usize]>, rustc_query_system[62b31e2f5f52cd09]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[b696f309ae195481]::plumbing::QueryCtxt, false>
26: 0x7fada4e35f0f - rustc_query_impl[b696f309ae195481]::query_impl::hir_attrs::get_query_non_incr::__rust_end_short_backtrace
27: 0x7fada50f6843 - <rustc_middle[b432c06521885074]::hir::map::Map>::attrs
28: 0x7fada5d86200 - rustc_passes[f810fb5d3a359db7]::entry::entry_fn
29: 0x7fada5d86198 - rustc_query_impl[b696f309ae195481]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[b696f309ae195481]::query_impl::entry_fn::dynamic_query::{closure#2}::{closure#0}, rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 12usize]>>
30: 0x7fada5d86171 - <rustc_query_impl[b696f309ae195481]::query_impl::entry_fn::dynamic_query::{closure#2} as core[6532f1aa9b91b5ff]::ops::function::FnOnce<(rustc_middle[b432c06521885074]::ty::context::TyCtxt, ())>>::call_once
31: 0x7fada5d85b09 - rustc_query_system[62b31e2f5f52cd09]::query::plumbing::try_execute_query::<rustc_query_impl[b696f309ae195481]::DynamicConfig<rustc_query_system[62b31e2f5f52cd09]::query::caches::SingleCache<rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 12usize]>>, false, false, false>, rustc_query_impl[b696f309ae195481]::plumbing::QueryCtxt, false>
32: 0x7fada5d858a8 - rustc_query_impl[b696f309ae195481]::query_impl::entry_fn::get_query_non_incr::__rust_end_short_backtrace
33: 0x7fada55a257f - rustc_interface[8ffc5b6c08143df7]::passes::run_required_analyses
34: 0x7fada5596a9e - rustc_interface[8ffc5b6c08143df7]::passes::analysis
35: 0x7fada5596a6f - rustc_query_impl[b696f309ae195481]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[b696f309ae195481]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 1usize]>>
36: 0x7fada5bdb3ee - rustc_query_system[62b31e2f5f52cd09]::query::plumbing::try_execute_query::<rustc_query_impl[b696f309ae195481]::DynamicConfig<rustc_query_system[62b31e2f5f52cd09]::query::caches::SingleCache<rustc_middle[b432c06521885074]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[b696f309ae195481]::plumbing::QueryCtxt, false>
37: 0x7fada5bdb0ce - rustc_query_impl[b696f309ae195481]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
38: 0x7fada5aeb9d8 - rustc_interface[8ffc5b6c08143df7]::interface::run_compiler::<core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>, rustc_driver_impl[23521148f3a5f4e2]::run_compiler::{closure#0}>::{closure#1}
39: 0x7fada5b66820 - std[1f3ac3280189331a]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8ffc5b6c08143df7]::util::run_in_thread_with_globals<rustc_interface[8ffc5b6c08143df7]::util::run_in_thread_pool_with_globals<rustc_interface[8ffc5b6c08143df7]::interface::run_compiler<core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>, rustc_driver_impl[23521148f3a5f4e2]::run_compiler::{closure#0}>::{closure#1}, core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>>::{closure#0}, core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>>
40: 0x7fada5b6653d - <<std[1f3ac3280189331a]::thread::Builder>::spawn_unchecked_<rustc_interface[8ffc5b6c08143df7]::util::run_in_thread_with_globals<rustc_interface[8ffc5b6c08143df7]::util::run_in_thread_pool_with_globals<rustc_interface[8ffc5b6c08143df7]::interface::run_compiler<core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>, rustc_driver_impl[23521148f3a5f4e2]::run_compiler::{closure#0}>::{closure#1}, core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>>::{closure#0}, core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[6532f1aa9b91b5ff]::result::Result<(), rustc_span[a0cbfa4ee9ea560a]::ErrorGuaranteed>>::{closure#1} as core[6532f1aa9b91b5ff]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
41: 0x7fada5b65cf9 - std::sys::pal::unix::thread::Thread::new::thread_start::h8f9ff78050bb9448
42: 0x7fada735139d - <unknown>
43: 0x7fada73d649c - <unknown>
44: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (481b5fadd 2024-11-24) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [hir_crate] getting the crate HIR
#1 [hir_attrs] getting HIR owner attributes in ``
end of query stack
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0658`.
```
</p>
</details>
<!--
query stack:
#0 [hir_crate] getting the crate HIR
#1 [hir_attrs] getting HIR owner attributes in ``
-->
| I-ICE,T-compiler,C-bug,S-bug-has-test | low | Critical |
2,688,554,310 | react | [Compiler Bug]: ESLint Compiler rules do not allow writing to external variable from effect-called function | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhASwLYAcIwC4AEwBUYCAogGaUJyEC+BlMEGBAOiDAgIZ2cBudgDsRCAB458BACYJKPKABtClKMLpoIwggEEAFAEoiIgiTJUadfUYIBeAHwmdZgjyNCX9ADQEA2gC6hp6mBAD0YQQAkpQEAJ7QBBgQAG4IBHgAFmhgTOqa2gQARvK46ZykFNS0eJy+WekIMCwwBADmEAi5PADuPHEAdKFqGnhaOu7GwKFmPWjCMhA9A+L2HCAAjIKh9CEu3HiwOgA8MmgpDsdhZxee9CIg9EA
### Repro steps
1. Run ESLint with eslint-plugin-react-compiler with the Playground's link
2. Observe error "InvalidReact: Writing to a variable defined outside a component or hook is not allowed. Consider using an effect"
I assume functions called from effects should have the same liberties as the effects themselves, else we are forced to write big effects. I used `window` as the external variable as it represents a common use case, but any other name would work.
Notice the error only happens when the function which writes to `window` is defined after the `useEffect` which calls it. Invert the order and it's fine.
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
eslint-plugin-react-compiler@npm:19.0.0-beta-0dec889-20241115 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,688,581,053 | ollama | Tool calling parsing for llama3.2 | ### What is the issue?
Llama 3.2 tool call outputs [are not in JSON](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/text_prompt_format.md) and so Ollama's tool parsing needs to be updated
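For illustration (the strings below are my paraphrase of the linked Llama 3.2 prompt-format document, not verbatim model output), the lightweight models emit pythonic call syntax rather than the JSON object that the current parsing assumes:
```
# Llama 3.2 1B/3B style tool call (not JSON):
[get_weather(city="San Francisco", unit="celsius")]

# JSON-style tool call that existing parsing expects:
{"name": "get_weather", "parameters": {"city": "San Francisco", "unit": "celsius"}}
```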
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug | low | Minor |
2,688,584,495 | go | os/signal: sending SIGTSTP doesn't return to shell when non-SIGTSTP handler exists (OpenBSD) | ### Go version
go version go1.20.1 openbsd/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/peter/.cache/go-build"
GOENV="/home/peter/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="openbsd"
GOINSECURE=""
GOMODCACHE="/home/peter/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="openbsd"
GOPATH="/home/peter/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/openbsd_amd64"
GOVCS=""
GOVERSION="go1.20.1"
GCCGO="gccgo"
GOAMD64="v1"
AR="ar"
CC="cc"
CXX="c++"
CGO_ENABLED="1"
GOMOD="/dev/null"
GOWORK=""
CGO_CFLAGS="-O2 -g"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-O2 -g"
CGO_FFLAGS="-O2 -g"
CGO_LDFLAGS="-O2 -g"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build1289460440=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
```
package main
import (
    "os"
    "os/signal"
    "syscall"
    "time"
)
func main() {
    c := make(chan os.Signal, 1)
    signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
    go func() {
        <-c
        os.Exit(1)
    }()
    for {
        time.Sleep(time.Second)
    }
}
```
go build test.go
./test
Hit Ctrl-Z
### What did you see happen?
<Doesn't return to shell as expected>
### What did you expect to see?
```
^Z[1] + Suspended ./test
me@mymachine:~$
``` | OS-OpenBSD,NeedsInvestigation,compiler/runtime | low | Critical |
2,688,589,147 | godot | `get_line_count()` is wrong when characters are hidden with "Characters After Shaping" behavior | ### Tested versions
4.3
4.4 dev5
### System information
Windows 10.0.19045 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 threads)
### Issue description
I'm using RichTextLabel for my dialogue system. I rely on `get_line_count()` returning only lines not hidden by `visible_characters`.
However unfolding `visible_characters` does not work that well with autowrap. The word will start appearing at the end of line and then jump to the next line as it gets wrapped. It looks very jarring and many people reported this problem when playtesting my game.
The problem is that the only way to fix it is changing `visible_characters_behavior` from the default Characters Before Shaping to Characters After Shaping. This makes `get_line_count()` always return all lines, even if hidden by `visible_characters`.
Though while testing this, I discovered another weird thing about `get_line_count()` - the `visible_characters` interaction only works if lines end with `\r`. If lines end with `\n`, the behavior is consistent between `visible_characters_behavior`, i.e. it always returns all lines 🙃
### Steps to reproduce
```GDScript
extends RichTextLabel
func _ready() -> void:
    for i in 10:
        append_text("line of text\r")
func _process(delta: float) -> void:
    visible_characters += 1
    prints(get_line_count())
```
Run this scripts. You will see 1, 2, 3... 10 being printed. If you change `visible_characters_behavior`, it will always print 10.
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Minor |
2,688,638,555 | godot | VR right eye not rendering sky properly | ### Tested versions
4.3 stable mono, 4.4 dev5 mono
### System information
Godot v4.4.dev5.mono - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Mobile) - dedicated Radeon RX 580 Series (Advanced Micro Devices, Inc.; 31.0.21921.1000) - AMD Ryzen 5 1600 Six-Core Processor (12 threads)
### Issue description
This issue seems to have a few open tickets which have led me on a crumb trail to #99448 though I'm not sure if this is related.
When using WorldEnvironment with Background set to Clear Color and defaults, AND Glow enabled, the right eye for my VR camera will not be able to render the background properly. This also happens with the Sky setting, but upon setting a material to it it will render properly.
### Steps to reproduce
- Set up VR cam and stuff
- Create WorldEnvironment
- Set Glow to enabled
- ???
- non profit
### Minimal reproduction project (MRP)
N/A | bug,topic:rendering,topic:xr | low | Minor |
2,688,647,934 | angular | Migration to lazy-loaded routes not working if Routes are exported as default | ### Command
generate
### Is this a regression?
- [ ] Yes, this behavior used to work in the previous version
### The previous version in which this bug was not present was
this is a new migration schematic
### Description
using the new migration schematic
ng generate @angular/core:route-lazy-loading
results in 'Could not find any files to migrate under the path ...'
The Routes are exported as default, which causes the schematic to fail finding them.
### Minimal Reproduction
trying to migrate the following route.ts file:
```
export default [
  {
    path: '',
    component: SomeComponent,
    children: [
      { path: 'index', component: IndexComponent },
      { path: 'index/:name', component: IndexDetailComponent, resolve: { index: indexDetailResolver } }
    ]
  }
] as Routes;
```
using:
```
ng generate @angular/core:route-lazy-loading --path src/app/something
```
fails, because the migration schematic does not recognize the Routes, which are exported as default.
### Exception or Error
```text
```
### Your Environment
```text
"@angular/cli": "^19.0.1"
```
### Anything else relevant?
_No response_ | area: migrations | low | Critical |
2,688,656,965 | godot | slowing down when saving a script | ### Tested versions
version : 4.3
os : fedora
cpu : Intel® Core™ i3-4030U × 4
ram : 8,0 Gio
graphics : Intel® HD Graphics 4400
### System information
Godot v4.3.stable (77dcf97d8) - Fedora Linux 41 (Workstation Edition) - Wayland - GLES3 (Compatibility) - Mesa Intel(R) HD Graphics 4400 (HSW GT2) - Intel(R) Core(TM) i3-4030U CPU @ 1.90GHz (4 Threads)
### Issue description
When saving gdscript files, the window for choosing the backup location is very slow to navigate from one directory to another.
### Steps to reproduce
The problem appears in projects that contain many files and subdirectories.
### Minimal reproduction project (MRP)
Necessitates a big project | bug,topic:editor,performance | low | Major |
2,688,751,413 | go | cmd/compile, go/types2: for case sensitive mismatches in function/method/attribute lookups please suggest "but does have $<CASE_INSENSITIVE_EQUIVALENT>" | ### Go version
go version devel go1.24-2b33434287 Fri Nov 8 01:06:04 2024 +0000 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOAUTH='netrc'
GOBIN='/home/emmanuel/go/src/go.googlesource.com/go/bin'
GOCACHE='/home/emmanuel/.cache/go-build'
GOENV='/home/emmanuel/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/emmanuel/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/emmanuel/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/emmanuel/go/src/go.googlesource.com/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/emmanuel/go/src/go.googlesource.com/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='devel go1.24-2b33434287 Fri Nov 8 01:06:04 2024 +0000'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/emmanuel/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/emmanuel/go/src/github.com/googleapis/google-cloud-go/spanner/go.mod'
GOWORK='/home/emmanuel/go/src/github.com/googleapis/google-cloud-go/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build825742400=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
While working on a terse library with a large number of symbols/functions, where the casing of some symbol names at times defies expectations, I wrote code like this:
```go
// Add a unaryClientInterceptor and streamClientInterceptor.
reqIDInjector := new(requestIDHeaderInjector)
opts = append(opts,
    option.WithGRPCDialOption(grpc.WithchainStreamInterceptor(reqIDInjector.interceptStream)),
    option.WithGRPCDialOption(grpc.WithchainUnaryInterceptor(reqIDInjector.interceptUnary)),
)
```
to reference https://pkg.go.dev/google.golang.org/grpc#WithChainStreamInterceptor and https://pkg.go.dev/google.golang.org/grpc#WithChainUnaryInterceptor
### What did you see happen?
```shell
$ go test -run=TestRequestID
# cloud.google.com/go/spanner [cloud.google.com/go/spanner.test]
./client.go:414:35: undefined: grpc.WithchainStreamInterceptor
./client.go:415:35: undefined: grpc.WithchainUnaryInterceptor
FAIL cloud.google.com/go/spanner [build failed]
```
### What did you expect to see?
```shell
$ go test -run=TestRequestID
# cloud.google.com/go/spanner [cloud.google.com/go/spanner.test]
./client.go:414:35: undefined: grpc.WithchainStreamInterceptor, did you mean "grpc.WithChainStreamInterceptor"?
./client.go:415:35: undefined: grpc.WithchainUnaryInterceptor, did you mean "grpc.WithChainUnaryInterceptor"?
FAIL cloud.google.com/go/spanner [build failed]
```
If there is a case-insensitive match, please suggest it so that the user can trivially fix their code instead of
having to eyeball the identifier to spot the typo.
2,688,751,904 | pytorch | Inconsistent Bias Broadcasting Results in PyTorch GPU `F.linear(...)` and `F.conv2d(...)` Operations | The following code snippet demonstrates an issue with `torch.nn.functional.linear(...)` and `torch.nn.functional.conv2d(...)` when using automatic broadcasting on the GPU. Specifically, the outputs of two mathematically equivalent operations diverge when broadcasting is involved. While only `F.linear(...)` and `F.conv2d(...)` have been tested, this suggests that other operations utilizing broadcasting might exhibit similar inconsistencies.
```python
import torch
import torch.nn.functional as F
# Initialize weights, input, and bias on GPU
W = torch.randn(8192, 8192, device='cuda')
x = torch.randn(1, 8192, device='cuda')
bias = torch.randn(8192, device='cuda')
# Perform linear operation with bias
y1 = F.linear(x, W, bias)
# Perform linear operation without bias and add bias manually
y2 = F.linear(x, W, bias=None) + bias
assert torch.allclose(y1, y2), \
'Outputs are not equal! Max error: {}'.format(torch.max(torch.abs(y1 - y2)))
```
On the GPU (e.g., NVIDIA A10, CUDA 12.0), the assertion fails with a maximum error around 0.0018. On the CPU, `torch.allclose(...)` passes consistently.
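For completeness, here is a minimal sketch of the analogous `F.conv2d(...)` comparison (the tensor shapes and the `.view(1, -1, 1, 1)` broadcast are my own assumptions; the issue title and intro indicate conv2d shows the same behavior):
```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 16, 32, 32, device='cuda')  # NCHW input
W = torch.randn(64, 16, 3, 3, device='cuda')    # (out_channels, in_channels, kH, kW)
bias = torch.randn(64, device='cuda')           # 1D per-channel bias

# Convolution with the fused bias vs. adding the broadcast bias manually.
y1 = F.conv2d(x, W, bias)
y2 = F.conv2d(x, W, bias=None) + bias.view(1, -1, 1, 1)

print(torch.max(torch.abs(y1 - y2)))  # reportedly small but nonzero on GPU
```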
## Key Observations
- **GPU-Specific Issue**: The discrepancy occurs exclusively on GPU hardware.
- **Bias Shape Sensitivity**:
- Affected: When the bias is a 1D tensor (`bias.shape == (8192,)`), requiring automatic broadcasting.
- Unaffected: Reshaping the bias to match the input shape (e.g., `bias = torch.randn(1, 8192, device='cuda')`) eliminates the issue.
- **Non-Deterministic Errors**: The magnitude of the error varies unpredictably despite adhering to PyTorch’s [reproducibility guide](https://pytorch.org/docs/stable/notes/randomness.html).
- **Potential for Broader Impact**: Since broadcasting is a common operation in many PyTorch functions, similar discrepancies might arise in other GPU-accelerated scenarios beyond those mentioned in this issue regarding `F.linear(...)` and `F.conv2d(...)`.
## Workaround
To circumvent the issue, ensure that the bias tensor has a shape compatible with the input tensor, thereby avoiding automatic broadcasting:
```python
bias = torch.randn(1, 8192, device='cuda')
```
This adjustment ensures consistent and accurate results on both GPU and CPU.
## Environment Details
Configurations applied to enforce reproducibility (issue persists despite these settings):
```python
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"]=':4096:8'
import torch
torch.backends.cudnn.enabled = True
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.allow_tf32 = False
```
### Versions
PyTorch version: 2.2.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 30
On-line CPU(s) list: 0-29
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 30
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 2593.970
BogoMIPS: 5187.94
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 960 KiB
L1i cache: 960 KiB
L2 cache: 120 MiB
L3 cache: 480 MiB
NUMA node0 CPU(s): 0-29
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.2.2+cu121
[pip3] torchaudio==2.2.2+cu121
[pip3] torchvision==0.17.2+cu121
[pip3] triton==2.2.0
[conda] Could not collect
cc @csarofeen @ptrblck @xwang233 @eqy @msaroufim | needs reproduction,module: cudnn,module: cuda,triaged | low | Critical |
2,688,752,670 | TypeScript | Rethinking relationships between `{}` type, `object`, and primitives | ### 🔎 Search Terms
empty object type, `{}`, type safety violation, unsound
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about [object types](https://www.typescriptlang.org/docs/handbook/2/objects.html) and [structural typing](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes-func.html#structural-typing)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241124#code/GYVwdgxgLglg9mABFAhgawKYGcDyAjAKw2gAoAPALkTkOKgEpEBvAKEXcQHpPEAVACxhZEAJxgBzflAA2AT0RDEKPgGVEGESLgiAdGw4xgicogC85xAAZGrDncQQEWONIw6NWkSQDkAxXhEUSH5ELH44EGkAE0Q8DERwEQwUCH4UPFdvegBufXYAXxZClkcwLChqKiZ8sytclm5EACEQCqhBYSEwbygAGiUwGLCI6Ni3FlRMXFpSOBygA
### 💻 Code
```ts
function takesObject(x: object) {
  // This rightly is a TS error.
  if (x === 0) {
    console.error('This branch should be unreachable');
  }
}
const o: {} = 0;
// But this isn't, and should be.
takesObject(o);
```
### 🙁 Actual behavior
No error on `takesObject(o)`. `{}` can be assigned to type `object`, despite `{}` meaning "all nonnullish values", whereas `object` means only JS object types.
### 🙂 Expected behavior
Error on `takesObject(o)`. `{}` is a wider type than `object`.
### Additional information about the issue
This stems from a typescript-eslint investigation into making no-unnecessary-condition be more correct around possibly-falsy "object types", such as `{}`, or even `{ toFixed(): string}` (to which `number` may be assigned). See https://github.com/typescript-eslint/typescript-eslint/pull/10378. This also relates to previous (controversial) conversations about whether to flag the `{}` type with the linter, see, e.g. https://github.com/typescript-eslint/typescript-eslint/issues/8700.
I'm wondering if this was simply an oversight in https://github.com/microsoft/TypeScript/pull/49119, which aimed to fix soundness holes with `{}`? | Suggestion,Needs Proposal | medium | Critical |
2,688,788,176 | pytorch | [Break XPU] The change of default value `torch._dynamo.config.specialize_float` cause XPU Inductor UT failures. | ### 🐛 Describe the bug
We found ciflow/xpu failed in main branch:
[xpu / linux-jammy-xpu-2025_0-py3.9 / test (default, 3, 4, linux.idc.xpu)](https://hud.pytorch.org/pr/pytorch/pytorch/133080#33443591124) ([gh](https://github.com/pytorch/pytorch/actions/runs/11997297811/job/33443591124))
inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_maximize_xpu
[xpu / linux-jammy-xpu-2025_0-py3.9 / test (default, 4, 4, linux.idc.xpu)](https://hud.pytorch.org/pr/pytorch/pytorch/133080#33443591194) ([gh](https://github.com/pytorch/pytorch/actions/runs/11997297811/job/33443591194))
inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_xpu
Root cause:
Recently, PR #138922 changed the default value of `torch._dynamo.config.specialize_float` from `True` to `False`. This changed the Dynamo-traced graph for XPU, which in turn caused the generated kernel count to mismatch the expected count.
The affected part is the check that decides whether to create a constant float variable or a `symfloat`.
The code is:
https://github.com/pytorch/pytorch/blob/6ad04237580d2ef1cc57b18a59af3e3eb65afd49/torch/_dynamo/variables/builder.py#L1901-L1915
CUDA does not see this change because the check condition includes `or torch._inductor.config.triton.cudagraphs`, which makes it always create a constant float variable instead of a symbol.
I'll update the expected kernel count in the test case for XPU to resolve this CI break.
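As a stopgap while the expected kernel counts are updated, one could pin the previous default in a test or local run; this is only a sketch of my assumption about how to restore the old behavior, not a proposed fix:
```python
import torch

# Restore the pre-#138922 default so Python floats are specialized again
# (assumes an XPU-enabled build; the flag is the one named above).
torch._dynamo.config.specialize_float = True

@torch.compile
def scale(x, alpha):
    return x * alpha

out = scale(torch.randn(8, device="xpu"), 0.5)
```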
### Versions
PyTorch version: 2.6.0a0+git6a096a0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.4
Libc version: glibc-2.35
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease23.5.18-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,688,789,846 | svelte | Document Pattern for Importing Runes | ### Describe the problem
# Issue with Runes Documentation
Using runes in a component is documented nicely, but when you try to replicate those patterns for importing runes, the experience falls apart instantly and a new, undocumented pattern has to be used.
## State Example from Documentation
This is the $state example from the Svelte 5 runes docs:
https://svelte.dev/playground/6b854f5d5fc2465ea5d9306d13f51b85?version=5.2.7
## Importing with the Same Pattern
This is what happens when you take that example and try and define that state elsewhere and import it (even assuming you knew to call the file ".svelte.js"):
https://svelte.dev/playground/61c1a5769d99478dac3be0206a7ced1f?version=5.2.7
## Working Importing Runes Pattern
This is how you ACTUALLY have to import runes and (I had to discover this in Discord):
https://svelte.dev/playground/d25142f533d844b080dd6eb756ac0ba3?version=5.2.7
## Exampling using Stores
So, when imported, $state runes CANNOT hold a value themselves. This is counter to the example and also counter to how stores used to work:
https://svelte.dev/playground/3ca44f7cac3841a688ecad512d3b87df?version=5.2.7
As you can see the stores example allows my store to hold the actual value and I don't need to do something different like add a "value" property.
## To summarize
### Magic:
- Defining runes in a component
- Defining stores in a component or importing them
### Not Magic:
- Importing a rune
### Describe the proposed solution
# Suggestion
I don't have a problem with having to do something different when I import runes, that's not what this is about. I'm sure the Svelte 5 devs have good reasons they work this way and I'm not here to judge. I tend to always define my state in separate files, so personally I will only have 1 pattern to contend with in my own apps.
I'm raising this issue so that the runes documentation can hopefully get an update to include the pattern needed when importing runes. Maybe a new section after $host called "importing" that just illustrates what I have shown above.
If this is somewhere in the documentation then I could not find it by searching. The only official thing I was given in Discord was a link to some tutorial section that didn't even have `runes` or `$state` in the name which isn't good for discoverability. Furthermore a tutorial should be a walkthrough of the documentation, not the other way around.
### Alternatives considered
_No response_
### Importance
would make my life easier
### Additional Information
Coming from Svelte 4 and using stores extensively this was a departure and a bit of a roadblock for me. I would like other developers to be able to discover this pattern from the documentation and not from Discord like I had to. | documentation | low | Minor |
2,688,801,757 | pytorch | import torch failed after installing nightly wheel | ### 🐛 Describe the bug
import torch failed after installing nightly wheel
### Versions
CUDA Version: 12.4
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124

pip list

cc @seemethere @malfet @osalpekar @atalman | module: binaries,triaged | low | Critical |
2,688,847,988 | angular | [Feature Request] Subject Signal | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
I find myself constantly running into the situation that for a given reactive value, I need to use it both in rx and signal contexts.
For instance, let's say there is a `selectedItem` value representing an entity selected by the user. We would like to store it in a `WritableSignal` so something could be derived from it with `computed`; we also want to use it to retrieve some related data from the backend, so it would also be great for it to be an observable (to be `switchMap`-ed).
In the meantime, we just make the value a signal and spam `toObservable`s whenever we need it in rx. There are so many of them that I really consider it a pattern that should be addressed with better ergonomics.
### Proposed solution
We came up with a solution like this:
```typescript
import { Signal } from "@angular/core";
import { toSignal } from "@angular/core/rxjs-interop";
import { BehaviorSubject, Observable } from "rxjs";
/** A signal backed by a subject. */
export interface SubjectSignal<T> extends Signal<T> {
  observable: Observable<T>;
  update(updateFn: (value: T) => T): void;
  set(value: T): void;
}
/** Make a signal that is backed by a subject. */
export function subjectSignal<T>(initialValue: T): SubjectSignal<T> {
  const subject = new BehaviorSubject<T>(initialValue);
  const signal = toSignal(subject) as SubjectSignal<T>;
  signal.observable = subject.asObservable();
  signal.update = (updateFn: (value: T) => T) => {
    subject.next(updateFn(subject.value));
  };
  signal.set = (value: T) => {
    subject.next(value);
  };
  Object.freeze(signal);
  return signal;
}
```
So it's basically a wrapper to `toSignal`, but an observable could be retrieved out from it.
### Alternatives considered
Ideally we could have `SubjectSignal<T>` to be used both as a signal and as an observable:
```typescript
interface SubjectSignal<T> extends Observable<T>, Signal<T>;
```
but I'm not sure if that's possible, because `Signal<T>` has a callable signature and `Observable<T>` is a class. Anyway, we would like to see a more elegant solution.
2,688,924,412 | tensorflow | Cannot send mail to [email protected], the contact address shown on front page of github.com/tensorflow | ### Issue type
Documentation Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
irrelevant
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The front page of https://github.com/tensorflow lists [email protected] as a contact address:
<p align="center">
<img width="80%" alt="image" src="https://github.com/user-attachments/assets/a80e6161-1cde-4d10-b9a2-774f80294d15">
</p>
However, attempting to send mail to that address results in an error message being mailed back:
```
We're writing to let you know that the group you tried to
contact (github-admin) may not exist, or you may not have
permission to post messages to the group. A few more
details on why you weren't able to post:
* You might have spelled or formatted the group name incorrectly.
* The owner of the group may have removed this group.
* You may need to join the group before receiving permission to post.
* This group may not be open to posting.
If you have questions related to this or any other Google Group,
visit the Help Center at
https://support.google.com/a/tensorflow.org/bin/topic.py?topic=25838.
Thanks,
[tensorflow.org](http://tensorflow.org/) admins
```
### Standalone code to reproduce the issue
```shell
Send an email message to [email protected]
```
### Relevant log output
```shell
We're writing to let you know that the group you tried to
contact (github-admin) may not exist, or you may not have
permission to post messages to the group. A few more
details on why you weren't able to post:
* You might have spelled or formatted the group name incorrectly.
* The owner of the group may have removed this group.
* You may need to join the group before receiving permission to post.
* This group may not be open to posting.
If you have questions related to this or any other Google Group,
visit the Help Center at
https://support.google.com/a/tensorflow.org/bin/topic.py?topic=25838.
Thanks,
[tensorflow.org](http://tensorflow.org/) admins
```
| type:docs-bug | medium | Critical |
2,688,932,927 | TypeScript | Misleading error message "'{}' and 'number' have no overlap" | ### 🔎 Search Terms
empty object type, unintentional comparison
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about [object types](https://www.typescriptlang.org/docs/handbook/2/objects.html) and [structural typing](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes-func.html#structural-typing)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?noUncheckedIndexedAccess=true&target=99&ts=5.8.0-dev.20241124#code/JYMwBAFA3gvmC8iwAYCUZZA
### 💻 Code
```ts
if ({} === 0) {}
```
### 🙁 Actual behavior
TS 2367: "This comparison appears to be unintentional because the types '{}' and 'number' have no overlap."
### 🙂 Expected behavior
TS 2367: "This comparison appears to be unintentional because the types 'object' and 'number' have no overlap."
### Additional information about the issue
the type `{}` and `number` do have overlap; `number` is a subset of `{}`.
```ts
declare const n: number;
// allowed!
const emptyObjectType: {} = n;
```
TS is correct to report an error, but it should give a technically correct reason for the error.
Analogous (very contrived) examples can be made for similar coincidentally-including-primitive types, like
```ts
// This comparison appears to be unintentional because the types '{ toFixed(): string; toPrecision(): string; }' and 'number' have no overlap.
if ({ toFixed(): string { return "" }; toPrecision(): string { return "" }} === 0) {}
// yet this is allowed
const numbery: { toFixed(): string; toPrecision(): string; } = 0;
```
Stems from discord discussion beginning here: https://discord.com/channels/508357248330760243/508357638677856287/1310419206260654081
Closely related to discussion regarding https://github.com/microsoft/TypeScript/issues/60582
| Suggestion,Needs Proposal | low | Critical |
2,688,950,769 | pytorch | [sparse] `mul`: missing support with CSC arguments | ### 🐛 Describe the bug
I cannot calculate `a * a` when a is a CSC tensor:
```python
import torch
a = torch.randn(3,3).to_sparse_coo()
a * a
a = torch.randn(3,3).to_sparse_csr()
a * a
a = torch.randn(3,3).to_sparse_csc()
a * a
```
The error reports:
```
/home/hzhangxyz/Downloads/env/lib/python3.12/site-packages/torch/_subclasses/functional_tensor.py:295: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
/home/hzhangxyz/Downloads/test.py:6: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at ../aten/src/ATen/SparseCsrTensorImpl.cpp:53.)
a = torch.randn(3,3).to_sparse_csr()
Traceback (most recent call last):
File "/home/hzhangxyz/Downloads/test.py", line 10, in <module>
a * a
~~^~~
RuntimeError: Expected result Tensor to be of format CSR
```
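Until CSC support lands, a possible workaround (untested on my side; I'm assuming layout conversions are supported for these shapes) is to do the elementwise multiply in a layout that already works and convert the result back:
```python
import torch

a = torch.randn(3, 3).to_sparse_csc()

# Multiply via CSR, which works per the repro above, then convert back to CSC.
result_csc = (a.to_sparse_csr() * a.to_sparse_csr()).to_sparse_csc()

# COO also supports elementwise mul.
result_coo = a.to_sparse_coo() * a.to_sparse_coo()
```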
### Versions
/home/hzhangxyz/Downloads/env/lib/python3.12/site-packages/torch/_subclasses/functional_tensor.py:295: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.31.0
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.6.59-1-lts-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
架构: x86_64
CPU 运行模式: 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
字节序: Little Endian
CPU: 12
在线 CPU 列表: 0-11
厂商 ID: GenuineIntel
型号名称: 12th Gen Intel(R) Core(TM) i5-12400F
CPU 系列: 6
型号: 151
每个核的线程数: 2
每个座的核数: 6
座: 1
步进: 5
CPU(s) scaling MHz: 25%
CPU 最大 MHz: 4400.0000
CPU 最小 MHz: 800.0000
BogoMIPS: 4993.00
标记: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
虚拟化: VT-x
L1d 缓存: 288 KiB (6 instances)
L1i 缓存: 192 KiB (6 instances)
L2 缓存: 7.5 MiB (6 instances)
L3 缓存: 18 MiB (1 instance)
NUMA 节点: 1
NUMA 节点0 CPU: 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,triaged | low | Critical |
2,689,022,808 | PowerToys | Can't use browser search (Plugin Web Search) | ### Description of the new feature / enhancement
Allow customizing how browser exe is called
### Scenario when this would be used?
When search fails


### Supporting information
Plugins based on the web search feature also fail: https://github.com/Daydreamer-riri/PowerToys-Run-WebSearchShortcut/issues/29 | Product-PowerToys Run,Needs-Triage,Run-Plugin | low | Major |
2,689,181,640 | pytorch | How to use torch.compile + HF model? | ### 🐛 Describe the bug
Problem: There seem to be two ways of using torch.compile with a HF model, and neither works for all the ways model inference can be invoked, which is one of three possible methods: `generate()`, `forward()`, and `__call__()`.
## Option 1: `model = torch.compile(model)`
This works if we use either `forward()` or the `__call__()` methods. But, if we try to call the `.generate()` method (which is the more popular API for inferencing and calls `forward()` internally), we notice that we DON'T seem to be using the compiled model (ex. `TORCH_LOGS="dynamo"` gives no output).
Simple reproducible example (custom class with `generate` and `forward` like implementations):
```
import torch
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, input):
# return average of the inputs
return torch.Tensor(torch.sum(input)/len(input))
def generate(self, max_tokens, input):
for i in range(max_tokens):
output = self(input) # Doesn't work with either call or forward
input = torch.cat((input, output.view(1)))
return input
model = MyModule()
model = torch.compile(model)
input = torch.rand(4)
output = model.generate(input=input, max_tokens=3) # THIS DOES NOT WORK!!!
#output = model.forward(input=input) # THIS WORKS
```
or use any HF model compile followed by generate:
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = torch.compile(model)
output = model.generate(input_ids, max_new_tokens=100)  # input_ids prepared with the model's tokenizer
```
The problem is that the output of `torch.compile(model)` is an `OptimizedModule` object whose `__call__()` is set to the compiled forward and whose `orig_mod` is set to `model` itself.
When `compiled_model.generate()` is called, the lookup goes through `__getattr__()`, which returns the original model's `generate`. That `generate` calls `self()`, which invokes the original model's forward instead of the compiled forward.
## Option 2: `model.compile()`
The other option is to use `torch.nn.Module`'s own `compile()`, which modifies the module in place: the compiled forward is stored in the `_compiled_call_impl` attribute and used when `__call__()` is invoked. But this only works with the `__call__()` method and does NOT work with the `forward()` method. If `generate()` internally uses `__call__()`, then generate works.
```
model.compile()
output = model.generate(input=input, max_tokens=3) # Works
#output = model.forward(input_ids) # DOES NOT WORK
```
The problem is that neither of these approaches works with both the `generate()` and `forward()` methods.
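One interim pattern that appears to cover all three entry points (a sketch based on my assumption, not an officially documented recommendation) is to compile only the bound `forward`, so that `__call__()`, `forward()`, and `generate()` all dispatch to the same compiled callable:
```python
import torch

model = MyModule()
# Patch the bound forward with its compiled version instead of wrapping the module.
model.forward = torch.compile(model.forward)

inp = torch.rand(4)
out1 = model.generate(input=inp, max_tokens=3)  # generate -> self(...) -> patched forward
out2 = model.forward(input=inp)                 # direct forward call is compiled too
out3 = model(input=inp)                         # __call__ resolves self.forward to the patch
```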
As an aside, I tried a couple of unsuccessful possible fixes:
- Tried if Option 1 could be fixed somehow by setting the `orig_mod.forward` to the compiled forward but that causes infinite recursion because of the circular dependency
- I also tried changing `TorchDynamoContext.__call__()` (in `eval_frame.py`) in the nn.Module case to internally do `model.compile` instead of creating an OptimizedModule. This fixes things slightly, e.g. Option 1 works if its generate uses `__call__()` instead of `forward`, but it's obviously not a real solution.
cc: @chanderg
### Error logs
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241103+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7763 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3529.0520
CPU min MHz: 1500.0000
BogoMIPS: 4900.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.6.0
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.2
[pip3] optree==0.12.1
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241103+cu124
[pip3] torch_tensorrt==2.5.0a0
[pip3] torchaudio==2.5.0.dev20241103+cu124
[pip3] torchdata==0.9.0
[pip3] torchvision==0.20.0.dev20241103+cu124
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,689,212,455 | deno | `Deno.permissions.query({name:"imports"})` is not supported | Version: Deno 2.1.1
It's not possible to query `--allow-imports` descriptor currently
```ts
console.log(await Deno.permissions.query({name:"imports"}))
```
```
Uncaught TypeError: The provided value "imports" is not a valid permission name
at Permissions.querySync (ext:runtime/10_permissions.js:211:13)
at Permissions.query (ext:runtime/10_permissions.js:203:34)
at <anonymous>:1:57
```
Possibly just a quick fix to do in:
https://github.com/denoland/deno/blob/12b377247be2b74155ded3a678ff2996ef3d7c9f/runtime/js/10_permissions.js#L41-L49
Edit: the name should probably be `import` and not `imports`, to match https://docs.deno.com/api/deno/~/Deno.PermissionOptionsObject | bug,permissions | low | Critical |
2,689,281,025 | stable-diffusion-webui | [Bug]: ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE | ### Checklist
- [ ] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Upon pulling the repo and running webui.sh, it goes through launch.py as normal but then halts and throws this error: "ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE"
This also happens on Windows. I tried running `. ./venv/bin/activate` to see if it would help; it didn't. It just said that the expected sha256 doesn't match.
### Steps to reproduce the problem
run webui.sh
### What should have happened?
Stable Diffusion should run.
### What browsers do you use to access the UI ?
Brave
### Sysinfo
I can't get this due to the requirements not being met
### Console logs
```Shell
whitequill@abstractions stable-diffustion-webui% ./webui.sh November 25, 2024 12:50AM
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on whitequill user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.40
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib64/libtcmalloc_minimal.so.4
Python 3.10.15 (main, Nov 19 2024, 06:31:52) [GCC 13.3.1 20241024]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-linux_x86_64.whl (2200.7 MB)
Collecting torchvision==0.16.2
Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-linux_x86_64.whl (6.8 MB)
Collecting filelock (from torch==2.1.2)
Using cached filelock-3.16.1-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions (from torch==2.1.2)
Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.1.2)
Using cached sympy-1.13.3-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.1.2)
Using cached networkx-3.4.2-py3-none-any.whl.metadata (6.3 kB)
Collecting jinja2 (from torch==2.1.2)
Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.1.2)
Using cached fsspec-2024.10.0-py3-none-any.whl.metadata (11 kB)
Collecting triton==2.1.0 (from torch==2.1.2)
Using cached https://download.pytorch.org/whl/triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 MB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
triton==2.1.0 from https://download.pytorch.org/whl/triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl#sha256=66439923a30d5d48399b08a9eae10370f6c261a5ec864a64983bae63152d39d7 (from torch==2.1.2):
Expected sha256 66439923a30d5d48399b08a9eae10370f6c261a5ec864a64983bae63152d39d7
Got 0f7f3a9430ab9f70a90def6d6b383f63b0e0a5206739e8eb71ea5e5fe376128e
Traceback (most recent call last):
File "/home/whitequill/gitrepos/stable-diffustion-webui/launch.py", line 48, in <module>
main()
File "/home/whitequill/gitrepos/stable-diffustion-webui/launch.py", line 39, in main
prepare_environment()
File "/home/whitequill/gitrepos/stable-diffustion-webui/modules/launch_utils.py", line 381, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
File "/home/whitequill/gitrepos/stable-diffustion-webui/modules/launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "/home/whitequill/gitrepos/stable-diffustion-webui/venv/bin/python" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
```
### Additional information
_No response_ | bug-report | low | Critical |
2,689,299,821 | transformers | WSD Scheduler to auto infer training steps | ### Feature request
The WSD scheduler should calculate the stable steps in `trainer.py`. Also, if `num_warmup_steps` is provided in the kwargs, `schedule_func` should respect those kwargs.
My guess is that the intention is for the learning rate to decay to the minimum and stay there until the end of training, but since `min_lr_ratio` defaults to 0, wouldn't the learning rate then always end at 0? I would appreciate some insight on this if possible.
```
TypeError: get_wsd_schedule() missing 1 required positional argument: 'num_stable_steps'
```
Additionally, trying to pass in `num_warmup_steps` in `lr_scheduler_kwargs` will result in duplicate keys:
```
return schedule_func(optimizer, num_warmup_steps=num_warmup_steps, **scheduler_specific_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: transformers.optimization.get_wsd_schedule() got multiple values for keyword argument 'num_warmup_steps'
```
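For context, this is roughly the Trainer configuration that runs into the two errors above (the exact argument values are my assumption of typical usage):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    lr_scheduler_type="warmup_stable_decay",
    warmup_steps=100,
    # Fails with "missing 1 required positional argument: 'num_stable_steps'".
    lr_scheduler_kwargs={"num_decay_steps": 200},
    # Adding num_warmup_steps here instead triggers the duplicate-keyword error,
    # because the Trainer already passes num_warmup_steps to schedule_func:
    # lr_scheduler_kwargs={"num_warmup_steps": 100, "num_decay_steps": 200},
)
```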
### Motivation
I want to run WSD scheduler for my training, but I do not want to have to calculate the stable steps.
### Your contribution
I can contribute to this, but I would first like to understand from the maintainers any edge cases or scenarios I might have missed. For now, here is my current workaround:
```
def get_wsd_schedule(
+   num_training_steps: int = 0,
):
    ...
    assert num_stable_steps or num_training_steps, "One of either stable steps or training steps must be provided"
    if not num_stable_steps:
        num_stable_steps = num_training_steps - num_warmup_steps - num_decay_steps
```
```
if name == SchedulerType.WARMUP_STABLE_DECAY:
    return schedule_func(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps, **scheduler_specific_kwargs)
``` | Feature request | low | Critical |