id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,783,887,326 | ollama | Allow building with BLAS/BLIS now that Ollama's runners are not pure native builds of llama.cpp anymore | _A regression from ~18 tk/s to ~8 tk/s eval for llama3.2 on a Ryzen Threadripper 1820X._
Up to version `v0.5.1` I was able to build the official `llama-server` from llama.cpp and use it as part of an Ollama build that skips generation. I'm using AMD's AOCC compiler and AOCL (a BLIS-flavored implementation tuned for AMD cores) on Linux with `-march=znver1`.
I was building `llama-server` (with AOCC and AOCL configured) with:
```
cmake -G Ninja -B build \
-DGGML_BLAS=ON \
-DGGML_BLAS_VENDOR=AOCL_mt \
-DCMAKE_C_COMPILER=clang \
-DCMAKE_CXX_COMPILER=clang++ \
-DGGML_NATIVE=OFF \
-DLLAMA_BUILD_TESTS:BOOL=0 \
-DCMAKE_BUILD_TYPE:STRING=Release \
-DGGML_AVX:BOOL=1 \
-DGGML_AVX2:BOOL=1 \
-DGGML_BLAS:BOOL=1 \
-DGGML_BUILD_EXAMPLES:BOOL=0 \
-DBUILD_SHARED_LIBS:BOOL=0 \
-DGGML_NATIVE:BOOL=0 \
-DGGML_FMA:BOOL=1 \
-DGGML_F16C:BOOL=1 \
-DGGML_LTO:BOOL=1 \
-DCMAKE_C_FLAGS:STRING="-march=znver1" \
-DCMAKE_CXX_FLAGS:STRING="-march=znver1" \
-DCMAKE_INSTALL_PREFIX:PATH=/root/llama.cpp/install \
-DBLAS_INCLUDE_DIRS:PATH=/root/aocl/5.0.0/aocc/include
```
But now Ollama no longer uses a pure build of llama.cpp. To make things worse, it passes a `runner` argument that `llama-server` doesn't accept.
From my understanding, the runners are now Go applications that link against llama.cpp at build time.
How can I make a custom build of these Go runners that uses BLIS and lets me pass `-march=znver1` at build time?
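One possible starting point, since the runners use cgo to link llama.cpp: Go's cgo toolchain honors the standard `CGO_*` environment variables, so architecture flags and a BLIS link line could be injected into the runner build through the environment. This is a sketch under assumptions — the AOCL paths and `blis-mt` library name mirror the cmake invocation above, and whether Ollama's build scripts forward these variables untouched would need checking:

```shell
# Hypothetical environment for building Go runners that link llama.cpp via cgo.
# Paths and the BLIS library name mirror the AOCL setup above; they are
# assumptions, not a verified Ollama recipe.
export CC=clang
export CXX=clang++
export CGO_CFLAGS="-march=znver1 -O3"
export CGO_CXXFLAGS="-march=znver1 -O3"
export CGO_LDFLAGS="-L/root/aocl/5.0.0/aocc/lib -lblis-mt"
# A build invoked after this (e.g. `go build` in the runner directory)
# would pick these flags up for any cgo compilation units.
```

Whether this reaches the vendored llama.cpp objects depends on how Ollama's Makefiles drive the native compile, so treat it as a first experiment rather than a known-good configuration.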
p.s.: I'm not a Go developer :( | feature request | low | Minor |
2,783,892,879 | kubernetes | https://github.com/kubernetes/kubernetes/issues/129593 | ### Which jobs are failing?
master-informing
- ec2-master-scale-performance
### Which tests are failing?
ClusterLoaderV2.load overall (/home/prow/go/src/k8s.io/perf-tests/clusterloader2/testing/load/config.yaml)
ClusterLoaderV2.load: [step: 29] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2.load: [step: 29] gathering measurements [07] - APIAvailability
### Since when has it been failing?
Failing continuously since 2 Jan through now (13 Jan)
[2025-01-03 07:03:16 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2/1875075116421877760)
[2025-01-04 07:02:14 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2/1875437256194396160)
[2025-01-08 07:02:13 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2/1876886807254142976)
[2025-01-13 07:03:04 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2/1878698764034641920)

### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#ec2-master-scale-performance
### Reason for failure (if possible)
`{ Failure :0
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:secrets Subresource: Verb:DELETE Scope:resource Latency:perc50: 25.95124ms, perc90: 46.712232ms, perc99: 3.770032761s Count:16151 SlowCount:478}; expected perc99 <= 1s got: &{Resource:configmaps Subresource: Verb:GET Scope:resource Latency:perc50: 25.439364ms, perc90: 45.790856ms, perc99: 3.403321481s Count:579 SlowCount:8}; expected perc99 <= 1s got: &{Resource:leases Subresource: Verb:PUT Scope:resource Latency:perc50: 25.681733ms, perc90: 46.22712ms, perc99: 2.928246732s Count:2734799 SlowCount:44167}; expected perc99 <= 1s got: &{Resource:pods Subresource:binding Verb:POST Scope:resource Latency:perc50: 25.594763ms, perc90: 46.070573ms, perc99: 2.366918654s Count:182655 SlowCount:2645}; expected perc99 <= 1s got: &{Resource:deployments Subresource: Verb:POST Scope:resource Latency:perc50: 25.563803ms, perc90: 46.014846ms, perc99: 1.562933373s Count:16609 SlowCount:228}; expected perc99 <= 1s]]
:0}`
`{ Failure :0
[measurement call APIAvailability - APIAvailability error: API availability: SLO not fulfilled (expected >= 99.50, got: 98.93)]
:0}`
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig architecture
/sig scheduling | kind/failing-test,sig/architecture,needs-triage | low | Critical |
2,783,893,022 | yt-dlp | [Rai] Failed to parse JSON | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Italy (shouldn't be blocked outside)
### Provide a description that is worded well enough to be understood
The Rai extractor stopped working with a "Failed to parse JSON" error
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-v', 'https://www.rai.it/programmi/report/inchieste/Questione-di-lobby-9fbdb9dc-3765-4377-823e-58db0561a4f2.html', '--ignore-config']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [dade5e35c] (pip)
[debug] Python 3.12.7 (CPython aarch64 64bit) - Linux-4.19.191-28577532-abA346BXXU9CXK1-aarch64-with-libc (OpenSSL 3.3.2 3 Sep 2024, libc)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Post-Processor Plugins: FixupMtimePP
[debug] Plugin directories: ['/data/data/com.termux/files/usr/lib/python3.12/site-packages/yt_dlp_plugins']
[debug] Loaded 1837 extractors
[Rai] Extracting URL: https://www.rai.it/programmi/report/inchieste/Questione-di-lobby-9fbdb9dc-3765-4377-823e-58db0561a4f2.html
[Rai] 9fbdb9dc-3765-4377-823e-58db0561a4f2: Downloading video JSON
WARNING: [Rai] 9fbdb9dc-3765-4377-823e-58db0561a4f2: Failed to parse JSON: Expecting value in '': line 1 column 1 (char 0)
WARNING: Extractor Rai returned nothing; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| site-bug,triage | low | Critical |
2,783,946,765 | react-native | TextInput onKeyPress fires after the onChange | ### Description
According to the [docs](https://reactnative.dev/docs/textinput#:~:text=Fires%20before%20onChange%20callbacks.), the keyPress event should fire before the change event. That does not seem to be the case: I've run into this on a physical device in a project (doesn't use expo) that I can't share, and I've also reproduced it in an online React Native playground with Expo.
In the video you can see that the `console.log` in the `onChange` callback happens before the `console.log` in the `onKeyPress` callback:
https://github.com/user-attachments/assets/6e5b8cd5-6452-42a7-b315-54c911933a14
### Steps to reproduce
1. Add the onKeyPress and onChange callbacks to a TextInput
2. Trigger both events by pressing any key on a keyboard
3. see that onChange triggers first
### React Native Version
tested on 0.74.5 and 0.76.6
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: Linux 5.10 Manjaro Linux
CPU: (8) x64 Intel(R) Core(TM) i7-10610U CPU @ 1.80GHz
Memory: 18.82 GB / 30.99 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.18.2
path: /usr/local/bin/node
Yarn:
version: 1.22.22
path: /usr/local/bin/yarn
npm:
version: 9.8.1
path: /usr/local/bin/npm
Watchman: Not Found
SDKs:
Android SDK: Not Found
IDEs:
Android Studio: AI-241.18034.62.2411.12169540
Languages:
Java: Not Found
Ruby:
version: 2.6.8
path: /home/owner/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.74.5
wanted: 0.74.5
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: false
newArchEnabled: false
```
### Stacktrace or Logs
```text
no logs to show
```
### Reproducer
https://snack.expo.dev/@ncpa/rn-bug
### Screenshots and Videos
_No response_ | Component: TextInput,Needs: Repro,Newer Patch Available,Needs: Attention | low | Critical |
2,783,980,089 | godot | The Toggle Scripts Panel keyboard shortcut in the script editor responds to the wrong key on Brazilian ABNT2 keyboards. | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
- I haven't tested other versions, but I remember this happening for as long as I can remember; I just didn't report it until now.
### System information
Godot v4.3.stable - Windows 10.0.19044 - GLES3 (Compatibility) - ANGLE (NVIDIA, NVIDIA GeForce GTX 1050 Ti (0x00001C82) Direct3D11 vs_5_0 ps_5_0, D3D11-31.0.15.4617) - Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (8 Threads)
### Issue description
The keyboard shortcut for "toggle scripts panel" is ctrl+backslash.

But when using a default Brazilian ABNT2 keyboard, I have to press the "]" key, which is in the same position as the "#" key on an ISO keyboard. Below you can see the ABNT2 layout.

I think this is probably because the shortcut uses the physical key code instead of the non-physical (layout-dependent) key code.
### Steps to reproduce
- plug in an abnt2 keyboard
- open a script
- press ctrl+backslash (the backslash printed on the keyboard, that is, the key to the right of the left shift)
- nothing happens
- press ctrl+right_square_bracket
- the scripts panel will hide
### Minimal reproduction project (MRP)
Not needed. Any new empty project will do. | discussion,topic:editor,topic:input | low | Minor |
2,784,025,721 | godot | Page fault in amdgpu driver on Linux with Radeon RX 6600 | ### Tested versions
Definitely happening in `v4.3.stable.mono.official.77dcf97d8`, but I've been having this issue on and off ever since I bought this machine in spring 2023. So it's probably present in earlier 4.x versions as well.
### System information
Godot v4.3.stable.mono - Arch Linux #1 SMP PREEMPT_DYNAMIC Mon, 09 Dec 2024 14:31:57 +0000 - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (amdgpu) - 13th Gen Intel(R) Core(TM) i7-13700K (24 Threads)
### Issue description
Occasionally, and seemingly randomly, the entire system freezes. The mouse cursor stops moving, and the display is not updated anymore. Even Num Lock does not toggle the keyboard light anymore. After exactly 10 seconds, the screen briefly goes black and then comes back again, but responsiveness does not. The only remedy is a hard reset.
A couple of dumps from the system journal:
<details>
<summary>Jan 10 13:23:31</summary>
```
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000b12bc8000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000b12bc8000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000b12bc0000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000b12bc0000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000b12bc9000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000712c0a000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000712c02000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000712c0a000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000b12bc8000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:6 pasid:0)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000b12bc0000 from client 0x1b (UTCL2)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00641051
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 10 13:23:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 10 13:23:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: Dumping IP State
Jan 10 13:23:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: Dumping IP State Completed
Jan 10 13:23:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring gfx_0.0.0 timeout, signaled seq=59200948, emitted seq=59200950
Jan 10 13:23:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: Process information: process godot-mono-bin pid 605579 thread godot-mono-bin pid 605579
Jan 10 13:23:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: GPU reset begin!
Jan 10 13:23:45 craig kernel: amdgpu 0000:03:00.0: amdgpu: failed to suspend display audio
Jan 10 13:23:45 craig kernel: amdgpu 0000:03:00.0: amdgpu: MODE1 reset
Jan 10 13:23:45 craig kernel: amdgpu 0000:03:00.0: amdgpu: GPU mode1 reset
Jan 10 13:23:45 craig kernel: amdgpu 0000:03:00.0: amdgpu: GPU smu mode1 reset
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:45 craig kernel: snd_hda_intel 0000:03:00.1: spurious response 0x0:0x0, last cmd=0x977100
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: GPU reset succeeded, trying to resume
Jan 10 13:23:46 craig kernel: [drm] PCIE GART of 512M enabled (table at 0x0000008001300000).
Jan 10 13:23:46 craig kernel: [drm] VRAM is lost due to GPU reset!
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: PSP is resuming...
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: reserve 0xa00000 from 0x81fd000000 for PSP TMR
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: RAS: optional ras ta ucode is not available
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: SMU is resuming...
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: smu driver if version = 0x0000000f, smu fw if version = 0x00000013, smu fw program = 0, version = 0x003b3100 (59.49.0)
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: SMU driver if version not matched
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: use vbios provided pptable
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: SMU is resumed successfully!
Jan 10 13:23:46 craig kernel: [drm] DMUB hardware initialized: version=0x02020020
Jan 10 13:23:46 craig kernel: [drm] kiq ring mec 2 pipe 1 q 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring gfx_0.1.0 uses VM inv eng 1 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 4 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 5 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 6 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 7 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 8 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 9 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 10 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 11 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring kiq_0.2.1.0 uses VM inv eng 12 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring sdma0 uses VM inv eng 13 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring sdma1 uses VM inv eng 14 on hub 0
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring vcn_dec_0 uses VM inv eng 0 on hub 8
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring vcn_enc_0.0 uses VM inv eng 1 on hub 8
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring vcn_enc_0.1 uses VM inv eng 4 on hub 8
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring jpeg_dec uses VM inv eng 5 on hub 8
Jan 10 13:23:46 craig kernel: amdgpu 0000:03:00.0: amdgpu: GPU reset(2) succeeded!
```
</details>
<details>
<summary>Jan 13 14:28:41</summary>
```
Jan 13 14:28:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:3 pasid:32773)
Jan 13 14:28:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 3894 thread godot-mono-bin pid 3894
Jan 13 14:28:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000389268000 from client 0x1b (UTCL2)
Jan 13 14:28:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00341051
Jan 13 14:28:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:28:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:28:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
```
</details>
<details>
<summary>Jan 13 14:11:32</summary>
```
Jan 13 14:11:32 craig kernel: gmc_v10_0_process_interrupt: 16 callbacks suppressed
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fb4000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fb4000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fbc000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fb5000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fb5000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fbd000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fbd000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fb4000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fbc000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32777)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 77804 thread godot-mono-bin pid 77804
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000000728fb4000 from client 0x1b (UTCL2)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 14:11:32 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
```
</details>
<details>
<summary>Jan 13 15:40:31 (this one with the RADV driver instead of the AMDVLK driver)</summary>
```
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x000080051456c000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000800514564000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000800514564000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x000080051456d000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x000080051456d000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x000080051456c000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00241051
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: TCP (0x8)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x5
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x1
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000800514565000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00000000
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: CB/DB (0x0)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x0000800514565000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00000000
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: CB/DB (0x0)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x000080051456c000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00000000
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: CB/DB (0x0)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:40 vmid:2 pasid:32771)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in process godot-mono-bin pid 2126 thread godot-mono-bin pid 2126
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: in page starting at address 0x000080051456c000 from client 0x1b (UTCL2)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00000000
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: Faulty UTCL2 client ID: CB/DB (0x0)
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MORE_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: WALKER_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: PERMISSION_FAULTS: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: MAPPING_ERROR: 0x0
Jan 13 15:40:31 craig kernel: amdgpu 0000:03:00.0: amdgpu: RW: 0x0
Jan 13 15:40:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: Dumping IP State
Jan 13 15:40:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: Dumping IP State Completed
Jan 13 15:40:41 craig kernel: amdgpu 0000:03:00.0: amdgpu: ring gfx_0.0.0 timeout, signaled seq=79776, emitted seq=79778
```
</details>
### Steps to reproduce
I wish I knew.
This happens when I'm just running the editor, not a game.
It also happens when the editor is in the background and I'm interacting with some other application, like VS Code or Firefox.
It happens _far less frequently_ when Godot is not running at all, so it's possibly a driver bug that is triggered or made more likely by something Godot is doing.
However, I've been playing several other games on this system without any issues. No idea if they use Vulkan or not, or whether this even is a Vulkan issue.
I'm well aware that this report does not have enough information to diagnose or fix the issue, but at least it provides a place to gather more reports (if any). And maybe someone could give me some pointers on where I could look to gather more information?
### Minimal reproduction project (MRP)
n/a | bug,topic:rendering,needs testing,crash | low | Critical |
2,784,043,433 | svelte | :has with sibling selector shows as unused when styling svg elements | ### Describe the bug
When styling elements based on siblings, the `:has` selector is falsely reported as unused. When I wrap it in `:global()` it works.
When styling HTML elements in the same way, it works without issues.
### Reproduction
- With "~" sibling selector: https://svelte.dev/playground/hello-world#H4sIAAAAAAAAA22QwU7DMBBEf2VZDm1RRJpKXFwnEjf-AXNInW1jyTiRvS2NrPDtyEmqgsTRM7Nvdh3R1Z-EAt_I2g6-Om8bWFNjmJoNZng0lgKK94g89CmXBMxuU699_xwuZDlphzrQf7ruHJPjgAJl0N70XCmn2BJDikMJq6l3tVdO5veEk-FymqLxkWrdQrTkTtwKKLZjBmZMlmKpjdeWQF9LhdHAU7IVgh5KhcVWIfhS4UtSbB1CqdBTozCvlnFPmudHzFPNhP0D3W1vuN0vXJqR-bKik21RzX8Y01Hjg8zbYrmCB0tTwQwVbR3W36l2E-cdFB-NtQI8NfukjPfwkpj9gz3T4st8xmKGTFdGwf5M48f4A7HHHojRAQAA
- With "+" sibling selector: https://svelte.dev/playground/hello-world#H4sIAAAAAAAAA22QwU7DMBBEf8Ush7YQkaYSF9eJxI1_wBxSZ9tYWpzI3pZGVv4dOUlVkDh6ZvbNriO4-gtBwjsSdeK789SINTaWsdlABkdLGEB-ROChT7kkQHabeuv7l3BB4qQd6oD_6aZzjI4DSFDBeNtzpZ1mQhYpLkqxmnpXe-1Ufk84FS6nKRofsTatiITuxK0UxXbMhB2TpVkZ6w2hMNdSQ7TiKdkahBlKDcVWg_ClhtekUB1CqcFjoyGvlnGPhudHzFPNhP0D3W1vuN0vXJpR-bKiU21RzX8Y01Hjg8rbYrmCB8KpYIbKtg7r51S7ifMOmo-WSAqPzT4p4z28JGb_QGdcfJXPWMiA8cog2Z9x_Bx_AMd0wwzRAQAA
- With :global: https://svelte.dev/playground/hello-world#H4sIAAAAAAAAA21QzU7DMAx-lWAO26Ci7SQuWTqJG-9AOHSpt0YyaZV4Y1XUd0dpOw0kjv5-bUdw9ReChHck6sR356kRa2wsY7OBDI6WMID8iMBDn3QJgOzmeuv7l3BB4oQd6oD_4aZzjI4DSFDBeNvzXjvNhCySXFRiNfWudtqp_K5wKlxOkzQ-Ym1aEQndiVspymLMhB0TpVkZ6w2hMNdKQ7TiKdEahBkqDWWhQfhKw2tCqA6h0uCx0ZDvF7tHw_MQ81Qzxf4J3Ra3uO2vuORR-bKiU225n38Y01Hjg8rbcrmCB8KpYA6VJ-oONa1lW4f1c6rfbOK8jOajJZLCY7NLyHh3LYqZP9AZF17lcz5kwHhlkOzPOH6OP9ySgvHaAQAA
### Logs
_No response_
### System Info
```shell
System:
OS: Linux 6.8 Ubuntu 24.04.1 LTS 24.04.1 LTS (Noble Numbat)
CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Memory: 18.77 GB / 31.11 GB
Container: Yes
Shell: 5.2.21 - /bin/bash
Binaries:
Node: 18.20.3 - ~/.nvm/versions/node/v18.20.3/bin/node
Yarn: 1.22.22 - ~/.nvm/versions/node/v18.20.3/bin/yarn
npm: 10.7.0 - ~/.nvm/versions/node/v18.20.3/bin/npm
pnpm: 9.4.0 - ~/.nvm/versions/node/v18.20.3/bin/pnpm
bun: 1.1.34 - ~/.bun/bin/bun
Browsers:
Chrome: 131.0.6778.264
npmPackages:
svelte: 5.17.3 => 5.17.3
```
### Severity
annoyance | css | low | Critical |
2,784,101,580 | react-native | Accessibility Actions Not Triggering on FlatList Carousel (iOS and Android) | ### Description
I’m trying to add accessibility to a FlatList carousel in my React Native app by using [Accessibility Actions](https://reactnative.dev/docs/accessibility#accessibility-actions). I’ve added both the actions and the corresponding function to handle them. However, when I test it in the simulator (both Xcode for iOS and Android Studio), nothing happens when I attempt to trigger the actions.
Here’s a snippet of what I’ve done:
```
import React from "react";
import { FlatList, Text } from "react-native";

export default function App() {
const onAccessibilityAction = (event: any) => {
console.log("event: ", event);
};
const renderItem = ({ item }) => {
return (
<Text>{item.id}</Text>
);
};
const data = () => {
const result = []
for (let i = 0; i < 50; i++) {
result.push({ id: i })
}
return result;
}
return (
<FlatList
horizontal={false}
focusable={true}
accessibilityActions={[
{ name: "increment" },
{ name: "decrement" },
]}
onAccessibilityAction={onAccessibilityAction}
accessibilityRole="adjustable"
renderItem={renderItem}
data={data()}
/>
);
}
```
### Steps to reproduce
iOS
1. Open Xcode and the Accessibility Inspector (Xcode -> Open Developer Tool -> Accessibility Inspector)
2. Run your app
3. Navigate to the list created above using the Accessibility Inspector and try to perform an increment or decrement action
### React Native Version
0.76.6
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.2
CPU: (8) arm64 Apple M1 Pro
Memory: 2.93 GB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.14.0
path: ~/.nvm/versions/node/v20.14.0/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v20.14.0/bin/yarn
npm:
version: 10.7.0
path: ~/.nvm/versions/node/v20.14.0/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.16.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12483815
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 2.7.5
path: /Users/lucasgermano/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.3
wanted: ^15.1.3
react:
installed: 18.3.1
wanted: ^18.2.0
react-native:
installed: 0.76.6
wanted: ^0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
-
```
### Reproducer
https://snack.expo.dev/WQaHKjYSd8bTv_BVB_r67
### Screenshots and Videos
_No response_ | Platform: iOS,Platform: Android,Accessibility,Needs: Triage :mag: | low | Minor |
2,784,138,203 | go | runtime: a Windows application launched via Steam sometimes freezes | ### Go version
go version go1.23.2 windows/amd64
### Output of `go env` in your module/workspace:
```shell
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\hajimehoshi\AppData\Local\go-build
set GOENV=C:\Users\hajimehoshi\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\hajimehoshi\go\pkg\mod
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\hajimehoshi\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=C:\Program Files\Go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=C:\Program Files\Go\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.23.2
set GODEBUG=
set GOTELEMETRY=local
set GOTELEMETRYDIR=C:\Users\hajimehoshi\AppData\Roaming\go\telemetry
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=C:\Users\hajimehoshi\ebiten\go.mod
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\HAJIME~1\AppData\Local\Temp\go-build1125585036=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
Compile this Go program to an executable file (with `-ldflags="-H=windowsgui"`)
*EDIT*: I was able to minimize the case further. See https://github.com/golang/go/issues/71242#issuecomment-2587406503
```go
package main
import (
"log"
"os"
"runtime"
"runtime/debug"
"syscall"
"time"
"unsafe"
)
const (
WS_OVERLAPPEDWINDOW = 0x00000000 | 0x00C00000 | 0x00080000 | 0x00040000 | 0x00020000 | 0x00010000
CW_USEDEFAULT = ^0x7fffffff
SW_SHOW = 5
WM_DESTROY = 2
)
type (
ATOM uint16
HANDLE uintptr
HINSTANCE HANDLE
HICON HANDLE
HCURSOR HANDLE
HBRUSH HANDLE
HWND HANDLE
HMENU HANDLE
)
type WNDCLASSEX struct {
Size uint32
Style uint32
WndProc uintptr
ClsExtra int32
WndExtra int32
Instance HINSTANCE
Icon HICON
Cursor HCURSOR
Background HBRUSH
MenuName *uint16
ClassName *uint16
IconSm HICON
}
type RECT struct {
Left, Top, Right, Bottom int32
}
type POINT struct {
X, Y int32
}
type MSG struct {
Hwnd HWND
Message uint32
WParam uintptr
LParam uintptr
Time uint32
Pt POINT
}
func GetModuleHandle(modulename *uint16) HINSTANCE {
r, _, _ := syscall.SyscallN(procGetModuleHandle.Addr(), uintptr(unsafe.Pointer(modulename)))
return HINSTANCE(r)
}
func RegisterClassEx(w *WNDCLASSEX) ATOM {
r, _, _ := syscall.SyscallN(procRegisterClassEx.Addr(), uintptr(unsafe.Pointer(w)))
return ATOM(r)
}
func CreateWindowEx(exStyle uint, className, windowName *uint16,
style uint, x, y, width, height int, parent HWND, menu HMENU,
instance HINSTANCE, param unsafe.Pointer) HWND {
r, _, _ := syscall.SyscallN(procCreateWindowEx.Addr(), uintptr(exStyle), uintptr(unsafe.Pointer(className)),
uintptr(unsafe.Pointer(windowName)), uintptr(style), uintptr(x), uintptr(y), uintptr(width), uintptr(height),
uintptr(parent), uintptr(menu), uintptr(instance), uintptr(param))
return HWND(r)
}
func AdjustWindowRect(rect *RECT, style uint, menu bool) bool {
var iMenu uintptr
if menu {
iMenu = 1
}
r, _, _ := syscall.SyscallN(procAdjustWindowRect.Addr(), uintptr(unsafe.Pointer(rect)), uintptr(style), iMenu)
return r != 0
}
func ShowWindow(hwnd HWND, cmdshow int) bool {
r, _, _ := syscall.SyscallN(procShowWindow.Addr(), uintptr(hwnd), uintptr(cmdshow))
return r != 0
}
func GetMessage(msg *MSG, hwnd HWND, msgFilterMin, msgFilterMax uint32) int {
r, _, _ := syscall.SyscallN(procGetMessage.Addr(), uintptr(unsafe.Pointer(msg)), uintptr(hwnd), uintptr(msgFilterMin), uintptr(msgFilterMax))
return int(r)
}
func TranslateMessage(msg *MSG) bool {
r, _, _ := syscall.SyscallN(procTranslateMessage.Addr(), uintptr(unsafe.Pointer(msg)))
return r != 0
}
func DispatchMessage(msg *MSG) uintptr {
r, _, _ := syscall.SyscallN(procDispatchMessage.Addr(), uintptr(unsafe.Pointer(msg)))
return r
}
func DefWindowProc(hwnd HWND, msg uint32, wParam, lParam uintptr) uintptr {
r, _, _ := syscall.SyscallN(procDefWindowProc.Addr(), uintptr(hwnd), uintptr(msg), wParam, lParam)
return r
}
func PostQuitMessage(exitCode int) {
syscall.SyscallN(procPostQuitMessage.Addr(), uintptr(exitCode))
}
var (
kernel32 = syscall.NewLazyDLL("kernel32.dll")
procGetModuleHandle = kernel32.NewProc("GetModuleHandleW")
user32 = syscall.NewLazyDLL("user32.dll")
procRegisterClassEx = user32.NewProc("RegisterClassExW")
procCreateWindowEx = user32.NewProc("CreateWindowExW")
procAdjustWindowRect = user32.NewProc("AdjustWindowRect")
procShowWindow = user32.NewProc("ShowWindow")
procGetMessage = user32.NewProc("GetMessageW")
procTranslateMessage = user32.NewProc("TranslateMessage")
procDispatchMessage = user32.NewProc("DispatchMessageW")
procDefWindowProc = user32.NewProc("DefWindowProcW")
procPostQuitMessage = user32.NewProc("PostQuitMessage")
)
func init() {
runtime.LockOSThread()
}
func main() {
className, err := syscall.UTF16PtrFromString("Sample Window Class")
if err != nil {
panic(err)
}
inst := GetModuleHandle(className)
wc := WNDCLASSEX{
Size: uint32(unsafe.Sizeof(WNDCLASSEX{})),
WndProc: syscall.NewCallback(wndProc),
Instance: inst,
ClassName: className,
}
RegisterClassEx(&wc)
wr := RECT{
Left: 0,
Top: 0,
Right: 320,
Bottom: 240,
}
title, err := syscall.UTF16PtrFromString("My Title")
if err != nil {
panic(err)
}
AdjustWindowRect(&wr, WS_OVERLAPPEDWINDOW, false)
hwnd := CreateWindowEx(
0, className,
title,
WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT, CW_USEDEFAULT, int(wr.Right-wr.Left), int(wr.Bottom-wr.Top),
0, 0, inst, nil,
)
if hwnd == 0 {
panic(syscall.GetLastError())
}
ShowWindow(hwnd, SW_SHOW)
go func() {
for {
_ = make([]byte, 256*1024)
time.Sleep(time.Millisecond)
}
}()
go func() {
f, err := os.Create("log.txt")
if err != nil {
panic(err)
}
defer f.Close()
log.SetOutput(f)
for {
time.Sleep(time.Second)
var gcStats debug.GCStats
debug.ReadGCStats(&gcStats)
log.Printf("LastGC: %s, NumGC: %d, PauseTotal: %s", gcStats.LastGC, gcStats.NumGC, gcStats.PauseTotal)
if err := f.Sync(); err != nil {
panic(err)
}
}
}()
var msg MSG
for GetMessage(&msg, 0, 0, 0) != 0 {
TranslateMessage(&msg)
DispatchMessage(&msg)
}
}
func wndProc(hwnd HWND, msg uint32, wparam, lparam uintptr) uintptr {
switch msg {
case WM_DESTROY:
PostQuitMessage(0)
}
return DefWindowProc(hwnd, msg, wparam, lparam)
}
```
Replace an exe file in a Steam game with the compiled exe file, and run it via the Steam client.
### What did you see happen?
The application sometimes freezes for more than 10 seconds. For example, I saw this log
```
2025/01/13 23:23:41 LastGC: 2025-01-13 23:23:41.6060533 +0900 JST, NumGC: 51, PauseTotal: 1.3186ms
2025/01/13 23:23:42 LastGC: 2025-01-13 23:23:42.6470915 +0900 JST, NumGC: 102, PauseTotal: 4.5945ms
2025/01/13 23:23:43 LastGC: 2025-01-13 23:23:43.6434896 +0900 JST, NumGC: 152, PauseTotal: 7.8752ms
2025/01/13 23:23:44 LastGC: 2025-01-13 23:23:44.6481645 +0900 JST, NumGC: 204, PauseTotal: 10.224ms
2025/01/13 23:23:45 LastGC: 2025-01-13 23:23:45.6448025 +0900 JST, NumGC: 255, PauseTotal: 12.5067ms
2025/01/13 23:23:46 LastGC: 2025-01-13 23:23:46.6584686 +0900 JST, NumGC: 309, PauseTotal: 14.2881ms
2025/01/13 23:23:47 LastGC: 2025-01-13 23:23:47.6550747 +0900 JST, NumGC: 362, PauseTotal: 17.056ms
2025/01/13 23:23:48 LastGC: 2025-01-13 23:23:48.6681264 +0900 JST, NumGC: 413, PauseTotal: 18.5012ms
2025/01/13 23:23:49 LastGC: 2025-01-13 23:23:49.6577372 +0900 JST, NumGC: 464, PauseTotal: 21.2877ms
2025/01/13 23:23:50 LastGC: 2025-01-13 23:23:50.6675317 +0900 JST, NumGC: 514, PauseTotal: 24.9508ms
2025/01/13 23:23:51 LastGC: 2025-01-13 23:23:51.6642908 +0900 JST, NumGC: 563, PauseTotal: 27.2671ms
2025/01/13 23:23:52 LastGC: 2025-01-13 23:23:52.6696781 +0900 JST, NumGC: 612, PauseTotal: 29.6692ms
2025/01/13 23:23:53 LastGC: 2025-01-13 23:23:53.6818947 +0900 JST, NumGC: 661, PauseTotal: 31.9968ms
2025/01/13 23:23:54 LastGC: 2025-01-13 23:23:54.6830572 +0900 JST, NumGC: 711, PauseTotal: 34.9958ms
2025/01/13 23:23:55 LastGC: 2025-01-13 23:23:55.6889185 +0900 JST, NumGC: 761, PauseTotal: 38.2468ms
2025/01/13 23:23:56 LastGC: 2025-01-13 23:23:56.6869067 +0900 JST, NumGC: 813, PauseTotal: 41.5747ms
2025/01/13 23:23:57 LastGC: 2025-01-13 23:23:57.6920325 +0900 JST, NumGC: 863, PauseTotal: 45.5415ms
2025/01/13 23:24:18 LastGC: 2025-01-13 23:23:58.2810119 +0900 JST, NumGC: 894, PauseTotal: 47.2387ms
2025/01/13 23:24:19 LastGC: 2025-01-13 23:24:19.3442472 +0900 JST, NumGC: 945, PauseTotal: 51.3869ms
2025/01/13 23:24:20 LastGC: 2025-01-13 23:24:20.3460036 +0900 JST, NumGC: 995, PauseTotal: 54.0004ms
2025/01/13 23:24:21 LastGC: 2025-01-13 23:24:21.3371656 +0900 JST, NumGC: 1047, PauseTotal: 55.3437ms
2025/01/13 23:24:22 LastGC: 2025-01-13 23:24:22.344327 +0900 JST, NumGC: 1098, PauseTotal: 56.9757ms
2025/01/13 23:24:23 LastGC: 2025-01-13 23:24:23.3523815 +0900 JST, NumGC: 1147, PauseTotal: 61.8229ms
2025/01/13 23:24:24 LastGC: 2025-01-13 23:24:24.3560493 +0900 JST, NumGC: 1200, PauseTotal: 64.7476ms
2025/01/13 23:24:25 LastGC: 2025-01-13 23:24:25.3552534 +0900 JST, NumGC: 1250, PauseTotal: 67.5551ms
```
You can see that a freeze happens between 23:23:57 and 23:24:18.
### What did you expect to see?
The application doesn't freeze. | OS-Windows,NeedsInvestigation,compiler/runtime,BugReport | medium | Critical |
2,784,171,345 | godot | Animation Performance Problems | ### Tested versions
Reproduced 4.4 Dev 7, 4.3 Stable
Not Reproduced 3.6 Stable
### System information
Windows 11, i3 10105f
### Issue description
I've noticed that AnimationTree and AnimationPlayer seem to have a ludicrously high cost.
For reference, to hold 60 FPS in my game I can have ~100 AI characters with active animation players and trees, but 3-4 times as many with them disabled.
Each AI is dynamically typed GDScript that isn't even all that optimized, gathering data about its environment and communicating with the others, with Godot's physics and navigation calculations running in the background of all of that.
The GPU is not the limitation: it takes only a couple of milliseconds per frame to render, and changing GPU-taxing settings does nothing.
### Steps to reproduce
Open the MRP and observe the poor performance. Then go to the instance, set the process mode of the animation tree and player to Disabled, and observe a massive improvement.
Note: The MRP goes a bit extreme (some 2.4k instances) to tank the performance even harder on more powerful systems. (It's a slide show on my machine in the editor with animations activated in the instances.)
### Minimal reproduction project (MRP)
[Animation.zip](https://github.com/user-attachments/files/18397920/Animation.zip) | confirmed,topic:animation,performance | medium | Major |
2,784,173,379 | rust | Add mpsc select | Rust has nice implementations of channels, but is missing an implementation of `select` (i.e. wait on multiple channels), which severely limits their usefulness.
The usual solution is to give up on `std::sync::mpsc::channel` entirely and use [`crossbeam-channel`](https://docs.rs/crossbeam/latest/crossbeam/channel/index.html) instead. That's rather unfortunate - it's like providing a file API, but there's no write functionality so you have to use a third party crate for all file operations instead.
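Not from the original report: a minimal, std-only sketch of the usual workaround when `select` is unavailable, namely merging several receivers into one channel by spawning forwarder threads (all names here are illustrative):

```rust
use std::sync::mpsc::channel;
use std::thread;

// Since std::sync::mpsc has no `select`, forward each receiver into one
// merged channel and receive from that instead.
fn merge_two() -> Vec<&'static str> {
    let (merged_tx, merged_rx) = channel();
    let (tx1, rx1) = channel::<&'static str>();
    let (tx2, rx2) = channel::<&'static str>();

    for rx in vec![rx1, rx2] {
        let fwd = merged_tx.clone();
        // Each forwarder drains its receiver until that channel closes.
        thread::spawn(move || {
            for msg in rx {
                fwd.send(msg).unwrap();
            }
        });
    }
    // Drop the original sender so merged_rx's iterator can terminate
    // once all forwarder clones are gone.
    drop(merged_tx);

    tx1.send("a").unwrap();
    tx2.send("b").unwrap();
    drop(tx1);
    drop(tx2);

    let mut got: Vec<_> = merged_rx.iter().collect();
    got.sort(); // arrival order across threads is nondeterministic
    got
}

fn main() {
    assert_eq!(merge_two(), ["a", "b"]);
}
```

This loses `select`'s ability to react per channel to readiness or disconnection, and it costs a thread per source, which is part of why people reach for `crossbeam-channel` instead.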
Years ago there was an unstable implementation, but [it was removed in 2019](https://github.com/rust-lang/rust/pull/60921) without ever being stabilised (the implementation was apparently unsafe and had other issues - see #27800).
However since 2022, `std::sync::mpsc::channel` has been implemented in terms of `crossbeam-channel`, which makes it even more weird that you still have to use `crossbeam-channel`, and also presumably means that importing `crossbeam-channel`'s `select` implementation is much easier.
Could we do that?
| T-libs-api,C-feature-request | low | Minor |
2,784,223,070 | next.js | Turbopack - Module not found: Can't resolve './foo!=!./foo' | ### Link to the code that reproduces this issue
https://github.com/jantimon/reproduction-nextjs-inline-match-resource
### To Reproduce
1. `npm run dev --turbopack`
2. See the error: `Module not found: Can't resolve './foo!=!./foo'`
### Current vs. Expected behavior
# Next.js 15.1.4 `!=!` Support
Inline match resource query `!=!` is not supported by Turbopack
| Command | Status |
|-----------------------|--------|
| npm run dev | ✅ |
| npm run build | ✅ |
| npm run dev --turbopack | ❌ |
As Turbopack does not support `!=!` in the path, it will throw an error when running `yarn dev --turbopack`:
```
⨯ ./app/page.tsx:3:1
Module not found: Can't resolve './foo!=!./foo'
1 | import Image from "next/image";
2 |
> 3 | import foo from "./foo!=!./foo";
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4 |
5 |
6 | export default function Home() {
```
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 20.9.0
npm: 10.1.0
Yarn: 1.22.19
pnpm: 9.6.0
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
We are trying to add Turbopack compatibility for [next-yak](https://github.com/DigitecGalaxus/next-yak)
> 🦀 Zero-runtime CSS-in-JS powered by Rust. Write styled-components syntax, get build-time CSS extraction and full RSC compatibility
next-yak transpiles tsx code to css
So our loader generates CSS from `button.tsx`, even though the file has a `.tsx` extension
`webpack` has a great feature for this case: "inline match resource queries"
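For readers unfamiliar with the syntax: webpack splits an inline match resource request on `!=!` — the left side (the "match resource") is what module rules are matched against, the right side is the request that actually gets loaded. A tiny illustrative parser (not webpack's code) of that convention:

```javascript
// Illustrative only -- not webpack's implementation. An inline match
// resource request has the shape "<matchResource>!=!<request>".
function parseInlineMatchResource(request) {
  const sep = request.indexOf("!=!");
  if (sep === -1) return { matchResource: undefined, request };
  return {
    matchResource: request.slice(0, sep), // matched against module rules
    request: request.slice(sep + 3),      // what is actually loaded
  };
}

// For next-yak's case: match as CSS while loading the .tsx file.
console.log(parseInlineMatchResource("./button.module.css!=!./button.tsx"));
```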
Will you add inline match resource queries to Turbopack, or is there another way to migrate next-yak to Turbopack? | Turbopack | low | Critical |
2,784,237,100 | PowerToys | PowerToys.Run will not launch | ### Microsoft PowerToys version
0.87.1
### Installation method
WinGet, GitHub
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
[PowerToysReport_2025-01-13-09-51-28.zip](https://github.com/user-attachments/files/18398279/PowerToysReport_2025-01-13-09-51-28.zip)
I recently updated to Windows 11 24H2, and since then PowerToys Run has stopped launching.
### ✔️ Expected Behavior
PowerToys.Run to launch as usual.
### ❌ Actual Behavior
Nothing
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,784,263,955 | tauri | [bug] I am using show_menu_on_left_click, but the following warning keeps appearing: tauri::tauri_utils::config::TrayIconConfig::menu_on_left_click: Use show_menu_on_left_click instead. | ### Describe the bug
```json
{
"build": {
"beforeBuildCommand": "yarn build",
"beforeDevCommand": "yarn dev",
"frontendDist": "../out",
"devUrl": "http://localhost:3000"
},
"bundle": {
"active": true,
"category": "DeveloperTool",
"icon": ["icons/icon.icns", "icons/icon.png"],
"macOS": {
"dmg": {
"background": "./imgs/image.png",
"windowSize": {
"width": 700,
"height": 400
}
},
"entitlements": "./entitlements.plist",
"signingIdentity": "Developer ID Application"
},
"createUpdaterArtifacts": true
},
"productName": "Jiffy",
"mainBinaryName": "jiffy",
"version": "1.0.1",
"identifier": "com.jiffy.dev",
"plugins": {
"updater": {
"pubkey": "",
"endpoints": [""]
}
},
"app": {
"macOSPrivateApi": true,
"windows": [
{
"label": "main"
},
{
"label": "list"
}
],
"security": {
"csp": null
},
"trayIcon": {
"id": "tray_list",
"iconPath": "icons/tray_icon.png",
"iconAsTemplate": true,
"showMenuOnLeftClick": true
}
}
}
```
This is our tauri.conf.json file.
I am using show_menu_on_left_click, but the following warning keeps appearing:
`tauri::tauri_utils::config::TrayIconConfig::menu_on_left_click: Use show_menu_on_left_click instead.`
How can this be solved?

### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
Error `tauri.conf.json` error on `app > trayIcon`: {"id":"tray_list","iconPath":"icons/tray_icon.png","iconAsTemplate":true,"showMenuOnLeftClick":true} is not valid under any of the schemas listed in the 'anyOf' keyword
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,784,329,318 | transformers | RLE of SAM can't handle masks with no change | ### System Info
- `transformers` version: 4.49.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.11.11
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm fine-tuning the SamModel and using the fine-tuned model in a mask-generation pipeline afterward.
After some time in the training, I suddenly get the following error when using the fine-tuned model in the pipeline:
```
Traceback (most recent call last):
File "***.py", line 17, in <module>
outputs = generator(image)
^^^^^^^^^^^^^^^^
File "transformers\pipelines\mask_generation.py", line 166, in __call__
return super().__call__(image, *args, num_workers=num_workers, batch_size=batch_size, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\base.py", line 1354, in __call__
return next(
^^^^^
File "transformers\pipelines\pt_utils.py", line 124, in __next__
item = next(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\pt_utils.py", line 269, in __next__
processed = self.infer(next(self.iterator), **self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\base.py", line 1269, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\pipelines\mask_generation.py", line 237, in _forward
masks, iou_scores, boxes = self.image_processor.filter_masks(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\models\sam\image_processing_sam.py", line 847, in filter_masks
return self._filter_masks_pt(
^^^^^^^^^^^^^^^^^^^^^^
File "transformers\models\sam\image_processing_sam.py", line 945, in _filter_masks_pt
masks = _mask_to_rle_pytorch(masks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "transformers\models\sam\image_processing_sam.py", line 1386, in _mask_to_rle_pytorch
counts += [cur_idxs[0].item()] + btw_idxs.tolist() + [height * width - cur_idxs[-1]]
~~~~~~~~^^^
IndexError: index 0 is out of bounds for dimension 0 with size 0
```
Note: this error doesn't occur on every image, but just on some.
Code used to produce error:
```python
from PIL import Image
from transformers import SamImageProcessor, SamModel, pipeline

image = Image.open("PATH_TO_MY_IMAGE")
model = SamModel.from_pretrained("PATH_TO_MY_CHECKPOINT")
processor = SamImageProcessor.from_pretrained("facebook/sam-vit-huge")
generator = pipeline(
"mask-generation",
model=model,
device="cpu",
points_per_batch=64,
image_processor=processor
) # MaskGenerationPipeline
outputs = generator(image)
```
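Independent of transformers, the failure mode is easy to see: `_mask_to_rle_pytorch` finds change indices by diffing the flattened mask, and a constant mask (all background or all foreground) yields no change indices, so `cur_idxs[0]` indexes an empty tensor. A guarded pure-Python sketch of the same run-length scheme (illustrative, not the library code):

```python
def mask_to_rle(flat_mask):
    """RLE for a flat binary mask, counting zeros first, with an explicit
    branch for constant masks -- the case that makes the unguarded index
    into the (empty) change-index tensor fail."""
    n = len(flat_mask)
    if n == 0:
        return []
    # Positions where the value differs from its predecessor.
    change_idxs = [i for i in range(1, n) if flat_mask[i] != flat_mask[i - 1]]
    if not change_idxs:  # constant mask: no 0->1 or 1->0 transitions
        return [n] if flat_mask[0] == 0 else [0, n]
    counts = [] if flat_mask[0] == 0 else [0]  # leading 0 = "zero zeros"
    prev = 0
    for idx in change_idxs + [n]:
        counts.append(idx - prev)
        prev = idx
    return counts

print(mask_to_rle([0, 0, 1, 1, 0]))  # [2, 2, 1]
print(mask_to_rle([1, 1, 1]))        # [0, 3] -- would crash unguarded
```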
### Expected behavior
No error should be thrown and the RLE should be computed correctly. | bug | low | Critical |
2,784,336,760 | tauri | [feat] Add features to individually enable mobile support | ### Describe the problem
I've recently migrated one of our projects to Tauri v2 and noticed that many dependencies related to mobile targets were showing up in our `Cargo.lock` file (I noticed this especially with the `swift` and `objc2-` crates). Internally we are trying to review more and more of our projects' dependencies to reduce the chance of supply-chain attacks and to contribute to the security of the general ecosystem.
I know that unused dependencies won't be included in the final executable (e.g. of our strictly desktop related app), but it would help our reviewing efforts a lot if unneeded dependencies would not show up in the `Cargo.lock` file.
### Describe the solution you'd like
Following the approach used by many other crates, it would be great to see new features added (maybe called `iOS` and `Android` for mobile targets, other targets could maybe follow in the future) to configure which target platforms are enabled. That would allow us to reduce the number of dependencies showing up in the `Cargo.lock` file, reducing the clutter, communicating clearly which deps are required, and also making it easier to review code to guarantee supply chain safety.
The current behavior can be kept by adding these target features to the default set of features that is enabled when adding Tauri to a project.
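Such gating might look like the following `Cargo.toml` sketch — feature names, dependency names, and versions here are all illustrative assumptions, not Tauri's actual manifest:

```toml
[features]
# Desktop stays in the default set so existing builds are unaffected.
default = ["desktop"]
desktop = []
# Mobile targets become opt-in; their platform crates are pulled into the
# lock file only when the corresponding feature is enabled.
ios = ["dep:swift-rs", "dep:objc2"]
android = ["dep:jni"]

[dependencies]
swift-rs = { version = "1", optional = true }
objc2 = { version = "0.5", optional = true }
jni = { version = "0.21", optional = true }
```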
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,784,355,003 | godot | Environment "Glow" should be put before the "Tonemap" | ### Tested versions
Tested on a Godot 4.4 nightly build I downloaded yesterday (Jan 12 2025)
### System information
Godot v4.4.dev (5b52b4b5c) - Windows 11 (build 22631) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 Laptop GPU (NVIDIA; 32.0.15.6109) - 11th Gen Intel(R) Core(TM) i5-11400H @ 2.70GHz (12 threads)
### Issue description
I saw the AgX PR(https://github.com/godotengine/godot/pull/87260) has been merged, so I decided to download Godot for the first time and take a look. I downloaded this demo scene: https://github.com/perfoon/Abandoned-Spaceship-Godot-Demo
Here is what I see:

It looks to me like the "Glow" effect is applied _after_ image formation (the view transform / tonemap), while it should be applied _before_. A rule of thumb is that any processing that can generate open-domain (higher than 1.0) values needs to happen pre-formation (pre-tonemapping); this is why the entire Blender compositor runs before the view transform. Godot is not putting the image formation / view transform / tonemapping at the end of these processing steps, which causes this problem. I feel the need to report this. You might also want to check whether there are similar issues with other settings besides Glow.
### Steps to reproduce
1. Open this demo project: https://github.com/perfoon/Abandoned-Spaceship-Godot-Demo
2. Select the WorldEnvironment, in the inspector, change the "Tonemap" setting to AgX
3. See how a clipping issue that shouldn't be there when AgX is enabled remains
4. Disable the "Glow", see how the issue disappears.
### Minimal reproduction project (MRP)
N/A | enhancement,discussion,topic:rendering,topic:editor,topic:3d,topic:vfx | low | Minor |
2,784,367,185 | go | x/tools/gopls: out-of-bounds slice panic in bug in frob.(*reader).bytes | ```
#!stacks
"runtime.goPanicSliceAcap" && "frob.(*reader).bytes"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
```go
func (r *reader) bytes(n int) []byte {
v := r.data[:n]
r.data = r.data[n:] // <--- panic
return v
}
```
This stack `MZjt-Q` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2025-01-10.json):
- `crash/crash`
- [`runtime.gopanic:+69`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/panic.go;l=804)
- [`golang.org/x/tools/gopls/internal/cache.assert:=10`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/debug.go;l=10)
- [`golang.org/x/tools/gopls/internal/cache.(*packageHandleBuilder).evaluatePackageHandle.func1:+2`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=1074)
- [`runtime.gopanic:+50`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/panic.go;l=785)
- [`runtime.goPanicSliceAcap:+2`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/panic.go;l=141)
- `golang.org/x/tools/gopls/internal/util/frob.(*reader).bytes:=392`
- [`golang.org/x/tools/gopls/internal/util/frob.(*frob).Decode:+6`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/util/frob/frob.go;l=248)
- `golang.org/x/tools/gopls/internal/util/frob.Codec[...].Decode:=53`
- [`golang.org/x/tools/gopls/internal/cache/typerefs.decode:+2`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/typerefs/refs.go;l=805)
- `golang.org/x/tools/gopls/internal/cache/typerefs.Decode:=42`
- [`golang.org/x/tools/gopls/internal/cache.(*Snapshot).typerefs:+12`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=1274)
- [`golang.org/x/tools/gopls/internal/cache.(*packageHandleBuilder).evaluatePackageHandle:+53`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=1120)
- [`golang.org/x/tools/gopls/internal/cache.(*Snapshot).getPackageHandles.func2.1:+8`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=896)
- [`golang.org/x/sync/errgroup.(*Group).Go.func1:+3`](https://cs.opensource.google/go/x/sync/+/v0.9.0:errgroup/errgroup.go;l=78)
- `runtime.goexit:+0`
```
golang.org/x/tools/[email protected] go1.23.3 linux/amd64 vscode (1)
```
| NeedsInvestigation,gopls,Tools,gopls/telemetry-wins,BugReport | low | Critical |
2,784,402,411 | go | math/big: incorrect Float formatting for negative/auto prec | ### Go version
go version go1.23.4 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/joeyc/Library/Caches/go-build'
GOENV='/Users/joeyc/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/joeyc/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/joeyc/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.4/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.4/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/joeyc/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/joeyc/dev/go-utilpkg/go.mod'
GOWORK='/Users/joeyc/dev/go-utilpkg/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/_r/v0qs308n49952w5gddyqznbw0000gn/T/go-build2599410598=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
https://go.dev/play/p/DtKUPDsBMNg
### What did you see happen?
```
[0] 4.9406564584124654417656879e-324:
float64 JSON: 5e-324
strconv.FormatFloat g -1 64: 5e-324
big.Float(53) g -1: 4.940656458412465e-324
[1] 9.8813129168249308835313759e-324:
float64 JSON: 1e-323
strconv.FormatFloat g -1 64: 1e-323
big.Float(53) g -1: 9.88131291682493e-324
[2] 4.3749999999999916800000000e+17:
float64 JSON: 437499999999999170
strconv.FormatFloat g -1 64: 4.3749999999999917e+17
big.Float(53) g -1: 4.3749999999999916e+17
[3] 4.9999999999999916800000000e+17:
float64 JSON: 499999999999999170
strconv.FormatFloat g -1 64: 4.9999999999999917e+17
big.Float(53) g -1: 4.9999999999999916e+17
[4] 4.7619047619047596800000000e+17:
float64 JSON: 476190476190475970
strconv.FormatFloat g -1 64: 4.7619047619047597e+17
big.Float(53) g -1: 4.7619047619047596e+17
[5] 3.7499999999999916800000000e+17:
float64 JSON: 374999999999999170
strconv.FormatFloat g -1 64: 3.7499999999999917e+17
big.Float(53) g -1: 3.7499999999999916e+17
[6] 1.9047619047619038720000000e+18:
float64 JSON: 1904761904761903900
strconv.FormatFloat g -1 64: 1.9047619047619039e+18
big.Float(53) g -1: 1.9047619047619038e+18
[7] 2.2250738585071811263987532e-308:
float64 JSON: 2.225073858507181e-308
strconv.FormatFloat g -1 64: 2.225073858507181e-308
big.Float(53) g -1: 2.2250738585071811e-308
Bonus, float32 example:
strconv.FormatFloat g -1 32: 1.175494e-38
big.Float(24) g -1: 1.1754939e-38
```
### What did you expect to see?
I expected big.Float's string formatter to do what it says on the tin: "A negative precision selects the smallest number of decimal digits necessary to identify the value x uniquely using x.Prec() mantissa bits".
I also expected the formatter to correctly apply round-to-nearest, ties-to-even rounding, which is what I believe the `decimal` implementation (in `math/big`) is intended to perform.
Seems like a pretty clear bug - I tested these values with python3 and node REPLs, and they all align with `strconv.FormatFloat` and `encoding/json`.
Not super relevant, but my original intent was to use the `big.Float.Append` method to canonicalize JSON numbers (which this issue prevents). | NeedsInvestigation,BugReport | low | Critical |
2,784,472,695 | transformers | Improve Guidance for Using DDP in `examples/pytorch` | ### Feature request
The examples in `examples/pytorch/` (e.g., [semantic-segmentation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation)) would benefit from clearer guidance on how to use Distributed Data Parallel (DDP) with the Trainer-based versions.
### Motivation
I modified the training script from [run_semantic_segmentation.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/semantic-segmentation/run_semantic_segmentation.py) for my task, and it worked well on one or two GPUs. However, when scaling to four GPUs, training became significantly slower. After several days of debugging, I realized that the default example in `README.md` does not use `accelerate` or another distributed launcher, which meant the script was running with Data Parallel (DP) instead of DDP the entire time.
The default command for the Trainer version provided in the `README.md` is:
```bash
python run_semantic_segmentation.py \
--model_name_or_path nvidia/mit-b0 \
--dataset_name segments/sidewalk-semantic \
--output_dir ./segformer_outputs/ \
--remove_unused_columns False \
--do_train \
--do_eval \
--push_to_hub \
--push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps \
--max_steps 10000 \
--learning_rate 0.00006 \
--lr_scheduler_type polynomial \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--logging_strategy steps \
--logging_steps 100 \
--eval_strategy epoch \
--save_strategy epoch \
--seed 1337
```
To enable DDP, the command needs to be modified as follows:
```bash
accelerate launch run_semantic_segmentation.py \
--model_name_or_path nvidia/mit-b0 \
--dataset_name segments/sidewalk-semantic \
--output_dir ./segformer_outputs/ \
--remove_unused_columns False \
--do_train \
--do_eval \
--push_to_hub \
--push_to_hub_model_id segformer-finetuned-sidewalk-10k-steps \
--max_steps 10000 \
--learning_rate 0.00006 \
--lr_scheduler_type polynomial \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--logging_strategy steps \
--logging_steps 100 \
--eval_strategy epoch \
--save_strategy epoch \
--seed 1337
```
While this might be obvious to experienced users, it can be misleading for new users like me, as the default command seems to imply it works efficiently across any number of GPUs.
### Your contribution
To address this, we could include a note or alert in the `README.md`, highlighting that to use DDP with the Trainer, it is necessary to replace `python` with `accelerate launch`, `torchrun`, or another distributed launcher. This would greatly improve clarity for beginners and help avoid confusion. | Feature request | low | Critical |
2,784,500,397 | pytorch | torch.export treats two of the same parameters as the same node | ### 🚀 The feature, motivation and pitch
`torch.export` with example inputs that reuse the same tensor for multiple args (e.g. `(self.x, self.x)`) results in a confusing graph where the two parameters seem to be treated as the same node. As a basic example, after exporting with such inputs, when I call `ep.module()(torch.zeros(10), torch.ones(10))` and trace through the exported graph, ops where I expect the arg to be the first parameter (`torch.zeros(10)`) instead take the second parameter (`torch.ones(10)`).
Discussed with @angelayi and we think that this should be expected behavior, since it is a valid use case that the two parameters passed in are references to the same object, in which case it would make sense for them to share a node in the graph. However, my case was that both are possible - they could refer to the same object in some scenarios and to different objects in others. To capture this dual use case we would need to pass in distinct tensors as example inputs instead. We think it would be useful, though, to add a warning log about this behavior.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Minor |
2,784,501,190 | angular | JavaScript heap out of memory after using signal inputs | ### Command
build
### Is this a regression?
- [ ] Yes, this behavior used to work in the previous version
### The previous version in which this bug was not present was
_No response_
### Description
Hi,
I have a lib with about 5000 components; all are similar SVG components like this:
```
@Component({
selector: 'svg[si-alarm-icon]',
standalone: true,
imports: [],
template: `
<svg:path stroke="none" d="M0 0h24v24H0z" fill="none" />
<svg:path
d="M16 6.072a8 8 0 1 1 -11.995 7.213l-.005 -.285l.005 -.285a8 8 0 0 1 11.995 -6.643zm-4 2.928a1 1 0 0 0 -1 1v3l.007 .117a1 1 0 0 0 .993 .883h2l.117 -.007a1 1 0 0 0 .883 -.993l-.007 -.117a1 1 0 0 0 -.993 -.883h-1v-2l-.007 -.117a1 1 0 0 0 -.993 -.883z"
/>
<svg:path
d="M6.412 3.191a1 1 0 0 1 1.273 1.539l-.097 .08l-2.75 2a1 1 0 0 1 -1.273 -1.54l.097 -.08l2.75 -2z"
/>
<svg:path
d="M16.191 3.412a1 1 0 0 1 1.291 -.288l.106 .067l2.75 2a1 1 0 0 1 -1.07 1.685l-.106 -.067l-2.75 -2a1 1 0 0 1 -.22 -1.397z"
/>
`,
styles: ``,
encapsulation: ViewEncapsulation.None,
changeDetection: ChangeDetectionStrategy.OnPush,
})
export class SiAlarmIcon implements OnInit {
private readonly elementRef = inject(ElementRef);
private readonly render = inject(Renderer2);
ngOnInit(): void {
const svg = this.elementRef.nativeElement;
this.render.setAttribute(svg, 'xmlns', 'http://www.w3.org/2000/svg');
this.render.setAttribute(svg, 'width', '24');
this.render.setAttribute(svg, 'height', '24');
this.render.setAttribute(svg, 'viewBox', '0 0 24 24');
this.render.setAttribute(svg, 'fill', 'currentColor');
}
}
```
The build for this lib is working fine; you can check this GitHub action: https://github.com/khalilou88/semantic-icons/actions/runs/12750337283/job/35534834902
I decided to use signal inputs to allow users to change inputs and I changed the implementation to be like this:
```
@Component({
selector: 'svg[si-alarm-icon]',
standalone: true,
imports: [],
template: `
<svg:path stroke="none" d="M0 0h24v24H0z" fill="none" />
<svg:path
d="M16 6.072a8 8 0 1 1 -11.995 7.213l-.005 -.285l.005 -.285a8 8 0 0 1 11.995 -6.643zm-4 2.928a1 1 0 0 0 -1 1v3l.007 .117a1 1 0 0 0 .993 .883h2l.117 -.007a1 1 0 0 0 .883 -.993l-.007 -.117a1 1 0 0 0 -.993 -.883h-1v-2l-.007 -.117a1 1 0 0 0 -.993 -.883z"
/>
<svg:path
d="M6.412 3.191a1 1 0 0 1 1.273 1.539l-.097 .08l-2.75 2a1 1 0 0 1 -1.273 -1.54l.097 -.08l2.75 -2z"
/>
<svg:path
d="M16.191 3.412a1 1 0 0 1 1.291 -.288l.106 .067l2.75 2a1 1 0 0 1 -1.07 1.685l-.106 -.067l-2.75 -2a1 1 0 0 1 -.22 -1.397z"
/>
`,
host: {
'[attr.xmlns]': 'xmlns()',
'[attr.width]': 'width()',
'[attr.height]': 'height()',
'[attr.viewBox]': 'viewBox()',
'[attr.fill]': 'fill()',
},
styles: ``,
encapsulation: ViewEncapsulation.None,
changeDetection: ChangeDetectionStrategy.OnPush,
})
export class SiAlarmIcon {
readonly xmlns = input<string>('http://www.w3.org/2000/svg');
readonly width = input<string | number>('24');
readonly height = input<string | number>('24');
readonly viewBox = input<string>('0 0 24 24');
readonly fill = input<string>('currentColor');
}
```
Now the build doesn't work and I get a JavaScript heap out of memory error.
```
------------------------------------------------------------------------------
Building entry point '@semantic-icons/tabler-icons/outline'
------------------------------------------------------------------------------
- Compiling with Angular sources in Ivy partial compilation mode.
✔ Compiling with Angular sources in Ivy partial compilation mode.
<--- Last few GCs --->
[2192:0x6c895d0] 143703 ms: Mark-Compact (reduce) 4045.6 (4143.9) -> 4044.8 (4143.7) MB, 3194.72 / 0.00 ms (average mu = 0.324, current mu = 0.229) allocation failure; GC in old space requested
[2192:0x6c895d0] 146870 ms: Mark-Compact (reduce) 4045.6 (4144.0) -> 4045.2 (4144.2) MB, 3164.06 / 0.00 ms (average mu = 0.180, current mu = 0.001) allocation failure; GC in old space requested
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0xb8d0a3 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
2: 0xf06250 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
3: 0xf06537 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
4: 0x11180d5 [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
5: 0x112ff58 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
6: 0x1106071 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
7: 0x1107205 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
8: 0x10e4856 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
9: 0x1540831 v8::internal::Runtime_AllocateInOldGeneration(int, unsigned long*, v8::internal::Isolate*) [/opt/hostedtoolcache/node/20.18.1/x64/bin/node]
10: 0x7fd5df4d9ef6
```
More info here: https://github.com/khalilou88/semantic-icons/actions/runs/12750704442/job/35536032390
### Minimal Reproduction
The code is public here: https://github.com/khalilou88/semantic-icons
The branch that has the problem is `inputs`
The command to run is : `npx nx run tabler-icons:build:production --skip-nx-cache`
You can also run `Build tabler-icons` workflow
I understand if this is out of scope since I am using Nx; I just wanted to share it.
### Exception or Error
```text
```
### Your Environment
```text
PS D:\Workspace\Projects\semantic-icons> ng version
     _                      _                 ____ _     ___
    / \   _ __   __ _ _   _| | __ _ _ __     / ___| |   |_ _|
   / △ \ | '_ \ / _` | | | | |/ _` | '__|   | |   | |    | |
  / ___ \| | | | (_| | |_| | | (_| | |      | |___| |___ | |
 /_/   \_\_| |_|\__, |\__,_|_|\__,_|_|       \____|_____|___|
                |___/
Angular CLI: 19.0.7
Node: 20.18.0
Package Manager: npm 10.0.0
OS: win32 x64
Angular: 19.0.6
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.7
@angular-devkit/build-angular 19.0.7
@angular-devkit/core 19.0.7
@angular-devkit/schematics 19.0.7
@angular/cli 19.0.7
@schematics/angular 19.0.7
ng-packagr 19.0.1
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
PS D:\Workspace\Projects\semantic-icons>
```
### Anything else relevant?
_No response_ | area: compiler,state: needs more investigation,bug | low | Critical |
2,784,573,825 | angular | Certain `@let` values can mimic internal data structure, breaking basic runtime logic | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
Yes
### Description
Before adopting hydration and signals, my code worked. I refactored it to use signals and incremental hydration, and it broke. The error message was not really meaningful: `Cannot add property i18nNodes, object is not extensible`
I use [apollo-angular](https://www.npmjs.com/package/apollo-angular) which freezes every object in its state and results. The state cannot be modified.
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```text
Cannot add property i18nNodes, object is not extensible
The line where the error has happened is https://github.com/angular/angular/blob/4491704fbaec02db8eec5845ffbb200dbf2a0bf6/packages/core/src/hydration/i18n.ts#L660
```
### Please provide the environment you discovered this bug in (run `ng version`)
```text
NX 20.3.0
Angular 19.0.5
Angular SSR 19.0.6
```
### Anything else?
Workaround:
Unfreeze the object using some hacky js code: `const unfrozenObj = JSON.parse(JSON.stringify(obj));`
Possible solutions:
Check before trying to set the values if the object is frozen using: `Object.isExtensible(object)`.
1. Throw an error that the state is not extensible and provide some information on the current component which has the error.
2. Show a warning during development and provide some information on the current component which has the error. In production mode ignore it.
3. Ignore setting the values without any warning.
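A minimal Node sketch of the `Object.isExtensible` guard from options 1/2 (names hypothetical; this is not Angular's hydration code). Note that assigning a new property to a frozen object throws exactly the reported error in strict mode and fails silently otherwise:

```javascript
"use strict";

// Simulates library-owned state frozen by e.g. Apollo.
const state = Object.freeze({ i18n: [] });

// Guarded write: warn and fall back to an extensible shallow copy instead
// of throwing "Cannot add property ..., object is not extensible".
function setHydrationInfo(target, key, value) {
  if (!Object.isExtensible(target)) {
    console.warn(`target is not extensible; cloning before setting "${key}"`);
    return { ...target, [key]: value };
  }
  target[key] = value;
  return target;
}

const patched = setHydrationInfo(state, "i18nNodes", [1, 2]);
console.log(patched.i18nNodes.length); // 2; the frozen original is untouched
```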
I am very happy to code and make a PR on any of the solutions if the Angular team has decided. | area: core,state: has PR,P2,bug | low | Critical |
2,784,582,534 | transformers | FA2 support for Aria | Currently, Aria does not support FA2 in its modeling code: https://github.com/huggingface/transformers/blob/2fa876d2d824123b80ced9d689f75a153731769b/src/transformers/models/aria/modeling_aria.py#L656
However, the reference checkpoint for the Aria model does set `attn_implementation` to FA2: https://huggingface.co/rhymes-ai/Aria/blob/104c6548d7da08fc3a30c4232d35cc9fb3239942/config.json#L36
So the modeling code on the transformers side should be updated to support it. It currently causes errors on the CI, and I think that's due to this mismatch.
cc @aymeric-roucher @zucchini-nlp | Vision,Flash Attention,Multimodal | low | Critical |
2,784,587,979 | TypeScript | Extend the tsc --listFiles option to specify an output file for the list | ### 🔍 Search Terms
listFiles redirect
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
I would like to be able to separate the output of the listFiles option from the rest of the tsc output (e.g. compilation errors). I think the easiest way to do this would be to allow an output file to be specified.
### 📃 Motivating Example
While integrating typescript scripts into a `make` build process, I wanted to generate dependency files to facilitate incremental builds. Initially I tried running with the listFiles flag and capturing the output to create the dependency files to include in my makefile, however I quickly ran into the following issues:
1. Compilation errors and the file list are both output to stdout
2. By default capturing the output of a subshell stops it being output to the console
My initial solution was to run `tsc` twice, once with `--listFiles --noEmit` and then again without those options. This resulted in excessive build times as I needed to do this for lots of scripts.
For now I have solved the issue with the following bash script that attempts to extract the compilation output from the listFiles output:
```shell
#!/usr/bin/env bash
set -e
INPUT_DIR=$1
DEPS_FILE=$2
SQL_SCRIPT=$INPUT_DIR/sql/test.sql
rm -rf "${INPUT_DIR:?}/"lib
if INPUT_FILES=$(npx tsc --project "$INPUT_DIR"/tsconfig.json --listFiles); then
echo "$SQL_SCRIPT: $(echo "$INPUT_FILES" | sed 's/$/ \\/')" > "$DEPS_FILE"
cd "$INPUT_DIR"/lib
node index.js
else
EXIT_CODE=$?
# the following assumes that all important compilation output contains the text ': error'
echo "$INPUT_FILES" | grep ": error"
exit $EXIT_CODE
fi
```
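The same ": error" heuristic the script relies on can also be expressed in a few lines of Python instead of grep; the sample diagnostic line below is made up for illustration, and a real `--listFilesTo` flag would make this guesswork unnecessary:

```python
def split_tsc_output(output: str):
    """Split combined `tsc --listFiles` output into (diagnostics, files),
    using the same ": error" heuristic as the shell script above."""
    diagnostics, files = [], []
    for line in output.splitlines():
        if not line.strip():
            continue
        # Diagnostics look like "path(line,col): error TSxxxx: ...";
        # everything else is assumed to be a listed file path.
        (diagnostics if ": error" in line else files).append(line.strip())
    return diagnostics, files

sample = (
    "src/a.ts\n"
    "src/b.ts(3,5): error TS2322: Type mismatch\n"
    "lib/lib.es5.d.ts\n"
)
diags, files = split_tsc_output(sample)
```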
However, it would be much simpler if I could separate the listFiles output using the `tsc` command, e.g.:
```shell
#!/usr/bin/env bash
set -e
INPUT_DIR=$1
DEPS_FILE=$2
SQL_SCRIPT=$INPUT_DIR/sql/test.sql
rm -rf "${INPUT_DIR:?}/"lib
npx tsc --project "$INPUT_DIR"/tsconfig.json --listFilesTo $INPUT_DIR/ts/deps
echo "$SQL_SCRIPT: $(sed 's/$/ \\/' $INPUT_DIR/ts/deps)" > "$DEPS_FILE"
cd "$INPUT_DIR"/lib
node index.js
```
### 💻 Use Cases
1. What do you want to use this for? I want to simply integrate the listFiles output into a make-based build environment.
2. What shortcomings exist with current approaches? The listFiles and compilation error outputs are mixed together.
3. What workarounds are you using in the meantime? A complicated shell script that tries to extract the compilation output from the listFiles output. | Suggestion,Awaiting More Feedback | low | Critical |
2,784,595,576 | rust | Handling of legacy-incompatible paths by `PathBuf::push` on Windows | Should `pathbuf.push()` on Windows convert legacy paths to UNC syntax when they exceed the `MAX_PATH` length, or other limits?
Legacy MS-DOS (non-UNC) paths should not exceed `MAX_PATH` length (~260 chars) and have other syntactic limitations. Windows has partial, opt-in support for longer legacy paths, but not all APIs and not all applications support long legacy paths.
Windows has *extended-length* paths that start with a `\\?\` (UNC) prefix. They are Microsoft's preferred way of specifying long paths. They can be 32KB long, and don't have path handling quirks inherited from MS-DOS. However, the UNC paths are not properly supported by many Windows applications #42869.
This question is related to `fs::canonicalize()`, which currently returns UNC paths even when not necessary. `fs::canonicalize()` would be more compatible with Windows apps if it returned legacy paths whenever possible (when they fit under `MAX_PATH` and meet other restrictions like reserved names). However, the legacy paths have the length limit, so whether `fs::canonicalize(short_path).push(very_long_path)` works as expected depends on which syntax `fs::canonicalize` uses, or whether `push()` will correct the syntax if necessary.
Currently `push()` does not convert between legacy and UNC paths. If a `push()` causes a legacy path to exceed the limit, the path will keep using the legacy syntax, and technically won't be valid in APIs/apps that have the `MAX_PATH` limit. This is a simple, predictable implementation, but arguably makes it possible for `push()` to create an invalid path, a syntax error.
Besides the length limit, there are issues with reserved file names and trailing whitespace. `legacy_path.push("con.txt")` makes *the whole path* parse as a device name only, but `unc_path.push("con.txt")` simply appends the `con.txt` file name to the path as expected. Is this a bug in `push()`? Should `push` be a leaky abstraction that exposes quirks of how Windows parses legacy paths, or should `push()` switch to the less common UNC path syntax when it's necessary to precisely and losslessly append the given path components?
If `push()` converted paths to UNC whenever they exceed limits of legacy paths, then `push()` would be more robust, and semantically closer to `push()` on Unix that always appends components, rather than pushing characters that may cause the whole path to parse as something else, or get rejected entirely for not using a syntax appropriate for its length.
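The "convert to extended-length on overflow" policy discussed above can be sketched in a few lines. This is an illustration in Python, not the actual `std::path` implementation; the handling is simplified (drive-absolute paths only, no reserved-name or trailing-whitespace checks):

```python
MAX_PATH = 260  # classic Windows limit, including the terminating NUL

def push(base: str, component: str) -> str:
    r"""Append `component`, upgrading to \\?\ (verbatim) syntax when the
    legacy form would exceed MAX_PATH."""
    joined = base.rstrip("\\") + "\\" + component
    if len(joined) >= MAX_PATH and not joined.startswith("\\\\?\\"):
        joined = "\\\\?\\" + joined  # extended-length (UNC/verbatim) prefix
    return joined

short = push("C:\\work", "notes.txt")   # stays legacy: fits under MAX_PATH
long_ = push("C:\\work", "x" * 300)     # upgraded: would exceed MAX_PATH
```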
| O-windows,T-libs-api,A-io,C-discussion | low | Critical |
2,784,597,590 | kubernetes | AWS EC2 Scale Tests are failing due to elevated latencies for API calls | ### Which jobs are failing?
- ec2-master-scale-performance
### Which tests are failing?
ClusterLoaderV2 load tests
### Since when has it been failing?
Failing since Jan 02/2025
### Testgrid link
https://testgrid.k8s.io/sig-scalability-aws#ec2-master-scale-performance
### Reason for failure (if possible)
_No response_
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig scalability | sig/scalability,kind/failing-test,needs-triage,sig/etcd | low | Critical |
2,784,626,531 | godot | Cinematic Preview makes the editor redraw continuosly | ### Tested versions
- Reproducible in 4.3 stable and latest master v4.4.dev (d79ff848f)
### System information
Godot v4.4.dev (d79ff848f) - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated AMD Radeon RX 580 2048SP (Advanced Micro Devices, Inc.; 31.0.21921.1000) - AMD Ryzen 5 3600 6-Core Processor (12 threads)
### Issue description
When you activate the Cinematic Preview in the 3D editor, the editor will keep redrawing even without any movement/change on the screen. Disabling the Cinematic Preview stops the redrawing.
### Steps to reproduce
- Go to the 3D editor.
- Click on `Perspective > Cinematic Preview`.
- Notice how the redraw cycle will spin nonstop even without any change in the editor.
### Minimal reproduction project (MRP)
N/A | topic:editor,needs testing,performance | low | Minor |
2,784,636,922 | flutter | Flutter Web generating infinite <style> tags | ### Steps to reproduce
I'm not sure how to reproduce, but deck.blue is a Flutter Web app that handles lots of images at the same time. @kevmoo suspects it has to do with <style> tags from those images. All I know is that it's happening on Chrome, Windows.
### Expected results
No additional <style> tags should be appearing in the DOM
### Actual results
600+ <style> tags are being generated somewhat randomly, source: [Bluesky post](https://bsky.app/profile/oleschri.localfluff.space/post/3lfm3iokt6f2q)
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
body: Center(
child: ListView(
children: [
Image.network('https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:ikvaup2d6nlir7xfm5vgzvra/bafkreib533tnnypa3wpbtlbdopwkmrg67cuhjhx5i7ymjho7k6z6jjoa2y@jpeg'),
Image.network('https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:knj5sw5al3sukl6vhkpi7637/bafkreiagaodlyvzjreilwji6ixondwrg22sqgker2cbcg3r7622xtpy3gy@jpeg' ),
Image.network('https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:fqz767ikcbgk7yifapygtxeh/bafkreibcnq7kkdnmulwbk5sytfb6tsgmfi6glbrov2c3q6sivndmaablgi@jpeg'),
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.7 23H124 darwin-arm64, locale en-BR)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] VS Code (version 1.96.2)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| framework,platform-web,P1,team-web,triaged-web | medium | Critical |
2,784,639,505 | pytorch | Loading sparse tensors in a `DataLoader` raises CUDA initialization error since `2.5.0` | ### 🐛 Describe the bug
```python
import torch
from torch.utils.data import Dataset, DataLoader
def create_sparse_tensor():
tensor = torch.randn(5, 5)
sparse_tensor = tensor.to_sparse().to("cpu")
torch.save(sparse_tensor, "sparse_tensor.pth")
class OperatorDataset(Dataset):
def __init__(self):
self.files = ["sparse_tensor.pth"]
def __len__(self):
return len(self.files)
def __getitem__(self, idx):
_ = torch.load(self.files[idx], weights_only=True, map_location="cpu")
return None
if __name__ == '__main__':
print(torch.__version__)
create_sparse_tensor()
dataset = OperatorDataset()
dataloader = DataLoader(
dataset,
batch_size=None,
num_workers=1,
pin_memory=True,
)
for sparse_tensor in dataloader:
# Error raised here
pass
```
This code snippet succeeds on PyTorch 2.4.1 and fails on 2.5.0, 2.5.1 and the latest nightly:
```
2.5.1+cu124
Traceback (most recent call last):
File "/home/douglas/minimum_working_example.py", line 37, in <module>
for sparse_tensor in dataloader:
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 701, in __next__
data = self._next_data()
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1465, in _next_data
return self._process_data(data)
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1491, in _process_data
data.reraise()
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/_utils.py", line 715, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 351, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
data = self.dataset[possibly_batched_index]
File "/home/douglas/projects/gen11/research-lethe/minimum_working_example.py", line 19, in __getitem__
_ = torch.load(self.files[idx], weights_only=True, map_location="cpu")
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/serialization.py", line 1351, in load
return _load(
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/serialization.py", line 1851, in _load
torch._utils._validate_loaded_sparse_tensors()
File "/home/douglas/miniconda3/envs/torch_sparse/lib/python3.10/site-packages/torch/_utils.py", line 254, in _validate_loaded_sparse_tensors
torch._validate_sparse_coo_tensor_args(
RuntimeError: CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.8 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: Could not collect
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 24576K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov @ptrblck @msaroufim @eqy | module: sparse,module: dataloader,module: cuda,triaged,module: regression | low | Critical |
2,784,647,655 | godot | Godot silently overwriting external (git) changes upon Running project, even when choosing 'reload' | ### Tested versions
Tested versions: I've had it happen from 4.0 to 4.2.2
### System information
Godot v4.2.2.stable.mono (15073afe3) - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 Ti (NVIDIA; 32.0.15.6094) - AMD Ryzen 5 5600 6-Core Processor (12 Threads)
### Issue description
Related to #62258
Sometimes there will be a popup asking which version of the file to keep. Choosing reload here should, if I'm understanding it correctly, drop Godot's cached version and reload the file from disk, but it actually writes Godot's cached data to the file when the scene is saved, even though the scene visually looks reloaded. Maybe the displayed data is reset but the internal data isn't?
### Steps to reproduce
To recreate it, you need to download a repo on two machines, then push a resource uid or signal connection change to some scene. On the other machine, if the scene is closed pulling will work fine - but if it's open in the editor, the bug will occur when you save or run the project.
### Minimal reproduction project (MRP)
It's not possible to reproduce with one project because the issue usually occurs when two computers with the repo have different files that are open in the editor when pulling. This leads to these files constantly ping-ponging changes. Godot never acknowledges discarding the change when it happens with files that are 'always open' like theme files. | topic:editor,needs testing | low | Critical |
2,784,665,747 | vscode | Adding "Copilot" menu for users who already uninstalled Copilot is deeply offensive | Long ago, I installed VS Code, and there was a "Copilot" extension. I uninstalled it immediately. I do not want to use Copilot, I do not want Copilot on my computer, I do not want to use software that is capable of integrating with Copilot. I want that trash completely off my computer.
Yet today I opened VSCode¹ and found a new menu:

After poking around, I discovered I could disable it with a new checkbox.

But that is not good enough for me.
There are two problems here:
1. Those of us who have already uninstalled Copilot have already said no to Copilot. **We should not have to say no again.** This is the worst thing about Microsoft software these days— having to disable the same features over, and over, and over, as VPs get more desperate to juice metrics and software updates re-enable things we've already disabled. The checkbox should have been off by default for people who don't have the chatbot plugin installed, and for new users uninstalling the plugin should automatically uninstall the Copilot extension. You should respect our consent. Respect our humanity.
2. Although I've disabled the Copilot menu, I am unsatisfied because my copy of VS Code *still contains the Copilot menu code*. Previously I could remove all Copilot code completely by simply uninstalling the Copilot extension. Now, there's this little bridgehead of Copilot on my computer, inactive (until you enable it again) but still there. I want the infection gone completely.
My "**expected behavior**" is that you should remove the Copilot menu from VS Code and move the code that adds the menu into the Copilot extension. Or, to put it a different way, if you're going to put explicit Copilot support code into VS Code proper rather than quarantining it in the removable extension, then I personally **will** uninstall VS Code and switch to VS Codium, and you will have N-1 users. My policy is that I do not use software that can integrate with an AI chatbot. I will tolerate AI chatbot integration only if it is an extension that can be fully removed, as in iTerm2, Android Studio or (until the update this morning) older VS Code. I have already ditched SwiftKey over this, I'm avoiding Windows 11, and I am in the process of ditching Firefox (Firefox's "AI" integration works similarly to the new Copilot menu— there is an AI chatbot interface which is always present in the code, but it doesn't do anything until you install an AI plugin— that's not good enough. Even having the extension point present is too much: My complaint here is about moral rejection of something I consider harmful and repulsive, *not* about the actual behavior of my computer.)
This is a waste of time. I had work to do this morning and instead I'm having to clean corporate malware off my computer. If this is how you behave then VSCode is no longer a free program, it is something I am having to pay for with labor to keep the AI bugs out
- VS Code Version: 1.96.2 ("user setup")
- OS Version: Windows 10 19045.5247
Steps to Reproduce:
1. Open VSCode
---
<small>¹ Before seeing this change, I had been on a non-C# project for two weeks and using an IDE other than VSCode. So maybe the update I'm objecting to here is older than today.</small> | under-discussion,workbench-copilot | low | Critical |
2,784,666,022 | go | x/tools/gopls: render package documentation when hovering over imported package name identifiers | **Is your feature request related to a problem? Please describe.**
Package-level description isn't rendering
**Describe the solution you'd like**
I'd like the package-level description to render when hovering over an imported package, as shown in the screenshots below. ChatGPT suggests that if a package is spread across multiple files, Go uses the first (alphabetically) nonempty comment, but I'm not totally sure. I don't know if VSCode is just getting confused.
**Describe alternatives you've considered**
One option is to ignore it. Another is to use GoLand. Another is to just always have the docs up
**Additional context**
### GoLand
I know the docs are there. They're in doc.go, and they're rendering in Godoc, and they're showing in GoLand
https://pkg.go.dev/go.temporal.io/[email protected]/workflow#Go
<img width="1474" alt="Screenshot 2025-01-10 at 4 01 35 PM" src="https://github.com/user-attachments/assets/19c43723-3995-4727-a0cf-fd98d6adeef1" />
<img width="934" alt="docs-are-in-doc-dot-go" src="https://github.com/user-attachments/assets/c47f605e-a9c4-4b48-8c99-a172eb257f1f" />
<img width="1617" alt="Using-GoLand" src="https://github.com/user-attachments/assets/82ec3635-390d-4ce1-a909-92ee347d41ce" />
### VSCode
<img width="624" alt="Using-vscode" src="https://github.com/user-attachments/assets/ebf82a7c-5133-40ac-af8d-df5fd70450f9" />
| help wanted,FeatureRequest,gopls | medium | Major |
2,784,667,048 | vscode | After editing the svg file,vscode file recognition failed |
Type: <b>Bug</b>
1. Create an svg file.
2. Paste an xml string into it and save.
3. Switch to the source code file.

P.S. My source code file is .vue.

VS Code version: Code 1.96.3 (91fbdddc47bc9c09064bf7acf133d22631cbf083, 2025-01-09T18:14:09.060Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz (8 x 1800)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.55GB (1.07GB free)|
|Process Argv|--crash-reporter-id 6c552993-4c93-4a5f-910e-07d3b6cc7bf7|
|Screen Reader|no|
|VM|67%|
</details><details><summary>Extensions (48)</summary>
Extension|Author (truncated)|Version
---|---|---
Bookmarks|ale|13.5.0
project-manager|ale|12.8.0
tongyi-lingma|Ali|2.0.6
formatjsdoccomments|Bon|0.0.1
vscode-tailwindcss|bra|0.12.18
uni-app-path|cc-|0.0.1
turbo-console-log|Cha|2.10.6
postcss|css|1.0.9
vscode-office|cwe|3.4.8
vscode-eslint|dba|3.0.10
composer-php-vscode|DEV|1.54.16574
intelli-php-vscode|DEV|0.12.15062
phptools-vscode|DEV|1.54.16574
profiler-php-vscode|DEV|1.54.16574
javascript-ejs-support|Dig|1.3.3
vscode-html-css|ecm|2.0.12
EditorConfig|Edi|0.16.4
auto-close-tag|for|0.5.15
auto-complete-tag|for|0.1.0
auto-rename-tag|for|0.1.10
html-slim-scss-css-class-completion|gen|1.7.8
go|gol|0.44.0
uniapp-run|hb0|0.0.9
vscode-gutter-preview|kis|0.32.2
drevolootion|Loo|0.0.16
peripheral-viewer|mcu|1.4.6
rainbow-csv|mec|3.14.0
git-graph|mhu|1.30.0
xml-format|mik|1.1.3
go-template-transpiler-extension|mor|0.1.0
easy-less|mrc|2.0.2
create-uniapp-view|mrm|2.1.0
vscode-language-pack-zh-hans|MS-|1.96.2024121109
autopep8|ms-|2024.0.0
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-explorer|ms-|0.4.3
element-plus-doc|mxp|2.1.2
laravel-blade|one|1.36.1
material-icon-theme|PKi|5.17.0
vscode-css-navigation|puc|2.3.3
LiveServer|rit|5.7.9
vscode-blade-formatter|shu|0.24.4
uni-app-snippets-vscode|uni|0.10.5
volar|Vue|2.2.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | triage-needed,stale | low | Critical |
2,784,672,782 | pytorch | topK for sparse Vectors | ### 🚀 The feature, motivation and pitch
Hello, thanks for this great package. Is it possible to have topk with sparse vectors?
Thanks
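Until something like this lands, top-k over a sparse vector can be emulated by working on the nonzero entries directly. A minimal stdlib illustration (a dict of index to value stands in for a sparse COO vector; this is not a PyTorch API):

```python
import heapq

def sparse_topk(nonzeros: dict, k: int):
    """Return the k largest values (and their indices) of a sparse vector
    given as {index: value}. Implicit zeros are ignored here; a full
    solution would also consider them when fewer than k positive entries
    exist."""
    top = heapq.nlargest(k, nonzeros.items(), key=lambda kv: kv[1])
    indices = [i for i, _ in top]
    values = [v for _, v in top]
    return values, indices

vals, idx = sparse_topk({3: 0.5, 10: 2.0, 42: -1.0, 7: 1.5}, k=2)
```

With a real sparse COO tensor, a similar effect comes from `torch.topk(t.coalesce().values(), k)` and mapping the result back through `t.coalesce().indices()`, though that likewise ignores the implicit zeros.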
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,triaged | low | Minor |
2,784,686,905 | langchain | with structured output gives error when using openai model but not anthropic or others. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Literal

from pydantic import BaseModel, Field


def QuestionRouter(state):
    class RouteQuery(BaseModel):
        """Route a user query to the most relevant datasource."""

        route: Literal["search", "ordinary"] = Field(
            ...,
            description="Given a user question choose to route it to a tool or a ordinary question.",
        )

    print("\n Inside Question Router")
    structured_llm_router = llm.with_structured_output(RouteQuery)
```
### Error Message and Stack Trace (if applicable)
Failed to use model_dump to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.model_dump() missing 1 required positional argument: 'self'")
Failed to use dict to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.dict() missing 1 required positional argument: 'self'")
Hello! How can I assist you with insurance broking services
Failed to use model_dump to serialize <class 'pydantic._internal._model_construction.ModelMetaclass'> to JSON: TypeError("BaseModel.model_dump() missing 1 required positional argument: 'self'")
### Description
using langchain + langgraph
Works perfectly with anthropic, groq etc
Issue is only visible with openai models
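The "missing 1 required positional argument: 'self'" part of the trace suggests something downstream is handing the serializer the `RouteQuery` class itself rather than an instance. A stdlib-only illustration of that failure mode (a plain class stands in for the pydantic model; this demonstrates the error message, not langchain's internals):

```python
class RouteQuery:  # plain stand-in for the pydantic BaseModel subclass
    def model_dump(self):
        return {"route": "search"}

# Serializing an *instance* works fine:
assert RouteQuery().model_dump() == {"route": "search"}

# Handing the serializer the *class* reproduces the reported message:
try:
    RouteQuery.model_dump()  # unbound call -> no `self` supplied
    message = "no error"
except TypeError as exc:
    message = str(exc)
```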
### System Info
python 3.12.6 | investigate,Ɑ: core | low | Critical |
2,784,691,501 | PowerToys | Quick Accent not properly working when using on-screen keyboard | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Quick Accent
### Steps to reproduce
1. Enable Quick Accent.
2. Launch the on-screen keyboard.
3. Open Word (or another text input application).
4. Use your mouse to click on a character on the on-screen keyboard, e.g. "a", and simultaneously click on the spacebar or one of the arrow keys to launch the Quick Accent tool.
5. The Quick Accent tool is launched correctly, but the character is inserted repeatedly for as long as one interacts with the Quick Accent tool.
### ✔️ Expected Behavior
Same as with the physical keyboard, I expect the character to be inserted a single time, not in multitudes.
One might wonder when or how someone would trigger this when using the on-screen keyboard, given that the spacebar or arrow keys cannot be pressed at the same time as another key on the on-screen keyboard. However, the adaptive accessories allow someone to program the arrow keys onto a physical button. In fact, I am helping a user who would greatly benefit from this bug being fixed.
### ❌ Actual Behavior
The Quick Accent menu can be triggered, but the character is inserted in multitudes for as long as the Quick Accent menu is active.
In the video I select the arrow keys on my physical keyboard to mimic triggering the Accent Tool with the button using the adaptive hub. I have confirmed actually doing this with the hub and external button gives the same result as reproducing it using the arrow keys on the keyboard.
https://github.com/user-attachments/assets/d4866bcc-d5b5-4228-80ba-4e45e62c7e97
### Other Software
_No response_ | Issue-Bug,Priority-1,Area-Accessibility,Product-Quick Accent | low | Critical |
2,784,695,373 | pytorch | DISABLED test_tcp (__main__.WorkerServerTest) | Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2Felastic%2Ftest_control_plane.py%3A%3AWorkerServerTest%3A%3Atest_tcp%22%5D)).
There is some setup issue with the ROCm CI self-hosted runners that blocks this port. Need to investigate further, but disable for now to improve the CI signal.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | oncall: distributed,module: rocm,triaged,skipped | low | Critical |
2,784,701,156 | pytorch | Support new CUDA conda package layout natively in cpp_extension.CUDAExtension | ### 🚀 The feature, motivation and pitch
[`torch.utils.cpp_extension.CUDAExtension`](https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension) is designed to simplify compiling extension modules that require CUDA. To facilitate CUDA usage, it adds library/include/etc paths and passes them along to setuptools. These paths are currently based on the standard layout for CUDA packages provided via standard package managers (e.g. for Linux distros). However, as of CUDA 12 this is not the layout when CUDA is installed via conda packages. Recent updates to the CUDA infrastructure on conda-forge have added support for compiling CUDA code using compilers installed from CUDA (which was previously not possible). Since conda environments need to support cross-compilation, the packages are installed into a splayed layout where all files are placed into a `${PREFIX}/targets` directory and only a subset of them are symlinked directly into normal directories. In particular, shared libraries are symlinked into `${PREFIX}/lib`, but the includes are not linked into `${PREFIX}/include` because instead the nvcc compiler in conda is configured (via nvcc.profile and environment variables) to know where to search for includes. As mentioned above, supporting cross-compilation in conda environments was a key point in these decisions (some discussion started in https://github.com/conda-forge/cuda-nvcc-feedstock/issues/12, happy to point to more threads if needed).
It would be ideal for PyTorch to also support compilation in these environments. To do so, the extension would need to also start searching these additional directories.
### Alternatives
At the moment this issue may be worked around by setting [`CUDA_INC_PATH`](https://github.com/pytorch/pytorch/blob/main/torch/utils/cpp_extension.py#L1240), so this issue primarily documents a nice-to-have feature and provides something to point to in case future users encounter confusion around building extensions with PyTorch inside modern conda environments.
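A minimal sketch of that workaround (the conda-forge target path `targets/x86_64-linux/include` below is an assumption about a typical layout, not something verified here):

```python
import os

# Tell cpp_extension where the splayed conda CUDA headers live, before building.
# The target triple directory is an illustrative assumption.
prefix = os.environ.get("CONDA_PREFIX", "/opt/conda")
os.environ["CUDA_INC_PATH"] = os.path.join(
    prefix, "targets", "x86_64-linux", "include"
)

# ... then build as usual with CUDAExtension / BuildExtension.
print(os.environ["CUDA_INC_PATH"])
```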
### Additional context
_No response_
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim @eqy | module: cpp-extensions,module: cuda,triaged,enhancement | low | Major |
2,784,703,472 | ollama | Running the same model on all GPUs | Is it possible to run the same model on multiple GPUs?
I have a server with 5 GPUs, and I want to run the same model on each GPU to provide more concurrency for users.
I found a solution by running multiple instances of "ollama serve" on different ports and using "haproxy" as a load balancer to distribute requests across the instances.
If this feature is not implemented, could you add an option to run the model on all or specific GPUs?
For example:
ollama run --gpus=all mistral
or
ollama run --gpus=0,1,2 mistral
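For reference, the multi-instance workaround mentioned above can be sketched as follows. The ports, instance count, and server names are illustrative assumptions, not part of any existing Ollama option:

```
# Start one Ollama instance per GPU (OLLAMA_HOST picks the bind address):
#   CUDA_VISIBLE_DEVICES=0 OLLAMA_HOST=127.0.0.1:11435 ollama serve
#   CUDA_VISIBLE_DEVICES=1 OLLAMA_HOST=127.0.0.1:11436 ollama serve

# haproxy.cfg fragment spreading requests across the instances:
frontend ollama_front
    bind *:11434
    default_backend ollama_back

backend ollama_back
    balance leastconn
    server gpu0 127.0.0.1:11435 check
    server gpu1 127.0.0.1:11436 check
```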
| feature request,gpu | low | Minor |
2,784,778,320 | electron | NSWindowShouldDragOnGesture on macOS is not working in newer versions | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.1
### What operating system(s) are you using?
macOS
### Operating System Version
Sonoma 15.2
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
31
### Expected Behavior
With the setting `NSWindowShouldDragOnGesture` enabled the user is able to drag any window at any cursor position via `CTRL+ALT+Mouse1`.
### how to reproduce:
1. `defaults write -g NSWindowShouldDragOnGesture -bool true`
2. restart
3. See that you can drag any window with CTRL+ALT+Mouse1 at any cursor position
4. See that this doesn't work with some newer Electron apps
### Actual Behavior
It does not work and just triggers the context menu (because of `CTRL+Mouse1`)
### Testcase Gist URL
_No response_
### Additional Information
Examples where this can be observed:
- Anytype 0.44.2-alpha
- Signal 7.37.0
- VS Code 1.96.4 | platform/macOS,bug :beetle:,component/BrowserWindow,33-x-y | low | Critical |
2,784,812,440 | PowerToys | Request to add pack in Bengali Language | ### Description of the new feature / enhancement
Please add a feature where users can extract text in the Bengali language as well.
### Scenario when this would be used?
Bengali font is not supported
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,784,851,338 | go | cmd/compile: "bad ptr to array in slice" in SSA slice handling | ```
#!stacks
"cmd/compile/internal/ssagen.(*state).slice:+14"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
This stack `XSjlkw` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2025-01-12.json):
- `compile/bug`
- [`cmd/compile/internal/base.FatalfAt:+3`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/base/print.go;l=215)
- `cmd/compile/internal/base.Fatalf:=195`
- [`cmd/compile/internal/ssagen.(*ssafn).Fatalf:+3`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/ssagen/ssa.go;l=8288)
- [`cmd/compile/internal/ssagen.(*state).Fatalf:+1`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/ssagen/ssa.go;l=956)
- [`cmd/compile/internal/ssagen.(*state).slice:+14`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/ssagen/ssa.go;l=6151)
- [`cmd/compile/internal/ssagen.(*state).exprCheckPtr:+619`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/ssagen/ssa.go;l=3398)
- `cmd/compile/internal/ssagen.(*state).expr:=2776`
- [`cmd/compile/internal/ssagen.(*state).stmt:+252`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/ssagen/ssa.go;l=1699)
- `cmd/compile/internal/ssagen.(*state).stmtList:=1442`
- [`cmd/compile/internal/ssagen.buildssa:+277`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/ssagen/ssa.go;l=571)
- [`cmd/compile/internal/ssagen.Compile:+1`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/ssagen/pgen.go;l=302)
- [`cmd/compile/internal/gc.compileFunctions.func5.1:+1`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/gc/compile.go;l=188)
- [`cmd/compile/internal/gc.compileFunctions.func2:+1`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/gc/compile.go;l=142)
- [`cmd/compile/internal/gc.compileFunctions.func5:+4`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/gc/compile.go;l=187)
- [`cmd/compile/internal/gc.compileFunctions.func5.1:+2`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/gc/compile.go;l=189)
- [`cmd/compile/internal/gc.compileFunctions.func2:+1`](https://cs.opensource.google/go/go/+/go1.23.2:src/cmd/compile/internal/gc/compile.go;l=142)
```
cmd/[email protected] go1.23.2 linux/amd64 (8)
```
https://cs.opensource.google/go/go/+/refs/tags/go1.23.2:src/cmd/compile/internal/ssagen/ssa.go;l=6151;drc=a130fb63091bf3103bb7baabbd2484f7e560edae
```
func (s *state) slice(v, i, j, k *ssa.Value, bounded bool) (p, l, c *ssa.Value) {
t := v.Type
var ptr, len, cap *ssa.Value
switch {
...
case t.IsPtr():
if !t.Elem().IsArray() {
s.Fatalf("bad ptr to array in slice %v\n", t)
}
```
cc @golang/compiler | NeedsInvestigation,compiler/runtime,compiler/telemetry-wins | low | Critical |
2,784,863,760 | pytorch | torch.cond + torch.non_zero does not work with torch.export.export | ### 🐛 Describe the bug
I can't export the following model after rewriting the code with `torch.cond`. I tried several different configurations, all listed below; none worked.
```python
import torch
class Model(torch.nn.Module):
def forward(
self,
input_ids,
image_features,
vocab_size,
):
if image_features.numel():
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
# positions for image tokens
condition = (input_ids < 0) & (input_ids > -int(1e9))
positions = torch.where(condition)
# has_image = len(positions[0].tolist()) > 0
input_ids = input_ids.clamp_min(0).clamp_max(vocab_size)
return (input_ids, *positions)
return (input_ids, *torch.where(torch.zeros((1, 1), dtype=torch.bool)))
inputs = [
(
(torch.arange(24) - 8).reshape((2, -1)).to(torch.int64),
torch.arange(32).reshape((2, -1)).to(torch.float32),
1025,
),
(
(torch.arange(24) - 8).reshape((2, -1)).to(torch.int64),
torch.tensor([[], []], dtype=torch.float32),
1025,
),
]
model = Model()
expected = [model(*inp) for inp in inputs]
assert len(expected) == 2
assert len(expected[0]) == len(expected[1]) == 3
# Rewriting with torch.cond.
class Model2(torch.nn.Module):
def forward(self, input_ids, image_features, vocab_size):
def then_branch(input_ids, image_features, vocab_size):
input_shape = input_ids.size()
input_ids = input_ids.view(-1, input_shape[-1])
condition = (input_ids < 0) & (input_ids > -int(1e9))
positions = torch.nonzero(condition, as_tuple=True)
input_ids = input_ids.clamp_min(0).clamp_max(vocab_size)
return (input_ids, positions[0], positions[1])
def else_branch(input_ids, image_features, vocab_size):
r = torch.where(torch.zeros((1, 1), dtype=torch.bool))
return (input_ids, r[0], r[1])
a, b, c = torch.cond(
image_features.numel() > 0,
then_branch,
else_branch,
[input_ids, image_features, vocab_size],
)
return a, b, c
# Check that it is equivalent.
model2 = Model2()
new_out = [model2(*inp) for inp in inputs]
for i in range(2):
for j in range(3):
torch.testing.assert_close(expected[i][j], new_out[i][j])
batch = torch.export.Dim("batch")
seq_length = torch.export.Dim("seq_length")
dynamic_shapes = ({0: batch}, {0: batch, 1: seq_length}, None)
# We try to export with (tensor, tensor, int)
# ep = torch.export.export(model2, inputs[0], dynamic_shapes=dynamic_shapes, strict=False)
# fails with Expect operands to be a tuple of possibly nested dict/list/tuple that only consists of tensor leaves, but got [FakeTensor(..., size=(s1, 12), dtype=torch.int64), FakeTensor(..., size=(s2, s3)), 1025].
# print(ep)
# We try to export with (tensor, tensor, int)
new_inputs = (*inputs[0][:2], torch.tensor([1025], dtype=torch.int64))
# ep = torch.export.export(model2, new_inputs, dynamic_shapes=dynamic_shapes, strict=False)
# torch._dynamo.exc.Unsupported: dynamic shape operator: aten.nonzero.default; to enable, set torch._dynamo.config.capture_dynamic_output_shape_ops = True
# torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile. Scroll up to find out what causes the graph break.
# print(ep)
torch._dynamo.config.capture_dynamic_output_shape_ops = True
ep = torch.export.export(model2, new_inputs, dynamic_shapes=dynamic_shapes, strict=False)
# torch._dynamo.exc.UncapturedHigherOrderOpError: Expected true_fn_output and false_fn_output to have same metadata but found:
# pair[1] differ in 'shape: torch.Size([u0]) vs torch.Size([u1])', where lhs is FakeTensor(..., size=(u0,), dtype=torch.int64) and rhs is FakeTensor(..., size=(u1,), dtype=torch.int64)
# pair[2] differ in 'shape: torch.Size([u0]) vs torch.Size([u1])', where lhs is FakeTensor(..., size=(u0,), dtype=torch.int64) and rhs is FakeTensor(..., size=(u1,), dtype=torch.int64)
print(ep)
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250113+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250113+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250113+cu126
[pip3] torchvision==0.22.0.dev20250113+cu126
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,784,864,268 | go | cmd/compile: bug in cmd/compile/internal/noder.(*reader).pkgDecls | ```
#!stacks
"<insert predicate here>"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
This stack `ff8DlA` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2025-01-06.json):
- `compile/bug`
- [`cmd/compile/internal/base.FatalfAt:+3`](https://cs.opensource.google/go/go/+/go1.23.4:src/cmd/compile/internal/base/print.go;l=215)
- `cmd/compile/internal/base.Fatalf:=195`
- `cmd/compile/internal/base.Assert:=242`
- `cmd/compile/internal/noder.assert:=15`
- [`cmd/compile/internal/noder.(*reader).pkgDecls:+26`](https://cs.opensource.google/go/go/+/go1.23.4:src/cmd/compile/internal/noder/reader.go;l=3334)
- [`cmd/compile/internal/noder.(*reader).pkgInit:+9`](https://cs.opensource.google/go/go/+/go1.23.4:src/cmd/compile/internal/noder/reader.go;l=3250)
- [`cmd/compile/internal/noder.unified:+14`](https://cs.opensource.google/go/go/+/go1.23.4:src/cmd/compile/internal/noder/unified.go;l=201)
- [`cmd/compile/internal/noder.LoadPackage:+50`](https://cs.opensource.google/go/go/+/go1.23.4:src/cmd/compile/internal/noder/noder.go;l=77)
- [`cmd/compile/internal/gc.Main:+140`](https://cs.opensource.google/go/go/+/go1.23.4:src/cmd/compile/internal/gc/main.go;l=200)
- [`main.main:+12`](https://cs.opensource.google/go/go/+/go1.23.4:src/cmd/compile/main.go;l=57)
- [`runtime.main:+125`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/proc.go;l=272)
- `runtime.goexit:+0`
```
cmd/[email protected] go1.23.4 darwin/arm64 (2)
```
Another [export data](https://github.com/golang/go/issues/70998#issuecomment-2562638850) case.
cc @golang/compiler @timothy-king @griesemer
| NeedsInvestigation,compiler/runtime,compiler/telemetry-wins | low | Critical |
2,784,921,548 | rust | rustdoc book: document sorting of search results | currently, the sorting of search results is a bit of a black box. we should add a subsection in the rustdoc book about how the sorting works.
based on [this comment](https://github.com/rust-lang/rust/pull/135302#issuecomment-2587250206). | A-docs,A-rustdoc-search,T-rustdoc-frontend | low | Minor |
2,784,978,752 | flutter | Recipes-with-led broken after monorepo merge | For example, in CL: https://flutter-review.googlesource.com/c/recipes/+/61660
We scheduled recipes-with-led: https://ci.chromium.org/ui/p/flutter/builders/try/recipes-with-led/b8726194767882924529/infra
Which ran a particular test run that failed to find a Dart SDK: https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8726194396066972705/+/u/flutter_config_--clear-features/stdout | team-infra,P1,monorepo | medium | Critical |
2,784,987,452 | vscode | Support setting encoding per file | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
`files.encoding` allows setting encoding user-wide or workspace-wide. Sometimes we need to set different encodings for specific files. I'd like something like:
```json
{
"files.encoding": "iso88591",
"file-specific": {
"/src/my-file.md": {
"files.encoding": "utf8"
}
}
}
```
(EDITED: adding the missing second `files.encoding` part)
Actual situation: my project uses iso88591. I'm using marp and need to save only that markdown file in utf8. | feature-request,file-encoding | low | Major |
2,785,011,820 | pytorch | inductor `full_like` decompositions give incorrect strides | min repro:
```
import torch
def f(x):
return torch.full_like(x, 3)
x = torch.randn(4, 5, 6).transpose(1, -1)
out = f(x)
out_compiled = torch.compile(f, backend="aot_eager_decomp_partition")(x)
print(out.stride())
print(out_compiled.stride())
# prints
# (30, 1, 6)
# (30, 5, 1)
```
This seems like the root cause of an NJT compile crash that @jbschlosser was running into (see his [repro](https://www.internalfb.com/intern/paste/P1710266970), [njt_patch](https://www.internalfb.com/phabricator/paste/view/P1710266748) and [error](https://www.internalfb.com/phabricator/paste/view/P1710267237))
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | high priority,triaged,module: correctness (silent),oncall: pt2,module: inductor | low | Critical |
2,785,072,714 | pytorch | [Pipelining] Update all schedules to use _PipelineScheduleRuntime | We have a new runtime for pipeline schedules that the existing schedules should be transitioned to.
Things we need to do:
- Update the `_step_microbatches` for each Schedule class to call into the `_PipelineScheduleRuntime._step_microbatches()`
- Update the `Schedule1F1B` and `ScheduleGpipe` to generate the pipeline_order (IR).
- Handle the differences between `PipelineScheduleSingle` vs `PipelineScheduleMulti`
- Update `test_schedule_multiproc.py` and `test_schedule.py` to work as expected | triaged,better-engineering,module: pipelining | low | Minor |
2,785,182,062 | flutter | [a11y][mac] switch button should use NSAccessibilitySwitchSubrole | ### Steps to reproduce
Using https://api.flutter.dev/flutter/material/Switch-class.html
### Expected results
macos currently uses AXSwitch
https://github.com/flutter/engine/blob/66832de608c9f61e4db04589d52b2b899bca38eb/third_party/accessibility/ax/platform/ax_platform_node_mac.mm#L265
but it should use https://developer.apple.com/documentation/appkit/nsaccessibility-swift.struct/subrole/switch?language=objc
| engine,a: accessibility,P3,team-macos,triaged-macos | low | Minor |
2,785,220,825 | flutter | [a11y]SearchBar missing searchbox role | When using this widget https://api.flutter.dev/flutter/material/SearchBar-class.html, the semantics role should have the following
| OS | role |
|--------|--------|
| web | searchbox |
| ios | [UIAccessibilityTraitSearchField](https://developer.apple.com/documentation/uikit/uiaccessibilitytraits/searchfield?language=objc) |
| macos | NSAccessibilitySearchFieldSubrole |
| windows | ROLE_SYSTEM_TEXT |
| android | SearchView |
All platforms except Windows are missing the role. | P1,team-accessibility,triaged-accessibility | medium | Minor |
2,785,237,599 | flutter | [a11y] navigation rail is missing role | When using this widget https://api.flutter.dev/flutter/material/NavigationRail-class.html, the semantics role should have the following
| OS | role |
|--------|--------|
| web | navigation |
| ios | - |
| macos | - |
| windows | - |
| android | NavigationRailView |
Both web and Android are missing the role. | P2,team-accessibility,triaged-accessibility | low | Minor |
2,785,305,322 | godot | "Autorename Animation Tracks" editor setting breaks unique node path use | ### Tested versions
4.3.stable
### System information
Windows 10 - Godot v4.3.stable.official [77dcf97d8] - Forward+ - AMD Ryzen 9 5900x - NVIDIA GeFroce RTX 3090
### Issue description
The "Autorename Animation Tracks" editor setting triggers on animation tracks that use unique node paths; this breaks the ability to use unique node names to reuse animations across multiple scenes that do not have an identical tree.
### Steps to reproduce
1. create a scene
2. create any Node within the scene and set it as unique (Access as Unique Name option)
3. create an AnimationPlayer within the same scene and an animation track for it
4. create a track for any property on the unique node
5. edit the track path to use the unique node path format ( using the % symbol )
6. re-parent the Unique Node to any other node in the scene
7. observe the created animation track has been changed to a relative node path ignoring the uniqueness
alternatively starting from previous step 5:
6b. instantiate the created scene into another scene
7b. re-parent the instantiated scene to another node
8b. observe the created animation track has been changed to a relative node path ignoring the uniqueness
steps 1-5 example scene:

step 6-7, the unique node path in the track is changed to a relative one

In the included MRP:
prefab_scene.tscn contains the scene crated in steps 1-5
scene.tscn contains the scene for steps 6b-8b.
### Minimal reproduction project (MRP)
[uniquetracks.zip](https://github.com/user-attachments/files/18401614/uniquetracks.zip) | bug,topic:editor,topic:animation | low | Minor |
2,785,310,766 | pytorch | int_mm seems broken due to Triton upgrade | ### 🐛 Describe the bug
```python
import torch
from torch._higher_order_ops.out_dtype import out_dtype
def quantized_matmul(x_vals_int8, x_scales, w_vals_int8):
return out_dtype(torch.ops.aten.mm.default, torch.int32, x_vals_int8, w_vals_int8) * x_scales
x_vals_int8 = torch.randn(65536, 144).to(dtype=torch.int8).cuda()
x_scales = torch.randn(65536, 1).to(dtype=torch.float32).cuda()
w_vals_int8 = torch.randn(432, 144).to(dtype=torch.int8).cuda().t()
qcm = torch.compile(quantized_matmul, mode='max-autotune-no-cudagraphs')
qcm(x_vals_int8, x_scales, w_vals_int8)
```
produces
```
python: /root/.triton/llvm/llvm-86b69c31-almalinux-x64/include/llvm/Support/Casting.h:566: decltype(auto) llvm::cast(const From &) [To = mlir::FloatAttr, From = mlir::Attribute]: Assertion `isa<To>(Val) && "cast<Ty>() argument of incompatible type!"' failed.
Aborted (core dumped)
```
This works on `nightly20241126py312` with `pytorch-triton 3.1.0+cf34004b8a`. Can do more fine-grained bisection if needed.
### Versions
```
ersions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250113+cu124
[pip3] torchaudio==2.6.0.dev20250113+cu124
[pip3] torchvision==0.22.0.dev20250113+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250113+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250113+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250113+cu124 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10 | high priority,triaged,oncall: pt2,module: inductor,upstream triton | low | Critical |
2,785,348,301 | flutter | [RFW] Loops are not working in widget builder scopes | ### Steps to reproduce
1. Create a local widget and provide a builderArg containing an array of texts (could be a map as well).
```
runtime.update(localLibraryName, LocalWidgetLibrary(<String, LocalWidgetBuilder> {
'Builder': (context, source) {
final args = {
'values': ['Value1', 'Value2', 'Value3'],
};
return source.builder(['builder'], args);
},
}));
```
2. Try to loop through the strings inside "values" in a ListView.
```
widget root = Builder(
builder: (scope) => ListView(
children: [
...for value in scope.values:
Text(text: value, textDirection: 'ltr')
],
),
);
```
### Expected results
A ListView having 3 Text widgets with values "Value1", "Value2", "Value3" respectively should be rendered.
### Actual results
Nothing is rendered properly. Basically, the loop is ignored.
But, we can see the content when we use the array of values in a Text widget.
```
widget root = Builder(
builder: (scope) => Text(text: scope.values, textDirection: 'ltr'),
);
```
The output of above is "Value1Value2Value3".
Is there anything I am missing to make loops work in widget builders, or is this a legitimate issue?
<details><summary>Here's a full example</summary>
<p>
```
import 'package:flutter/material.dart';
import 'package:rfw/formats.dart' show parseLibraryFile;
import 'package:rfw/rfw.dart';
void main() {
runApp(const Example());
}
class Example extends StatefulWidget {
const Example({super.key});
@override
State<Example> createState() => _ExampleState();
}
class _ExampleState extends State<Example> {
final Runtime _runtime = Runtime();
final DynamicContent _data = DynamicContent();
static const coreLibraryName = LibraryName(['core', 'widgets']);
static const localLibraryName = LibraryName(['local', 'widgets']);
static const remoteLibraryName = LibraryName(['remote']);
static final RemoteWidgetLibrary _remoteWidgets = parseLibraryFile('''
import core.widgets;
import local.widgets;
widget root = Builder(
builder: (scope) => ListView(
children: [
...for value in scope.values:
Text(text: value, textDirection: 'ltr')
],
),
);
''');
static final WidgetLibrary _localWidgets =
LocalWidgetLibrary(<String, LocalWidgetBuilder>{
'Builder': (context, source) {
final args = {
'values': ['Value1', 'Value2', 'Value3'],
};
return source.builder(['builder'], args);
},
});
@override
void initState() {
super.initState();
// Core widget library
_runtime.update(coreLibraryName, createCoreWidgets());
// Local widget library
_runtime.update(localLibraryName, _localWidgets);
// Remote widget library
_runtime.update(remoteLibraryName, _remoteWidgets);
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(toolbarHeight: 0),
body: RemoteWidget(
runtime: _runtime,
data: _data,
widget: const FullyQualifiedWidgetName(
remoteLibraryName,
'root',
),
),
),
);
}
}
```
</p>
</details>
| package,team-ecosystem,has reproducible steps,P2,p: rfw,triaged-ecosystem,found in release: 3.27,found in release: 3.28 | low | Minor |
2,785,380,739 | rust | rustdoc: Consider eliminating edition sensitivity around RPIT capturing | Uplifted from https://github.com/rust-lang/rust/issues/127228#issuecomment-2201462571 and [#t-lang/blog post about precise capture (comment)](https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/blog.20post.20about.20precise.20capture/near/477498874).
---
At the time of writing, rustdoc renders RPITs (return-position impl-Trait) basically identically to how they appear in the source code, the most important aspect of which is impl-Trait captures (which boil down to the presence, absence and contents of `use<…>` bounds for all practical purposes). Since RPIT (but not RPITIT) capturing is edition sensitive, it follows that rustdoc's RPIT rendering is edition sensitive as well. This leads us to the point of discussion: Should we (somehow) eliminate this edition sensitivity in rustdoc?
Consider the following signature as it might be shown by rustdoc: `pub fn f(x: &()) -> impl Sized`. Unless you know the edition of the containing crate (which could be regarded as an implementation detail?), you won't know whether the RPIT `impl Sized` captures the anonymous lifetime of the parameter (i.e., `impl Sized + use<'_>`) or not (i.e., `impl Sized + use<>`). I believe this to be less than ideal.
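As a minimal self-contained illustration of the pre-2024 rules in question (my sketch, not taken from rustdoc output): on editions up to 2021, an RPIT does not implicitly capture the elided lifetime, so tying the return type to the borrow requires writing the capture explicitly, whereas on edition 2024 the same signature without `+ '_` would capture it implicitly:

```rust
use std::fmt::Display;

// On editions <= 2021 the elided lifetime of `x` is only captured because
// of the explicit `+ '_`; without it, returning `x` would not compile there.
fn f(x: &u32) -> impl Display + '_ {
    x
}

fn main() {
    let v = 5u32;
    println!("{}", f(&v));
}
```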
Edition insensitivity would solve this:
**(A)** The solution I initially thought of would comprise rendering synthetic `use<…>` bounds on `use<…>`-less[^1] RPITs *whenever* we have *any*[^2] generic parameters in *all* editions! Note: If we have *anonymous* lifetime parameters, we can't always easily synthesize a `use<…>` bound[^3].
However, this leads to more verbose output and fixates too much on pre-2024 edition behavior. Assuming the 2021 edition and before lose relevance by the minute, this solution is stuck in the past. Moreover, <2024 RPIT capturing behavior is the odd one out among RPITIT, TAIT, ATPIT (and ≥2024 RPIT).
**(B)** I guess the more realistic and forward-thinking solution would be to render ≥2024 RPITs as-is and to *always* render a synthetic `use<…>` bound on `use<…>`-less[^1] <2024 RPITs if there are *any*[^2] generic parameters in scope. We don't face the "anon lifetime" issues that solution (A) has because the synthetic `use<…>` bounds should *never* (need to) contain any lifetime parameters! In fact, if we didn't care about non-lifetime generic parameters, we could get away with just using `+ use<>` for ≤2021 edition crates.
---
I'm pretty sure this whole discussion is also relevant to rustdoc JSON (does its output even contain the edition?).
[^1]: Natural `use<…>` bounds are already edition agnostic/insensitive.
[^2]: I'm aware that opaque types are currently required to capture all in-scope type and const parameters (which is a *temporary* restriction I assume), so we could relax the rule to only consider lifetime parameters.
[^3]: Consider this Rust 2024(!) snippet: `pub fn f(x: &(), y: &()) -> impl Sized`. The opaque type `impl Sized` captures *both* anonymous/elided lifetime parameters. We can't denote this, `use<'_>` is not *really* an option since it's semantically ill-formed. I guess we could still use it as a sort of pseudo-Rust (rustdoc already employs pseudo-Rust syntax in certain cases). Otherwise, we could render it as `pub fn f<'a, 'b>(x: &'a (), y: &'b ()) -> impl Sized + use<'a, 'b>` (ensuring that `'a` and `'b` occur *free* in the signature) or pseudo-Rust `pub fn f(x: &'0 (), y: &'1 ()) -> impl Sized + use<'0, '1>`. | T-rustdoc,A-impl-trait,C-discussion | low | Minor |
2,785,412,516 | vscode | Support something like the language status UI for workspace status | The LanguageStatusItem API provides a nice, consolidated way to show language-specific information based on the currently active file
However some status info is relevant to the entire workspace instead of just the active file. I'd like to have an equivalent of `LanguageStatusItem` for these workspace-wide items
Currently the workarounds are:
- Use a `LanguageStatusItem` that always shows. This means the language status UI will have a mix of workspace and language info
- Use normal workspace status items. These work but unfortunately can't be grouped like the `LanguageStatusItem` api provides | api-proposal | low | Major |
2,785,419,409 | tauri | [bug] Cannot run default project using Deno and Preact | ### Describe the bug
After initializing a new project using Preact with Deno as the package manager and runtime, the default project does not run due to a missing `@babel` package.
<img width="912" alt="Screenshot 2025-01-13 at 3 43 47 PM" src="https://github.com/user-attachments/assets/88656e59-7ea3-41e8-a2d4-2ad0dacd9626" />
### Reproduction
1. Initialize a new Tauri Preact project using Deno
```bash
deno run -A npm:create-tauri-app
```
```
✔ Project name · deno-tauri
✔ Identifier · com.azroberts.denotauri
✔ Choose which language to use for your frontend · TypeScript / JavaScript - (pnpm, yarn, npm, deno, bun)
✔ Choose your package manager · deno
✔ Choose your UI template · Preact - (https://preactjs.com/)
✔ Choose your UI flavor · TypeScript
```
2. Install dependencies
```bash
cd deno-tauri
deno install
```
3. Start development server
```bash
deno task tauri dev
```
### Expected behavior
Application window shows default application
### Full `tauri info` output
```text
Task tauri tauri "info"
[✔] Environment
- OS: Mac OS 15.2.0 x86_64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-apple-darwin (default)
- node: 23.6.0
- npm: 10.9.2
- deno: deno 2.1.5
[-] Packages
- tauri 🦀: 2.2.2
- tauri-build 🦀: 2.0.5
- wry 🦀: 0.48.1
- tao 🦀: 0.31.1
- tauri-cli 🦀: 2.2.3
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.2.3 (outdated, latest: 2.2.4)
[-] Plugins
- tauri-plugin-opener 🦀: 2.2.4
- @tauri-apps/plugin-opener : 2.2.2 (outdated, latest: 2.2.4)
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: upstream,status: needs triage | low | Critical |
2,785,445,326 | PowerToys | Launch PowerShell scripts inside Workspace (Or better: Power Automate integration!) | ### Description of the new feature / enhancement
The current implementation of Workspaces does not preserve application data, such as the path to my editor when I open multiple instances of VSCode. If there's no easy way to achieve this, please consider adding support for PowerShell scripting, perhaps in a field instead of loading the entire `.ps1` file. With PowerShell, at least I can trigger the path, and when I click 'Launch Workspace,' I can directly access it.
### Scenario when this would be used?
Automating the process of opening and navigating to different directories.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,785,461,929 | rust | rustc won't compile after error | I was going through the online Rust book and I was modifying the provided code examples to see the types of error messages I could receive.
I tried to compile with this code
```rs
std::io::stdin()
    .read_line(&mut guess);
```
and I received this error:
```
warning: unused `Result` that must be used
--> src\main.rs:9:5
|
9 | / std::io::stdin()
10 | | .read_line(&mut guess);
| |______________________________^
|
= note: this `Result` may be an `Err` variant, which should be handled
= note: `#[warn(unused_must_use)]` on by default
help: use `let _ = ...` to ignore the resulting value
|
9 | let _ = std::io::stdin()
| +++++++
warning: error copying object file `C:\Binyamin\projects\guessing_game\target\debug\deps\guessing_game.06ft8sozyh5gkeuqf47wn33fq
.rcgu.o` to incremental directory as `\\?\C:\Binyamin\projects\guessing_game\target\debug\incremental\guessing_game-2np01ccfpwk3
q\s-h3n858d5d7-0bhp2xt-working\06ft8sozyh5gkeuqf47wn33fq.o`: Access is denied. (os error 5)
warning: error copying object file `C:\Binyamin\projects\guessing_game\target\debug\deps\guessing_game.0bacoeif7vu57vzgx0itycv8y
.rcgu.o` to incremental directory as `\\?\C:\Binyamin\projects\guessing_game\target\debug\incremental\guessing_game-2np01ccfpwk3
q\s-h3n858d5d7-0bhp2xt-working\0bacoeif7vu57vzgx0itycv8y.o`: Access is denied. (os error 5)
warning: `guessing_game` (bin "guessing_game") generated 3 warnings
Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.03s
Running `target\debug\guessing_game.exe`
```
Based on the error hint, I changed the code to:
```rs
let _ = std::io::stdin()
.read_line(&mut guess);
```
and I got this error:
```
Compiling guessing_game v0.1.0 (C:\Binyamin\projects\guessing_game)
thread 'coordinator' panicked at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\compiler\rustc_codegen_ssa\src\back\write.rs:16
62:29:
/rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\compiler\rustc_codegen_ssa\src\back\write.rs:1662:29: worker thread panicked
stack backtrace:
0: 0x7ffe61b6ba41 - std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\..\..\backtrace\src\backtrace\
dbghelp64.rs:91
1: 0x7ffe61b6ba41 - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\..\..\backtrace\src\backtrace\
mod.rs:66
2: 0x7ffe61b6ba41 - std::sys::backtrace::_print_fmt
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\sys\backtrace.rs:66
3: 0x7ffe61b6ba41 - std::sys::backtrace::impl$0::print::impl$0::fmt
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\sys\backtrace.rs:39
4: 0x7ffe61b9dd1a - core::fmt::rt::Argument::fmt
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/core\src\fmt\rt.rs:177
5: 0x7ffe61b9dd1a - core::fmt::write
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/core\src\fmt\mod.rs:1189
6: 0x7ffe61b61de7 - std::io::Write::write_fmt<std::sys::pal::windows::stdio::Stderr>
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\io\mod.rs:1884
7: 0x7ffe61b6b885 - std::sys::backtrace::BacktraceLock::print
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\sys\backtrace.rs:42
8: 0x7ffe61b6e7a3 - std::panicking::default_hook::closure$1
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\panicking.rs:268
9: 0x7ffe61b6e582 - std::panicking::default_hook
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\panicking.rs:295
10: 0x7ffe6314ab1e - strncpy
11: 0x7ffe61b6eee2 - alloc::boxed::impl$30::call
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/alloc\src\boxed.rs:1986
12: 0x7ffe61b6eee2 - std::panicking::rust_panic_with_hook
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\panicking.rs:809
13: 0x7ffe647bbfdf - ar_archive_writer[5ad4fe434effa9d8]::object_reader::get_member_alignment
14: 0x7ffe647b6b29 - ar_archive_writer[5ad4fe434effa9d8]::object_reader::get_member_alignment
15: 0x7ffe647b516c - ar_archive_writer[5ad4fe434effa9d8]::object_reader::get_member_alignment
16: 0x7ffe64850eed - rustc_middle[87d6ac0499a2eaf7]::util::bug::bug_fmt
17: 0x7ffe648318bd - <rustc_middle[87d6ac0499a2eaf7]::ty::consts::Const>::to_valtree
18: 0x7ffe648316d6 - <rustc_middle[87d6ac0499a2eaf7]::ty::consts::Const>::to_valtree
19: 0x7ffe64850e22 - rustc_middle[87d6ac0499a2eaf7]::util::bug::bug_fmt
20: 0x7ffe61c3822a - rustc_interface[c014954558e9d384]::proc_macro_decls::proc_macro_decls_static
21: 0x7ffe5ebf33ad - llvm::DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llvm::IRMover::StructTypeKeyInf
o,llvm::detail::DenseSetPair<llvm::StructType * __ptr64> >::~DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llv
m::IRMover::StructTypeKeyIn
22: 0x7ffe5ec04d8a - llvm::DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llvm::IRMover::StructTypeKeyInf
o,llvm::detail::DenseSetPair<llvm::StructType * __ptr64> >::~DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llv
m::IRMover::StructTypeKeyIn
23: 0x7ffe61b80bad - alloc::boxed::impl$28::call_once
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/alloc\src\boxed.rs:1972
24: 0x7ffe61b80bad - alloc::boxed::impl$28::call_once
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/alloc\src\boxed.rs:1972
25: 0x7ffe61b80bad - std::sys::pal::windows::thread::impl$0::new::thread_start
at /rustc/9fc6b43126469e3858e2fe86cafb4f0fd5068869\library/std\src\sys\pal\windows\thread.rs:55
26: 0x7ffec3f081f4 - BaseThreadInitThunk
27: 0x7ffec676a251 - RtlUserThreadStart
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&templat
e=ice.md
note: rustc 1.84.0 (9fc6b4312 2025-01-07) running on x86_64-pc-windows-msvc
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
error: cached cgu 0bacoeif7vu57vzgx0itycv8y should have an object file, but doesn't
error: could not compile `guessing_game` (bin "guessing_game") due to 1 previous error
```
Not sure what's going on here. I just created a new project with cargo and moved on, but I would like to know how to fix this in the future.
| I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,785,468,232 | rust | Compilation taking unexpected long after 1.84 |
I'm building the `jf-pcs` library from https://github.com/EspressoSystems/jellyfish
On rust version 1.83
<details><summary>rust version meta</summary>
<p>
```bash
[$] rustc --version --verbose
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: aarch64-apple-darwin
release: 1.83.0
LLVM version: 19.1.1
```
</p>
</details>
It only took ~11s to build the library
```bash
[$] cargo build -p jf-pcs
Compiling proc-macro2 v1.0.86
Compiling unicode-ident v1.0.12
... [excess outputs omitted]
Compiling jf-utils v0.4.4 (https://github.com/EspressoSystems/jellyfish?tag=0.4.5#7d71dbef)
Compiling jf-pcs v0.1.0 (/Users/chengyu/espressosys/jellyfish/pcs)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 11.43s
```
Now on rust 1.84 the compilation runs almost forever:
<details><summary>rust version meta</summary>
<p>
```bash
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: aarch64-apple-darwin
release: 1.84.0
LLVM version: 19.1.5
```
</p>
</details>
```bash
[$] cargo build -p jf-pcs
Compiling proc-macro2 v1.0.86
Compiling unicode-ident v1.0.12
... [excess outputs omitted]
Compiling jf-utils v0.4.4 (https://github.com/EspressoSystems/jellyfish?tag=0.4.5#7d71dbef)
Compiling jf-pcs v0.1.0 (/Users/chengyu/espressosys/jellyfish/pcs)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 26m 43s
```
Suspect that this is due to the new trait solver.
| I-compiletime,P-high,T-compiler,regression-from-stable-to-stable,C-bug,S-has-mcve,T-types,WG-trait-system-refactor,S-has-bisection | medium | Critical |
2,785,470,765 | vscode | Focus on the Tab to the Left When Closing a Tab |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96.2
- OS Version: NIXOS 24.11
Problem:
When I open preferences, then close them again, I am usually presented with a different tab/file than I was looking at before opening preferences (or any file/tab). This is very distracting, often to the point where I forget what I was doing. I look at this new file, confused about why it doesn't match what I'm thinking about, and by the time I figure out it's the wrong file and go through the process of getting the file I was working on back on the screen I've lost much of my focus and working memory.
For further reading on how important this sort of thing can be, see: the principle of least surprise.
Proposal:
VSCode can be made less distracting, less surprising and more predictable with an extremely simple change: focus to the left when closing a tab.
I argue that this solution is better than tracking which tabs were opened recently, because it also covers the use-case where it's been a while since the user changed focus and/or they don't remember the focus order.
It is valuable to have the interface behave in very easy to predict ways, so the user can just press some keys and not be distracted by the computer reacting in unusual (even if "smart") ways.
Use Cases:
Quick Open/Close: If the user quickly opens and closes a tab, such as settings, then closes it again they should be returned to the tab they were working on without any surprises.
After clicking on a tab: In cases where the user has recently manually switched tabs, focusing to the left is just as easy to predict as focusing to the right. No harm done by this proposal.
After a long time of no focus changes: If the user has no recent memory of what tab was focused recently, the best behavior is to be easy to predict, i.e. closing to the right or left. No harm done by this proposal. Please note that this use case is made worse by having VSCode remember which tabs were open most recently and focusing the most recent one.
Closing several tabs: I want to be able to press ctrl-W several times in rapid succession, and have the tabs I expect close. This use-case is well served by focus-right (current behavior) and equally well by focus-left (my proposal), but would be terrible with focus-recent.
Conclusion:
The best user experience comes from simply focusing to the left on tab close.
Some programs implement a different focus-on-close behavior: Focusing on the most recently focused tab. I argue against this solution because it creates a worse overall experience as explained in Use Cases above. | feature-request,workbench-tabs | low | Critical |
2,785,502,861 | go | runtime: swiss map SIMD implementation for non-amd64 | https://go.dev/cl/626277 (in 1.24) added SIMD implementations for swissmap (still 8-byte) control word operations for amd64. We could do similar for other architectures, if the SIMD implementations improve performance. ARM64 is the obvious target here given its wide use. | Performance,NeedsFix,arch-arm64,compiler/runtime,Implementation | low | Major |
2,785,511,747 | vscode | [Firefox] File tree context menu shows just "Paste" and opens unreliably |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.3
- OS Version: Firefox on Mac through https://github.dev
Steps to Reproduce:
1. On Firefox, go to https://github.dev and authenticate
2. In the file tree, **Right Click** to trigger the context menu
3. ISSUE: Sometimes a menu with just "Paste" shows up
4. **Right Click** in the file tree again
5. ISSUE: Sometimes 2 menus show up (one with "Paste" and one with the actual context menu)
 | bug,file-explorer,web,firefox | low | Critical |
2,785,519,640 | ollama | Pulling models resets the download on raspberry pi 5 | ### What is the issue?
The Raspberry Pi 5 is connected over Wi-Fi, but this shouldn't be a problem since my other device pulls the models correctly on the same Wi-Fi.
Suggested fix: I currently have to constantly cancel the downloads with CTRL + C, which saves the download state and then continues where it left off, functioning as a kind of save/checkpoint. Maybe it's possible to back up the download state once in a while so that it doesn't stop downloading on a crash?
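The checkpoint idea amounts to resuming from the bytes already on disk — e.g. requesting only the remainder with an HTTP `Range: bytes=<offset>-` header and appending. A rough sketch of the resume logic (the `fetch_from` callback is a hypothetical stand-in for the network layer, not Ollama's actual code):

```python
import os

def resume_download(path, fetch_from):
    """Resume a download into `path`, restarting from whatever is
    already saved.  `fetch_from(offset)` must yield chunks starting at
    byte `offset` (e.g. an HTTP request with `Range: bytes=<offset>-`).
    """
    # Bytes already on disk form the checkpoint; start after them.
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    with open(path, "ab") as f:
        for chunk in fetch_from(offset):
            f.write(chunk)
            f.flush()  # persist progress: a crash loses at most one chunk
    return os.path.getsize(path)
```

With logic like this, a crash or `Error: context canceled` would pick up at the saved offset on the next attempt instead of restarting from zero.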
Feel free to request more info or let me know if you want me to test!
Edit: sometimes the error "Error: context canceled" shows up and I think it might be related
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.5.4 | bug | medium | Critical |
2,785,541,272 | PowerToys | Mouse Without Borders sets the same Security Key on all machines after restoring from backup | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders, General
### Steps to reproduce
Issue Summary: Mouse Without Borders sets the same Security Key on all machines after restoring from backup
Relevant Installed Components:
Microsoft OneDrive
PowerToys
Steps to replicate the issue:
1. Installed mouse without borders on 2 computers
2. generated a key on computer No.1 (Tachicoma) and backed it up using the backup process in the general section to the default backup folder on onedrive
3. on computer No.2 (Pepper) used the restore process in the general section from the same backup folder on OneDrive
4. now whenever a new key is generated and a connection is attempted, both machines' default security keys are set to the same value.
I'm attaching the screenshots from both machines to illustrate the issue. Please note that to take the screenshot on Tachicoma, I had to turn off Mouse Without Borders as there seems to be some sort of glitch that didn't allow me to use the snipping tool, but that's another matter. If I had it turned on, the same info would be displaying.
Attempted fixes that failed:
created separate backup folders
re-installed power toys on both machines
restarted everything multiple times


### ✔️ Expected Behavior
Per the documentation section for initial configuration -> [Link to Documentation](https://learn.microsoft.com/en-us/windows/powertoys/mouse-without-borders#initial-configuration)
I expect that the keys on both machines would be unique
### ❌ Actual Behavior
Issue Summary: Mouse Without Borders sets the same Security Key on all machines after restoring from backup
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,785,552,628 | vscode | Investigate possible leak in code actions | From @jrieken
1. uncomment this line: https://github.com/microsoft/vscode/blob/a069d6f1f0014fff61a650994a84b716960d7bd8/src/vs/workbench/contrib/performance/browser/performance.contribution.ts#L149
2. Reload window
**Bug**
A potential leak is reported for code actions:
```
[LEAKED DISPOSABLE] Error: CREATED via:
at GCBasedDisposableTracker.trackDisposable (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/base/common/lifecycle.js:27:23)
at trackDisposable (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/base/common/lifecycle.js:198:24)
at new DisposableStore (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/base/common/lifecycle.js:296:9)
at getCodeActions (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/codeAction/browser/codeAction.js:93:25)
at vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/codeAction/browser/codeActionModel.js:285:28
at createCancelablePromise (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/base/common/async.js:18:22)
at CodeActionOracle._signalChange (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/codeAction/browser/codeActionModel.js:187:33)
at CodeActionOracle.trigger (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/codeAction/browser/codeActionModel.js:34:14)
at CodeActionModel._update (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/codeAction/browser/codeActionModel.js:310:42)
at UniqueContainer.value (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/codeAction/browser/codeActionModel.js:149:62)
``` | freeze-slow-crash-leak,editor-code-actions | low | Critical |
2,785,564,537 | flutter | release/release_publish.py recipes must be updated to handle monorepo | The final step of the release process is to publish the release, which uses the https://flutter.googlesource.com/recipes/+/refs/heads/main/recipes/release/release_publish.py recipe. This will need to be updated to not fetch the engine repo when publishing a monorepo release. | team-release | low | Minor |
2,785,573,645 | pytorch | Tensor Stride Inconsistent (?) Behavior When One of the Dimension is 1 | Hi,
I noticed that when a Tensor has `1` in one of its dimensions, its `stride` exhibits inconsistent (?) behavior under transformations + `.contiguous()` compared to a new tensor initialized with the final shape.
Granted, since the dimension in question is `1`, we are never supposed to use an index other than `0`. That being said, this could cause some custom (Triton) kernels that rely on certain stride behavior to fail.
```python
import torch
print("--- n = 1 ---")
X = torch.randn(16, 2048, 1, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = X.transpose(dim0=1, dim1=2).contiguous()
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = torch.randn(16, 1, 2048, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
print("--- n = 2 ---")
X = torch.randn(16, 2048, 2, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = X.transpose(dim0=1, dim1=2).contiguous()
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
X = torch.randn(16, 2, 2048, 128, device="cuda")
print("shape: ", X.shape, "\t\tstride: ", X.contiguous().stride())
```
The above code would print out:
```python
--- n = 1 ---
shape: torch.Size([16, 2048, 1, 128]) stride: (262144, 128, 128, 1)
shape: torch.Size([16, 1, 2048, 128]) stride: (262144, 128, 128, 1) # <--- different
shape: torch.Size([16, 1, 2048, 128]) stride: (262144, 262144, 128, 1) # <--- different
--- n = 2 ---
shape: torch.Size([16, 2048, 2, 128]) stride: (524288, 256, 128, 1)
shape: torch.Size([16, 2, 2048, 128]) stride: (524288, 262144, 128, 1) # <--- the same
shape: torch.Size([16, 2, 2048, 128]) stride: (524288, 262144, 128, 1) # <--- the same
```
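Note that both stride tuples in the `n = 1` case describe the same bytes in memory: the stride of a size-1 dimension is only ever multiplied by index `0`, so any reported value addresses the same layout. A kernel that compares strides can treat size-1 strides as don't-care — a minimal sketch in plain Python (no torch required; `canonical_strides` and `same_layout` are hypothetical helper names):

```python
def canonical_strides(shape):
    """Row-major (C-contiguous) strides for `shape`, in elements."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return tuple(strides)

def same_layout(shape, strides_a, strides_b):
    """True if two stride tuples address identical memory for `shape`.

    Strides of size-1 dimensions are ignored: their index is always 0,
    so their stride never contributes to any element's offset."""
    return all(a == b
               for dim, a, b in zip(shape, strides_a, strides_b)
               if dim > 1)

shape = (16, 1, 2048, 128)
print(canonical_strides(shape))                 # (262144, 262144, 128, 1)
print(same_layout(shape,
                  (262144, 128, 128, 1),       # transpose + .contiguous()
                  canonical_strides(shape)))    # True: identical layout
```

Under this view, the transposed-and-`.contiguous()` tensor and the freshly allocated one differ only in a stride that is never used for addressing.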
cc @jamesr66a | triaged,module: memory format | low | Minor |
2,785,582,133 | godot | Volumetric fog exhibits dark edges in bright scenes | ### Tested versions
reproducible in: v4.4.dev [d19147e09], 4.3.stable
### System information
Godot v4.4.dev (d19147e09) - macOS Sequoia (15.2.0) - Multi-window, 2 monitors - Metal (Forward+) - integrated Apple M1 (Apple7) - Apple M1 (8 threads)
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6614) - Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 Threads)
### Issue description
Fog volumes have a pronounced halo around the edge, visible especially in bright scenes.

Looked like an alpha mul issue to me, so I applied the following workaround locally, but I have only tested that it helps in my specific cases and didn't look deeper into it.
```diff
diff --git a/servers/rendering/renderer_rd/shaders/environment/sky.glsl b/servers/rendering/renderer_rd/shaders/environment/sky.glsl
index 105a42d708..35bebfffa0 100644
--- a/servers/rendering/renderer_rd/shaders/environment/sky.glsl
+++ b/servers/rendering/renderer_rd/shaders/environment/sky.glsl
@@ -270,7 +270,10 @@ void main() {
if (sky_scene_data.volumetric_fog_enabled) {
vec4 fog = volumetric_fog_process(uv);
- frag_color.rgb = mix(frag_color.rgb, fog.rgb, fog.a * sky_scene_data.volumetric_fog_sky_affect);
+ if (fog.a > 0.0) {
+ float rcp_fog = 1.0 / fog.a;
+ frag_color.rgb = mix(frag_color.rgb, fog.rgb * rcp_fog, fog.a * sky_scene_data.volumetric_fog_sky_affect);
+ }
}
if (custom_fog.a > 0.0) {
diff --git a/servers/rendering/renderer_rd/shaders/forward_clustered/scene_forward_clustered.glsl b/servers/rendering/renderer_rd/shaders/forward_clustered/scene_forward_clustered.glsl
index 096e099391..ffc76abe5f 100644
--- a/servers/rendering/renderer_rd/shaders/forward_clustered/scene_forward_clustered.glsl
+++ b/servers/rendering/renderer_rd/shaders/forward_clustered/scene_forward_clustered.glsl
@@ -2756,8 +2756,10 @@ void fragment_shader(in SceneData scene_data) {
#endif
#ifndef FOG_DISABLED
- diffuse_buffer.rgb = mix(diffuse_buffer.rgb, fog.rgb, fog.a);
- specular_buffer.rgb = mix(specular_buffer.rgb, vec3(0.0), fog.a);
+ if (fog.a > 0.0) {
+ diffuse_buffer.rgb = mix(diffuse_buffer.rgb, fog.rgb / fog.a, fog.a);
+ specular_buffer.rgb = mix(specular_buffer.rgb, vec3(0.0), fog.a);
+ }
#endif //!FOG_DISABLED
#else //MODE_SEPARATE_SPECULAR
@@ -2773,7 +2775,9 @@ void fragment_shader(in SceneData scene_data) {
#ifndef FOG_DISABLED
// Draw "fixed" fog before volumetric fog to ensure volumetric fog can appear in front of the sky.
- frag_color.rgb = mix(frag_color.rgb, fog.rgb, fog.a);
+ if (fog.a > 0.0) {
+ frag_color.rgb = mix(frag_color.rgb, fog.rgb / fog.a, fog.a);
+ }
#endif //!FOG_DISABLED
#endif //MODE_SEPARATE_SPECULAR
```
result:

### Steps to reproduce
Run the MRP; the halo should be visible immediately in the editor and when running, and is always present with fog volumes. (Open fog.tscn if it errors on load; not sure why it does that.)
### Minimal reproduction project (MRP)
MRP without / with workaround:


[volumetric_halo.zip](https://github.com/user-attachments/files/18402653/volumetric_halo.zip)
MRP tested in 4.3 and v4.4.dev [d19147e09] | bug,topic:rendering,confirmed,topic:3d | low | Critical |
2,785,587,027 | kubernetes | `--docker-email` validation does not allow for '@' | ### What happened?
```
PS C:\Users\trisavo> kubectl create secret docker-registry --docker-username trisavoconnected --docker-password $TOKEN_PWD --docker-email [email protected] --docker-server=10.96.0.3
error: failed to create secret Secret "[email protected]" is invalid: metadata.name: Invalid value: "[email protected]": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
```
Seems the validation for email only allows for . and - characters. Is this expected?
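Note that the error message quotes the regex applied to `metadata.name` (the Secret's object name), and `@` is indeed outside its character class. A quick check against the regex copied verbatim from the message (the `valid_name` helper and sample email are illustrative, not kubectl code):

```python
import re

# RFC 1123 subdomain regex, copied verbatim from the error message.
RFC1123_SUBDOMAIN = re.compile(
    r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*"
)

def valid_name(name: str) -> bool:
    # fullmatch: the whole name must satisfy the subdomain grammar.
    return RFC1123_SUBDOMAIN.fullmatch(name) is not None

print(valid_name("example.com"))        # True
print(valid_name("[email protected]"))  # False: '@' is not allowed
```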
### What did you expect to happen?
I expected to be able to use my email for docker-email.
### How can we reproduce it (as minimally and precisely as possible)?
Use the kubectl create secret command.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.30.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.32.0
```
</details>
### Cloud provider
<details>
Azure
</details>
### OS version
<details>
```console
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
BuildNumber  Caption                            OSArchitecture  Version
22631        Microsoft Windows 11 Enterprise N  64-bit          10.0.22631
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/support,needs-sig,needs-triage | low | Critical |
2,785,626,036 | flutter | Expose search field in CupertinoSliverNavigationBar.search | ### Use case
Users might want to customize the search field used in `CupertinoSliverNavigationBar.search`. For example, users might want to customize the placeholder text, a common use case present in stock iOS apps like Weather and Apple Store.
### Proposal
After https://github.com/flutter/flutter/pull/159120 lands, expose the search field. | a: text input,c: new feature,framework,f: cupertino,c: proposal,P3,team-design,triaged-design | low | Minor |
2,785,629,643 | vscode | Git - VS Code's git commit error pop-up shows wrong hook failure |
Type: <b>Bug</b>
Description:
It seems when performing a git commit from the VS Code UI and there is a commit failure (e.g., in a git hook), the error pop-up shows only the first line of the output, which is not necessarily the error.
Steps to Reproduce:
Perform a git commit from the VS Code UI so that you get a warning followed by an error. I'd expect to see the error message in the pop-up but instead the unrelated warning message shows which can be misleading.
Steps to Reproduce (Specific):
1. `lint-staged` issuing a warning "No staged files match any configured task" (which passes gracefully)
2. then there is an error from `commitlint` about a bad commit message.
More Details:

raw commit output:
```
filepath> git commit -m "foo: this will fail"
→ No staged files match any configured task.
⧗ input: foo: this will fail
✖ type must be one of [build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test] [type-enum]
✖ found 1 problems, 0 warnings
ⓘ Get help: https://github.com/conventional-changelog/commitlint/#what-is-commitlint
husky - commit-msg hook exited with code 1 (error)
```
I'd like for the popup to read the actual error: "input: foo: this will fail, type must be one of [build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test] [type-enum]".
Instead it shows the first line of output: "No staged files match any configured task.", which is misleading since that's not the error.
This poster is having the same issue.
[stackoverflow: vs-codes-git-extension-shows-wrong-pre-commit-hook-failure](https://stackoverflow.com/questions/77507886/vs-codes-git-extension-shows-wrong-pre-commit-hook-failure)
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i7-1355U (12 x 2611)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.72GB (1.67GB free)|
|Process Argv|--crash-reporter-id 4ec418d0-d2a2-4af6-a76f-82a40b32c2a6|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (21)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-sqlite|ale|0.14.1
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.102.0
rainbow-csv|mec|3.14.0
git-graph|mhu|1.30.0
vscode-docker|ms-|1.29.3
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.394.0
remote-wsl|ms-|0.88.5
cpptools|ms-|1.22.11
vsc-octave-debugger|pau|0.5.13
code-spell-checker|str|4.0.34
octave|toa|0.0.3
pdf|tom|1.2.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,git | low | Critical |
2,785,645,487 | vscode | Circular import detector | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I really want a feature to detect circular imports. I don't see what could possibly be so difficult in doing it. And it would be really helpful for me. | *extension-candidate | low | Minor |
2,785,648,572 | kubernetes | Certificate info in expire logs | ### What happened?
When a certificate expires, the server logs it with something like:
`verifying certificate SN=xxxx, SKID=, AKID= failed: x509: certificate has expired or is not yet valid`
It's hard to track certs issued by k8s itself by SN alone. It would be extremely handy if it logged the CN too.
Also, it would be good if it logged a warning shortly before the certificate expires.
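As a workaround today, the serial number from the log can be correlated with a certificate on disk using openssl. A hedged example using a throwaway self-signed cert (the paths and CN below are illustrative):

```shell
# Generate a throwaway self-signed cert, then print the serial (what the
# log shows today) alongside the subject CN and expiry (what would help).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 1 -subj "/CN=demo-client" 2>/dev/null
openssl x509 -in /tmp/demo-cert.pem -noout -serial -subject -enddate
```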
### What did you expect to happen?
K8s would tell you what CN the cert tried to use.
### How can we reproduce it (as minimally and precisely as possible)?
Use an expired cert and try to authenticate.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
1.32.0
```
</details>
### Cloud provider
<details>
N/A
</details>
### OS version
_No response_
### Install tools
_No response_
### Container runtime (CRI) and version (if applicable)
_No response_
### Related plugins (CNI, CSI, ...) and versions (if applicable)
_No response_ | kind/bug,sig/auth,needs-triage | low | Critical |
2,785,653,155 | flutter | [a11y] drag handle missing roles | When using this widget https://api.flutter.dev/flutter/widgets/ReorderableList-class.html, the semantics role should have the following
| OS | role |
|--------|--------|
| web |- |
| ios | - |
| macos | NSAccessibilityHandleRole |
| windows | - |
| android | - |
When using this widget https://api.flutter.dev/flutter/material/BottomSheet-class.html with showDragHandle, the semantics role should have the following
| OS | role |
|--------|--------|
| web | - |
| ios | - |
| macos | - |
| windows | - |
| android | BottomSheetDragHandleView | | framework,f: material design,a: accessibility,P3,team-accessibility,triaged-accessibility | low | Minor |
2,785,668,172 | flutter | [a11y] Spin button is missing semantics role | When using this widget https://api.flutter.dev/flutter/material/showDatePicker.html, the next and previous arrow button semantics role should have the following

| OS | role |
|--------|--------|
| web | spinbutton |
| ios | - |
| macos | NSAccessibilityIncrementorRole |
| windows | ROLE_SYSTEM_SPINBUTTON |
| android | BottomSheetDragHandleView | | framework,f: material design,a: accessibility,P3,team-accessibility,triaged-accessibility | low | Minor |
2,785,678,238 | flutter | [a11y] scroll bar is missing role | When using this widget https://api.flutter.dev/flutter/material/Scrollbar-class.html and https://api.flutter.dev/flutter/cupertino/CupertinoScrollbar-class.html, the widget is missing following role
| OS | role |
|--------|--------|
| web | scrollbar |
| ios | - |
| macos | NSAccessibilityScrollBarRole |
| windows | ROLE_SYSTEM_SCROLLBAR |
| android | - | | framework,f: material design,a: accessibility,f: cupertino,P2,team-accessibility,triaged-accessibility | low | Minor |
2,785,700,573 | kubernetes | Fine-Grained Scaling Control for DaemonSet/Deployment | ### Describe the issue
### What would you like to be added?
DaemonSet/Deployment supports controlling strategy for scaling pods similar to RollingUpdate.
### Why is this needed?
Currently, DaemonSets and Deployments (via ReplicaSets) offer some level of strategy control for rolling updates, but provide almost nothing for large-range scaling, apart from limiting request pressure on the API server. As a result, a large number of newly created pods compete simultaneously for the same resources, leading to repeated failures and retries, and very slow scaling.
In terms of pod scaling for DaemonSets/Deployments, the current solutions are based on various Autoscalers, but using Autoscalers to achieve specific-number scaling goals might also be inconsistent and awkward:
• For Deployments, scaling relies on the Horizontal Pod Autoscaler (HPA).
• For DaemonSets, scaling (that is, node scaling) typically uses the Cluster Autoscaler with Node Pools/Groups and HorizontalNodeScalingPolicy.
However, Node Pools/Groups depend on cloud provider support and are unusable in local self-built Kubernetes clusters, where manual node scaling via adding/removing labels/taints is common.
Additionally, the rolling update strategy for DaemonSets/Deployments is not flexible. It only supports a single fixed updating rate, whereas production environments often require a phased updating strategy of starting slowly and accelerating later.
Support requests also filed at: [English version](https://discuss.kubernetes.io/t/feature-request-fine-grained-scaling-control-for-daemonsets-deployments/30759?u=unilinu) and [Chinese version](https://discuss.kubernetes.io/t/daemonset-deployment/30760) | sig/scalability,sig/autoscaling,sig/apps,needs-triage | low | Critical |
2,785,704,570 | pytorch | assert size/strides for fallback kernel | ### 🐛 Describe the bug
Inductor right now does not generate size/stride asserts for fallback kernel. This makes issue like [this](https://fb.workplace.com/groups/1075192433118967/posts/1567334737238065) very hard to debug (this is a meta internal link).
Actually in ir.FallbackKernel, we have the following code whose intention is to assert for size/strides for fallback kernel:
https://github.com/pytorch/pytorch/blob/c15d6508bdb82580803ea4899230043bf6ac2c04/torch/_inductor/ir.py#L6669-L6670
However, FallbackKernel usually generates a node with MultiOutputLayout, which does not pass the if check.
A fix is to iterate through each item of the FallbackKernel (check self.outputs) and assert size/stride for each of them.
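That fix shape reduces to something like the sketch below; the names mirror `torch/_inductor/ir.py`, but the helper is purely illustrative and not the real Inductor API:

```python
# Illustrative only: walk a fallback kernel's outputs and collect a
# size/stride assert for each output that carries a concrete layout
# (MultiOutputLayout stand-ins have no size/stride of their own).
def collect_size_stride_asserts(kernel):
    asserts = []
    for out in getattr(kernel, "outputs", []):
        layout = getattr(out, "layout", None)
        if layout is not None and hasattr(layout, "size"):
            asserts.append((out, layout.size, layout.stride))
    return asserts
```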
I use the following testing script:
```
import torch
import einops
from torch._inductor import config as inductor_config
from torch._dynamo.testing import rand_strided, reset_rng_state
inductor_config.fallback_random = True
image_latent = torch.randn((24, 16, 32, 32), device="cuda").to(memory_format=torch.channels_last).view(2, 12, 16, 32, 32)
def f(image_latent):
indices = torch.argsort(torch.rand(2, 12), dim=-1)[:, : 6]
tar_latent = image_latent[
torch.arange(2).unsqueeze(-1), indices[:, 3:]
]
tar_latent_rearranged = einops.rearrange(
tar_latent, "b n c h w -> (b n) c h w"
)
return {
"tar_latent": tar_latent,
"tar_latent_rearranged": tar_latent_rearranged,
}
reset_rng_state()
ref = f(image_latent)
opt_f = torch.compile(f)
reset_rng_state()
act = opt_f(image_latent)
print(f"max dif {(act['tar_latent'] - ref['tar_latent']).abs().max()}")
print(f"max dif {(act['tar_latent_rearranged'] - ref['tar_latent_rearranged']).abs().max()}")
```
The script may not be able to repro anymore once we fix the layout problem for index.Tensor .
### Versions
..
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng @eellison | high priority,good first issue,triaged,oncall: pt2,module: inductor | low | Critical |
2,785,704,843 | flutter | [a11y] context menu is missing semantics role | When using this widget https://api.flutter.dev/flutter/material/TextSelectionToolbar-class.html and https://api.flutter.dev/flutter/widgets/ContextMenuController-class.html, the widget is missing following role
| OS | role |
|--------|--------|
| web | toolbar |
| ios | - |
| macos | NSAccessibilityToolbarRole |
| windows | ROLE_SYSTEM_TOOLBAR |
| android | ContextMenu |
The item in the context menu
| OS | role |
|--------|--------|
| web | button |
| ios | - |
| macos | NSAccessibilityButtonRole, NSAccessibilityToolbarButtonSubrole |
| windows | ROLE_SYSTEM_PUSHBUTTON |
| android | - | | framework,a: accessibility,P2,team-accessibility,triaged-accessibility | low | Minor |
2,785,709,316 | pytorch | [xpu] Compilation of pytorch failed, unable to generate RegisterSparseXPU.cpp | ### 🐛 Describe the bug
Description: the PyTorch build fails to generate the file `RegisterSparseXPU.cpp`
Backtrace of the error:
```
[4/617] Generating ../../../xpu/ATen/XPUFunctions.h, ../../../xpu/ATen/RegisterXPU.cpp, ../../../xpu/ATe...xtend/c_shim_xpu.h, /home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend/c_shim_xpu.cpp
FAILED: xpu/ATen/XPUFunctions.h xpu/ATen/RegisterXPU.cpp xpu/ATen/RegisterSparseXPU.cpp /home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend/c_shim_xpu.h /home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend/c_shim_xpu.cpp /home/jaytong/pytorch/build/xpu/ATen/XPUFunctions.h /home/jaytong/pytorch/build/xpu/ATen/RegisterXPU.cpp /home/jaytong/pytorch/build/xpu/ATen/RegisterSparseXPU.cpp
cd /home/jaytong/pytorch && /home/jaytong/pyenv/pytorch_nightly_2/bin/python -m torchgen.gen --source-path /home/jaytong/pytorch/third_party/torch-xpu-ops/yaml/ --install-dir /home/jaytong/pytorch/build/xpu/ATen/ --per-operator-headers --static-dispatch-backend --backend-whitelist XPU SparseXPU --xpu --update-aoti-c-shim --extend-aoti-c-shim --aoti-install-dir=/home/jaytong/pytorch/torch/csrc/inductor/aoti_torch/generated/extend && cat /home/jaytong/pytorch/third_party/torch-xpu-ops/src/ATen/native/xpu/XPUFallback.template >> /home/jaytong/pytorch/build/xpu/ATen//RegisterXPU.cpp && /home/jaytong/pyenv/pytorch_nightly_2/bin/python /home/jaytong/pytorch/third_party/torch-xpu-ops/tools/codegen/remove_headers.py --register_xpu_path /home/jaytong/pytorch/build/xpu/ATen//RegisterXPU.cpp && /home/jaytong/pyenv/pytorch_nightly_2/bin/python /home/jaytong/pytorch/third_party/torch-xpu-ops/tools/codegen/remove_headers.py --register_xpu_path /home/jaytong/pytorch/build/xpu/ATen//RegisterSparseXPU.cpp
```
### Versions
Pytorch version: From `main` branch from commit: `c15d6508bdb82580803ea4899230043bf6ac2c04`
OS: Ubuntu 22.04.5 LTS
GCC: 11.4.0
cmake: 3.31.4
python: 3.10.12
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,785,712,333 | next.js | Parallel Route with Slug makes page render twice | ### Link to the code that reproduces this issue
https://github.com/canopy-cj/Nextjs-Parallel-Route-Issue
### To Reproduce
1. Run build (`npm run build`) and start (`npm run start`)
2. Navigate to the main page (`http://localhost:3000`). You should see the Pokemon List page is rendered once.
3. Click on one of pokemon. You should see Pokemon page is rendered twice.
### Current vs. Expected behavior
https://github.com/user-attachments/assets/503a43c2-3f58-41d5-9a6e-253d59d50532
### Current
This happens when I add [slug] inside a parallel route (breadcrumb), which makes the page render twice if I navigate to the same path as the slug. This causes the fetch/database call in the Page to run twice.
The breadcrumb implementation is followed by [this example](https://app-router.vercel.app/patterns/breadcrumbs/books).
### Expected
I want the page to render once, so that the fetch/database call is only made once.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Fri Nov 15 15:12:37 PST 2024; root:xnu-10063.141.1.702.7~1/RELEASE_ARM64_T6030
Available memory (MB): 36864
Available CPU cores: 12
Binaries:
Node: 20.12.2
npm: 10.5.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Performance, Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
_No response_ | Performance,Parallel & Intercepting Routes | low | Major |
2,785,714,491 | next.js | [Turbopack] SharedWorker TypeScript scripts not compiled | ### Link to the code that reproduces this issue
https://github.com/christoph-pflueger/turbopack-shared-worker-reproduction
### To Reproduce
1. `npm run dev`
2. Visit `http://localhost:3000/`
3. Open the console. In Chrome you will notice the error `Failed to fetch a worker script.`.
4. Open chrome://inspect/#workers in Chrome. You will notice that only the SharedWorker with the `.js` script is running and has logged `Hello world!`.
### Current vs. Expected behavior
I expect to be able to run a SharedWorker with a TypeScript script.
Currently, the SharedWorker fails to run.
It appears that the script is not even being compiled: `.next/static/media/script.93184b23.ts`
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
Available memory (MB): 15948
Available CPU cores: 12
Binaries:
Node: 22.13.0
npm: 10.9.2
Yarn: N/A
pnpm: 9.15.3
Relevant Packages:
next: 15.2.0-canary.7 // Latest available version is detected (15.2.0-canary.7).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack, TypeScript
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | TypeScript,Turbopack,linear: turbopack | low | Critical |
2,785,770,950 | langchain | HTMLSemanticPreservingSplitter doesn't pull headers when using semantic HTML5 tags like <main> <article>, <section>, etc. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter
html_string = """
<!DOCTYPE html>
<html>
<body>
<main>
<h1>Section 1</h1>
<p>This section contains an important table and list that should not be split across chunks.</p>
<table>
<tr>
<th>Item</th>
<th>Quantity</th>
<th>Price</th>
</tr>
<tr>
<td>Apples</td>
<td>10</td>
<td>$1.00</td>
</tr>
<tr>
<td>Oranges</td>
<td>5</td>
<td>$0.50</td>
</tr>
<tr>
<td>Bananas</td>
<td>50</td>
<td>$1.50</td>
</tr>
</table>
<h2>Subsection 1.1</h2>
<p>Additional text in subsection 1.1 that is separated from the table and list.</p>
<p>Here is a detailed list:</p>
<ul>
<li>Item 1: Description of item 1, which is quite detailed and important.</li>
<li>Item 2: Description of item 2, which also contains significant information.</li>
<li>Item 3: Description of item 3, another item that we don't want to split across chunks.</li>
</ul>
</main>
</body>
</html>
"""
headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]
splitter = HTMLSemanticPreservingSplitter(
headers_to_split_on=headers_to_split_on,
max_chunk_size=50,
elements_to_preserve=["table", "ul"],
)
documents = splitter.split_text(html_string)
print(documents)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
HTMLSemanticPreservingSplitter doesn't produce header metadata if the HTML passed in includes semantic HTML5 tags like `<main>`, `<article>`, `<section>`, etc. If you take the example usage from the docs (https://python.langchain.com/docs/how_to/split_html/#preserving-tables-and-lists) and replace `<div>` with `<main>`, you can see an example.
I've made a notebook where you can see the issue: https://colab.research.google.com/drive/19hZQzpIFOfVxtJGcOpT5PtZcuibmYZu-?usp=sharing
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.7
> langsmith: 0.1.147
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> orjson: 3.10.13
> packaging: 24.2
> pydantic: 2.10.4
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,785,850,599 | vscode | Investigate top deprecated API usages | Tracks investigating top users of some of our deprecated APIs
Tracking in linked issues plus:
- https://github.com/vscode-kubernetes-tools/vscode-kubernetes-tools/issues/602 | debt | low | Minor |
2,785,853,608 | vscode | Adopt `CodeAction` type for built-in css server | For #237857
The css language server is currently returning plain commands for code actions:
https://github.com/microsoft/vscode/blob/25a0bb98b7dfb524e0c71e01832e5b08961bfd00/extensions/css-language-features/server/src/cssServer.ts#L289
Could you please look into migrating to return actions of the `CodeAction` type instead? I think the css language service already supports this with `doCodeActions2` so it may be a very simple switch | debt | low | Minor |
2,785,860,272 | go | crypto/x509: incorrect user trust store directory is used for Android | In #58922, the following user trust store directory was added to crypto/x509 for Android:
https://github.com/golang/go/blob/6da16013ba4444e0d71540f68279f0283a92d05d/src/crypto/x509/root_linux.go#L29
However, Android hasn't used this path since 2014 and instead uses `/data/misc/user/<user ID>/cacerts-added`, where `<user ID>` is the Android user ID (not Linux/POSIX UID).
I'm not sure if there's a good public API to get the user ID. `UserHandle.myUserId()` (on the Java side) is not public. However, the Android user ID is [implemented as `getuid() / 100000`](https://android.googlesource.com/platform/frameworks/base/+/refs/tags/android-15.0.0_r12/core/java/android/os/UserHandle.java#288). As far as I can tell, this has never changed since Android got multi-user support ~14 years ago. Maybe it's good enough to rely on this implementation detail?
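A minimal sketch of deriving the per-user directory from the POSIX UID, assuming the `uid / 100000` implementation detail holds (the constant and path mirror the AOSP sources linked below):

```go
package main

import (
	"fmt"
	"os"
)

// perUserRange mirrors Android's UserHandle.PER_USER_RANGE: each Android
// user owns a block of 100000 POSIX UIDs.
const perUserRange = 100000

// userCertDir returns the per-user "added CAs" directory for the Android
// user that owns the given POSIX UID.
func userCertDir(uid int) string {
	androidUserID := uid / perUserRange
	return fmt.Sprintf("/data/misc/user/%d/cacerts-added", androidUserID)
}

func main() {
	fmt.Println(userCertDir(os.Getuid()))
}
```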
Some links:
* https://android.googlesource.com/platform/frameworks/base/+/refs/tags/android-15.0.0_r12/core/java/android/security/net/config/UserCertificateSource.java#34
* This is where Android loads `/data/misc/user/<user ID>/cacerts-added` for the Java TLS stack
* https://android.googlesource.com/platform/frameworks/native/+/07053fcb61436221fac2281394e98ec9d0feab3d%5E%21/
* This is the commit when Android migrated away from the old `/data/misc/keychain/certs-added` directory
---
Side note: For system CA certs, golang currently only loads `/system/etc/security/cacerts`, but ever since Android 14, the system CA certs became updatable and `/apex/com.android.conscrypt/cacerts` should have priority: https://android.googlesource.com/platform/frameworks/base/+/refs/tags/android-15.0.0_r12/core/java/android/security/net/config/SystemCertificateSource.java#48 | OS-Android,NeedsInvestigation,mobile,BugReport | low | Major |
2,785,861,669 | vscode | Migrate built-in php extension off of `CompletionItem.textEdit` | For #237857
The `CompletionItem.textEdit` property has been deprecated. Can you please look into migrating to use `CompletionItem.insertText` and `CompletionItem.range` instead? | debt | low | Minor |
2,785,864,768 | vscode | Fix `CompletionItem.command` deprecation for built-in js/ts | For #237857
Looks like we think we can change the completion item command during resolve, which has been marked deprecated
| debt | low | Minor |
2,785,877,683 | transformers | Support LLMs With No Image Placeholder Embedding in LLava-based Models | ### Feature request
Currently, llava-based models, e.g., llava-next, will throw `IndexError: index out of range in self` from the LLM embedding if the LLM's embedding table does not contain an entry for the image placeholder token. However, at inference time, the embedding value isn't actually used, because the indices corresponding to the image token will be overwritten by the image features.
Other inference engines, e.g., vLLM, separately mask out the text and multimodal embeddings and merge them together (e.g., [here](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/utils.py#L377)). This prevents such indexing errors if the image token is only part of the tokenizer vocabulary, and not part of the encapsulated language model's embedding vocab.
This can be fixed on the model side by resizing the token embeddings to add the image token to the LLM, but it would be nice to take a similar approach in transformers to allow use of models that don't have the image token in the LLM embedding vocabulary.
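The masked-merge idea reduces to something like the following sketch, with plain lists standing in for tensors; `image_token_id` and the shapes are illustrative:

```python
def merge_multimodal_embeddings(input_ids, text_embeds, image_embeds, image_token_id):
    """Overwrite rows at image-token positions with image features.

    Because placeholder positions are always overwritten, the text
    embedding table never needs a real row for the image token; a
    sentinel (e.g. zeros) at those positions avoids the IndexError.
    """
    image_iter = iter(image_embeds)
    merged = []
    for token, row in zip(input_ids, text_embeds):
        merged.append(next(image_iter) if token == image_token_id else row)
    return merged
```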
### Motivation
Fixing this will allow the use of llava-based models that don't have an embedding for the placeholder image token 😄
### Your contribution
I am happy to submit a PR for this if the team is open to it! | Feature request,Multimodal,VLM | low | Critical |
2,785,891,404 | flutter | [video_player][Android] only 1 video can play at a time | ### Steps to reproduce
Checkout and run on android (physical or simulator): https://github.com/willsmanley/video-issue
### Expected results
Should play both vids
### Actual results
Only plays 1 vid. Playing one stops the other.
### Code sample
https://github.com/willsmanley/video-issue
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
n/a | c: new feature,platform-android,p: video_player,package,has reproducible steps,P3,team-android,triaged-android,found in release: 3.27,found in release: 3.28 | low | Major |
2,785,901,815 | material-ui | MUI not working with React Router 7 and Vite | ### Steps to reproduce
Steps:
1. Open this link to live example: Because this is a build issue, it's harder for me to provide a live example.
2. Follow the basic build steps for React Router 7: https://reactrouter.com/start/library/installation
3. Add a basic import of an @mui/icons-material icon to root.tsx.
4. Run npm run build && npm start
### Current behavior
The React Router 7 server running Node will fail to start, referring to an inability to import directories and requiring specification of index.js files. If I hand-patch or script-patch the files, the issues seem to be endless and out of my depth as a non-library maintainer.
### Expected behavior
I realize that you're planning for full ES module compatibility in an upcoming version. The problem is that I haven't been able to find any viable workaround after hours of research, trial, and error. I've tried the "esm" and "modern" folders. I've tried Vite plugins, scripting to modify node_modules, esbuild, everything. I'd appreciate any help with a workaround or proper solution. React Router just released 7, and a lot of people are going to be excited to try using it.
### Context
I'm trying to move from create-react-app to React Router 7, which comes with Vite, and use MUI.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 15.1.1
Binaries:
Node: 23.6.0 - ~/.nvm/versions/node/v23.6.0/bin/node
npm: 10.9.2 - ~/.nvm/versions/node/v23.6.0/bin/npm
pnpm: Not Found
Browsers:
Chrome: 131.0.6778.265
Edge: Not Found
Safari: 18.1.1
npmPackages:
@emotion/react: ^11.14.0 => 11.14.0
@emotion/styled: ^11.14.0 => 11.14.0
@mui/core-downloads-tracker: 6.3.1
@mui/icons-material: ^6.3.1 => 6.3.1
@mui/material: ^6.3.1 => 6.3.1
@mui/private-theming: 6.3.1
@mui/styled-engine: 6.3.1
@mui/system: 6.3.1
@mui/types: 7.2.21
@mui/utils: 6.3.1
@types/react: ^19.0.1 => 19.0.5
react: ^19.0.0 => 19.0.0
react-dom: ^19.0.0 => 19.0.0
typescript: ^5.7.2 => 5.7.3
```
</details>
**Search keywords**: react-router vite | bug 🐛 | low | Critical |
2,785,902,714 | godot | Camera wireframe widget gizmo doesn't match FOV. | ### Tested versions
4.2-4.4 dev 7
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 (NVIDIA; 32.0.15.6603) - AMD Ryzen 7 3700X 8-Core Processor (16 threads)
### Issue description
If you try to place items in the world so that they line up a certain way with the camera using the wireframe shape of the gizmo for a guide, they don't show up where expected. For example, if I want something in the lower left corner of the view and line it up like this:

I would expect this to be in the lower left corner, but when viewed from the camera, it looks like this:

Quite far away from the corner. I have to position the mesh outside of what it looks like the camera to see in order to get it in the corner.
### Steps to reproduce
Make a simple 3D scene with a camera. Place an object where it looks like it will be in the corner of the camera. Run the scene or preview the camera and note that the object is inset quite a bit from the corner.
### Minimal reproduction project (MRP)
[test_camera_fov.zip](https://github.com/user-attachments/files/18403949/test_camera_fov.zip) | bug,topic:editor,topic:3d | low | Minor |
2,785,929,652 | angular | No error on easy to miss quote syntax error | ### Which @angular/* package(s) are the source of the bug?
Don't known / other
### Is this a regression?
No
### Description
The following code has a mistake in the template of the `App` component.
There should be two quotes `'` at the end of the template.
```ts
import { Component, Pipe } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';
@Pipe({
name: 'translate',
standalone: true,
})
export class TranslatePipe {
transform(val: string): string {
return val;
}
}
@Component({
selector: 'app-root',
template: `
{{ name ? (name | translate) : ' }}
`, // Missing quote here ^^^^
standalone: true,
imports: [TranslatePipe],
})
export class App {
name = 'Angular';
}
bootstrapApplication(App);
```
**Expected result:** compilation error.
**Actual result:** No error. Instead the template is rendered as a string. This error is easy to miss.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-qbomf7kh?file=src%2Fmain.ts
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.0
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 19.0.0
... animations, build, cli, common, compiler, compiler-cli, core
... forms, platform-browser, router
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1900.0
@angular-devkit/core 19.0.0
@angular-devkit/schematics 19.0.0
@schematics/angular 19.0.0
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: compiler,P3,compiler: parser | low | Critical |
2,785,934,382 | godot | Grid disappears/fades out in ortho view. | ### Tested versions
4.2-4.4dev7. Behavior is different in 4.4. In 4.2, there's a circle around the center that's faded out. In 4.4, the whole smaller grid fades out.
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 (NVIDIA; 32.0.15.6603) - AMD Ryzen 7 3700X 8-Core Processor (16 threads)
### Issue description
Sometimes the grid disappears or becomes very difficult to see in ortho view when panning around.

If I pan to the side, note the grid disappears. It stays gone even after panning back:

### Steps to reproduce
Open a 3D scene. Press 7 on the numpad to go to ortho view. Pan around and note that grid disappears.
### Minimal reproduction project (MRP)
N/A. | bug,topic:editor,topic:3d | low | Minor |
2,785,977,192 | rust | Tracking Issue for the `gpu-kernel` ABI | The feature gate for the issue is `#![feature(abi_gpu_kernel)]`.
The `extern "gpu-kernel"` calling convention represents the entry points exposed to the host, which are then called to actually execute the GPU kernel. Functions built for a GPU target are instead "device functions". Device functions must not call these host functions (and LLVM will emit a compilation error if it is attempted for some targets).
It is implemented for the following targets
- nvptx64 (translating to the `ptx_kernel` calling convention in LLVM)
- amdgpu (translating to the `amdgpu_kernel` calling convention in LLVM)
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
- [ ] RFC
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Formatting for new syntax has been added to the [Style Guide] ([nightly-style-procedure])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[nightly-style-procedure]: https://github.com/rust-lang/style-team/blob/main/nightly-style-procedure.md
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
- [ ] May need a story for how to handle breaking changes in ABIs (esp. re: `ptx_kernel`)
- [ ] This may want to be `extern "device-kernel"` instead?
- [ ] May want to address or dismiss the question of SPIRV's [OpEntryPoint](https://registry.khronos.org/SPIR-V/specs/unified1/SPIRV.html#OpEntryPoint)
- [ ] What function signatures are actually valid? LLVM generally doesn't try to validate these, so we must determine that ourselves.
- [ ] Devices shouldn't be able to call these functions, but who can?
- [ ] Should these functions always be `unsafe` to call? If so, making them uncallable from device code may be redundant (but "my `unsafe` generated an LLVM error" would be kinda unsettling, even if we define it as UB)
### Implementation history
- [x] Implement the ABI https://github.com/rust-lang/rust/pull/135047
- [ ] Prevent safely calling `extern "gpu-kernel"` functions when compiling for GPU targets
- [ ] Document valid function signatures
  - Must have no return value, i.e. a return type of `()` or `!`
- [ ] Bar invalid function signatures | A-LLVM,T-lang,T-compiler,O-NVPTX,C-tracking-issue,A-ABI,O-amdgpu | low | Critical |
2,785,983,396 | PowerToys | Switching to another computer while a fullscreen app is active brings the taskbar over the fullscreen app. | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
1. Link two computers using Mouse Without Borders.
2. Move mouse from this computer to the other one.
3. Open a fullscreen app (e.g. a YouTube video) on the other computer.
4. Bring the mouse back to this computer.
### ✔️ Expected Behavior
I can keep watching my fullscreen videos no problem.
### ❌ Actual Behavior
The taskbar appears over the fullscreen video, obstructing it.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,785,998,674 | godot | When moving a file in File System a random script is selected in the Script window | ### Tested versions
4.3 stable, 4.4 dev7.
This does not happen in 4.2.2
### System information
Godot v4.4 dev7 - Windows 10.0.19045 - GLES3 (Compatibility) - Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 Threads)
### Issue description
If we move a texture file from one folder to another, a random script gets selected in the Script window.
I still don't understand the order in which the script gets selected: in one project the last script is selected, in another a script from the middle of the list.
This doesn't happen if we move the .tscn file, at least for me.
https://github.com/user-attachments/assets/154b840c-8cb3-4bba-ad25-5938710ca1d0
### Steps to reproduce
1. Open MRP
2. Open the Script window and select any script from the list on the left
3. Move 7.png from the Textures folder to the icon folder
4. Another script is selected in the Script window
### Minimal reproduction project (MRP)
[bg.zip](https://github.com/user-attachments/files/18404252/bg.zip) | bug,topic:editor,regression | low | Minor |