id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,781,443,131 | godot | Can't use window manager shortcuts to resize/fullscreen/minimize a floating game window anymore depending on focus | - *Related to https://github.com/godotengine/godot/issues/100883.*
### Tested versions
- Reproducible in: 4.4.dev 24d74510e
### System information
Godot v4.4.dev (24d74510e) - Fedora Linux 41 (KDE Plasma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 565.77) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
Since window embedding was implemented, it's not possible to use window manager shortcuts to resize/fullscreen/minimize a floating game window anymore, depending on which window is focused (the floating game window or the window embedded within it). This is because these shortcuts are received by the embedded game window placed on top of the floating game window, not by the floating game window itself.
On KDE X11, the following actions are affected when the embedded window has focus:
- <kbd>Alt + F9</kbd> (custom shortcut to minimize window)
- <kbd>Alt + F11</kbd> (custom shortcut to toggle fullscreen)
The shortcut to maximize the window works though (<kbd>Alt + F10</kbd> custom shortcut).
To resolve this, we should allow resizing the embedded game window and have the floating game window automatically adapt itself. This means there would be a two-way approach to window resizing: you can either resize the floating game window and have the embedded window adapt itself (taking its chrome[^1]'s size into account), or the other way around.
Here, I toggle fullscreen by pressing <kbd>Alt + F11</kbd> after initially running the project (the floating game window has focus). The embedded window then automatically gains focus because my mouse cursor is over it[^2], and I can't press the same shortcut to leave fullscreen until I click outside the embedded window.
https://github.com/user-attachments/assets/7a564916-f03d-43af-9a9c-e8b860a460b7
Implementing two-way window management would also resolve issues where the embedded window can get out of sync because you forcibly moved it, e.g. by making its window border visible with a custom shortcut and moving it with <kbd>Super + Left mouse button</kbd>, or by pressing <kbd>Super + Up arrow</kbd>, which moves it to the upper half of the screen:
https://github.com/user-attachments/assets/4eeece17-d533-4dca-aa31-ba6893678b03
[^1]: "chrome" refers to the decorations/UI around the embedded window.
[^2]: This is behavior implemented by Godot, I'm not using a "focus follows mouse" configuration.
### Steps to reproduce
- Run a project with a floating game window.
- Try to use window manager manipulation shortcuts on the game window itself (i.e. the viewport, not the chrome around it).
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:editor,topic:porting | low | Minor |
2,781,456,293 | flutter | Flutter Notable Commits - 2025 | <!--- From https://upload.wikimedia.org/wikipedia/commons/e/e3/Weighing_a_marlin_on_a_wharf_%28AM_74863-1%29.jpg --->
<img src="https://user-images.githubusercontent.com/1377460/221257032-4e8a1ee4-5250-4ecb-9df9-4f03817d5e08.jpeg" align="left" width="250" />
<p>
For years, Flutter's Framework team has been reviewing "notable commits" in our weekly staff meeting at Google. The notable commits are a lightly curated list of the pull requests that have landed in the previous week. The goal is to celebrate our progress and the developers who have contributed to that progress. Unfortunately, many of those developers have not been in the weekly staff meeting's conference room; they come from other teams within Google and the wider Flutter community.
</p>
<p>
Each list typically covers about one week and is not comprehensive. Its focus is on commits to Flutter's framework "flutter/flutter" repo that involved the Framework team as pull-request authors or reviewers. That's not strictly true, however; we all contribute to many repos - not the least of which is "flutter/packages" - and you'll find evidence of that here. And not all of the commits listed here necessarily involved someone from the Framework or Engine team. One final observation about the participants in the list below is the large number of contributions that have come from the Flutter community. We are blessed with an active and generous developer community. Each week's notable commits list is a tribute to everyone who has helped advance Flutter, not just Google's Framework team.
</p>
#### Prior Notable Commits
- [2024](https://github.com/flutter/flutter/issues/121415)
- [2023](https://github.com/flutter/flutter/issues/121415)
| framework,engine,P1,permanently locked,team-framework,triaged-framework | medium | Minor |
2,781,460,028 | transformers | from_pretrained fails to save weights.py and layers.py into cache, therefore fails to find them in cache | - weights.py and layers.py confirmed to exist in the local model folder, originally saved via cloning (attempt 1) and via save_pretrained (attempt 2)
- attempting to set a custom cache dir, either as an argument or via the TRANSFORMERS_CACHE env var (both sketched below)... neither has any effect
- the cache folder and other files are saved successfully, so this is not a permissions issue
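For reference, a minimal sketch of the two cache-dir attempts described above (`/custom/cache` and the model path are placeholders):
```python
import os

# Attempt via env var; this must be set before transformers is imported.
os.environ["TRANSFORMERS_CACHE"] = "/custom/cache"

from transformers import AutoModelForCausalLM

# Attempt via argument; `cache_dir` is a standard from_pretrained kwarg.
model = AutoModelForCausalLM.from_pretrained(
    "/path/to/model",
    trust_remote_code=True,
    local_files_only=True,
    cache_dir="/custom/cache",
)
```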
### System Info
- `transformers` version: 4.48.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39 (ubuntu 24.04.1)
- Python version: 3.12.3
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("/path/to/model", trust_remote_code=True, local_files_only=True)
```
### Expected behavior
The cache is generated with all relevant model files saved successfully, and the model is then loaded from that cache. | bug | low | Minor |
2,781,460,982 | kubernetes | [Compatibility Version] Provide option for a feature to opt out of emulated version | ### Why is this needed?
If a Beta feature is going through a lot of changes, it could be impractical to fully emulate the old behavior at the emulated version: it might mean a lot of `if else` statements.
### What should we do?
Instead of failing at unexpected places or emulating the feature with unpredictable behavior, we can let the feature owner declare that a feature is not emulatable. Then, if the feature is set to true and the emulated version is not equal to the binary version, we can return an error at startup time after parsing the flags.
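To make the proposed startup check concrete, a language-agnostic sketch in Python (all names here are illustrative, not actual Kubernetes code):
```python
def validate_feature_gates(features, emulated_version, binary_version):
    """Fail fast after flag parsing when a non-emulatable feature is enabled."""
    for name, spec in features.items():
        if (spec["enabled"] and not spec["emulatable"]
                and emulated_version != binary_version):
            raise RuntimeError(
                f"feature {name} is enabled but cannot be emulated at "
                f"{emulated_version}; disable it or run at binary version "
                f"{binary_version}"
            )
```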
This exception should be used with great caution, and it should mostly be used for Beta features that are false by default and are still evolving a lot, or that are switching to use a new API with non-backward-compatible data changes. | sig/api-machinery,sig/architecture,triage/accepted | low | Critical |
2,781,465,384 | godot | Float-to-int cast and rounding math functions are inconsistent between platforms on non-finite cases | ### Tested versions
- Reproducible in: v4.3.stable.flathub [77dcf97d8]
### System information
Phone:
Android 10, Samsung Galaxy J6+ (SM-J610G), armeabi-v7a - OpenGL API OpenGL ES 3.0 [email protected] (GIT@d39f783, I79de86aa2c, 1591296226) (Date:06/04/20) - Compatibility - Using Device: Qualcomm - Adreno (TM) 308
PC (x64):
Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 24.08 (Flatpak runtime) - X11 - GLES3 (Compatibility) - Mesa Intel(R) HD Graphics 5500 (BDW GT2) - Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz (4 Threads)
### Issue description
The GDScript math functions that round/cast a float to int (i.e. `ceili`, `floori`, `roundi`, `snappedi`, and `int` cast/constructor) are inconsistent between platforms on the edge cases of `(-/+)INF` and `NAN`. I'm going to guess this causes a bunch of other very hard-to-detect issues (like it did for me).
On Android it's not returning the same as on Web and in Linux (the editor).
I haven't tested other versions or other platforms.
I'm using GDScript, not the .NET-enabled version of Godot.
If you're providing these functions in core/GDScript, then I would expect at least consistency between platforms.
Android is the weird one here, since others seem to treat them as the `-9223372036854775808` negative limit value.
Then again... even .NET itself seems to handle these differently across versions in my tests:
`csharp` command (Mono / .NET Framework 4) casts NaN,Infinity to `-9223372036854775808`
`dotnet-script` command (.NET 8) casts NaN,Infinity to `0`
No idea how e.g. C++ etc. handle these, or if it can be platform/architecture/language-dependent.
### Steps to reproduce
```gdscript
func _enter_tree() -> void:
    prints(int(-INF), int(INF), int(NAN)) # int cast is equivalent to a truncatei operation
    prints(ceili(-INF), ceili(INF), ceili(NAN))
    prints(floori(-INF), floori(INF), floori(NAN))
    prints(roundi(-INF), roundi(INF), roundi(NAN))
    prints(snappedi(-INF, 0.5), snappedi(INF, 0.5), snappedi(NAN, 0.5))
```
All of these print:
- on Linux editor and Web: `-9223372036854775808 -9223372036854775808 -9223372036854775808`
- on the Android device: `-1 1 0`
IMHO, neither of the results above seem right (though Android seems very wrong).
I would expect this:
- `INF` input: treated as `0x7FFFFFFFFFFFFFFF` = `9223372036854775807` (max value)
- `-INF` input: treated as `0x8000000000000001` = `-9223372036854775807` (negative of max value);
either that or `0x8000000000000000` = `-9223372036854775808` (negative limit value)
- `NAN` input: treated as `0x8000000000000000` = `-9223372036854775808` (value with no positive counterpart)
Granted, I guess these aren't extremely reliable ... but this would IMHO be more consistent when making comparisons involving infinite cases. But I'm reporting this mostly because of the Android inconsistency (hidden bug?).
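As an illustration only, a small Python sketch of the saturating mapping proposed above (the names are mine, not Godot API):
```python
import math

I64_MAX = 2**63 - 1   # 0x7FFFFFFFFFFFFFFF =  9223372036854775807
I64_MIN = -2**63      # 0x8000000000000000 = -9223372036854775808

def saturating_int(x: float) -> int:
    # Map the non-finite edge cases to the fixed values suggested above.
    if math.isnan(x):
        return I64_MIN        # value with no positive counterpart
    if x == math.inf:
        return I64_MAX
    if x == -math.inf:
        return I64_MIN + 1    # -9223372036854775807; or I64_MIN, per the alternative
    return int(x)             # plain truncation for finite values
```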
### Minimal reproduction project (MRP)
N/A | bug,topic:core,topic:porting | low | Critical |
2,781,468,988 | flutter | Cocoon `listFiles()` logic is faulty: only looks at the first 30 files in the PR! | Cocoon [uses a wrapper](https://github.com/flutter/cocoon/blob/00e056313e88d81ea5a96bea24f78f05b86b105e/app_dart/lib/src/service/github_service.dart#L270-L281) around the GitHub REST API that calls ["List pull request files"](https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#list-pull-requests-files):
```dart
/// Returns changed files for a [PullRequest].
///
/// See more:
/// * https://developer.github.com/v3/pulls/#list-pull-requests-files
Future<List<String>> listFiles(PullRequest pullRequest) async {
final List<PullRequestFile> files =
await github.pullRequests.listFiles(pullRequest.base!.repo!.slug(), pullRequest.number!).toList();
log.fine('List of files: $files');
return files.map((PullRequestFile file) {
return file.filename!;
}).toList();
}
```
As noted on <https://docs.github.com/en/rest/pulls/pulls?apiVersion=2022-11-28#list-pull-requests-files>:
> Responses include a maximum of 3000 files. The paginated **response returns 30 files per page by default**.
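For illustration only, a hedged Python sketch (not Cocoon's Dart code) of walking every page of that endpoint instead of only the first; the endpoint and field names follow the public GitHub REST API:
```python
import requests

def list_pr_files(owner: str, repo: str, number: int, token: str) -> list[str]:
    """Collect all changed filenames by following pagination, 100 per page."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}/files"
    headers = {"Authorization": f"Bearer {token}"}
    files: list[str] = []
    page = 1
    while True:
        resp = requests.get(
            url, headers=headers, params={"per_page": 100, "page": page}
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page means we've read everything
            return files
        files.extend(item["filename"] for item in batch)
        page += 1
```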
In our current implementation, by contrast, which uses `listFiles()` for a number of things, including the implementation of `.ci.yaml`'s `runIf`, _if_ you change more than roughly 30 files (potentially fewer if the patches are large), we will erroneously compute what to run/not run. | team-infra,P1,c: tech-debt | medium | Minor |
2,781,475,416 | flutter | `Icons.link_sharp` displays as a thin outline at sizes < 100 on Windows desktop | ### Steps to reproduce
1. Add `Icon(Icons.link_sharp)` and `Icon(Icons.link_sharp, size: 100)` in flutter code.
2. Run the app and view the result.
### Expected results
Icon should be filled regardless of size.
### Actual results
Icon displays as a thin outline at sizes < 100. Below is a screenshot with the icon at default icon size, 50px, 100px, and 200px.

It turns out that this also occurs in the icon viewer site.

### Code sample
<details open><summary>Code sample</summary>
```dart
Column(
  children: [
    Icon(
      Icons.link_sharp,
      color: Colors.red,
    ),
    Icon(
      Icons.link_sharp,
      size: 50,
      color: Colors.red,
    ),
    Icon(
      Icons.link_sharp,
      size: 100,
      color: Colors.red,
    ),
    Icon(
      Icons.link_sharp,
      size: 200,
      color: Colors.red,
    ),
  ],
)
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.22635.4660], locale en-CA)
• Flutter version 3.27.1 on channel stable at D:\Portable Programs\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at D:\Android_SDK
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = D:\Android_SDK
• Java binary at: C:\Users\Zarai\.gradle\jdks\adoptium-17-x64-hotspot-windows\bin\java
• Java version OpenJDK Runtime Environment Temurin-17.0.3+7 (build 17.0.3+7)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.3)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.8.34330.188
• Windows 10 SDK version 10.0.22621.0
[!] Android Studio (version 4.0)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
X Unable to determine bundled Java version.
• Try updating or re-installing Android Studio.
[√] IntelliJ IDEA Community Edition (version 2024.1)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2024.1
• Flutter plugin version 80.0.2
• Dart plugin version 241.17890.8
[√] IntelliJ IDEA Ultimate Edition (version 2024.2)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA 2024.1.4
• Flutter plugin version 83.0.3
• Dart plugin version 242.24931
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22635.4660]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
• Edge (web) • edge • web-javascript • Microsoft Edge 122.0.2365.16
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-windows,a: desktop,P3,team-windows,triaged-windows | low | Minor |
2,781,498,679 | pytorch | Compiling with clang fails in torch inductor, miscategorized as gcc | ### 🐛 Describe the bug
In torch inductor, if the clang compiler is used on Linux, it may be miscategorized as gcc.
Specifically, in the current code below, the regex matches the `g++` substring inside `clang++`, and the function then reports that the compiler is gcc.
```python
def _is_gcc(cpp_compiler: str) -> bool:
    if sys.platform == "darwin" and _is_apple_clang(cpp_compiler):
        return False
    return bool(re.search(r"(gcc|g\+\+)", cpp_compiler))
```
---
This causes issues with runtime builds because of compile-flag variations; I specifically ran into the fact that clang (clang++18) does not support `-fno-tree-loop-vectorize`.
I am not sure if clang is explicitly supported on Linux, but considering it is used on macOS, it works as long as it is detected properly.
---
As a fix, in the associated pull request, I just call the existing `_is_clang` and return false if the compiler is detected as clang, as sketched below.
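A minimal sketch of that fix, assuming the existing `_is_clang` and `_is_apple_clang` helpers from the same module:
```python
import re
import sys

def _is_gcc(cpp_compiler: str) -> bool:
    if sys.platform == "darwin" and _is_apple_clang(cpp_compiler):
        return False
    # Rule out clang first, so that "clang++" no longer matches the g++ regex.
    if _is_clang(cpp_compiler):
        return False
    return bool(re.search(r"(gcc|g\+\+)", cpp_compiler))
```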
### Versions
```
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Ti
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5600X 6-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
CPU max MHz: 5278.7100
CPU min MHz: 2200.0000
BogoMIPS: 8400.51
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] executorch==0.5.0a0+68b0864
[pip3] numpy==1.21.3
[pip3] torch==2.6.0.dev20241218+cpu
[pip3] torchao==0.8.0+git2e032c6b
[pip3] torchaudio==2.6.0.dev20241218+cpu
[pip3] torchsr==1.0.4
[pip3] torchtune==0.5.0
[pip3] torchvision==0.22.0.dev20241218+cpu
[pip3] triton==3.1.0
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | triaged,oncall: pt2,module: inductor | low | Critical |
2,781,500,704 | react-native | Warnings from react-native & @react-native/codegen about missing Peer Dependencies | ### Description
Using yarn 3.6.4 (berry) on a new react-native project, I get the following dependency warnings:
```shell
$ yarn
➤ YN0000: ┌ Resolution step
➤ YN0002: │ @react-native/babel-plugin-codegen@npm:0.76.6 doesn't provide @babel/preset-env (pda3fc), requested by @react-native/codegen
➤ YN0002: │ babel-plugin-transform-flow-enums@npm:0.0.2 doesn't provide @babel/core (pd2efd), requested by @babel/plugin-syntax-flow
➤ YN0002: │ react-native@npm:0.76.6 [24bb2] doesn't provide @babel/core (p230be), requested by babel-jest
➤ YN0002: │ react-native@npm:0.76.6 [24bb2] doesn't provide @babel/preset-env (pe8640), requested by @react-native/codegen
➤ YN0000: │ Some peer dependencies are incorrectly met; run yarn explain peer-requirements <hash> for details, where <hash> is the six-letter p-prefixed code
```
Of these, it appears babel-plugin-transform-flow-enums isn't provided by this repository, but the rest of them are.
## Similar Past Issues / Pulls
- https://github.com/facebook/react-native/issues/35913
- https://github.com/facebook/react-native/pull/35915
### Steps to reproduce
1. Use yarn 3.6.4 (berry)
2. Create a new react-native project
3. `yarn install`
(Reproducer provided)
### React Native Version
0.76.6
### Affected Platforms
Runtime - Android, Runtime - iOS, Runtime - Web, Runtime - Desktop
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: macOS 15.1.1
CPU: (10) arm64 Apple M1 Max
Memory: 365.08 MB / 64.00 GB
Shell:
version: 3.2.57
path: /bin/bash
Binaries:
Node:
version: 23.6.0
path: ~/.nvm/versions/node/v23.6.0/bin/node
Yarn:
version: 3.6.4
path: ~/.nvm/versions/node/v23.6.0/bin/yarn
npm:
version: 10.9.2
path: ~/.nvm/versions/node/v23.6.0/bin/npm
Watchman:
version: 2024.12.02.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.19072.14.2412.12360217
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 11.0.10
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react: Not Found
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
$ yarn
➤ YN0000: ┌ Resolution step
➤ YN0002: │ @react-native/babel-plugin-codegen@npm:0.76.6 doesn't provide @babel/preset-env (pda3fc), requested by @react-native/codegen
➤ YN0002: │ babel-plugin-transform-flow-enums@npm:0.0.2 doesn't provide @babel/core (pd2efd), requested by @babel/plugin-syntax-flow
➤ YN0002: │ react-native@npm:0.76.6 [24bb2] doesn't provide @babel/core (p230be), requested by babel-jest
➤ YN0002: │ react-native@npm:0.76.6 [24bb2] doesn't provide @babel/preset-env (pe8640), requested by @react-native/codegen
➤ YN0000: │ Some peer dependencies are incorrectly met; run yarn explain peer-requirements <hash> for details, where <hash> is the six-letter p-prefixed code
```
### Reproducer
https://github.com/fbartho/react-native-peerDependencies-reproducer/
### Screenshots and Videos
```shell
{main} ~/Code/react-native-peerDependencies-reproducer/ReproducerApp $ yarn
➤ YN0070: Migrating from Yarn 1; automatically enabling the compatibility node-modules linker 👍
➤ YN0000: ┌ Resolution step
➤ YN0061: │ eslint@npm:8.57.1 is deprecated: This version is no longer supported. Please see https://eslint.org/version-support for other options.
➤ YN0061: │ @humanwhocodes/config-array@npm:0.13.0 is deprecated: Use @eslint/config-array instead
➤ YN0061: │ @humanwhocodes/object-schema@npm:2.0.3 is deprecated: Use @eslint/object-schema instead
➤ YN0061: │ glob@npm:7.2.3 is deprecated: Glob versions prior to v9 are no longer supported
➤ YN0061: │ sudo-prompt@npm:9.2.1 is deprecated: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
➤ YN0032: │ fsevents@npm:2.3.3: Implicit dependencies on node-gyp are discouraged
➤ YN0061: │ rimraf@npm:3.0.2 is deprecated: Rimraf versions prior to v4 are no longer supported
➤ YN0061: │ inflight@npm:1.0.6 is deprecated: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
➤ YN0061: │ rimraf@npm:2.6.3 is deprecated: Rimraf versions prior to v4 are no longer supported
➤ YN0061: │ @babel/plugin-proposal-class-properties@npm:7.18.6 is deprecated: This proposal has been merged to the ECMAScript standard and thus this plugin is no longer maintained. Please use @babel/plugin-transform-class-properties instead.
➤ YN0061: │ @babel/plugin-proposal-nullish-coalescing-operator@npm:7.18.6 is deprecated: This proposal has been merged to the ECMAScript standard and thus this plugin is no longer maintained. Please use @babel/plugin-transform-nullish-coalescing-operator instead.
➤ YN0061: │ @babel/plugin-proposal-optional-chaining@npm:7.21.0 is deprecated: This proposal has been merged to the ECMAScript standard and thus this plugin is no longer maintained. Please use @babel/plugin-transform-optional-chaining instead.
➤ YN0002: │ @react-native/babel-plugin-codegen@npm:0.76.6 doesn't provide @babel/preset-env (pda3fc), requested by @react-native/codegen
➤ YN0002: │ babel-plugin-transform-flow-enums@npm:0.0.2 doesn't provide @babel/core (pd2efd), requested by @babel/plugin-syntax-flow
➤ YN0002: │ react-native@npm:0.76.6 [9aafe] doesn't provide @babel/core (pde5e4), requested by babel-jest
➤ YN0002: │ react-native@npm:0.76.6 [9aafe] doesn't provide @babel/preset-env (p29d64), requested by @react-native/codegen
➤ YN0000: │ Some peer dependencies are incorrectly met; run yarn explain peer-requirements <hash> for details, where <hash> is the six-letter p-prefixed code
➤ YN0000: └ Completed in 11s 275ms
➤ YN0000: ┌ Fetch step
➤ YN0013: │ yargs-parser@npm:18.1.3 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ yargs-parser@npm:21.1.1 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ yargs@npm:15.4.1 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ yargs@npm:17.7.2 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ yocto-queue@npm:0.1.0 can't be found in the cache and will be fetched from the remote registry
➤ YN0000: └ Completed in 0s 549ms
➤ YN0000: ┌ Link step
➤ YN0000: └ Completed in 4s 3ms
➤ YN0000: Done with warnings in 15s 943ms
{main} ~/Code/react-native-peerDependencies-reproducer/ReproducerApp $ yarn
➤ YN0000: ┌ Resolution step
➤ YN0002: │ @react-native/babel-plugin-codegen@npm:0.76.6 doesn't provide @babel/preset-env (pda3fc), requested by @react-native/codegen
➤ YN0002: │ babel-plugin-transform-flow-enums@npm:0.0.2 doesn't provide @babel/core (pd2efd), requested by @babel/plugin-syntax-flow
➤ YN0002: │ react-native@npm:0.76.6 [9aafe] doesn't provide @babel/core (pde5e4), requested by babel-jest
➤ YN0002: │ react-native@npm:0.76.6 [9aafe] doesn't provide @babel/preset-env (p29d64), requested by @react-native/codegen
➤ YN0000: │ Some peer dependencies are incorrectly met; run yarn explain peer-requirements <hash> for details, where <hash> is the six-letter p-prefixed code
➤ YN0000: └ Completed
➤ YN0000: ┌ Fetch step
➤ YN0000: └ Completed
➤ YN0000: ┌ Link step
➤ YN0000: └ Completed
➤ YN0000: Done with warnings in 0s 446ms
``` | Resolution: PR Submitted | low | Minor |
2,781,502,794 | PowerToys | The software fails to enable the automatic display sleep on Windows systems. | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Awake
### Steps to reproduce
After installing the software, using the system's power management to automatically turn off the display causes the system to immediately wake up each time the screen is turned off, preventing it from entering sleep mode. However, if the software is uninstalled, the system's sleep functionality returns to normal.
In other words, the installed software interferes with the power management settings, causing an issue where the system does not properly maintain a sleep state; uninstalling the software resolves this conflict and restores normal sleep functionality.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,781,523,987 | flutter | [iOS] Hot restart randomly stops working on real devices after upgrading to Flutter 3.27.1 | ### Steps to reproduce
1. Upgrade to Flutter 3.27.1
2. Connect iPhone 15 Pro
3. `flutter run --verbose`
4. Press R
Happens on all apps.
### Expected results
Hot restart completes.
### Actual results
Hot restart almost never completes between iPhone Pro and MacBook Pro.
Rarely, and unexplainably, it does complete in <1000ms.
Attempted fixes:
* `flutter clean` - **doesn't work**
* Delete and reinstalling Flutter - **doesn't work**
* Upgrading Flutter to latest - **doesn't work**
* Upgrading Dart to latest stable - **doesn't work**
* Upgrading Dart to latest beta - **doesn't work**
* Upgrading iPhone to latest stable - **doesn't work**
* Upgrading macOS latest stable - **doesn't work**
* Different iPhones - **doesn't work**
* Different networks - **doesn't work**
* Different USB-C cables - **doesn't work**
* Different USB-C ports - **doesn't work**
* Factory resetting iPhone - **doesn't work**
* Factory resetting macOS - **doesn't work**
* Downgrade Flutter to 3.24.5 - **DOES work**
* Use iPhone simulator - **DOES work**
Relevant logs:
```
[ ] Performing hot restart...
[ +10 ms] Scanned through 939 files in 8ms
[ ] Syncing files to device Development iPhone...
[ +1 ms] <- reset
[ ] Compiling dart to kernel with 0 updated files
[ ] Processing bundle.
[ ] <- recompile package:app/main.dart 3e53627d-f17f-4999-83a4-c8eda46ebb62
[ ] <- 3e53627d-f17f-4999-83a4-c8eda46ebb62
[ +3 ms] Bundle processing done.
[ +66 ms] Updating files.
[ ] Pending asset builds completed. Writing dirty entries.
[+29475 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 9 more attempts left
[+61249 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 8 more attempts left
[+61263 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 7 more attempts left
[+61268 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 6 more attempts left
[+61244 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 5 more attempts left
[+61263 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 4 more attempts left
[+61247 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 3 more attempts left
[+61253 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 2 more attempts left
[+61246 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 1 more attempts left
[+61255 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ ] trying again in a few - 0 more attempts left
[+61262 ms] Error writing "main.dart.swap.dill" to DevFS: HttpException: Request has been aborted
[ +2 ms] Could not update files on device: HttpException: Request has been aborted
[ +3 ms] Syncing files to device Development iPhone... (completed in 673.4s)
[ +2 ms] <- reject
[ +75 ms] Performing hot restart... (completed in 673.5s)
[ ] Restarted application in 673,460ms.
[ ] Try again after fixing the above error(s).
```
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /Users/james/fvm/versions/3.27.1
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/james/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.1
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (6 available)
• Development iPhone (mobile) • 00008130-00184C1C0CA0001C • ios • iOS 18.2.1 22C161
• Virgo (mobile) • 00008140-0016219E3693001C • ios • iOS 18.2.1 22C161
• iPhone 16 (mobile) • 5558CC92-E694-44C3-BDFE-0158DB0BACE4 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
! Error: Browsing on the local area network for James’s Apple Watch. Ensure the device is unlocked and discoverable via Bluetooth. (code -27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| e: device-specific,platform-ios,tool,t: hot reload,e: OS-version specific,team-ios | low | Critical |
2,781,525,934 | langchain | Runtime/post init request header patch for LLMs does not work | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Say, for example, we have a user-defined `model`, and we want the same `model` object to be runnable in different environments; some environments require an additional request header for API-key auth.
User-defined `model`
```python
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model_name="gpt-4o-mini")
```
Now imagine we receive this model object, and for it to be run in a dedicated environment, it needs an additional request header.
- First attempt: set `default_headers` after init - this doesn't work, as the OpenAI clients are already initialized at `model` object init time ([code](https://github.com/langchain-ai/langchain/blob/62074bac6049225c1547e1e9aca5621622ed8f55/libs/partners/openai/langchain_openai/llms/base.py#L166-L197)), so any later change to `default_headers` is not reflected in the OpenAI clients actually used.
```python
model.default_headers={"apikey": "xxx"}
```
- Second attempt: bind the header onto the model - this fails upon `model.invoke` with `TypeError: parse() got an unexpected keyword argument 'default_headers'`.
```python
model.bind(default_headers={"apikey": "xxx"})
```
There are probably hacky ways, such as updating the internal OpenAI clients directly at `model.client._client.default_headers`, but I don't feel that's a robust pattern, as `_client` is a private object that could evolve without notice. Any idea on how to better support such a use case in a robust way?
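For comparison, a minimal sketch of the construction-time route that does work (assuming rebuilding the model object is acceptable; the header value is a placeholder):
```python
from langchain_openai import ChatOpenAI

# `default_headers` passed at construction time is baked into the OpenAI
# clients when they are created, so the extra header is actually sent.
model = ChatOpenAI(
    model_name="gpt-4o-mini",
    default_headers={"apikey": "xxx"},
)
```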
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to patch additional header to a LLM model after its initialization, but couldn't find an easy and robust way. See example code for details.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Jul 17 15:10:20 UTC 2024
> Python Version: 3.9.13 (main, Aug 23 2022, 09:14:58)
[GCC 10.2.1 20210110]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.8.4
> async-timeout: 4.0.2
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.23.5
> openai: 1.59.6
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 5.4.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 8.2.2
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | 🤖:bug,Ɑ: models | low | Critical |
2,781,579,119 | go | proposal: sync: add WaitGroup.Notify to use with select | ## Background
Integrating `WaitGroup` with `select` statements, especially in context-aware operations where cancellation might be necessary, requires creating an additional channel for signaling.
## Proposal
I propose extending `sync.WaitGroup` with a new method:
```go
func (wg *WaitGroup) Notify() <-chan struct{}
```
This method would return a channel that is closed when `WaitGroup.Wait` returns.
## Example
Instead of:
```go
c := make(chan struct{})
go func() {
    wg.Wait()
    close(c)
}()
select {
case <-ctx.Done():
    return context.Cause(ctx)
case <-c:
    return nil
}
```
You could do:
```go
select {
case <-ctx.Done():
    return context.Cause(ctx)
case <-wg.Notify():
    return nil
}
```
## Conclusion
The boilerplate it replaces is small, but I believe this method does a lot to bring `WaitGroup` closer to the built-in concurrency tools offered by the language.
CC #71076. | Proposal,LibraryProposal | low | Major |
2,781,582,823 | tauri | [bug] Cannot start iOS simulator without external network | ### Describe the bug
Hey,
When trying to start iOS simulator with `RUST_BACKTRACE=full. bun run tauri ios dev --host` I am unable to do so and tauri exits with the following stacktrace:
```
No external IP detected.
stack backtrace:
0: 0x11b54c5bc - _napi_register_module_v1
1: 0x11b179d84 - _bz_internal_error
2: 0x11b51cdf0 - _napi_register_module_v1
3: 0x11b54e91c - _napi_register_module_v1
4: 0x11b54f044 - _napi_register_module_v1
5: 0x11b54e9bc - _napi_register_module_v1
6: 0x11b54e954 - _napi_register_module_v1
7: 0x11b54e948 - _napi_register_module_v1
8: 0x11b94cecc - _napi_register_module_v1
9: 0x11b69945c - _napi_register_module_v1
10: 0x11b6985d0 - _napi_register_module_v1
11: 0x11b9687e0 - _napi_register_module_v1
12: 0x11b96c4bc - _napi_register_module_v1
13: 0x11b72d1b8 - _napi_register_module_v1
14: 0x11b748914 - _napi_register_module_v1
15: 0x11b73fbf4 - _napi_register_module_v1
16: 0x11b0b65ac - <unknown>
17: 0x11b0bfdfc - <unknown>
18: 0x11b551a6c - _napi_register_module_v1
19: 0x189f482e4 - __pthread_deallocate
```
This only happens when I'm entirely off-the-grid disconnected from WiFi and Ethernet. Most of the time that's fine, but occasionally my workflow requires me to work offline and in those cases I am unable to start the simulator.
FWIW, `bun run tauri dev` works fine.
### Reproduction
_No response_
### Expected behavior
I am able to work offline with the simulator.
### Full `tauri info` output
```text
$ tauri info
[✔] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.11.0
- npm: 10.9.0
- bun: 1.1.34
[-] Packages
- tauri 🦀: 2.2.1
- tauri-build 🦀: 2.0.5
- wry 🦀: 0.48.0
- tao 🦀: 0.31.1
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.2.3
[-] Plugins
- tauri-plugin-http 🦀: 2.2.0
- @tauri-apps/plugin-http : 2.2.0
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell : 2.2.0
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
[-] iOS
- Developer Teams: XXX (ID: XXX)
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,781,589,304 | PowerToys | POWER RUN DOES NOT WORK PROPERLY | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Execute the keyboard command to use Power Run
### ✔️ Expected Behavior
The keyboard command (default or custom) should show the search bar.
### ❌ Actual Behavior
The command only works when the PowerToys window is selected; sometimes the error is fixed when PowerToys is restarted.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,781,619,181 | transformers | running utils.fx.symbolic_trace on gpt2 raises an error: torch.fx.proxy.TraceError: Proxy object cannot be iterated, which does not occur in the previous version | ### System Info
- `transformers` version: 4.48.0
- Platform: Linux-4.18.0-535.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.21
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: False
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: False
- tpu_use_cluster: False
- tpu_use_sudo: False
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import GPT2Model, GPT2Config
from transformers.utils import fx
config = GPT2Config.from_pretrained("gpt2")
model = GPT2Model(config)
trace = fx.symbolic_trace(model, input_names=["input_ids", "attention_mask"])
```
```
....
File "/.conda/envs/venv/lib/python3.9/site-packages/torch/fx/proxy.py", line 327, in __iter__
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
```
### Expected behavior
No error. | bug | low | Critical |
2,781,620,946 | pytorch | broken `torch.compile` with `"meta"` device tensors | ### 🐛 Describe the bug
Consider the following code:
```python
import torch
@torch.compile
def foobar(x):
    return x * 2

def test(device):
    foobar(torch.empty((1, 16, 128, 128), device=device))
    foobar(torch.empty((1, 32, 64, 64), device=device))
# OK
test("cuda")
print("cuda ok")
# Fails
test("meta")
print("meta ok")
```
Running `test` with `"cuda"` works, but running `test` with the `"meta"` device fails with the following exception:
```
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/__init__.py", line 2234, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 859, in fx_codegen_and_compile
graph.run(*example_inputs)
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 780, in run
return super().run(*args)
^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/fx/interpreter.py", line 146, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1319, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/fx/interpreter.py", line 203, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1024, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File ".venv/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1021, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/lowering.py", line 361, in wrapped
out = decomp_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/lowering.py", line 2844, in empty_strided
pointwise.realize()
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 6282, in realize
return self.data.realize()
^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 6367, in realize
layout=FlexibleLayout(
^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 3254, in __init__
super().__init__(device, dtype, size, strides)
File ".venv/lib/python3.11/site-packages/torch/_inductor/ir.py", line 2900, in __init__
assert all(isinstance(s, (Expr, int)) for s in size)
torch._inductor.exc.LoweringException: AssertionError:
target: aten.empty_strided.default
args[0]: (1, s0, s1, s2)
args[1]: (s0*s1*s2, s1*s2, s2, 1)
kwargs: {'dtype': torch.float32, 'device': device(type='meta')}
```
This only happens when `foobar` is called twice inside `test` *and* when the size of the tensor in the second call is different.
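As a possible workaround sketch (my assumption, not a confirmed fix): compiling with `dynamic=False` forces a fresh static specialization for each input size, which might avoid the symbolic-shape `empty_strided` lowering seen in the trace above:
```python
import torch

# Hypothetical workaround: disable dynamic-shape specialization so the
# second call recompiles with static sizes instead of symbolic ones.
@torch.compile(dynamic=False)
def foobar(x):
    return x * 2

foobar(torch.empty((1, 16, 128, 128), device="meta"))
foobar(torch.empty((1, 32, 64, 64), device="meta"))
```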
### Versions
(The `collect_env.py` script doesn't work for me so I'm pasting the versions manually)
```
torch 2.5.1
triton 3.1.0
python 3.11.8
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | good first issue,triaged,actionable,oncall: pt2,module: dynamic shapes,module: inductor | low | Critical |
2,781,623,750 | rust | Building rustc-1.84.0 tarball fails when using system LLVM | I have LLVM 19.1.6 installed on my system, and when building, I encounter the following:
```
Dist rust-std-1.84.0-x86_64-apple-darwin
finished in 0.534 seconds
Installing stage2 std (x86_64-apple-darwin)
install: creating uninstall script at /tmp/dst.rustc/usr/local/lib/rustlib/uninstall.sh
install: installing component 'rust-std-x86_64-apple-darwin'
rust std installed.
thread 'main' panicked at src/lib.rs:1708:24:
src.symlink_metadata() failed with No such file or directory (os error 2) ("src = /tera/tera/debo/Projects/rustc/rustc-1.84.0-src/build/x86_64-apple-darwin/stage2/lib/rustlib/x86_64-apple-darwin/bin/llvm-objcopy")
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Build completed unsuccessfully in 0:29:14
```
It appears that it is trying to invoke a built copy of `llvm-objcopy`, when it should be using the installed version.
This is a regression.
| A-LLVM,O-macos,T-bootstrap,C-bug,O-apple | medium | Critical |
2,781,624,400 | PowerToys | MiniDump settings | ### Description of the new feature / enhancement
I would really like to see a tweaker that would allow us to set a Windows memory dump to save in a particular place with a particular name that can be created by using variables. Mainly, I want to see a way to save every crash with a name that uses the date in a format like yyyy-mm-dd.
### Scenario when this would be used?
I am currently struggling with a random repeating blue screen error, and this would help me immensely to keep a record of each crash.
### Supporting information
_No response_ | Needs-Triage | low | Critical |
2,781,627,676 | deno | Many new access requests for DNS and IPs that were not in 2.1.4 | Version: Deno 2.1.5
I'm connecting to Twitch with the `npm:@twurple/*` packages.
**None of these access requests are necessary in 2.1.4 or earlier.**
Re-confirmed by downgrading to 2.1.4.
Probably related to https://github.com/denoland/deno/pull/25470, maybe also https://github.com/denoland/deno/pull/27572.
Deno script
```
-N=0.0.0.0:8080,api.twitch.tv:443,id.twitch.tv:443
```
nslookup
```
id.twitch.tv => 52.89.33.131, 54.69.142.122, 54.191.180.173
api.twitch.tv => 3.167.227.4, 3.167.227.5, 3.167.227.59, 3.167.227.114
```
IP 1: 127.0.0.53:53 (localhost DNS)
┏ ⚠️ Deno requests net access to "127.0.0.53:53".
┠─ Requested by `Deno.resolveDns()` API.
┃ ├─ op_dns_resolve (ext:core/00_infra.js:264:44)
┃ ├─ Object.resolveDns (ext:deno_net/01_net.js:77:18)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:65
┃ ├─ Array.map (<anonymous>)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:42
┃ ├─ getaddrinfo (ext:deno_node/internal_binding/cares_wrap.ts:72:5)
┃ ├─ lookup (node:dns:131:15)
┃ ├─ emitLookup (node:net:534:5)
┃ ├─ defaultTriggerAsyncIdScope (ext:deno_node/internal/async_hooks.ts:193:18)
┃ └─ _lookupAndConnectMultiple (node:net:533:3)
┠─ Learn more at: https://docs.deno.com/go/--allow-net
┠─ Run again with --allow-net to bypass this prompt.
┗ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all net permissions) >
- if no to IP 1:
=> crash
❌ Denied net access to "127.0.0.53:53".
error: Uncaught Error: getaddrinfo ENOTFOUND id.twitch.tv
at __node_internal_captureLargerStackTrace (ext:deno_node/internal/errors.ts:93:9)
at __node_internal_ (ext:deno_node/internal/errors.ts:246:10)
at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:43:26)
at ext:deno_node/internal_binding/cares_wrap.ts:71:9
at eventLoopTick (ext:core/01_core.js:175:7)
- if yes to IP 1:
=> IP 2: 54.191.180.173:443 (id.twitch.tv)
┏ ⚠️ Deno requests net access to "127.0.0.53:53".
┠─ Requested by `Deno.resolveDns()` API.
┃ ├─ op_dns_resolve (ext:core/00_infra.js:264:44)
┃ ├─ Object.resolveDns (ext:deno_net/01_net.js:77:18)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:65
✅ Granted net access to "127.0.0.53:53".
┏ ⚠️ Deno requests net access to "54.191.180.173:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
┃ ├─ TCP.connect (ext:deno_node/internal_binding/tcp_wrap.ts:139:25)
┃ ├─ _internalConnectMultiple (node:net:347:24)
┃ ├─ defaultTriggerAsyncIdScope (ext:deno_node/internal/async_hooks.ts:193:18)
┃ ├─ GetAddrInfoReqWrap.emitLookup [as callback] (node:net:626:7)
┃ ├─ GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:54:8)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:71:9
┃ └─ eventLoopTick (ext:core/01_core.js:175:7)
┠─ Learn more at: https://docs.deno.com/go/--allow-net
┠─ Run again with --allow-net to bypass this prompt.
┗ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all net permissions) >
-- if no to IP 2:
=> IP 3: 52.89.33.131:443 (id.twitch.tv)
┏ ⚠️ Deno requests net access to "127.0.0.53:53".
┠─ Requested by `Deno.resolveDns()` API.
┃ ├─ op_dns_resolve (ext:core/00_infra.js:264:44)
┃ ├─ Object.resolveDns (ext:deno_net/01_net.js:77:18)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:65
✅ Granted net access to "127.0.0.53:53".
┏ ⚠️ Deno requests net access to "54.191.180.173:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
❌ Denied net access to "54.191.180.173:443".
┏ ⚠️ Deno requests net access to "52.89.33.131:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
┃ ├─ TCP.connect (ext:deno_node/internal_binding/tcp_wrap.ts:139:25)
┃ ├─ _internalConnectMultiple (node:net:347:24)
┃ ├─ _afterConnectMultiple (node:net:210:7)
┃ ├─ TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
┃ ├─ ext:deno_node/internal_binding/tcp_wrap.ts:306:14
┃ └─ eventLoopTick (ext:core/01_core.js:175:7)
┠─ Learn more at: https://docs.deno.com/go/--allow-net
┠─ Run again with --allow-net to bypass this prompt.
┗ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all net permissions) >
--- if no IP 3:
=> IP 4: 54.69.142.122:443
┏ ⚠️ Deno requests net access to "127.0.0.53:53".
┠─ Requested by `Deno.resolveDns()` API.
┃ ├─ op_dns_resolve (ext:core/00_infra.js:264:44)
┃ ├─ Object.resolveDns (ext:deno_net/01_net.js:77:18)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:65
✅ Granted net access to "127.0.0.53:53".
┏ ⚠️ Deno requests net access to "54.191.180.173:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
❌ Denied net access to "54.191.180.173:443".
┏ ⚠️ Deno requests net access to "52.89.33.131:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
❌ Denied net access to "52.89.33.131:443".
┏ ⚠️ Deno requests net access to "54.69.142.122:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
┃ ├─ TCP.connect (ext:deno_node/internal_binding/tcp_wrap.ts:139:25)
┃ ├─ _internalConnectMultiple (node:net:347:24)
┃ ├─ _afterConnectMultiple (node:net:210:7)
┃ ├─ TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
┃ ├─ ext:deno_node/internal_binding/tcp_wrap.ts:306:14
┃ └─ eventLoopTick (ext:core/01_core.js:175:7)
┠─ Learn more at: https://docs.deno.com/go/--allow-net
┠─ Run again with --allow-net to bypass this prompt.
┗ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all net permissions) >
---- if no to IP 4:
=> crash
┏ ⚠️ Deno requests net access to "127.0.0.53:53".
┠─ Requested by `Deno.resolveDns()` API.
┃ ├─ op_dns_resolve (ext:core/00_infra.js:264:44)
┃ ├─ Object.resolveDns (ext:deno_net/01_net.js:77:18)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:65
✅ Granted net access to "127.0.0.53:53".
┏ ⚠️ Deno requests net access to "54.191.180.173:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
❌ Denied net access to "54.191.180.173:443".
┏ ⚠️ Deno requests net access to "52.89.33.131:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
❌ Denied net access to "52.89.33.131:443".
┏ ⚠️ Deno requests net access to "54.69.142.122:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
❌ Denied net access to "54.69.142.122:443".
error: Uncaught AggregateError
Error: connect ECONNREFUSED 54.191.180.173:443
at __node_internal_captureLargerStackTrace (ext:deno_node/internal/errors.ts:93:9)
at __node_internal_exceptionWithHostPort (ext:deno_node/internal/errors.ts:217:10)
at _createConnectionError (node:net:185:14)
at _afterConnectMultiple (node:net:205:16)
at TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
at ext:deno_node/internal_binding/tcp_wrap.ts:306:14
at eventLoopTick (ext:core/01_core.js:175:7)
Error: connect ECONNREFUSED 52.89.33.131:443
at __node_internal_captureLargerStackTrace (ext:deno_node/internal/errors.ts:93:9)
at __node_internal_exceptionWithHostPort (ext:deno_node/internal/errors.ts:217:10)
at _createConnectionError (node:net:185:14)
at _afterConnectMultiple (node:net:205:16)
at TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
at ext:deno_node/internal_binding/tcp_wrap.ts:306:14
at eventLoopTick (ext:core/01_core.js:175:7)
Error: connect ECONNREFUSED 54.69.142.122:443
at __node_internal_captureLargerStackTrace (ext:deno_node/internal/errors.ts:93:9)
at __node_internal_exceptionWithHostPort (ext:deno_node/internal/errors.ts:217:10)
at _createConnectionError (node:net:185:14)
at _afterConnectMultiple (node:net:205:16)
at TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
at ext:deno_node/internal_binding/tcp_wrap.ts:306:14
at eventLoopTick (ext:core/01_core.js:175:7)
at _internalConnectMultiple (node:net:308:18)
at _afterConnectMultiple (node:net:210:7)
at TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
at ext:deno_node/internal_binding/tcp_wrap.ts:306:14
at eventLoopTick (ext:core/01_core.js:175:7)
-- if yes to IP 2:
=> host 1: id.twitch.tv:0
┏ ⚠️ Deno requests net access to "127.0.0.53:53".
┠─ Requested by `Deno.resolveDns()` API.
┃ ├─ op_dns_resolve (ext:core/00_infra.js:264:44)
┃ ├─ Object.resolveDns (ext:deno_net/01_net.js:77:18)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:65
✅ Granted net access to "127.0.0.53:53".
┏ ⚠️ Deno requests net access to "54.191.180.173:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
✅ Granted net access to "54.191.180.173:443".
┏ ⚠️ Deno requests net access to "id.twitch.tv:0".
┠─ Requested by `Deno.startTls()` API.
┃ ├─ node:http:306:27
┃ ├─ HttpsClientRequest._writeHeader (node:http:398:7)
┃ ├─ HttpsClientRequest._flushHeaders (node:_http_outgoing:382:12)
┃ ├─ Socket.onConnect (node:http:444:16)
┃ ├─ Socket.emit (ext:deno_node/_events.mjs:405:35)
┃ ├─ _afterConnect (node:net:159:12)
┃ ├─ _afterConnectMultiple (node:net:214:3)
┃ ├─ TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
┃ ├─ ext:deno_node/internal_binding/tcp_wrap.ts:299:14
┃ └─ eventLoopTick (ext:core/01_core.js:175:7)
┠─ Learn more at: https://docs.deno.com/go/--allow-net
┠─ Run again with --allow-net to bypass this prompt.
┗ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all net permissions) >
----- if no to host 1:
=> crash after a few more access requests, no matter if any more are granted
----- if yes to host 1:
=> crash if too slow:
(else continues with more access requests)
┏ ⚠️ Deno requests net access to "127.0.0.53:53".
┠─ Requested by `Deno.resolveDns()` API.
┃ ├─ op_dns_resolve (ext:core/00_infra.js:264:44)
┃ ├─ Object.resolveDns (ext:deno_net/01_net.js:77:18)
┃ ├─ ext:deno_node/internal_binding/cares_wrap.ts:55:65
✅ Granted net access to "127.0.0.53:53".
┏ ⚠️ Deno requests net access to "54.191.180.173:443".
┠─ Requested by `Deno.connect()` API.
┃ ├─ op_net_connect_tcp (ext:core/00_infra.js:264:44)
┃ ├─ Object.connect (ext:deno_net/01_net.js:583:61)
┃ ├─ TCP.#connect (ext:deno_node/internal_binding/tcp_wrap.ts:291:10)
✅ Granted net access to "54.191.180.173:443".
┏ ⚠️ Deno requests net access to "id.twitch.tv:0".
┠─ Requested by `Deno.startTls()` API.
┃ ├─ node:http:306:27
┃ ├─ HttpsClientRequest._writeHeader (node:http:398:7)
┃ ├─ HttpsClientRequest._flushHeaders (node:_http_outgoing:382:12)
✅ Granted net access to "id.twitch.tv:0".
error: Uncaught (in promise) TypeError: Failed to fetch: request body stream errored
at node:http:385:17
at eventLoopTick (ext:core/01_core.js:175:7)
Caused by: "resource closed"
Edit: Removed the `#` from the IP/host numbers, so it does not auto-link to unrelated issues anymore. | question,working as designed | low | Minor |
2,781,629,203 | godot | Static typing for for loop variable errors on invalid references | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 (NVIDIA; 32.0.15.6636) - AMD Ryzen 5 7600X 6-Core Processor (12 Threads)
### Issue description
The recently added static typing for for loop variables (https://github.com/godotengine/godot/pull/80247) has a weird quirk that I feel may be considered a bug, or at least strange behavior.
If you are looping over an array that contains node references, and one of those nodes gets freed, `<Freed Object>` remains in its index. While normally you can use `is_instance_valid()` for validating your node references, trying to do this in a typed for loop will fail with the error "Trying to assign invalid previously freed instance" at the "for" line, before it even gets to where you can validate it. However, it's possible to get around this by removing the type from the loop variable and checking it after the instance is validated, but that then makes typing the loop variable questionable in the first place. Since I think that may be unclear, here's some example code of what I mean:
```gdscript
# Tested with an Array with some generic Node2Ds
var objects_to_be_freed: Array = get_tree().get_nodes_in_group("object_freeing")
# After any of those nodes is freed...

# Runs without error
for object in objects_to_be_freed:
    if is_instance_valid(object):
        print("Valid!")
    else:
        print("Invalid!")

# Will cause error: "Trying to assign invalid previously freed instance"
for object: Node2D in objects_to_be_freed: # Errors at this line
    if is_instance_valid(object): # I would expect this line to be how to catch the invalid reference before error
        print("Valid!")
    else:
        print("Invalid!")

# Runs without error and effectively keeps static typing as far as I can tell
for object in objects_to_be_freed:
    if is_instance_valid(object) and object is Node2D:
        print("Valid!")
    else:
        print("Invalid!")
```
This makes me wonder whether this behavior is intended, and if it is intended, why? The third block (the workaround) seems like it would be the best way to handle this in all cases, and if that's the case, then why isn't the typed loop in the second block just interpreted the same way by the engine?
I would expect the second block to work without error as written, simply printing "Invalid!" if the object was freed.
### Steps to reproduce
Make an array, add node references to it, free any of the nodes without clearing the reference, create a for loop to iterate over that array, and make the for loop variable typed to those nodes. Tested this with the code snippet in the issue description.
### Minimal reproduction project (MRP)
N/A | bug,discussion,topic:gdscript | low | Critical |
2,781,646,353 | transformers | Unsupported: hasattr SkipFunctionVariable when i compile the mixtral model with muti-gpus | ### System Info
none
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import StaticCache

NUM_TOKENS_TO_GENERATE = 40
torch_device = "cuda"

from torch.nn.attention import SDPBackend, sdpa_kernel

def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
    logits = model(
        cur_token,
        position_ids=input_pos,
        cache_position=cache_position,
        past_key_values=past_key_values,
        return_dict=False,
        use_cache=True
    )[0]
    new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    return new_token

batch_size, seq_length = inputs["input_ids"].shape
with torch.no_grad():
    past_key_values = StaticCache(
        config=model.config, max_batch_size=1, max_cache_len=4096, device=torch_device,
        dtype=model.dtype, layer_device_map=layer_device_map,
    )
    cache_position = torch.arange(seq_length, device=torch_device)
    generated_ids = torch.zeros(
        batch_size, seq_length + NUM_TOKENS_TO_GENERATE + 1, dtype=torch.int, device=torch_device
    )
    generated_ids[:, cache_position] = inputs["input_ids"].to(torch_device).to(torch.int)

    logits = model(
        **inputs, cache_position=cache_position, past_key_values=past_key_values,
        return_dict=False, use_cache=True
    )[0]
    next_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    generated_ids[:, seq_length] = next_token[:, 0]

    decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True)
    cache_position = torch.tensor([seq_length + 1], device=torch_device)
    input_position = cache_position.clone()
    for _ in range(1, NUM_TOKENS_TO_GENERATE):
        with sdpa_kernel(SDPBackend.MATH):
            next_token = decode_one_tokens(model, next_token.clone(), input_position, cache_position, past_key_values)
            generated_ids[:, cache_position] = next_token.int()
        cache_position += 1
        input_position += 1

text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
### Expected behavior
```
Unsupported: hasattr SkipFunctionVariable to

from user code:
   File "/tmp/ipykernel_1957076/1822748636.py", line 7, in decode_one_tokens
    logits = model(
  File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 364, in pre_forward
    return send_to_device(args, self.execution_device), send_to_device(
  File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 184, in send_to_device
    {
  File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 185, in <dictcomp>
    k: t if k in skip_keys else send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys)
  File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 149, in send_to_device
    if is_torch_tensor(tensor) or hasattr(tensor, "to"):
```
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True | bug | low | Critical |
2,781,691,945 | PowerToys | Applying a custom contrast theme in windows causes an exception | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
- Open the Windows Settings app
- Go to Accessibility/Contrast Themes
- Select and apply a **custom** contrast theme **OR** select *"None"* when already in a custom contrast theme

[PowerToysReport_2025-01-10-23-51-51.zip](https://github.com/user-attachments/files/18384896/PowerToysReport_2025-01-10-23-51-51.zip)
### ✔️ Expected Behavior
The theme is applied and there are no errors
### ❌ Actual Behavior
If PowerToys Run is enabled, 2-6 (equivalent) error dialogs are shown in the same spot when a custom theme is applied; when switching from a custom theme to the default (None) theme, 1 dialog is shown
### Other Software
Windows 11 Settings (SystemSettings.exe) (Win version 24H2) | Issue-Bug,Needs-Triage | low | Critical |
2,781,715,044 | node | MD5 SIMD | ### What is the problem this feature will solve?
MD5 hashing is quite commonly used for file handling in HTTP. However, the algorithm is slow and is not able to fully utilize modern hardware.
### What is the feature you are proposing to solve the problem?
While it's not possible to do much more to optimize a single hashing instance, there are techniques such as https://github.com/minio/md5-simd that run multiple hashing instances over SIMD, processing 8 MD5 hashes in parallel using SIMD instructions.
In a real-world application such as an HTTP file server/client (using Content-MD5) with many parallel requests, this would provide a 2-6x real-world performance improvement. In some of our applications the MD5 hash takes more than 50% of CPU time.
This should be possible to implement without changing our APIs.
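As a rough illustration (the payloads below are made up; `createHash` is the existing `node:crypto` API, and the 8-lane interleaving is an assumption about how a multi-buffer backend would batch things internally):

```ts
import { createHash } from "node:crypto";

// Eight placeholder payloads, e.g. one per in-flight HTTP response.
const bodies: Buffer[] = Array.from({ length: 8 }, (_, i) =>
  Buffer.alloc(1024 * 1024, i),
);

// Each createHash("md5") stream is independent; a SIMD multi-buffer
// backend could interleave these eight lanes on a single core without
// any change to this calling code.
const digests = bodies.map((body) =>
  createHash("md5").update(body).digest("hex"),
);
console.log(digests);
```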
### What alternatives have you considered?
Run hashing in a thread pool. One does not necessarily exclude the other. Using a thread pool would be more about avoiding latency spikes, since in terms of throughput simply forking the HTTP server provides similar results. | feature request | low | Major |
2,781,725,672 | pytorch | Build breaks on FreeBSD on arm platforms: Unrecognized CMAKE_SYSTEM_NAME = FreeBSD | ### 🐛 Describe the bug
```
-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: /usr/local/llvm15/bin/clang
CMake Error at aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt:65 (message):
Unrecognized CMAKE_SYSTEM_NAME = FreeBSD
-- Configuring incomplete, errors occurred!
```
### Versions
2.5.1
cc @malfet @seemethere | module: build,triaged | low | Critical |
2,781,726,542 | flutter | `DartUri: Cannot read packages spec` in workspace environment | ### Steps to reproduce
Short: Create a Flutter project inside a workspace and add a web target.
1. Create an empty folder
2. Add a `pubspec.yaml` with a `name` and `workspace` attribute
3. Create a subfolder using `flutter create --platforms=web -e app`
4. Add `resolution: workspace` to its `pubspec.yaml` and the package directory to the root pubspec
5. Run the project in web
### Expected results
Not to print anything (-> not to throw errors)
### Actual results
```
DartUri: Cannot read packages spec: file:///C:/Users/jakob/Documents/GitHub/upsheep/packages/test_app/.dart_tool/package_config.jsonError: PathNotFoundException: Cannot open file, path = 'C:\Users\jakob\Documents\GitHub\upsheep\packages\test_app\.dart_tool\package_config.json' (OS Error: Das System kann die angegebene Datei nicht finden., errno = 2)
```
(second part is German and means something along the lines of "The system could not find the specified file")
I did some digging and by judging by previous issues, I believe this is an issue directly related to Flutter.
It occurs because `pub get` [deletes said `.dart_tool/package_config.json` file](https://dart.dev/tools/pub/workspaces#:~:text=Delete%20any%20other%20existing%20pubspec.lock%20and%20.dart_tool/package_config.json%20files%20next%20to%20workspace%20packages.) and "relocates" it to the root project.
Running the project still works, but on every refresh the error is printed. Not very cool.
### Code sample
<details open><summary>Code sample</summary>
Not applicable, see "Steps to reproduce"
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Not applicable
</details>
### Logs
The instructions say I have to add `--verbose`, it's quite a long output, so I also attached the output without verbose.
<details open><summary>Logs without Verbose</summary>
```console
C:\Users\jakob\Documents\GitHub\upsheep\packages\test_app>flutter run
Resolving dependencies in `C:\Users\jakob\Documents\GitHub\upsheep`...
Downloading packages...
_fe_analyzer_shared 76.0.0 (78.0.0 available)
analyzer 6.11.0 (7.1.0 available)
async 2.11.0 (2.12.0 available)
boolean_selector 2.1.1 (2.1.2 available)
characters 1.3.0 (1.4.0 available)
clock 1.1.1 (1.1.2 available)
collection 1.19.0 (1.19.1 available)
fake_async 1.3.1 (1.3.2 available)
leak_tracker 10.0.7 (10.0.8 available)
leak_tracker_flutter_testing 3.0.8 (3.0.9 available)
matcher 0.12.16+1 (0.12.17 available)
material_color_utilities 0.11.1 (0.12.0 available)
meta 1.15.0 (1.16.0 available)
path 1.9.0 (1.9.1 available)
source_span 1.10.0 (1.10.1 available)
stack_trace 1.12.0 (1.12.1 available)
stream_channel 2.1.2 (2.1.4 available)
string_scanner 1.3.0 (1.4.1 available)
term_glyph 1.2.1 (1.2.2 available)
test 1.25.8 (1.25.14 available)
test_api 0.7.3 (0.7.4 available)
test_core 0.6.5 (0.6.8 available)
vm_service 14.3.0 (15.0.0 available)
Got dependencies in `C:\Users\jakob\Documents\GitHub\upsheep`!
23 packages have newer versions incompatible with dependency constraints.
Try `flutter pub outdated` for more information.
Launching lib\main.dart on Chrome in debug mode...
Waiting for connection from debug service on Chrome... 23,3s
DartUri: Cannot read packages spec: file:///C:/Users/jakob/Documents/GitHub/upsheep/packages/test_app/.dart_tool/package_config.jsonError:
PathNotFoundException: Cannot open file, path = 'C:\Users\jakob\Documents\GitHub\upsheep\packages\test_app\.dart_tool\package_config.json' (OS
Error: Das System kann die angegebene Datei nicht finden.
, errno = 2)
This app is linked to the debug service: ws://127.0.0.1:51345/YC9YGKFnjaI=/ws
Debug service listening on ws://127.0.0.1:51345/YC9YGKFnjaI=/ws
🔥 To hot restart changes while running, press "r" or "R".
For a more detailed help message, press "h". To quit, press "q".
A Dart VM Service on Chrome is available at: http://127.0.0.1:51345/YC9YGKFnjaI=
The Flutter DevTools debugger and profiler on Chrome is available at: http://127.0.0.1:9102?uri=http://127.0.0.1:51345/YC9YGKFnjaI=
Application finished.
C:\Users\jakob\Documents\GitHub\upsheep\packages\test_app>
```
</details>
<details open><summary>Logs with Verbose</summary>
This reveals a lot more DartUri errors than initially thought. Wow
> Too long, see https://pastebin.com/PGQN912h
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.26100.2605], locale de-DE)
• Flutter version 3.27.1 on channel stable at C:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\jakob\AppData\Local\Android\Sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.4)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.10.35027.167
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = C:\Program Files\Android\Android Studio
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] VS Code, 64-bit edition (version 1.96.2)
• VS Code at C:\Program Files\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (2 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.26100.2605]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| tool,has reproducible steps,P1,team-tool,triaged-tool,found in release: 3.27,found in release: 3.28 | medium | Critical |
2,781,751,841 | flutter | How to obtain Flutter Engine Unit Test Coverage on Android | ### Steps to reproduce
I can run the Flutter engine unit tests on Android with a command like this:
`./testing/run_tests.py --type android --android-variant android_debug_arm64`
but how do I obtain unit test coverage? It seems that `--coverage` is not supported on the Android platform.
### Expected results
I can obtain Flutter engine unit test coverage on Android
### Actual results
It seems Flutter engine unit test coverage is not supported on Android
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| a: tests,platform-android,engine,P2,team-android,triaged-android,e: engine-tests-in-framework-repo | low | Critical |
2,781,753,531 | deno | 2.1.5 requires API IsWow64Process2 | Version: Deno 2.1.5
Tested on 64-bit Windows Server 2012R2 / Windows 6.3 (build 6300).
Downgrading to 2.1.4 fixes the issue.
| bug,help wanted | low | Minor |
2,781,753,899 | three.js | Improve explanation on how to migrate from 151 to 152 color management | ### Description
I feel that certain revisions to instructions provided in "[Updates to Color Management in three.js r152](https://discourse.threejs.org/t/updates-to-color-management-in-three-js-r152/50791)" would benefit the community.
### Solution
Merge the following text into the current explanation where appropriate...
### 1. Renderer Settings
Property | Old Name | New Name | Default Behavior Change?
-- | -- | -- | --
Renderer output encoding | renderer.outputEncoding | renderer.outputColorSpace | Yes: From THREE.LinearEncoding to THREE.SRGBColorSpace (r152).
Tone mapping | renderer.toneMapping | Same | No: Default remains THREE.NoToneMapping.
### Texture Properties
Property | Old Name | New Name | Default Behavior Change?
-- | -- | -- | --
Texture encoding | texture.encoding | texture.colorSpace | No: Default remains THREE.LinearEncoding (non-color textures).
sRGB textures | THREE.sRGBEncoding | THREE.SRGBColorSpace | No: Only the name changed.
Linear textures | THREE.LinearEncoding | THREE.LinearSRGBColorSpace | No: Only the name changed.
### Shader Changes
Function/Include | Old Name | New Name | Default Behavior Change?
-- | -- | -- | --
GLSL fragment includes | <encodings_pars_fragment> | <color_space_pars_fragment> | No: Name changed; behavior remains the same.
GLSL linear-to-output function | linearToOutputTexel() | linearToColorSpace() | No: Name changed; behavior remains the same.
GLSL sRGB-to-linear function | sRGBToLinear() | srgbToLinear() | No: Name changed; behavior remains the same.
### Material Properties
Property | Old Behavior | New Behavior | Default Behavior Change?
-- | -- | -- | --
Colors and textures on materials | Assumed sRGB | Assumed sRGB | Yes: Colors are now converted from sRGB to Linear by default.
### Global Defaults
Property | Old Behavior | New Behavior | Default Behavior Change?
-- | -- | -- | --
Color management toggle | THREE.ColorManagement.enabled | Same | Yes: Default is now true (enabled).
### Migration Steps
Previously, inputs, outputs, and blending were all done in the sRGB color space (note: "colorSpace" was formerly called "encoding"). With the improved methodology, three.js still accepts sRGB inputs, but now it converts them to linear. All blending is done in linear color space. Finally, the renderer converts the finished scene back to the sRGB color space.
Here's what you need to do:
1) Set the color space of textures with color explicitly to THREE.SRGBColorSpace. Textures that use non-color data (e.g., normal maps, height maps) can be left as-is, as they default to THREE.LinearSRGBColorSpace.
2) For all fragment shaders, if the old fragment shader produced sRGB values, under the new methodology it needs to generate linear values. To adapt the shader, apply the linearToOutputTexel conversion function before assigning the pixel value to gl_FragColor. Here's an example:
```glsl
#include <color_space_pars_fragment>

// Old (output assumed to be sRGB):
gl_FragColor = vec4(color, 1.0);

// New (convert to linear space):
gl_FragColor = vec4(linearToOutputTexel(color), 1.0);
```
This ensures the shader aligns with the updated linear workflow.
You can also include `colorspace_fragment` after assigning `gl_FragColor`.
```
gl_FragColor = vec4(color, 1.0);
#include <colorspace_fragment>
```
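For step 1, a minimal sketch on the JavaScript side (the file names are hypothetical; `texture.colorSpace` and `THREE.SRGBColorSpace` are the renamed API from the tables above):

```ts
import * as THREE from "three";

// Color data: mark explicitly as sRGB.
const colorMap = new THREE.TextureLoader().load("diffuse.png");
colorMap.colorSpace = THREE.SRGBColorSpace;

// Non-color data (normal maps, height maps): leave at the default
// THREE.LinearSRGBColorSpace, so no change is needed.
const normalMap = new THREE.TextureLoader().load("normal.png");
```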
### Alternatives
If any of the proposed new instructions are inaccurate please correct them.
Explaining what the following two lines do in more detail might help.
```
#include <tonemapping_fragment>
#include <encodings_fragment>
```
### Additional context
_No response_
| Suggestion,Documentation | low | Major |
2,781,754,386 | godot | scaling node3d containing character3d to <0 causes hang (gpu or system) | ### Tested versions
4.3
### System information
windows 10
### Issue description
GPU or system hang.
I don't want everything to break.
### Steps to reproduce
- run preview
- 2-view with one cinematic preview mode also
- transform scale a node3d containing a characterbody, to below 0
- hang gpu or system
did it twice, dont wanna anymore!
### Minimal reproduction project (MRP)
node3d(resize)>node3d(resize)>characterBody3d(w CS)>node3d>Camera3d(current)
editor has 2 views, one has Camera+ cinematic preview on | bug,topic:physics,needs testing,crash,topic:3d | low | Minor |
2,781,772,198 | transformers | Segmentation fault: address not mapped to object at address 0x100000007 | ### System Info
transformers-cli env causes the same Segfault.
uname -a: `Linux matteo-thinkpadr61 6.12.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 02 Jan 2025 22:52:26 +0000 x86_64 GNU/Linux`
[cpuinfo.txt](https://github.com/user-attachments/files/18385563/cpuinfo.txt)
[piplist.txt](https://github.com/user-attachments/files/18385564/piplist.txt)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer`
[problema_hf.txt](https://github.com/user-attachments/files/18385565/problema_hf.txt)
### Expected behavior
Hi!
I get a segmentation fault whenever I try to import modules such as GPT2Tokenizer, GPT2Model, AutoTokenizer, or AutoModelForCausalLM using version 4.48.0 of transformers installed from pip. The problem also happens with versions 4.47.1 and 4.47.0.
With version 4.46.3, instead, the issue is less severe: for example, I can import GPT2Tokenizer and GPT2Model, but still not AutoTokenizer or AutoModelForCausalLM, nor run transformers-cli env. I tried a random older version, 4.36.1, and still got a segmentation fault on transformers-cli env.
The machine I am using is not new: it's a ThinkPad R61 with an Intel T9300 CPU, 8GB of DDR2 RAM, and a 1TB SSD. Even if the problem may lie in an old CPU, I think it's important to notice if some update broke compatibility with older hardware, as such machines may still be in use by someone (yes, it seems incredible, but I am very happily doing research on LLMs on that machine; with Arch it runs very fast, and I have a remote HPC server at my disposal for heavy jobs. I don't like to change things that work.) Moreover, it doesn't look like an Illegal Instruction error, so I am not sure the CPU is the issue.
Unfortunately, I do not recall whether I already ran transformers successfully on this machine, perhaps in October.
The datasets library works perfectly: I use it very often on this machine, even with large datasets, and I have never had a problem.
I attach: segmentation fault error, cpuid output, list of installed packages
[problema_hf.txt](https://github.com/user-attachments/files/18385565/problema_hf.txt)
[cpuinfo.txt](https://github.com/user-attachments/files/18385563/cpuinfo.txt)
[piplist.txt](https://github.com/user-attachments/files/18385564/piplist.txt)
```
Caught signal 11 (Segmentation fault: address not mapped to object at address 0x100000007)
==== backtrace (tid: 105455) ====
0 0x000000000004c822 ucs_event_set_fd_get() ???:0
1 0x000000000004c9ed ucs_event_set_fd_get() ???:0
2 0x000000000003d1d0 __sigaction() ???:0
3 0x00000000001c6b36 PyLong_AsInt() ???:0
4 0x000000000017a284 _PyEval_EvalFrameDefault() ???:0
5 0x0000000000240ce5 PyEval_EvalCode() ???:0
6 0x000000000025a444 _PyDict_DelItemIf() ???:0
7 0x000000000018c55d _PyFunction_SetVersion() ???:0
8 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
9 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
10 0x00000000001cc09a PyObject_CallMethodObjArgs() ???:0
11 0x00000000001cb173 PyImport_ImportModuleLevelObject() ???:0
12 0x000000000017a2b4 _PyEval_EvalFrameDefault() ???:0
13 0x0000000000240ce5 PyEval_EvalCode() ???:0
14 0x000000000025a444 _PyDict_DelItemIf() ???:0
15 0x000000000018c55d _PyFunction_SetVersion() ???:0
16 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
17 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
18 0x00000000001cc09a PyObject_CallMethodObjArgs() ???:0
19 0x00000000001cb173 PyImport_ImportModuleLevelObject() ???:0
20 0x000000000017a2b4 _PyEval_EvalFrameDefault() ???:0
21 0x0000000000240ce5 PyEval_EvalCode() ???:0
22 0x000000000025a444 _PyDict_DelItemIf() ???:0
23 0x000000000018c55d _PyFunction_SetVersion() ???:0
24 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
25 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
26 0x00000000001cc09a PyObject_CallMethodObjArgs() ???:0
27 0x00000000001cb173 PyImport_ImportModuleLevelObject() ???:0
28 0x000000000017a2b4 _PyEval_EvalFrameDefault() ???:0
29 0x0000000000240ce5 PyEval_EvalCode() ???:0
30 0x000000000025a444 _PyDict_DelItemIf() ???:0
31 0x000000000018c55d _PyFunction_SetVersion() ???:0
32 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
33 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
34 0x00000000001cc09a PyObject_CallMethodObjArgs() ???:0
35 0x00000000001cb173 PyImport_ImportModuleLevelObject() ???:0
36 0x000000000018c55d _PyFunction_SetVersion() ???:0
37 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
38 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
39 0x00000000001cc09a PyObject_CallMethodObjArgs() ???:0
40 0x00000000001cb173 PyImport_ImportModuleLevelObject() ???:0
41 0x000000000017a2b4 _PyEval_EvalFrameDefault() ???:0
42 0x0000000000240ce5 PyEval_EvalCode() ???:0
43 0x000000000025a444 _PyDict_DelItemIf() ???:0
44 0x000000000018c55d _PyFunction_SetVersion() ???:0
45 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
46 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
47 0x00000000001cc09a PyObject_CallMethodObjArgs() ???:0
48 0x00000000001cb173 PyImport_ImportModuleLevelObject() ???:0
49 0x000000000017a2b4 _PyEval_EvalFrameDefault() ???:0
50 0x0000000000240ce5 PyEval_EvalCode() ???:0
51 0x000000000025a444 _PyDict_DelItemIf() ???:0
52 0x000000000018c55d _PyFunction_SetVersion() ???:0
53 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
54 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
55 0x00000000001cc09a PyObject_CallMethodObjArgs() ???:0
56 0x00000000001cb173 PyImport_ImportModuleLevelObject() ???:0
57 0x000000000018c55d _PyFunction_SetVersion() ???:0
58 0x00000000001771b4 _PyEval_EvalFrameDefault() ???:0
59 0x00000000001943c4 PyObject_GenericGetAttr() ???:0
=================================
Errore di segmentazione (core dump creato)
``` | bug | low | Critical |
2,781,776,407 | react-native | TextInput inside ScrollView is immediately closed when covered by IME | ### Description
`TextInput` inside a `ScrollView` immediately closes itself when obstructed by the input method window.
This was originally discovered by @anarchtism and haaaaah.
### Steps to reproduce
1. Install https://codeberg.org/natkr/SuperTallInputMethod (clone and `./gradlew installDebug`)
2. Open the SuperTallInputMethod app, and enable and select it using the buttons provided
3. Tap the text field, and note that the "keyboard" opens normally
4. Install and run https://codeberg.org/natkr/repros/src/branch/react-native/textinput-close-when-obstructed
5. Tap the text field, and note that the "keyboard" opens briefly but then closes immediately
When reproing this I've been running Android 15 (API 35) in the official emulator.
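A minimal sketch of the affected layout, assuming it matches the linked reproducer (the essential ingredients are just a `TextInput` inside a `ScrollView`):

```tsx
// Hypothetical minimal App (assumed to match the linked reproducer):
// tapping the TextInput while a tall IME is selected makes the IME open
// and then immediately close again.
import React from "react";
import { ScrollView, TextInput } from "react-native";

export default function App() {
  return (
    <ScrollView>
      <TextInput placeholder="Tap me with SuperTallInputMethod active" />
    </ScrollView>
  );
}
```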
### React Native Version
0.76.6
### Affected Platforms
Runtime - Android, Build - Linux
### Output of `npx react-native info`
```text
System:
OS: Linux 6.12 Arch Linux
CPU: (24) x64 AMD Ryzen 9 5900X 12-Core Processor
Memory: 5.19 GB / 62.68 GB
Shell:
version: 5.2.37
path: /bin/bash
Binaries:
Node:
version: 23.1.0
path: /bin/node
Yarn:
version: 1.22.22
path: /bin/yarn
npm:
version: 10.9.0
path: /bin/npm
Watchman: Not Found
SDKs:
Android SDK:
API Levels:
- "35"
Build Tools:
- 34.0.0
- 35.0.0
- 35.0.0
System Images:
- android-10 | Google APIs Intel x86 Atom
- android-28 | Google Play Intel x86 Atom
- android-30 | Google Play Intel x86 Atom
- android-34 | Google APIs Intel x86_64 Atom
- android-35 | Intel x86_64 Atom
- android-35 | Google APIs Intel x86_64 Atom
Android NDK: Not Found
IDEs:
Android Studio: Not Found
Languages:
Java:
version: javac 23
path: /bin/javac
Ruby:
version: 3.3.5
path: /bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
```
### Stacktrace or Logs
```text
There is no error message, and nothing is reported in logcat or the react native devtools.
```
### Reproducer
https://codeberg.org/natkr/repros/src/branch/react-native/textinput-close-when-obstructed
### Screenshots and Videos
[Screen_recording_20250111_123121.webm](https://github.com/user-attachments/assets/de0114a8-cb98-401e-8376-aa42f79018b5)
| Component: TextInput,Component: ScrollView,Needs: Repro,Needs: Attention | low | Critical |
2,781,781,611 | pytorch | Inductor C++ Wrapper + autograd cause error in the second run because of FX graph cache | ### 🐛 Describe the bug
```python
import torch
import torch._inductor.config as config
from torch import Tensor
config.cpp_wrapper = True
@torch.compile
def foo(x: Tensor):
return x.sin()
x = torch.tensor(0.0, device="cuda", requires_grad=True)
foo(x).backward()
print(x.grad)
```
Running this code __twice__ will produce an error on the second run:
```
Traceback (most recent call last):
File "/root/modded-nanogpt/custom_op_cache.py", line 15, in <module>
foo(x).backward()
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1958, in backward
return impl_fn()
^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1944, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2079, in _backward_impl
out = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/modded-nanogpt/.venv/lib/python3.12/site-packages/torch/_inductor/utils.py", line 2203, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_root/pw/cpwoz7xtew3ko7zejrn4bsrizhftvllcrykvty7vz5xn6v3zmkbp.py", line 262, in g
output_handles = f(input_handles)
^^^^^^^^^^^^^^^^
RuntimeError: CUDA driver error: invalid device context
```
Turning off the FX graph cache fixes it:
```python
import os
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "0"
import torch
import torch._inductor.config as config
from torch import Tensor
config.cpp_wrapper = True
@torch.compile
def foo(x: Tensor):
return x.sin()
x = torch.tensor(0.0, device="cuda", requires_grad=True)
foo(x).backward()
print(x.grad)
```
This bug might be related to https://github.com/pytorch/pytorch/issues/144344
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250110+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 19 2024, 14:33:20) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-161
Off-line CPU(s) list: 162-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250110+cu126
[conda] Could not collect
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | triaged,module: fx,oncall: pt2,module: inductor,compile-cache | low | Critical |
2,781,797,020 | ui | [feat]: Button behavior | ### Feature description
In a recent project, I needed to add a new behavior to the button component, which is quite simple to do using ShadCN. Later, when I started another project, I felt the need for the same feature again and thought it might be useful if this behavior was part of the default ShadCN package.
The change I made to my button component was straightforward; I added a new behavior attribute. Here's how it turned out:
```js
const buttonVariants = cva(
"inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md text-sm font-medium ring-offset-background transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50 [&_svg]:pointer-events-none [&_svg]:size-4 [&_svg]:shrink-0",
{
variants: {
variant: {
default: "bg-primary text-primary-foreground hover:bg-primary/90",
destructive:
"bg-destructive text-destructive-foreground hover:bg-destructive/90",
outline:
"border border-input bg-background hover:bg-accent hover:text-accent-foreground",
secondary:
"bg-secondary text-secondary-foreground hover:bg-secondary/80",
ghost: "hover:bg-accent hover:text-accent-foreground",
link: "text-primary underline-offset-4 hover:underline",
},
size: {
default: "h-10 px-4 py-2",
sm: "h-9 rounded-md px-3",
lg: "h-11 rounded-md px-8",
icon: "h-10 w-10",
},
behavior: {
fit: "w-fit",
full: "w-full",
},
},
defaultVariants: {
variant: "default",
size: "default",
behavior: "fit",
},
}
)
```
I added the property to the arguments like this: `({ className, variant, size, behavior, asChild = false, ...props }, ref)`
And the corresponding component setup: `className={cn(buttonVariants({ variant, size, behavior, className }))}`
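Putting those fragments together, a minimal sketch of the resulting component, assuming the stock shadcn `button.tsx` template (from which `React`, `Slot`, `cn`, and `ButtonProps` all come); only `behavior` is new:

```tsx
// Sketch of the modified Button (assumes the standard shadcn template;
// the `behavior` prop/variant is the only addition).
const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
  ({ className, variant, size, behavior, asChild = false, ...props }, ref) => {
    const Comp = asChild ? Slot : "button"
    return (
      <Comp
        className={cn(buttonVariants({ variant, size, behavior, className }))}
        ref={ref}
        {...props}
      />
    )
  }
)
Button.displayName = "Button"
```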
To render a button, it looks like this:
```jsx
<Button>Regular</Button>
<Button behavior="full">Full width</Button>
```
I’d like to know your thoughts on whether this is a change worth implementing. I didn’t create a PR because I’m not entirely sure where exactly to do this in the ShadCN codebase, so I thought it’d be better to ask for your opinion first.
### Affected component/components
Button
### Additional Context
It's very helpful.
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,781,800,777 | rust | rustc segfaults on `global_asm!("jz [foo]")` on x86_64-pc-windows-msvc (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION) | ### Code
```Rust
use std::arch::global_asm;
global_asm!("jz [0x00547274]");
```
No other lines of code, no dependencies. Other code (also with `global_asm!`) builds fine.
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-pc-windows-msvc
release: 1.84.0
LLVM version: 19.1.5
```
### Error output
```
PS C:\Users\Benni\rustccrash> cargo build
Compiling rustccrash v0.1.0 (C:\Users\Benni\rustccrash)
error: could not compile `rustccrash` (lib)
Caused by:
process didn't exit successfully: `C:\Users\Benni\.rustup\toolchains\stable-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name rustccrash --edition=2021 src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=241 --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debuginfo=2 --check-cfg cfg(docsrs) --check-cfg "cfg(feature, values())" -C metadata=c9c8a2735a047fdf -C extra-filename=-c9c8a2735a047fdf --out-dir
C:\Users\Benni\rustccrash\target\debug\deps -C incremental=C:\Users\Benni\rustccrash\target\debug\incremental -L dependency=C:\Users\Benni\rustccrash\target\debug\deps` (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION)
```
| I-crash,A-LLVM,O-windows,T-compiler,O-windows-msvc,C-bug | low | Critical |
2,781,821,785 | rust | Unexpected result when selecting aliases | Happens when building docs with `cargo doc --document-private-items`
https://github.com/paradigmxyz/reth/pull/13735
https://github.com/paradigmxyz/reth/pull/13735/commits/2fe6d519b4a8d0c22aaf49d161aac498618c21b6
### Code
```Rust
type EngineServiceType<N, Client> = ChainOrchestrator<
    EngineHandler<
        EngineApiRequestHandler<
            EngineApiRequest<<N as NodeTypesWithEngine>::Engine, <N as NodeTypes>::Primitives>,
            <N as NodeTypes>::Primitives,
        >,
        EngineMessageStream<<N as NodeTypesWithEngine>::Engine>,
        BasicBlockDownloader<Client, BlockTy<N>>,
    >,
    PipelineSync<N>,
>;
```
https://github.com/paradigmxyz/reth/actions/runs/12723951444/job/35469586386?pr=13735
### Meta
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (a580b5c37 2025-01-08)
binary: rustc
commit-hash: a580b5c379b4fca50dfe5afc0fc0ce00921e4e00
commit-date: 2025-01-08
host: aarch64-apple-darwin
release: 1.86.0-nightly
LLVM version: 19.1.6
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_trait_selection/src/traits/auto_trait.rs:720:33:
Unexpected result when selecting service::EngineService<N/#0, Client/#1, E/#2> Obligation(predicate=Binder { value: ProjectionPredicate(AliasTerm { args: [Client/#1], def_id: DefId(377:807 ~ reth_network_p2p[7774]::headers::client::HeadersClient::Header), .. }, Term::Ty(?15t)), bound_vars: [] }, depth=2)
query stack during panic:
end of query stack
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
0: 0x10cbedbc8 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hdf6b0ec9c000d35c
1: 0x10a3f02b0 - core::fmt::write::h16c5bd41dd834a6b
2: 0x10cbe1bb8 - std::io::Write::write_fmt::hec5eafa956dd01b2
3: 0x10cbeda88 - std::sys::backtrace::BacktraceLock::print::hdc26b9278d26ab98
4: 0x10cbeffac - std::panicking::default_hook::{{closure}}::ha59987241794d851
5: 0x10cbefda8 - std::panicking::default_hook::hd22dec52cb554d05
6: 0x10ae55a3c - std[3f4798804b5c7a44]::panicking::update_hook::<alloc[f8a9b9de03ffcad8]::boxed::Box<rustc_driver_impl[5f5e67e55176e86b]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x10cbf09d4 - std::panicking::rust_panic_with_hook::h749e8bc345b0ef8c
8: 0x10cbf0540 - std::panicking::begin_panic_handler::{{closure}}::h1b2213429acd8861
9: 0x10cbee058 - std::sys::backtrace::__rust_end_short_backtrace::h7a0f0fe1c5a10461
10: 0x10cbf0204 - _rust_begin_unwind
11: 0x10f2be958 - core::panicking::panic_fmt::h859835561e58f44c
12: 0x10c99fb04 - <rustc_trait_selection[424d8f1623cf1e5f]::traits::auto_trait::AutoTraitFinder>::evaluate_predicates
13: 0x100a9831c - rustdoc[c63cc3a5136b6fa1]::clean::auto_trait::synthesize_auto_trait_impl
14: 0x1009853d8 - rustdoc[c63cc3a5136b6fa1]::clean::utils::synthesize_auto_trait_and_blanket_impls
15: 0x100a7618c - <rustdoc[c63cc3a5136b6fa1]::passes::collect_trait_impls::SyntheticImplCollector as rustdoc[c63cc3a5136b6fa1]::visit::DocVisitor>::visit_item
16: 0x100a76288 - <rustdoc[c63cc3a5136b6fa1]::passes::collect_trait_impls::SyntheticImplCollector as rustdoc[c63cc3a5136b6fa1]::visit::DocVisitor>::visit_item
17: 0x100a76288 - <rustdoc[c63cc3a5136b6fa1]::passes::collect_trait_impls::SyntheticImplCollector as rustdoc[c63cc3a5136b6fa1]::visit::DocVisitor>::visit_item
18: 0x100a72f5c - rustdoc[c63cc3a5136b6fa1]::passes::collect_trait_impls::collect_trait_impls
19: 0x1009a8c5c - rustdoc[c63cc3a5136b6fa1]::core::run_global_ctxt
20: 0x100a95474 - rustdoc[c63cc3a5136b6fa1]::main_args::{closure#2}::{closure#0}
21: 0x1008a155c - <rustc_interface[de273c1981fa55fc]::passes::create_and_enter_global_ctxt<(), rustdoc[c63cc3a5136b6fa1]::main_args::{closure#2}::{closure#0}>::{closure#2} as core[b8802227b17cbab3]::ops::function::FnOnce<(&rustc_session[5f519a7c706e20bb]::session::Session, rustc_middle[5b96f28b450e74f0]::ty::context::CurrentGcx, &std[3f4798804b5c7a44]::sync::once_lock::OnceLock<rustc_middle[5b96f28b450e74f0]::ty::context::GlobalCtxt>, &rustc_data_structures[8091d705d2ed9275]::sync::worker_local::WorkerLocal<rustc_middle[5b96f28b450e74f0]::arena::Arena>, &rustc_data_structures[8091d705d2ed9275]::sync::worker_local::WorkerLocal<rustc_hir[fa2110f737f883df]::Arena>, rustdoc[c63cc3a5136b6fa1]::main_args::{closure#2}::{closure#0})>>::call_once::{shim:vtable#0}
22: 0x1008932ec - rustc_interface[de273c1981fa55fc]::interface::run_compiler::<(), rustdoc[c63cc3a5136b6fa1]::main_args::{closure#2}>::{closure#1}
23: 0x10082b3f4 - std[3f4798804b5c7a44]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[de273c1981fa55fc]::util::run_in_thread_with_globals<rustc_interface[de273c1981fa55fc]::util::run_in_thread_pool_with_globals<rustc_interface[de273c1981fa55fc]::interface::run_compiler<(), rustdoc[c63cc3a5136b6fa1]::main_args::{closure#2}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
24: 0x10089d894 - <<std[3f4798804b5c7a44]::thread::Builder>::spawn_unchecked_<rustc_interface[de273c1981fa55fc]::util::run_in_thread_with_globals<rustc_interface[de273c1981fa55fc]::util::run_in_thread_pool_with_globals<rustc_interface[de273c1981fa55fc]::interface::run_compiler<(), rustdoc[c63cc3a5136b6fa1]::main_args::{closure#2}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[b8802227b17cbab3]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
25: 0x10cbfb9b8 - std::sys::pal::unix::thread::Thread::new::thread_start::h6d88171faf776f6f
26: 0x197b63fa8 - __pthread_joiner_wake
```
</p>
</details>
| T-rustdoc,I-ICE,C-bug,A-synthetic-impls,A-auto-traits | low | Critical |
2,781,821,907 | neovim | Treesitter `query:iter_captures()` don't respect `stop` parameter | ### Problem
Both reproducible on `latest` and `nightly`.
I use treesitter to query special code blocks in Python code. In this case, I want to capture the content of string expression statements within a specific range.
The `start` parameter is respected, but the `stop` parameter is not.
### Steps to reproduce
Save the code below as `minimal.lua`, then run `nvim -l minimal.lua`
```lua
local query = vim.treesitter.query.parse(
  "python",
  [[
    (module
      (expression_statement
        (string
          (string_start)
          (string_content) @cellcontent
          (string_end)
        )
      )
    )
  ]]
)
local code = table.concat({
  "# %% [md]", -- line 0
  '"""', -- line 1
  "this is markdown content", -- line 2
  "```python", -- line 3
  "foo", -- line 4
  "```", -- line 5
  '"""', -- line 6
  "# %% [markdown]", -- line 7
  '"""', -- line 8
  "this is another markdown content", -- line 9
  "```lua", -- line 10
  "bar", -- line 11
  "```", -- line 12
  '"""', -- line 13
}, "\n")
local lang_tree = vim.treesitter.get_string_parser(code, "python")
local root = lang_tree:parse()[1]:root()
do
  --- case 1
  local starts = vim.iter(query:iter_captures(root, code, 6))
    :filter(function(id)
      return query.captures[id] == "cellcontent"
    end)
    :map(function(_, node)
      local row = node:start()
      return row
    end)
    :totable()
  vim.print(starts)
  -- assert.is_same({ 8 }, starts)
  assert(#starts == 1)
  assert(starts[1] == 8)
end
do
  --- case 2
  local starts = vim.iter(query:iter_captures(root, code, 0, 6))
    :filter(function(id)
      return query.captures[id] == "cellcontent"
    end)
    :map(function(_, node)
      local row = node:start()
      return row
    end)
    :totable()
  vim.print(starts)
  -- assert.is_same({ 1 }, starts)
  assert(#starts == 1, "should only capture the first string expr")
  assert(starts[1] == 1, "should capture the first string expr")
end
```
### Expected behavior
I expect both test cases to pass, with this output:
```
{ 8 }
{ 1 }
```
But I got:
```
{ 8 }
{ 1, 8 }
E5113: Error while calling lua chunk: mini.lua:65: only should first string expr
stack traceback:
[C]: in function 'assert'
mini.lua:65: in main chunk
```
### Nvim version (nvim -v)
NVIM v0.11.0-dev-1529+g9e0d40f7e4
### Vim (not Nvim) behaves the same?
nvim only
### Operating system/version
6.12.8-arch1-1
### Terminal name/version
alacritty+tmux
### $TERM environment variable
tmux-256color
### Installation
AUR | bug,has:workaround,treesitter | low | Critical |
2,781,835,163 | godot | Godot Editor does not support Custom Android Build Templates written in Kotlin (build.gradle.kts) | ### Tested versions
- Reproducible in **Godot 4.3** v4.3.stable.official [77dcf97d8]
### System information
Windows 11 Godot v4.3.stable.official [77dcf97d8] Mobile
### Issue description
In the Godot Source code: [platform/android/export/export_plugin.cpp#L2552](https://github.com/godotengine/godot/blob/abf8e1e6f9e165ea209350ab9a98f01feca93b90/platform/android/export/export_plugin.cpp#L2552) I can see that the _export_plugin_ is only checking for a **build.gradle** file to be present.
It should _also check for_ **build.gradle.kts** in the case of custom Android Build Templates written in _Kotlin_.
### Steps to reproduce
**GIVEN**
- A godot project that exports to **Android**.
- A custom Android Build Template (e.g. [build.zip](https://github.com/user-attachments/files/18385846/build.zip)) that:
- Uses the _same folder structure as the one provided by Godot_.
- Uses **build.gradle.kts** (i.e. Kotlin) to build as [recommended by the Android Team](https://developer.android.com/build/migrate-to-kotlin-dsl)
**WHEN**
- The Godot Editor's Export dialog box is loaded to export to android.
**THEN**
- Godot assumes an android template is _not installed_.

**Expected Behaviour**
- **The project should still be exportable**
### Minimal reproduction project (MRP)
**Any Godot Mobile Project** with this example build template [build.zip](https://github.com/user-attachments/files/18385846/build.zip) | enhancement,discussion,platform:android,topic:export | low | Minor |
2,781,848,795 | pytorch | some errors in torch.compile(model,fullgraph=True,mode="reduce-overhead") on multi-gpu | ### 🐛 Describe the bug
code:
```python
import torch
from transformers import StaticCache
from torch.nn.attention import SDPBackend, sdpa_kernel

# NOTE: model, tokenizer, inputs, layer_device_map and compile_layer are set up
# earlier in my script (the model is sharded across several GPUs via accelerate).

NUM_TOKENS_TO_GENERATE = 40
torch_device = "cuda"

def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
    logits = model(
        cur_token,
        position_ids=input_pos,
        cache_position=cache_position,
        past_key_values=past_key_values,
        return_dict=False,
        use_cache=True,
    )[0]
    new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    return new_token

batch_size, seq_length = inputs["input_ids"].shape
with torch.no_grad():
    past_key_values = StaticCache(
        config=model.config, batch_size=1, max_cache_len=4096,
        device=torch_device, dtype=model.dtype, layer_device_map=layer_device_map,
    )
    cache_position = torch.arange(seq_length, device=torch_device)
    generated_ids = torch.zeros(
        batch_size, seq_length + NUM_TOKENS_TO_GENERATE + 1, dtype=torch.int, device=torch_device
    )
    generated_ids[:, cache_position] = inputs["input_ids"].to(torch_device).to(torch.int)
    logits = model(
        **inputs, cache_position=cache_position, past_key_values=past_key_values,
        return_dict=False, use_cache=True,
    )[0]
    next_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    generated_ids[:, seq_length] = next_token[:, 0]
    print(next_token.device)

    # decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True)
    compile_layer(model)

    cache_position = torch.tensor([seq_length + 1], device=torch_device)
    for _ in range(1, NUM_TOKENS_TO_GENERATE):
        # with sdpa_kernel(SDPBackend.MATH):
        next_token = decode_one_tokens(model, next_token.clone(), None, cache_position, past_key_values)
        generated_ids[:, cache_position] = next_token.int()
        cache_position += 1

text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
error:
```
Unsupported: torch.* op returned non-Tensor device call_function <built-in function getitem>
from user code:
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 364, in pre_forward
return send_to_device(args, self.execution_device), send_to_device(
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 184, in send_to_device
{
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 185, in <dictcomp>
k: t if k in skip_keys else send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys)
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/accelerate/utils/operations.py", line 156, in send_to_device
return tensor.to(device, non_blocking=non_blocking)
File "/home/bcds/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1299, in to
device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
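For reference, the fallback the trace itself suggests looks like the sketch below. It is a workaround rather than a fix, since it just lets Dynamo fall back to eager wherever accelerate's `pre_forward` dispatch hook is untraceable:
```python
# Workaround sketch based on the suggestion in the error output above:
# suppress the graph break instead of requiring a single full graph.
import torch._dynamo
torch._dynamo.config.suppress_errors = True

# and/or compile without demanding fullgraph:
decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=False)
```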
### Versions
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8558
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 35%
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 520 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] torch==2.5.1+cu118
[pip3] torchaudio==2.5.1+cu118
[pip3] torchvision==0.20.1+cu118
[pip3] triton==3.1.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] torch 2.5.1+cu118 pypi_0 pypi
[conda] torchaudio 2.5.1+cu118 pypi_0 pypi
[conda] torchvision 0.20.1+cu118 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu | needs reproduction,triaged,oncall: pt2 | low | Critical |
2,781,866,496 | kubernetes | VolumeAttachment is not deleted when the CSI plugin change from requiring attach to not requiring attach | ### What happened?
After [volumeAttacher.Attach](https://github.com/kubernetes/kubernetes/blob/v1.32.0/pkg/volume/util/operationexecutor/operation_generator.go#L271) executes successfully, the VolumeAttachment is created. At this point, if the CSI driver's ATTACHREQUIRED setting changes from true to false, [MarkVolumeAsAttached](https://github.com/kubernetes/kubernetes/blob/v1.32.0/pkg/volume/util/operationexecutor/operation_generator.go#L301) fails.
The volume is then removed from the [dsw](https://github.com/kubernetes/kubernetes/blob/v1.32.0/pkg/controller/volume/attachdetach/populator/desired_state_of_world_populator.go#L166), leaving the VolumeAttachment stranded without ever being cleaned up.
### What did you expect to happen?
The VolumeAttachment should be cleaned up.
### How can we reproduce it (as minimally and precisely as possible)?
1. Install xsky-nfs-csi (other CSI drivers should be able to reproduce this) and set attachRequired to true
2. Create PVC and pod using the CSI
3. Keep querying volumeattachment using 'kubectl get volumeattachment'. As soon as the volumeattachment appears, immediately uninstall the CSI driver, then reinstall it with attachRequired set to false
4. After waiting for some time, query volumeattachment again and find that the volumeattachment still hasn't been deleted
### Anything else we need to know?
[operation_generator.go](https://github.com/kubernetes/kubernetes/blob/v1.32.0/pkg/volume/util/operationexecutor/operation_generator.go#L301-L307)
I think we should add error handling here to mark the volume as "Uncertain", so the volume gets added to the asw, allowing the next retry to successfully remove the VolumeAttachment.
I have tested that this solves the problem. If this is confirmed as an issue, I will submit a PR.
### Kubernetes version
<details>
```console
v1.31.2
```
</details>
### Cloud provider
<details>
none
</details>
### OS version
<details>
```console
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Critical |
2,781,877,129 | PowerToys | Add Cloud-Based TTS Voices as System-Wide Windows Voices | ### Description of the new feature / enhancement
The feature would allow users to register cloud-based Text-to-Speech (TTS) services, such as Azure Cognitive Services or OpenAI's TTS API, as system-wide Windows voices. These voices would appear in the Windows Speech settings and would be selectable in any application that supports the Windows Speech API, alongside built-in voices like “Microsoft Hazel.”
Users would configure the feature by providing API credentials and selecting their preferred voice from the cloud service. The TTS audio would be fetched in real-time from the cloud and processed for use by any TTS-enabled application.
### Scenario when this would be used?
This feature would be useful in scenarios where:
1. Users need higher-quality, natural-sounding voices for accessibility tools like screen readers.
2. Multilingual users require advanced language support that built-in Windows voices might lack.
3. Power users want to customize the TTS experience with advanced AI voices for productivity, such as voice-assisted workflows.
For example, a visually impaired user could benefit from hearing documents read aloud in a more natural and expressive voice, making the listening experience less monotonous. Similarly, professionals working in different languages could use advanced AI-generated voices for better pronunciation and clarity.
### Supporting information
This idea builds on PowerToys' existing integration with OpenAI for its advanced paste functionality, demonstrating that cloud-based services can be integrated effectively into the toolset. | Needs-Triage | low | Minor |
2,781,878,383 | godot | Gridmap Duplicate/Move tool: Cursor is jumping to a new position | ### Tested versions
Godot 4.3dev7
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 2060 (NVIDIA; 32.0.15.5613) - Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz (12 threads)
### Issue description
When pressing C or X to duplicate or move a selection of tiles, the cursor jumps to a new position, which is annoying/disruptive. Especially when the selection is large, as the cursor jumps to the bottom-left corner of the selection.
https://github.com/user-attachments/assets/1ac6ae83-348b-4017-b44a-2d47c20b61e3
### Steps to reproduce
1. Create a Gridmap, place a few tiles
2. Select multiple tiles, starting from top-left and dragging to bottom-right
3. Press C or X
4. Observe the selection jump
### Minimal reproduction project (MRP)
I will make one when I start working on it (until then, please refer to the steps to reproduce or the video provided). | bug,topic:editor,topic:3d | low | Minor |
2,781,885,357 | godot | Gridmap Moving/Duplicating tool: Undo does not cancel the operation correctly | ### Tested versions
Godot 4.4dev7
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 2060 (NVIDIA; 32.0.15.5613) - Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz (12 threads)
### Issue description
When we press CTRL+Z (Undo) while in the middle of a moving/duplicating operation, the block being moved/duplicated does not get deleted. It is as if the operation is not being cancelled.
Below, I cut/move a piece and then press Undo (see CTRL+Z in the video below): you can see the piece reappearing at its original place, but the piece at the cursor position is still there.
https://github.com/user-attachments/assets/587164e8-23e9-4b87-bffc-63b44333471c
### Steps to reproduce
1. Create a Gridmap and add a few tiles
2. Select a group of tiles
3. Press C or X, move around
4. Press CTRL+Z
5. Observe the results
### Minimal reproduction project (MRP)
I will create one later. | bug,topic:editor,topic:3d | low | Minor |
2,781,905,080 | flutter | MenuAnchor is overlapping with button when opening upwards | ### Steps to reproduce
Simply run the sample code in the code editor of the documentation page for sample #2.
On the rendered UI, click the top button, then the bottom button, and compare the layouts.
---
I was checking the documentation of PopupMenuButton https://api.flutter.dev/flutter/material/PopupMenuButton-class.html , since I wanted to build a popup menu that opens upwards when I click a bottom navbar button. I wanted the menu to not overlap with the navbar.
Sample #2 (flutter create --sample=material.PopupMenuButton.2 mysample) caught my eye. I used the code provided in the sample to test whether MenuAnchor would suit my needs. I modified it slightly, changing the alignment of the buttons and the menu opening direction. I ran my code in the documentation page editor.

### Expected results
- Green button opens context menu on click. This context menu opens downwards. Top border of opened menu does not overlap with bottom pixel of button.
- Red button opens context menu on click. This context menu opens upwards. Bottom border of opened menu does not overlap with top pixel of button.
### Actual results
- Green button opens context menu on click.✔️ This context menu opens downwards. ✔️Top border of opened menu does not overlap with bottom pixel of button. ✔️
- Red button opens context menu on click.✔️ This context menu opens upwards.✔️ Bottom border of opened menu **overlaps** with top pixels of button.❌
---
Instead of reporting a bug, I could just add some padding to the menu. That might work if the button has a constant height in pixels. But if the button's height is flexible and measured as a % of screen height, then it's not possible to handle this gracefully.
### Code sample
<details open><summary>Code sample</summary>
```
import 'package:flutter/material.dart';

/// Flutter code sample for [MenuAnchor].

void main() => runApp(const MenuAnchorApp());

// This is the type used by the menu below.
enum SampleItem { itemOne, itemTwo, itemThree }

class MenuAnchorApp extends StatelessWidget {
  const MenuAnchorApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: MenuAnchorExample(),
    );
  }
}

class MenuAnchorExample extends StatefulWidget {
  const MenuAnchorExample({super.key});

  @override
  State<MenuAnchorExample> createState() => _MenuAnchorExampleState();
}

class _MenuAnchorExampleState extends State<MenuAnchorExample> {
  SampleItem? selectedMenu;

  Widget _buildMenuButton(Color color) {
    return MenuAnchor(
      builder:
          (BuildContext context, MenuController controller, Widget? child) {
        return IconButton(
          style: IconButton.styleFrom(backgroundColor: color),
          onPressed: () {
            if (controller.isOpen) {
              controller.close();
            } else {
              controller.open();
            }
          },
          icon: const Icon(Icons.more_horiz, size: 40),
          tooltip: 'Show menu',
        );
      },
      menuChildren: List<MenuItemButton>.generate(
        3,
        (int index) => MenuItemButton(
          onPressed: () =>
              setState(() => selectedMenu = SampleItem.values[index]),
          child: Text('Item ${index + 1}'),
        ),
      ),
    );
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('MenuAnchorButton'),
        backgroundColor: Theme.of(context).primaryColorLight,
      ),
      body:
          Column(mainAxisAlignment: MainAxisAlignment.spaceBetween, children: [
        _buildMenuButton(Colors.green),
        _buildMenuButton(Colors.red),
      ]),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>


</details>
### Logs
<details open><summary>Logs</summary>
```console
irrelevant - I ran code in documentation page editor
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
irrelevant - I ran code in documentation page editor
```
</details>
| d: examples,platform-web,a: desktop,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Critical |
2,781,918,475 | pytorch | [CUDA] Illegal Memory Access with `ConvTranspose2d` | ### 🐛 Describe the bug
The following code causes illegal memory access in PyTorch.
```python
import torch
D = 40000
C = 10
m1 = torch.randn(C, D, 2).cuda()
model = torch.nn.ConvTranspose2d(C, 2, kernel_size=(1, 1), stride=(200, 200)).cuda()
model(m1)
```
The bug is detected via `compute-sanitizer`
```bash
compute-sanitizer python3 poc1.py
```
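For context, the requested output is enormous. A quick size check (my own arithmetic from the standard ConvTranspose2d output formula, not from the sanitizer run) shows the output element count exceeds the 32-bit indexing range, which may be what the kernel trips over:
```python
# Expected output size with no padding: out = (in - 1) * stride + kernel_size
h_out = (40000 - 1) * 200 + 1   # 7_999_801
w_out = (2 - 1) * 200 + 1       # 201
numel = 2 * h_out * w_out       # 3_215_920_002 output elements
print(numel > 2**31 - 1)        # True -> beyond int32 indexing
```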
### Versions
Environment:
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @ptrblck @msaroufim @eqy | module: cuda,triaged,topic: fuzzer | low | Critical |
2,781,920,096 | pytorch | [CUDA] Illegal Memory Access with `torch.bmm` | ### 🐛 Describe the bug
The following code causes illegal memory access in PyTorch.
```python
import torch
m1 = torch.randn(2, 291105, 1).to_sparse().cuda()
m2 = torch.randn(2, 1, 1).cuda()
print([m1.size(), m2.size()])
torch.bmm(m1, m2)
```
The bug is detected via `compute-sanitizer`
```bash
compute-sanitizer python3 poc2.py
```
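As a point of comparison (an assumption on my part, not something from the sanitizer run), the dense path for the same shapes is a useful control; if it runs cleanly, the illegal access is presumably specific to the sparse (COO) batched kernel:
```python
# Hypothetical control experiment: same shapes through the dense bmm path.
out = torch.bmm(m1.to_dense(), m2)
print(out.shape)  # torch.Size([2, 291105, 1])
```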
### Versions
Environment:
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @ptrblck @msaroufim @eqy | module: cuda,triaged,topic: fuzzer | low | Critical |
2,781,928,167 | PowerToys | Advanced Paste: Switch from gpt-4o to gpt-4o-mini | ### Description of the new feature / enhancement
**Summary:**
Switching from GPT-4o to GPT-4o Mini in the Advanced Paste feature cuts input-token cost by **94%** and makes output tokens **16.7x cheaper**, with minimal performance difference. MMLU benchmark: 82% (GPT-4o Mini) vs. 88.7% (GPT-4o).
---
### Change:
- **File:** `src/modules/AdvancedPaste/AdvancedPaste/Services/OpenAI/KernelService.cs`
- **Line:** 19
- **Update:** Change `ModelName` from `"gpt-4o"` to `"gpt-4o-mini"`
```csharp
protected override string ModelName => "gpt-4o-mini";
```
### Benefits
**Cost savings** with negligible performance impact.
### Supporting information
[openAI pricing](https://openai.com/api/pricing/)
[GPT4o Mini benchmarks](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) | Needs-Triage | low | Major |
2,781,932,802 | transformers | static cache with mixtral will cause CUDA error: device-side assert triggered | ### System Info
None
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## code
```python
batch_size, seq_length = inputs["input_ids"].shape
with torch.no_grad():
    past_key_values = StaticCache(
        config=model.config, batch_size=2, max_cache_len=4096,
        device=torch_device, dtype=model.dtype, layer_device_map=layer_device_map,
    )
    cache_position = torch.arange(seq_length, device=torch_device)
    generated_ids = torch.zeros(
        batch_size, seq_length + NUM_TOKENS_TO_GENERATE + 1, dtype=torch.int, device=torch_device
    )
    generated_ids[:, cache_position] = inputs["input_ids"].to(torch_device).to(torch.int)
    logits = model(
        **inputs, cache_position=cache_position, past_key_values=past_key_values,
        return_dict=False, use_cache=True,
    )[0]
    next_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    generated_ids[:, seq_length] = next_token[:, 0]

    # decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True)
    cache_position = torch.tensor([seq_length + 1], device=torch_device)
    for _ in range(1, NUM_TOKENS_TO_GENERATE):
        with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_mem_efficient=False, enable_math=True):
            next_token = decode_one_tokens(model, next_token.clone(), None, cache_position, past_key_values)
        generated_ids[:, cache_position] = next_token.int()
        cache_position += 1

text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(text)
```
## error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/home/bcds/venv/dilab/floe/static_cache_test.ipynb 单元格 3 line 2
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=19'>20</a> for _ in range(1, NUM_TOKENS_TO_GENERATE):
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=20'>21</a> with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_mem_efficient=False, enable_math=True):
---> <a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=21'>22</a> next_token = decode_one_tokens(model, next_token.clone(), None, cache_position, past_key_values)
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=22'>23</a> generated_ids[:, cache_position] = next_token.int()
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=23'>24</a> cache_position += 1
/home/bcds/venv/dilab/floe/static_cache_test.ipynb 单元格 3 line 1
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=17'>18</a> def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
---> <a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=18'>19</a> logits = model(
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=19'>20</a> cur_token,
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=20'>21</a> position_ids=input_pos,
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=21'>22</a> cache_position=cache_position,
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=22'>23</a> past_key_values=past_key_values,
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=23'>24</a> return_dict=False,
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=24'>25</a> use_cache=True
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=25'>26</a> )[0]
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=26'>27</a> new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
<a href='vscode-notebook-cell://ssh-remote%2B10.1.3.1/home/bcds/venv/dilab/floe/static_cache_test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=27'>28</a> return new_token
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/mixtral/modeling_mixtral.py:1283, in MixtralForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, output_router_logits, return_dict, cache_position, num_logits_to_keep, **loss_kwargs)
1280 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1282 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 1283 outputs = self.model(
1284 input_ids=input_ids,
1285 attention_mask=attention_mask,
1286 position_ids=position_ids,
1287 past_key_values=past_key_values,
1288 inputs_embeds=inputs_embeds,
1289 use_cache=use_cache,
1290 output_attentions=output_attentions,
1291 output_hidden_states=output_hidden_states,
1292 output_router_logits=output_router_logits,
1293 return_dict=return_dict,
1294 cache_position=cache_position,
1295 )
1297 hidden_states = outputs[0]
1298 # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/mixtral/modeling_mixtral.py:998, in MixtralModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, output_router_logits, return_dict, cache_position)
986 layer_outputs = self._gradient_checkpointing_func(
987 decoder_layer.__call__,
988 hidden_states,
(...)
995 cache_position,
996 )
997 else:
--> 998 layer_outputs = decoder_layer(
999 hidden_states,
1000 attention_mask=causal_mask,
1001 position_ids=position_ids,
1002 past_key_value=past_key_values,
1003 output_attentions=output_attentions,
1004 output_router_logits=output_router_logits,
1005 use_cache=use_cache,
1006 cache_position=cache_position,
1007 )
1009 hidden_states = layer_outputs[0]
1011 if use_cache:
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/mixtral/modeling_mixtral.py:724, in MixtralDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, output_router_logits, use_cache, cache_position, **kwargs)
721 hidden_states = self.input_layernorm(hidden_states)
723 # Self Attention
--> 724 hidden_states, self_attn_weights, present_key_value = self.self_attn(
725 hidden_states=hidden_states,
726 attention_mask=attention_mask,
727 position_ids=position_ids,
728 past_key_value=past_key_value,
729 output_attentions=output_attentions,
730 use_cache=use_cache,
731 cache_position=cache_position,
732 )
733 hidden_states = residual + hidden_states
735 # Fully Connected
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/mixtral/modeling_mixtral.py:544, in MixtralSdpaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache, cache_position)
541 cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position} # Specific to RoPE models
542 key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
--> 544 key_states = repeat_kv(key_states, self.num_key_value_groups)
545 value_states = repeat_kv(value_states, self.num_key_value_groups)
547 causal_mask = attention_mask
File ~/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/mixtral/modeling_mixtral.py:262, in repeat_kv(hidden_states, n_rep)
260 return hidden_states
261 hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
--> 262 return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
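For completeness, `decode_one_tokens` is not shown in the snippet above; reconstructed from the traceback (cell lines 18-28), it is the usual one-step decode helper:
```python
# Reconstructed from the traceback above, not from the original snippet.
def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
    logits = model(
        cur_token,
        position_ids=input_pos,
        cache_position=cache_position,
        past_key_values=past_key_values,
        return_dict=False,
        use_cache=True,
    )[0]
    new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    return new_token
```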
### Expected behavior
Generation should complete correctly. | bug | low | Critical |
2,781,935,134 | pytorch | The label marked by torch.profiler.profile.record_function() appears twice in the output | ### 🐛 Describe the bug
I followed the tutorial at [link](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html)
I ran the code as follows
```python
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

if torch.cuda.is_available():
    device = 'cuda:2'
elif torch.xpu.is_available():
    device = 'xpu'
else:
    print('Neither CUDA nor XPU devices are available to demonstrate profiling on acceleration devices')
    import sys
    sys.exit(0)

activities = [ProfilerActivity.CPU, ProfilerActivity.CUDA]
sort_by_keyword = "cuda" + "_time_total"

model = models.resnet18().to(device)
inputs = torch.randn(5, 3, 224, 224).to(device)

warmup = 5
for i in range(warmup):
    model(inputs)

if __name__ == "__main__":
    with profile(activities=activities, record_shapes=True) as prof:
        with record_function("model_inference"):
            model(inputs)
    print(prof.key_averages().table(sort_by=sort_by_keyword, row_limit=10))
```
I get the results shown in the picture. There is only one "model_inference" row in the tutorial, but there are two here.

I don't know why this happens. Also, the CUDA time reported by the first model_inference row is longer than the actual runtime (see the diagnostic sketch below).
Thanks a lot.
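One diagnostic I'd try (an assumption on my side, not an official explanation) is to print the device type of each aggregated "model_inference" row; my guess is that one is the CPU-side annotation and the other a device-side (kineto) user annotation:
```python
# Hypothetical check: inspect both aggregated rows named "model_inference".
for evt in prof.key_averages():
    if evt.key == "model_inference":
        print(evt.key, evt.device_type, evt.count)
```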
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.127.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxruntime-gpu==1.20.1
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.5.1
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[pip3] tritonclient==2.53.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
[conda] tritonclient 2.53.0 pypi_0 pypi
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | oncall: profiler | low | Critical |
2,781,944,274 | pytorch | Different Result with Different GPUs (A6000, A40) | ### 🐛 Describe the bug
I set most of the determinism-related parameters correctly (see below) but get different results with different GPUs (A6000 vs. A40).
### Versions
```
import os
import torch

def set_deterministic_pytorch(seed: int):
    # Set CUBLAS workspace config
    cublas_workspace_config = os.environ.get("CUBLAS_WORKSPACE_CONFIG")
    if cublas_workspace_config is None:
        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    # Set PyTorch deterministic settings
    os.environ['PYTHONHASHSEED'] = str(seed)
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True, warn_only=True)  # alternative: torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    torch.utils.deterministic.fill_uninitialized_memory = True

    # Disable TensorFloat32 for consistent precision
    torch.backends.cuda.matmul.allow_tf32 = False
    torch.backends.cudnn.allow_tf32 = False

    # If using CUDA
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)  # If using multi-GPU
```
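Worth noting (my reading of the PyTorch reproducibility notes, not verified on this exact setup): determinism is only promised for the same hardware and software stack, so an A6000 and an A40 can legitimately produce different bits even with all of the settings above. A tolerance-based comparison is the usual check:
```python
# result_a6000 / result_a40 are hypothetical tensors saved from each machine;
# bitwise equality across GPU models isn't guaranteed, so compare with tolerance.
torch.testing.assert_close(result_a6000, result_a40, rtol=1e-5, atol=1e-6)
```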
cc @ptrblck @msaroufim @eqy @mruberry @kurtamohler | needs reproduction,module: cuda,triaged,module: determinism | low | Critical |
2,781,952,467 | PowerToys | Right click menu leftovers after uninstall - "Loading..." | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
File Locksmith
### Steps to reproduce
Install Powertoys.
Uninstall Powertoys.
Right click on any file in the File Explorer.
Several lines with "Loading..." show on the context menu briefly and then go away.

### ✔️ Expected Behavior
After uninstalling Powertoys, my right click menu should go back to normal behavior.
### ❌ Actual Behavior
Several lines with "Loading..." show on the menu.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,781,955,139 | PowerToys | [Settings] ImageResizer preset descriptions do not show units | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Settings
### Steps to reproduce
Open the ImageResizer section of the Settings application and observe the **Image sizes** section of the page.
### ✔️ Expected Behavior
The unit should be present after the dimensions, i.e.:

### ❌ Actual Behavior
The presets each lack the unit designation after the size/percentage text:

### Other Software
_No response_ | Issue-Bug,Product-Settings,Resolution-Fix Committed,Product-Image Resizer | low | Minor |
2,781,958,802 | vscode | Ctrl-click links don't make browser active window |
Type: <b>Bug</b>
When clicking 'Follow link' on the link hover, the browser window is activated, but Ctrl-clicking the link does not activate it. The two should be consistent: either both activate the browser or neither does (preferably both, because I had no idea browser tabs were even opening, which resulted in a mess of tabs and is bad UX).

VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz (4 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.89GB (0.48GB free)|
|Process Argv|--crash-reporter-id b90a84ae-89ed-45c8-9adb-9989df75643b|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (7)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|3.0.10
copilot|Git|1.256.0
copilot-chat|Git|0.23.2
vscode-codeql|Git|1.17.0
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
errorlens|use|3.22.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
cf1a2727:31215809
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,782,025,300 | pytorch | Unified Pytorch for Nvidia (CUDA), Intel (XPU), AMD (ROCm) | ### 🚀 The feature, motivation and pitch
Allow a single PyTorch binary/package to support all major GPU platforms: one PyTorch env that can execute code on `cpu`, `mps`, `cuda`, `xpu`, and `rocm`, instead of three separate torch virtual envs that can't talk to each other.
Reasons why we need this:
* It is simply the natural solution for end users.
* Pre-built consumer multi-device machines already exist: Arc iGPU (XPU) + Nvidia (CUDA) laptops.
* Custom builds: XPU + CUDA + ROCm in one machine. Nothing says you can only have a single device class in a system.
* LLM models can run across mixed GPUs in a more performant and/or cost-effective way.
* There is no technical reason I can think of that should prevent this natural state of a multi-device torch env.
* Developers on multi-device machines are forced to use separate virtual envs, where one device can't talk to another via the PyTorch API without some shm or RPC magic.
End-User Problems:
* Driver dependencies: Nvidia drivers are the easiest to use/install, with ROCm and then Intel/XPU less friendly, in that order.
* Drivers have complex dependencies, and a single package for all platforms is hard on end users, who have to do all the prep work themselves.
Current state:
* A CUDA PyTorch build can't access XPU or ROCm.
* An Intel XPU-enabled PyTorch build can't access CUDA or ROCm.
* An AMD ROCm PyTorch build can't access CUDA or XPU.
The current state of affairs is bad for PyTorch and bad for developers.
Ask yourself this one question:
Why can't PyTorch natively transfer `tensors` from [`cpu`, `mps`, `cuda`, `xpu`, `rocm`] to/from [`cpu`, `mps`, `cuda`, `xpu`, `rocm`] in one environment?
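For illustration, a hypothetical sketch of what that unified environment would allow. None of this works in a single install today; the device strings follow the single-vendor builds (note that ROCm builds currently expose the GPU under the `cuda` device name):
```python
import torch

# Hypothetical: one install that sees every accelerator at once.
a = torch.randn(1024, device="cuda")  # NVIDIA GPU
b = torch.randn(1024, device="xpu")   # Intel GPU

# Desired: direct cross-vendor transfer, no shm/RPC workarounds.
c = a.to("xpu") + b
print(c.device)  # xpu:0
```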
### Alternatives
None. We don't want three separate environments. End users should have the option of a unified env for all devices. Not all users will want this, but many will if the choice is there. Currently there is no choice.
### Additional context
_No response_
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: build,feature,module: rocm,triaged | low | Minor |
2,782,043,007 | godot | Posix formatting strings visible in Visual Studio output on building | ### Tested versions
4.4 Master
### System information
Windows 10 - Visual Studio 2022
### Issue description
I have noticed recently that the way the compilation status is displayed in the output has become hard to read; for example, we can see text like `[34mCompiling [1mmodules\godot_physics_3d\godot_step_3d.cpp [22;23;24;29m ...[0m.`
For example, `1m` and `modules` are written together. Is that how it should be?

I also see extra symbols (for example `...<-[0m`). What is the point of them?
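For context (my reading, not confirmed from the buildsystem source): these look like ANSI SGR escape sequences for color and style, which the Visual Studio output pane does not interpret:
```
ESC[34m            set foreground color to blue
ESC[1m             bold
ESC[22;23;24;29m   reset intensity / italic / underline / strikethrough
ESC[0m             reset all attributes
```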
### Steps to reproduce
Start building the engine on VS 2022
### Minimal reproduction project (MRP)
N/A | platform:windows,topic:buildsystem,needs testing,regression | low | Major |
2,782,065,982 | langchain | Offset by 1 bug on RecursiveJsonSplitter::split_json() function | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_text_splitters import RecursiveJsonSplitter
input_data = {
"projects": {
"AS": {
"AS-1": {}
},
"DLP": {
"DLP-7": {},
"DLP-6": {},
"DLP-5": {},
"DLP-4": {},
"DLP-3": {},
"DLP-2": {},
"DLP-1": {}
},
"GTMS": {
"GTMS-22": {},
"GTMS-21": {},
"GTMS-20": {},
"GTMS-19": {},
"GTMS-18": {},
"GTMS-17": {},
"GTMS-16": {},
"GTMS-15": {},
"GTMS-14": {},
"GTMS-13": {},
"GTMS-12": {},
"GTMS-11": {},
"GTMS-10": {},
"GTMS-9": {},
"GTMS-8": {},
"GTMS-7": {},
"GTMS-6": {},
"GTMS-5": {},
"GTMS-4": {},
"GTMS-3": {},
"GTMS-2": {},
"GTMS-1": {}
},
"IT": {
"IT-3": {},
"IT-2": {},
"IT-1": {}
},
"ITSAMPLE": {
"ITSAMPLE-12": {},
"ITSAMPLE-11": {},
"ITSAMPLE-10": {},
"ITSAMPLE-9": {},
"ITSAMPLE-8": {},
"ITSAMPLE-7": {},
"ITSAMPLE-6": {},
"ITSAMPLE-5": {},
"ITSAMPLE-4": {},
"ITSAMPLE-3": {},
"ITSAMPLE-2": {},
"ITSAMPLE-1": {}
},
"MAR": {
"MAR-2": {},
"MAR-1": {}
}
}
}
splitter = RecursiveJsonSplitter(max_chunk_size=216)
json_chunks = splitter.split_json(json_data=input_data)
input_data_DLP_5 = input_data.get("projects", {}).get("DLP", {}).get("DLP-5", None)
input_data_GTMS_10 = input_data.get("projects", {}).get("GTMS", {}).get("GTMS-10", None)
input_data_ITSAMPLE_2 = input_data.get("projects", {}).get("ITSAMPLE", {}).get("ITSAMPLE-2", None)
chunk_DLP_5 = None
chunk_GTMS_10 = None
chunk_ITSAMPLE_2 = None
for chunk in json_chunks:
print(chunk)
node = chunk.get("projects", {}).get("DLP", {}).get("DLP-5", None)
if isinstance(node, dict):
chunk_DLP_5 = node
node = chunk.get("projects", {}).get("GTMS", {}).get("GTMS-10", None)
if isinstance(node, dict):
chunk_GTMS_10 = node
node = chunk.get("projects", {}).get("ITSAMPLE", {}).get("ITSAMPLE-2", None)
if isinstance(node, dict):
chunk_ITSAMPLE_2 = node
print("\nRESULTS:")
if isinstance(chunk_DLP_5, dict):
print(f"[PASS] - Node DLP-5 was found both in input_data and json_chunks")
else:
print(f"[TEST FAILED] - Node DLP-5 from input_data was NOT FOUND in json_chunks")
if isinstance(chunk_GTMS_10, dict):
print(f"[PASS] - Node GTMS-10 was found both in input_data and json_chunks")
else:
print(f"[TEST FAILED] - Node GTMS-10 from input_data was NOT FOUND in json_chunks")
if isinstance(chunk_ITSAMPLE_2, dict):
print(f"[PASS] - Node ITSAMPLE-2 was found both in input_data and json_chunks")
else:
print(f"[TEST FAILED] - Node ITSAMPLE-2 from input_data was NOT FOUND in json_chunks")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use the `langchain_text_splitters` library to split JSON content using the function `RecursiveJsonSplitter::split_json()`.
In most cases it works; however, I am experiencing some data being lost depending on the input JSON and the chunk size I am using.
I was able to consistently replicate the issue for the input JSON provided on my sample code. I always get the nodes "GTMS-10" and "ITSAMPLE-2" discarded when I split the JSON using `max_chunk_size=216`.
I noticed this issue always occurs with nodes that would be on the edge of the chunks. When you run my sample code, it will print all 5 generated chunks:
```
python split_json_bug.py
{'projects': {'AS': {'AS-1': {}}, 'DLP': {'DLP-7': {}, 'DLP-6': {}, 'DLP-5': {}, 'DLP-4': {}, 'DLP-3': {}, 'DLP-2': {}, 'DLP-1': {}}}}
{'projects': {'GTMS': {'GTMS-22': {}, 'GTMS-21': {}, 'GTMS-20': {}, 'GTMS-19': {}, 'GTMS-18': {}, 'GTMS-17': {}, 'GTMS-16': {}, 'GTMS-15': {}, 'GTMS-14': {}, 'GTMS-13': {}, 'GTMS-12': {}, 'GTMS-11': {}}}}
{'projects': {'GTMS': {'GTMS-9': {}, 'GTMS-8': {}, 'GTMS-7': {}, 'GTMS-6': {}, 'GTMS-5': {}, 'GTMS-4': {}, 'GTMS-3': {}, 'GTMS-2': {}, 'GTMS-1': {}}, 'IT': {'IT-3': {}, 'IT-2': {}, 'IT-1': {}}}}
{'projects': {'ITSAMPLE': {'ITSAMPLE-12': {}, 'ITSAMPLE-11': {}, 'ITSAMPLE-10': {}, 'ITSAMPLE-9': {}, 'ITSAMPLE-8': {}, 'ITSAMPLE-7': {}, 'ITSAMPLE-6': {}, 'ITSAMPLE-5': {}, 'ITSAMPLE-4': {}, 'ITSAMPLE-3': {}}}}
{'projects': {'ITSAMPLE': {'ITSAMPLE-1': {}}, 'MAR': {'MAR-2': {}, 'MAR-1': {}}}}
RESULTS:
[PASS] - Node DLP-5 was found both in input_data and json_chunks
[TEST FAILED] - Node GTMS-10 from input_data was NOT FOUND in json_chunks
[TEST FAILED] - Node ITSAMPLE-2 from input_data was NOT FOUND in json_chunks
```
Please note that the 2nd chunk ends with node "GTMS-11" and the 3rd chunk starts with "GTMS-9". The same happens for chunk number 4 (ends with "ITSAMPLE-3") and chunk number 5 (starts with "ITSAMPLE-1").
Because the nodes "GTMS-10" and "ITSAMPLE-2" were lost at the edges of chunks, I believe this might be a case of an off-by-one bug in the RecursiveJsonSplitter::split_json() Python function.
Since I am calling it exactly as described in the [documentation](https://python.langchain.com/docs/how_to/recursive_json_splitter/#basic-usage) and I couldn't find any existing bug or discussion mentioning it, I thought I should file one.
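As a sanity check (my own sketch, assuming the `input_data` structure from the example above), counting the empty-dict leaves on both sides shows exactly two records going missing:
```python
def count_leaves(node: dict) -> int:
    """Count empty-dict leaves in a nested dict."""
    if not node:
        return 1
    return sum(count_leaves(child) for child in node.values())

total_in = count_leaves(input_data)                    # 47 leaves
total_out = sum(count_leaves(c) for c in json_chunks)  # 45 leaves
assert total_in == total_out, f"{total_in - total_out} leaves lost"
```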
### System Info
```console
(.venv) user@User-MacBook-Air split_json_bug % python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:34:49 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_X86_64
> Python Version: 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.29
> langsmith: 0.2.10
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> tenacity: 9.0.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
```
```console
(.venv) user@User-MacBook-Air split_json_bug % pip freeze
annotated-types==0.7.0
anyio==4.8.0
certifi==2024.12.14
charset-normalizer==3.4.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain-core==0.3.29
langchain-text-splitters==0.3.5
langsmith==0.2.10
orjson==3.10.14
packaging==24.2
pydantic==2.10.5
pydantic_core==2.27.2
PyYAML==6.0.2
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
tenacity==9.0.0
typing_extensions==4.12.2
urllib3==2.3.0
``` | 🤖:bug | low | Critical |
2,782,077,791 | yt-dlp | [vice] Unable to download JSON metadata: HTTP Error 404: Not Found | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
Receiving "Unable to download JSON metadata: HTTP Error 404: Not Found" error on all shows on vicetv.com. Usually downloading passing MSO credentials and cookies. For purposes of the debug, I am not passing though I get the same errors either way. Same URL loads properly in webpage once logged into MSO.
Happens with [email protected] as well as [email protected], both via pip.
Also tried with -4
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.vicetv.com/en_us/video/hells-kitchen/6695812bff53c4aba10ff4e2']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (pip)
[debug] Python 3.11.11 (CPython x86_64 64bit) - Linux-5.4.0-204-generic-x86_64-with-glibc2.31 (OpenSSL 1.1.1f 31 Mar 2020, glibc 2.31)
[debug] exe versions: ffmpeg 4.2.7, ffprobe 4.2.1, phantomjs broken
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, sqlite3-3.31.1, urllib3-2.2.2, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[vice] Extracting URL: https://www.vicetv.com/en_us/video/hells-kitchen/6695812bff53c4aba10ff4e2
[vice] 6695812bff53c4aba10ff4e2: Downloading JSON metadata
ERROR: [vice] 6695812bff53c4aba10ff4e2: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: Not Found>)
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/vice.py", line 106, in _real_extract
video = self._call_api('videos', 'id', video_id, locale, '''body
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/vice.py", line 24, in _call_api
return self._download_json(
^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1152, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1112, in download_handle
res = self._download_webpage_handle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/adobepass.py", line 1367, in _download_webpage_handle
return super()._download_webpage_handle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 962, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 911, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 898, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 4172, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/home/jeremy/.local/lib/python3.11/site-packages/yt_dlp/networking/_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found
```
| account-needed,geo-blocked,site-bug,triage,can-share-account | low | Critical |
2,782,078,480 | next.js | Next build fails only in Github Action | ### Link to the code that reproduces this issue
https://github.com/jpaise/search-frontend
### To Reproduce
```
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 10
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Install dependencies
run: pnpm install
- name: Build and Export project
run: pnpm build
```
This results in the following output in the GitHub Actions run:
```
▲ Next.js 15.2.0-canary.4
Creating an optimized production build ...
✓ Compiled successfully
Linting and checking validity of types ...
ELIFECYCLE Command failed with exit code 1.
```
There is no clear error message. I've tried `npm`, `yarn`, and `pnpm`, all with the same result. An earlier Next.js version gives the same result, and if I disable linting, the error simply shows up in the next step instead. So pretty much all the possible combinations have been tried.
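One way to surface the underlying error separately from `next build` (a debugging sketch; it assumes `typescript` is a devDependency, which `create-next-app` sets up by default):
```yaml
- name: Type-check separately (debugging aid)
  run: npx tsc --noEmit
```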
next.config.js
```
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'export',
images: {
unoptimized: true,
domains: [], // Add any image domains you're using
},
trailingSlash: true,
assetPrefix: process.env.NODE_ENV === 'production' ? './' : '',
reactStrictMode: true,
}
module.exports = nextConfig
```
### Current vs. Expected behavior
Everything works fine in my local (Ubuntu) environment, but the build somehow gets stuck in GitHub Actions. The local environment matches the cloud one.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #83~20.04.1-Ubuntu SMP Fri Oct 4 21:49:59 UTC 2024
Available memory (MB): 15992
Available CPU cores: 4
Binaries:
Node: 18.20.5
npm: 10.8.2
Yarn: 1.22.22
pnpm: 10.0.0
Relevant Packages:
next: 15.2.0-canary.4 // Latest available version is detected (15.2.0-canary.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: export
```
### Which area(s) are affected? (Select all that apply)
Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next build (local), Other (Deployed)
### Additional context
_No response_ | Output (export/standalone) | low | Critical |
2,782,104,953 | next.js | Passing data to both client & server component result in always async server tree | ### Link to the code that reproduces this issue
https://stackblitz.com/edit/stackblitz-starters-ex8vzvqf?file=app%2Fpage.tsx
### To Reproduce
1. Start the application in development
2. Observe the code inside `page.tsx`, `components/client.tsx` and `components/server.tsx`.
3. Note the type of the `InternalClientComponent` component's children
### Current vs. Expected behavior
- The children of the `InternalClientComponent` should be of type: `Symbol(React.element)` (or `Symbol(React.transitional.element)` in react-19)
- However, they are of type `Symbol(React.lazy)`, indicating that they are suspending or awaiting something, but they are not: they are a simple `<button>` element (see the logging sketch after the notes below).
IMPORTANT NOTES:
- Not passing the `data` property to either the `ClientComponent` or `ServerComponent` resolves the issue, suggesting that this is mostly related to the data reference, not the components themselves.
- It works with [next@14](https://stackblitz.com/edit/stackblitz-starters-vwuub5bs?description=The%20React%20framework%20for%20production&file=app%2Fpage.tsx,app%2Flayout.tsx,package.json,app%2Fcomponents%2Fclient.tsx&title=Next.js%20Starter)
- If the `data` is of a primitive type, it also works
- It does not happen during `build`
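For completeness, the logging sketch referenced above. The component name mirrors the reproduction, not a public API:
```tsx
"use client";

import type { ReactNode } from "react";

// Sketch: inspect the children element this client component receives.
export function InternalClientComponent({ children }: { children: ReactNode }) {
  console.log(children); // expected $$typeof: Symbol(react.element); observed: Symbol(react.lazy)
  return <>{children}</>;
}
```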
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: Ubuntu 20.04.0 LTS Sat Jan 11 2025 21:19:29 GMT+0200 (Eastern European Standard Time)
Available memory (MB): NaN
Available CPU cores: 8
Binaries:
Node: 18.20.3
npm: 10.2.3
Yarn: 1.22.19
pnpm: 8.15.6
Relevant Packages:
next: 15.2.0-canary.4 // Latest available version is detected (15.2.0-canary.4).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.2.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Turbopack, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I tested different combinations of next 14/15 and react 18/19, as well as the `dev` and `build` commands with the `--turbo` flag and webpack respectively; the issue is consistently reproducible since [email protected]
I suspect that once a reference to a variable gets passed to a client component, it gets turned into a promise so it can travel over the network. However, there are also server components that incorrectly read the promise (instead of the original variable), which turns them into an async tree. Thus, subsequent client components receive `async` server components regardless of whether they are suspending or not.
2,782,121,020 | ui | [bug]: Responsive Breadcrumb Example Doesn't Work | ### Describe the bug
Looking at the responsive example for the [breadcrumb](https://ui.shadcn.com/docs/components/breadcrumb) component, resizing my browser or opening it on my phone never seems to trigger the drawer behavior. Is the `useMediaQuery()` hook bugged?
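For reference, the shape of hook the example relies on. This is a minimal sketch, not necessarily the exact code shipped with the docs:
```tsx
import * as React from "react";

export function useMediaQuery(query: string) {
  const [matches, setMatches] = React.useState(false);

  React.useEffect(() => {
    const mql = window.matchMedia(query);
    const onChange = (e: MediaQueryListEvent) => setMatches(e.matches);
    setMatches(mql.matches); // sync the initial value after mount
    mql.addEventListener("change", onChange);
    return () => mql.removeEventListener("change", onChange);
  }, [query]);

  return matches;
}
```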
### Affected component/components
Breadcrumb
### How to reproduce
Look at the docs.
### Codesandbox/StackBlitz link
https://github.com/shadcn-ui/ui/blob/1081536246b44b6664f4c99bc3f1b3614e632841/apps/www/public/r/styles/new-york/breadcrumb-responsive.json#L10
### Logs
```bash
n/a
```
### System Info
```bash
n/a
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,782,121,085 | react-native | Codegen complains about unsupported spec type despite of using `includesGeneratedCode: true` | ### Description
I'm developing Turbo Native Module, which utilizes [Event Emitter JS Spec](https://github.com/reactwg/react-native-new-architecture/blob/main/docs/turbo-modules.md#add-event-emitting-capabilities).
The Event Emitter JS Spec was [cherry-picked in 0.76.2](https://github.com/facebook/react-native/commit/a09df751ebfba5ef648037bf2aa13da96a259b01), so applications using an RN version lower than 0.76.2 would break the build when they try to build with my library.
So I decided to include the generated code, as documented in the [codegen documentation](https://reactnative.dev/docs/the-new-architecture/codegen-cli#including-generated-code-into-libraries), by setting the `includesGeneratedCode` flag to `true`.
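For reference, the relevant `package.json` configuration. The spec name and source directory here are illustrative; `includesGeneratedCode` is the documented flag:
```json
{
  "codegenConfig": {
    "name": "RNMyLibrarySpec",
    "type": "modules",
    "jsSrcsDir": "src",
    "includesGeneratedCode": true
  }
}
```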
What I expect is that, since we ship generated code with our library, codegen doesn't need to (and must not) run at the application level for our library's JS spec, because the codegen documentation says:
> No need to worry about Codegen version mismatch between what is used by the app, and what was used during library development.
But when I try to build an application running on an RN version lower than 0.76.2 (via `npx @react-native-community/cli codegen`) with the generated code included, it still complains with an `UnsupportedModulePropertyParserError` (detailed error log below).
Am I misunderstanding the `includesGeneratedCode` option?
### Steps to reproduce
1. Clone https://github.com/wjaykim/rn-codegen-issue-reproducer
2. Run `yarn install`
3. Run `npx @react-native-community/cli codegen`
### React Native Version
0.74.6 - 0.76.1
### Affected Platforms
Runtime - iOS, Build - MacOS
### Areas
Codegen
### Output of `npx react-native info`
```text
System:
OS: macOS 15.2
CPU: (10) arm64 Apple M1 Pro
Memory: 324.94 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.18.0
path: ~/.nvm/versions/node/v20.18.0/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v20.18.0/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v20.18.0/bin/npm
Watchman:
version: 2024.12.02.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /Users/user/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 3.3.5
path: /Users/user/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.6
wanted: 0.74.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
UnsupportedModulePropertyParserError: Module Native***Module: TypeScript interfaces extending TurboModule must only contain 'FunctionTypeAnnotation's. Property 'on***' refers to a 'TSTypeReference'.
at throwIfModuleTypeIsUnsupported (/Users/user/test/node_modules/@react-native/codegen/lib/parsers/error-utils.js:163:11)
at buildPropertySchema (/Users/user/test/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:472:3)
at /Users/user/test/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:705:24
at guard (/Users/user/test/node_modules/@react-native/codegen/lib/parsers/utils.js:26:14)
at /Users/user/test/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:702:14
at Array.map (<anonymous>)
at buildModuleSchema (/Users/user/test/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:699:6)
at /Users/user/test/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:522:9
at guard (/Users/user/test/node_modules/@react-native/codegen/lib/parsers/utils.js:26:14)
at buildSchemaFromConfigType (/Users/user/test/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:521:22) {
nodes: [ undefined ]
}
```
### Reproducer
https://github.com/wjaykim/rn-codegen-issue-reproducer
### Screenshots and Videos
_No response_ | Platform: iOS,Needs: Triage :mag:,Type: New Architecture | low | Critical |
2,782,127,789 | rust | Status of the riscv32im-risc0-zkvm-elf target | We currently have a target called `riscv32im-risc0-zkvm-elf`, which targets the [RISC Zero Zero Knowledge VM](https://risczero.com/). This target is maintained by @SchmErik, @jbruestle, @flaub #134721 has revealed some issues in this target.
The biggest issue is that the target sets `target_os = "zkvm"`. A "Zero Knowledge VM" is a generic concept and not an operating system. The correct `target_os` would be `risc0`.
But there is another question, whether this target should exist in the first place. The alternative to this target would be using the existing `riscv32im-unknown-none-elf` target with a RISC0-provided crate for the system calls.
The thing that is currently gained from having this target is that it can have `std`, where the very few syscalls that exist are used to implement some standard library interfaces.
Concretely, the following functionality is provided:
- program arguments
- environment variables (read-only)
- stdio
- the system global allocator
- and of course HashMap
- (no_threads) std::sync
other features like `std::fs`, `std::io`, `std::process`, `std::thread`, `std::time`, `std::net` are not supported.
@SchmErik, who is a maintainer of the target, highlights how the `std` support is useful in https://github.com/rust-lang/rust/pull/134721#issuecomment-2578851596:
> Having std support is important to us because it allows developers to use crates outside of no_std. This has enabled many others to use our target much more easily with existing crates
Additionally, they mentioned how having the target allows them to add cfgs to forked ecosystem crates to make them use more of RISC Zero's APIs (though this could also be implemented without a target, using a normal custom cfg).
It is always unsatisfactory to have targets with such incomplete standard libraries (at least it's not as bad as `wasm32-unknown-unknown` in this case). On the other hand, (just like the wasm target) the APIs that are implemented are likely useful.
This issue is about figuring out what to do with this target: whether to keep it (renamed) or remove it altogether.
2,782,132,053 | vscode | Open Recent doesn't include project if closed in 2nd app instance |
Type: <b>Bug</b>
Currently, the recent list only grows as projects are opened. But if two VS Code instances are open and more than 10 projects are opened in the 2nd instance, closing the open project in the 1st instance will not add it to the list, even though it was factually among the 2 most recently open projects and thus still very relevant/useful to this menu: the user will expect it to be there for easy re-access. To fix this, closing a project must also trigger an addition to the list (if it is not already in it; if it is, a re-bump is even warranted, since it was more recently open than the other 9 closed ones that newly buried it and is at risk of losing recency status after only 1 new open).
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz (4 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.89GB (0.38GB free)|
|Process Argv|--crash-reporter-id b90a84ae-89ed-45c8-9adb-9989df75643b|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (7)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|3.0.10
copilot|Git|1.256.0
copilot-chat|Git|0.23.2
vscode-codeql|Git|1.17.0
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
errorlens|use|3.22.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
cf1a2727:31215809
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | feature-request,workbench-history | low | Critical |
2,782,139,773 | godot | Scons-Generated VS Project is Missing Defines and Include Search Paths for IntelliSense | ### Tested versions
- Reproducible in: master branch
### System information
Windows 10 - Godot master branch
### Issue description
When I generate a Visual Studio project using Scons, the resulting solution opens/compiles/runs in Visual Studio 2022 without issue. However, IntelliSense for various files throughout the codebase shows a ton of errors.
From what I can tell, these IntelliSense errors are caused by a disconnect between the Defines and Include Search Paths that are used to compile the projects versus the Defines and Include Search Paths that VS IntelliSense is using.
After digging into the Scons files a bit, here are a few examples of this problem manifesting:
- The file `modules/SCsub` adds the `GODOT_MODULE` define, but only for that subdirectory's environment (`env_modules`). Later on, when the VS project is generated, the `GODOT_MODULE` define isn't present in the global `env`. As a result, IntelliSense in VS acts as though it isn't defined and shows a lot of errors in module header and cpp files.
- In situations where a file depends on a third party library, the Include Search Paths are often included only in that file's `SCsub` file. For example, `thirdparty/harfbuzz/src` is only added as an include search path deep in the `text_server_adv` module. But then when the VS project is generated, that search path isn't added. As a result, Intellisense acts as though it can't find includes from harfbuzz, and shows a lot of errors.
One way I was able to fix this was by manually adding various defines and search paths to the `VSHINT_DEFINES` and `VSHINT_INCLUDES` environment variables before vsproj generation. These env vars appear to be meant exactly for this (specifying additional defines/includes for the VS project generation), but they appear to be otherwise unused.
```python
if env["vsproj"]:
methods.generate_cpp_hint_file("cpp.hint")
env["CPPPATH"] = [Dir(path) for path in env["CPPPATH"]]
env["VSHINT_DEFINES"] = ["GODOT_MODULE"]
env["VSHINT_INCLUDES"] = ["thirdparty/harfbuzz/src",
"thirdparty/icu4c/common",
"thirdparty/icu4c/i18n",
"thirdparty/msdfgen",
"thirdparty/mbedtls/include",
"thirdparty/libogg"]
methods.generate_vs_project(env, ARGUMENTS, env["vsproj_name"])
```
You might argue that it isn't a big deal for IntelliSense to be broken, but I can tell you firsthand that it can be quite confusing for new contributors, and it is obviously frustrating and distracting when IntelliSense reports that a ton of things are broken when that isn't actually the case.
I think some possible fixes for this issue could be:
- As I did in the above example, populate `VSHINT_DEFINES` and `VSHINT_INCLUDES` with reasonable values in the root `SConstruct` file to ensure that IntelliSense works as effectively as possible.
- Some other way to populate these two environment variables elegantly when `vsproj=yes` is specified and defines/includes are added in `SCsub` files that are then not included in the root env.
- If no source code changes are desired to fix this, at least update the documentation (https://docs.godotengine.org/en/stable/contributing/development/configuring_an_ide/visual_studio.html) to highlight this issue and explain the correct way to populate these environment variables (e.g. can they be defined in a `custom.py`, on the command line, or via some other means? For someone unfamiliar with SCons, it's unclear).
I'd also be happy to make the changes myself and issue a pull request to fix this, but would love some advisement as to the "best way" to fix the problem from the perspective of more seasoned Godot developers.
(Finally, just a note that this is similar to another issue (https://github.com/godotengine/godot/issues/94287), but that issue appears to be more related to custom module development. The issue I'm describing seems to happen for a clean clone with built-in Godot classes. But the root cause/fix may be the same?)
### Steps to reproduce
1. Clone the godot repo locally on a Windows machine with VS 2022 installed.
2. Generate a VS project in Scons: `scons platform=windows vsproj=yes dev_build=yes`
3. Open in VS 2022. Notice that building and running both work fine.
4. However, notice that many files appear broken and have a lot of IntelliSense issues. For example, `crypto_core.cpp` or `text_server_adv.cpp` or `text_server_adv.h`.
5. Add the `VSHINT_DEFINES` and `VSHINT_INCLUDES` lines from the description to the appropriate spot in the `SConstruct` file.
6. Regenerate the VS project.
7. Notice that IntelliSense now works for the files identified.
### Minimal reproduction project (MRP)
N/A | bug,topic:buildsystem,topic:thirdparty,needs testing | low | Critical |
2,782,144,974 | langchain | [Bug] Arg Returns in docstring not found in function signature even when it is provided | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
## Steps to Reproduce:
1. Create a Python function with type hints in the signature.
2. Add a detailed docstring that includes type information in the "Args" and "Returns" sections.
3. Try to run the function and observe the failure.
Example Code:
Below is a self-contained, minimal, reproducible example demonstrating the issue:
```python
from typing import Optional, List, Tuple

def read_orders(customer: Optional[str] = None) -> List[Tuple[int, str, str, int, str]]:
    """
    Reads orders from the orders table. If a customer name is provided, it retrieves orders for that customer.

    Args:
    - customer (str, optional): Name of the customer to filter orders by. If None, all orders are retrieved.

    Returns:
    - List[Tuple[int, str, str, int, str]]: A list of orders where each order is represented as a tuple containing:
        - order_id (int)
        - customer_name (str)
        - product_name (str)
        - quantity (int)
        - order_date (str)
    """
```
## How I fixed it
By deleting the "Args" and "Returns" sections from the docstring.
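Based on the error message, the parser appears to treat everything before the colon as the argument name, so `- customer (str, optional)` fails to match `customer` in the signature. A sketch of an alternative that keeps the documentation (plain Google-style entries with names matching the signature; an untested assumption on my part):
```python
from typing import Optional, List, Tuple

def read_orders(customer: Optional[str] = None) -> List[Tuple[int, str, str, int, str]]:
    """Reads orders from the orders table, optionally filtered by customer.

    Args:
        customer: Name of the customer to filter orders by. If None, all orders are retrieved.

    Returns:
        A list of orders, each a tuple of (order_id, customer_name, product_name, quantity, order_date).
    """
```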
### Error Message and Stack Trace (if applicable)
```
    130 if docstring_arg not in annotations:
    131     msg = f"Arg {docstring_arg} in docstring not found in function signature."
--> 132     raise ValueError(msg)

ValueError: Arg - customer (str, optional) in docstring not found in function signature.
```
### Description
I am encountering an issue when using type declarations in both the function signature and the docstring in Python. The problem arises when I add type information to the docstring along with the type hints in the signature: the function fails to be processed correctly if both have type declarations.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:36:12 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6020
> Python Version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.13
> langchain_community: 0.3.13
> langsmith: 0.2.7
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.4
> langgraph_cli: 0.1.65
> langgraph_sdk: 0.1.48 | 🤖:bug,Ɑ: core | low | Critical |
2,782,169,289 | ui | CanvasRevealEffect does not work on react 19 bc cant import * as THREE from "three"; | ### Feature description
CanvasRevealEffect does not work on React 19 because it can't `import * as THREE from "three";`.
Even if you run `npm install --force framer-motion clsx tailwind-merge three @react-three/fiber`, it won't work.
<img width="409" alt="Screenshot 2025-01-11 at 3 16 34 PM" src="https://github.com/user-attachments/assets/5f836d16-c795-4b17-bdd2-94064b4b4b7d" />
### Affected component/components
component/ui/canvas-reveal-effect
### Additional Context
I'm on Next.js 15 and React 19.
### Before submitting
- [X] I've searched for existing issues and PRs
- [X] I've made research efforts and searched the documentation | area: request | low | Minor |
2,782,175,212 | PowerToys | MacOS / Linux like screenshot feature | ### Description of the new feature / enhancement
Provide a macOS-like screenshot utility that can take a screenshot and remember the capture region for the next one (like Cmd+Shift+5 on macOS).
### Scenario when this would be used?
A power user takes a lot of screenshots to document their work. The feature would help them stay professional and standardize the screenshot size.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,782,182,366 | godot | Vulkan error: Get_tree().quit() freezes computer for 10+ seconds with a black screen, returns with the console constantly printing this message (Compatability mode is fine) | ### Tested versions
Godot 4.2-4.4 dev 7
### System information
Godot v4.3.stable - Windows 10.0.26100 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6636) - AMD Ryzen 5 3600 6-Core Processor (12 Threads)
### Issue description

The editor is unusable at this point until I quit via the console and restart, assuming the console didn't crash too (coin flip).
### Steps to reproduce
*If it helps: I've been triggering the function with the Escape key, not by closing the window with the X button.*
This happens randomly when exiting the game via get_tree().quit(), with BOTH 2D and 3D-only scenes.
### Minimal reproduction project (MRP)
[GodotProject.zip](https://github.com/user-attachments/files/18387598/GodotProject.zip) | bug,topic:rendering,needs testing | low | Critical |
2,782,187,499 | godot | IP.resolve_hostname(_addresses) fails on .local addresses | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Freedesktop SDK 24.08 (Flatpak runtime) - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (nvidia) - Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz (8 Threads)
### Issue description
When trying to resolve a .local domain, it fails. I have verified that Avahi mDNS is correctly configured on my machine, as I use it for ssh among other things.
```gdscript
print(IP.resolve_hostname_addresses('3dpi.local'))
# []
print(IP.resolve_hostname_addresses('google.com'))
# ["173.194.193.101", "173.194.193.102", ...]
```
### Steps to reproduce
Use either `IP.resolve_hostname` or `IP.resolve_hostname_addresses` with a .local domain.
### Minimal reproduction project (MRP)
In order to test a domain, select the node called "Node" in the scene panel and change the "Domain" property.
[Resolve Hostname Issue.zip](https://github.com/user-attachments/files/18387652/Resolve.Hostname.Issue.zip) | bug,topic:network | low | Minor |
2,782,195,854 | flutter | [3.27.] : Build error because of wrong code that's not being shown by IDE | ### Steps to reproduce
When trying to build the app using Flutter 3.27.1, the build fails and reports a code error that is not shown on the "PROBLEMS" tab; even after running flutter clean and the like, it persists. Reverting to Flutter 3.22.x, the project builds correctly.
### Expected results
The project should build, or the IDE should report the code error before the build.
### Actual results
Build errors.
### Code sample
<details open><summary>Sample code</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MaterialApp(home: Test()));
}
class Test extends StatefulWidget {
const Test({super.key});
@override
State<Test> createState() => _TestState();
}
class _TestState extends State<Test> {
int v = 1;
@override
Widget build(BuildContext context) {
return Scaffold(
body: Column(
children: [
SizedBox(height: 20),
TextButton(
onPressed: () {
Navigator.push<void>(
context,
MaterialPageRoute<void>(
builder: (BuildContext context) => MyPage(
test: (v) => v,
),
),
);
},
child: Text('text')),
],
),
);
}
}
class MyPage extends StatelessWidget {
const MyPage({super.key, required this.test});
final Function test;
@override
Widget build(BuildContext context) {
return Column(
children: [
TextButton(
onPressed: () => test(Navigator.pop(context)), // Navigator.pop returns void; this is the line the 3.27 build error below points at
child: Text('text'),
),
],
);
}
}
```
</details>
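For what it's worth, a sketch of a rewrite that avoids passing a `void` expression (an assumption about intent on my part; the value handed to `test` is a placeholder, since the original passes pop's void result):
```dart
onPressed: () {
  Navigator.pop(context); // returns void, so don't use its result as an argument
  test(null);             // placeholder argument; substitute whatever 'test' expects
},
```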
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs on 3.27.1</summary>
```console
vable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ +4 ms] The configuration :classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ +1 ms] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect, only one of these flags should be set.
[ ] Task name matched 'assembleDebug'
[ ] Selected primary task 'assembleDebug' from project :
[ +97 ms] WARNING: We recommend using a newer Android Gradle plugin to use compileSdk = 35
[ ] This Android Gradle plugin (8.1.0) was tested up to compileSdk = 33 (and compileSdkPreview = "UpsideDownCakePrivacySandbox").
[ ] You are strongly encouraged to update your project to use a newer
[ ] Android Gradle plugin that has been tested with compileSdk = 35.
[ ] If you are already using the latest version of the Android Gradle plugin,
[ ] you may need to wait until a newer version with support for compileSdk = 35 is available.
[ ] To suppress this warning, add/update
[ ] android.suppressUnsupportedCompileSdk=35
[ ] to this project's gradle.properties.
[ ] Tasks to be executed: [task ':app:preBuild', task ':app:preDebugBuild', task ':app:mergeDebugNativeDebugMetadata', task ':app:compileFlutterBuildDebug', task ':app:packJniLibsflutterBuildDebug', task ':app:checkDebugAarMetadata', task ':app:cleanMergeDebugAssets', task ':app:mergeDebugShaders', task ':app:compileDebugShaders', task ':app:generateDebugAssets', task ':app:mergeDebugAssets', task ':app:copyFlutterAssetsDebug', task ':app:generateDebugResValues', task ':app:mapDebugSourceSetPaths', task ':app:generateDebugResources', task ':app:mergeDebugResources', task ':app:packageDebugResources', task ':app:parseDebugLocalResources', task ':app:createDebugCompatibleScreenManifests', task ':app:extractDeepLinksDebug', task ':app:processDebugMainManifest', task ':app:processDebugManifest', task ':app:processDebugManifestForPackage', task ':app:processDebugResources', task ':app:compileDebugKotlin', task ':app:javaPreCompileDebug', task ':app:compileDebugJavaWithJavac', task ':app:compressDebugAssets', task ':app:processDebugJavaRes', task ':app:mergeDebugJavaResource', task ':app:checkDebugDuplicateClasses', task ':app:desugarDebugFileDependencies', task ':app:mergeExtDexDebug', task ':app:mergeLibDexDebug', task ':app:dexBuilderDebug', task ':app:mergeProjectDexDebug', task ':app:mergeDebugJniLibFolders', task ':app:mergeDebugNativeLibs', task ':app:stripDebugDebugSymbols', task ':app:validateSigningDebug', task ':app:writeDebugAppMetadata', task ':app:writeDebugSigningConfigVersions', task ':app:packageDebug', task ':app:createDebugApkListingFileRedirect', task ':app:assembleDebug']
[ +2 ms] Tasks that were excluded: []
[ ] Resolve mutations for :app:preBuild (Thread[#723,Execution worker Thread 5,5,main]) started.
[ ] :app:preBuild (Thread[#723,Execution worker Thread 5,5,main]) started.
[ ] > Task :app:preBuild UP-TO-DATE
[ ] Skipping task ':app:preBuild' as it has no actions.
[ ] Resolve mutations for :app:preDebugBuild (Thread[#723,Execution worker Thread 5,5,main]) started.
[ ] :app:preDebugBuild (Thread[#723,Execution worker Thread 5,5,main]) started.
[ ] > Task :app:preDebugBuild UP-TO-DATE
[ ] Skipping task ':app:preDebugBuild' as it has no actions.
[ ] Resolve mutations for :app:mergeDebugNativeDebugMetadata (Thread[#723,Execution worker Thread 5,5,main]) started.
[ ] :app:mergeDebugNativeDebugMetadata (Thread[#723,Execution worker Thread 5,5,main]) started.
[ ] > Task :app:mergeDebugNativeDebugMetadata NO-SOURCE
[ ] Skipping task ':app:mergeDebugNativeDebugMetadata' as it has no source files and no previous output files.
[ ] Resolve mutations for :app:compileFlutterBuildDebug (Thread[#723,Execution worker Thread 5,5,main]) started.
[ ] :app:compileFlutterBuildDebug (Thread[#723,Execution worker Thread 5,5,main]) started.
[ +331 ms] lib/main.dart:53:43: Error: This expression has type 'void' and can't be used.
[ ] onPressed: () => test(Navigator.pop(context)),
[ ] ^
[+1662 ms] > Task :app:compileFlutterBuildDebug
[ ] Caching disabled for task ':app:compileFlutterBuildDebug' because:
[ ] Build cache is disabled
[ ] Task ':app:compileFlutterBuildDebug' is not up-to-date because:
[ ] Task has failed previously.
[ ] Starting process 'command 'C:\Src\Flutter\bin\flutter.bat''. Working directory: C:\Users\PC\StudioProjects\untitled Command: C:\Src\Flutter\bin\flutter.bat --verbose assemble --no-version-check --depfile C:\Users\PC\StudioProjects\untitled\build\app\intermediates\flutter\debug/flutter_build.d --output C:\Users\PC\StudioProjects\untitled\build\app\intermediates\flutter\debug -dTargetFile=C:\Users\PC\StudioProjects\untitled\lib\main.dart -dTargetPlatform=android -dBuildMode=debug -dTrackWidgetCreation=true -dFlavor= -dAndroidArchs=android-x64 -dMinSdkVersion=21 debug_android_application
[ ] Successfully started process 'command 'C:\Src\Flutter\bin\flutter.bat''
[ ] [ +88 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] [ +3 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] [ +3 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ ] [ +84 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'GradleWrapper' is not required, skipping update.
[ ] [ +1 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ +1 ms] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +1 ms] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'PubDependencies' is not required, skipping update.
[ ] [ +46 ms] Initializing file store
[ ] [ +21 ms] Skipping target: gen_localizations
[ ] [ +8 ms] gen_dart_plugin_registrant: Starting due to {InvalidatedReasonKind.inputChanged: The following inputs have updated contents: C:\Users\PC\StudioProjects\untitled\.dart_tool\package_config_subset}
[ ] [ +91 ms] gen_dart_plugin_registrant: Complete
[ +3 ms] [ +2 ms] kernel_snapshot_program: Starting due to {}
[ ] [ +20 ms] C:\Src\Flutter\bin\cache\dart-sdk\bin\dartaotruntime.exe C:\Src\Flutter\bin\cache\dart-sdk\bin\snapshots\frontend_server_aot.dart.snapshot --sdk-root C:\Src\Flutter\bin\cache\artifacts\engine\common\flutter_patched_sdk/ --target=flutter --no-print-incremental-dependencies -Ddart.vm.profile=false -Ddart.vm.product=false --enable-asserts --track-widget-creation --no-link-platform --packages C:\Users\PC\StudioProjects\untitled\.dart_tool\package_config.json --output-dill C:\Users\PC\StudioProjects\untitled\.dart_tool\flutter_build\f4868d120ea41fdd1402145689cd2f23\program.dill --depfile C:\Users\PC\StudioProjects\untitled\.dart_tool\flutter_build\f4868d120ea41fdd1402145689cd2f23\kernel_snapshot_program.d --incremental --initialize-from-dill C:\Users\PC\StudioProjects\untitled\.dart_tool\flutter_build\f4868d120ea41fdd1402145689cd2f23\program.dill --verbosity=error package:untitled/main.dart
[ +476 ms] [+1884 ms] lib/main.dart:53:43: Error: This expression has type 'void' and can't be used.
[ +1 ms] [ +1 ms] onPressed: () => test(Navigator.pop(context)),
[ ] [ ] ^
[+3600 ms] [+3621 ms] Persisting file store
[ ] [ +4 ms] Done persisting file store
[ ] [ +5 ms] Target kernel_snapshot_program failed: Exception
[ ] #0 KernelSnapshotProgram.build (package:flutter_tools/src/build_system/targets/common.dart:274:7)
[ ] <asynchronous suspension>
[ ] #1 _BuildInstance._invokeInternal (package:flutter_tools/src/build_system/build_system.dart:891:9)
[ ] <asynchronous suspension>
[ ] #2 Future.wait.<anonymous closure> (dart:async/future.dart:520:21)
[ ] <asynchronous suspension>
[ ] #3 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:829:32)
[ ] <asynchronous suspension>
[ ] #4 Future.wait.<anonymous closure> (dart:async/future.dart:520:21)
[ ] <asynchronous suspension>
[ ] #5 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:829:32)
[ ] <asynchronous suspension>
[ ] #6 Future.wait.<anonymous closure> (dart:async/future.dart:520:21)
[ ] <asynchronous suspension>
[ ] #7 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:829:32)
[ ] <asynchronous suspension>
[ ] #8 Future.wait.<anonymous closure> (dart:async/future.dart:520:21)
[ ] <asynchronous suspension>
[ ] #9 _BuildInstance.invokeTarget (package:flutter_tools/src/build_system/build_system.dart:829:32)
[ ] <asynchronous suspension>
[ ] #10 FlutterBuildSystem.build (package:flutter_tools/src/build_system/build_system.dart:651:16)
[ ] <asynchronous suspension>
[ ] #11 AssembleCommand.runCommand (package:flutter_tools/src/commands/assemble.dart:329:32)
[ ] <asynchronous suspension>
[ ] #12 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1450:27)
[ ] <asynchronous suspension>
[ ] #13 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
[ ] <asynchronous suspension>
[ ] #14 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
[ ] <asynchronous suspension>
[ ] #15 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:421:9)
[ ] <asynchronous suspension>
[ +1 ms] #16 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
[ ] <asynchronous suspension>
[ ] #17 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
[ ] <asynchronous suspension>
[ ] #18 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:131:9)
[ ] <asynchronous suspension>
[ ] #19 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
[ ] <asynchronous suspension>
[ ] #20 main (package:flutter_tools/executable.dart:94:3)
[ ] [ +7 ms] "flutter assemble" took 5.820ms.
[ ] <asynchronous suspension>
[ ] [ +19 ms]
[ ] #0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
[ ] #1 AssembleCommand.runCommand (package:flutter_tools/src/commands/assemble.dart:346:7)
[ ] <asynchronous suspension>
[ ] #2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1450:27)
[ ] <asynchronous suspension>
[ ] #3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
[ ] <asynchronous suspension>
[ ] #4 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
[ ] <asynchronous suspension>
[ ] #5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:421:9)
[ ] <asynchronous suspension>
[ ] #6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
[ ] <asynchronous suspension>
[ ] #7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
[ ] <asynchronous suspension>
[ ] #8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:131:9)
[ ] <asynchronous suspension>
[ ] #9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
[ ] <asynchronous suspension>
[ ] #10 main (package:flutter_tools/executable.dart:94:3)
[ ] <asynchronous suspension>
[ +196 ms] [ +179 ms] ensureAnalyticsSent: 176ms
[ ] [ ] Running 1 shutdown hook
[ ] [ ] Shutdown hooks complete
[ ] [ +6 ms] exiting with code 1
[ ] > Task :app:compileFlutterBuildDebug FAILED
[ ] FAILURE: Build failed with an exception.
[ ] * What went wrong:
[ ] Execution failed for task ':app:compileFlutterBuildDebug'.
[ ] > Process 'command 'C:\Src\Flutter\bin\flutter.bat'' finished with non-zero exit value 1
[ ] * Try:
[ ] > Run with --debug option to get more log output.
[ ] > Run with --scan to get full insights.
[ ] > Get more help at https://help.gradle.org.
[ ] * Exception is:
[ ] org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:compileFlutterBuildDebug'.
[ ] at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:149)
[ ] at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:282)
[ ] at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:147)
[ ] at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:135)
[ ] at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
[ +1 ms] at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
[ ] at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
[ ] at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
[ ] at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
[ +2 ms] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
[ ] at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
[ ] at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
[ ] at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:80)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
[ ] at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:463)
[ ] at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:380)
[ +1 ms] at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
[ ] at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
[ ] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
[ ] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
[ ] at java.base/java.lang.Thread.run(Unknown Source)
[ ] Caused by: org.gradle.process.internal.ExecException: Process 'command 'C:\Src\Flutter\bin\flutter.bat'' finished with non-zero exit value 1
[ ] at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:431)
[ ] at org.gradle.process.internal.DefaultExecAction.execute(DefaultExecAction.java:38)
[ ] at org.gradle.process.internal.DefaultExecActionFactory.exec(DefaultExecActionFactory.java:202)
[ ] at org.gradle.api.internal.project.DefaultProject.exec(DefaultProject.java:1232)
[ ] at org.gradle.api.internal.project.DefaultProject.exec(DefaultProject.java:1227)
[ ] at org.gradle.api.Project$exec$7.call(Unknown Source)
[ ] at BaseFlutterTask.buildBundle(flutter.groovy:1685)
[ ] at BaseFlutterTask$buildBundle.callCurrent(Unknown Source)
[ ] at FlutterTask.build(flutter.groovy:1823)
[ ] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(Unknown Source)
[ ] at java.base/java.lang.reflect.Method.invoke(Unknown Source)
[ ] at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:125)
[ ] at org.gradle.api.internal.project.taskfactory.StandardTaskAction.doExecute(StandardTaskAction.java:58)
[ ] at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51)
[ +2 ms] at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29)
[ ] at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:248)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
[ ] at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:68)
[ ] at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:233)
[ ] at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:216)
[ +2 ms] at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:199)
[ ] at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:166)
[ ] at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
[ ] at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
[ ] at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
[ ] at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
[ ] at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
[ ] at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
[ ] at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
[ ] at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:67)
[ ] at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:37)
[ ] at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:41)
[ ] at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:74)
[ ] at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
[ ] at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:50)
[ ] at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:28)
[ ] at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.executeDelegateBroadcastingChanges(CaptureStateAfterExecutionStep.java:100)
[ ] at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:72)
[ +1 ms] at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:50)
[ ] at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:40)
[ ] at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:29)
[ ] at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:179)
[ ] at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:70)
[ ] at org.gradle.internal.Either$Right.fold(Either.java:175)
[ ] at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:59)
[ ] at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:68)
[ ] at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:46)
[ ] at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:36)
[ ] at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:25)
[ ] at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:36)
[ ] at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:22)
[ ] at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:91)
[ ] at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:55)
[ ] at java.base/java.util.Optional.orElseGet(Unknown Source)
[ ] at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:55)
[ ] at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:37)
[ ] at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:65)
[ ] at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:36)
[ ] at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
[ ] at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
[ ] at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:77)
[ ] at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:38)
[ ] at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:94)
[ ] at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:49)
[ ] at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:71)
[ ] at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:45)
[ ] at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNonEmptySources(SkipEmptyWorkStep.java:177)
[ ] at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:81)
[ +1 ms] at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:53)
[ ] at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:32)
[ ] at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:21)
[ ] at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
[ ] at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
[ ] at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
[ ] at org.gradle.internal.execution.steps.CleanupStaleOutputsStep.execute(CleanupStaleOutputsStep.java:75)
[ ] at org.gradle.internal.execution.steps.CleanupStaleOutputsStep.execute(CleanupStaleOutputsStep.java:41)
[ ] at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:66)
[ ] at java.base/java.util.Optional.orElseGet(Unknown Source)
[ ] at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:66)
[ ] at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:38)
[ ] at org.gradle.internal.execution.steps.AssignWorkspaceStep.lambda$execute$0(AssignWorkspaceStep.java:32)
[ ] at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:293)
[ ] at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:30)
[ ] at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:21)
[ ] at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:37)
[ ] at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:27)
[ ] at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:47)
[ ] at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:34)
[ ] at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:64)
[ ] at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:146)
[ ] at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:135)
[ ] at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
[ ] at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
[ ] at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
[ ] at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
[ ] at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
[ ] at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
[ ] at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
[ ] at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
[ ] at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
[ ] at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:80)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
[ ] at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
[ ] at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:463)
[ ] at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:380)
[ ] at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
[ ] at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
[ ] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
[ ] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
[ ] at java.base/java.lang.Thread.run(Unknown Source)
[ ] BUILD FAILED in 7s
[ ] 5 actionable tasks: 1 executed, 4 up-to-date
[ ] Watched directory hierarchies: [C:\Src\Flutter\packages\flutter_tools\gradle, C:\Users\PC\StudioProjects\untitled\android]
[ +579 ms] Error: Gradle task assembleDebug failed with exit code 1
[ +9 ms] "flutter run" took 9.556ms.
[ +15 ms]
#0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
#1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:777:9)
<asynchronous suspension>
#2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1450:27)
<asynchronous suspension>
#3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#4 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:421:9)
<asynchronous suspension>
#6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
<asynchronous suspension>
#8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:131:9)
<asynchronous suspension>
#9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#10 main (package:flutter_tools/executable.dart:94:3)
<asynchronous suspension>
[ +83 ms] ensureAnalyticsSent: 80ms
[ ] Running 2 shutdown hooks
[ +4 ms] Shutdown hooks complete
[ +103 ms] exiting with code 1
Exited (1).
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
PS C:\Users\PC\OneDrive\DevStuff\WMS\wms_main> flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.19045.5247], locale pt-BR)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[X] Chrome - develop for the web (Cannot find Chrome executable at .\Google\Chrome\Application\chrome.exe)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.12.3)
[√] Android Studio (version 2024.2)
[√] VS Code (version 1.96.2)
[√] Connected device (3 available)
[√] Network resources
! Doctor found issues in 1 category.
PS C:\Users\PC\OneDrive\DevStuff\WMS\wms_main>
```
</details>
| framework,dependency: dart,f: routes,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Critical |
2,782,196,455 | material-ui | [material-ui] Compatibility issues with Remix | ### Steps to reproduce
Steps:
1. Install remix defaults `npx create-remix@latest remix-mui-bug`
2. Install mui defaults `npm install @mui/material @emotion/react @emotion/styled`
3. Import and add any MUI component, like a `<Button>`
### Current behavior
Hydration errors (and also for some reason the button converts from `variant="contained"` to `variant="text"` upon hydration?)
```
Warning: Prop `charSet` did not match. Server: "null" Client: "utf-8" Error Component Stack
at meta (<anonymous>)
at head (<anonymous>)
at html (<anonymous>)
at Layout (root.tsx:25:26)
Uncaught Error: Hydration failed because the initial UI does not match what was rendered on the server.
at throwOnHydrationMismatch (react-dom.development.js:12507:9)
at tryToClaimNextHydratableInstance (react-dom.development.js:12535:7)
at updateHostComponent (react-dom.development.js:19931:5)
at beginWork (react-dom.development.js:21657:14)
at HTMLUnknownElement.callCallback2 (react-dom.development.js:4164:14)
at Object.invokeGuardedCallbackDev (react-dom.development.js:4213:16)
at invokeGuardedCallback (react-dom.development.js:4277:31)
at beginWork$1 (react-dom.development.js:27490:7)
at performUnitOfWork (react-dom.development.js:26596:12)
at workLoopConcurrent (react-dom.development.js:26582:5)
```
<img width="1510" alt="image" src="https://github.com/user-attachments/assets/2dd56614-78da-4548-b4b4-b9457c2cf368" />
### Expected behavior
I should be able to `npm install` mui and not have hydration errors
### Context
I followed the advice here: https://github.com/mui/material-ui/issues/39765 by adding [this provider](https://github.com/mahmoudmoravej/remix-mui/blob/4af29886c73da59cd56633343a19f63e26d43744/app/mui/MuiProvider.tsx), but I'm still seeing the error.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.4.1
Binaries:
Node: 20.11.1 - ~/.asdf/installs/nodejs/20.11.1/bin/node
npm: 10.2.4 - ~/.asdf/plugins/nodejs/shims/npm
pnpm: 8.15.5 - ~/.asdf/installs/nodejs/20.11.1/bin/pnpm
Browsers:
Chrome: 131.0.6778.265
Edge: Not Found
Safari: 17.4.1
npmPackages:
@emotion/react: ^11.14.0 => 11.14.0
@emotion/styled: ^11.14.0 => 11.14.0
@mui/core-downloads-tracker: 6.3.1
@mui/material: ^6.3.1 => 6.3.1
@mui/private-theming: 6.3.1
@mui/styled-engine: 6.3.1
@mui/system: 6.3.1
@mui/types: 7.2.21
@mui/utils: 6.3.1
@types/react: ^18.2.20 => 18.3.18
react: ^18.2.0 => 18.3.1
react-dom: ^18.2.0 => 18.3.1
typescript: ^5.1.6 => 5.7.3
```
</details>
**Search keywords**: remix Warning: Prop `charSet` did not match. Server: "null" Client: "utf-8" Error Component Stack | bug 🐛,package: material-ui | low | Critical |
2,782,198,653 | vscode | Your Code installation appears to be corrupt. Please reinstall. |
Type: <b>Bug</b>
I have uninstalled VS Code multiple times and attempted to install various versions, including both user and system installations. Despite my efforts, the issue persists. I’ve also tried following numerous YouTube tutorials and read several articles in search of a solution, but nothing has worked so far.
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.26100
Modes: Unsupported
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-12500H (16 x 3110)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.69GB (1.77GB free)|
|Process Argv|--crash-reporter-id eb177e09-bc83-462f-bb91-755a47a81f4b|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (2)</summary>
Extension|Author (truncated)|Version
---|---|---
copilot|Git|1.256.0
copilot-chat|Git|0.23.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,782,200,935 | flutter | Gradle Wrapper-related files in Flutter Gradle Plugin sources should be either commited or gitignored | When working on `packages/fluter_tools/gradle` in IntelliJ, I always end up with:
```
$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
packages/flutter_tools/gradle/gradle/
packages/flutter_tools/gradle/gradlew
packages/flutter_tools/gradle/gradlew.bat
no changes added to commit (use "git add" and/or "git commit -a")
```
We should either:
1. Commit these files to git (will require some exemption because binary files generally aren't allowed to be committed, [see here](https://github.com/flutter/flutter/blob/bd1ebf2e1498bd022808f8b237654ce42ae537be/dev/bots/analyze.dart#L1695-L1700))
2. Add them to `.gitignore` so they stop showing up
I'm somewhat partial to option (1) because it'd ensure we use the same Gradle version for developing the Flutter Gradle Plugin. No strong opinion, though.
cc @gmackall @reidbaker | platform-android,tool,t: gradle,c: proposal,P1,team-android,triaged-android | medium | Major |
2,782,217,891 | pytorch | 144x less efficient CPU usage when training NN past a certain width | ### 🐛 Describe the bug
The code below is a minimal NN training loop with a fully connected NN of shape (10->width->width->10).
When width is 45, everything is fine: the code takes about 1 second and uses only 1 CPU.
When width is 46, the code takes 9 seconds and uses all 16 CPUs.
So it's 144x less efficient (what are all those cycles doing?).
I'm guessing it switches over to a multi-threaded implementation when multiplying matrices of a certain size. But something doesn't seem right.
I also tried the "mps" backend, which surprisingly has enough overhead that it isn't faster until the network is very wide.
```
import time
import os
import torch
from torch import nn, optim
print(f"{torch.__version__=}, {os.uname()=}")
batch_size = 128
all_inputs = torch.randn((batch_size * 100, 10))
all_targets = all_inputs + 0.01 * torch.randn((batch_size * 100, 10))
for device, omp_num_threads in [("cpu", None), ("cpu", 1), ("mps", 1)]:
if omp_num_threads is not None:
torch.set_num_threads(omp_num_threads)
for width in [32, 45, 46, 64, 128, 256, 512, 1024, 2048, 4096]:
if device == "cpu" and width > 256: break # too slow, don't bother
network = nn.Sequential(nn.Linear(10, width), nn.Linear(width, width), nn.Linear(width, 10)).to(device)
optimizer = optim.Adam(network.parameters(), lr=3e-4)
t_start = time.time()
for epoch in range(50):
for offset in range(0, len(all_inputs), batch_size):
inputs = all_inputs[offset:offset+batch_size].to(device)
targets = all_targets[offset:offset+batch_size].to(device)
optimizer.zero_grad()
((network(inputs) - targets) ** 2).mean().backward()
optimizer.step()
final_loss = ((network(all_inputs.to(device)) - all_targets.to(device)) ** 2).mean()
print(f"{torch.get_num_threads()=}, device={device}, nn_width={width}, final_loss={final_loss:2.5f}, took {time.time() - t_start:2.1f} secs")
```
output on my machine is:
```
torch.__version__='2.3.1.post100', os.uname()=posix.uname_result(sysname='Darwin', nodename='username.macbook.pro.m3.lan', release='23.5.0', version='Darwin Kernel Version 23.5.0: Wed May 1 20:17:33 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6031', machine='arm64')
torch.get_num_threads()=16, device=cpu, nn_width=32, final_loss=0.00010, took 0.8 secs
torch.get_num_threads()=16, device=cpu, nn_width=45, final_loss=0.00010, took 1.0 secs
torch.get_num_threads()=16, device=cpu, nn_width=46, final_loss=0.00010, took 8.8 secs <---- 16 cpus, and 9x slower!
torch.get_num_threads()=16, device=cpu, nn_width=64, final_loss=0.00011, took 6.8 secs
torch.get_num_threads()=16, device=cpu, nn_width=128, final_loss=0.00012, took 19.9 secs
torch.get_num_threads()=16, device=cpu, nn_width=256, final_loss=0.00015, took 65.6 secs
# everything is way faster with just 1 thread (below)
torch.get_num_threads()=1, device=cpu, nn_width=32, final_loss=0.00010, took 0.9 secs
torch.get_num_threads()=1, device=cpu, nn_width=45, final_loss=0.00010, took 1.0 secs
torch.get_num_threads()=1, device=cpu, nn_width=46, final_loss=0.00010, took 2.5 secs <---- 1 cpu, and faster!
torch.get_num_threads()=1, device=cpu, nn_width=64, final_loss=0.00011, took 1.9 secs
torch.get_num_threads()=1, device=cpu, nn_width=128, final_loss=0.00012, took 2.5 secs
torch.get_num_threads()=1, device=cpu, nn_width=256, final_loss=0.00015, took 4.2 secs
# mps has a lot of overhead, but eventually is faster
torch.get_num_threads()=1, device=mps, nn_width=32, final_loss=0.00010, took 8.7 secs
torch.get_num_threads()=1, device=mps, nn_width=45, final_loss=0.00010, took 8.7 secs
torch.get_num_threads()=1, device=mps, nn_width=46, final_loss=0.00010, took 8.9 secs
torch.get_num_threads()=1, device=mps, nn_width=64, final_loss=0.00011, took 8.4 secs
torch.get_num_threads()=1, device=mps, nn_width=128, final_loss=0.00012, took 11.8 secs
torch.get_num_threads()=1, device=mps, nn_width=256, final_loss=0.00015, took 8.4 secs
torch.get_num_threads()=1, device=mps, nn_width=512, final_loss=0.00019, took 11.2 secs
torch.get_num_threads()=1, device=mps, nn_width=1024, final_loss=0.00027, took 9.2 secs
torch.get_num_threads()=1, device=mps, nn_width=2048, final_loss=0.00033, took 10.0 secs
torch.get_num_threads()=1, device=mps, nn_width=4096, final_loss=0.00032, took 27.9 secs. <-- quadratic runtime starts here as expected
```
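To isolate the threshold, here is a minimal sketch (my addition, not part of the measurements above; it assumes the cliff lives in ATen's parallel GEMM dispatch rather than in autograd or the optimizer):

```python
# Times a bare (128 x width) @ (width x width) matmul just below and
# above the observed cliff, with 1 thread vs. the machine default.
import time
import torch

default_threads = torch.get_num_threads()
for threads in (1, default_threads):
    torch.set_num_threads(threads)
    for width in (45, 46):
        h = torch.randn(128, width)
        w = torch.randn(width, width)
        t0 = time.time()
        for _ in range(20_000):
            h @ w
        print(f"threads={threads}, width={width}: {time.time() - t0:.2f}s")
```

If the width-46 case only slows down in the multi-threaded configuration, that points at the per-op parallelization threshold rather than anything training-specific.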
### Versions
```
Collecting environment information...
PyTorch version: 2.3.1.post100
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:26:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.1.post100
[conda] numpy 1.26.4 py311he598dae_0
[conda] numpy-base 1.26.4 py311hfbfe69c_0
[conda] pytorch 2.3.1 gpu_mps_py311h7b7e308_100
```
cc @msaroufim @malfet @snadampal @milpuz01 | module: performance,triaged,module: arm | low | Critical |
2,782,226,572 | pytorch | [MPS] `torch.mps.synchronize` hangs on error | ### 🐛 Describe the bug
Consider the following code:
```python
import torch
lib=torch.mps._compile_shader("kernel void foo(device float* x) {__builtin_trap();}")
lib.foo(torch.rand(3, device="mps"))
torch.mps.synchronize()
```
It will hang the process, and a few attempts to reproduce it resulted in a system hang.
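While investigating, a hypothetical guard (my sketch, not a fix, and it can't help in the cases where the GPU fault hangs the whole system) keeps the hang contained to a child process:

```python
# Runs the repro in a child process with a hard timeout, so only the
# child hangs on torch.mps.synchronize() and the parent can kill it.
import multiprocessing as mp

def repro():
    import torch
    lib = torch.mps._compile_shader("kernel void foo(device float* x) {__builtin_trap();}")
    lib.foo(torch.rand(3, device="mps"))
    torch.mps.synchronize()

if __name__ == "__main__":
    p = mp.Process(target=repro)
    p.start()
    p.join(timeout=30)
    if p.is_alive():
        print("synchronize() appears hung; killing child process")
        p.kill()
```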
### Versions
Nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen | triaged,module: deadlock,module: mps | low | Critical |
2,782,236,338 | yt-dlp | ERROR: [Globo] xxxxxxx: Unable to download webpage: | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Brazil
### Provide a description that is worded well enough to be understood
```shell
$ yt-dlp --cookies-from-browser chrome -u [email protected] -p ******** "https://globoplay.globo.com/v/9203242/"
[Globo] Extracting URL: https://globoplay.globo.com/v/9203242/
[Globo] 9203242: Getting cookies
Extracting cookies from chrome
Extracted 3422 cookies from chrome
ERROR: [Globo] 9203242: Unable to download webpage: (<urllib3.connection.HTTPSConnection object at 0x70c95934db80>, 'Connection to globo-ab.globo.com timed out. (connect timeout=20.0)') (caused by TransportError("(<urllib3.connection.HTTPSConnection object at 0x70c95934db80>, 'Connection to globo-ab.globo.com timed out. (connect timeout=20.0)')"))
```
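Since the failure is a plain connect timeout to `globo-ab.globo.com`, a quick network-level probe (a hypothetical diagnostic on my part, no yt-dlp involved) can rule the extractor out:

```python
# If this also times out, the host is unreachable from this network
# (blocked/filtered/null-routed) and the error is not a yt-dlp bug.
import socket
socket.create_connection(("globo-ab.globo.com", 443), timeout=20).close()
```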
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ yt-dlp --cookies-from-browser chrome -u [email protected] -p ******** "https://globoplay.globo.com/v/9203242/" -vU
[debug] Command-line config: ['--cookies-from-browser', 'chrome', '-u', 'PRIVATE', '-p', 'PRIVATE', 'https://globoplay.globo.com/v/9203242/', '-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (pip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-51-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 7.1 (fdk,setts), ffprobe 7.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.45.1, urllib3-2.0.7, websockets-13.1
[debug] Proxy map: {}
Extracting cookies from chrome
[debug] Extracting cookies from: "/home/username/.config/google-chrome/Default/Cookies"
[Cookies] Loading cookie 0/ 4766[debug] detected desktop environment: GNOME
[debug] Chosen keyring: GNOMEKEYRING
Extracted 3422 cookies from chrome
[debug] cookie version breakdown: {'v10': 0, 'v11': 4766, 'other': 0, 'unencrypted': 0}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[Globo] Extracting URL: https://globoplay.globo.com/v/9203242/
[Globo] 9203242: Getting cookies
ERROR: [Globo] 9203242: Unable to download webpage: (<urllib3.connection.HTTPSConnection object at 0x76a0de38a420>, 'Connection to globo-ab.globo.com timed out. (connect timeout=20.0)') (caused by TransportError("(<urllib3.connection.HTTPSConnection object at 0x76a0de38a420>, 'Connection to globo-ab.globo.com timed out. (connect timeout=20.0)')"))
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/extractor/globo.py", line 83, in _real_extract
self._request_webpage(
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 911, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/networking/_requests.py", line 328, in _send
requests_res = session.request(
^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 845, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 447, in increment
raise reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/urllib3/util/util.py", line 39, in reraise
raise value
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 791, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 492, in _make_request
raise new_e
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 468, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 1097, in _validate_conn
conn.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 611, in connect
self.sock = sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 212, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x76a0de38a420>, 'Connection to globo-ab.globo.com timed out. (connect timeout=20.0)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 898, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 4172, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/home/username/.local/lib/python3.12/site-packages/yt_dlp/networking/_requests.py", line 356, in _send
raise TransportError(cause=e) from e
yt_dlp.networking.exceptions.TransportError: (<urllib3.connection.HTTPSConnection object at 0x76a0de38a420>, 'Connection to globo-ab.globo.com timed out. (connect timeout=20.0)')
```
| site-bug,triage | low | Critical |
2,782,248,172 | tauri | [bug] Rendering not working correctly with GDK_BACKEND=wayland | ### Describe the bug
I am new to tauri but I have this issue:
When running `cargo tauri dev` with `GDK_BACKEND=wayland` my app looks like this:

However, when running with `GDK_BACKEND=x11` it looks like this:

Maybe this is a webkit issue?
### Reproduction
_No response_
### Expected behavior
`GDK_BACKEND=wayland` should look the same as `GDK_BACKEND=x11`
### Full `tauri info` output
```text
[✔] Environment
- OS: NixOS 25.5.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.46.5
✔ rsvg2: 2.58.3
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (1980-01-01)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN)
- node: 23.2.0
- yarn: 1.22.22
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.2.0
- tauri-build 🦀: 2.0.4
- wry 🦀: 0.48.0
- tao 🦀: 0.31.1
- tauri-cli 🦀: 2.1.0
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.1.0
[-] Plugins
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,platform: Nix/NixOS | low | Critical |
2,782,248,499 | flutter | Read-only TextField prevents focus from changing | ### Steps to reproduce
* Build the application for MacOS.
* Press tab three times.
### Expected results
The focus should be on the 3rd textfield.
### Actual results
The focus is still on the 2nd textfield which is `readOnly`.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() => runApp(const Foo());
class Foo extends StatelessWidget {
const Foo({super.key});
@override
Widget build(BuildContext context) => const MaterialApp(
home: Scaffold(
body: Column(
children: [
TextField(),
TextField(readOnly: true),
TextField(),
],
),
),
);
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-SG)
• Flutter version 3.27.1 on channel stable at /Users/matthias/Documents/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/matthias/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Users/matthias/Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Users/matthias/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3.1.1)
• IntelliJ at /Users/matthias/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin version 83.0.4
• Dart plugin version 243.23177
[✓] Connected device (4 available)
• Overpriced Phone (mobile) • 00008030-0009195E1E41802E • ios • iOS 18.1.1 22B91
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[✓] Network resources
• All expected network resources are available.
```
</details>
| a: text input,framework,f: focus,has reproducible steps,P2,workaround available,team-text-input,triaged-text-input,found in release: 3.27,found in release: 3.28 | low | Major |
2,782,262,933 | godot | Uniforms break particle shaders after conversion from a ParticleProcessMaterial | ### Tested versions
- Reproducible in v4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 SUPER (NVIDIA; 32.0.15.6636) - 13th Gen Intel(R) Core(TM) i5-13600KF (20 Threads)
### Issue description
I'm experimenting with particle shaders and when I add a uniform variable it breaks the particles (they don't move at all). When the uniform is replaced with a hardcoded value then it works perfectly. Here's a video showing what I mean:
https://github.com/user-attachments/assets/7154ad42-681c-42b0-a8d0-cd0a8591a9d9
By the way, the code in the start() function was stolen from the "convert to ShaderMaterial" option with the default particle process material.
### Steps to reproduce
1. Create a GPU Particles 2D node and attach a ShaderMaterial to it.
2. Write simple code with uniforms:
```
shader_type particles;
uniform float gravity = 98;
void start() {
if (RESTART_ROT_SCALE) {
TRANSFORM[0].xyz = vec3(1.0, 0.0, 0.0);
TRANSFORM[1].xyz = vec3(0.0, 1.0, 0.0);
TRANSFORM[2].xyz = vec3(0.0, 0.0, 1.0);
}
if (RESTART_POSITION) {
TRANSFORM[3].xyz = vec3(0.0);
TRANSFORM = EMISSION_TRANSFORM * TRANSFORM;
}
if (RESTART_VELOCITY) {
VELOCITY = vec3(0.0);
}
}
void process() {
vec2 velocity = VELOCITY.xy;
velocity = vec2(0.0, 1.0) * gravity;
VELOCITY.xy = velocity;
}
```
3. The particles are broken.
### Minimal reproduction project (MRP)
N/A | bug,needs testing,topic:shaders,topic:particles | low | Critical |
2,782,274,615 | TypeScript | Getting a setter-only property is typed as the setter's parameter's type instead of undefined | ### 🔎 Search Terms
setter only
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, from 5.7.3 down to 3.3.3333
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.3#code/MYewdgzgLgBANgQzAcwK4OQUxgXhgbwCgYYJNZhUAnKzMKACjAQFtMAuUqKgSxQEoCxEjCgALHhAB0cEMikAHVBDFNWmfgG5hAXwA0w2ck4BtALowEELrxTmDO7YUQp0WKZRp1YeAEQAxACVfJ1BIClx4JDQMTA9qWnptIA
### 💻 Code
```ts
const language = {
set current(name: string) {
this.log.push(name);
},
log: [] as string[],
};
language.current = "FR";
const c = language.current;
```
### 🙁 Actual behavior
Variable `c` is said to be of type `string`.
### 🙂 Expected behavior
As clearly stated [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/set#defining_a_setter_on_new_objects_in_object_initializers), it should be `undefined`.
> Note that `current` is not defined, and any attempts to access it will result in `undefined`.
### Additional information about the issue
_No response_ | Suggestion,Awaiting More Feedback | low | Major |
2,782,277,604 | pytorch | CheckpointError with `torch.distributed.algorithms._checkpoint.checkpoint_wrapper` and `torch.compile` | ### 🐛 Describe the bug
```python
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl
@torch.compile(mode='reduce-overhead')
class SelfAttention(nn.Module):
def __init__(self, num_heads: int, head_dim: int, norm_eps: float, causal: bool):
super().__init__()
self.num_heads = num_heads
self.head_dim = head_dim
self.causal = causal
total_dim = num_heads * head_dim
self.to_qkv = nn.Linear(total_dim, total_dim * 3, bias=False)
self.to_out = nn.Linear(total_dim, total_dim, bias=False)
self.q_norm = nn.RMSNorm(head_dim, eps=norm_eps)
self.k_norm = nn.RMSNorm(head_dim, eps=norm_eps)
def forward(self, x_btc: torch.Tensor):
states = x_btc
batch_size, sequence_length, _ = states.shape
proj: torch.Tensor = self.to_qkv(states)
proj = proj.view(batch_size, sequence_length, self.num_heads, 3, self.head_dim).transpose(1, 2)
query, key, value = proj.unbind(-2)
query: torch.Tensor = self.q_norm(query)
key: torch.Tensor = self.k_norm(key)
hidden_states = F.scaled_dot_product_attention(
query, key, value, is_causal=self.causal
)
hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, sequence_length, self.num_heads * self.head_dim)
hidden_states = hidden_states.to(query.dtype)
return self.to_out(hidden_states)
class Block(nn.Module):
def __init__(self):
super().__init__()
self.attn = SelfAttention(1, 64, 1e-5, False)
def forward(self, x):
return x + self.attn(x)
class Transformer(nn.Module):
def __init__(self):
super().__init__()
self.blocks = nn.ModuleList([Block() for _ in range(4)])
def forward(self, x):
for block in self.blocks:
x = block(x)
return x
if __name__ == '__main__':
mod = Transformer().cuda()
non_reentrant_wrapper = functools.partial(
checkpoint_wrapper,
checkpoint_impl=CheckpointImpl.NO_REENTRANT,
)
apply_activation_checkpointing(
mod, checkpoint_wrapper_fn=non_reentrant_wrapper,
check_fn=lambda mod: isinstance(mod, Block)
)
mod(torch.randn(3, 77, 64).cuda()).sum().backward()
```
Output:
```
/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py:167: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
Traceback (most recent call last):
File "/root/bug/repro.py", line 74, in <module>
mod(torch.randn(3, 77, 64).cuda()).sum().backward()
File "/opt/conda/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1740, in backward
ctx_saved_tensors = ctx.saved_tensors
^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 1129, in unpack_hook
frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 903, in check_recomputed_tensors_match
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 12:
saved metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cpu')}
recomputed metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cuda', index=0)}
tensor at position 13:
saved metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cpu')}
recomputed metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cuda', index=0)}
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.21.1.el8_8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.20.1
[pip3] onnxscript==0.1.0.dev20250108
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.5.1+cu121
[pip3] torch.redstone==0.0.6
[pip3] torchaudio==2.5.1+cu121
[pip3] torchdiffeq==0.2.5
[pip3] torchelastic==0.2.2
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] torch-redstone 0.0.6 pypi_0 pypi
[conda] torchaudio 2.5.1+cu121 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchprofile 0.0.4 pypi_0 pypi
[conda] torchvision 0.20.1+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @soulitzer @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @pradeepfn | module: checkpoint,triaged,activation-checkpointing | low | Critical |
2,782,288,815 | go | x/crypto/nacl: function docs should describe return values | ### Go version
go version go1.23.0 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/jeff/Library/Caches/go-build'
GOENV='/Users/jeff/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/jeff/go/pkg/mod'
GOOS='darwin'
GOPATH='/Users/jeff/go'
GOPRIVATE='github.com/curvegrid/*'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.0/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.0/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/jeff/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/vv/s0_ykdm97l989629xrkbsy6m0000gn/T/go-build2163110709=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
The inline documentation for the functions in golang.org/x/crypto/nacl does not describe their return values.
### What did you see happen?
For example, `secretbox.Open()` and `secretbox.Seal()` should describe what their return values are. Contrast this with, for example, the documentation for [poly1305](https://pkg.go.dev/golang.org/x/[email protected]/poly1305), which either describes what the return value is ("Size returns the number of bytes Sum will return.") or makes it implicit through named return values (`func (h *MAC) Write(p []byte) (n int, err error)`).
### What did you expect to see?
See above comment | Documentation,NeedsInvestigation | low | Critical |
2,782,295,985 | PowerToys | "VK242 → Alt" works as bug | ### Microsoft PowerToys version
0.70
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
After mapping "VK242 → Alt", keys no longer work correctly when I use the remapped "VK242 → Alt" key together with Enter.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,782,298,303 | angular | Angular Resource and rxResource | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
Introduce the ability to pause/stop/resume `resource` and `rxResource`, so that we can temporarily pause/resume the listener for signal changes.
We can use `.destroy()`, but that also removes the data held by the resource, so I am asking for a `stop` feature that preserves the data present inside the resource.
[How to run function just one time when 2 signals have value?](https://stackoverflow.com/questions/79344955/how-to-run-function-just-one-time-when-2-signals-have-value/79345028#79345028)
### Proposed solution
Introduce the ability to pause/stop/resume `resource` and `rxResource`, so that we can temporarily pause/resume the listener for signal changes.
We can use `.destroy()`, but that also removes the resource's data, so I am asking for a `stop` feature so that we can pause the resource listener.
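A hypothetical sketch of what this could look like (note: the `pause()` and `resume()` methods below do not exist today, and `fetchUser` is a placeholder for any async data fetch):
```typescript
import { resource, signal } from '@angular/core';

// Placeholder for any async data fetch.
declare function fetchUser(id: number): Promise<unknown>;

const userId = signal(1);

const user = resource({
  request: () => ({ id: userId() }),
  loader: ({ request }) => fetchUser(request.id),
});

// Hypothetical additions (these methods do not exist today):
user.pause();  // stop reacting to userId() changes, but keep user.value()
userId.set(2); // would not trigger a reload while paused
user.resume(); // start listening to signal changes again
```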
### Alternatives considered
Nothing | area: core,cross-cutting: signals | low | Minor |
2,782,303,895 | ollama | Add MoonDream 2 rev:2025-1-9 support | Moondream2 has a new release, rev 2025-1-9, which is incredibly good for its size.
It also supports bbox and gaze detection.
The Moondream architecture is already supported in Ollama, so I think it should be easy to bring the new model to Ollama. | model request | low | Minor |
2,782,305,343 | frp | Redirected too many times | ### Bug Description
This happens when I access the console address of an intranet ESXi server mapped through frp:
<img width="954" alt="image" src="https://github.com/user-attachments/assets/932f8f1e-9bf1-41bd-ad75-b20b06c868b5" />
I'm not sure whether the redirect is caused by the insecure-connection warning that ESXi shows when accessed over the LAN:
<img width="1337" alt="image" src="https://github.com/user-attachments/assets/84e81c8f-9675-4de9-a564-222478297f43" />
When accessed over the LAN, you have to manually click "Proceed" once every time.
### frpc Version
0.22.0
### frps Version
0.22.0
### System Architecture
linux/amd64
### Configurations
frps.ini
[common]
# Port frp listens on, used for communication between server and client
bind_port = 7000
# Port for HTTP requests
vhost_http_port = 80
# Port for HTTPS requests
vhost_https_port = 443
dashboard_port = 7500
# Dashboard username and password
dashboard_user = *****
dashboard_pwd = *****
subdomain_host = co***g.cn
# Enable token authentication
authentication_method = token
authenticate_heartbeats = true
authenticate_new_work_conns = true
token = *****
# Port range allowed for TCP tunneling
allow_ports = 2-30000
log_file = /home/ubuntu/frp/frps.log
log_max_days = 3
frpc.ini
[common]
server_addr = *****
server_port = 7000
authentication_method = token
authenticate_heartbeats = true
authenticate_new_work_conns = true
token = *****
[esxi]
type = http
local_port = 80
local_ip = 192.168.5.152
subdomain = esxi
### Logs
2025/01/12 05:52:49 [W] [newhttp.go:209] http: proxy error: no such domain
2025/01/12 05:52:49 [I] [proxy.go:87] [3da14647d62a820d] [esxi] get a new work connection: [14.145.4.246:53735]
2025/01/12 05:52:50 [I] [proxy.go:87] [3da14647d62a820d] [esxi] get a new work connection: [14.145.4.246:53735]
2025/01/12 05:52:51 [I] [proxy.go:87] [3da14647d62a820d] [esxi] get a new work connection: [14.145.4.246:53735]
2025/01/12 05:52:52 [I] [proxy.go:87] [3da14647d62a820d] [esxi] get a new work connection: [14.145.4.246:53735]
2025/01/12 05:52:53 [I] [proxy.go:87] [3da14647d62a820d] [esxi] get a new work connection: [14.145.4.246:53735]
2025/01/12 05:52:54 [I] [proxy.go:87] [3da14647d62a820d] [esxi] get a new work connection: [14.145.4.246:53735]
2025/01/12 05:52:55 [I] [proxy.go:87] [3da14647d62a820d] [esxi] get a new work connection: [14.145.4.246:53735]
### Steps to reproduce
1.
2.
3.
...
### Affected area
- [ ] Docs
- [ ] Installation
- [ ] Performance and Scalability
- [ ] Security
- [ ] User Experience
- [ ] Test and Release
- [ ] Developer Infrastructure
- [ ] Client Plugin
- [ ] Server Plugin
- [ ] Extensions
- [ ] Others | lifecycle/stale | low | Critical |
2,782,317,400 | pytorch | torch.vmap + autograd.Function + current_level bug | ### 🐛 Describe the bug
If we call ```torch._C._functorch.current_level()``` inside an autograd function's ```setup_context``` method, and then apply ```torch.vmap``` on top of it, it errors out.
Repro:
* Checkout and apply https://github.com/pytorch/pytorch/pull/143811
* Replace ```key = id(Generated)``` with ```key = current_level()``` in ```setup_context```.
* Run the following example:
```
import torch
class LinearFunction(torch.autograd.Function):
generate_vmap_rule = True
# Note that forward, setup_context, and backward are @staticmethods
@staticmethod
def forward(input, weight, bias):
output = input.mm(weight.t())
if bias is not None:
output += bias.unsqueeze(0).expand_as(output)
return output
@staticmethod
# inputs is a Tuple of all of the inputs passed to forward.
# output is the output of the forward().
def setup_context(ctx, inputs, output):
input, weight, bias = inputs
ctx.save_for_backward(input, weight, bias)
# This function has only a single output, so it gets only one gradient
@staticmethod
def backward(ctx, grad_output):
input, weight, bias = ctx.saved_tensors
grad_input = grad_weight = grad_bias = None
if ctx.needs_input_grad[0]:
grad_input = grad_output.mm(weight)
if ctx.needs_input_grad[1]:
grad_weight = grad_output.t().mm(input)
if bias is not None and ctx.needs_input_grad[2]:
grad_bias = grad_output.sum(0)
return grad_input, grad_weight, grad_bias
def fn(input, weight, bias=None):
return torch.vmap(LinearFunction.apply)(input, weight, bias)
torch.manual_seed(124)
batch_input = torch.randn(4, 2, 2, dtype=torch.double, requires_grad=True)
batch_weight = torch.randn(4, 3, 2, dtype=torch.double, requires_grad=True)
batch_bias = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
output = fn(batch_input, batch_weight, batch_bias)
print(output)
```
Then it errors out:
```
Traceback (most recent call last):
File "/data/users/ybliang/debug/debug7.py", line 44, in <module>
output = fn(batch_input, batch_weight, batch_bias)
File "/data/users/ybliang/debug/debug7.py", line 37, in fn
return torch.vmap(LinearFunction.apply)(input, weight, bias)
File "/home/ybliang/local/pytorch/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/home/ybliang/local/pytorch/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/home/ybliang/local/pytorch/torch/_functorch/vmap.py", line 481, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/home/ybliang/local/pytorch/torch/autograd/function.py", line 585, in apply
return custom_function_call(cls, *args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 49, in __call__
return super().__call__(autograd_function, *args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_ops.py", line 439, in __call__
return wrapper()
File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_ops.py", line 435, in wrapper
return self.dispatch(
File "/home/ybliang/local/pytorch/torch/_ops.py", line 305, in dispatch
return dispatch_functorch(self, args, kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/pyfunctorch.py", line 130, in process
return kernel(self, *args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 300, in custom_function_call_vmap
return custom_function_call_vmap_generate_rule(
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 384, in custom_function_call_vmap_generate_rule
outputs = custom_function_call(vmapped_function, *unwrapped_operands)
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 50, in __call__
return autograd_function.apply(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/ybliang/local/pytorch/torch/_functorch/autograd_function.py", line 410, in setup_context
key = current_level()
RuntimeError: maybe_layer.has_value() INTERNAL ASSERT FAILED at "/data/users/ybliang/pytorch/torch/csrc/functorch/init.cpp":370, please report a bug to PyTorch.
```
### Versions
main
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345 | module: autograd,triaged,module: vmap,module: functorch | low | Critical |
2,782,325,719 | PowerToys | error | ### Microsoft PowerToys version
0.87.0.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
Version: 0.87.0.0
OS Version: Microsoft Windows NT 10.0.22631.0
IntPtr Length: 8
x64: True
Date: 2025/01/12 11:34:52
Exception:
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
at System.Threading.Thread.StartCore()
at System.Threading.PortableThreadPool.WorkerThread.CreateWorkerThread()
at System.Threading.PortableThreadPool.WorkerThread.MaybeAddWorkingWorker(PortableThreadPool threadPoolInstance)
at System.Threading.PortableThreadPool.AdjustMaxWorkersActive()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,782,346,710 | transformers | ValueError: MllamaForConditionalGeneration does not support Flash Attention 2.0 yet | ### System Info
- `transformers` version: 4.47.1
- Platform: Linux-4.18.0-553.22.1.el8_10.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.15
- Huggingface_hub version: 0.26.3
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H200
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code:
```
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
processor = AutoProcessor.from_pretrained(model_id)
messages = [
[
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What does the image show?"}
]
}
],
]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=text, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=25)
print(processor.decode(output[0]))
```
output:
Traceback (most recent call last):
File "/home/user/reasoning/test_mllama.py", line 8, in <module>
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
File "/home/user/cache/conda/envs/openinstruct/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4124, in from_pretrained
config = cls._autoset_attn_implementation(
File "/home/user/cache/conda/envs/openinstruct/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1586, in _autoset_attn_implementation
cls._check_and_enable_flash_attn_2(
File "/home/user/cache/conda/envs/openinstruct/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1707, in _check_and_enable_flash_attn_2
raise ValueError(
ValueError: MllamaForConditionalGeneration does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new
### Expected behavior
Support flash attention 2 | bug | low | Critical |
2,782,353,194 | electron | typo in code-signing.md | while going through electronjs.org in https://www.electronjs.org/docs/latest/tutorial/code-signing
there is a typo in the page code signing being codesigning it should have a space
 | documentation :notebook:,bug :beetle: | low | Minor |
2,782,372,814 | rust | Tracking issue for release notes of #134300: remove long-deprecated no-op attributes no_start and crate_id |
This issue tracks the release notes text for #134300.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Compatibility Notes
- [Remove long-deprecated no-op attributes `no_start` and `crate_id`](https://github.com/rust-lang/rust/pull/134300)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @RalfJung, @chenyukang -- origin issue/PR authors and assignees for starting to draft text
| A-attributes,T-lang,T-compiler,relnotes,relnotes-tracking-issue | low | Minor |
2,782,378,589 | react | Question: Async React Reconciler | Apologies for using the bug template, but I have a question regarding React Reconciler. I understand that this is not intended for questions, and I appreciate your understanding.
I'm building a custom React reconciler using the `react-reconciler` package. I'm wondering whether it is possible to have an async reconciler. I need to dynamically import a module in the _createInstance_ method of HostConfig based on the _type_ parameter. A minimal implementation example could be:
```typescript
// reconciler.ts
async createInstance(type, props, rootContainer, hostContext, internalHandle){
const Module = await import(`my-module-path/${type}`);
const instance = new Module();
return instance;
}
getPublicInstance(instance){
return instance; // this is a Promise
}
```
The main issue is that this way I'm returning a Promise as the ref, and then I have to handle it like this:
```typescript
// Component.tsx
useEffect(() => {
ref.current.then((instance) => {
// access here to instance
})
}, [ref])
```
Mine is not a primary renderer, so as a workaround I thought of traversing the Fiber tree (generated by the React reconciler) to collect all the tag names (JSX intrinsic elements) and then dynamically importing them before starting the secondary rendering process (`createContainer` and `updateContainer`). Unfortunately this is not entirely correct, because the root Fiber node does not encompass all components of the tree. For example, in the following scenario, only one of the box and the sphere will be included:
```typescript
condition ? <box /> : <sphere />
```
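Another direction would be to keep `createInstance` synchronous and return a placeholder that is filled in once the import resolves. A rough sketch (the `impl` field and the `Module.default` shape are assumptions on my part):
```typescript
// Sketch only: synchronous placeholder pattern for an async-loaded instance.
const hostConfig = {
  createInstance(type: string, props: Record<string, unknown>) {
    const instance = { type, props, impl: null as unknown };
    import(`my-module-path/${type}`).then((Module) => {
      // Swap in the real implementation once the module has loaded.
      instance.impl = new Module.default();
    });
    return instance; // refs stay plain objects instead of Promises
  },
  getPublicInstance(instance: unknown) {
    return instance;
  },
  // ...other HostConfig methods omitted
};
```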
Are there other ways to think about this problem?
---
React (react package) version: 18.3.1
React reconciler (react-reconciler package): 0.29.2
| Status: Unconfirmed,Resolution: Needs More Information | low | Critical |
2,782,382,519 | PowerToys | The PowerToys Preview pane cannot preview files properly, you need to close the format in the preview | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
File Explorer: Preview Pane
### Steps to reproduce
Open File Explorer normally; previewing a txt file shows that the file cannot be previewed. But after turning off support for that format in the file preview settings, the file can be previewed normally.
### ✔️ Expected Behavior
The file is previewed normally and its content is displayed.
### ❌ Actual Behavior
This file cannot be previewed
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,782,385,781 | material-ui | [RFC] Start a small collaboration with RJSF to enhance the Material UI theme they have created? | ### What's the problem?
The [RJSF library](https://github.com/rjsf-team/react-jsonschema-form) has a community-maintained Material UI theme. The library is actively maintained and used in many projects.
However, some widgets do not properly use Material UI theming. There is the `data-url` widget and the date & time ones: `date`, `date-time`, and `time`. Documentation [here](https://rjsf-team.github.io/react-jsonschema-form/docs/usage/widgets/).
I'm sure there are developers here who had to build their custom widgets for the RJSF Material UI theme to conform to Material Design properly. Even sharing these examples would be helpful for those who don't yet have their solutions.
I decided to start this RFC after noticing that, besides the date & time widgets, the `data-url` (file) widget is also missing proper Material UI integration. Creating this widget does not seem like a huge endeavor.
@oliviertassinari What do you think? Could the Material UI team consider contributing to the RJSF Material UI theme at some point?
Even just documentation examples (either on our side or theirs) would greatly help. I opened an issue regarding this over at rjsf-team/react-jsonschema-form#4447.
The Material UI theme for RJSF was, at least for me, a deciding factor in adopting Material UI overall in my project.
As a taste of what this would take, here's an issue I opened where I made a custom Material UI password widget for RJSF, with an eye icon to show/hide the input: rjsf-team/react-jsonschema-form#4274
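For readers who haven't written one, here is a minimal sketch of what a custom Material UI file widget could look like (the widget body, the `MuiFileWidget` name, and the bare-bones styling are my assumptions; the `widgets` override prop is RJSF's documented mechanism):
```tsx
import Form from "@rjsf/mui";
import validator from "@rjsf/validator-ajv8";
import type { RJSFSchema, WidgetProps } from "@rjsf/utils";

const schema: RJSFSchema = {
  type: "string",
  format: "data-url",
  title: "Attachment",
};

// Minimal file widget; real Material UI styling (Button, icons) is omitted.
function MuiFileWidget({ onChange }: WidgetProps) {
  return (
    <input
      type="file"
      onChange={(event) => {
        const file = event.target.files?.[0];
        if (!file) return;
        const reader = new FileReader();
        // RJSF represents data-url fields as data-URL strings.
        reader.onload = () => onChange(reader.result);
        reader.readAsDataURL(file);
      }}
    />
  );
}

export const Demo = () => (
  <Form
    schema={schema}
    validator={validator}
    widgets={{ FileWidget: MuiFileWidget }}
  />
);
```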
### What are the requirements?
_No response_
### What are our options?
_No response_
### Proposed solution
_No response_
### Resources and benchmarks
_No response_
**Search keywords**: | waiting for 👍,ready to take,RFC | low | Minor |
2,782,387,115 | langchain | get_buffer_string() use deprecated keys | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.tracers import ConsoleCallbackHandler
from langchain_core.runnables import RunnableConfig
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage
from pydantic import SecretStr
@tool
def get_temperature_date(location: str, date: str, unit: str = "celsius"):
"""Get temperature at a location and date.
Args:
location: The location to get the temperature for, in the format "City, State, Country".
date: The date to get the temperature for, in the format "Year-Month-Day".
unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])
Returns:
the temperature, the location, the date and the unit in a dict
"""
return {
"temperature": 25.9,
"location": location,
"date": date,
"unit": unit,
}
messages = [
SystemMessage(
"You are Qwen, created by Alibaba Cloud. You are a helpful assistant.\n\nCurrent Date: 2025-01-11"
),
HumanMessage("What's the temperature in San Francisco now? How about tomorrow?"),
]
config = RunnableConfig(callbacks=[ConsoleCallbackHandler()])
model = ChatOpenAI(
model="Qwen/Qwen2.5-14B-Instruct",
base_url="http://localhost:8192/v1",
api_key=SecretStr("EMPTY"),
)
llm_with_tool = model.bind_tools([get_temperature_date])
res = llm_with_tool.invoke(messages)
print(res.tool_calls)
messages.append(res)
for tool_call in res.tool_calls:
tool_msg = get_temperature_date.invoke(tool_call)
messages.append(tool_msg)
res = llm_with_tool.invoke(messages, config=config)
print(res)
```
### Error Message and Stack Trace (if applicable)
After checking the output of ConsoleCallbackHandler, I find:
```
[llm/start] [llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.\n\nCurrent Date: 2025-01-11\nHuman: What's the temperature in San Francisco now? How about tomorrow?\nAI: To provide you with the current and tomorrow's temperatures in San Francisco, I need to know today's date and then calculate tomorrow's date based on that. However, since you did not specify today's date, I will assume today is the current date of my last update, which might not be the actual current date. For the sake of this query, let's assume today is 2025-01-11. Therefore, tomorrow would be 2025-01-12.\n\nLet's get the temperature for both dates.\n\nTool: {\"temperature\": 25.9, \"location\": \"San Francisco, California, USA\", \"date\": \"2025-01-11\", \"unit\": \"fahrenheit\"}\nTool: {\"temperature\": 25.9, \"location\": \"San Francisco, California, USA\", \"date\": \"2025-01-12\", \"unit\": \"fahrenheit\"}"
]
}
```
Note that there is no information about tool use in the AI message.
### Description
I ran into a problem when using `ConsoleCallbackHandler` with chat models.
The exact problem is that when it prints the LLM input, it omits the parts related to tool use. After reading the source code of LangChain, I found the reason. The implementation of [`get_buffer_string()`](https://github.com/langchain-ai/langchain/blob/bbc3e3b2cfc20381b212f8fcb463cab56946ab0c/libs/core/langchain_core/messages/utils.py#L82) still uses the key `'function_call'`. However, OpenAI has deprecated it in favor of `'tool_calls'`.
This is the code snippet of `get_buffer_string()`:
```python
def get_buffer_string(
messages: Sequence[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI"
) -> str:
"""Convert a sequence of Messages to strings and concatenate them into one string.
# ...
if isinstance(m, AIMessage) and "function_call" in m.additional_kwargs: # <------------ NOTE HERE!!!
message += f"{m.additional_kwargs['function_call']}"
string_messages.append(message)
# ...
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langsmith: 0.2.10
> langchain_openai: 0.2.14
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> openai: 1.59.6
> orjson: 3.10.14
> packaging: 24.1
> pydantic: 2.7.4
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
| 🤖:bug | low | Critical |
2,782,424,628 | react-native | undefined is not an object (evaluating 'window.location.href') | ### Description
I am getting an app crash on launch on both platforms, Android and iOS. If I run the app in debug mode it works fine, but as soon as it's taken out of debug mode the app crashes.
I am not getting a clear log pointing to a specific library or syntax; I only get the error below.
**Error** - undefined is not an object (evaluating 'window.location.href')
**version** - "react-native": "0.72.6",
### Steps to reproduce
1. Install the node modules.
2. Install the pods.
3. Run the app.
4. After installation, the app crashes before launch.
### React Native Version
0.72.6
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.7.1
CPU: (10) arm64 Apple M2 Pro
Memory: 635.34 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.12.0
path: ~/.nvm/versions/node/v20.12.0/bin/node
Yarn: Not Found
npm:
version: 10.5.0
path: ~/.nvm/versions/node/v20.12.0/bin/npm
Watchman:
version: 2024.04.15.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2023.2 AI-232.10300.40.2321.11668458
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 2.7.8
path: /Users/sandeep.mahajan/.rvm/rubies/ruby-2.7.8/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 11.3.7
wanted: 11.3.7
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.72.6
wanted: 0.72.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: false
newArchEnabled: false
```
### Stacktrace or Logs
```text
ERROR TypeError: undefined is not an object (evaluating 'window.location.href')
WARN Module BiometricModule requires main queue setup since it overrides `init` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
WARN Module RCTBraintreeModule requires main queue setup since it overrides `init` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
WARN Module AppAlternateIcon requires main queue setup since it overrides `init` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
WARN Module RNDataCollector requires main queue setup since it overrides `init` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
WARN Module RNNetPerformComponent requires main queue setup since it overrides `init` but doesn't implement `requiresMainQueueSetup`. In a future release React Native will default to initializing all native modules on a background thread unless explicitly opted-out of.
LOG Running "VfMobileapp" with {"rootTag":1,"initialProps":{}}
ERROR Invariant Violation: "VfMobileapp" has not been registered. This can happen if:
* Metro (the local dev server) is run from the wrong folder. Check if Metro is running, stop it and restart it in the current project.
* A module failed to load due to an error and `AppRegistry.registerComponent` wasn't called.
```
### Reproducer
https://github.com/aws-amplify/amplify-js/issues/4708
### Screenshots and Videos

| Needs: Repro,Needs: Attention,Type: Unsupported Version | low | Critical |
2,782,432,917 | PowerToys | A number of mistranslation in Korean text | ### Microsoft PowerToys version
0.87.1
### Utility with translation issue
General
### 🌐 Language affected
Korean
### ❌ Actual phrase(s)
1. Microsoft PowerToys는 **강력한 사용자**가 생산성을 높이기 위해 Windows 환경을 조정하고 간소화할 수 있는 유틸리티 세트입니다. **Microsoft 및 PowerToys 💗커뮤니티에서** 만들었습니다. (Microsoft PowerToys is a set of utilities for power users to tune and streamline their Windows experience for greater productivity. Made with 💗 by Microsoft and the PowerToys community.) *(General settings)*
2. **관리자 자격**으로 PowerToys 다시 시작 (Restart PowerToys as administrator) *(General settings)*
3. **특성** (Attribution) *(recurring phrase in a number of tool settings)*
4. **Awake 만들기에서 Den Delimarsky의 작업** (Den Delimarsky's work on creating Awake) *(Awake attribution)*
5. Rooler에서 **영감을 받습니다.** (Inspired by Rooler) *(Screen Ruler attribution)*
6. **빠른 강조**에 대한 자세한 정보 (Learn more about Quick Accent) *(Quick Accent settings)*


### ✔️ Expected phrase(s)
1. Microsoft PowerToys는 **고급 사용자**가 생산성을 높이기 위해 Windows 환경을 조정하고 간소화할 수 있는 유틸리티 세트입니다. **Microsoft 및 PowerToys 커뮤니티에서 💗을 담아** 만들었습니다.
2. **관리자 권한**으로 PowerToys 다시 시작
3. **기여자** (lit. *contributors*)
4. **Awake 개발에 Den Delimarsky님이 참여** (lit. *Den Delimarsky has participated on developing Awake*)
5. Rooler에서 **착안**
6. **빠른 악센트**에 대한 자세한 정보
### ℹ Why is the current translation wrong
1. First sentence is inconsistent with the identical Welcome to PowerToys text; additionally, the phrase "power user" makes little sense if translated literally to "강력한 사용자" (lit. *strong user*). Second sentence was mistranslated to mean "Made by Microsoft and the PowerToys 💗community".
2. Inconsistent with the rest of Windows interface, which consistently translates "as administrator" to "관리자 권한으로".
3. The Korean word "특성" only means characteristic/property/quality and can not be used for acknowledgement of authorship. "기여자" (or "기여" (lit. *contributions*)) is one of the closest term to *attribution* I can think of.
4. Translated sentence is very unnatural and awkward in Korean.
5. "받습니다" is considered present tense and easily considered awkward in this context. "영감을 받음" would be another good candidate, but "착안" (uses noun rather than inflected verb) seems more consistent and idiomatic.
6. Inconsistent with the rest of PowerToys interface, which consistently translates *Quick Accent* to "빠른 악센트". Additionally, "강조" means emphasis and can not be used to refer to diacritics. | Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation | low | Minor |
2,782,463,769 | rust | E0499 mentions "previous iteration of the loop" in proc macro code without any loops. | ### Code
```Rust
use quote::quote;
#[proc_macro]
pub fn my_macro(_: proc_macro::TokenStream) -> proc_macro::TokenStream {
quote! {
struct Thing<'a>(&'a mut i32);
impl Drop for Thing<'_> {
fn drop(&mut self) {}
}
#[allow(unused)]
fn test() {
let mut foo = 1;
if let Some(thing) = Some(Thing(&mut foo)) {
} else {
let x = &mut foo;
}
}
}
.into()
}
```
### Current output
```Shell
error[E0499]: cannot borrow `foo` as mutable more than once at a time
--> src/lib.rs:1:1
|
1 | bar::my_macro!();
| ^^^^^^^^^^^^^^^-
| | |
| | ... and the first borrow might be used here, when that temporary is dropped and runs the destructor for type `Option<Thing<'_>>`
| `foo` was mutably borrowed here in the previous iteration of the loop
| a temporary with access to the first borrow is created here ...
|
= note: this error originates in the macro `bar::my_macro` (in Nightly builds, run with -Z macro-backtrace for more info)
For more information about this error, try `rustc --explain E0499`.
error: could not compile `foo` (lib) due to 1 previous error
```
### Desired output
```Shell
Something not mentioning loops
```
### Rationale and extra context
I was poking around to see how the rust 2024 "`if let` temporary scope" changes interact with different spans from proc macros, and I discovered this issue.
The error output above occurs when the macro is called from another crate with no arguments. Both the calling crate and the proc macro crate use edition 2021.
Full code for reproducing the issue: [foo.zip](https://github.com/user-attachments/files/18389368/foo.zip)
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.86.0-nightly (eb54a5083 2025-01-11)
binary: rustc
commit-hash: eb54a50837ad4bcc9842924f27e7287ca66e294c
commit-date: 2025-01-11
host: aarch64-apple-darwin
release: 1.86.0-nightly
LLVM version: 19.1.6
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,782,466,846 | rust | Wrong type suggested in `overflowing_bin_hex` | ### Code
```Rust
fn main() {
_ = 0x8FFF_FFFF_FFFF_FFFEu32;
}
```
### Current output
```Shell
error: literal out of range for `i32`
--> src/main.rs:2:9
|
2 | _ = 0x8FFF_FFFF_FFFF_FFFE;
| ^^^^^^^^^^^^^^^^^^^^^
|
= note: the literal `0x8FFF_FFFF_FFFF_FFFE` (decimal `10376293541461622782`) does not fit into the type `i32` and will become `-2i32`
= help: consider using the type `i128` instead
= note: `#[deny(overflowing_literals)]` on by default
help: to use as a negative number (decimal `-2`), consider using the type `u32` for the literal and cast it to `i32`
|
2 | _ = 0x8FFF_FFFF_FFFF_FFFEu32 as i32;
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
### Desired output
```Shell
error: literal out of range for `i32`
--> src/main.rs:2:9
|
2 | _ = 0x8FFF_FFFF_FFFF_FFFE;
| ^^^^^^^^^^^^^^^^^^^^^
|
= note: the literal `0x8FFF_FFFF_FFFF_FFFE` (decimal `10376293541461622782`) does not fit into the type `i32` and will become `-2i32`
= help: consider using the type `i128` instead
= note: `#[deny(overflowing_literals)]` on by default
help: to use as a negative number (decimal `-2`), consider using the type `u64` for the literal and cast it to `i32`
|
2 | _ = 0x8FFF_FFFF_FFFF_FFFEu64 as i32;
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
### Rationale and extra context
Found this when working on #135249. The wrong type is suggested for overflowing bin/hex literals when suggesting a cast back to the current type.
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
### Anything else?
_No response_
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"11happy"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-diagnostics,T-compiler | low | Critical |
2,782,468,199 | puppeteer | [Bug]: getting started seems buggy | ### Minimal, reproducible example
```TypeScript
// index.js
import puppeteer from "puppeteer";
(async () => {
// Launch the browser and open a new blank page
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
// Navigate the page to a URL
await page.goto("https://developer.chrome.com/");
// Set screen size
await page.setViewport({ width: 1080, height: 1024 });
// Type into search box
await page.type(".devsite-search-field", "automate beyond recorder");
// Wait and click on first result
const searchResultSelector = ".devsite-result-item-link";
await page.waitForSelector(searchResultSelector);
await page.click(searchResultSelector);
// Locate the full title with a unique string
const textSelector = await page.waitForSelector(
"text/Customize and automate"
);
const fullTitle = await textSelector?.evaluate((el) => el.textContent);
// Print the full title
console.log('The title of this blog post is "%s".', fullTitle);
await browser.close();
})();
```
### Background
trying to get started lol
### Expectation
The getting-started example should work without debugging.
### Reality
I tried `node index.js`; `page.type` was not working, so `page.waitForSelector` timed out.
But when I used debug mode and added a breakpoint at each step, it works lol, pretty strange.
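A workaround sketch that seems to avoid this kind of race (assuming the timing theory above is right; not verified in every environment): explicitly wait for the search field before typing.
```typescript
// Wait for the search box to be attached and visible before interacting,
// so page.type() doesn't race the page's initial rendering.
await page.waitForSelector(".devsite-search-field", { visible: true });
await page.click(".devsite-search-field");
await page.type(".devsite-search-field", "automate beyond recorder");
```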
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
24.0.0
### Node version
18.12.1
### Package manager
npm
### Package manager version
8.19.2
### Operating system
Windows | bug,confirmed,documentation,P3 | low | Critical |