id (int64, 393k-2.82B) | repo (string, 68 classes) | title (string, 1-936 chars) | body (string, 0-256k chars, nullable) | labels (string, 2-508 chars) | priority (string, 3 classes) | severity (string, 3 classes) |
---|---|---|---|---|---|---|
2,792,682,164 | excalidraw | feature: fuzzy search scene text | I find myself increasingly adding rough notes to scenes, and with an increasing number of scenes it would be useful for the global search feature to fuzzy search on scene text. | Excalidraw+ | low | Minor |
2,792,687,795 | rust | Using multi-threading while building rustc from source | I am trying to build rustc from source. I have a 64-core RISC-V machine. Currently, I am using `./x.py build && ./x.py install` to build from source. But it takes about 4 hours and 36 minutes, which is pretty slow. I am using a MILK-V Pioneer Box computer with 128 GB of RAM.
Is there a way to use multithreading so that the build utilizes all cores (or the specified number of cores)? | T-bootstrap,C-discussion | low | Major |
2,792,702,313 | vscode | Format on save is aborted when interacting with the editor | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
## Steps to Reproduce
1. Install a formatter that takes a significant amount of time for large files (for example [YAPF](https://marketplace.visualstudio.com/items?itemName=eeyore.yapf) for Python). Note while this might require the installation of an extension, the bug is still located within the code of VS Code. With a large enough file, you can probably reproduce this with VS Code's built-in formatters as well.
2. Set `"editor.formatOnSave": true` in the settings of VS Code (see the settings sketch after these steps).
3. Open a large file so that formatting takes long enough (for YAPF, any Python file with more than 1000 lines will do).
4. Make some changes to the file so that it is unformatted.
5. Save.
6. Wait until the `Saving '[...]': Running Code Actions and Formatters...` message is displayed in the status bar and the `Running '[...]' Formatter` notification is shown.
7. Before the formatter finishes and VS Code saves the formatted file, do any of the following:
* Click anywhere in the editor so that the position of the caret is changed.
* Press any of the four arrow keys.
* Change the active file in the editor by clicking on another editor tab.
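For reference, the settings from steps 1-2 as a settings-file sketch (a minimal sketch; the per-language formatter mapping and the `eeyore.yapf` formatter ID are assumptions that only apply if you reproduce with the YAPF extension):
```jsonc
// .vscode/settings.json (sketch)
{
  // step 2: run code actions and formatters on every save
  "editor.formatOnSave": true,
  // only needed when reproducing with the YAPF extension from step 1 (assumed extension ID)
  "[python]": {
    "editor.defaultFormatter": "eeyore.yapf"
  }
}
```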
## Expected Behavior
The file is formatted and saved as soon as the formatter finishes.
## Actual Behavior
The file is neither formatted nor saved, even if waiting long enough. This is also indicated by the status bar message and notification immediately vanishing when performing any of the actions in step 7 above.
## Additional Information
- VS Code Version: 1.96.3
- OS Version: Windows 11
- This issue was originally included in #238044, but has been moved out here to separate the feature request from the bug report. | bug,formatting | low | Critical |
2,792,712,413 | go | cmd/link: unexpected R_386_GOT32 relocation | ### Go version
go version go1.23.4 linux/386
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='386'
GOBIN=''
GOCACHE='/builddir/.cache/go-build'
GOENV='/builddir/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='386'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/builddir/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/builddir/go'
GOPRIVATE=''
GOPROXY='direct'
GOROOT='/usr/lib/golang'
GOSUMDB='off'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/usr/lib/golang/pkg/tool/linux_386'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/builddir/.config/go/telemetry'
GCCGO='gccgo'
GO386='sse2'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m32 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3544270386=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Compile the following program using GCC 15.0.1.
```
sh-5.2# gcc --version
gcc (GCC) 15.0.1 20250114 (Red Hat 15.0.1-0)
sh-5.2# cat test.go
package main
/*
extern void crosscall2 (void (*fn) (void *, int), void *, int);
extern void _cgo_panic (void *, int);
extern void _cgo_allocate (void *, int);
void
callPanic (void)
{
  struct { const char *p; } a;
  a.p = "panic from C";
  crosscall2 (_cgo_panic, &a, sizeof a);
  *(int*) 1 = 1;
}
*/
import "C"
func main() {
C.callPanic()
}
sh-5.2# CGO_CFLAGS="-O2 -m32 -fpic" go build -ldflags "-linkmode internal" test.go
```
This issue is also present when running `go tool dist test -run=^cmd/cgo/internal/test:internal$` using GCC 15.0.1.
### What did you see happen?
The Go linker encounters an unexpected relocation in GCC emitted object code.
```
sh-5.2# CGO_CFLAGS="-O2 -m32 -fpic" go build -ldflags "-linkmode internal" test.go
# command-line-arguments
main(.text): unexpected GOT reloc for non-dynamic symbol _cgo_panic
main(.text): unsupported dynamic relocation for symbol _cgo_panic (type=259 (R_386_GOT32) stype=1 (STEXT))
```
The linker is expecting to find the instruction `pushl _cgo_panic@GOT(%ebx)`, but instead encounters `pushl _cgo_panic@GOT(%eax)`. According to gcc devs, [this is valid (though suboptimal) code that the Go linker should handle](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118497).
### What did you expect to see?
The example program should build and link correctly. | NeedsInvestigation,compiler/runtime | low | Critical |
2,792,805,877 | langchain | UnstructuredPDFLoader - No module named 'pdf2image' | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
# main.py
from langchain_community.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("example.pdf")
docs = loader.load()
```
```bash
pip install -U langchain-community unstructured
python3 main.py
```
### Error Message and Stack Trace (if applicable)
```
File "/Users/.../env/lib/python3.13/site-packages/unstructured/partition/pdf.py", line 25, in <module> import pdf2image
ModuleNotFoundError: No module named 'pdf2image'
```
### Description
I follow this [guide on UnstructuredPDFLoader](https://python.langchain.com/docs/integrations/document_loaders/unstructured_pdfloader/).
I am trying to use `UnstructuredPDFLoader` but `.load()` throws the `ModuleNotFoundError: No module named 'pdf2image'` error.
Expected: Running `python3 main.py` should work.
Actual: `ModuleNotFoundError: No module named 'pdf2image'`
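For context, `unstructured`'s PDF partitioner imports `pdf2image` at module import time (see the traceback above), and the base `unstructured` install does not pull it in. A possible workaround (an assumption on my side, not a confirmed fix for the underlying packaging/documentation issue) is to install the optional PDF dependencies explicitly:
```bash
# sketch of a possible workaround, assuming unstructured's "pdf" extra covers pdf2image
pip install "unstructured[pdf]"
# or install just the missing module
pip install pdf2image
```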
### System Info
Python: 3.13.0
langchain-community==0.3.14
unstructured==0.11.8 | 🤖:bug | low | Critical |
2,792,814,382 | godot | Crash after AnimationMixer: couldn't resolve track: warning | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - GLES3 (Compatibility) - GeForce MX250 - Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz (8 Threads)
### Issue description
Under some specific conditions, the editor crashes, which may be related to this warning:
WARNING: AnimationMixer: '', couldn't resolve track: ''. This warning can be disabled in Project Settings.
at: AnimationMixer::_update_caches (scene\animation\animation_mixer.cpp:668)
After some testing, my guess is: after triggering this warning in some way, switching to a scene without an AnimationPlayer and then running the game triggers the crash.
The crash stacktrace:
```
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.3.stable.mono.official (77dcf97d8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] HashMap<StringName,AnimationMixer::AnimationData,HashMapHasherDefault,HashMapComparatorDefault<StringName>,DefaultTypedAllocator<HashMapElement<StringName,AnimationMixer::AnimationData> > >::_lookup_pos (D:\Data\Godot\Code\SpaceEngine\core\templates\hash_map.h:100)
[1] HashMap<StringName,AnimationMixer::AnimationData,HashMapHasherDefault,HashMapComparatorDefault<StringName>,DefaultTypedAllocator<HashMapElement<StringName,AnimationMixer::AnimationData> > >::has (D:\Data\Godot\Code\SpaceEngine\core\templates\hash_map.h:311)
[2] AnimationMixer::get_animation (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_mixer.cpp:392)
[3] AnimationPlayerEditor::_current_animation_changed (D:\Data\Godot\Code\SpaceEngine\editor\plugins\animation_player_editor_plugin.cpp:1336)
[4] call_with_variant_args_helper<AnimationPlayerEditor,String const &,0> (D:\Data\Godot\Code\SpaceEngine\core\variant\binder_common.h:303)
[5] call_with_variant_args<AnimationPlayerEditor,String const &> (D:\Data\Godot\Code\SpaceEngine\core\variant\binder_common.h:418)
[6] CallableCustomMethodPointer<AnimationPlayerEditor,String const &>::call (D:\Data\Godot\Code\SpaceEngine\core\object\callable_method_pointer.h:104)
[7] Callable::callp (D:\Data\Godot\Code\SpaceEngine\core\variant\callable.cpp:58)
[8] Object::emit_signalp (D:\Data\Godot\Code\SpaceEngine\core\object\object.cpp:1188)
[9] Node::emit_signalp (D:\Data\Godot\Code\SpaceEngine\scene\main\node.cpp:3868)
[10] Object::emit_signal<StringName> (D:\Data\Godot\Code\SpaceEngine\core\object\object.h:936)
[11] AnimationPlayer::set_assigned_animation (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_player.cpp:557)
[12] AnimationMixer::_blend_process (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_mixer.cpp:1693)
[13] AnimationMixer::_process_animation (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_mixer.cpp:943)
[14] AnimationPlayer::seek_internal (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_player.cpp:611)
[15] AnimationPlayer::seek (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_player.cpp:617)
[16] AnimationMixer::reset (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_mixer.cpp:2108)
[17] AnimationMixer::apply_reset (D:\Data\Godot\Code\SpaceEngine\scene\animation\animation_mixer.cpp:2140)
[18] _reset_animation_mixers (D:\Data\Godot\Code\SpaceEngine\editor\editor_node.cpp:1810)
[19] EditorNode::_save_scene (D:\Data\Godot\Code\SpaceEngine\editor\editor_node.cpp:1841)
[20] EditorNode::_save_all_scenes (D:\Data\Godot\Code\SpaceEngine\editor\editor_node.cpp:1989)
[21] EditorNode::_menu_option_confirm (D:\Data\Godot\Code\SpaceEngine\editor\editor_node.cpp:2781)
[22] EditorNode::_menu_option (D:\Data\Godot\Code\SpaceEngine\editor\editor_node.cpp:1424)
[23] EditorNode::try_autosave (D:\Data\Godot\Code\SpaceEngine\editor\editor_node.cpp:1946)
[24] EditorRunBar::_run_scene (D:\Data\Godot\Code\SpaceEngine\editor\gui\editor_run_bar.cpp:231)
[25] EditorRunBar::play_current_scene (D:\Data\Godot\Code\SpaceEngine\editor\gui\editor_run_bar.cpp:286)
[26] EditorRunBar::_play_current_pressed (D:\Data\Godot\Code\SpaceEngine\editor\gui\editor_run_bar.cpp:145)
[27] call_with_variant_args_helper<EditorRunBar> (D:\Data\Godot\Code\SpaceEngine\core\variant\binder_common.h:308)
[28] call_with_variant_args<EditorRunBar> (D:\Data\Godot\Code\SpaceEngine\core\variant\binder_common.h:418)
[29] CallableCustomMethodPointer<EditorRunBar>::call (D:\Data\Godot\Code\SpaceEngine\core\object\callable_method_pointer.h:104)
[30] Callable::callp (D:\Data\Godot\Code\SpaceEngine\core\variant\callable.cpp:58)
[31] Object::emit_signalp (D:\Data\Godot\Code\SpaceEngine\core\object\object.cpp:1188)
[32] Node::emit_signalp (D:\Data\Godot\Code\SpaceEngine\scene\main\node.cpp:3868)
[33] Object::emit_signal<> (D:\Data\Godot\Code\SpaceEngine\core\object\object.h:936)
[34] BaseButton::_pressed (D:\Data\Godot\Code\SpaceEngine\scene\gui\base_button.cpp:138)
[35] BaseButton::on_action_event (D:\Data\Godot\Code\SpaceEngine\scene\gui\base_button.cpp:169)
[36] BaseButton::gui_input (D:\Data\Godot\Code\SpaceEngine\scene\gui\base_button.cpp:69)
[37] Control::_call_gui_input (D:\Data\Godot\Code\SpaceEngine\scene\gui\control.cpp:1590)
[38] Viewport::_gui_call_input (D:\Data\Godot\Code\SpaceEngine\scene\main\viewport.cpp:1570)
[39] Viewport::_gui_input_event (D:\Data\Godot\Code\SpaceEngine\scene\main\viewport.cpp:1836)
[40] Viewport::push_input (D:\Data\Godot\Code\SpaceEngine\scene\main\viewport.cpp:3259)
[41] Window::_window_input (D:\Data\Godot\Code\SpaceEngine\scene\main\window.cpp:1682)
[42] call_with_variant_args_helper<Window,Ref<InputEvent> const &,0> (D:\Data\Godot\Code\SpaceEngine\core\variant\binder_common.h:303)
[43] call_with_variant_args<Window,Ref<InputEvent> const &> (D:\Data\Godot\Code\SpaceEngine\core\variant\binder_common.h:418)
[44] CallableCustomMethodPointer<Window,Ref<InputEvent> const &>::call (D:\Data\Godot\Code\SpaceEngine\core\object\callable_method_pointer.h:104)
[45] Callable::callp (D:\Data\Godot\Code\SpaceEngine\core\variant\callable.cpp:58)
[46] Callable::call<Ref<InputEvent> > (D:\Data\Godot\Code\SpaceEngine\core\variant\variant.h:875)
[47] DisplayServerWindows::_dispatch_input_event (D:\Data\Godot\Code\SpaceEngine\platform\windows\display_server_windows.cpp:3557)
[48] DisplayServerWindows::_dispatch_input_events (D:\Data\Godot\Code\SpaceEngine\platform\windows\display_server_windows.cpp:3528)
[49] Input::_parse_input_event_impl (D:\Data\Godot\Code\SpaceEngine\core\input\input.cpp:775)
[50] Input::flush_buffered_events (D:\Data\Godot\Code\SpaceEngine\core\input\input.cpp:1056)
[51] DisplayServerWindows::process_events (D:\Data\Godot\Code\SpaceEngine\platform\windows\display_server_windows.cpp:3025)
[52] OS_Windows::run (D:\Data\Godot\Code\SpaceEngine\platform\windows\os_windows.cpp:1675)
[53] widechar_main (D:\Data\Godot\Code\SpaceEngine\platform\windows\godot_windows.cpp:181)
[54] _main (D:\Data\Godot\Code\SpaceEngine\platform\windows\godot_windows.cpp:206)
[55] main (D:\Data\Godot\Code\SpaceEngine\platform\windows\godot_windows.cpp:220)
[56] WinMain (D:\Data\Godot\Code\SpaceEngine\platform\windows\godot_windows.cpp:234)
[57] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[58] <couldn't map PC to fn name>
-- END OF BACKTRACE --
```
### Steps to reproduce
1. Open MRP
2. Open Scene res://levelInfoNode/levelInfoNode.tscn
3. Select unlockAnimPlayer Node and confirm it is opened in the AnimationEditor
4. Switch to scene a.tscn (it's an empty scene)
5. Run project in any way and get the crash
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/18440239/mrp.zip) | bug,topic:editor,needs testing,crash,topic:animation | low | Critical |
2,792,872,130 | pytorch | RuntimeError "global alloc not supported yet" when using TorchScript optimization. | ### 🐛 Describe the bug
Calling the `forward` method on TorchScript models can, under some specific conditions, raise a `RuntimeError` with message: "Global alloc not supported yet.".
I think this is linked to an old issue: https://github.com/pytorch/pytorch/issues/69078, however, I managed to consistently reproduce this error.
The code that reproduces the bug is quite long and needs some explanation. It was taken from a very complex 3D pose estimation model with lots of pre and post processing whose code was largely generated by `torch.fx`. In the post-processing part, there is a class `ComputeCentroids` that computes the center of mass of queried instances in a batch of segmentation masks, and this class appears to be causing the error.
The `ComputeCentroids` was tested with both cpu and gpu devices, with and without `torch::jit::script` and it appears to be working as desired, even with empty input queries.
The error is raised only if **all** the three following conditions apply:
- The inference device is set to "cuda".
- The torch jit optimization is turned on, as suggested by @OlofHarrysson in https://github.com/pytorch/pytorch/issues/69078
- The first inference is performed with an empty query. Maybe something goes wrong in the torchscript profiling executor?
```python
from typing import cast

import torch
import torch.nn as nn
from torch import Tensor


class ComputeCentroids(nn.Module):
    def forward(self, b_idx: Tensor, i_idx: Tensor, segm: Tensor) -> Tensor:
        dev = segm.device
        B, H, W = segm.shape
        N = int(segm.max()) + 1
        hh, ww = torch.arange(H, device=dev), torch.arange(W, device=dev)
        i, j = torch.meshgrid(hh, ww, indexing="ij")
        xy = torch.stack([j, i], dim=-1).float().view(-1, 2).repeat(B, 1)
        segm_f = (segm.view(B, -1) + torch.arange(B, device=dev)[:, None] * N).view(-1)
        eq = segm_f[:, None] == (i_idx + b_idx * N)[None]
        c_xy = (eq[..., None] * xy[:, None]).sum(0) / eq[..., None].sum(0)
        c_xy.nan_to_num_(-1.0)
        return c_xy


def zero_init() -> dict[str, Tensor]:
    b_idx = torch.zeros(0, device=dev)
    x_idx = torch.zeros(0, device=dev)
    segm = torch.zeros(1, 256, 256, device=dev)
    return {"b_idx": b_idx, "i_idx": x_idx, "segm": segm}


def random_init() -> dict[str, Tensor]:
    b_idx = torch.tensor([0, 0, 0, 0], device=dev)
    i_idx = torch.tensor([0, 1, 2, 3], device=dev)
    segm = torch.randint(0, 10, (1, 256, 256), device=dev)
    return {"b_idx": b_idx, "i_idx": i_idx, "segm": segm}


if __name__ == "__main__":
    compute_cxy = cast(ComputeCentroids, torch.jit.script(ComputeCentroids()))

    # Bug can be reproduced if all the following conditions are verified:
    # - Device is set to "cuda".
    # - Optimized execution is activated.
    # - First inference pass is the result of zero_init().
    dev = "cuda:0"  # "cpu"
    optimize = True  # False
    zero_init_first = True  # False

    with torch.jit.optimized_execution(optimize):  # type: ignore
        if zero_init_first:
            compute_cxy(**zero_init())
        for _ in range(5):
            compute_cxy(**random_init())
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-2ubuntu1~20.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.15 (main, Sep 7 2024, 18:35:33) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.127.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Stepping: 13
CPU MHz: 3600.000
CPU max MHz: 5000,0000
CPU min MHz: 800,0000
BogoMIPS: 7200.00
Virtualization: VT-x
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] torch==2.5.1+cu118
[pip3] torchaudio==2.5.1+cu118
[pip3] torchvision==0.20.1+cu118
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | high priority,triage review,oncall: jit | low | Critical |
2,792,910,816 | PowerToys | Eternal process PowerToys.Settings.exe | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
The PowerToys.Settings.exe process is constantly running in the background and is taking up RAM resources. Can this be removed?
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,792,935,652 | storybook | [Bug]: Sidebar item hover background colour makes text unreadable | ### Describe the bug
The sidebar item hover background colour variable "--tree-node-background-hover" is updated on hover and focus to a lighter or darker shade of the theme's secondary colour, but the font colour is not updated to accommodate this change, meaning that the text is now unreadable when items are hovered or focused.

### Reproduction link
https://stackblitz.com/edit/github-djnsqscp?file=package.json
### Reproduction steps
1. Update from 8.4.7 to 8.5.0+
2. Set colorSecondary for the Storybook theme to a dark colour, e.g. #1a115a (see the theme sketch after these steps)
3. Run storybook
4. Hover/focus on unselected items within the side menu; they are now almost unreadable
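A minimal sketch of the theme setup from step 2, assuming the usual manager configuration (the file name and `base` value are assumptions; the Stackblitz reproduction may wire this up differently):
```ts
// .storybook/manager.ts (sketch)
import { addons } from '@storybook/manager-api';
import { create } from '@storybook/theming';

addons.setConfig({
  theme: create({
    base: 'light',
    // dark secondary colour that makes hovered/focused sidebar items unreadable
    colorSecondary: '#1a115a',
  }),
});
```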
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.2
CPU: (12) arm64 Apple M3 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.18.0 - /usr/local/bin/node
npm: 10.8.2 - /usr/local/bin/npm <----- active
Browsers:
Chrome: 132.0.6834.84
Edge: 131.0.2903.147
Safari: 18.2
npmPackages:
@storybook/addon-a11y: ^8.4.7 => 8.5.0
@storybook/addon-docs: ^8.4.7 => 8.5.0
@storybook/addon-essentials: ^8.4.7 => 8.5.0
@storybook/addon-links: ^8.4.7 => 8.5.0
@storybook/addon-themes: ^8.4.7 => 8.5.0
@storybook/blocks: ^8.4.7 => 8.5.0
@storybook/html: ^8.4.7 => 8.5.0
@storybook/html-webpack5: ^8.4.7 => 8.5.0
@storybook/manager-api: ^8.4.7 => 8.5.0
@storybook/preview-api: ^8.4.7 => 8.5.0
@storybook/theming: ^8.4.7 => 8.5.0
storybook: ^8.4.7 => 8.5.0
storybook-addon-pseudo-states: ^4.0.2 => 4.0.2
storybook-addon-tag-badges: ^1.4.0 => 1.4.0
storybook-addon-turbo-build: ^2.0.1 => 2.0.1
storybook-dark-mode: ^4.0.2 => 4.0.2
storybook-preset-inline-svg: ^1.0.1 => 1.0.1
```
### Additional context
_No response_ | bug,ui,accessibility,upgrade:8.5 | low | Critical |
2,792,937,361 | PowerToys | Text extractor - Handling non langual text | ### Description of the new feature / enhancement
Being able to extract text from an image that is not made of words.
### Scenario when this would be used?
For instance, links, codes, usernames, things like this. These days we have more and more text that is not made of words, so it would be useful to have this enhancement.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,792,953,184 | node | http2: confusion with how aborted ClientHttp2Stream is reported | Take this sample code:
```js
const h2 = require('http2');
const server = h2.createServer();
server.on('stream', (stream) => {
  stream.session.destroy();
});
server.listen(0, () => {
  const client = h2.connect(`http://localhost:${server.address().port}`);
  client.on('close', () => {
    server.close();
  });
  const clientStream = client.request();
  clientStream.on('aborted', () => {
    // Never called
    console.log('aborted');
  });
  clientStream.on('close', () => {
    // `rstCode` === 8 (NGHTTP2_CANCEL)
    console.log('close', clientStream.rstCode);
  });
  clientStream.on('error', (err) => {
    // Never called
    throw err;
  });
});
```
in which `clientStream` is aborted, via the `stream.session.destroy()` call, before any response is generated. I would expect in this case `clientStream` to emit an `error`, but by looking at the [code](https://github.com/nodesource/nsolid/blob/61bf0cd143936e1650527903bbee0926945c8ba2/lib/internal/http2/core.js#L2358-L2362), this specific case is explicitly not reporting the error because:
> // RST code 8 not emitted as an error as its used by clients to signify
// abort and is already covered by aborted event, also allows more
// seamless compatibility with http1
but, as demonstrated in the example, the `aborted` event is not reported. I don't know exactly what the http1 compatibility means in this context.
Also, if we look at the documentation of the [`Http2Stream` `close` event](https://nodejs.org/api/http2.html#event-close_1), it seems to contradict the following statement:
> The HTTP/2 error code used when closing the stream can be retrieved using the http2stream.rstCode property. If the code is any value other than NGHTTP2_NO_ERROR (0), an 'error' event will have also been emitted.
Any thoughts on whether this is a bug in the code, the documentation, or both?
Thanks! | http2 | low | Critical |
2,792,957,527 | angular | TS-994005: Angular compiler option "extendedDiagnostics.checks" has an unknown check: "skipHydrationNotStatic". | ### Which @angular/* package(s) are the source of the bug?
compiler-cli
### Is this a regression?
Yes
### Description
Extended diagnostic [skipHydrationNotStatic](https://angular.dev/extended-diagnostics/NG8108) does not seem to be available.
### Please provide a link to a minimal reproduction of the bug
[Stackblitz](https://stackblitz.com/edit/stackblitz-starters-kezx2y?file=tsconfig.json)
Run `ng serve` to see the error.
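For reference, a sketch of the kind of configuration that triggers the error, assuming the check is enabled via `angularCompilerOptions.extendedDiagnostics` in tsconfig.json (the severity value is an assumption):
```json
{
  "angularCompilerOptions": {
    "extendedDiagnostics": {
      "checks": {
        "skipHydrationNotStatic": "error"
      }
    }
  }
}
```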
### Please provide the exception or error you saw
```true
Application bundle generation failed. [39.291 seconds]
✘ [ERROR] TS-994005: Angular compiler option "extendedDiagnostics.checks" has an unknown check: "skipHydrationNotStatic".
Allowed check names are:
controlFlowPreventingContentProjection
unusedStandaloneImports
invalidBananaInBox
nullishCoalescingNotNullable
optionalChainNotNullable
missingControlFlowDirective
textAttributeNotBinding
missingNgForOfLet
suffixNotSupported
interpolatedSignalNotInvoked
uninvokedFunctionInEventBinding
unusedLetDeclaration [plugin angular-compiler]
Watch mode enabled. Watching for file changes...
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
Angular CLI: 19.1.1
Node: 20.18.1
Package Manager: yarn 1.22.22
OS: win32 x64
Angular: 19.1.1
... animations, cli, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... platform-server, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1901.1
@angular-devkit/build-angular 19.1.1
@angular-devkit/core 19.1.1
@angular-devkit/schematics 19.1.1
@angular/cdk 19.0.5
@schematics/angular 19.1.1
rxjs 7.8.1
typescript 5.7.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: compiler | low | Critical |
2,792,990,843 | flutter | 3.29 beta branch info | ### 3.29 beta branch info ###
| Repo | Branch | Commit |
| --- | --- | --- |
| flutter/flutter | [flutter-3.29-candidate.0](https://github.com/flutter/flutter/tree/flutter-3.29-candidate.0) | https://github.com/flutter/flutter/commit/010c8a806bccad64ab9972286b85dd24ca98441f |
| dart-lang/sdk | [3.7.0-323.0.dev](https://github.com/dart-lang/sdk/tree/3.7.0-323.0.dev) | https://github.com/dart-lang/sdk/commit/f6ed8d7df6bfdf6fb08b38dd93c2ee1eba476b5a |
| google/skia | as needed | https://github.com/google/skia/commit/e7b8d078851fd505475fe74359e31a421e6968ea |
CC @christopherfujino @eyebrowsoffire @reidbaker | team-release | low | Minor |
2,793,088,328 | react | [DevTools Bug]: incorrect error on react-hooks/exhaustive-deps | ### Website or app
http://localhost:3008/
### Repro steps
```
React Hook useMemo has unnecessary dependencies: 'store1.list' and 'store2.list'. Either exclude them or remove the dependency array. Outer scope values like 'domains.content' aren't valid dependencies because mutating them doesn't re-render the component react-hooks/exhaustive-deps
```
For an unknown reason, it appears in the following code.
```
const list = useMemo(() => _.map(store1.list, item => ({
  ...item,
  warnings: _.map(
    _.filter(store2.list, { name: item.name }),
    ({ info }) => ({ label: `Important ${info}` })
  )
})), [store1.list, store2.list])
```
And I am sure there is a bug because it doesn't appear in the following code:
```
const list1 = store1.list
const list2 = store2.list
const list = useMemo(() => _.map(list1, item => ({
  ...item,
  warnings: _.map(
    _.filter(list2, { name: item.name }),
    ({ info }) => ({ label: `Important ${info}` })
  )
})), [list1, list2])
```
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
```text
```
### Error component stack (automated)
```text
```
### GitHub query string (automated)
```text
``` | Type: Bug,Status: Unconfirmed | medium | Critical |
2,793,095,740 | pytorch | Support something similar to export dynamic dims for torch.compile with fullgraph=True | ### 🚀 The feature, motivation and pitch
In export, instead of using `mark_dynamic` on input Tensors, you can directly specify which inputs are dynamic or not:
```
dynamic_shapes = (({0: Dim("dim")}, None, None, None),)
torch.export.export(
    Slice(),
    inp,
    dynamic_shapes=dynamic_shapes,
)
```
We should support an analogous concept for torch.compile(fullgraph=True)
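For comparison, a minimal sketch of how this is expressed with `torch.compile` today, where dynamism is attached to the input tensors via `mark_dynamic` instead of being declared alongside the compile call (the function and shapes below are illustrative only):
```python
import torch

@torch.compile(fullgraph=True)
def f(x):
    return x[:, :2]

inp = torch.randn(8, 4)
torch._dynamo.mark_dynamic(inp, 0)  # today: annotate the tensor itself; dim 0 is dynamic
f(inp)
```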
Note that we cannot easily support this when fullgraph=False, because a graph break inside the region will result in a bunch of intermediate tensors that won't have accurate dynamic/not dynamic annotations.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Minor |
2,793,167,092 | go | proposal: cmd/go: add `work` package pattern matching all packages in work modules | ### Proposal Details
I'm making this proposal to resolve #50745
This proposal is to add a new package meta-pattern to join the already existing `all`, `std`, `cmd`, and `tool` patterns. The work pattern would match all packages in the work (formerly called main) modules: either the single work module in module mode or the set of workspace modules in workspace mode.
This would allow for a single pattern to build or test all of those packages, similar to using `./...` at the root of a module to test a single module's packages.
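As an illustration, usage under this proposal might look like the following (hypothetical, since the pattern does not exist yet):
```sh
# hypothetical: matches every package in the work module(s), in module or workspace mode
go build work
go test work
```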
cc @hyangah @rsc @samthanawalla | Proposal,ToolProposal | low | Minor |
2,793,179,468 | next.js | Unexpected request close/abort after 5 minutes | ### Link to the code that reproduces this issue
https://github.com/eebirke/request-timeout-repro
### To Reproduce
1. Start the app (either dev or build & start).
2. Throttle network speed in dev tools to 1 Mbps or lower, or Fast 3G or slower.
3. Select a sufficiently large file (I'm testing with 110 MB, but 50 MB should be fine).
4. After 5-5.5 minutes the request fails (error is displayed in both the server console and in the app - fails with ECONNRESET on the server).
(Example adapted from a file streaming case where client streams files to the API, which in turn streams it into S3).
### Current vs. Expected behavior
It seems unexpected that there is a 5 minute hard-limit after which requests are stopped with no clear reason, with seemingly no way to override the behavior.
After a few days of debugging I figured out that the Node Server instance associated with the requests had default values for requestTimeout that are not directly configurable from what I can see.
After making this change to `start-server.js` in `node_modules/next/dist/server/lib` I could get the upload to run for an hour.
```diff
+const ONE_HOUR = 3600000;
 const server = selfSignedCertificate ? _https.default.createServer({
     key: _fs.default.readFileSync(selfSignedCertificate.key),
     cert: _fs.default.readFileSync(selfSignedCertificate.cert),
+    requestTimeout: ONE_HOUR
-}, requestListener) : _http.default.createServer(requestListener);
+}, requestListener) : _http.default.createServer({requestTimeout: ONE_HOUR}, requestListener);
```
I suppose a quick fix on our end is using the custom server setup and supplying our own node Server instance, though I am not entirely sure which of Next's features we'd lose out on in that case.
I could not find any reference to timeout issues in the docs.
I'm guessing that increasing the requestTimeout across all requests could create issues when you get many concurrent slow connections, so are there any other ways of doing this (except splitting the original upload into multiple, chunked upload requests)?
Should Next.js highlight these limitations, or should it ideally be configurable?
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:02:12 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6031
Available memory (MB): 36864
Available CPU cores: 14
Binaries:
Node: 22.6.0
npm: 10.8.2
Yarn: N/A
pnpm: 9.15.0
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
This happens on both Next 14 and 15, and across Node 18, 20 and 22.
Have tested this on Firefox, Chrome and Safari on MacOS, as well as Chrome on Windows.
Might be related to discussion thread on socket abort issues, but not sure https://github.com/vercel/next.js/discussions/60148 | Runtime | low | Critical |
2,793,182,715 | angular | i18n: RTL separated CSS files. | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
Hi Angular developers!
I hope this catches you well. Our organization is leveraging Angular to develop a large-scale web application. Choosing Angular was my own decision, as the framework is the most organized framework out there, without any bias (: .
You know how important it is for any organization to cut maintenance costs by making it easy for developers to quickly add new features and scale the software up, thanks to the Angular way of separating folders and providing a well-organized file structure.
In our case, we are developing multi-language software and our default language is Arabic, which is a uniquely styled language as you all know: it is RTL, not LTR.
We can easily support RTL, of course, by leveraging the built-in i18n or other third-party libraries, and this has been done by our fellow developers several times.
**The issue:**
When the user browses the application in RTL mode, we don't need to load the resources and CSS files of the LTR styles; skipping them is of course more performant, as we get rid of unnecessary resources, but there is currently no built-in mechanism to do this.
- Best appreciations and regards
Mohssine Khaloufi
### Proposed solution
**The suggested solution:**
We need a mechanism to add, within each component, a folder or a file called **"component-rtl-css.css"** that is loaded depending on the selected locale. This will indeed make the framework more performant and organized.
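Purely as an illustration of the idea (the `rtlStyleUrls` field below is hypothetical and does not exist in Angular today):
```ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-card',
  templateUrl: './card.component.html',
  styleUrls: ['./card.component.css'],
  // hypothetical: loaded instead of the LTR styles when the active locale is RTL
  rtlStyleUrls: ['./card.component-rtl-css.css'],
})
export class CardComponent {}
```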
### Alternatives considered
Instead of conditionally rendering the component.html file to check the current locale (which of course consumes memory and impacts performance), why don't we just build it in by default, so we don't have to worry about which resource to load every time we switch locales? | area: i18n | low | Major |
2,793,197,522 | godot | Firefox 134 causes webexports with bigger pck files to be unable to cache correctly | ### Tested versions
This was tested on godot version 4.3.stable and 4.4.dev7.
It was also tested on firefox 133 and 134
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3050 (NVIDIA; 32.0.15.6094) - AMD Ryzen 5 5500 (12 threads)
### Issue description
After the Firefox 134 update, many of our users have had problems using Firefox to play Godot games via web exports. It seems that a large pck file causes the Firefox cache to not work correctly, giving the following error: Couldn't load project data at path ".". Is the .pck file missing? If you've renamed the executable, the associated .pck file should also be renamed to match the executable's name (without the extension)

I can get it to work when disabling the cache on our live games but with the MRP I can barely get it to work after refreshing multiple times.
### Steps to reproduce
1. Create a empty project
2. Fill the client folder with some big files so the pck size gets above 40 mb
3. Export the project or run it with remote debug using firefox 134
4. Refresh the page a couple of times
### Minimal reproduction project (MRP)
I could not upload my MRP since it was more than 10 MB. Instead I have uploaded my MRP with only 1 bloat file called BEEG.png. To reproduce the error you will have to duplicate this file around 20 times.
[Firefox_test.zip](https://github.com/user-attachments/files/18442457/Firefox_test.zip) | bug,platform:web,confirmed,topic:thirdparty,topic:export | low | Critical |
2,793,233,511 | kubernetes | configure Node `ephemeral-storage` Allocatable and Capacity via kubelet pods path | ### What would you like to be added?
Ability to set the path for ephemeral storage (e.g. `/var/lib/kubelet/pods`) as opposed to relying on the root directory of the kubelet.
By default, Capacity and Allocatable for ephemeral-storage in a standard Kubernetes environment are sourced from the filesystem mounted at the kubelet `rootDir` (/var/lib/kubelet), which is the default location for the kubelet directory.
In my use-case, I mount `/var/lib/kubelet` to the root volume, but I mount `/var/lib/kubelet/pods` to a different drive.
```
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 39G 4.5G 35G 12% /
/dev/mapper/nvme-containerd_state 1.7T 11G 1.7T 1% /var/lib/containerd
/dev/mapper/nvme-kubelet_pods 1.7T 77G 1.6T 5% /var/lib/kubelet/pods
```
As you can see above, I allocate 1.7T to be used for ephemeral storage, but given that the kubelet uses the root dir (i.e. `/var/lib/kubelet`), it defaults to the size of the root drive (see below).
```
Capacity:
ephemeral-storage: 40462524Ki
Allocatable:
ephemeral-storage: 38365372Ki
```
### Why is this needed?
Because the path for ephemeral storage is `/var/lib/kubelet/pods`, and not the root drive for kubelet. | area/kubelet,sig/node,kind/feature,triage/accepted | low | Major |
2,793,237,420 | vscode | Add grouping to Suggestions Popup by Interface + Class and Property + Method | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Basically add grouping that visually splits methods and properties introduced (or overridden) by a class or an interface.
Something like:
```
----- Contexts -----
add
forEach @
----- Map -----
clear
delete
entries
forEach (dimmed)
get
has
keys
set
size
```
- The interface and classes would be slightly different colors with their own icons to distinguish them (`(C)` and `(I)`).
- If a property/method is overridden then under the super class group it would become dimmed. In the sub class group it would indicate it was overridden. It can be both dimmed and indicated since there can be many super classes.
Related changes:
- Add a better support displaying JS Symbols, currently it just shows `[Symbol]` without ANY details. This is related since it can be also grouped listing all the symbols.
Desired changes:
- Add sorting by methods and properties, so the properties go first and then methods or vice versa.
---
I propose this as an option; it should let the user:
- enable displaying both classes+interfaces or only one
- enable displaying overrides
- enable detailed JS Symbols
- enable sorting
I think there could be a little button right next to `Show Less` button to trigger options or a master switch to toggle this entire feature.
---
It seems it can't be done with an extension but not sure. | feature-request,suggest | low | Minor |
2,793,274,737 | deno | Allow compiling tests | It doesn't seem like it's currently possible to use Deno's compile command to compile tests into an executable. I use Deno to write small test suites, and it would be useful to be able to compile them and share them with other developers that aren't necessarily familiar with Deno.
Alternatively, if there was an API for Deno's runner that allowed programmers to write scripts that run tests instead of the `deno test` command, that script could be compiled and achieve the same goal.
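To illustrate, the request boils down to something like this (hypothetical; `deno compile` does not currently turn a test suite into a runnable test executable):
```sh
deno compile --output my-tests ./tests/main_test.ts   # hypothetical: bundle the test suite
./my-tests                                            # hand this to someone without Deno installed
```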
Apologies if this is already possible and I missed something, and thanks for all the work you all do on Deno! | needs info,compile | low | Minor |
2,793,304,339 | kubernetes | Improve visibility / observability for long lived certificates close to expiry | ### What would you like to be added?
If a certificate is within a week of expiring, and it is a long-lived certificate (end time - start time > 3 months), then it is probably human-managed, not automatically generated, and will probably need human intervention to track down and fix.
Currently, the cert is logged in the apiserver logs after it expires, but only with its serial number (SN), which is hard to track down. It should log the CN too. It should also warn in the log beforehand, as described above.
### Why is this needed?
We have alerts as part of prometheus (stock kube-prometheus-stack) already that alert when a cert is going to expire soon. But it's really hard to then track it down.
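For reference, one way to pull the client certificate out of such a kubeconfig and inspect its subject, serial number and expiry (a sketch; assumes `openssl` is available and the kubeconfig has a single user entry):
```sh
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject -serial -enddate
```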
`kubeadm kubeconfig user` creates such certificates, and they are not very easy to inspect. | kind/feature,needs-sig,needs-triage | low | Major |
2,793,305,486 | godot | [TRACKER] Movie Maker mode issues | This is a tracker for issues related to [Movie Maker mode](https://docs.godotengine.org/en/latest/tutorials/animation/creating_movies.html).
```[tasklist]
### Bugs
- [ ] https://github.com/godotengine/godot/issues/52821
- [ ] https://github.com/godotengine/godot/issues/69965
- [ ] https://github.com/godotengine/godot/issues/71254
- [ ] https://github.com/godotengine/godot/issues/76625
- [ ] https://github.com/godotengine/godot/issues/79221
- [ ] https://github.com/godotengine/godot/issues/98033
```
```[tasklist]
### Enhancements
- [ ] https://github.com/godotengine/godot-proposals/issues/4730
- [ ] https://github.com/godotengine/godot-proposals/issues/4751
- [ ] https://github.com/godotengine/godot-proposals/issues/4752
- [ ] https://github.com/godotengine/godot-proposals/issues/5387
- [ ] https://github.com/godotengine/godot-proposals/issues/5790
- [ ] https://github.com/godotengine/godot-proposals/issues/5809
- [ ] https://github.com/godotengine/godot-proposals/issues/5989
- [ ] https://github.com/godotengine/godot-proposals/issues/6041
- [ ] https://github.com/godotengine/godot-proposals/issues/6390
- [ ] https://github.com/godotengine/godot-proposals/issues/6409
- [ ] https://github.com/godotengine/godot-proposals/issues/6698
- [ ] https://github.com/godotengine/godot-proposals/issues/7594
- [ ] https://github.com/godotengine/godot-proposals/issues/10505
- [ ] https://github.com/godotengine/godot-proposals/issues/11541
```
```[tasklist]
### Pull requests
- [ ] https://github.com/godotengine/godot/pull/75148
- [ ] https://github.com/godotengine/godot-proposals/issues/3726
``` | bug,enhancement,topic:core,tracker | low | Critical |
2,793,317,534 | rust | Missing lifetimes needed in `impl` item don't have enough help for newcomer devs | If you write like
```rust
impl IntoIterator for &S {
type Item = &T;
type IntoIter = std::collections::btree_map::Values<i32, T>;
fn into_iter(self) -> Self::IntoIter {
todo!()
}
}
```
[you get the following errors](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=7b192052b2b1ac8574eb89130b318108)
```
error: in the trait associated type is declared without lifetime parameters, so using a borrowed type for them requires that lifetime to come from the implemented type
--> src/main.rs:5:17
|
5 | type Item = &T;
| ^ this lifetime must come from the implemented type
error[E0106]: missing lifetime specifier
--> src/main.rs:6:56
|
6 | type IntoIter = std::collections::btree_map::Values<i32, T>;
| ^ expected named lifetime parameter
|
help: consider introducing a named lifetime parameter
|
6 | type IntoIter<'a> = std::collections::btree_map::Values<'a, i32, T>;
| ++++ +++
```
ignoring the first one for now, [if you follow the suggestion you get](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=b9eb24218645c31dccc1d5ad1b0939c7)
```
error[E0195]: lifetime parameters or bounds on type `IntoIter` do not match the trait declaration
--> src/main.rs:6:18
|
6 | type IntoIter<'a> = std::collections::btree_map::Values<'a, i32, T>;
| ^^^^ lifetimes do not match type in trait
```
Both of these cases should instead mention likely adding the lifetime to the `impl` *if* either the trait or the self type have *any* lifetimes, named or anon. We should mention that the `<'a>` should be removed to match the def, and point at the def if local to modify it to add it to the trait.
If we don't add the `<'a>` to the `type IntoIter`, [we get something closer to what we want](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=cbac1193e7340e33472dc6773b6d0db9):
```
error[E0261]: use of undeclared lifetime name `'a`
--> src/main.rs:6:57
|
6 | type IntoIter = std::collections::btree_map::Values<'a, i32, T>;
| ^^ undeclared lifetime
|
help: consider introducing lifetime `'a` here
|
6 | type IntoIter<'a> = std::collections::btree_map::Values<'a, i32, T>;
| ++++
help: consider introducing lifetime `'a` here
|
4 | impl<'a> IntoIterator for &S {
| ++++
```
We shouldn't be suggesting modifying `type IntoIter` if we can know that the trait definition didn't have named lifetimes.
If we add `impl<'a>`, then [we get a very terse output](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=6bed38a375e923b8b5a8f8272b6f0212):
```
error[E0207]: the lifetime parameter `'a` is not constrained by the impl trait, self type, or predicates
--> src/main.rs:4:6
|
4 | impl<'a> IntoIterator for &S {
| ^^ unconstrained lifetime parameter
```
This should look at the trait and self type and see if there is any anon lifetime at play, and suggest using the named lifetime there. In this case, `&'a S`.
In all of these cases we still get:
```
error: in the trait associated type is declared without lifetime parameters, so using a borrowed type for them requires that lifetime to come from the implemented type
--> src/main.rs:5:17
|
5 | type Item = &T;
| ^ this lifetime must come from the implemented type
```
When `impl` has lifetimes, the above should suggest using that lifetime.
Currently the documentation of [`E0261`](https://doc.rust-lang.org/error_codes/E0261.html) mentions a similar enough case, but [`E0207`](https://doc.rust-lang.org/error_codes/E0207.html) doesn't. We'd likely want to add a mention about lifetimes in the latter.
_Inspired by https://github.com/mainmatter/100-exercises-to-learn-rust/issues/245_
---
- [ ] on `type NoLt = TypeMissingLt;`, do not suggest `type NoLt<'a> = TypeMisingLt<'a>;` if the trait doesn't accept it
- [ ] on `type NoLt = TypeMissingLt;`, suggest adding a lifetime to the `impl` if the trait or self type have anon lifetimes
- [ ] on `type NoLt<'a> = TypeMissingLt<'a>;`, suggest removing the lifetime from the `type` if the `impl` has a named lifetime
- [ ] on `type NoLt = TypeMissingLt<'a>;`, suggest adding `'a` to the `impl` if the trait or self type have anon lifetimes
- [ ] on `impl<'a> IntoIterator for &S`, suggest changing the self type to `&'a S`
- [ ] on `type NoLt = &T;`, if `impl` has a lifetime suggest using that
- [ ] on `type NoLt = &T;`, suggest using a named lifetime from the `impl` if present
- [ ] on `type NoLt = &T;`, suggest adding `'a` to the `impl` if the trait or self type have anon lifetimes
- [x] expand the documentation of `E0207` to mention these cases | A-diagnostics,A-lifetimes,T-compiler,A-suggestion-diagnostics,D-newcomer-roadblock,D-invalid-suggestion,D-terse | low | Critical |
2,793,325,834 | flutter | Flutter_lints, unawaited_futures rule does not trigger in switch statements | ### Steps to reproduce
We are using the flutter_lints package in our project. Recently we noticed that, when we have mixed Future and sync return values in a switch statement, Dart "sets" the return value as sync. However, the futures are then not marked as "unawaited" by the linter rule, as would be expected.
1. Turn on the unawaited_futures rule
2. Create a method with a Future return type, which returns the result of a switch statement (=> notation).
3. Make it so that at least one case evaluates to a Future, and one does not.
### Expected results
It is expected that the Future in the switch statement triggers the unawaited_futures linter rule. I guess it is implicitly unawaited by the switch. This is a potential bug, and the linter rule can help to identify it.
### Actual results
The linter does not mark the future as unawaited.
### Code sample
<details open><summary>Code sample</summary>
```dart
class Test {
  void doSomethingSync() {}

  Future<void> doSomethingAsync() async {
    Future<void>.value(); // linter rule unawaited_futures is triggered by this
  }

  void onlySyncSwitch(TestOption option) {
    final result = switch (option) {
      TestOption.optionA => doSomethingSync(),
      TestOption.optionB => doSomethingSync(),
    };
    return result; // result is void
  }

  Future<void> onlyAsyncSwitch(TestOption option) {
    final result = switch (option) {
      TestOption.optionA => doSomethingAsync(),
      TestOption.optionB => doSomethingAsync(),
    };
    return result; // result is Future<void>
  }

  Future<void> mixedSyncAndAsyncSwitch(TestOption option) async {
    final result = switch (option) {
      TestOption.optionA => doSomethingSync(),
      TestOption.optionB =>
          doSomethingAsync(), // expect that unawaited_futures triggers, but it does not.
    };
    return result; // result is void
  }
}

enum TestOption {
  optionA,
  optionB,
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.24.2, on macOS 14.6.1 23G93 darwin-arm64, locale de-DE)
! Flutter version 3.24.2 on channel [user-branch] at /Users/user/sdk/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] IntelliJ IDEA Community Edition (version 2023.3.3)
[✓] IntelliJ IDEA Community Edition (version 2021.1)
[✓] VS Code (version 1.90.2)
```
</details>
| waiting for customer response,in triage | low | Critical |
2,793,360,505 | opencv | Building cpp apps with opencv-python package | ### Describe the feature and motivation
The OpenCV package available in some Linux distros can be outdated, and to build a C++ executable with OpenCV one needs to build OpenCV from source (that's the default installation path described in the [docs](https://docs.opencv.org/4.x/d7/d9f/tutorial_linux_install.html)). At the same time, the `opencv-python` package can be utilized as a source of OpenCV binaries to link against.
```cmake
find_package(Python3 REQUIRED)
execute_process(
    COMMAND ${Python3_EXECUTABLE} -c "from cv2.build_utils import get_cmake_path; print(get_cmake_path(), end='')"
    OUTPUT_VARIABLE OpenCV_DIR_PY
    ERROR_QUIET
)
find_package(OpenCV REQUIRED
    COMPONENTS core
    HINTS "${OpenCV_DIR_PY}")
```
In that case `pip install opencv-python` is enough to start building cpp apps with OpenCV.
### Additional context
_No response_ | feature | low | Critical |
2,793,366,607 | pytorch | [compiled autograd] It would be nice if the compiiled autograd graph was actually runnable | It won't be after the stack in https://github.com/pytorch/pytorch/pull/143296
cc @chauhang @penguinwu @xmfan @yf225 | triaged,oncall: pt2,module: compiled autograd | low | Minor |
2,793,368,657 | tauri | [feat] Remove invalid autostart during the uninstallation process | ### Describe the problem
On Windows 11, after enabling autostart via `tauri-plugin-autostart` and then uninstalling the program by running uninstall.exe, an invalid startup entry is still left behind. Could it be removed as part of the uninstallation process?
### Describe the solution you'd like
When autostart is enabled, a new entry is created under the registry key `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run`, and after uninstallation this entry is still kept. Perhaps the uninstaller could delete this entry as well?
### Alternatives considered
_No response_
### Additional context
_No response_ | good first issue,type: feature request,platform: Windows,scope: bundler | low | Minor |
2,793,374,935 | rust | running `rustdoc-js-std` tests does not rebuild rustdoc even if it is dirty | reproduction steps:
1. `./x test -v tests/rustdoc-js-std/return-based-sort.js` (or any other unit test in the folder, probably)
2. make a change to `search.js` (one that impacts the result of the test will make it more obvious)
3. `./x test -v tests/rustdoc-js-std/return-based-sort.js` (again)
rustdoc will not get rebuilt.
i've run into this a few times before but i only now figured out what was happening (i think?). | A-testsuite,T-bootstrap,C-bug | low | Major |
2,793,395,056 | tensorflow | Urgent help needed. | Hello, hope anybody help me please!
I am going to train a model for information extraction from photo.
I am using Windows 10.
And am using python 3.10.10.
Actually I used python 3.12 but I couldn't install tensorflow-io package with python 3.12. Not sure reason yet.
So I use python 3.10 now.
I have prepared dataset and downloaded predefined model from Model Zoo.
Now I need to train.
Steps that I did:
1. git clone https://github.com/tensorflow/models.git
cd models
2. pip install tensorflow-text (It will automatically install tensorflow==2.10.1)
pip install tensorflow-io
3. cd official
pip install -r requirements.txt
4. cd ..
cd research
In the object_detection/packages/tf2/setup.py file, change the version of tf-models-official to 2.10.1 or 2.10.0, then run:
python object_detection/packages/tf2/setup.py install
5. $env:PYTHONPATH = "$(Get-Location):$(Get-Location)\slim" in Powershell or
set PYTHONPATH=%cd%;%cd%\slim in Cmd
6. protoc object_detection/protos/*.proto --python_out=.
protoc and protobuf version are same. 3.19.6
7. Run the training command:
python object_detection/model_main_tf2.py --pipeline_config_path=../../model/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config
--model_dir=../../model/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/checkpoint --num_train_steps=5000 --
sample_1_of_n_eval_examples=1 --alsologtostderr
I have not succeeded here.
For now, the error below is what I get:
```
venv\lib\site-packages\tensorflow\python\framework\dtypes.py", line 34, in <module>
    _np_bfloat16 = _pywrap_bfloat16.TF_bfloat16_type()
TypeError: Unable to convert function return value to a Python type! The signature was
	() -> handle
```
I have tried many other Python versions and TensorFlow versions, but every time I ran into version conflict issues and bugs inside the packages.
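As a diagnostic sketch only (not a fix): since the failure happens while importing TensorFlow, printing the installed versions first can help spot a mismatch; this particular bfloat16 `TypeError` is often associated with a NumPy version that is too new for TF 2.10.
```python
# Diagnostic sketch: print interpreter and package versions before the
# failing TensorFlow import, so any mismatch is visible in one run.
import sys
print("python     :", sys.version)

import numpy
print("numpy      :", numpy.__version__)

import google.protobuf
print("protobuf   :", google.protobuf.__version__)

# TensorFlow last, because this is the import that currently raises.
import tensorflow as tf
print("tensorflow :", tf.__version__)
```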
I hope somebody can help me get this training process running. I need help urgently.
Thanks.
| stat:awaiting response,type:build/install,subtype:windows,TF 2.10 | low | Critical |
2,793,397,697 | flutter | flutter build appbundle fails due to failing Gradle task signReleaseBundle, but exits 0 | ### Steps to reproduce
1. Have a standard flutter project and try to build a release package, for example:
```shell
flutter build appbundle --build-number 0 --build-name 0.1.2 --release
```
2. Observe the Gradle task `app:signReleaseBundle` fail, for a common reason: not having your key.properties and jks file set up. Note that Gradle explicitly tells you it exits with exit code `1`.
3. Execute `echo $?` and see `0` instead of `1`
### Expected results
I expect a failing child-task exit-code to propagate to flutter. Due to this issue, flutter exits without error even though a child-task (in this case gradle) exits with a non-zero exit status. This causes all kinds of hell with build systems and CI environments.
### Actual results
The Gradle child task exits with a non-zero exit code, but `flutter build appbundle` exits zero.
### Code sample
This should be occurring with any Flutter project; I don't have specific code that causes it. Just not having the required key.properties and jks file for Gradle will cause the Gradle signReleaseBundle task to fail, which shows that flutter in turn obliviously returns 0.
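As a workaround sketch for CI (an assumption on my side, not documented Flutter behavior): because the exit code cannot be trusted here, the Gradle failure line from the log below can be grepped out of the captured output instead.
```shell
# Capture the build output and fail the job if Gradle reported a failure,
# regardless of the (incorrect) exit code returned by flutter itself.
flutter build appbundle --release 2>&1 | tee build.log
if grep -q "Gradle task .* failed with exit code" build.log; then
  echo "Gradle failed even though flutter exited 0" >&2
  exit 1
fi
```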
### Screenshots or Video
### Logs
```
[rubin@FRAME app]$ flutter build appbundle --build-number 0 --build-name 1.2.3 --release
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:signReleaseBundle'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.FinalizeBundleTask$BundleToolRunnable
> java.lang.NullPointerException (no error message)
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
BUILD FAILED in 3s
Running Gradle task 'bundleRelease'... 4.0s
Gradle task bundleRelease failed with exit code 1
```
```
[rubin@FRAME app]$ echo $?
0
```
### Flutter Doctor output
```
[rubin@FRAME app]$ flutter doctor -v
[✓] Flutter (Channel stable, 3.27.2, on Arch Linux 6.12.9-arch1-1, locale en_US.UTF-8)
• Flutter version 3.27.2 on channel stable at /home/rubin/.local/share/flutterup
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (3 days ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /opt/android
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /opt/android
• Java binary at: /opt/jetbrains-toolbox/apps/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• CHROME_EXECUTABLE = chromium
[✓] Linux toolchain - develop for Linux desktop
• clang version 19.1.6
• cmake version 3.31.4
• ninja version 1.12.1
• pkg-config version 2.3.0
[✓] Android Studio (version 2024.2)
• Android Studio at /opt/jetbrains-toolbox/apps/android-studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3)
• IntelliJ at /opt/jetbrains-toolbox/apps/intellij-idea-ultimate
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] Connected device (3 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
• Linux (desktop) • linux • linux-x64 • Arch Linux 6.12.9-arch1-1
• Chrome (web) • chrome • web-javascript • Chromium 131.0.6778.264 Arch Linux
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
| in triage | low | Critical |
2,793,424,120 | yt-dlp | [CWTV] ERROR: An extractor error has occurred. (caused by KeyError('title')) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
Related to issue https://github.com/yt-dlp/yt-dlp/issues/9935#issuecomment-2408612814. The extractor has stopped working in the last few days with a KeyError extractor error. Would it be possible to update it? Thank you.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
ERROR: An extractor error has occurred. (caused by KeyError('title')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
``` | geo-blocked,site-bug | low | Critical |
2,793,455,659 | PowerToys | Request: Composition guide | ### Description of the new feature / enhancement
A simple utility with a couple of handy features to aid in compositional guidance regardless of underlying content and unbound to a specific program. By default this would be a thirds grid, but ideally has customization and features to make it much more useful. Here are my initial suggested features for this tool request:
1. Basic toggleable overlay with settings for either all screens/windows or specific ones. Ability to customize lines drawn and potentially (bonus) control an overlay opacity and color for each quadrant. Ability to have it apply to bounds of screen or a set area with a dropdown for standard image ratios. Ability to create and name custom grids that shift things off standard default thirds.
2. A hotkey-able marquee box that can be used on its own or in conjunction with/on top of the standard grid. Once dragged, this could be repositioned and adjusted like any window, and minimized or closed in a similar fashion (hiding these options until the mouse hovers and a key is pressed).
3. A hotkey to toggle all grid visibility as set up on and off.
4. Drawn over the top but ability to have it not render into a screen capture video or screenshot if easily togglable.
5. Golden ratio spiral overlay options; global across the whole grid and per cell of the grid.
6. Ability to have different guide grids such as center point crosshair with a customizable shape around it (circle, square, star for example) that can be manipulated in size uniformly and non-uniformly to make ovals, rectangles, etc.
7. Ability to add V shape overlay lines that can also be manipulated, allowing them to be top to bottom, bottom to top, left or right side biased, and any other configuration as positioned and manipulated by a user.
### Scenario when this would be used?
For the many people doing creative visual work, this would be a great inbuilt feature to reference while working or check any content displayed on a screen against. Being on the OS instead of within a given tool, it could apply to anything. The base rule of thirds is a great place to start, but the customization can allow art direction to establish modified composition scenarios which can be shared with teams.
A video editor could full-screen a particular image, video, or YouTube content and examine its composition against a standard guide, or create a variation off of something particularly pleasing, then reference it later in their work or share it with peers. An art director could establish a unique composition bias for a project and both check work against it and share it with their team to guide their work.
There are likely other similar or as-yet-undiscovered ways this would be useful.
### Supporting information
A few example images of some standard composition guides that ideally could be both used directly and re-created with options within the utility.



 | Needs-Triage | low | Minor |
2,793,486,149 | langchain | ChatPerplexity: 'str' object has no attribute 'choices' | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import ChatPerplexity
from langchain_core.messages import HumanMessage

research_model = ChatPerplexity(
    model="llama-3.1-sonar-large-128k-online",
    temperature=0,
    # to not emit messages to the client
    disable_streaming=True,
)
research_prompt = 'How many stars are there?'
research_response = research_model.invoke([HumanMessage(content=research_prompt)])
```
### Error Message and Stack Trace (if applicable)
```
AttributeError("'str' object has no attribute 'choices'")

Traceback (most recent call last):
  File "/Users/nsviridenko/.local/share/virtualenvs/nce-agent-dIPyehe6/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
    self._generate_with_cache(
  File "/Users/nsviridenko/.local/share/virtualenvs/nce-agent-dIPyehe6/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "/Users/nsviridenko/.local/share/virtualenvs/nce-agent-dIPyehe6/lib/python3.11/site-packages/langchain_community/chat_models/perplexity.py", line 265, in _generate
    content=response.choices[0].message.content,
            ^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'choices'
```
### Description
When trying to use ChatPerplexity as above, it throws an error: in `_generate`, the response coming back is a plain string, so accessing `response.choices` fails with the AttributeError shown.
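A possible workaround sketch while this is broken (an assumption on my side, relying on Perplexity's OpenAI-compatible endpoint instead of the `ChatPerplexity` class):
```python
# Workaround sketch: talk to Perplexity through its OpenAI-compatible API.
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

research_model = ChatOpenAI(
    model="llama-3.1-sonar-large-128k-online",
    api_key="<PPLX_API_KEY>",  # Perplexity key, not an OpenAI key
    base_url="https://api.perplexity.ai",
    temperature=0,
)
print(research_model.invoke([HumanMessage(content="How many stars are there?")]).content)
```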
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 20.6.0: Tue Jun 21 20:50:28 PDT 2022; root:xnu-7195.141.32~1/RELEASE_X86_64
> Python Version: 3.11.5 (main, Sep 29 2024, 15:09:05) [Clang 12.0.5 (clang-1205.0.22.11)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.9
> langchain_community: 0.3.9
> langsmith: 0.1.147
> langchain_anthropic: 0.3.0
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.2
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.42.0
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.56
> langgraph-checkpoint: 2.0.9
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.3
> orjson: 3.10.13
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.27
> sse-starlette: 2.1.3
> starlette: 0.45.2
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.3 | 🤖:bug | low | Critical |
2,793,488,798 | deno | LSP: Investigate 'auto-imports from npm dependencies' hack for perf issues | https://github.com/nayeemrmn/deno/blob/9dbb99a83cb599028aef662b23e13faf2df15297/cli/lsp/tsc.rs#L4315-L4317
We use this to hide the fact that these paths are in a `/node_modules/` folder because tsc wasn't giving auto-import suggestions from them. Did this have a knock-on effect on performance? Could we only apply this to direct dependencies? How does node/tsc give these suggestions? | perf,lsp | low | Major |
2,793,492,932 | pytorch | DISABLED test_allow_implicit_sharing (__main__.TestQuantizePT2E) | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_allow_implicit_sharing&suite=TestQuantizePT2E&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35717971181).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_allow_implicit_sharing`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet @albanD | oncall: quantization,triaged,module: flaky-tests,module: macos,skipped | low | Critical |
2,793,499,658 | godot | get_rpm() on VehicleWheel3D is calculated incorrectly | ### Tested versions
- Reproducible in: 4.2 stable, 4.3 stable, 4.4.dev7
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.26100 - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (Intel Corporation; 31.0.101.2130) - Intel(R) Core(TM) i7-10610U CPU @ 1.80GHz (8 Threads)
### Issue description
When you call get_rpm() on a VehicleWheel3D while the steering angle is not equal to 0, it returns the wrong RPM. The only workaround I have found is to use trigonometry:
```gdscript
var corrected_rpm = wheel.get_rpm() / cos(wheel.steering)
```
### Steps to reproduce
- Create a VehicleBody3D
- Create a VehicleWheel3D as a child
- Spin up the wheel (engine_force or with friction against the ground)
- Turn the wheel with steering on the VehicleBody3D
- Call get_rpm() on the VehicleWheel3D which now gives the wrong rpm
### Minimal reproduction project (MRP)
[vehicle-bug.zip](https://github.com/user-attachments/files/18447812/vehicle-bug.zip) | bug,topic:physics,topic:3d | low | Critical |
2,793,512,777 | godot | "Manage Export Templates" causes crash | ### Tested versions
Godot v3.6 stable Flathub
### System information
Manjaro 6.6
### Issue description
Attempting to download export template via "Manage Export Templates" causes the editor to crash.
### Steps to reproduce
Open Godot, start a brand new project, and click on Manage Export Templates (either through the Help menu or the Project > Export menu); it crashes.
Flatpak output:
```
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v3.6.stable.flathub (de2f0f147c5b7eff2d0f6dbc35042a4173fd59be)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /usr/lib/x86_64-linux-gnu/libc.so.6(+0x41140) [0x7fe1628c6140] (??:0)
[2] /app/bin/godot-bin() [0x1948010] (??:0)
[3] /app/bin/godot-bin() [0x10cf443] (??:0)
[4] /app/bin/godot-bin() [0x1089f7c] (??:0)
[5] /app/bin/godot-bin() [0x50b7e5] (??:0)
[6] /app/bin/godot-bin() [0x283671d] (??:0)
[7] /app/bin/godot-bin() [0x283eb89] (??:0)
[8] /app/bin/godot-bin() [0x283f919] (??:0)
[9] /app/bin/godot-bin() [0x1ab92bd] (??:0)
[10] /app/bin/godot-bin() [0x1ae62c1] (??:0)
[11] /app/bin/godot-bin() [0x791d91] (??:0)
[12] /app/bin/godot-bin() [0x2836267] (??:0)
[13] /app/bin/godot-bin() [0x1989940] (??:0)
[14] /app/bin/godot-bin() [0x19ab480] (??:0)
[15] /app/bin/godot-bin() [0x19ad6f9] (??:0)
[16] /app/bin/godot-bin() [0x19ad9f9] (??:0)
[17] /app/bin/godot-bin() [0x791d91] (??:0)
[18] /app/bin/godot-bin() [0x283671d] (??:0)
[19] /app/bin/godot-bin() [0x2819122] (??:0)
[20] /app/bin/godot-bin() [0x197fccb] (??:0)
[21] /app/bin/godot-bin() [0x199369b] (??:0)
[22] /app/bin/godot-bin() [0x493e9a] (??:0)
[23] /app/bin/godot-bin() [0x4952dd] (??:0)
[24] /app/bin/godot-bin() [0x490c4f] (??:0)
[25] /app/bin/godot-bin() [0x455ab8] (??:0)
[26] /usr/lib/x86_64-linux-gnu/libc.so.6(+0x2a188) [0x7fe1628af188] (??:0)
[27] /usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x8b) [0x7fe1628af24b] (??:0)
[28] /app/bin/godot-bin() [0x45a5a5] (??:0)
-- END OF BACKTRACE --
================================================================
```
### Minimal reproduction project (MRP)
[New Game Project.zip](https://github.com/user-attachments/files/18444199/New.Game.Project.zip) | bug,topic:editor,crash | low | Critical |
2,793,515,988 | rust | clean up and unify logic used by `rustdoc-js` and `rustdoc-js-std` test suites. | I cannot find the code that actually runs rustdoc on the `rustdoc-js/*.rs` files, and for some unknown reason, that code is not actually getting run (leading to issues about `search-index.js` not being found).
This code is quite a mess, `rustdoc-js-std` seems to be mostly implemented in bootstrap, while `rustdoc-js` is mostly implemented in compiletest.
This discrepancy leads to an ever increasing number of discrepancies between the exact behavior of the two test suites. Rather than trying to patch these up one by one, I would rather we take a big picture approach.
| C-cleanup,T-rustdoc,T-bootstrap | low | Minor |
2,793,533,135 | pytorch | [torchbench] Missing meta function for aten::_cudnn_rnn_flatten_weight | ### 🚀 The feature, motivation and pitch
The torchbench export-aot-inductor run for the tts_angular model fails due to a missing meta function.
```
python benchmarks/dynamo/torchbench.py --only tts_angular --accuracy --no-translation-validation --inference --bfloat16 --export-aot-inductor --disable-cudagraphs --device cuda
```
```
NotImplementedError: aten::_cudnn_rnn_flatten_weight: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered. You may have run into this message while using an operator with PT2 compilation APIs (torch.compile/torch.export); in order to use this operator with those APIs you'll need to add a fake impl. Please see the following for next steps: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
TorchDynamo optimized model failed to run because of following error
fail_to_run
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu | triaged,oncall: pt2,pt2-pass-rate-regression | low | Critical |
2,793,557,806 | rust | Unused Argument Warnings in Intrinsics Without Body | Problem:
The compiler emits warnings about unused arguments in intrinsic functions without bodies (when `#[rustc_intrinsic]` is used). These arguments are inherently unused, but the warnings create unnecessary noise.
Proposed Solution:
Update the compiler to automatically suppress unused argument warnings for intrinsics without bodies. This eliminates the need for manual workarounds like prefixing variable names with _.
Issue reproduced here:
https://github.com/rust-lang/rust/pull/135333#discussion_r1918902988
| A-diagnostics,T-compiler,A-intrinsics | low | Critical |
2,793,562,634 | TypeScript | Design Meeting Notes, 1/10/2025 | # Make `lib` Replacement Opt-In by Default
https://github.com/microsoft/TypeScript/pull/60829
* Since TypeScript 4.5, we've supported loading `lib` files from `node_modules` if something is located in `@typescript/lib-*`.
* But this lib replacement is on for everyone by default, and that leads to us
* doing a whole bunch of module resolution all the time at project load going up the spine
* add file watchers for each of the directories that *don't* exist.
* This is a perf hit for everyone (and it's noisy to look at when diagnosing issues) so we want to make it opt-in.
* According to a rudimentary search, 352 files across all of GitHub use a custom `dom`. Limited impact?
* 2 major groups who originally asked for this?
* People who need to lock on an older `dom` file like `@types/web`.
* People who want a custom version of the built-in lib.
* How do we roll this out?
* Introduce this in 5.8, and then make it the default in 6.0.
| Design Notes | low | Minor |
2,793,571,525 | pytorch | [torchbench] stable_diffusion_unet compilation failure | ### 🐛 Describe the bug
```
python benchmarks/dynamo/torchbench.py --only stable_diffusion_unet --performance --cold-start-latency --inference --bfloat16 --export-aot-inductor --disable-cudagraphs --device cuda
```
```
cuda eval stable_diffusion_unet
ERROR:common:Backend dynamo failed in warmup()
Traceback (most recent call last):
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 3386, in warmup
fn(model, example_inputs)
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 1635, in export_aot_inductor
optimized = AOTInductorModelCache.load(model, example_inputs)
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 1596, in load
ep = torch.export.export(
File "/data/users/ivankobzarev/a/pytorch/torch/export/__init__.py", line 368, in export
return _export(
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1040, in wrapper
raise e
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1013, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 2064, in _export
return _export_for_training(
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1040, in wrapper
raise e
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1013, in wrapper
ep = fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1929, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1864, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1650, in _export_to_aten_ir_make_fx
gm, graph_signature = transform(_make_fx_helper)(
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1794, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1570, in _make_fx_helper
gm = make_fx(
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2200, in wrapped
return make_fx_tracer.trace(f, *args)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2138, in trace
return self._trace_inner(f, *args)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 2109, in _trace_inner
t = dispatch_trace(
File "/data/users/ivankobzarev/a/pytorch/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1142, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1698, in trace
res = super().trace(root, concrete_args)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1197, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "<string>", line 1, in <lambda>
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1473, in wrapped_fn
return tuple(flat_fn(*args))
File "/data/users/ivankobzarev/a/pytorch/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 879, in functional_call
out = mod(*args[params_len:], **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1768, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/export/_trace.py", line 1778, in forward
tree_out = mod(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1768, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1246, in forward
sample = self.mid_block(
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/experimental/proxy_tensor.py", line 1768, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 539, in call_module
ret_val = forward(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 884, in forward
for attn, resnet in zip(self.attentions, self.resnets[1:]):
File "/data/users/ivankobzarev/a/pytorch/torch/nn/modules/container.py", line 322, in __getitem__
return self.__class__(list(self._modules.values())[idx])
TypeError: _ModuleStackTracer.__init__.<locals>.AttrProxy.__init__() missing 1 required positional argument: 'path'
warmup_failed
```
### Error logs
_No response_
### Versions
master Jan 16
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | triaged,oncall: pt2,oncall: export,module: aotinductor,pt2-pass-rate-regression | low | Critical |
2,793,576,737 | PowerToys | "The REG file editor could not be opened" when clicking Edit in Registry Preview | ### Microsoft PowerToys version
0.87.1
### Installation method
WinGet
### Running as admin
No
### Area(s) with issue?
Registry Preview
### Steps to reproduce
1. In Registry Editor, export the registry key to a .REG file - for example, HKEY_CURRENT_USER\Control Panel\Desktop.
2. In PowerToys Settings, expand Advanced and click Registry Preview.
3. Toggle **Default app** to On.
4. Open the .REG file in Registry Preview.
5. Click Edit, and the error message will appear.
### ✔️ Expected Behavior
It is supposed to open in Notepad.
### ❌ Actual Behavior
A pop-up message says:
**Error**
The REG file editor could not be opened.
### Other Software
_No response_ | Issue-Bug,Priority-3,Needs-Triage,Product-Registry Preview | low | Critical |
2,793,581,967 | create-react-app | npx create-react-app conflicts with react 19 | ## Current system Versions
npm -v -->11.0.0
npx -v --> 11.0.0
### Run
npx create-react-app simple-app
### Error
```
Installing template dependencies using npm...
npm error code ERESOLVE
npm error ERESOLVE unable to resolve dependency tree
npm error
npm error While resolving: [email protected]
npm error Found: [email protected]
npm error node_modules/react
npm error react@"^19.0.0" from the root project
npm error
npm error Could not resolve dependency:
npm error peer react@"^18.0.0" from @testing-library/[email protected]
npm error node_modules/@testing-library/react
npm error @testing-library/react@"^13.0.0" from the root project
```
### Explanation
This is an issue with the newer React version 19 :D | needs triage,issue: bug report | medium | Critical |
2,793,616,596 | rust | WF check has error code `E0277` which doesn't say anything about the current error | ### Code
```Rust
struct Unsized {
inner2: [u8],
inner3: [u8],
}
```
### Current output
```Shell
error[E0277]: the size for values of type `[u8]` cannot be known at compilation time
[...]
```
### Desired output
```Shell
error: the size for values of type `[u8]` cannot be known at compilation time
[...]
(Don't mention E0277)
```
### Rationale and extra context
When you read up on this error code, it talks about failing to implement a trait. Nothing there talks about `Sized`, or why we're not implementing it.
The current error output is very good, but that part is a bit misleading.
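For reference, the rule behind the message is that only the last field of a struct may be unsized, so a variant like this compiles (just a sketch; `Vec<u8>` is one possible sized replacement):
```rust
struct Unsized {
    inner2: Vec<u8>, // a sized replacement for the first slice
    inner3: [u8],    // a single trailing unsized field is allowed
}
```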
### Other cases
```Rust
```
### Rust Version
```Shell
cargo 1.86.0-nightly (fd784878c 2025-01-03)
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,793,621,559 | svelte | $state array does not update on page back when reassigned | ### Describe the bug
When using the browser's back button, an array declared with $state is not reactive in Firefox if reassignment is used to change its value.
### Reproduction
```js
import { onMount } from 'svelte';

let favourites = $state([])
onMount(() => {
//this is a network promise in the actual code
Promise.resolve({"favourites": [1, 2, 3]}).then(res => {
//-----this causes the state to not update
favourites = res["favourites"];
//----
//===but when using this, it works
favourites.length = 0;
for(let a of res["favourites"]){
favourites.push(a);
}
//====
})
})
```
### System Info
```shell
Observed only in Firefox (my version: Mozilla Firefox 128.5.0esr), not in Chromium, and only when "Disable cache" in the network dev tools is turned off
```
### Severity
annoyance | awaiting submitter | low | Critical |
2,793,653,713 | electron | Two concurrent videos cannot be loaded from custom protocol | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.0
### What operating system(s) are you using?
macOS
### Operating System Version
Sequoia 15.2
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
34.0.0-alpha.7
### Expected Behavior
Two videos should play just fine at the same time, as they did in Electron 33 (and up to 34.0.0-alpha.7):
<img width="912" alt="Image" src="https://github.com/user-attachments/assets/0c1114f2-ab56-4d60-997e-245abf965c32" />
### Actual Behavior
If there is more than one `<video/>` element in the page, all of them will fail to load:
<img width="912" alt="Image" src="https://github.com/user-attachments/assets/7f84d45a-0ef1-4f41-bf11-20953caa14ee" />
### Testcase Gist URL
https://gist.github.com/indutny-signal/84a80dff2cf6af02dcaef9db8fdaa6f4
### Additional Information
I "bisected" this bug to begin to happen between 34.0.0-alpha.7 and 34.0.0-alpha.8 if it helps. | platform/macOS,bug :beetle:,status/confirmed,has-repro-gist,34-x-y,35-x-y | low | Critical |
2,793,654,404 | yt-dlp | [Dropbox] Error: No video formats found! | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
World
### Provide a description that is worded well enough to be understood
This is the only video that returns this error:
https://www.dropbox.com/s/fnxkf6gvr9zl7ow/IMG_3996.MOV?dl=0
I tried the full link and it's the same:
https://www.dropbox.com/scl/fi/8n13ei80sb3bmfm9nrcmw/IMG_3996.MOV?rlkey=t2mf7yg8m0vzenb432bklo0z0&e=1&dl=0
There is a playable video there, and it should download like all the others.
I have tried everything in my power to fix it, without results.
Don't judge me on the video lmao
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-P', '/Archive/Twerk/KateCakes/', 'https://www.dropbox.com/s/fnxkf6gvr9zl7ow/IMG_3996.MOV?dl=0']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [c8541f8b1] (zip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-51-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.46.0, pyxattr-0.8.1, requests-2.31.0, sqlite3-3.45.1, urllib3-2.0.7, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[Dropbox] Extracting URL: https://www.dropbox.com/s/fnxkf6gvr9zl7ow/IMG_3996.MOV?dl=0
[Dropbox] fnxkf6gvr9zl7ow: Downloading webpage
ERROR: [Dropbox] fnxkf6gvr9zl7ow: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1637, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1793, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1852, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2859, in process_video_result
self.raise_no_formats(info_dict)
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1126, in raise_no_formats
raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'],
yt_dlp.utils.ExtractorError: [Dropbox] fnxkf6gvr9zl7ow: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
``` | NSFW,site-bug,triage | low | Critical |
2,793,659,480 | vscode | support resolving / displaying what symlinks point to in suggest terminal details view | @tyriar suggested that we should support resolving symlinks and showing them in the terminal suggest details view
example:
/usr/local/bin/python3 -> /blah/python3.13 | feature-request,terminal-suggest | low | Minor |
2,793,660,256 | ui | [feat]: New components and improvements | ### Feature description
Hello everyone. As a UI designer, I want to thank this team for making such a great open-source UI kit! It is just amazing. I also tried the Figma file; I don't know if its designer is related to this team, and since I didn't find a GitHub repo for the design file, I will share my thoughts here. I would also like to report some bugs for the Figma file, but I'm not sure this is the right place. Anyhow, this issue is for improvements.
The following suggestions are not related to the appearance itself, but to the functionality; the pictures serve only as examples. Some of them are from Material Design, some from the eBay Evo design system.
A slider with two handles, and the possibility of showing the value below either permanently or on hover.

An icon in front of the text in the toast, as well as the possibility of the button being below if there are more lines of text.

Tabs that look like this should perhaps be renamed to a segmented button component, as in other kits. Also, the possibility to add icons in front of labels in tabs and segmented buttons.

Tabs look like this; I found this in the Figma file for this UI kit.

The date picker component should have some way to change the year. Google did this in Material in two ways; it can serve as inspiration.


The input should have a prefix, an icon in front, a suffix and then an icon at the end.

It would be good if there were a variant of the input where the label is inside the field.

New input chip component in the combo box


A new filter chip component, with the possibility to insert an icon at the front and at the end.

### Affected component/components
Tabs, Input, Date picker, Toast, Slider
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues and PRs | area: request | low | Critical |
2,793,678,197 | transformers | Audio-Classification Pipeline top_k Documentation mismatch and bug (possibly generalizes to any classification pipelines) | ### System Info
- `transformers` version: 4.48.0
- Platform: macOS-14.6-arm64-arm-64bit
- Python version: 3.12.4
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Neither/NA
### Who can help?
@Rocketknight1 @HERIUN
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. For the documentation mismatch, running the below is enough:
```python
from transformers import pipeline
import torch
import numpy as np
# model_name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim'
model_name = 'pollner/distilhubert-finetuned-ravdess'
top_k = None
device = 'cuda' if torch.cuda.is_available() else 'cpu'
classification_pipeline = pipeline(
"audio-classification",
model=model_name,
# top_k=top_k,
#function_to_apply='none',
device=device,
)
# dummy signal
sampling_rate = 16000
signal = np.zeros((sampling_rate), dtype=np.float32)
print(classification_pipeline(signal))
```
2. Setting top_k to None in the initialization of the pipeline causes breaking behavior if the number of labels is less than 5
```python
from transformers import pipeline
import torch
import numpy as np
model_name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim'
#model_name = 'pollner/distilhubert-finetuned-ravdess'
top_k = None
device = 'cuda' if torch.cuda.is_available() else 'cpu'
classification_pipeline = pipeline(
"audio-classification",
model=model_name,
top_k=top_k,
#function_to_apply='none',
device=device,
)
# dummy signal
sampling_rate = 16000
signal = np.zeros((sampling_rate), dtype=np.float32)
print(classification_pipeline(signal))
```
Stack trace:
```python
Some weights of Wav2Vec2ForSequenceClassification were not initialized from the model checkpoint at audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim and are newly initialized: ['classifier.bias', 'classifier.weight', 'projector.bias', 'projector.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Device set to use cpu
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[16], line 20
18 sampling_rate = 16000
19 signal = np.zeros((sampling_rate), dtype=np.float32)
---> 20 print(classification_pipeline(signal))
21 # example_audio = {
22 # 'raw': signal,
23 # 'sampling_rate': sampling_rate
24 # }
File ~/Library/Caches/pypoetry/virtualenvs/senselab-MihWUM64-py3.12/lib/python3.12/site-packages/transformers/pipelines/audio_classification.py:141, in AudioClassificationPipeline.__call__(self, inputs, **kwargs)
103 def __call__(
104 self,
105 inputs: Union[np.ndarray, bytes, str],
106 **kwargs,
107 ):
108 """
109 Classify the sequence(s) given as inputs. See the [`AutomaticSpeechRecognitionPipeline`] documentation for more
110 information.
(...)
139 - **score** (`float`) -- The corresponding probability.
140 """
--> 141 return super().__call__(inputs, **kwargs)
File ~/Library/Caches/pypoetry/virtualenvs/senselab-MihWUM64-py3.12/lib/python3.12/site-packages/transformers/pipelines/base.py:1362, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1354 return next(
1355 iter(
1356 self.get_iterator(
(...)
1359 )
1360 )
1361 else:
-> 1362 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File ~/Library/Caches/pypoetry/virtualenvs/senselab-MihWUM64-py3.12/lib/python3.12/site-packages/transformers/pipelines/base.py:1370, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1368 model_inputs = self.preprocess(inputs, **preprocess_params)
1369 model_outputs = self.forward(model_inputs, **forward_params)
-> 1370 outputs = self.postprocess(model_outputs, **postprocess_params)
1371 return outputs
File ~/Library/Caches/pypoetry/virtualenvs/senselab-MihWUM64-py3.12/lib/python3.12/site-packages/transformers/pipelines/audio_classification.py:227, in AudioClassificationPipeline.postprocess(self, model_outputs, top_k, function_to_apply)
225 else:
226 probs = model_outputs.logits[0]
--> 227 scores, ids = probs.topk(top_k)
229 scores = scores.tolist()
230 ids = ids.tolist()
RuntimeError: selected index k out of range
```
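For context, a minimal sketch of the kind of `top_k` guard that would avoid both the crash above and the documentation mismatch (this is an assumption about a possible fix, not the pipeline's actual code):
```python
def resolve_top_k(top_k, num_labels):
    """None means "return all labels"; explicit values are clamped to the
    number of labels so `topk` can never be asked for more than exist."""
    if top_k is None or top_k > num_labels:
        return num_labels
    return top_k


assert resolve_top_k(None, 3) == 3   # all labels when top_k is None
assert resolve_top_k(5, 3) == 3      # clamped for models with few labels
assert resolve_top_k(2, 8) == 2      # explicit small values pass through
```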
### Expected behavior
1. For the documentation mismatch, the [documentation](https://huggingface.co/docs/transformers/v4.47.1/en/main_classes/pipelines#transformers.AudioClassificationPipeline.__call__.top_k) states that if no `top_k` is provided (or None value), then we should return all labels. The magic number of 5 in the initialization of the pipeline causes this to not be the case for models that are returning more than 5 labels (code example above).
2. Setting `top_k` to None in the initializer should not create possible bugs for models with fewer than 5 (the magic number) labels. Instead, the code should set `top_k` to the maximum allowable number of labels. Additionally, the above still applies: if we set it to `None` in the pipeline initialization and do not overwrite it on any calls, we should output all labels (expected behavior number one). This issue exists since 4.47.0 due to [this change](https://github.com/huggingface/transformers/commit/854dc7941bf250ab27610f114d048ec9ca08ad1d) not taking into account a user passing `None` for the argument. | bug | low | Critical |
2,793,703,707 | transformers | Significant Performance Gap Between MaskFormer and Mask2Former Despite Identical Training Code | ### System Info
#### Description
I observed a significant performance difference between **MaskFormer** and **Mask2Former** when training both models on my dataset for instance segmentation. The training code is identical except for the model-specific configurations. Below, I outline my setup, preprocessing steps, results, and relevant code to help pinpoint any potential issues.
#### Dataset
- **Task**: Instance segmentation.
- **Format**: The dataset is designed as follows:
- The **R channel** contains the semantic class labels.
- The **G channel** contains the instance IDs for each object.
---
#### Preprocessing
For both models, I used the following preprocessing configuration. The only difference lies in the model type and the specific pre-trained weights used.
For both **MaskFormer** and **Mask2Former**, I set:
- `do_reduce_labels=True`
- `ignore_index=255`
The purpose of `do_reduce_labels=True` is to ensure that class indices start from 0 and are incremented sequentially. This shifts class indices by `-1`, as shown in the Hugging Face [[documentation](https://github.com/huggingface/transformers/blob/94af1c0aa242d60faa8e69acf3391b1e4fd0d4bc/examples/pytorch/instance-segmentation/run_instance_segmentation.py#L403)](https://github.com/huggingface/transformers/blob/94af1c0aa242d60faa8e69acf3391b1e4fd0d4bc/examples/pytorch/instance-segmentation/run_instance_segmentation.py#L403). The value `255` for `ignore_index` ensures that pixels labeled as background are ignored during loss computation.
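As a tiny illustration of that convention (array values invented for the example), the reduction step effectively shifts every class id down by one and maps what was class 0 to the ignore index:
```python
import numpy as np

# R channel of a 2x2 patch: 0 = background, 1 and 2 = object classes.
semantic = np.array([[0, 1], [2, 1]], dtype=np.int64)

reduced = semantic - 1          # do_reduce_labels: shift ids by -1
reduced[reduced == -1] = 255    # former background becomes ignore_index

print(reduced)
# [[255   0]
#  [  1   0]]
```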
---
#### Results
Both models were trained for 20 epochs with the same hyperparameters:
- Learning rate: `5e-5`
- Optimizer: Adam
Test Image

Here are the results:
**MaskFormer**:
```plaintext
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Test metric DataLoader 0
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
test_loss 1.0081120729446411
test_map 0.038004860281944275
test_map_50 0.06367719173431396
test_map_75 0.040859635919332504
test_map_large 0.5004204511642456
test_map_medium 0.04175732284784317
test_map_small 0.007470746990293264
test_mar_1 0.01011560671031475
test_mar_10 0.05838150158524513
test_mar_100 0.06329479813575745
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
```
Test Image Result with **MaskFormer**:

---
**Mask2Former**:
```plaintext
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Test metric DataLoader 0
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
test_loss 15.374979972839355
test_map 0.44928184151649475
test_map_50 0.6224347949028015
test_map_75 0.5011898279190063
test_map_large 0.8390558958053589
test_map_medium 0.6270320415496826
test_map_small 0.32075226306915283
test_mar_1 0.03526011481881142
test_mar_10 0.24104046821594238
test_mar_100 0.5274566411972046
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
```
Test Image Result with **Mask2Former**:

---
#### Observations
As you can see, the performance gap between the two models is substantial, despite identical training setups and preprocessing pipelines. **Mask2Former** achieves significantly better performance in terms of `mAP` and other metrics, while **MaskFormer** struggles to achieve meaningful results.
Any insights or suggestions would be greatly appreciated. Thank you!
---
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
#### Relevant Code
##### `train_maskformer.py`
```python
import os
import logging
import pytorch_lightning as pl
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger, MLFlowLogger
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint
import torch
import numpy as np
from src.dataset_singlegpu import SegmentationDataModule # Replace with src.dataset_distributed for multi-GPU training
from src.maskformer_singlegpu import MaskFormer # Replace with src.maskformer_distributed for multi-GPU training
torch.backends.cudnn.benchmark = True
np.random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed(1)
def setup_logging():
formatter = logging.Formatter(
"%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)
file_handler = logging.FileHandler("instance_segmentation/maskformer/train.log")
file_handler.setFormatter(formatter)
root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(stream_handler)
root_logger.addHandler(file_handler)
logger = logging.getLogger(__name__)
return logger
def main():
logger = setup_logging()
dataset_dir = "data/instance_segmentation" # Path to dataset directory
dm = SegmentationDataModule(dataset_dir=dataset_dir, batch_size=2, num_workers=4)
model = MaskFormer()
# Create logs dir if doesn't exist
if not os.path.isdir("logs/instance_segmentation"):
os.makedirs("logs/instance_segmentation")
loggers = [
CSVLogger(save_dir="logs/csv_logs", name="maskformer"),
TensorBoardLogger(save_dir="logs/tb_logs", name="maskformer"),
MLFlowLogger(
experiment_name="maskformer",
tracking_uri="file:logs/mlflow_logs",
),
]
checkpoint_callback = ModelCheckpoint(
dirpath="checkpoints/maskformer",
monitor="val_map",
filename="maskformer-{epoch:02d}-{val_map:.2f}",
save_top_k=1,
mode="max",
save_last=True,
verbose=True,
)
callbacks = [
EarlyStopping(monitor="val_map", mode="max", patience=10),
checkpoint_callback,
]
trainer = pl.Trainer(
accelerator="gpu",
devices=[0], # Use [0] for single GPU, [0, 1] for multiple GPUs
logger=loggers,
callbacks=callbacks,
min_epochs=1,
max_epochs=20,
precision="32-true",
num_sanity_val_steps=0,
)
# Start training (resume if a checkpoint exists)
if os.path.exists("checkpoints/maskformer/last.ckpt"):
trainer.fit(
model,
dm,
ckpt_path="checkpoints/maskformer/last.ckpt",
)
else:
trainer.fit(model, dm)
best_model_path = checkpoint_callback.best_model_path
logger.info(f"Best model saved at: {best_model_path}")
best_model = MaskFormer.load_from_checkpoint(checkpoint_path=best_model_path)
trainer.test(best_model, dm)
if __name__ == "__main__":
main()
```
##### `train_mask2former.py`
```python
import os
import logging
import pytorch_lightning as pl
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger, MLFlowLogger
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint
import torch
import numpy as np
from src.dataset_singlegpu import SegmentationDataModule # Replace with src.dataset_distributed for multi-GPU training
from src.mask2former_singlegpu import Mask2Former # Replace with src.mask2former_distributed for multi-GPU training
torch.backends.cudnn.benchmark = True
np.random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed(1)
def setup_logging():
formatter = logging.Formatter(
"%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)
file_handler = logging.FileHandler("logs/instance_segmentation/mask2former/train.log")
file_handler.setFormatter(formatter)
root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(stream_handler)
root_logger.addHandler(file_handler)
logger = logging.getLogger(__name__)
return logger
def main():
logger = setup_logging()
dataset_dir = "data/instance_segmentation" # Path to dataset directory
dm = SegmentationDataModule(dataset_dir=dataset_dir, batch_size=2, num_workers=4)
model = Mask2Former()
# Create logs dir if it doesn't exist
if not os.path.isdir("logs/instance_segmentation"):
os.makedirs("logs/instance_segmentation")
loggers = [
CSVLogger(save_dir="logs/csv_logs", name="mask2former"),
TensorBoardLogger(save_dir="logs/tb_logs", name="mask2former"),
MLFlowLogger(
experiment_name="mask2former",
tracking_uri="file:logs/mlflow_logs",
),
]
checkpoint_callback = ModelCheckpoint(
dirpath="checkpoints/mask2former",
monitor="val_map",
filename="mask2former-{epoch:02d}-{val_map:.2f}",
save_top_k=1,
mode="max",
save_last=True,
verbose=True,
)
callbacks = [
EarlyStopping(monitor="val_map", mode="max", patience=10),
checkpoint_callback,
]
trainer = pl.Trainer(
accelerator="gpu",
devices=[0], # Use [0] for single GPU, [0, 1] for multiple GPUs
logger=loggers,
callbacks=callbacks,
min_epochs=1,
max_epochs=20,
precision="32-true",
num_sanity_val_steps=0,
)
# Start training (resume if a checkpoint exists)
if os.path.exists("checkpoints/mask2former/last.ckpt"):
trainer.fit(
model,
dm,
ckpt_path="checkpoints/mask2former/last.ckpt",
)
else:
trainer.fit(model, dm)
best_model_path = checkpoint_callback.best_model_path
logger.info(f"Best model saved at: {best_model_path}")
best_model = Mask2Former.load_from_checkpoint(checkpoint_path=best_model_path)
trainer.test(best_model, dm)
if __name__ == "__main__":
main()
```
##### `maskformer.py`
```python
import logging
import torch
import pytorch_lightning as pl
from transformers import AutoImageProcessor
from transformers import MaskFormerForInstanceSegmentation
from torchmetrics.detection import MeanAveragePrecision
logger = logging.getLogger(__name__)
class MaskFormer(pl.LightningModule):
def __init__(self, learning_rate=5e-5):
super().__init__()
self.lr = learning_rate
self.mAP = MeanAveragePrecision(iou_type="segm", class_metrics=True)
# self.id2label = {0: "background", 1: "unhealty"}
self.id2label = {0: "unhealty"}
self.label2id = {v: int(k) for k, v in self.id2label.items()}
self.processor = AutoImageProcessor.from_pretrained(
"facebook/maskformer-swin-small-coco",
do_reduce_labels=True,
reduce_labels=True,
ignore_index=255,
do_resize=False,
do_rescale=False,
do_normalize=False,
)
self.model = self.setup_model()
self.save_hyperparameters()
def setup_model(self):
model = MaskFormerForInstanceSegmentation.from_pretrained(
"facebook/maskformer-swin-small-coco",
id2label=self.id2label,
label2id=self.label2id,
ignore_mismatched_sizes=True,
)
model.train()
return model
def forward(self, pixel_values, mask_labels=None, class_labels=None):
return self.model(
pixel_values=pixel_values,
mask_labels=mask_labels,
class_labels=class_labels,
)
def compute_metrics(self, outputs, batch):
# For metric computation we need to provide:
# - targets in the form of a list of dictionaries with keys "masks", "labels"
# - predictions in the form of a list of dictionaries with keys "masks", "labels", "scores"
targets = []
for masks, labels in zip(batch["mask_labels"], batch["class_labels"]):
target = {
"masks": masks.to("cuda").to(torch.bool),
"labels": labels.to("cuda"),
}
targets.append(target)
threshold = 0.5
target_sizes = [
(image.shape[1], image.shape[2]) for image in batch["pixel_values"]
]
processed_outputs = self.processor.post_process_instance_segmentation(
outputs=outputs,
threshold=threshold,
target_sizes=target_sizes,
return_binary_maps=True,
)
preds = []
for output, target_size in zip(processed_outputs, target_sizes):
if output["segments_info"]:
pred = {
"masks": output["segmentation"].to(dtype=torch.bool, device="cuda"),
"labels": torch.tensor(
[x["label_id"] for x in output["segments_info"]]
).to("cuda"),
"scores": torch.tensor(
[x["score"] for x in output["segments_info"]]
).to("cuda"),
}
else:
# for void predictions, we need to provide empty tensors
pred = {
"masks": torch.zeros([0, *target_size], dtype=torch.bool).to(
"cuda"
),
"labels": torch.tensor([]).to("cuda"),
"scores": torch.tensor([]).to("cuda"),
}
preds.append(pred)
return preds, targets
def training_step(self, batch, batch_idx):
outputs = self(
pixel_values=batch["pixel_values"],
mask_labels=[labels for labels in batch["mask_labels"]],
class_labels=[labels for labels in batch["class_labels"]],
)
loss = outputs.loss
self.log(
"train_loss",
loss,
batch_size=len(batch),
prog_bar=True,
on_step=False,
on_epoch=True,
)
return loss
def validation_step(self, batch, batch_idx):
outputs = self(
pixel_values=batch["pixel_values"],
mask_labels=[labels for labels in batch["mask_labels"]],
class_labels=[labels for labels in batch["class_labels"]],
)
loss = outputs.loss
self.log(
"val_loss",
loss,
batch_size=len(batch),
prog_bar=True,
on_step=False,
on_epoch=True,
)
preds, targets = self.compute_metrics(outputs, batch)
self.mAP.update(preds, targets)
return loss
def on_validation_epoch_end(self):
result = self.mAP.compute()
self.log("val_map", result["map"])
self.log("val_map_50", result["map_50"])
self.log("val_map_75", result["map_75"])
self.log("val_map_small", result["map_small"])
self.log("val_map_medium", result["map_medium"])
self.log("val_map_large", result["map_large"])
self.log("val_mar_1", result["mar_1"])
self.log("val_mar_10", result["mar_10"])
self.log("val_mar_100", result["mar_100"])
self.mAP.reset()
def test_step(self, batch, batch_idx):
outputs = self(
pixel_values=batch["pixel_values"],
mask_labels=[labels for labels in batch["mask_labels"]],
class_labels=[labels for labels in batch["class_labels"]],
)
loss = outputs.loss
self.log(
"test_loss",
loss,
batch_size=len(batch),
prog_bar=True,
on_step=False,
on_epoch=True,
)
preds, targets = self.compute_metrics(outputs, batch)
self.mAP.update(preds, targets)
return loss
def on_test_epoch_end(self):
result = self.mAP.compute()
self.log("test_map", result["map"])
self.log("test_map_50", result["map_50"])
self.log("test_map_75", result["map_75"])
self.log("test_map_small", result["map_small"])
self.log("test_map_medium", result["map_medium"])
self.log("test_map_large", result["map_large"])
self.log("test_mar_1", result["mar_1"])
self.log("test_mar_10", result["mar_10"])
self.log("test_mar_100", result["mar_100"])
self.mAP.reset()
def configure_optimizers(self):
optimizer = torch.optim.Adam(
[p for p in self.parameters() if p.requires_grad],
lr=self.lr,
)
return optimizer
```
##### `mask2former.py`
```python
import logging
import torch
import pytorch_lightning as pl
from transformers import AutoImageProcessor
from transformers import Mask2FormerForUniversalSegmentation
from torchmetrics.detection import MeanAveragePrecision
logger = logging.getLogger(__name__)
class Mask2Former(pl.LightningModule):
def __init__(self, learning_rate=5e-5):
super().__init__()
self.lr = learning_rate
self.mAP = MeanAveragePrecision(iou_type="segm", class_metrics=True)
# self.id2label = {0: "background", 1: "unhealty"}
self.id2label = {0: "unhealty"}
self.label2id = {v: int(k) for k, v in self.id2label.items()}
self.processor = AutoImageProcessor.from_pretrained(
"facebook/mask2former-swin-small-coco-instance",
do_reduce_labels=True,
reduce_labels=True,
ignore_index=255,
do_resize=False,
do_rescale=False,
do_normalize=False,
)
self.model = self.setup_model()
self.save_hyperparameters()
def setup_model(self):
model = Mask2FormerForUniversalSegmentation.from_pretrained(
"facebook/mask2former-swin-small-coco-instance",
id2label=self.id2label,
label2id=self.label2id,
ignore_mismatched_sizes=True,
)
model.train()
return model
def forward(self, pixel_values, mask_labels=None, class_labels=None):
return self.model(
pixel_values=pixel_values,
mask_labels=mask_labels,
class_labels=class_labels,
)
def compute_metrics(self, outputs, batch):
# For metric computation we need to provide:
# - targets in the form of a list of dictionaries with keys "masks", "labels"
# - predictions in the form of a list of dictionaries with keys "masks", "labels", "scores"
targets = []
for masks, labels in zip(batch["mask_labels"], batch["class_labels"]):
target = {
"masks": masks.to("cuda").to(dtype=torch.bool, device="cuda"),
"labels": labels.to("cuda"),
}
targets.append(target)
threshold = 0.5
target_sizes = [
(image.shape[1], image.shape[2]) for image in batch["pixel_values"]
]
processed_outputs = self.processor.post_process_instance_segmentation(
outputs=outputs,
threshold=threshold,
target_sizes=target_sizes,
return_binary_maps=True,
)
# TODO: remove detach
# detached_outputs = [
# {
# "segmentation": output["segmentation"].to("cpu"),
# "segments_info": output["segments_info"],
# }
# for output in processed_outputs
# ]
preds = []
for output, target_size in zip(processed_outputs, target_sizes):
if output["segments_info"]:
pred = {
"masks": output["segmentation"].to(dtype=torch.bool, device="cuda"),
"labels": torch.tensor(
[x["label_id"] for x in output["segments_info"]], device="cuda"
),
"scores": torch.tensor(
[x["score"] for x in output["segments_info"]], device="cuda"
),
}
else:
# for void predictions, we need to provide empty tensors
pred = {
"masks": torch.zeros(
[0, *target_size], dtype=torch.bool, device="cuda"
),
"labels": torch.tensor([], device="cuda"),
"scores": torch.tensor([], device="cuda"),
}
preds.append(pred)
return preds, targets
def training_step(self, batch, batch_idx):
outputs = self(
pixel_values=batch["pixel_values"],
mask_labels=[labels for labels in batch["mask_labels"]],
class_labels=[labels for labels in batch["class_labels"]],
)
loss = outputs.loss
self.log(
"train_loss",
loss,
batch_size=len(batch),
prog_bar=True,
on_step=False,
on_epoch=True,
)
return loss
def validation_step(self, batch, batch_idx):
outputs = self(
pixel_values=batch["pixel_values"],
mask_labels=[labels for labels in batch["mask_labels"]],
class_labels=[labels for labels in batch["class_labels"]],
)
loss = outputs.loss
self.log(
"val_loss",
loss,
batch_size=len(batch),
prog_bar=True,
on_step=False,
on_epoch=True,
)
preds, targets = self.compute_metrics(outputs, batch)
self.mAP.update(preds, targets)
return loss
def on_validation_epoch_end(self):
result = self.mAP.compute()
self.log("val_map", result["map"])
self.log("val_map_50", result["map_50"])
self.log("val_map_75", result["map_75"])
self.log("val_map_small", result["map_small"])
self.log("val_map_medium", result["map_medium"])
self.log("val_map_large", result["map_large"])
self.log("val_mar_1", result["mar_1"])
self.log("val_mar_10", result["mar_10"])
self.log("val_mar_100", result["mar_100"])
self.mAP.reset()
def test_step(self, batch, batch_idx):
outputs = self(
pixel_values=batch["pixel_values"],
mask_labels=[labels for labels in batch["mask_labels"]],
class_labels=[labels for labels in batch["class_labels"]],
)
loss = outputs.loss
self.log(
"test_loss",
loss,
batch_size=len(batch),
prog_bar=True,
on_step=False,
on_epoch=True,
)
preds, targets = self.compute_metrics(outputs, batch)
self.mAP.update(preds, targets)
return loss
def on_test_epoch_end(self):
result = self.mAP.compute()
self.log("test_map", result["map"])
self.log("test_map_50", result["map_50"])
self.log("test_map_75", result["map_75"])
self.log("test_map_small", result["map_small"])
self.log("test_map_medium", result["map_medium"])
self.log("test_map_large", result["map_large"])
self.log("test_mar_1", result["mar_1"])
self.log("test_mar_10", result["mar_10"])
self.log("test_mar_100", result["mar_100"])
self.mAP.reset()
def configure_optimizers(self):
optimizer = torch.optim.Adam(
[p for p in self.parameters() if p.requires_grad],
lr=self.lr,
)
return optimizer
```
##### `dataset_maskformer.py`
```python
import os
import logging
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
import pytorch_lightning as pl
import datasets
from transformers import AutoImageProcessor
import albumentations as A
logger = logging.getLogger(__name__)
class SegmentationDataset(Dataset):
def __init__(self, dataset: str, transform=None):
self.dataset = dataset
self.transform = transform
self.processor = AutoImageProcessor.from_pretrained(
"facebook/maskformer-swin-small-coco",
do_reduce_labels=True,
reduce_labels=True,
ignore_index=255,
do_resize=False,
do_rescale=False,
do_normalize=False,
)
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
image = np.array(self.dataset[idx]["image"].convert("RGB"))
mask = np.array(self.dataset[idx]["annotation"].convert("RGB"))
class_id_map = mask[:, :, 0]
instance_seg = mask[:, :, 1]
class_labels = np.unique(class_id_map)
inst2class = {}
for label in class_labels:
instance_ids = np.unique(instance_seg[class_id_map == label])
inst2class.update({i: label for i in instance_ids})
if self.transform is not None:
augmentations = self.transform(image=image, mask=instance_seg)
image = augmentations["image"]
mask = augmentations["mask"]
inputs = self.processor(
[image],
[mask],
instance_id_to_semantic_id=inst2class,
return_tensors="pt",
)
return {
"pixel_values": inputs.pixel_values[0],
"mask_labels": inputs.mask_labels[0],
"class_labels": inputs.class_labels[0],
}
class SegmentationDataModule(pl.LightningDataModule):
def __init__(self, dataset_dir, batch_size, num_workers):
super().__init__()
self.dataset_dir = dataset_dir
self.batch_size = batch_size
self.num_workers = num_workers
self.dataset = None
# ImageNet mean and std
self.mean = [0.485, 0.456, 0.406]
self.std = [0.229, 0.224, 0.225]
self.train_transform = A.Compose(
[
A.Resize(height=512, width=512),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.RandomBrightnessContrast(p=0.5),
A.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
A.Normalize(mean=self.mean, std=self.std),
]
)
self.val_test_transform = A.Compose(
[
A.Resize(height=512, width=512),
A.Normalize(mean=self.mean, std=self.std),
]
)
def prepare_data(self):
if os.path.exists(self.dataset_dir):
if os.path.isdir(self.dataset_dir):
try:
self.dataset = datasets.load_from_disk(self.dataset_dir)
logger.info(f"Loaded dataset from disk: {self.dataset_dir}")
except Exception as e:
logger.info(f"Failed to load dataset from disk: {e}")
def collate_fn(self, examples):
batch = {}
batch["pixel_values"] = torch.stack(
[example["pixel_values"] for example in examples]
)
batch["class_labels"] = [example["class_labels"] for example in examples]
batch["mask_labels"] = [example["mask_labels"] for example in examples]
return batch
def setup(self, stage=None):
if stage == "fit" or stage is None:
logger.info("Setting up training dataset")
self.train_dataset = SegmentationDataset(
dataset=self.dataset["train"], transform=self.train_transform
)
logger.info("Setting up validation dataset")
self.val_dataset = SegmentationDataset(
dataset=self.dataset["validation"], transform=self.val_test_transform
)
if stage == "test" or stage is None:
logger.info("Setting up test dataset")
self.test_dataset = SegmentationDataset(
dataset=self.dataset["test"], transform=self.val_test_transform
)
def train_dataloader(self):
logger.info("Creating training DataLoader")
return DataLoader(
self.train_dataset,
batch_size=self.batch_size,
num_workers=self.num_workers,
shuffle=True,
collate_fn=self.collate_fn,
)
def val_dataloader(self):
logger.info("Creating val DataLoader")
return DataLoader(
self.val_dataset,
batch_size=self.batch_size,
num_workers=self.num_workers,
shuffle=False,
collate_fn=self.collate_fn,
)
def test_dataloader(self):
logger.info("Creating test DataLoader")
return DataLoader(
self.test_dataset,
batch_size=self.batch_size,
num_workers=self.num_workers,
shuffle=False,
collate_fn=self.collate_fn,
)
```
##### `dataset_mask2former.py`
```python
import os
import logging
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
import pytorch_lightning as pl
import datasets
from transformers import AutoImageProcessor
import albumentations as A
logger = logging.getLogger(__name__)
class SegmentationDataset(Dataset):
def __init__(self, dataset: str, transform=None):
self.dataset = dataset
self.transform = transform
self.processor = AutoImageProcessor.from_pretrained(
"facebook/mask2former-swin-small-coco-instance",
do_reduce_labels=True,
reduce_labels=True,
ignore_index=255,
do_resize=False,
do_rescale=False,
do_normalize=False,
)
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
image = np.array(self.dataset[idx]["image"].convert("RGB"))
mask = np.array(self.dataset[idx]["annotation"].convert("RGB"))
class_id_map = mask[:, :, 0]
instance_seg = mask[:, :, 1]
class_labels = np.unique(class_id_map)
inst2class = {}
for label in class_labels:
instance_ids = np.unique(instance_seg[class_id_map == label])
inst2class.update({i: label for i in instance_ids})
if self.transform is not None:
augmentations = self.transform(image=image, mask=instance_seg)
image = augmentations["image"]
mask = augmentations["mask"]
inputs = self.processor(
[image],
[mask],
instance_id_to_semantic_id=inst2class,
return_tensors="pt",
)
return {
"pixel_values": inputs.pixel_values[0],
"mask_labels": inputs.mask_labels[0],
"class_labels": inputs.class_labels[0],
}
class SegmentationDataModule(pl.LightningDataModule):
def __init__(self, dataset_dir, batch_size, num_workers):
super().__init__()
self.dataset_dir = dataset_dir
self.batch_size = batch_size
self.num_workers = num_workers
self.dataset = None
self.mean = [0.485, 0.456, 0.406]
self.std = [0.229, 0.224, 0.225]
self.train_transform = A.Compose(
[
A.Resize(height=512, width=512),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.RandomBrightnessContrast(p=0.5),
A.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
A.Normalize(mean=self.mean, std=self.std),
]
)
self.val_test_transform = A.Compose(
[
A.Resize(height=512, width=512),
A.Normalize(mean=self.mean, std=self.std),
]
)
def prepare_data(self):
if os.path.exists(self.dataset_dir):
if os.path.isdir(self.dataset_dir):
try:
self.dataset = datasets.load_from_disk(self.dataset_dir)
logger.info(f"Loaded dataset from disk: {self.dataset_dir}")
except Exception as e:
logger.info(f"Failed to load dataset from disk: {e}")
def collate_fn(self, examples):
batch = {}
batch["pixel_values"] = torch.stack(
[example["pixel_values"] for example in examples]
)
batch["class_labels"] = [example["class_labels"] for example in examples]
batch["mask_labels"] = [example["mask_labels"] for example in examples]
return batch
def setup(self, stage=None):
if stage == "fit" or stage is None:
logger.info("Setting up training dataset")
self.train_dataset = SegmentationDataset(
dataset=self.dataset["train"], transform=self.train_transform
)
logger.info("Setting up validation dataset")
self.val_dataset = SegmentationDataset(
dataset=self.dataset["validation"], transform=self.val_test_transform
)
if stage == "test" or stage is None:
logger.info("Setting up test dataset")
self.test_dataset = SegmentationDataset(
dataset=self.dataset["test"], transform=self.val_test_transform
)
def train_dataloader(self):
logger.info("Creating training DataLoader")
return DataLoader(
self.train_dataset,
batch_size=self.batch_size,
num_workers=self.num_workers,
shuffle=True,
collate_fn=self.collate_fn,
)
def val_dataloader(self):
logger.info("Creating val DataLoader")
return DataLoader(
self.val_dataset,
batch_size=self.batch_size,
num_workers=self.num_workers,
shuffle=False,
collate_fn=self.collate_fn,
)
def test_dataloader(self):
logger.info("Creating test DataLoader")
return DataLoader(
self.test_dataset,
batch_size=self.batch_size,
num_workers=self.num_workers,
shuffle=False,
collate_fn=self.collate_fn,
)
```
##### `requirements.txt`
```
albumentations>=2.0.0
datasets>=3.2.0
deepspeed>=0.16.2
evaluate>=0.4.3
google-cloud-storage>=2.18.2
ijson>=3.3.0
ipykernel>=6.29.5
ipywidgets>=8.1.5
label-studio-sdk>=1.0.7
lxml>=5.3.0
mlflow>=2.19.0
opencv-python>=4.10.0.84
pillow>=11.0.0
protobuf>=5.28.3
pytorch-lightning>=2.5.0.post0
pyyaml>=6.0.2
ruff>=0.8.4
tensorboard>=2.18.0
torchmetrics[detection]>=1.6.0
torchvision>=0.20.1
transformers>=4.48.0
```
### Expected behavior
MaskFormer and Mask2Former should have similar results | bug | low | Critical |
2,793,707,883 | transformers | Audio-Classification pipeline function_to_apply ignores initialized values (possibly generalizes to other classification pipelines) | ### System Info
- `transformers` version: 4.48.0
- Platform: macOS-14.6-arm64-arm-64bit
- Python version: 3.12.4
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: None/NA
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
import torch
import numpy as np
model_name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim'
#model_name = 'pollner/distilhubert-finetuned-ravdess'
top_k = 5
device = 'cuda' if torch.cuda.is_available() else 'cpu'
classification_pipeline = pipeline(
"audio-classification",
model=model_name,
top_k=top_k,
function_to_apply='none',
device=device,
)
# dummy signal
sampling_rate = 16000
signal = np.zeros((sampling_rate), dtype=np.float32)
print('No call parameter should match passing none:')
print(classification_pipeline(signal))
print('Call parameter with none:')
print(classification_pipeline(signal, function_to_apply='none'))
print('Call parameter with softmax which matches no parameter:')
print(classification_pipeline(signal, function_to_apply='softmax'))
print('Call parameter with sigmoid for show:')
print(classification_pipeline(signal, function_to_apply='sigmoid'))
```
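For illustration, here is a stripped-down toy pipeline (not the real `transformers` implementation; the class and its method bodies are simplified assumptions) showing how a call-time default returned by `_sanitize_parameters` can silently clobber a value passed at init time, which is the interaction described under "Expected behavior" below:
```python
# Hypothetical toy example -- NOT the actual transformers code.
class ToyPipeline:
    def __init__(self, **init_kwargs):
        # init-time kwargs are sanitized once and stored on the instance
        _, _, self._postprocess_params = self._sanitize_parameters(**init_kwargs)

    def _sanitize_parameters(self, function_to_apply=None, **kwargs):
        postprocess_params = {}
        # Always returning a value here (instead of only when the caller
        # explicitly passed the argument) ...
        postprocess_params["function_to_apply"] = function_to_apply or "softmax"
        return {}, {}, postprocess_params

    def __call__(self, inputs, **call_kwargs):
        _, _, call_params = self._sanitize_parameters(**call_kwargs)
        # ... means the call-time default overwrites the init-time value:
        return {**self._postprocess_params, **call_params}

p = ToyPipeline(function_to_apply="none")
print(p("dummy"))                                # {'function_to_apply': 'softmax'} -> init value lost
print(p("dummy", function_to_apply="sigmoid"))   # {'function_to_apply': 'sigmoid'}
```
A likely fix, if this is indeed unintended, would be to include `function_to_apply` in the returned postprocess parameters only when it was explicitly provided.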
### Expected behavior
I will note that this behavior could make sense, but it should probably be documented somewhere if it is intended. I assume it is not intended, however, because in [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1320) initialized parameters are, in theory, only overwritten when they are sent with the call, yet [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1320) in `_sanitize_parameters` returns a default value that will always overwrite the value from initialization. | bug | low | Minor |
2,793,709,529 | flutter | [CupertinoDatePicker] year field in unselected value baseline is broken | ### Steps to reproduce
The text baseline of the year column is not vertically aligned with the day and month columns; see the attached image. Tested on both stable (Flutter 3.24.5) and master.
### Expected results
The text baseline of the year column should be vertically aligned with the day and month columns.
### Actual results
The text baseline of the year column is not vertically aligned with the day and month columns.
### Code sample
<details open><summary>Code sample</summary>
```dart
Container(
height: MediaQuery.sizeOf(context).height * 0.35,
child: CupertinoTheme(
data: CupertinoThemeData(
brightness: Brightness.dark,
),
child: CupertinoDatePicker(
onDateTimeChanged: (dateTime) => selectedDateTime = dateTime,
mode: CupertinoDatePickerMode.date,
initialDateTime: selectedDateTime,
minimumDate: DateTime(1900),
),
),
),
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
Flutter version 3.24.5
| framework,f: date/time picker,f: cupertino,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Critical |
2,793,736,146 | opencv | Many test failures on powerpc64le with qemu | ### System Information
- OpenCV @ 4.x (Jan 16, 2024) (1d701d1690b8cc9aa6b86744bffd5d9841ac6fd3)
- ubuntu 22.04 with g++-powerpc64le-linux-gnu and qemu-user packages
### Detailed description
Many tests fail when running the core tests under qemu for powerpc64le.
### Steps to reproduce
Build using toolchain file:
```cmake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR powerpc64le)
set(CMAKE_C_COMPILER /usr/bin/powerpc64le-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER /usr/bin/powerpc64le-linux-gnu-g++)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
```
Run:
```
OPENCV_TEST_DATA_PATH=/opencv_extra/testdata \
qemu-ppc64le \
-L /usr/powerpc64le-linux-gnu/ \
./bin/opencv_test_core
```
Result: 307 tests failed
[failures.txt](https://github.com/user-attachments/files/18445146/failures.txt)
### Issue submission checklist
- [x] I report the issue, it's not a question
- [x] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [x] I updated to the latest OpenCV version and the issue is still there
- [x] There is reproducer code and related data files (videos, images, onnx, etc) | bug,priority: low,platform: ppc (PowerPC) | low | Critical |
2,793,779,156 | go | x/tools/gopls/internal/analysis/modernize: incomplete simplification in appendclipped | The appendclipped modernizer has an existing test case whose output (after applying a fix) is:
```go
print(slices.Concat(other[:len(other):len(other)], s, other)) // want "Replace append with slices.Concat"
```
The first argument should also have been simplified to `other`. Investigate + fix. | gopls,Tools,Refactoring,BugReport | low | Minor |
2,793,789,463 | ollama | Llama-3_1-Nemotron-51B-Instruct | Please add the Nvidia Model.
https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF | model request | low | Minor |
2,793,817,034 | PowerToys | Fixed Scrolling through the history in the color picker editor with a mouse wheel. #33551 PR regression in latest versions (not working anymore) | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
ColorPicker
### Steps to reproduce
Open the color picker menu
Try to scroll with the mouse wheel in the color history and see that it doesn't work, even though it was fixed in a previous version.
See #33551
### ✔️ Expected Behavior
The color list should scroll along with the mouse wheel
### ❌ Actual Behavior
It doesn't scroll
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,793,829,872 | pytorch | DISABLED test_profiler_mark_wrapper_call_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_profiler_mark_wrapper_call_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35732496276).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_profiler_mark_wrapper_call_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,793,829,945 | pytorch | DISABLED test_aoti_eager_support_str_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_support_str_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35728355868).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 8 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_support_str_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1025, in test_aoti_eager_support_str
res_value = getattr(torch.ops.aten, op_name)(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_support_str_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,793,830,107 | pytorch | DISABLED test_list_clearing_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_list_clearing_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35728356843).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 8 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_list_clearing_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 9819, in test_list_clearing
fn_compiled(inps)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_list_clearing_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,793,830,121 | pytorch | DISABLED test_cow_input_masked_argmin_cuda_float32 (__main__.TestCompositeComplianceCUDA) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cow_input_masked_argmin_cuda_float32&suite=TestCompositeComplianceCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35727153944).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cow_input_masked_argmin_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1167, in test_wrapper
return test(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/test_ops.py", line 1905, in test_cow_input
check_cow_input(arg, args_copy[idx], idx)
File "/var/lib/jenkins/pytorch/test/test_ops.py", line 1858, in check_cow_input
self.assertTrue(
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Argument 0 during forward call avoided materialization, but the operation mutated its data.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 465, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1628, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1179, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 24: SampleInput(input=Tensor[size=(3, 5), device="cuda:0", dtype=torch.float32], args=(), kwargs={'mask': 'Tensor[size=(3, 5), device="cuda:0", dtype=torch.bool]', 'dim': '-1', 'keepdim': 'False'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=24 PYTORCH_TEST_WITH_ROCM=1 python test/test_ops.py TestCompositeComplianceCUDA.test_cow_input_masked_argmin_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_ops.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | module: rocm,triaged,module: flaky-tests,skipped,module: unknown | low | Critical |
2,793,832,386 | ollama | Maintain Object Key Order in JSON Schema Outputs | Currently, when generating JSON outputs based on a provided schema, the keys in objects do not retain the order specified in the schema. This behavior differs from OpenAI's implementation, where the order of keys is preserved as defined. Maintaining the specified key order is crucial for applications that rely on consistent JSON structures. Implementing this feature would enhance compatibility and predictability of the generated outputs. | feature request | low | Minor |
2,793,837,154 | ollama | Enhance JSON Schema Support to Include Arrays and Complex Types | The current implementation of JSON schema support in Ollama has limitations when handling arrays and complex data types. For instance, defining a schema with an array of strings (`{"type": "array", "items": {"type": "string"}}`) is not processed as expected. Expanding the JSON schema support to fully adhere to the JSON Schema specification would allow for more versatile and accurate data representations, aligning Ollama's capabilities with standard practices. | feature request | low | Minor |
2,793,863,224 | go | crypto/ecdsa: use variable time ScalarBaseMult/ScalarMult in Verify | We're leaving some perf on the table by doing these operations in constant time. We should use a variable time scalar mult, similar to what we do for crypto/ed25519:
https://github.com/golang/go/blob/1a93e4a2cf43b0ded141d33620966bb252cac1bd/src/crypto/internal/fips140/ed25519/ed25519.go#L335
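For intuition, a toy sketch of the variable-time technique in question (Straus/Shamir interleaving for a double-scalar computation), written here in Python with modular exponentiation instead of elliptic-curve points (the modulus, bases, and function name are made up for the demo). In ECDSA verification all inputs are public, so a variable-time version of this shape is safe:
```python
# Toy illustration only: variable-time double "scalar multiplication" via
# Straus/Shamir interleaving, using modular exponentiation in place of
# elliptic-curve point arithmetic.
def double_exp(g, h, a, b, p):
    """Compute g**a * h**b mod p with one shared, variable-time square-and-multiply loop."""
    gh = (g * h) % p
    acc = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        acc = (acc * acc) % p
        ai, bi = (a >> i) & 1, (b >> i) & 1
        if ai and bi:
            acc = (acc * gh) % p
        elif ai:
            acc = (acc * g) % p
        elif bi:
            acc = (acc * h) % p
    return acc

p = 2**127 - 1  # a Mersenne prime, convenient for the demo
g, h, a, b = 3, 7, 123456789, 987654321
assert double_exp(g, h, a, b, p) == (pow(g, a, p) * pow(h, b, p)) % p
```
The data-dependent branch pattern (and, in a real implementation, per-scalar window tables) is what leaks timing, which is acceptable here because verification handles no secrets.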
cc @FiloSottile | Performance,NeedsInvestigation,Implementation | low | Minor |
2,793,872,784 | vscode | terminal suggest with `__` paths | Deprioritize or don't show names with `__`. Here, `__pycache__` isn't something I want to reference. It could also be based on what's in the VS Code ignore settings.
<img width="1235" alt="Image" src="https://github.com/user-attachments/assets/67922204-8f3c-4f87-85a2-b68e40ad3f59" /> | under-discussion,terminal-suggest | low | Major |
2,793,915,671 | kubernetes | Object Count Quota For Non Namespaced object | ### What would you like to be added?
The object count quota in ResourceQuota could have a similar counterpart that is not namespaced but applies to cluster-wide objects.
https://kubernetes.io/docs/concepts/policy/resource-quotas/#object-count-quota
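For context, the existing namespaced form can be created like this (a sketch using the official Kubernetes Python client; the namespace, quota name, and limits are illustrative):
```python
# Sketch of the existing *namespaced* object-count quota; the request above is
# for an analogous mechanism that can also cap cluster-scoped objects.
from kubernetes import client, config

config.load_kube_config()
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="object-counts", namespace="team-a"),
    spec=client.V1ResourceQuotaSpec(
        hard={"count/configmaps": "10", "count/secrets": "20"}
    ),
)
client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)
```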
### Why is this needed?
Object count quota is pretty handy for keeping the API server from being accidentally overwhelmed by the creation of lots of objects by some user. However, some objects are not namespaced and thus cannot be managed by a namespaced ResourceQuota (would love to be wrong).
An example might be CiliumIdentities used by the Cilium CNI. Since they are created for every unique label combination, a namespace label change or the generation of pods with GUIDs in labels can create a huge number of cluster-level resources, and it might be better to fail the creation of identities rather than explode the API server.
Without that, a webhook is necessary to implement this, which has its downsides, but it is not impossible.
Realize this is not a trivial feature but creating an issue to see if others had | sig/api-machinery,kind/feature,needs-triage | low | Minor |
2,793,922,080 | pytorch | Add ATen functions in native_functions.yaml to torch_in_graph_functions list automatically | ### 🐛 Describe the bug
Currently, whenever a new native function is added, we must manually add an entry to `torch/_dynamo/trace_rules.torch_c_binding_in_graph_functions` to ensure that Dynamo includes it in the FX graph during tracing. If this step is missed, the pull request introducing the native function may encounter confusing CI failures (e.g. #132135). For example, a PR author intending to add a native function for eager mode might see numerous compile-related test failures, which can be extremely challenging for open-source contributors to diagnose and resolve.
To address this, we propose introducing a helper function that automatically adds all ATen functions from `native_functions.yaml` to the `torch_c_binding_in_graph_functions` list. Since changes to ATen functions or Torch-level APIs are the most common scenarios, this solution would cover the majority of use cases and alleviate the current pain points a lot.
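As a rough sketch of what such a helper could look like (the file path, field names, and emitted entry format are assumptions for illustration, not the actual torch internals):
```python
# Illustrative sketch only -- not the real torch/_dynamo implementation.
import yaml

def aten_in_graph_candidates(path="aten/src/ATen/native/native_functions.yaml"):
    """Collect overload-packet names from native_functions.yaml so they can be
    merged into the in-graph-functions list instead of being added by hand."""
    with open(path) as f:
        entries = yaml.safe_load(f)
    names = set()
    for entry in entries:
        sig = entry.get("func", "")               # e.g. "add.Tensor(Tensor self, ...) -> Tensor"
        packet = sig.split("(")[0].split(".")[0]  # overload packet name, e.g. "add"
        if packet:
            names.add(f"torch.ops.aten.{packet}")
    return sorted(names)

if __name__ == "__main__":
    for name in aten_in_graph_candidates():
        print(name)
```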
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @albanD @anijain2305 @zou3519 | triaged,module: dynamo | low | Critical |
2,793,927,299 | godot | Embedded window is not captured by screen capture and runs into edge cases | ### Tested versions
v4.4.beta1.official [d33da79d3]
### System information
N/A Not system specific
### Issue description
The embedding is currently implemented by placing the game window as a child of the game workspace.
While this was easy to implement, it runs into many edge cases, such as shortcuts that move the window, and is not supported on macOS. The approach also means window capture by programs like OBS or Discord does not capture the game itself, which to me specifically is a big deal.
I cannot explain exactly how this would be re-implemented to fix all of the above since I am not knowledgeable on the subject, but to get the idea across:
The game is still a separate process, but it runs without a window (or with a hidden window?). That process is "streamed" into a texture placed in the editor workspace.
Some details on Mac: https://github.com/godotengine/godot/pull/99010#issuecomment-2558125270
While this is _necessary_ only on macOS, this is how the embedding should work on all supported platforms because, as I initially mentioned, the current approach has edge cases and limitations. The current implementation would remain as a fallback if the API doesn't support "streaming" (does OpenGL not support it at all?).
### Steps to reproduce
Try recording the editor window with a program like OBS.
### Minimal reproduction project (MRP)
N/A | bug,enhancement,topic:editor | low | Minor |
2,793,930,874 | godot | AudioStreamPlayer clip falsely loops after animation end in web export | ### Tested versions
- Reproducible in 4.3.stable, 4.3.dev7
### System information
Godot v4.4.dev7 - macOS Sequoia (15.1.1) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Apple M1 Pro - Apple M1 Pro (8 threads)
### Issue description
**Expected behavior (as in desktop exports):**
When a looping audio clip is triggered through an AnimationPlayer, playback always stops either when the clip's end (after any trim in the animation timeline) is reached or when the animation finishes playing, regardless of the `loop` import property.
**Actual behavior:**
In web builds (with sample playback), when no "end" of the clip is reached (i.e. the end of the clip is after the animation end) the clip will keep looping indefinitely - the end of the animation does not end the clip (and its "end" outside of the animation doesn't either, even when the clip is trimmed). A particularly annoying edge-case occurs when the clip isn't trimmed and thus is allowed to keep looping until the animation stops - in web builds the clip will keep looping forever as no "playback end" is ever encountered. Once the animation has ended, playing the animation again will start a fresh instance of the clip, layering them on top.
### Steps to reproduce
1. Create a new AudioStreamPlayer
2. Import an audio file and set its `loop` import property to true
3. Create an AnimationPlayer, create an autoplaying animation, add an Audio Playback Track with the looping audio.
4. Make sure that the animation end is before the end of the audio clip.
5. Run the web export - the audio will keep looping.
Alternatively (with MRP):
1. Open the minimal reproduction project
2. Run the web export
3. Press the "Trigger animation" button to start playback
4. Note how in the editor the audio will stop after the animation end but the web version keeps playing (even though the clip is additionally trimmed)
### Minimal reproduction project (MRP)
[web-animation-audio-loop-bug.zip](https://github.com/user-attachments/files/18446101/web-animation-audio-loop-bug.zip) | bug,platform:web,topic:audio | low | Critical |
2,793,931,082 | flutter | 🛑 `flutter config --explicit-package-dependencies-`: Gradle release build fails with only dev-dependencies | In https://github.com/flutter/flutter/pull/160289, I attempt to enable `flutter config --explicit-package-dependencies`.
It's getting there... the next blocker after updating the lock files is this:
```sh
$ cd dev/integration_tests/spell_check
$ flutter config --explicit-package-dependencies
$ flutter build apk
e: file:///Users/matanl/Developer/flutter/dev/integration_tests/spell_check/android/app/src/main/kotlin/com/example/sc_int_test/MainActivity.kt:3:8 Unresolved reference: io
e: file:///Users/matanl/Developer/flutter/dev/integration_tests/spell_check/android/app/src/main/kotlin/com/example/sc_int_test/MainActivity.kt:5:22 Unresolved reference: FlutterActivity
FAILURE: Build failed with an exception.
```
From what I can tell, it's due to this stanza in `flutter.groovy`:
https://github.com/flutter/flutter/blob/90f926edc1357eb1216a09fe7b3716d5e95fe9f9/packages/flutter_tools/gradle/src/main/groovy/flutter.groovy#L600-L603
With our new logic flow (TM), it's possible for the plugin list to evaluate to `size() > 0` _and_ for no plugin to receive the Flutter embedding dependencies. From talking to @gmackall, this may or may not be intentional; we can figure that out while debugging and submitting PRs to fix these tests.
What I'd like to see is:
1. `android_release_builds_exclude_dev_dependencies_test` showcases the failure being talked about (and is fixed)
2. The above workflow (`cd dev/integration_tests/spell_check && flutter build apk`) works when the flag is enabled
https://github.com/flutter/flutter/blob/90f926edc1357eb1216a09fe7b3716d5e95fe9f9/dev/devicelab/bin/tasks/android_release_builds_exclude_dev_dependencies_test.dart#L32-L36 | platform-android,f: integration_test,P1,team-android,fyi-tool | medium | Critical |
2,793,937,529 | godot | Objects are not shown as selected when the game is paused | ### Tested versions
v4.4.beta1.official [d33da79d3]
### System information
w10 64
### Issue description
As shown in this video, objects are not marked as selected when the game is paused.
https://github.com/user-attachments/assets/6c95adda-e950-4762-a864-a5ff3c830451
### Steps to reproduce
See the video
### Minimal reproduction project (MRP)
... | discussion,topic:editor,usability | low | Minor |
2,793,990,440 | godot | Can't open godot with Nvidia 566.36 with a 3070 gpu | ### Tested versions
Reproducible in every version of 4.4 including Dev 1
### System information
Windows 11 - Godot 4.4 (any version) - OpenGL API - Nvidia 3070 or 970(happens with both) - AMD Ryzen 7 3700X 8-Core Processor 3.60 GHz -
### Issue description

Every time I try to open Godot, it just closes before it even finishes opening.
This happens with both my 3070 and 970 GPUs.
### Steps to reproduce
All I do is open the application.
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:rendering,needs testing | low | Major |
2,794,037,824 | go | proposal: cmd/covdata: html subcommand | ### Proposal Details
If you need multi-run coverage, it seems you currently have two options. First option:
~~~sh
go test -cover '-test.gocoverdir' cover -run Zero
go test -cover '-test.gocoverdir' cover -run One
go tool covdata textfmt -i cover -o cover.txt
go tool cover -html cover.txt
~~~
Second option:
~~~sh
go test -coverprofile zero.txt -run Zero
go test -coverprofile one.txt -run One
# manually merge files
go tool cover -html cover.txt
~~~
With the first option, you have to convert to the legacy format [1] before you can get HTML output. With the second option, it seems no current method is available for merging multiple runs in the text format, meaning the user needs to manually combine the resulting files somehow. To that end, I propose adding a new subcommand:
~~~
go tool covdata html -i cover -o cover.html
~~~
1. https://go.dev/doc/build-cover#converting-to-legacy-text-format | Proposal,ToolProposal | low | Minor |
2,794,055,530 | pytorch | DISABLED test_equivalent_template_code (__main__.BenchmarkMultiTemplateFusionCudaTest) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_equivalent_template_code&suite=BenchmarkMultiTemplateFusionCudaTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35738064722).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_equivalent_template_code`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_benchmark_fusion.py", line 286, in test_equivalent_template_code
).run(
RuntimeError: Expected to find "triton_tem_fused_addmm_relu_0.run" but did not find it
Searched string:
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((256, 256), (256, 1), torch.float16)
# Topologically Sorted Source Nodes: [a], Original ATen: [aten.addmm]
stream0 = get_raw_stream(0)
triton_tem_fused_addmm_0.run(arg2_1, arg0_1, buf0, grid=torch._inductor.kernel.mm_common.mm_grid(256, 256, meta0), stream=stream0)
del arg0_1
del arg2_1
buf1 = buf0; del buf0 # reuse
# Topologically Sorted Source Nodes: [a, relu], Original ATen: [aten.addmm, aten.relu]
stream0 = get_raw_stream(0)
triton_poi_fused_addmm_relu_1.run(buf1, arg1_1, 65536, grid=grid(65536), stream=stream0)
del arg1_1
return (buf1, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.float16)
arg1_1 = rand_strided((256, ), (1, ), device='cuda:0', dtype=torch.float16)
arg2_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.float16)
fn = lambda: call([arg0_1, arg1_1, arg2_1])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: triton_tem_fused_addmm_relu_0.run
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_benchmark_fusion.py BenchmarkMultiTemplateFusionCudaTest.test_equivalent_template_code
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_benchmark_fusion.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | medium | Critical |
2,794,055,572 | pytorch | DISABLED test_sparse_add_cuda_float64 (__main__.TestSparseCSRCUDA) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sparse_add_cuda_float64&suite=TestSparseCSRCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35738066324).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sparse_add_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2338, in test_sparse_add
run_test(m, n, index_dtype)
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2330, in run_test
self.assertEqual(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 3 / 15 (20.0%)
Greatest absolute difference: 5.944452556227633 at index (4, 0) (up to 1e-07 allowed)
Greatest relative difference: inf at index (4, 1) (up to 1e-07 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_sparse_csr.py TestSparseCSRCUDA.test_sparse_add_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_sparse_csr.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | module: sparse,module: rocm,triaged,module: flaky-tests,skipped | medium | Critical |
2,794,084,907 | vscode | Investigate inline value decoration receiving tokenization css | we should figure out why the decoration we add gets tokenization colors to avoid `!important` if possible.
_Originally posted by @rebornix in https://github.com/microsoft/vscode/pull/237995#discussion_r1919086685_
| bug,notebook-cell-editor | low | Minor |
2,794,090,856 | rust | `rustc-1.83` tarball contains a GCC checkout | While I understand the motivation for putting a GCC in dev environments / CI (it's good to keep the GCC backend tested), I think it might be overkill to ship it in the source tarball. This has a number of downsides:
1. It puts GPL source code into the `rustc` release tarball.
2. It's a significant increase in the size of the tarball (part of the reason we noticed was that one of the ingestion systems was unhappy with the file count).
3. To the best of my knowledge, the GCC backend is not stable - providing the ability to build it out of the stable tarball without an additional library doesn't seem necessary.
4. I suspect that most people who *want* the GCC backend (e.g. distros, specialized environments) will want to provide their own GCC. | T-bootstrap,T-release,C-discussion,A-gcc | low | Major |
2,794,106,224 | rust | `-Csplit-debuginfo=off` is not actually supported on apple(?) targets | > i will quibble with “supported” on macos, `-C split-debuginfo=off` just silently disables debuginfo there. but we should fix that in the official docs, not here.
_Originally posted by @jyn514 in https://github.com/rust-lang/rust/pull/135572#discussion_r1918357724_
The docs for this cli flag might benefit from being slightly reworded. | A-debuginfo,T-compiler,A-docs,A-CLI,O-apple | low | Critical |
2,794,134,683 | rust | Valgrind leak check reports a "possibly lost" leak on `std::thread::current()` | ### Code
I tried this code:
```rust
fn main() {
let _ = std::thread::current();
}
```
...under `valgrind`.
I expected to see this happen: no leaks
Instead, this happened:
```
==2008936== Memcheck, a memory error detector
==2008936== Copyright (C) 2002-2024, and GNU GPL'd, by Julian Seward et al.
==2008936== Using Valgrind-3.24.0 and LibVEX; rerun with -h for copyright info
==2008936== Command: target/debug/cringe
==2008936==
==2008936==
==2008936== HEAP SUMMARY:
==2008936== in use at exit: 48 bytes in 1 blocks
==2008936== total heap usage: 8 allocs, 7 frees, 2,120 bytes allocated
==2008936==
==2008936== 48 bytes in 1 blocks are possibly lost in loss record 1 of 1
==2008936== at 0x48447A8: malloc (vg_replace_malloc.c:446)
==2008936== by 0x138B77: alloc (alloc.rs:96)
==2008936== by 0x138B77: alloc_impl (alloc.rs:192)
==2008936== by 0x138B77: allocate (alloc.rs:254)
==2008936== by 0x138B77: {closure#0}<std::thread::Inner> (sync.rs:484)
==2008936== by 0x138B77: allocate_for_layout<core::mem::maybe_uninit::MaybeUninit<std::thread::Inner>, alloc::sync::{impl#14}::new_uninit::{closure_env#0}<std::thread::Inner>, fn(*mut u8) -> *mut alloc::sync::ArcInner<core::mem::maybe_uninit::MaybeUninit<std::thread::Inner>>> (sync.rs:1948)
==2008936== by 0x138B77: new_uninit<std::thread::Inner> (sync.rs:482)
==2008936== by 0x138B77: std::thread::Thread::new (mod.rs:1429)
==2008936== by 0x138909: std::thread::current::init_current (current.rs:227)
==2008936== by 0x11D516: cringe::main (main.rs:2)
==2008936== by 0x11D44A: core::ops::function::FnOnce::call_once (function.rs:250)
==2008936== by 0x11D3CD: std::sys::backtrace::__rust_begin_short_backtrace (backtrace.rs:152)
==2008936== by 0x11D3A0: std::rt::lang_start::{{closure}} (rt.rs:194)
==2008936== by 0x13852F: call_once<(), (dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (function.rs:284)
==2008936== by 0x13852F: do_call<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panicking.rs:587)
==2008936== by 0x13852F: try<i32, &(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (panicking.rs:550)
==2008936== by 0x13852F: catch_unwind<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panic.rs:358)
==2008936== by 0x13852F: {closure#0} (rt.rs:163)
==2008936== by 0x13852F: do_call<std::rt::lang_start_internal::{closure_env#0}, isize> (panicking.rs:587)
==2008936== by 0x13852F: try<isize, std::rt::lang_start_internal::{closure_env#0}> (panicking.rs:550)
==2008936== by 0x13852F: catch_unwind<std::rt::lang_start_internal::{closure_env#0}, isize> (panic.rs:358)
==2008936== by 0x13852F: std::rt::lang_start_internal (rt.rs:159)
==2008936== by 0x11D386: std::rt::lang_start (rt.rs:193)
==2008936== by 0x11D54D: main (in /home/purplesyringa/cringe/target/debug/cringe)
==2008936==
==2008936== LEAK SUMMARY:
==2008936== definitely lost: 0 bytes in 0 blocks
==2008936== indirectly lost: 0 bytes in 0 blocks
==2008936== possibly lost: 48 bytes in 1 blocks
==2008936== still reachable: 0 bytes in 0 blocks
==2008936== suppressed: 0 bytes in 0 blocks
==2008936==
==2008936== For lists of detected and suppressed errors, rerun with: -s
==2008936== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
```
This is basically https://github.com/rust-lang/rust/issues/133574 again. Calling this a regression might be stretching it a bit too far, but the consensus on that issue seemed to be that we should fix this if we can.
### Version it worked on
It most recently worked on: Rust 1.85.0-beta.2, as far as I'm aware
### Version with regression
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (419b3e2d3 2025-01-15)
binary: rustc
commit-hash: 419b3e2d3e350822550eee0e82eeded4d324d584
commit-date: 2025-01-15
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
| C-bug,T-libs,A-thread | low | Critical |
2,794,136,094 | PowerToys | Mouse Without Borders does not switch computers when a certain app is in focus (Visual Studio Code) | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
1. Connect two computers
2. Install VSCode on one
3. Try and switch to the other computer while VSCode is open
### ✔️ Expected Behavior
Keyboard/mouse should switch.
### ❌ Actual Behavior
Keyboard/mouse do not switch when moving to the edge of the screen (whereas they do switch when Chrome is in focus).

### Other Software
Visual Studio Code
Google Chrome | Issue-Bug,Needs-Triage | low | Minor |
2,794,174,438 | yt-dlp | CWTV unsupported URL | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
It worked last night, but as of this morning (2025-01-16) the entire CWTV site is unsupported. I have tried https://www.cwtv.com and it produces this error. I picked a random movie to test in case that provides more information. I also have the problem in #12108 (the 'title' error) when I go to a specific URL for a series episode, and I am willing to provide a verbose log for that error if you'd like.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.cwtv.com/shows/the-crush/?viewContext=Movies+Swimlane']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [c8541f8b1] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.0.1-full_build-www.gyan.dev (setts), ffprobe 7.0.1-full_build-www.gyan.dev, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.cwtv.com/shows/the-crush/?viewContext=Movies+Swimlane
[generic] ?viewContext=Movies+Swimlane: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] ?viewContext=Movies+Swimlane: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.cwtv.com/shows/the-crush/?viewContext=Movies+Swimlane
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1637, in wrapper
File "yt_dlp\YoutubeDL.py", line 1772, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2553, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.cwtv.com/shows/the-crush/?viewContext=Movies+Swimlane
``` | site-bug,triage | low | Critical |
2,794,196,565 | godot | Godot becomes unresponsive when editing an AnimationTree. | ### Tested versions
Reproducible in Godot 4.4 Dev 7 and 4.4 Beta 1 Official.
### System information
Godot v4.4.beta1 - macOS Sequoia (15.3.0) - Multi-window, 2 monitors - Metal (Forward+) - integrated Apple M1 (Apple7) - Apple M1 (8 threads)
### Issue description
I have a project with one AnimationTree that is a child of a CharacterBody2D node. In Godot 4.4 Dev 7 and Godot 4.4 Beta 1, Godot will randomly freeze up while editing BlendSpace2Ds (there are 4) and stop taking input, specifically in the AnimationPlayer tab. I have experienced this while doing many different things within the AnimationTree editor: creating BlendSpaces, connecting them, and editing them. I have not seen any pattern so far. You can still select other nodes and other tabs (but you can't do anything in them). I am running Godot on a Mac M1 (Sequoia). I have not experienced this issue in other versions of Godot, but I had not attempted to use AnimationTrees until now. I have to close Godot, and it is fine for a while after it opens back up.
### Steps to reproduce
I am able to reproduce the issue with just normal use of the AnimationTree editor.
### Minimal reproduction project (MRP)
I do not have a sample project. | bug,topic:editor,needs testing,topic:animation | low | Minor |
2,794,229,780 | vscode | Explore a QuickPick-like experience close to what invoked it | Scenarios:
* The attach context in Chat:

* Git branch status bar item:

* Language, Encoding, Tab Size status bar items:

All of these open Quick Picks that are aligned at the top... and we've gotten wishes that the quick pick could be rendered closer to these items.
# Explorations
## Explore simply moving the existing quick pick closer to what's invoking it
This is hacked up in https://github.com/microsoft/vscode/commit/88f4209887aa16faa8253f77cc78a5ccb43dae97. If you click in the Chat input and then click the paper clip, the quick pick renders closer to it:

another example if your cursor was in the Search:

and another example if you use the branch picker:

There are a few things to iron out:
* this commit only renders the picker close to the invoker if the invoker is in the outer 30% of the screen. This would be modified to logic that says "always render close to the thing... if you're in the main editor part, render in the middle", but that requires additional layer work since WorkbenchLayerService is in `workbench` and quick pick is in `platform`
* Clicking on buttons does not take focus... so if my cursor was in the Search bar and I clicked the Attach Context button with my mouse, the quick pick would render close to Search
* the current logic breaks drag & drop, and that would need to come back
Those are things that could be solved with some work... but I think the big problem is that it just doesn't look that great. I was really expecting it to look and feel nicer, but I don't think it does. I wish that the quick pick felt more attached to the thing that is opening it... like the attach context, or the Git status bar item.
That inspired the 2nd exploration.
## Explore a new kind of widget
For the attach context and the status bar items, the wish is for something that really feels attached to that. Something like the label picker on GitHub:

but with a tail like a hover (like the Language Status Bar item):

It could look something like this (please forgive my bad Figma skills):

and in the branch picker:

Writing down a loose idea from notes:
```
* make a QuickContextWidget that is similar to a quick pick, but smaller and attachable to a UI component (it could use shared code with QP)
* adopt this in Core for the attach context
* adopt this for all Core contributed Status Bar Items
* add a `showQuickContext(items: QuickContextItem[]): Promise<QuickContextItem | undefined>`
* adopt the `showQuickContext` in git extension
```
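To make the notes above a bit more concrete, here is a purely hypothetical TypeScript sketch of what such an API surface could look like. None of these names (`QuickContextItem`, `QuickContextOptions`, `IQuickContextService`) exist in VS Code today; the shape is only an assumption based on the notes above:
```ts
// Hypothetical sketch only; not existing VS Code API.
export interface QuickContextItem {
	label: string;
	description?: string;
	iconClass?: string;
}

export interface QuickContextOptions {
	// The UI element the widget should visually attach to,
	// e.g. the attach-context button or a status bar entry.
	anchor: HTMLElement;
	placeholder?: string;
}

export interface IQuickContextService {
	// Resolves with the picked item, or undefined if the widget is dismissed,
	// mirroring how showQuickPick behaves today.
	showQuickContext<T extends QuickContextItem>(
		items: T[],
		options: QuickContextOptions,
	): Promise<T | undefined>;
}
```
Resolving with `undefined` on dismissal would keep the contract identical to the existing quick pick, which would hopefully make adoption in the git extension and the status bar items mostly mechanical.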
I think we need more mockups... open questions:
* What would the height of these be?
* Could we render the input at the bottom?
* What do InputBoxes look like (instead of QuickPicks)? | quick-pick,under-discussion | low | Major |
2,794,231,148 | PowerToys | ThinkorSwim | ### Microsoft PowerToys version
87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
Control Shift ` to activate Fancy Zones; Shift-Drag window to zone
### ✔️ Expected Behavior
trying to get two trading applications (running under Win11 64) to sit side by side
### ❌ Actual Behavior
Fidelity Active Trader Pro complies with command, as expected;
ThinkorSwim does not (highlight does not appear on Shift-Drag) nor does it 'stick' to zone
### Other Software
ThinkorSwim [build1983] Nov 23, 2024 (I believe it might utilize some elements of the Chrome browser) | Issue-Bug,Needs-Triage | low | Minor |
2,794,285,319 | PowerToys | Feature Request: Add Mac-like Touchpad Gesture Support to PowerToys (Smart Zoom, Rotate, Swipe, etc.) | ### Description of the new feature / enhancement
I propose adding a feature to PowerToys that enables Mac-like touchpad gestures on Windows. This feature would enhance the current Windows touchpad capabilities by implementing additional gestures such as Smart Zoom, Rotate, Swipe between pages, and Three-Finger Drag, while taking inspiration from macOS. These gestures would provide users with a more intuitive and efficient way to navigate and interact with the system.
### Scenario when this would be used?
Smart Zoom: Double-tap with two fingers to zoom in on a specific area, similar to macOS.
Rotate: Use two fingers to rotate images or objects in supported applications.
Swipe Between Pages: Enable two-finger horizontal swipes to navigate back and forward between pages in browsers or supported apps.
Three-Finger Drag: Drag windows or select text with three fingers, eliminating the need for a click-and-drag action.
Show Desktop: A four-finger pinch to quickly minimize all windows and reveal the desktop.
Launchpad/Quick Launch: Use a four-finger pinch-out gesture to launch a customizable utility or application menu (similar to macOS's Launchpad).
By incorporating these gestures, users can enjoy a more seamless and versatile touchpad experience on Windows.
### Supporting information
[Mac Touchpad Gestures - Apple Support](https://support.apple.com/en-us/102482) | Needs-Triage | low | Minor |
2,794,304,305 | PowerToys | Workspace doesn't recognize application: Thinkorswim | ### Microsoft PowerToys version
87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
After having activated WORKSPACES, I try to add "Fidelity Active Trader" and "ThinkorSwim", both Win 64 applications
### ✔️ Expected Behavior
The applications should be saved in the Workspace mechanism.
After saving the workspace, the applications should launch from the taskbar shortcut that was created in prior step.
### ❌ Actual Behavior
Fidelity Active Trader launches in its expected quadrant of the desktop (Workspace); ThinkorSwim wouldn't save to the Workspace.
This is probably similar to the condition that I reported in FancyZones.
### Other Software
ThinkorSwim [build1983] Nov 23, 2024 (I believe it might utilize some elements of the Chrome browser) | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,794,309,062 | node | Strange slowness about crypto.webcrypto.subtle.deriveBits, when called with identical inputs a second time | ### Version
22.2.0
### Platform
```text
Darwin MBP.local 21.6.0 Darwin Kernel Version 21.6.0: Wed Aug 10 14:28:23 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_T6000 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Execute this:
```ts
import webcrypto from 'tiny-webcrypto';
import encryptor from 'tiny-encryptor';
const _deriveBits = webcrypto.subtle.deriveBits.bind(webcrypto.subtle);
webcrypto.subtle.deriveBits = async function ( ...args ) {
// console.log(args);
console.time('deriveBits');
const res = await _deriveBits(...args);
console.timeEnd('deriveBits');
return res;
};
const SECRET = 'P@ssword!';
const SAMPLE_1GB = new Uint8Array ( 1024 * 1024 * 1024 );
const enc = await encryptor.encrypt ( SAMPLE_1GB, SECRET );
const dec = await encryptor.decrypt ( enc, SECRET );
```
I see the following output:
```
~/Desktop/repro ❯ node --version
v22.2.0
~/Desktop/repro ❯ node index.mjs
deriveBits: 1.262ms
deriveBits: 69.418ms
~/Desktop/repro ❯ bun index.mjs
[0.43ms] deriveBits
[0.70ms] deriveBits
```
I also tried bundling this little repro for the browser and the problem doesn't manifest there either.
Basically, through the course of executing that code we end up calling `webcrypto.subtle.deriveBits` twice with identical arguments, reported below (you can log these yourself by uncommenting the console.log in the repro). We are also asking Node to do very little work to begin with (just one iteration of the derivation function, not a million), and crucially, as far as I can see, there should be nothing else running concurrently that blocks the main thread. Still, the second execution in this specific scenario is always way slower than the first one, which seems symptomatic of some underlying issue in Node.
```
[
{
name: "PBKDF2",
salt: Uint8Array(32) [ 242, 78, 191, 112, 241, 109, 103, 131, 247, 218, 234, 20, 139, 106, 24, 50, 87, 41, 33, 23, 250, 89, 1, 228, 230, 71, 135, 106, 133, 145, 86, 63 ],
iterations: 1,
hash: {
name: "SHA-256",
},
}, CryptoKey {
type: "secret",
extractable: false,
algorithm: {
name: "PBKDF2",
},
usages: [ "deriveBits" ],
}, 256
]
```
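For comparison, here is a hypothetical standalone sketch (not part of the original report) that times two direct `deriveBits` calls with the same PBKDF2 parameters, assuming `tiny-webcrypto` simply re-exports Node's built-in `webcrypto`. Whether this isolated loop reproduces the slowdown on its own is untested, and the salt here is freshly generated rather than the exact logged value:
```ts
// Hypothetical isolation test; not part of the original repro.
import { webcrypto } from 'node:crypto';

const keyMaterial = await webcrypto.subtle.importKey(
  'raw',
  new TextEncoder().encode('P@ssword!'), // same secret as the repro
  'PBKDF2',
  false,
  ['deriveBits'],
);

const params = {
  name: 'PBKDF2',
  salt: webcrypto.getRandomValues(new Uint8Array(32)),
  iterations: 1,
  hash: 'SHA-256',
};

for (const label of ['deriveBits #1', 'deriveBits #2']) {
  console.time(label);
  await webcrypto.subtle.deriveBits(params, keyMaterial, 256);
  console.timeEnd(label);
}
```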
I think this is worth a look just because of the wild difference in performance between the two calls, but also since we are dealing with crypto stuff it's probably worth making sure we aren't messing up something important internally.
### How often does it reproduce? Is there a required condition?
Always, just execute the code.
### What is the expected behavior? Why is that the expected behavior?
The expected behavior is that calling the same function twice with identical arguments takes about the same amount of time.
It could be that the GC is triggered during the second call for some reason? But if that were the problem, it seems unlikely it would reproduce almost exactly every time, and ~70ms spent on a GC for what? There are relatively few objects getting allocated here in the first place, at least in userland as far as I can see.
### What do you see instead?
I see the second call always taking much longer, which shouldn't be happening.
### Additional information
_No response_ | webcrypto | low | Critical |
2,794,309,660 | angular | linkedSignal should provide previous value to computation fn without source | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
Currently, the only way to get the previous value of a linkedSignal is via a source/computation object.
> When the value of the computation changes, the value of the `linkedSignal` changes to the computation result.
> https://angular.dev/guide/signals/linked-signal
Ultimately, I see value in having the simple `linkedSignal` API provide the `previous` value as well.
current:
```ts
export function linkedSignal<D>(
computation: () => D,
options?: {equal?: ValueEqualityFn<NoInfer<D>>},
): WritableSignal<D>;
```
proposed:
```ts
export function linkedSignal<D>(
computation: (previous?: D) => D,
options?: {equal?: ValueEqualityFn<NoInfer<D>>},
): WritableSignal<D>;
```
As an aside, I'm not really sure why we have a separate `source` input here. We could have multiple sources, and they'd all be handled automatically by the same logic as a normal `computed`, so I don't really see the value in an explicit input object separate from the simpler API that mostly mirrors `computed`.
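To illustrate that aside, here is a hypothetical sketch using the proposed simple overload (the `previous` parameter does not exist today, and `firstName`/`lastName`/`renameCount` are made-up names); both reads inside the computation would register as sources automatically, just like in a regular `computed`:
```ts
import { linkedSignal, signal } from '@angular/core';

const firstName = signal('Ada');
const lastName = signal('Lovelace');

// Hypothetical: under the proposed signature, `previous` would carry the last
// computed value; with today's API this compiles but `previous` is never supplied.
const renameCount = linkedSignal<number>((previous?: number) => {
  // Reading both signals makes them implicit sources; no `source` object needed.
  firstName();
  lastName();
  return (previous ?? 0) + 1;
});
```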
### Proposed solution
```ts
export function linkedSignal<D>(
computation: (previous?: D) => D,
options?: {equal?: ValueEqualityFn<NoInfer<D>>},
): WritableSignal<D>;
```
### Alternatives considered
This is not ideal:
```ts
protected readonly columnSettingsKey = linkedSignal<unknown, number>({
source: this.columnSettings,
computation: (source, prev) => {
const preVal = prev?.value ?? 0;
return preVal + 1;
},
});
```
Not sure if this is an anti-pattern, but this feels cleaner to me, since I do not care about the previous source value, only the previous computed value:
```ts
protected readonly columnSettingsKey = linkedSignal((prev) => {
// trigger when:
this.columnSettings();
const preVal = prev?? 0;
return preVal + 1;
});
``` | area: core,needs: discussion,core: reactivity,cross-cutting: signals | low | Major |
2,794,354,421 | pytorch | Issue: Illegal Memory Access in Backward Pass of `scaled_dot_product_attention` with Custom Attention Mask | ### 🐛 Describe the bug
**Bug Description:**
When using a custom attention mask in the `scaled_dot_product_attention` function, an illegal memory access error (`an illegal memory access was encountered`) occurs during the backward pass when the sequence length of `QK` (query-key) is greater than or equal to 65,536.
**Reproducing code:**
```python
import torch
import torch.nn.functional as F
def torch_attention(q, k,v, n, ks, ts, upcast=False):
if upcast:
q = q.to(torch.float32)
k = k.to(torch.float32)
v = v.to(torch.float32)
attn_mask = generate_mask(n, ks, max((n-ts)//ks,0), ts, "cuda", dtype=q.dtype)
with torch.backends.cuda.sdp_kernel(enable_flash=True):
attention_torch = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
return attention_torch
def generate_mask(seq_len, ks, nk, ts, device='cpu', dtype=torch.bfloat16):
row_ind = torch.arange(seq_len, device=device, dtype=torch.long)+1
k_ind = torch.arange(nk, device=device, dtype=torch.long)+1
mask_k = (row_ind.unsqueeze(1)>ts) * (torch.floor_divide(row_ind-ts, ks).unsqueeze(1) >= k_ind.unsqueeze(0))
col_ind = torch.arange(seq_len, device=device, dtype=torch.long)+1
ts = torch.tensor([ts]*seq_len, device=device, dtype=torch.long)
nking_token = torch.maximum(torch.floor_divide(row_ind-ts, ks)*ks, torch.tensor([0]*seq_len, device=device, dtype=torch.long))
remain_num = torch.maximum(row_ind-nking_token-ts, torch.tensor([0]*seq_len, device=device, dtype=torch.long))
ts = ts+remain_num
mask_t = (row_ind.unsqueeze(1)>=col_ind.unsqueeze(0)) * ((row_ind-ts).unsqueeze(1)<col_ind.unsqueeze(0))
bool_mask = torch.concat([mask_k, mask_t], dim=1)
final_mask = torch.zeros((seq_len, seq_len+nk), device=device, dtype=dtype)
final_mask = torch.masked_fill(final_mask, ~bool_mask, -torch.inf)
return final_mask
def test_torch_attn(bz, h, n, d, ks, ts):
print(f"{bz=}, {h=}, {n=}, {d=}, {ks=}, {ts=}")
nk = (n-ts)//ks
torch.manual_seed(20)
q = (torch.empty((bz, h, n, d), dtype=torch.bfloat16, device="cuda").normal_(mean=0.0, std=0.5).requires_grad_())
k = (torch.empty((bz, h, n+nk, d), dtype=torch.bfloat16, device="cuda").normal_(mean=0.0, std=0.5).requires_grad_())
v = (torch.empty((bz, h, n+nk, d), dtype=torch.bfloat16, device="cuda").normal_(mean=0.0, std=0.5).requires_grad_())
do = torch.randn((bz, h, n, d), dtype=torch.bfloat16, device="cuda")
attention_torch = torch_attention(q,k, v, n, ks, ts, upcast=True)
gq_torch, gk_torch, gv_torch = torch.autograd.grad(attention_torch, (q, k, v), do)
print(gq_torch-torch.zeros_like(gq_torch))
test_torch_attn(1,32,1024*64,128, ks=16, ts=1024)
```
**Errors:**
```
Traceback (most recent call last):
File "/debug_report.py", line 47, in test_torch_attn
print(gq_torch-torch.zeros_like(gq_torch))
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor.py", line 461, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 677, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 597, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 349, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 387, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in self])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 387, in <listcomp>
return torch.stack([get_summarized_data(x) for x in self])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in <listcomp>
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 385, in <listcomp>
return torch.stack([get_summarized_data(x) for x in (start + end)])
File "/opt/conda/envs/torch220/lib/python3.9/site-packages/torch/_tensor_str.py", line 375, in get_summarized_data
return torch.cat(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
This issue is observed on an **NVIDIA A800 GPU** with **PyTorch 2.2.0**. | triaged,module: sdpa | low | Critical |
2,794,363,819 | pytorch | TorchDispatchMode can't capture the operator whose name is aten::_index_put_impl_ | ### 🐛 Describe the bug
In my understanding, **TorchDispatchMode** should capture the ATen ops that call the actual kernels. When I run the code below, I found it missed **`aten::_index_put_impl_`**.
I also tried printing the torch op dispatch trace and using the profiler; both can see **`aten::_index_put_impl_`**.
What's the reason for this?
# 0x01 Using TorchDispatchMode
```python
import torch
from tests.unit_tests.test_utilities import Utils
import numpy as np
import torch.distributed as dist
from torch.utils._python_dispatch import TorchDispatchMode
from megatron.core.tensor_parallel.cross_entropy import vocab_parallel_cross_entropy
class PrintingMode(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args=(), kwargs=None):
print(f"{func.__module__}.{func.__name__}")
return func(*args, **kwargs)
def __enter__(self):
# Code executed when entering the with block
print("Entering PrintingMode")
return super().__enter__()
def __exit__(self, exc_type, exc_value, traceback):
# Code executed when exiting the with block
return super().__exit__(exc_type, exc_value, traceback)
def test_vocab_parallel_cross_entropy():
Utils.initialize_model_parallel(1,1)
# vocab_parallel_logits = torch.range(0,7).repeat(16,4).cuda()
# target = torch.arange(0,32,2).cuda()
vocab_parallel_logits = torch.empty((4096, 1, 32000), dtype=torch.float16, device='cuda:0')
# Set the strides
vocab_parallel_logits = vocab_parallel_logits.as_strided(
(4096, 1, 32000), (32000, 32000, 1))
# Create the target tensor
target = torch.empty((4096, 1), dtype=torch.int64, device='cuda:0')
# Set the strides
target = target.as_strided((4096, 1), (1, 4096))
print(vocab_parallel_logits.shape)
print(target.shape)
output = vocab_parallel_cross_entropy(vocab_parallel_logits, target)
Utils.destroy_model_parallel()
# Initialize the distributed environment
#dist.init_process_group(backend='nccl', init_method='env://', world_size=1, rank=0, device_ids=[0])
with PrintingMode():
test_vocab_parallel_cross_entropy()
# Destroy the process group
dist.destroy_process_group()
```
Output is below:
> No `aten::_index_put_impl_`
```bash
> python test_cross_entropy.py
Entering PrintingMode
Initializing torch.distributed with rank: 0, world_size: 1
torch._ops.aten.empty.memory_format
torch._ops.c10d.barrier.default
[rank0]:[W117 10:31:55.661054174 ProcessGroupNCCL.cpp:4457] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
torch._ops.aten.empty.memory_format
torch._ops.aten.as_strided.default
torch._ops.aten.empty.memory_format
torch._ops.aten.as_strided.default
torch.Size([4096, 1, 32000])
torch.Size([4096, 1])
torch._ops.aten._to_copy.default
torch._ops.aten.max.dim
torch._ops.c10d.allreduce_.default
torch._ops.aten.unsqueeze.default
torch._ops.aten.sub_.Tensor
torch._ops.aten.lt.Scalar
torch._ops.aten.ge.Scalar
torch._ops.aten.bitwise_or.Tensor
torch._ops.aten.clone.default
torch._ops.aten.sub.Tensor
torch._ops.aten.lift_fresh.default
torch._ops.aten.index_put_.default
torch._ops.aten.view.default
torch._ops.aten.view.default
torch._ops.aten.arange.start
torch._ops.aten.index.Tensor
torch._ops.aten.clone.default
torch._ops.aten.view.default
torch._ops.aten.lift_fresh.default
torch._ops.aten.index_put_.default
torch._ops.aten.exp.out
torch._ops.aten.sum.dim_IntList
torch._ops.c10d.allreduce_.default
torch._ops.c10d.allreduce_.default
torch._ops.aten.log.default
torch._ops.aten.sub.Tensor
torch._ops.aten.unsqueeze.default
torch._ops.aten.div_.Tensor
torch._ops.aten.empty.memory_format
torch._ops.c10d.barrier.default
```
# 0x02 export TORCH_SHOW_DISPATCH_TRACE=1

We can see at line 476 that it called **`aten::_index_put_impl_`**, and it seems to be calling the actual kernel.
> Has `aten::_index_put_impl_`

# 0x03 Profiler
While using the profiler, I can also see **`aten::_index_put_impl_`**

> Also has `aten::_index_put_impl_`

### Versions
# Version
```bash
> python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git78543e6
Is debug build: True
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA T1000
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
架构: x86_64
CPU 运行模式: 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
字节序: Little Endian
CPU: 16
在线 CPU 列表: 0-15
厂商 ID: GenuineIntel
型号名称: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
CPU 系列: 6
型号: 167
每个核的线程数: 2
每个座的核数: 8
座: 1
步进: 1
CPU 最大 MHz: 4900.0000
CPU 最小 MHz: 800.0000
BogoMIPS: 4992.00
标记: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
虚拟化: VT-x
L1d 缓存: 384 KiB (8 instances)
L1i 缓存: 256 KiB (8 instances)
L2 缓存: 4 MiB (8 instances)
L3 缓存: 16 MiB (1 instance)
NUMA 节点: 1
NUMA 节点0 CPU: 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+git78543e6
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.0 hf2ce2f3_941 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
[conda] mkl-static 2025.0.0 ha770c72_941 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
[conda] numpy 2.0.2 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+git78543e6 dev_0 <develop>
```
cc @Chillee @ezyang @zou3519 @albanD @samdow | triaged,module: __torch_dispatch__ | low | Critical |
2,794,382,662 | pytorch | Bug when using reparameterized model evaluating with DDP | ### 🐛 Describe the bug
I usually evaluate my model at the end of each training epoch. But in DDP mode, validation using the reparameterized model reports an error. If a single GPU is used, neither training nor validation reports an error.
The error message is given below:
```text
BUG INFO
[rank0]: Traceback (most recent call last):
[rank0]: File "run.py", line 97, in <module>
[rank0]: main()
[rank0]: File "run.py", line 92, in main
[rank0]: cli.train()
[rank0]: File "/home/xuexufeng/project/psf_cv_framework/trainer/mobileone.py", line 37, in train
[rank0]: self.val_one_epoch()
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/xuexufeng/project/psf_cv_framework/trainer/mobileone.py", line 96, in val_one_epoch
[rank0]: self.val_one_batch(batch)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/xuexufeng/project/psf_cv_framework/trainer/mobileone.py", line 79, in val_one_batch
[rank0]: predict_cls: torch.Tensor = self.model_eval(batch)["cls"]
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1589, in forward
[rank0]: inputs, kwargs = self._pre_forward(*inputs, **kwargs)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1487, in _pre_forward
[rank0]: self._sync_buffers()
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2129, in _sync_buffers
[rank0]: self._sync_module_buffers(authoritative_rank)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2133, in _sync_module_buffers
[rank0]: self._default_broadcast_coalesced(authoritative_rank=authoritative_rank)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2155, in _default_broadcast_coalesced
[rank0]: self._distributed_broadcast_coalesced(bufs, bucket_size, authoritative_rank)
[rank0]: File "/home/xuexufeng/miniconda/envs/py3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 2070, in _distributed_broadcast_coalesced
[rank0]: dist._broadcast_coalesced(
[rank0]: RuntimeError: !tensors.empty() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1716905971873/work/torch/csrc/distributed/c10d/reducer.cpp":2090, please report a bug to PyTorch.
```
A sample instance as follow:
```Python
import os
import sys
import torch
import torch.nn as nn
from torch.utils import data
class MyTrainer:
def __init__(self):
self.rank, self.num_gpus = get_dist_info()
self.model: nn.Module = Mobileone("s0")
self.val_data_loader: data.DataLoader = MyDataloader()
@staticmethod
def reparameterize_model(model: nn.Module) -> torch.nn.Module:
""" Refer: Mobileone https://github.com/apple/ml-mobileone
Method returns a model where a multi-branched structure
used in training is re-parameterized into a single branch
for inference.
:param model: MobileOne model in train mode.
:return: MobileOne model in inference mode.
"""
# Avoid editing original graph
model_local = deepcopy(model)
for module in model_local.modules():
if hasattr(module, 'reparameterize'):
module.reparameterize()
return model_local
def train(self,):
# Assuming the training process is complete
# Evaluating only in rank-0
if self.rank == 0:
self.val_one_epoch()
@torch.no_grad()
def val_one_epoch(self, ):
self.model.eval()
self.model_eval: nn.Module = self.reparameterize_model(self.model)
for _, batch in enumerate(self.val_data_loader):
self.val_one_batch(batch)
def val_one_batch(self, data):
predict_cls: torch.Tensor = self.model_eval(data)
```
### Versions
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.4/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7543 32-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3737.8899
CPU min MHz: 1500.0000
BogoMIPS: 5599.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-7,64-71
NUMA node1 CPU(s): 8-15,72-79
NUMA node2 CPU(s): 16-23,80-87
NUMA node3 CPU(s): 24-31,88-95
NUMA node4 CPU(s): 32-39,96-103
NUMA node5 CPU(s): 40-47,104-111
NUMA node6 CPU(s): 48-55,112-119
NUMA node7 CPU(s): 56-63,120-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.1
[pip3] onnxsim==0.4.36
[pip3] torch==2.3.1
[pip3] torchaudio==2.3.1
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.6.82 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.24.4 py38h59b608b_0 conda-forge
[conda] pytorch 2.3.1 py3.8_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.3.1 py38_cu118 pytorch
[conda] torchtriton 2.3.1 py38 pytorch
[conda] torchvision 0.18.1 py38_cu118 pytorch
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,794,384,495 | angular | HMR causes elements to disappear | ### Which @angular/* package(s) are the source of the bug?
Don't known / other
### Is this a regression?
No
### Description
After upgrading to 19.1, template changes keep causing strange behaviors:
- binding stops working until I resize the browser window (for example, a variable bound to a checkbox on the page)
- my change simply does not appear, not even after a full page reload, until I restart the dev server
- an element disappears after any change to the template, e.g. a nested component inside the template just disappears after I make a tiny change to the template to trigger HMR
I had to use `ng serve --no-hmr` to stop this from happening.
So it seems to be related to HMR.
The binding failure appears quite often on WSL2 but does not appear on the Ubuntu 24 machine.
The element disappearing happens on both WSL2 and Linux.
```
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
```
```
Angular CLI: 19.1.0
Node: 22.12.0
Package Manager: npm 10.9.0
OS: linux x64
Angular: 19.1.0
... animations, cli, common, compiler, compiler-cli, core, forms
... localize, platform-browser, platform-browser-dynamic
... platform-server, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.7
@angular-devkit/build-angular 19.1.0
@angular-devkit/core 19.1.0 (cli-only)
@angular-devkit/schematics 19.1.0
@angular/cdk 19.0.5
@schematics/angular 19.1.0
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
```
### Anything else?
_No response_ | area: core,core: hot module replacement (HMR) | medium | Critical |
2,794,447,043 | pytorch | DISABLED test_strided_inputs_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_strided_inputs_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35749033733).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 12 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_strided_inputs_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 7162, in test_strided_inputs
self.assertTrue(same(fn(*inputs), inputs[0] + inputs[1]))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 7154, in fn
@torch.compile(backend="inductor")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1211, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 322, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 671, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 489, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_strided_inputs_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,794,461,297 | go | cmd/compile: TestScript/issue70173 failures | ```
#!watchflakes
default <- pkg == "cmd/compile" && test == "TestScript/issue70173"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725554851541405009)):
=== RUN TestScript/issue70173
=== PAUSE TestScript/issue70173
=== CONT TestScript/issue70173
run.go:223: 2025-01-16T23:23:33Z
run.go:225: $WORK=/Users/swarming/.swarming/w/ir/x/t/TestScriptissue701733273506122/001
run.go:232:
BOTO_CONFIG=/Users/swarming/.swarming/w/ir/x/a/gsutil-bbagent/.boto
CIPD_ARCHITECTURE=amd64
CIPD_CACHE_DIR=/Users/swarming/.swarming/w/ir/cache/cipd_cache
CIPD_PROTOCOL=v2
...
PWD=/Users/swarming/.swarming/w/ir/x/t/TestScriptissue701733273506122/001
WORK=/Users/swarming/.swarming/w/ir/x/t/TestScriptissue701733273506122/001
TMPDIR=/Users/swarming/.swarming/w/ir/x/t/TestScriptissue701733273506122/001/tmp
> go run main.go
[stderr]
# runtime
ThreadSanitizer: CHECK failed: tsan_sync.cpp:95 "((0)) != (0)" (0x0, 0x0) (tid=478668)
run.go:232: FAIL: testdata/script/issue70173.txt:1: go run main.go: exit status 1
--- FAIL: TestScript/issue70173 (23.51s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,794,465,241 | godot | With instance uniform shader 2D, queue_free instances don't clear the buffer | ### Tested versions
The problem is related to version 4.4 beta 1, which adds support for instance uniforms in 2D shaders.
### System information
Godot v4.4.beta1 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 2700X Eight-Core Processor (16 threads)
### Issue description
I used instance uniforms in 2D shaders (from 4.4 beta 1) in my project, but I get an error after a while; the buffer seems saturated. I don't know if this is normal behavior; isn't destroying the instances supposed to clear the buffer?
The errors I get:
global_shader_parameters_instance_allocate: Too many instances using shader instance variables. Increase buffer size in Project
global_shader_parameters_instance_allocate: Condition "global_shader_uniforms.instance_buffer_pos.has(p_instance)" is true.
### Steps to reproduce
Simply instantiate a lot of elements that use instance uniforms (the problem is that deleting them does not free up space in the buffer).
### Minimal reproduction project (MRP)
"N/A" | bug,topic:shaders | low | Critical |
2,794,569,730 | go | cmd/go: TestScript/test_fuzz_test_race failures | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript/test_fuzz_test_race"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725538157882983345)):
=== RUN TestScript/test_fuzz_test_race
=== PAUSE TestScript/test_fuzz_test_race
=== CONT TestScript/test_fuzz_test_race
script_test.go:135: 2025-01-17T03:48:20Z
script_test.go:137: $WORK=/Users/swarming/.swarming/w/ir/x/t/cmd-go-test-3791638293/tmpdir3335962428/test_fuzz_test_race378197647
script_test.go:159:
PATH=/Users/swarming/.swarming/w/ir/x/t/cmd-go-test-3791638293/tmpdir3335962428/testbin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/x/w/goroot/bin:/Users/swarming/.swarming/w/ir/cache/tools/bin:/Users/swarming/.swarming/w/ir/bbagent_utility_packages:/Users/swarming/.swarming/w/ir/bbagent_utility_packages/bin:/Users/swarming/.swarming/w/ir/cipd_bin_packages:/Users/swarming/.swarming/w/ir/cipd_bin_packages/bin:/Users/swarming/.swarming/w/ir/cipd_bin_packages/cpython3:/Users/swarming/.swarming/w/ir/cipd_bin_packages/cpython3/bin:/Users/swarming/.swarming/w/ir/cache/cipd_client:/Users/swarming/.swarming/w/ir/cache/cipd_client/bin:/Users/swarming/.swarming/cipd_cache/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
HOME=/no-home
CCACHE_DISABLE=1
GOARCH=amd64
...
[stdout]
FAIL test [build failed]
[stderr]
# internal/fuzz
<autogenerated>:1: internal compiler error: panic: runtime error: invalid memory address or nil pointer dereference
Please file a bug report including a short program that triggers the error.
https://go.dev/issue/new
script_test.go:159: FAIL: testdata/script/test_fuzz_test_race.txt:16: go test -race -v: exit status 1
--- FAIL: TestScript/test_fuzz_test_race (198.20s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,794,613,505 | go | x/sys/unix: Connectx is broken on darwin/amd64 | ### Go version
go version go1.23.5 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/database64128/Library/Caches/go-build'
GOENV='/Users/database64128/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/database64128/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/database64128/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.5/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.5/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.5'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/database64128/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/xd/v25clzgj08d6_cbbp7lwszfr0000gn/T/go-build880517508=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
@wwqgtxx reports that [`x/sys/unix.Connectx`](https://pkg.go.dev/golang.org/x/sys/unix?GOOS=darwin#Connectx) is broken on darwin/amd64. The call succeeds, but does not return the correct number of bytes sent, and may cause memory corruption (`fatal error: stack not a power of 2`).
They were able to work around the issue by making the following minimal change:
```diff
diff --git a/sys/unix/zsyscall_darwin_amd64.go b/sys/unix/zsyscall_darwin_amd64.go
index 24b346e..f8cc117 100644
--- a/sys/unix/zsyscall_darwin_amd64.go
+++ b/sys/unix/zsyscall_darwin_amd64.go
@@ -848,7 +848,7 @@ func connectx(fd int, endpoints *SaEndpoints, associd SaeAssocID, flags uint32,
} else {
_p0 = unsafe.Pointer(&_zero)
}
- _, _, e1 := syscall_syscall9(libc_connectx_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(endpoints)), uintptr(associd), uintptr(flags), uintptr(_p0), uintptr(len(iov)), uintptr(unsafe.Pointer(n)), uintptr(unsafe.Pointer(connid)), 0)
+ _, _, e1 := Syscall9(SYS_CONNECTX, uintptr(fd), uintptr(unsafe.Pointer(endpoints)), uintptr(associd), uintptr(flags), uintptr(_p0), uintptr(len(iov)), uintptr(unsafe.Pointer(n)), uintptr(unsafe.Pointer(connid)), 0)
if e1 != 0 {
err = errnoErr(e1)
}
```
`Connectx` was added by me in golang/sys@59665e5b431b8aea43003a9fba126c1d2f36a5d9 ([CL 606155](https://go.dev/cl/606155)). It does not have any issues on darwin/arm64. We spent hours on this and were not able to pinpoint the exact cause.
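For reference, here is a minimal, untested sketch of the kind of call that hits this path, written against the `Connectx` signature from the godoc linked above. The destination address, the payload, and the raw flag value `0x2` (darwin's `CONNECT_DATA_IDEMPOTENT`, so data can accompany the connection attempt) are illustrative assumptions, not taken from @wwqgtxx's program:
```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)

	// Placeholder destination; any reachable TCP endpoint will do.
	dst := &unix.SockaddrInet4{Port: 80, Addr: [4]byte{192, 0, 2, 1}}

	payload := []byte("hello")
	iov := []unix.Iovec{{Base: &payload[0]}}
	iov[0].SetLen(len(payload))

	// associd 0 is SAE_ASSOCID_ANY; flags 0x2 is CONNECT_DATA_IDEMPOTENT in
	// the darwin headers (assumed here so the payload is sent with the connect).
	var connid unix.SaeConnID
	n, err := unix.Connectx(fd, 0, nil, dst, 0, 0x2, iov, &connid)
	if err != nil {
		log.Fatal(err)
	}

	// Expected: n == len(payload). Per this report, on darwin/amd64 the
	// returned n is wrong and the process may later crash.
	fmt.Printf("connectx sent %d of %d bytes\n", n, len(payload))
}
```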
### What did you see happen?
`Connectx` is broken on darwin/amd64.
### What did you expect to see?
`Connectx` works on darwin/amd64.
| OS-Darwin,NeedsInvestigation,compiler/runtime,BugReport | low | Critical |