id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,728,784,654 | deno | compile - ability to exclude modules or files | It would be useful to have a flag for excluding files or modules (e.g. excluding a statically analyzable dynamic import, or files in the node_modules directory that aren't necessary). | suggestion,compile | low | Minor |
2,728,809,387 | go | cmd/link/internal/ld: TestRISCVTrampolines failures | ```
#!watchflakes
default <- pkg == "cmd/link/internal/ld" && test == "TestRISCVTrampolines"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8729302970188864273)):
```
=== RUN   TestRISCVTrampolines
=== PAUSE TestRISCVTrampolines
=== CONT  TestRISCVTrampolines
    ld_test.go:425: Build failed: exit status 1, output: go: error obtaining buildID for go tool link: fork/exec /home/swarming/.swarming/w/ir/x/w/goroot/pkg/tool/netbsd_arm/link: bad file descriptor
--- FAIL: TestRISCVTrampolines (224.01s)
```
[watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,728,839,069 | pytorch | Floating point exception (core dumped) in `native_channel_shuffle` | ### 🐛 Describe the bug
Under specific inputs, `native_channel_shuffle` triggered a crash.
```python
import torch
self = torch.full((1, 4, 10, 10,), 0.5, dtype=torch.float64, requires_grad=False)
groups = 0
torch.native_channel_shuffle(self, groups)
```
Output
```
Floating point exception (core dumped)
```
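The SIGFPE is consistent with an integer division by zero: channel shuffle computes channels-per-group as `channels / groups`, and `groups` is 0 here. A minimal sketch of the validation one would expect; the helper below is illustrative, not PyTorch internals:
```python
# Illustrative guard mirroring the check the eager kernel could perform.
def check_channel_shuffle_args(channels: int, groups: int) -> None:
    if groups <= 0:
        raise ValueError(f"groups must be positive, got {groups}")
    if channels % groups != 0:
        raise ValueError(
            f"channels ({channels}) must be divisible by groups ({groups})")

try:
    check_channel_shuffle_args(4, 0)  # the repro's values
except ValueError as e:
    print(e)  # groups must be positive, got 0
```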
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet @albanD | module: crash,module: error checking,triaged,module: python frontend,module: edge cases | low | Critical |
2,728,843,172 | pytorch | Segmentation fault (core dumped) in `torch.max_pool1d` | ### 🐛 Describe the bug
Under specific inputs, `torch.max_pool1d` triggered a crash.
```python
import torch
self = torch.full((1, 2, 3,), 0.5, dtype=torch.float64, requires_grad=False)
kernel_size = [8608480567731124087]
stride = []
padding = [1250999896764]
dilation = [1250999896764]
ceil_mode = True
torch.max_pool1d(self, kernel_size, stride, padding, dilation, ceil_mode)
```
Output
```
Segmentation fault (core dumped)
```
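For reference, evaluating the standard pooling output-length formula with Python's arbitrary-precision integers (the helper below is illustrative, not a PyTorch API) shows the requested geometry is nonsensical; the segfault is plausibly an int64 overflow of this same computation inside the kernel:
```python
# out = ceil_or_floor((L + 2*pad - dilation*(kernel-1) - 1) / stride) + 1
def pool_out_len(L, kernel, stride, pad, dil, ceil_mode=True):
    num = L + 2 * pad - dil * (kernel - 1) - 1
    q = -(-num // stride) if ceil_mode else num // stride  # exact integer ceil/floor
    return q + 1

print(pool_out_len(3, 8608480567731124087, 8608480567731124087,
                   1250999896764, 1250999896764))
# -> a hugely negative length, which should be rejected before any allocation
```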
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: crash,module: nn,module: error checking,triaged,actionable,module: pooling,module: edge cases | low | Critical |
2,728,857,144 | pytorch | Aborted (core dumped) in `reflection_pad2d` | ### 🐛 Describe the bug
Under specific inputs, `reflection_pad2d` triggered a crash.
```python
import torch
self = torch.full((5, 5, 5, 5, 5,), 3.5e+35, dtype=torch.double)
padding = [-1, -1, -1, -1]
torch.ops.aten.reflection_pad2d(self, padding)
```
Output
```
munmap_chunk(): invalid pointer
Aborted (core dumped)
```
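Two things stand out in the repro: the input is 5-D while `reflection_pad2d` documents 3-D or 4-D input, and all padding values are negative. A hedged sketch of the shape guard one would expect; the helper is illustrative, not the actual ATen check:
```python
import torch

def check_reflection_pad2d_input(x: torch.Tensor, padding) -> None:
    if x.dim() not in (3, 4):
        raise ValueError(f"expected 3-D or 4-D input, got {x.dim()}-D")
    if len(padding) != 4:
        raise ValueError(f"expected 4 padding values, got {len(padding)}")

try:
    check_reflection_pad2d_input(torch.zeros(5, 5, 5, 5, 5), [-1, -1, -1, -1])
except ValueError as e:
    print(e)  # expected 3-D or 4-D input, got 5-D
```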
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: crash,module: nn,module: error checking,triaged,actionable,module: padding | low | Critical |
2,728,869,033 | pytorch | Aborted (core dumped) in `slow_conv_transpose3d` | ### 🐛 Describe the bug
Under specific inputs, `slow_conv_transpose3d` triggered a crash.
```python
import torch
self = torch.full((1, 2, 4, 5, 4,), 0.5, dtype=torch.double)
weight = torch.full((2, 3, 2, 3, 2,), 0.5, dtype=torch.double)
kernel_size = [1, 1, 1]
bias = torch.full((3,), 0.5, dtype=torch.double)
stride = [1, 1, 1]
padding = [2, 2, 2]
output_padding = [2, 2, 2]
dilation = [1879048192, 1879048192, 1879048192]
torch.ops.aten.slow_conv_transpose3d(self, weight, kernel_size, bias, stride, padding, output_padding, dilation)
```
Output
```
double free or corruption (!prev)
Aborted (core dumped)
```
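For scale, the standard transposed-convolution output-size formula with the reported dilation asks for roughly 1.9e9 elements along a single spatial dimension (the helper is illustrative, not a PyTorch API), so an overflow or failed oversized allocation inside the kernel is a plausible path to the heap corruption:
```python
# out = (L - 1)*stride - 2*pad + dilation*(kernel - 1) + output_padding + 1
def conv_transpose_out_len(L, kernel, stride, pad, out_pad, dil):
    return (L - 1) * stride - 2 * pad + dil * (kernel - 1) + out_pad + 1

# First spatial dim of the repro: L=4, weight kernel extent 2, dilation 1879048192
print(conv_transpose_out_len(4, 2, 1, 2, 2, 1879048192))  # 1879048194
```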
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: crash,module: nn,module: error checking,module: convolution,triaged,actionable,module: edge cases | low | Critical |
2,728,886,282 | godot | UIDs reported as missing on editor load, yet associated errors disappear after running project | ### Tested versions
Godot 4.3 stable [77dcf97d8]
### System information
Windows 10, Debian 12
### Issue description
In one of my active gdscript files, I preload some Resources via UID, as shown below:
```gdscript
var mat : StandardMaterial3D = preload("uid://7ykjq67rxmbx")
```
This works as intended and without error when running the project. However upon closing the editor and reloading the project, on startup I get a stream of cascading errors up the chain of dependency from the initial resource-loading script, such as the following:
```
core/io/resource_uid.cpp:132 - Condition "!unique_ids.has(p_id)" is true. Returning: String()
res://scripts/components/shadow.gd:33 - Parse Error: Preload file "uid://7ykjq67rxmbx" does not exist.
res://entities/isochan/isochan.gd:-1 - Compile Error:
res://scripts/room_manager.gd:-1 - Compile Error:
res://autoload/save_manager.gd:-1 - Compile Error:
modules/gdscript/gdscript.cpp:2936 - Failed to load script "res://autoload/save_manager.gd" with error "Parse error". (User)
```
Thing is, the reported UID in question _does_ in fact exist; if I right-click the resource in question in the editor and hit >Copy UID, the pasted string is exactly the same. This is further reinforced by simply running the project, in which the resource loads as intended without error or warning, and the initial editor load errors are cleared. The project is seemingly without issue until I close and reload the editor, and the same error consistently shows itself on editor startup thereon out.
### Steps to reproduce
I could not get the same error to happen in a separate MRP when trying to manufacture the same conditions, so unfortunately I'm lost on how to reproduce this. My best guess is that whatever tracks the UIDs in this particular project has had _something_ screwy happen to it, but as far as I'm aware I cannot pin it down to a singular cause, nor do I have any way of knowing what the state of the UIDs in my project is. Additionally it is unclear under what exact conditions the error starts to pop up. In the two times I've run into it, the only way to mitigate the issue was to revert back to an earlier commit and manually redo whatever changes I had made since. On the first occurrence of doing this, the errors, inexplicably, disappeared. Just recently I started getting the same error on a completely different set of resources. Most notably I have other resources in the project loaded in the exact same way as above, yet those ones have not given me issue.
### Minimal reproduction project (MRP)
As stated above, unfortunately I cannot seem to replicate this in an MRP. It seems to be specific to the active project I'm working on, or perhaps something that affects bigger projects with a number of existing resources, though the exact catalyst is unknown. That said, I'm more than willing to take question and try out suggestions in order to help narrow this down. | bug,topic:core,topic:gdscript | low | Critical |
2,728,902,315 | pytorch | Segmentation fault (core dumped) in `max_pool3d_with_indices_backward` | ### 🐛 Describe the bug
Under specific inputs, `max_pool3d_with_indices_backward` triggered a crash.
```python
import torch
grad_output = torch.full((1, 2, 1, 3, 2,), 1, dtype=torch.float)
self = torch.full((1, 2, 3, 6, 5,), 1, dtype=torch.float)
kernel_size = [3, 3, 3]
stride = [2, 2, 2]
padding = [0, 0, 0]
dilation = [1, 1, 1]
ceil_mode = True
indices = torch.full((0, 2, 1, 3, 2,), 1, dtype=torch.long)
torch.ops.aten.max_pool3d_with_indices_backward(grad_output, self, kernel_size, stride, padding, dilation, ceil_mode, indices)
```
Output
```
Segmentation fault (core dumped)
```
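Note that `indices` has shape `(0, 2, 1, 3, 2)` while `grad_output` has shape `(1, 2, 1, 3, 2)`; the backward scatters gradients through `indices` positions paired element-wise with `grad_output`, so a shape-consistency check would likely prevent the out-of-bounds access. An illustrative sketch, not the actual ATen validation:
```python
import torch

def check_pool_backward_args(grad_output: torch.Tensor,
                             indices: torch.Tensor) -> None:
    if grad_output.shape != indices.shape:
        raise ValueError(
            f"indices shape {tuple(indices.shape)} must match "
            f"grad_output shape {tuple(grad_output.shape)}")

grad_output = torch.full((1, 2, 1, 3, 2), 1, dtype=torch.float)
indices = torch.full((0, 2, 1, 3, 2), 1, dtype=torch.long)
try:
    check_pool_backward_args(grad_output, indices)
except ValueError as e:
    print(e)  # shapes (0, 2, 1, 3, 2) vs (1, 2, 1, 3, 2)
```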
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: crash,module: nn,module: error checking,triaged,actionable,module: edge cases | low | Critical |
2,728,933,707 | flutter | [iPad] Tapping a blank area in a composing region in a TextField with `stylusHandwritingEnabled:false` makes the application crash | ### Steps to reproduce
1. Launch the app;
2. Input Chinese text so the IME composing bar shows;
3. Tap on a blank area in the composing region.
### Expected results
The app does not crash.
### Actual results
The app crashes.
### Code sample
<details open><summary>Code sample</summary>
```dart
// ignore_for_file: avoid_print
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      home: const MyHomePage(title: 'TextField dismiss'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  final FocusNode node = FocusNode();
  final ScrollController scrollController = ScrollController();
  // final QuillController controller = QuillController.basic();

  void onDismissClick() {
    print('[log] : dissmiss click at ${DateTime.now()}');
    FocusManager.instance.primaryFocus?.unfocus();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('dismiss not work'),
      ),
      // body: quillWidget(),
      body: const Column(
        children: <Widget>[
          Expanded(
            child: TextField(
              // scribbleEnabled: false,
              stylusHandwritingEnabled: false,
              autofocus: true,
              maxLines: 100,
            ),
          ),
        ],
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/3e1cfd23-72b2-4262-ad72-370ac9d526f3
</details>
### Logs
<details open><summary>Logs</summary>
```console
2024-12-10 11:47:43.363754+0800 Runner[6843:342629] flutter: The Dart VM service is listening on http://127.0.0.1:56344/XfBcMkYLPwM=/
2024-12-10 11:47:47.603782+0800 Runner[6843:342346] *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[__NSArray0 objectAtIndex:]: index 0 beyond bounds for empty NSArray'
*** First throw call stack:
(0x1812a2d78 0x199f07734 0x18132b258 0x107c022c0 0x1841fabd8 0x183ad0358 0x183a6974c 0x183aa3bb4 0x1838e42cc 0x18380ed00 0x18381c0e4 0x1839c9ae4 0x1837ef9ec 0x1837e471c 0x184634470 0x183e52084 0x1844d7cb0 0x1844d7478 0x1812c4f04 0x1812d5c90 0x18120f184 0x181214b4c 0x1812286b8 0x19d2c2374 0x183b8de88 0x18390f5ec 0x198f04ecc 0x104af060c 0x104af0584 0x104af0688 0x104c8dce4)
libc++abi: terminating with uncaught exception of type NSException
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
fvm flutter --version
Flutter 3.27.0-1.0.pre.745 • channel master • https://github.com/flutter/flutter.git
Framework • revision 69ae667301 (20 hours ago) • 2024-12-09 02:49:32 -0500
Engine • revision 13231e3e48
Tools • Dart 3.7.0 (build 3.7.0-224.0.dev) • DevTools 2.41.0
```
</details>
| c: crash,platform-ios,a: tablet,has reproducible steps,P2,c: fatal crash,team-text-input,triaged-text-input,found in release: 3.27 | low | Critical |
2,728,934,206 | pytorch | Floating point exception (core dumped) in `unfold_backward` | ### 🐛 Describe the bug
Under specific inputs, `unfold_backward` triggered a crash.
```python
import torch
grad_in = torch.full((), 0, dtype=torch.double)
input_sizes = []
dim = 0
size = 1250999896764
step = 0
torch.ops.aten.unfold_backward(grad_in, input_sizes, dim, size, step)
```
Output
```
Floating point exception (core dumped)
```
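`Tensor.unfold` itself requires `step > 0`, and the SIGFPE here is consistent with the backward dividing by `step` without re-checking it. An illustrative argument guard, not the actual ATen code:
```python
def check_unfold_args(size: int, step: int) -> None:
    if step <= 0:
        raise ValueError(f"step is {step} but must be greater than zero")
    if size < 0:
        raise ValueError(f"size is {size} but must be non-negative")

try:
    check_unfold_args(1250999896764, 0)  # the repro's values
except ValueError as e:
    print(e)  # step is 0 but must be greater than zero
```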
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: crash,module: error checking,triaged,module: empty tensor,topic: fuzzer | low | Critical |
2,728,968,609 | godot | set(property_string, property_value) works inconsistently with typed dictionaries | ### Tested versions
Reproduced in: v4.4.dev5
### System information
Windows 10
### Issue description
Assigning a typed dictionary to another typed dictionary does not cast Variants to the target typing.
If that's intended, I'd expect the editor to spew an error of some kind, or output a warning. Instead the assignment is just skipped silently.
### Steps to reproduce
```gdscript
extends Node2D
class_name TestBug

var test_dict1: Dictionary[String, Variant] = {"Works": "false"}
var test_dict2: Dictionary[Variant, Variant] = {"Works": "true"}

func _ready() -> void:
	var target_dict: Dictionary[Variant, Variant] = {"Test": "Test"}
	set("test_dict1", target_dict.duplicate(true))
	set("test_dict2", target_dict.duplicate(true))
	print(test_dict1)
	print(test_dict2)
```
I would expect this to print
```
{ "Test": "Test" }
{ "Test": "Test" }
```
But instead, I got
```
{ "Works": "false" }
{ "Test": "Test" }
```
### Minimal reproduction project (MRP)
Here's the project zip:
[dictbug.zip](https://github.com/user-attachments/files/18071694/dictbug.zip)
### Edit:
I was able to get around this issue by simply doing
```gdscript
get(dictionary_name).clear()
for key in result_dict.keys():
	get(dictionary_name)[key] = result_dict[key]
```
The target dictionary is typed while `result_dict` isn't. The fact that `set()` doesn't work but this does feels inconsistent. | discussion,topic:core,topic:gdscript,documentation | low | Critical |
2,728,985,115 | pytorch | [Inductor] `torch.compile` can tolerate different dtypes when `bias=False` for Conv1d, Conv2d, Conv3d | ### 🐛 Describe the bug
When running `Conv2d(bias=False)` on a **torch.float16** input, Inductor passes the dtype check but eager does NOT.
This problem occurs on both CPU and CUDA.
I think Inductor should also reject this model.
```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, bias):
        super(Model, self).__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=1, bias=bias)

    def forward(self, x):
        x = self.conv(x)
        return x

x = torch.randn(1, 1, 1, 1, dtype=torch.float16)

def run_test(input_tensor, compile_mode: bool, bias: bool, device: str):
    model = Model(bias)
    if device == 'cuda':
        model = model.cuda()
        input_tensor = input_tensor.cuda()
    if compile_mode:
        model = torch.compile(model)
    try:
        output = model(input_tensor)
        print("success")
    except Exception as e:
        print(e)

run_test(x, False, True, 'cuda')   # Input type (c10::Half) and bias type (float) should be the same
run_test(x, False, False, 'cuda')  # Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
run_test(x, True, True, 'cuda')
# Failed running call_function <built-in method conv2d of type object at 0x7f77415ebe60>(*(FakeTensor(...,
# device='cuda:0', size=(1, 1, 1, 1), dtype=torch.float16), Parameter(FakeTensor(..., device='cuda:0', size=(1, 1, 1,
# 1), requires_grad=True)), Parameter(FakeTensor(..., device='cuda:0', size=(1,), requires_grad=True)), (1, 1), (0,
# 0), (1, 1), 1), **{}): Input type (c10::Half) and bias type (float) should be the same
run_test(x, True, False, 'cuda')   # success
```
### Error logs
```
Input type (c10::Half) and bias type (float) should be the same
Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
Failed running call_function <built-in method conv2d of type object at 0x7fa4a9d22e60>(*(FakeTensor(..., device='cuda:0', size=(1, 1, 1, 1), dtype=torch.float16), Parameter(FakeTensor(..., device='cuda:0', size=(1, 1, 1, 1), requires_grad=True)), Parameter(FakeTensor(..., device='cuda:0', size=(1,), requires_grad=True)), (1, 1), (0, 0), (1, 1), 1), **{}): Input type (c10::Half) and bias type (float) should be the same
success
```
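Until the check is unified, explicitly aligning the dtypes sidesteps the inconsistency, since eager and compiled runs then agree. A hedged workaround sketch, assuming a CUDA device as in the report:
```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 1, 1, dtype=torch.float16, device="cuda")
model = nn.Conv2d(1, 1, kernel_size=1, bias=False).cuda().half()  # match the input dtype

print(model(x).dtype)                 # eager:    torch.float16
print(torch.compile(model)(x).dtype)  # compiled: torch.float16
```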
### Versions
torch version: 2.6.0.dev20241205+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241205+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2500.000
BogoMIPS: 5000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241205+cu124
[pip3] torchaudio==2.5.0.dev20241205+cu124
[pip3] torchvision==0.20.0.dev20241205+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241205+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241205+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241205+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,729,070,179 | vscode | Chat Widget does not follow display language changes correctly | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3 (Universal)
- OS Version: macOS 15.1.1 (24B91)
Steps to Reproduce:
1. Open the Copilot Chat side panel.
2. Change the VS Code language via the `Configure Display Language` command. Confirm with the `Restart` button.
3. VS Code restarts and successfully switches to the new language. Copilot Chat is still displayed in the old language.
4. Reload VS Code via the `Developer: Reload Window` command. (This step must be performed after Copilot Chat has finished initializing.)
5. Copilot Chat switches to the new language successfully.
P.S. The `renderWelcomeViewContentIfNeeded` cache is likely the culprit; it is called before `viewModel` exists.
https://github.com/microsoft/vscode/blob/05fd1308240072efb09749954fd478221fd01d55/src/vs/workbench/contrib/chat/browser/chatWidget.ts#L563-L582
| bug,chat | low | Critical |
2,729,125,022 | yt-dlp | [Jamendo] Embedding thumbnails causes downloads to fail | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States (Though it should be accessible from most countries)
### Provide a description that is worded well enough to be understood
When using the --embed-thumbnail flag to attempt to embed thumbnails while downloading an album of Jamendo songs, it fails on every track with the error:
```
ERROR: The extracted extension ('com') is unusual and will be skipped for safety reasons. If you believe this is an error, please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
This happens regardless of whether I am downloading an album or a single track, and nothing is downloaded.
From what I can tell, on Jamendo, thumbnails are stored on `usercontent.jamendo.com` with a path of `/` (no file extension or anything) and all specific information is in the query string.
Example: `https://usercontent.jamendo.com/?type=album&id=83048&width=300&trackid=711265`
This may have caused the extractor to think the file extension is `.com`, since it comes directly after the last period in the URL, and to flag it because `.com` also happens to be the file extension of an old MS-DOS executable format.
When visiting the image url above, the `content-type` response header reads `image/jpeg`, indicating that the file is an image instead of a `com` executable. However, when downloading the album associated with that album art using --embed-thumbnail (`yt-dlp --embed-thumbnail "https://www.jamendo.com/album/83048/anitek-instrumentals-vol-7"`), it fails with the above error. Note that omitting the embed thumbnail option successfully downloads a series of thumbnail-less FLAC files.
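The misparse is easy to reproduce outside yt-dlp; a small sketch (not yt-dlp code) showing how a last-dot heuristic lands on the TLD, and how the `Content-Type` header would yield a sane extension instead:
```python
import mimetypes
from urllib.parse import urlparse

url = "https://usercontent.jamendo.com/?type=album&id=83048&width=300&trackid=711265"

print(urlparse(url).path)                       # "/" - no filename to inspect
print(url.rsplit(".", 1)[-1].split("/", 1)[0])  # "com" - the TLD, not an extension
print(mimetypes.guess_extension("image/jpeg"))  # a real image extension, e.g. ".jpg"
```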
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ yt-dlp -vU --embed-thumbnail "https://www.jamendo.com/track/711265/dont-leave-me"
[debug] Command-line config: ['-vU', '--embed-thumbnail', 'https://www.jamendo.com/track/711265/dont-leave-me']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [4bd265539] (zip)
[debug] Python 3.13.0 (CPython x86_64 64bit) - Linux-6.11.10-300.fc41.x86_64-x86_64-with-glibc2.40 (OpenSSL 3.2.2 4 Jun 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2, phantomjs broken
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-1.26.20, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[Jamendo] Extracting URL: https://www.jamendo.com/track/711265/dont-leave-me
[Jamendo] 711265: Downloading JSON metadata
[Jamendo] 359034: Downloading JSON metadata
[Jamendo] 83048: Downloading JSON metadata
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 711265: Downloading 1 format(s): flac
ERROR: The extracted extension ('com') is unusual and will be skipped for safety reasons. If you believe this is an error, please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "/home/oirnoir/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 177, in wrapper
return func(self, *args, **kwargs)
File "/home/oirnoir/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3272, in process_info
thumb_files = self._write_thumbnails(
'video', info_dict, temp_filename, self.prepare_filename(info_dict, 'thumbnail'))
File "/home/oirnoir/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4388, in _write_thumbnails
thumb_filename = replace_extension(filename, thumb_ext, info_dict.get('ext'))
File "/home/oirnoir/.local/bin/yt-dlp/yt_dlp/utils/_utils.py", line 2130, in _change_extension
return f'{filename}.{_UnsafeExtensionError.sanitize_extension(ext)}'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/home/oirnoir/.local/bin/yt-dlp/yt_dlp/utils/_utils.py", line 5212, in sanitize_extension
raise cls(extension)
yt_dlp.utils._UnsafeExtensionError: unsafe file extension: 'com'
```
| site-bug,patch-available | low | Critical |
2,729,130,653 | pytorch | Wrong error message | ### 🐛 Describe the bug
```python
import torch

input = torch.tensor([[1, 2, 3]])  # dtype is torch.int64
torch.nn.ConvTranspose1d(1, 1, 2)(input)
# The error message says "expected scalar type Long but found Float."
# It should say "expected scalar type Float but found Long."
```
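As a side note, the call itself succeeds once the input dtype matches the module's float32 parameters; a minimal sketch:
```python
import torch

conv = torch.nn.ConvTranspose1d(1, 1, 2)  # parameters default to float32
x = torch.tensor([[1.0, 2.0, 3.0]])       # float input matches the weights
print(conv(x).shape)                       # torch.Size([1, 4])
```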
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.6.0.74
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect
cc @malfet | module: error checking,module: convolution,triaged | low | Critical |
2,729,152,392 | go | net: DialTimeout causes persistent slowdown on windows | ### Go version
go version go1.23.3 windows/amd64
### Output of `go env` in your module/workspace:
```shell
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\Administrator\AppData\Local\go-build
set GOENV=C:\Users\Administrator\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\Administrator\go\pkg\mod
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\Administrator\go
set GOPRIVATE=
set GOPROXY=https://goproxy.cn,direct
set GOROOT=C:\Program Files\Go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=C:\Program Files\Go\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.23.3
set GODEBUG=
set GOTELEMETRY=local
set GOTELEMETRYDIR=C:\Users\Administrator\AppData\Roaming\go\telemetry
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=0
set GOMOD=NUL
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\ADMINI~1\AppData\Local\Temp\go-build1513341593=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
I am practicing with a simple port scanner. At first I used net.Dial, and the program completed very fast, as the following shows:
[screenshot]
However, when I switched to net.DialTimeout, the program became mysteriously slow (30~40 times slower). The side effect seems to be system-level: rolling back to net.Dial and recompiling does not solve the problem. The side effect is gone after a restart.
[screenshot]
[screenshot]
I made some tests and found that the side effect seems to happen after net.DialTimeout connects to a non-local address. DialTimeout on 127.0.0.1 is fine; after connecting to a non-local IP address, the aforementioned side effect is triggered. Dialing localhost becomes very slow (with both net.Dial and net.DialTimeout), and the only way to solve it is to restart.
[screenshot]
Here is the full code:
```go
package main

import (
    "flag"
    "fmt"
    "net"
    "sync"
    "time"
)

var (
    ip      string
    minport int
    maxport int
    timeout int
)

func init() {
    flag.StringVar(&ip, "ip", "127.0.0.1", "Ip to scan.")
    flag.IntVar(&minport, "p1", 0, "Port to scan from.")
    flag.IntVar(&maxport, "p2", 1024, "Port to scan stop.")
    flag.IntVar(&timeout, "t", 1000, "Time out in microseconds")
    flag.Parse()
}

func main() {
    start := time.Now()
    ports := []int{}
    wg := &sync.WaitGroup{}
    mutex := &sync.Mutex{}
    for p := minport; p <= maxport; p++ {
        wg.Add(1)
        go portScan(ip, p, wg, &ports, mutex)
    }
    wg.Wait()
    elapsed := time.Since(start)
    fmt.Println("Scanned from ", minport, " to ", maxport, " in ", elapsed)
    fmt.Println(ports)
}

func portScan(ip string, port int, wg *sync.WaitGroup, ports *[]int, mutex *sync.Mutex) {
    defer wg.Done()
    address := fmt.Sprintf("%s:%d", ip, port)
    fmt.Println("Connecting ", address)
    conn, err := net.Dial("tcp", address)
    // conn, err := net.DialTimeout("tcp", address, 500*time.Microsecond)
    if err == nil {
        defer conn.Close()
        // fmt.Println("Connection successful.")
        // localAddr := conn.LocalAddr().String()
        // remoteAddr := conn.RemoteAddr().String()
        // fmt.Println("Local Address:", localAddr)
        // fmt.Println("Remote Address:", remoteAddr)
        mutex.Lock()
        *ports = append(*ports, port)
        mutex.Unlock()
    } else {
        // fmt.Println("Error", err)
    }
}
```
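Note that the commented-out `net.DialTimeout` call hard-codes `500*time.Microsecond` (0.5 ms, short enough that almost every dial times out) and never consults the `-t` flag; wiring the flag in would presumably look like the sketch below, assuming milliseconds were intended despite the flag's help text:
```go
// Hypothetical wiring of the -t flag (assumption: milliseconds intended):
conn, err := net.DialTimeout("tcp", address, time.Duration(timeout)*time.Millisecond)
```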
### What did you see happen?
As above.
### What did you expect to see?
As above. | OS-Windows,NeedsInvestigation | low | Critical |
2,729,156,059 | pytorch | Using DeviceMesh Slicing with torch.compile() | ### ๐ The feature, motivation and pitch
This is a follow-up issue for https://github.com/pytorch/pytorch/pull/142287. Some questions from the reviewers:
* the internal tensor usage: is it always a CPU tensor regardless of the device used for training/communication?
* the internal tensor should not be visible to anyone, other than indirectly (by calling other device-mesh APIs)
* how does device mesh behave under torch.compile and faketensor-trace? does a 'device-mesh' become an input to a graph? or does it get 'desugared' into primitive components, like ProcessGroup does when it gets traced?
We need clarity on how these work, and to add support for any missing features. Alternatively, we need to explicitly add checks that forbid users from using DeviceMesh slicing with torch.compile() and document the limitation.
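If we go the forbid-and-document route, a guard could be as simple as the sketch below (the name and placement are assumptions, not an agreed design):
```python
# Hypothetical guard (sketch only): fail loudly if mesh slicing happens
# while torch.compile is tracing, until the semantics are settled.
import torch

def _assert_not_compiling(api_name: str) -> None:
    if torch.compiler.is_compiling():  # available in recent PyTorch
        raise RuntimeError(
            f"DeviceMesh.{api_name} is not yet supported under torch.compile()"
        )
```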
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged,module: DeviceMesh | low | Minor |
2,729,174,063 | PowerToys | Upscaled Image Previews in File Explorer | ### Description of the new feature / enhancement
### **Problem:**
Windows File Explorer currently displays image previews scaled to **_at most_** 100%, even when using "Large icons" or "Extra large icons." This makes small images, such as 16x16 pixel art PNGs, difficult to see. Users often need to open these images in an external editor to zoom in, which is time-consuming.
### **Proposed Solution:**
Introduce an addon for PowerToys that:
1. Upscales small image previews to fill the entire preview container in File Explorer.
2. Offers customizable scaling methods, such as Nearest Neighbor (for crisp pixel art) or Bicubic (for smooth interpolation); see the sketch after this list.
3. Includes options to ignore specific file types (e.g., `.jpg`) or directories (e.g., `C:/MyFolder`).
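To make the scaling-method option concrete, below is a minimal nearest-neighbor upscaler, independent of any PowerToys API (a sketch only; the buffer is assumed to be row-major 32-bit pixels):
```cpp
#include <cstdint>
#include <vector>

// Nearest-neighbor upscale: each source pixel becomes a scale x scale block,
// which is what keeps pixel art crisp.
std::vector<uint32_t> UpscaleNearestNeighbor(const std::vector<uint32_t>& src,
                                             int w, int h, int scale) {
    std::vector<uint32_t> dst(static_cast<size_t>(w) * h * scale * scale);
    for (int y = 0; y < h * scale; ++y) {
        for (int x = 0; x < w * scale; ++x) {
            dst[static_cast<size_t>(y) * w * scale + x] =
                src[static_cast<size_t>(y / scale) * w + x / scale];
        }
    }
    return dst;
}
```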
### Scenario when this would be used?
This feature would benefit anyone working with pixel art or low-resolution graphics, such as game developers, graphic designers, or hobbyists. It streamlines workflows by eliminating the need to open files in an editor just to inspect them.

### Supporting information
Some specialized applications, like [Aseprite](https://www.aseprite.org/), already provide this functionality for their proprietary file types. Adding similar support for standard image files in File Explorer would be invaluable for users handling low-resolution assets.

| Product-File Explorer,Needs-Triage | low | Minor |
2,729,175,974 | pytorch | DISABLED test_simple (__main__.TestNumericDebugger) | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_simple&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/34162312572).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_simple`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/quantization/pt2e/test_numeric_debugger.py", line 47, in test_simple
ep = torch.export.export(m, example_inputs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 1970, in _export
return _export_for_training(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 1834, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/export/_trace.py", line 662, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 1564, in inner
result_traced = opt_f(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12246164370/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 925, in _compile
raise CacheLimitExceeded(f"{limit_type} reached")
torch._dynamo.exc.CacheLimitExceeded: cache_size_limit reached
To execute this test, run the following from the base repo dir:
python test/quantization/pt2e/test_numeric_debugger.py TestNumericDebugger.test_simple
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet @albanD | oncall: quantization,triaged,module: flaky-tests,module: macos,skipped | low | Critical |
2,729,192,713 | next.js | Empty URL search params are swallowed when rewriting via middleware on Vercel | ### Link to the code that reproduces this issue
https://github.com/migueloller/vercel-middleware-search-params-bug-repro
### To Reproduce
Deploy the application to Vercel. The easiest way is using the Vercel CLI. I have a deployed version that can be used to test [here](https://search-params-drab.vercel.app/?foo=bar).
Visit the path `/?foo=` and then visit `/?foo=bar`. Note that while the search params are there for `/?foo=bar`, they are not there for `/?foo=`.
### Current vs. Expected behavior
The expectation is that URL search params are preserved even if the value is an empty string. Note that this only happens because of the rewrite in the middleware; if the `middleware.ts` file is deleted, the expected behavior occurs.
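I don't have the repro's `middleware.ts` inline here, but even a minimal pass-through rewrite of this shape (sketch; the actual file is in the linked repository) should exercise the same code path:
```typescript
// middleware.ts - sketch of a pass-through rewrite
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  // Rewrites to the same URL; on Vercel this is where `?foo=` loses its key.
  return NextResponse.rewrite(request.nextUrl);
}
```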
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 18.19.0
npm: 10.2.3
Yarn: 1.22.21
pnpm: 8.15.6
Relevant Packages:
next: 15.0.3 // There is a newer version (15.0.4) available, upgrade recommended!
eslint-config-next: 15.0.3
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.7.2
Next.js Config:
output: N/A
โ There is a newer version (15.0.4) available, upgrade recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
The issue only happens when deploying to Vercel, it does not reproduce during local development. This is likely due to differences in the runtime middleware runs on in Vercel vs local dev. | bug,Middleware | low | Critical |
2,729,217,376 | kubernetes | Preemption picks wrong victim node with higher priority pod on it after #128307 | ### What happened?
related: #128307
After #128307 was merged, the preemption logic picks the wrong victim node, one with a higher-priority pod on it.
In the following situation, the `high` pod on `worker1`, rather than the `mid` pod on `worker2`, is evicted when the `very-high` pod (Priority=10000) attempts to schedule.
- `worker1`
- `high` pod(Priority=1000)
- `low` pod(Priority=0, preempting it will violate PDB)
- `worker2`
- `mid` pod(Priority=100, preempting it will violate PDB)
### What did you expect to happen?
According to the code here, the `mid` pod on `worker2` should be picked as the victim:
https://github.com/kubernetes/kubernetes/blob/1148e5ee5fd95117db6c2fb92194272df574cc38/pkg/scheduler/framework/preemption/preemption.go#L411-L424
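Paraphrased as a runnable sketch (the names are mine, not the upstream code), that step prefers the node whose highest-priority victim is lowest, which here is `worker2`:
```go
package main

import (
    "fmt"
    "math"
)

// pickNodes returns the nodes whose highest-priority victim has the lowest
// priority, mirroring the criterion at the lines linked above.
func pickNodes(highestVictimPriority map[string]int32) []string {
    minHighest := int32(math.MaxInt32)
    var out []string
    for node, p := range highestVictimPriority {
        if p < minHighest {
            minHighest = p
            out = nil
        }
        if p == minHighest {
            out = append(out, node)
        }
    }
    return out
}

func main() {
    // worker1's highest-priority victim is `high` (1000); worker2's is `mid` (100).
    fmt.Println(pickNodes(map[string]int32{"worker1": 1000, "worker2": 100})) // [worker2]
}
```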
### How can we reproduce it (as minimally and precisely as possible)?
We can reproduce it by the following steps using kind.
Preparation
<details>
Use kind.
```bash
$ kind version
kind v0.25.0 go1.22.9 linux/amd64
```
Build Node image of v1.31.3.
```bash
$ kind build node-image v1.31.3 --image ndest/node:1.31.3-build
```
Prepare kind cluster config and create cluster.
`kind-config-1.31.3.yaml`
```yaml
kind: Cluster
apiVersion: "kind.x-k8s.io/v1alpha4"
name: kind-v1.31.3
nodes:
- role: control-plane
  image: ndest/node:1.31.3-build
- role: worker
  image: ndest/node:1.31.3-build
- role: worker
  image: ndest/node:1.31.3-build
```
```bash
$ kind create cluster --config kind-config-1.31.3.yaml
$ kubectl get no
NAME STATUS ROLES AGE VERSION
kind-v1.31.3-control-plane Ready control-plane 33s v1.31.3
kind-v1.31.3-worker Ready <none> 18s v1.31.3
kind-v1.31.3-worker2 Ready <none> 18s v1.31.3
```
Add PriorityClass to DaemonSet/kindnet to prevent it from being preempted.
```bash
$ kubectl -n kube-system patch ds kindnet --patch '{"spec": {"template": {"spec": {"priorityClassName": "system-node-critical"}}}}'
```
Modify `maxPods` of `kubelet` config on `worker` and `worker2` to trigger preemption.
```bash
# worker
$ docker exec -it kind-v1.31.3-worker /bin/bash
root@kind-v1:/# echo "maxPods: 4" >> /var/lib/kubelet/config.yaml
root@kind-v1:/# systemctl restart kubelet
root@kind-v1:/# exit
# worker2
$ docker exec -it kind-v1.31.3-worker2 /bin/bash
root@kind-v1:/# echo "maxPods: 3" >> /var/lib/kubelet/config.yaml
root@kind-v1:/# systemctl restart kubelet
root@kind-v1:/# exit
```
Now, we can schedule 2 more pods on `worker` and 1 more pod on `worker2`.
```bash
$ k get no -A -o='custom-columns=NAME:.metadata.name,MAXPOD:.status.capacity.pods,VERSION:.status.nodeInfo.kubeletVersion'
NAME MAXPOD VERSION
kind-v1.31.3-control-plane 110 v1.31.3
kind-v1.31.3-worker 4 v1.31.3
kind-v1.31.3-worker2 3 v1.31.3
$ k get po -A -owide | grep -w kind-v1.31.3-worker | wc -l
2
$ k get po -A -owide | grep -w kind-v1.31.3-worker2 | wc -l
2
```
</details>
Create `PriorityClass`, `PodDisruptionBudget`, and `Pod`.
Applying these manifests, we can see the following situation.
<details>
`high-priority.yaml`
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
globalDefault: false
description: "This priority class should be used for high priority pods."
```
`high.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: high
  labels:
    app: high
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: high-priority
  nodeSelector:
    kubernetes.io/hostname: kind-v1.31.3-worker
```
`mid-priority.yaml`
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: mid-priority
value: 100
globalDefault: false
description: "This priority class should be used for mid priority pods."
```
`mid.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mid
  labels:
    app: mid
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: mid-priority
  nodeSelector:
    kubernetes.io/hostname: kind-v1.31.3-worker2
```
`low-priority.yaml`
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 0
globalDefault: false
description: "This priority class should be used for low priority pods."
```
`low.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: low
  labels:
    app: low
spec:
  containers:
  - name: nginx
    image: nginx
  priorityClassName: low-priority
  nodeSelector:
    kubernetes.io/hostname: kind-v1.31.3-worker
```
`mid-pdb.yaml`
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mid-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: mid
```
`low-pdb.yaml`
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: low-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: low
```
`very-high-priority.yaml`
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: very-high-priority
value: 10000
globalDefault: false
description: "This priority class should be used for very high priority pods."
```
</details>
`worker` and `worker2` have reached their `maxPods` limits, and evicting the `low` or `mid` pod would violate a PDB.
```bash
$ k get po -o='custom-columns=NAME:.metadata.name,STATUS:.status.phase,PRIORITY:.spec.priority,NODE:.spec.nodeName'
NAME STATUS PRIORITY NODE
high Running 1000 kind-v1.31.3-worker
low Running 0 kind-v1.31.3-worker
mid Running 100 kind-v1.31.3-worker2
$ k get priorityclasses
NAME VALUE GLOBAL-DEFAULT AGE
high-priority 1000 false 26s
low-priority 0 false 26s
mid-priority 100 false 26s
system-cluster-critical 2000000000 false 17m
system-node-critical 2000001000 false 17m
very-high-priority 10000 false 26s
$ k get pdb
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
low-pdb 1 N/A 0 47s
mid-pdb 1 N/A 0 47s
$ k get po -A -owide | grep -w kind-v1.31.3-worker | wc -l
4
$ k get po -A -owide | grep -w kind-v1.31.3-worker2 | wc -l
3
```
Now, attempt to schedule `very-high` pod.
`very-high.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
name: very-high
labels:
app: very-high
spec:
containers:
- name: nginx
image: nginx
priorityClassName: very-high-priority
```
```bash
$ k apply -f very-high.yaml
```
We can see that `high` pod on `worker` is evicted instead of `mid` pod on `worker2`.
```bash
$ k get po -owide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
high 1/1 Running 0 6m 10.244.1.2 kind-v1.31.3-worker <none> <none>
low 1/1 Running 0 6m 10.244.1.3 kind-v1.31.3-worker <none> <none>
mid 1/1 Running 0 6m 10.244.2.2 kind-v1.31.3-worker2 <none> <none>
very-high 0/1 Pending 0 0s <none> <none> <none> <none>
high 1/1 Running 0 6m9s 10.244.1.2 kind-v1.31.3-worker <none> <none>
high 1/1 Terminating 0 6m9s 10.244.1.2 kind-v1.31.3-worker <none> <none>
very-high 0/1 Pending 0 0s <none> <none> kind-v1.31.3-worker <none>
high 1/1 Terminating 0 6m9s 10.244.1.2 kind-v1.31.3-worker <none> <none>
high 0/1 Completed 0 6m9s 10.244.1.2 kind-v1.31.3-worker <none> <none>
high 0/1 Completed 0 6m10s 10.244.1.2 kind-v1.31.3-worker <none> <none>
high 0/1 Completed 0 6m10s 10.244.1.2 kind-v1.31.3-worker <none> <none>
very-high 0/1 Pending 0 2s <none> kind-v1.31.3-worker kind-v1.31.3-worker <none>
very-high 0/1 ContainerCreating 0 2s <none> kind-v1.31.3-worker <none> <none>
very-high 1/1 Running 0 3s 10.244.1.4 kind-v1.31.3-worker <none> <none>
$ k get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
low 1/1 Running 0 7m23s 10.244.1.3 kind-v1.31.3-worker <none> <none>
mid 1/1 Running 0 7m23s 10.244.2.2 kind-v1.31.3-worker2 <none> <none>
very-high 1/1 Running 0 74s 10.244.1.4 kind-v1.31.3-worker <none> <none>
```
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.31.3
```
</details>
### Cloud provider
<details>
none
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
$ kind version
kind v0.25.0 go1.22.9 linux/amd64
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,needs-triage | low | Major |
2,729,219,626 | angular | some diagrams are hard to read in dark mode | ### Describe the problem that you experienced
Many of the diagrams don't work well in dark mode. For example:
Light mode:

Dark mode:

The "User" label on the start node uses white text on a very bright yellow background, making it almost impossible to read.
The other nodes, while readable, contrast _very_ harshly with the black background, making the entire diagram unpleasant to look at.
### Enter the URL of the topic with the problem
https://angular.dev/guide/forms#data-flow-in-reactive-forms
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
Doesn't seem to be browser specific, looks identical on Firefox, Chrome and Edge.
``` | help wanted,good first issue,area: docs | low | Critical |
2,729,260,662 | kubernetes | NodeResourcesBalancedAllocation causes too many pods to be scheduled on the same node | ### What happened?
NodeResourcesBalancedAllocation returns different scores across nodes even when the pod's resource requests are empty.
```
I1210 06:42:54.701779 1 resource_allocation.go:70] "Listing internal info for allocatable resources, requested resources and score" pod="tuyaco-k8s/task-worker-9" node="10.20.96.50" resourceAllocationScorer="NodeResourcesBalancedAllocation" allocatableResource=map[cpu:15600 memory:28727588291] requestedResource=map[cpu:14580 memory:24593301504] resourceScore=96
I1210 06:42:54.701793 1 resource_allocation.go:70] "Listing internal info for allocatable resources, requested resources and score" pod="tuyaco-k8s/task-worker-9" node="10.20.96.8" resourceAllocationScorer="NodeResourcesBalancedAllocation" allocatableResource=map[cpu:15600 memory:28727600579] requestedResource=map[cpu:15350 memory:24235737088] resourceScore=92
I1210 06:42:54.701807 1 resource_allocation.go:70] "Listing internal info for allocatable resources, requested resources and score" pod="tuyaco-k8s/task-worker-9" node="10.20.96.9" resourceAllocationScorer="NodeResourcesBalancedAllocation" allocatableResource=map[cpu:15600 memory:28727600579] requestedResource=map[cpu:13480 memory:19474153472] resourceScore=90
```
### What did you expect to happen?
NodeResourcesBalancedAllocation score should return zero if podRequest is zero.
https://github.com/kubernetes/kubernetes/blob/a499facee693a1a83daadb82d88f7b51d324ffc5/pkg/scheduler/framework/plugins/noderesources/resource_allocation.go#L85-L114
The code below uses `IsPrefixedNativeResource`, which ignores `cpu` and `memory`.
https://github.com/kubernetes/kubernetes/blob/a499facee693a1a83daadb82d88f7b51d324ffc5/pkg/scheduler/util/utils.go#L158-L162
I think we should use the function below instead.
https://github.com/kubernetes/kubernetes/blob/a499facee693a1a83daadb82d88f7b51d324ffc5/pkg/apis/core/v1/helper/helpers.go#L54-L60
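A hedged sketch of the suggested direction (the helper name and the exact wiring into the scorer are assumptions):
```go
package noderesources

import (
    v1 "k8s.io/api/core/v1"
    v1helper "k8s.io/kubernetes/pkg/apis/core/v1/helper"
)

// countsTowardBalancedScore is a hypothetical helper: IsNativeResource
// matches non-prefixed native resources such as "cpu" and "memory",
// which IsPrefixedNativeResource deliberately does not.
func countsTowardBalancedScore(name v1.ResourceName) bool {
    return v1helper.IsNativeResource(name)
}
```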
### How can we reproduce it (as minimally and precisely as possible)?
Create a pod whose resource requests are empty and enable NodeResourcesBalancedAllocation.
scheduler config:
```yaml
- name: NodeResourcesBalancedAllocation
  args:
    resources:
    - name: cpu
    - name: memory
```
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
Server Version: v1.24.15
```
</details>
### Cloud provider
<details>
vanilla
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,needs-triage | medium | Critical |
2,729,302,745 | next.js | Failed to build with an error occurring in `next/font` | ### Link to the code that reproduces this issue
https://stackblitz.com/edit/stackblitz-starters-yqm1pksx?file=package.json
### To Reproduce
1. Install next.js v14.2.20
2. Exec `npm run build`
### Current vs. Expected behavior
**Current**
Failed to build with errors shown below
```
> next build
โฒ Next.js 14.2.20
- Environments: .env.local
Creating an optimized production build ...
Failed to compile.
src/app/layout.tsx
An error occurred in `next/font`.
Error: Cannot find module '$HOME/node_modules/@jridgewell/gen-mapping/dist/gen-mapping.umd.js'
```
**Expected**
The build passes with no errors.
### Provide environment information
```bash
Operating System:
Platform: Darwin
Arch: arm64
Version: 22.6.0
Binaries:
Node: 20.16.0
npm: 10.5.1
Relevant Packages:
next: 14.2.20
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Font (next/font)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
I think this error is related with https://github.com/jridgewell/gen-mapping/issues/14 | bug,Font (next/font) | low | Critical |
2,729,305,831 | react | [eslint-plugin-react-hooks] v5.1.0 was released without any changes in github | The newer version is released https://www.npmjs.com/package/eslint-plugin-react-hooks/v/5.1.0
However, there aren't any changes in the package dir here as of the time of posting the issue.
https://github.com/facebook/react/tree/372ec00c0384cd2089651154ea7c67693ee3f2a5/packages/eslint-plugin-react-hooks
This is concerning because it could indicate that someone published on behalf of the react team. | Status: Unconfirmed | low | Minor |
2,729,312,659 | transformers | More rich documentation on pipelines | ### Feature request
For instance, in the case of pipelines, there are merely some community examples such as automatic-speech-recognition. There is no indication of which components constitute the pipeline or how Voice Activity Detection (VAD) is involved. As a result, users are left confused.
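For example, a richer page could walk through what actually runs behind a minimal call like this (the model choice here is just an illustration):
```python
from transformers import pipeline

# Which components run under the hood (feature extractor, model, decoder,
# chunking/VAD?) is exactly what richer docs should spell out.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
print(asr("sample.wav"))
```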
### Motivation
More detailed descriptions of pipeline internals.
### Your contribution
None at the moment. | Feature request | low | Major |
2,729,323,440 | rust | Split `run-make` into two test suites: fast-path that don't need to build cargo/rustdoc and a slow-path that requires these tools | Building stage 1 cargo and rustdoc takes quite a bit of time and is annoying if your run-make test doesn't even need cargo/rustdoc. Building stage 1 cargo for tests that do need cargo is actually necessary because beta cargo (the usual bootstrap cargo) might not have changes that are present in nightly cargo. | A-testsuite,E-hard,C-enhancement,T-bootstrap,A-compiletest,A-run-make,A-test-infra | low | Major |
2,729,342,177 | rust | compiletest: inconsistent and inconvenient ways to request verbose test output | compiletest has some strange `--nocapture` (which is a libtest thing) support for `ui` tests but inconsistently for other test suites. `crashes` test suite uses an env var `COMPILETEST_VERBOSE_CRASHES=1` to unhide the crashes test failure output. `run-make` test suite has multiple layer of hiding going on.
There's also the bootstrap verbose/verbose tests -> compiletest handling. | A-testsuite,E-hard,T-bootstrap,C-bug,A-compiletest,A-test-infra,E-needs-investigation | low | Critical |
2,729,405,567 | vscode | Disable inline completions when composing via an IME (Input Method Editor) | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I would like inline completions to be disabled when composing characters via an IME. The completions given at those moments are not super useful as the characters typed in are intermediate and would just be replaced after selection. It would greatly reduce distractions while using GitHub Copilot or extensions alike, especially for users who usually code in multiple languages using IMEs.
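For context, plain DOM input events already expose whether a composition is in progress, so the signal exists at some layer (the sketch below uses the DOM API, not the VS Code extension API):
```typescript
// DOM-level sketch: the browser reports when an IME composition is active.
const editor = document.querySelector("textarea")!;
editor.addEventListener("beforeinput", (e) => {
  if ((e as InputEvent).isComposing) {
    // Hypothetically: suppress inline completion requests here.
    return;
  }
});
```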
Here is an example of me receiving unhelpful inline completions while composing Chinese characters via an IME:

As far as I know, the input event is not exposed to extension developers, so it would be hard for them to know when to stop providing completion results. Feel like here is the right place to ask. | editor-input-IME,inline-completions | low | Major |
2,729,411,256 | tensorflow | ValueError: as_list() is not defined on an unknown TensorShape. | ### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.18.0
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04.6 LTS
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The created dataset (a `tensorflow.python.data.ops.prefetch_op._PrefetchDataset`) has an unknown shape, which, when passed to `model.evaluate`, results in this error:
```
self.model.evaluate(self.data_loader.get_valid_loader(batch_size), verbose=1)
File "/home/perfuser/shailesh/9_dec/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/perfuser/shailesh/9_dec/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
^^^^^^^^^^^^^^^
ValueError: as_list() is not defined on an unknown TensorShape.
```
Apologies for sharing partial code.
My question is: how can I set the shape of a `tensorflow.python.data.ops.prefetch_op._PrefetchDataset`, or what else resolves the error "as_list() is not defined on an unknown TensorShape"?
The same dataset with an unknown shape worked with TensorFlow 2.13.
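A commonly suggested workaround, shared here as an untested sketch for this pipeline, is to restore static shapes after `tf.py_function` with an extra `map`; the shapes below are taken from the printed batches above and may need adjusting:
```python
import tensorflow as tf

def _restore_shapes(x, y):
    # Assumed shapes, inferred from the printed (2, 64, 64, 64, 1) batches;
    # the batch dimension is left dynamic.
    x.set_shape([None, 64, 64, 64, 1])
    y.set_shape([None, 64, 64, 64, 1])
    return x, y

ds_val = ds_val.map(_restore_shapes)
```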
### Standalone code to reproduce the issue
```python
ds_val = ds_val.map(
    lambda x: tf.py_function(self.read_nifti_file, [x, False], [tf.float32, tf.float32]),
    num_parallel_calls=tf.data.experimental.AUTOTUNE,
)

self.model.evaluate(self.data_loader.get_valid_loader(batch_size), verbose=1)
```
### Relevant log output
```shell
ds_val 3 <_PrefetchDataset element_spec=(TensorSpec(shape=<unknown>, dtype=tf.float32, name=None), TensorSpec(shape=<unknown>, dtype=tf.float32, name=None))>
type(ds_val) <class 'tensorflow.python.data.ops.prefetch_op._PrefetchDataset'>
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
type(ds) <class 'tuple'>
shape(ds) (2, 64, 64, 64, 1)
```
| type:support,TF 2.18 | medium | Critical |
2,729,426,857 | ant-design | Spin็ปไปถfullscreenๆจกๅผไธwrapperClassNameๅฑๆงไธ็ๆ | ### Reproduction link
[](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-7z7zfx?file=%2Findex.tsx)
### Steps to reproduce
Add the `fullscreen` and `wrapperClassName` props to the Spin component.
### What is expected?
The `wrapperClassName` prop should take effect, i.e. the outer Spin element should carry the custom class name.
### What is actually happening?
The `wrapperClassName` prop is not applied.
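A condensed sketch of the repro (assumed to match the sandbox):
```tsx
import { Spin } from "antd";

// The custom wrapper class never shows up on the rendered fullscreen element.
export default () => (
  <Spin fullscreen spinning wrapperClassName="my-custom-wrapper" />
);
```
If I read the antd API table correctly, `wrapperClassName` is documented as the class of the wrapper only when Spin has children, so childless fullscreen mode may never apply it; if that is intended, it should at least be documented.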
| Environment | Info |
| --- | --- |
| antd | 5.22.3 |
| React | 18.0.0 |
| System | macos14.5 |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Minor |
2,729,467,854 | deno | Allow expanding the types displayed on hover | When hovering over a complex type, it's usually hard to read. It would be cool to be able to expand the types manually in order to see the final computed type.
This would upstream the changes in https://github.com/microsoft/vscode/issues/94679#issuecomment-2529656036 and therefore requires TS 5.7. | feat,lsp | low | Minor |
2,729,470,985 | ant-design | On mobile H5, a Select with allowClear only shows the clear icon after a tap, since mobile has no hover event. Is there a recommended approach for this scenario, e.g. showing the clear icon whenever the select has a value? | ### Reproduction link
[](https://codesandbox.io)
### Steps to reproduce
Use the Select component on mobile H5.
### What is expected?
On mobile H5, after setting allowClear on the Select component, the clear icon only appears after a tap because mobile has no hover event. Is there a recommended approach for this scenario, e.g. showing the clear icon whenever the select has a value?
### What is actually happening?
On mobile H5, after setting allowClear on the Select component, the clear icon only appears after a tap because mobile has no hover event.
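One possible workaround (an untested sketch; it relies on antd's internal class name, which may change between versions) is to force the clear icon visible where hover is unavailable:
```css
/* Always show the clear icon on touch devices instead of only on hover */
@media (hover: none) {
  .ant-select .ant-select-clear {
    opacity: 1;
  }
}
```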
| Environment | Info |
| --- | --- |
| antd | 5.20.0 |
| React | ^18.2.0 |
| System | window |
| Browser | google |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,๐ฑMobile Device | low | Minor |
2,729,533,902 | transformers | Adding Mamba2ForTokenClassification to Mamba2 | ### Feature request
I've noticed that many newly added models do not include a `ForTokenClassification` implementation. Is this due to fundamental challenges in implementation (though I don't perceive any major obstacles; perhaps I've overlooked something), or is it simply a matter of development priorities and time constraints?
### Motivation
I am currently testing a prototype based on the Mamba series models, which requires token classification outputs.
### Your contribution
If it's merely a matter of time preventing the implementation of `ForTokenClassification` in `transformers`, I'd be more than willing to contribute by adding this feature for Mamba/Mamba2. If time allows, I'd also be happy to extend the support to other models.
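As a rough illustration of what I have in mind, a sketch mirroring `LlamaForTokenClassification` (the class does not exist in `transformers` yet; the head details are assumptions):
```python
import torch.nn as nn
from transformers import Mamba2Model, Mamba2PreTrainedModel

class Mamba2ForTokenClassification(Mamba2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.backbone = Mamba2Model(config)
        self.dropout = nn.Dropout(0.1)  # assumption: fixed dropout rate
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.post_init()

    def forward(self, input_ids=None, labels=None, **kwargs):
        hidden_states = self.backbone(input_ids=input_ids, **kwargs).last_hidden_state
        logits = self.classifier(self.dropout(hidden_states))
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return {"loss": loss, "logits": logits}
```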
From my understanding, replicating the approach used in `LlamaForTokenClassification`, as sketched above, should suffice to implement a token classification model for most models. Any advice or guidance would be highly appreciated! | Feature request | low | Minor |
2,729,551,015 | rust | Tracking issue for release notes of #93235: Tracking Issue for `const_nonnull_new` |
This issue tracks the release notes text for #93235.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Const Stabilized APIs
- [`NonNull::new`](https://doc.rust-lang.org/stable/std/ptr/struct.NonNull.html#method.new)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @lilasta -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,A-const-eval,relnotes-tracking-issue | low | Minor |
2,729,583,142 | pytorch | DISABLED test_override_cpu_sum (__main__.TestPythonRegistration) | Platforms: asan, linux, mac, macos, rocm, win, windows, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_override_cpu_sum&suite=TestPythonRegistration&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/34161750972).
Over the past 3 hours, it has been determined flaky in 36 workflow(s) with 72 failures and 36 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_override_cpu_sum`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_python_dispatch.py", line 330, in test_override_cpu_sum
self.assertEqual(torch.sum(x), x)
File "/opt/conda/envs/py_3.11/lib/python3.11/site-packages/torch/testing/_internal/common_utils.py", line 4016, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: The values for attribute 'shape' do not match: torch.Size([]) != torch.Size([2]).
To execute this test, run the following from the base repo dir:
python test/test_python_dispatch.py TestPythonRegistration.test_override_cpu_sum
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_python_dispatch.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_python_dispatch.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr | triaged,module: flaky-tests,skipped,module: python dispatcher | low | Critical |
2,729,609,429 | langchain | How to chain RemoteRunnable clients to local llm server (hosted using langserve)? | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
On the server side, I used HuggingFacePipeline to load a local model
```python
from fastapi import FastAPI
# from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface.llms import HuggingFacePipeline
from langchain_huggingface import ChatHuggingFace
from langserve import add_routes
import torch
import os

cache_dir = "./transforms_cache"
os.environ['TRANSFORMERS_CACHE'] = cache_dir
os.environ['HF_HOME'] = cache_dir
transformers.utils.move_cache(new_cache_dir=cache_dir)

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="Spin up a simple api server using Langchain's Runnable interfaces",
)

model_name = "allenai/Llama-3.1-Tulu-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
tulu_model = AutoModelForCausalLM.from_pretrained(
    model_name, cache_dir=cache_dir, torch_dtype=torch.float16, device_map="auto",
)
hf_pipeline = pipeline("text-generation", model=tulu_model, tokenizer=tokenizer, max_new_tokens=6400)
hf = HuggingFacePipeline(pipeline=hf_pipeline)
chat = ChatHuggingFace(llm=hf)

app = FastAPI(title="LLM Server", version="1.0")  # note: this replaces the FastAPI app created above

# Add the LLM to the server
add_routes(app, chat, path="/llm")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8099)
```
On the client side, I use RemoteRunnable to connect. Although it can successfully invoke with an input string, it fails when used inside an LLMChain:
```python
from langserve import RemoteRunnable
from langchain import LLMChain, PromptTemplate
from langchain.chains import SimpleSequentialChain
# Create a RemoteRunnable that points to your deployed model endpoint
remote_llm = RemoteRunnable(url="http://0.0.0.0:8099/llm")
# Define prompt templates for each step of your chain
capital_prompt = PromptTemplate.from_template("What is the capital city of {country}?")
population_prompt = PromptTemplate.from_template("What is the population of {city}?")
# Create two LLMChains:
# 1. The first chain takes a country and returns the capital city.
chain1 = LLMChain(llm=remote_llm, prompt=capital_prompt)
# 2. The second chain takes the city name (returned by chain1) and returns the population.
chain2 = LLMChain(llm=remote_llm, prompt=population_prompt)
# Combine them into a SimpleSequentialChain:
# SimpleSequentialChain by default passes the output of the first chain
# as the input to the second chain.
overall_chain = SimpleSequentialChain(chains=[chain1, chain2], verbose=True)
# Run the combined chain:
result = overall_chain.run("France")
```
### Error Message and Stack Trace (if applicable)
site-packages/langserve/client.py:448, in RemoteRunnable.batch(self, inputs, config, return_exceptions, **kwargs)
439 def batch(
440 self,
441 inputs: List[Input],
(...)
445 **kwargs: Any,
446 ) -> List[Output]:
447 if kwargs:
--> 448 raise NotImplementedError(f"kwargs not implemented yet. Got {kwargs}")
449 return self._batch_with_config(
450 self._batch, inputs, config, return_exceptions=return_exceptions
451 )
NotImplementedError: kwargs not implemented yet. Got {'stop': None}
### Description
I use langserve to start a server and RemoteRunnable clients to communicate with it. This is helpful for iterating without worrying about client failures, because restarting a client is much faster than reloading an LLM. Although I can do a simple llm.invoke using RemoteRunnable, I cannot use any Chain classes, e.g. LLMChain, SimpleSequentialChain, SequentialChain.
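As a possible workaround (an untested sketch), composing the same two steps with LCEL instead of the legacy `LLMChain` avoids the code path that forwards `stop=None` as a kwarg:
```python
# Continues from the client code above.
capital_chain = capital_prompt | remote_llm
population_chain = population_prompt | remote_llm

city = capital_chain.invoke({"country": "France"}).content
print(population_chain.invoke({"city": city}).content)
```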
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jun 6 09:41:19 UTC 2024
> Python Version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0]
Package Information
-------------------
> langchain_core: 0.3.22
> langchain: 0.3.10
> langchain_community: 0.3.10
> langsmith: 0.1.147
> langchain_huggingface: 0.1.2
> langchain_openai: 0.2.11
> langchain_text_splitters: 0.3.2
> langgraph_sdk: 0.1.43
> langserve: 0.3.0
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> fastapi: 0.115.6
> httpx: 0.28.1
> httpx-sse: 0.4.0
> huggingface-hub: 0.26.5
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.57.0
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.3
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.3.1
> SQLAlchemy: 2.0.36
> sse-starlette: 1.8.2
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.21.0
> transformers: 4.47.0
> typing-extensions: 4.12.2 | โฑญ: core | low | Critical |
2,729,618,823 | react | Bug: `select` menu won't update highlight when options change | If a `<select>`'s options changes (e.g. if options are loaded asynchronously) while the menu is open, the `<select>` won't re-examine its options to update its highlight position (`selectedIndex`).
In the Codesandbox example below, I have a select menu that will have its options swapped out 1 second after it receives focus. This is to simulate the fetch call that I'm doing in my real app.
React version: 19.0.0
## Steps To Reproduce
**Browser:** Either use Google Chrome or Microsoft Edge, it's not reproducible in Firefox.
Browser versions:
- Google Chrome โ Version 131.0.6778.109 (Official Build) (64-bit)
- Microsoft Edge โ Version 131.0.2903.86 (Official build) (64-bit)
1. Open [this Codesandbox](https://codesandbox.io/p/sandbox/select-menu-with-dynamic-options-forked-nvt488?workspaceId=ws_DVJY22si5M9kiWeZb4Ybth)
2. Click on the select menu
3. Watch as the complete options list replaces the old one
(screenshots of steps 1, 2, and 3)
## The current behavior
When the new options replace the old ones, the highlight stays on the first element. Only after closing and re-opening the menu does the highlight move to the option with `value="italy"`.
## The expected behavior
I want the select menu to highlight the `value="italy"` option after options are replaced | Status: Unconfirmed | medium | Critical |
2,729,703,981 | vscode | Statusbar has strange theme defaults | In default dark, you get a nice hover feedback:

Even for prominent items:

In default light theme, there is no hover feedback at all?

In high contrast themes, should we be using background color at all for prominent or other indicators?

Because we do not use a background color in dark high contrast. | bug,ux,workbench-status | low | Minor |
2,729,754,244 | bitcoin | Data corruption on MacOS when using exFAT datadir or blocksdir | As reported in #28552, #28613 and various other places online there are intermittent issues when using an exFAT-formatted drive on MacOS.
I was easily able to reproduce the issue, using an external Samsung T7 SSD formatted to exFAT and connected via USB-C to an M4 MacBook. Failures were seen at seemingly random points: blocks ~80_000, 170_000, 270_000.
The failures report in multiple ways, depending on where things have gone wrong. For example:
```log
2024-12-09T10:35:17Z [error] ReadBlockFromDisk: Deserialize or I/O error - ReadCompactSize(): size too large: unspecified iostream_category error at FlatFilePos(nFile=102, nPos=84929524)
2024-12-09T10:35:17Z [error] A fatal internal error occurred, see debug.log for details: Failed to read block.
Error: A fatal internal error occurred, see debug.log for details: Failed to read block.
```
or:
```log
2024-12-09T21:58:44Z [error] A fatal internal error occurred, see debug.log for details: Corrupt block found indicating potential hardware failure.
2024-12-09T21:58:44Z [error] ConnectTip: ConnectBlock 0000000000000010bca0ce4ce37c165cd4e820356194bf5991a7b9af792a83d5 failed, bad-txnmrklroot, hashMerkleRoot mismatch
2024-12-09T21:58:44Z tor: Thread interrupt
2024-12-09T21:58:44Z [error] ProcessNewBlock: ActivateBestChain failed (bad-txnmrklroot, hashMerkleRoot mismatch)
```
In debugging both, I found that the first was simply garbage at that offset in the block file. No magic bytes were present nearby. The second was able to deserialise as the header and first bytes were OK, but the block later contained corrupt bytes, causing the merkle check to fail.
I added debugging to ensure that it wasn't an issue with our read/write code, which showed it was not:
<details>
<summary>Details</summary>
coloured link: https://paste.256k1.dev/erjellines.log
```log
/Volumes/data/bitcoin-macos-debug
$ rg 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c debug.log -C5
1647168-2024-12-09T16:40:38Z ReadBlockFromDisk: Starting read for block at FlatFilePos(nFile=0, nPos=69397708) (header at FlatFilePos(nFile=0, nPos=69397700))
1647169-2024-12-09T16:40:38Z ReadBlockFromDisk: Read header magic=f9beb4d9 size=2538 at FlatFilePos(nFile=0, nPos=69397700)
1647170-2024-12-09T16:40:38Z ReadBlockFromDisk: Reading block data at file offset 69397708 size=2538
1647171-2024-12-09T16:40:38Z ReadBlockFromDisk: Read complete - block hash=000000000001612689260e1db03677c70bb96e5707d55fe90a1e1f0c5dcac318 size=2538 nTx=9
1647172-2024-12-09T16:40:38Z UpdateTip: new best=000000000001612689260e1db03677c70bb96e5707d55fe90a1e1f0c5dcac318 height=104762 version=0x00000001 log2_work=59.466998 tx=245649 date='2011-01-26T23:23:07Z' progress=0.000218 cache=22.9MiB(134951txo)
1647173:2024-12-09T16:40:38Z AcceptBlock: Beginning write for block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c height=104770 size=33103
1647174:2024-12-09T16:40:38Z AcceptBlock: Finding new position for block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c
1647175:2024-12-09T16:40:38Z SaveBlockToDisk: Finding position for block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c size=33111 height=104770
1647176:2024-12-09T16:40:38Z SaveBlockToDisk: Got position file=0 pos=69408910 for block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c
1647177:2024-12-09T16:40:38Z WriteBlockToDisk: Opening file for block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c at FlatFilePos(nFile=0, nPos=69408910)
1647178:2024-12-09T16:40:38Z WriteBlockToDisk: Writing header for block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c: magic=f9beb4d9 size=33103
1647179:2024-12-09T16:40:38Z WriteBlockToDisk: Writing block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c data at file offset 69408918
1647180:2024-12-09T16:40:38Z WriteBlockToDisk: Successfully wrote block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c to disk
1647181:2024-12-09T16:40:38Z SaveBlockToDisk: Successfully wrote block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c to FlatFilePos(nFile=0, nPos=69408918)
1647182:2024-12-09T16:40:38Z AcceptBlock: Wrote block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c to file=0 pos=69408918
1647183:2024-12-09T16:40:38Z AcceptBlock: Successfully recorded block 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c transactions
1647184-2024-12-09T16:40:38Z AcceptBlock: Beginning write for block 0000000000000fe919539878073d169245f55a86cab68e382b8a33502c656ac3 height=104772 size=215
1647185-2024-12-09T16:40:38Z AcceptBlock: Finding new position for block 0000000000000fe919539878073d169245f55a86cab68e382b8a33502c656ac3
1647186-2024-12-09T16:40:38Z SaveBlockToDisk: Finding position for block 0000000000000fe919539878073d169245f55a86cab68e382b8a33502c656ac3 size=223 height=104772
1647187-2024-12-09T16:40:38Z SaveBlockToDisk: Got position file=0 pos=69442021 for block 0000000000000fe919539878073d169245f55a86cab68e382b8a33502c656ac3
1647188-2024-12-09T16:40:38Z WriteBlockToDisk: Opening file for block 0000000000000fe919539878073d169245f55a86cab68e382b8a33502c656ac3 at FlatFilePos(nFile=0, nPos=69442021)
--
1647316-2024-12-09T16:40:38Z ReadBlockFromDisk: Read complete - block hash=00000000000053dd7549b03afd260f695e5bfcecca55055d4f3e980cccdfb389 size=215 nTx=1
1647317-2024-12-09T16:40:38Z UpdateTip: new best=00000000000053dd7549b03afd260f695e5bfcecca55055d4f3e980cccdfb389 height=104769 version=0x00000001 log2_work=59.468001 tx=245698 date='2011-01-27T00:27:29Z' progress=0.000218 cache=22.9MiB(134960txo)
1647318-2024-12-09T16:40:38Z ReadBlockFromDisk: Starting read for block at FlatFilePos(nFile=0, nPos=69408918) (header at FlatFilePos(nFile=0, nPos=69408910))
1647319-2024-12-09T16:40:38Z ReadBlockFromDisk: Read header magic=f9beb4d9 size=33103 at FlatFilePos(nFile=0, nPos=69408910)
1647320-2024-12-09T16:40:38Z ReadBlockFromDisk: Reading block data at file offset 69408918 size=33103
1647321:2024-12-09T16:40:38Z ReadBlockFromDisk: Read complete - block hash=0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c size=33103 nTx=3
1647322-2024-12-09T16:40:38Z [error] A fatal internal error occurred, see debug.log for details: Corrupt block found indicating potential hardware failure.
1647323:2024-12-09T16:40:38Z [error] ConnectTip: ConnectBlock 0000000000022043f9ea2c298f3381c6ddd86b968271117dd55cb48ac297954c failed, bad-txnmrklroot, hashMerkleRoot mismatch
1647324-2024-12-09T16:40:38Z [error] ProcessNewBlock: ActivateBestChain failed (bad-txnmrklroot, hashMerkleRoot mismatch)
1647325-2024-12-09T16:40:38Z tor: Thread interrupt
1647326-2024-12-09T16:40:38Z Shutdown: In progress...
1647327-2024-12-09T16:40:38Z opencon thread exit
1647328-2024-12-09T16:40:38Z torcontrol thread exit
```
</details>
We attempt to read the correct section, but find incorrect data.
I did try adding even more logging than this, but past a point the corruption became less frequent, indicating to me that slowing down IBD perhaps gives these syncs enough time to complete properly.
As mentioned in #31453, I suspect the root cause here is that the `F_FULLFSYNC` fcntl is improperly implemented for exFAT on macOS.
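For reference, the macOS-specific fcntl in question, plus the no-cache knob mentioned below (a sketch only, not Bitcoin Core code):
```c
#include <fcntl.h>

/* F_FULLFSYNC asks the OS and the drive to flush all the way to permanent
   storage; this is the call suspected to misbehave on exFAT. */
int flush_full(int fd) { return fcntl(fd, F_FULLFSYNC); }

/* F_NOCACHE disables the unified buffer cache for this descriptor,
   approximating direct I/O on macOS. */
int disable_cache(int fd) { return fcntl(fd, F_NOCACHE, 1); }
```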
I am unable to think of any other ways we could fix this in our codebase, without resorting to extreme measures like opening the block files without caching (and therefore forcing direct I/O). | macOS,Upstream,Block storage,Data corruption | low | Critical |
2,729,760,673 | rust | Investigate using `crosstool-ng` instead of CentOS baseline | In some `dist` builders (mainly the x64 one), we use an old Linux distro (e.g. CentOS 7) to have a low glibc baseline. However, it seems like using `crosstool-ng`, and essentially "cross-compiling" from platform A to platform A could also work, and perhaps result in a simpler CI configuration (https://github.com/rust-lang/rust/pull/133902). | C-enhancement,T-infra,A-CI,E-needs-investigation | low | Minor |
2,729,795,484 | bitcoin | multiprocess: build failure on Alpine with depends & `DEBUG=1` | Alpine 3.21 aarch64
gcc (Alpine 14.2.0) 14.2.0
```bash
make -C depends/ NO_USDT=1 NO_WALLET=1 NO_QT=1 NO_ZMQ=1 MULTIPROCESS=1 DEBUG=1
cmake -B build --toolchain /bitcoin/depends/aarch64-unknown-linux-musl/toolchain.cmake
cmake --build build --target bitcoin-node
<snip>
[100%] Linking CXX executable bitcoin-node
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(init.capnp.proxy-client.c++.o): in function `mp::ProxyClientBase<ipc::capnp::messages::Echo, interfaces::Echo>::ProxyClientBase(ipc::capnp::messages::Echo::Client, mp::Connection*, bool)::{lambda()#2}::operator()() const':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-io.h:409:(.text._ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages4EchoEN10interfaces4EchoEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data[_ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages4EchoEN10interfaces4EchoEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data]+0x40): undefined reference to `mp::Connection::removeSyncCleanup(std::_List_iterator<std::function<void ()> >)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(init.capnp.proxy-client.c++.o): in function `mp::ProxyClientBase<ipc::capnp::messages::Mining, interfaces::Mining>::ProxyClientBase(ipc::capnp::messages::Mining::Client, mp::Connection*, bool)::{lambda()#2}::operator()() const':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-io.h:409:(.text._ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages6MiningEN10interfaces6MiningEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data[_ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages6MiningEN10interfaces6MiningEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data]+0x40): undefined reference to `mp::Connection::removeSyncCleanup(std::_List_iterator<std::function<void ()> >)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(init.capnp.proxy-client.c++.o): in function `_ZN2mp16CustomBuildFieldINS_11StructFieldINS_8AccessorINS_11init_fields7ContextELi17EEEN5capnp7RequestIN3ipc5capnp8messages4Init14MakeEchoParamsENSB_15MakeEchoResultsEEEEEEEvNS_8TypeListIJEEENS_8PriorityILi1EEERNS_19ClientInvokeContextEOT_PNSt9enable_ifIXsrSt7is_sameIDTcldtfL0p2_3getEENS_7Context7BuilderEE5valueEvE4typeE':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:71:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_14MakeEchoParamsENS5_15MakeEchoResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces4EchoESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_14MakeEchoParamsENS5_15MakeEchoResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces4EchoESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xa4): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: /bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:89:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_14MakeEchoParamsENS5_15MakeEchoResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces4EchoESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_14MakeEchoParamsENS5_15MakeEchoResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces4EchoESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xf0): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(init.capnp.proxy-client.c++.o): in function `_ZN2mp16CustomBuildFieldINS_11StructFieldINS_8AccessorINS_11init_fields7ContextELi17EEEN5capnp7RequestIN3ipc5capnp8messages4Init16MakeMiningParamsENSB_17MakeMiningResultsEEEEEEEvNS_8TypeListIJEEENS_8PriorityILi1EEERNS_19ClientInvokeContextEOT_PNSt9enable_ifIXsrSt7is_sameIDTcldtfL0p2_3getEENS_7Context7BuilderEE5valueEvE4typeE':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:71:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_16MakeMiningParamsENS5_17MakeMiningResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces6MiningESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_16MakeMiningParamsENS5_17MakeMiningResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces6MiningESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xa4): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: /bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:89:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_16MakeMiningParamsENS5_17MakeMiningResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces6MiningESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4InitEEEMNS5_6ClientEFN5capnp7RequestINS5_16MakeMiningParamsENS5_17MakeMiningResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11init_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi18EEEJRSt10unique_ptrIN10interfaces6MiningESt14default_deleteIST_EEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xf0): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(echo.capnp.proxy-client.c++.o): in function `_ZN2mp16CustomBuildFieldINS_11StructFieldINS_8AccessorINS_11echo_fields7ContextELi17EEEN5capnp7RequestIN3ipc5capnp8messages4Echo13DestroyParamsENSB_14DestroyResultsEEEEEEEvNS_8TypeListIJEEENS_8PriorityILi1EEERNS_19ClientInvokeContextEOT_PNSt9enable_ifIXsrSt7is_sameIDTcldtfL0p2_3getEENS_7Context7BuilderEE5valueEvE4typeE':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:71:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4EchoEEEMNS5_6ClientEFN5capnp7RequestINS5_13DestroyParamsENS5_14DestroyResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11echo_fields7ContextELi17EEEJEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages4EchoEEEMNS5_6ClientEFN5capnp7RequestINS5_13DestroyParamsENS5_14DestroyResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_11echo_fields7ContextELi17EEEJEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xa4): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(echo.capnp.proxy-client.c++.o):/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:89: more undefined references to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)' follow
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(mining.capnp.proxy-client.c++.o): in function `mp::ProxyClientBase<ipc::capnp::messages::BlockTemplate, interfaces::BlockTemplate>::ProxyClientBase(ipc::capnp::messages::BlockTemplate::Client, mp::Connection*, bool)::{lambda()#2}::operator()() const':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-io.h:409:(.text._ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages13BlockTemplateEN10interfaces13BlockTemplateEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data[_ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages13BlockTemplateEN10interfaces13BlockTemplateEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data]+0x40): undefined reference to `mp::Connection::removeSyncCleanup(std::_List_iterator<std::function<void ()> >)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(mining.capnp.proxy-client.c++.o): in function `_ZN2mp16CustomBuildFieldINS_11StructFieldINS_8AccessorINS_13mining_fields7ContextELi17EEEN5capnp7RequestIN3ipc5capnp8messages6Mining17IsTestChainParamsENSB_18IsTestChainResultsEEEEEEEvNS_8TypeListIJEEENS_8PriorityILi1EEERNS_19ClientInvokeContextEOT_PNSt9enable_ifIXsrSt7is_sameIDTcldtfL0p2_3getEENS_7Context7BuilderEE5valueEvE4typeE':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:71:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_17IsTestChainParamsENS5_18IsTestChainResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_17IsTestChainParamsENS5_18IsTestChainResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xa0): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: /bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:89:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_17IsTestChainParamsENS5_18IsTestChainResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_17IsTestChainParamsENS5_18IsTestChainResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xdc): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(mining.capnp.proxy-client.c++.o): in function `_ZN2mp16CustomBuildFieldINS_11StructFieldINS_8AccessorINS_13mining_fields7ContextELi17EEEN5capnp7RequestIN3ipc5capnp8messages6Mining28IsInitialBlockDownloadParamsENSB_29IsInitialBlockDownloadResultsEEEEEEEvNS_8TypeListIJEEENS_8PriorityILi1EEERNS_19ClientInvokeContextEOT_PNSt9enable_ifIXsrSt7is_sameIDTcldtfL0p2_3getEENS_7Context7BuilderEE5valueEvE4typeE':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:71:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_28IsInitialBlockDownloadParamsENS5_29IsInitialBlockDownloadResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_28IsInitialBlockDownloadParamsENS5_29IsInitialBlockDownloadResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xa0): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: /bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:89:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_28IsInitialBlockDownloadParamsENS5_29IsInitialBlockDownloadResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_28IsInitialBlockDownloadParamsENS5_29IsInitialBlockDownloadResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi2EEEJRbEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xdc): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(mining.capnp.proxy-client.c++.o): in function `_ZN2mp16CustomBuildFieldINS_11StructFieldINS_8AccessorINS_13mining_fields7ContextELi17EEEN5capnp7RequestIN3ipc5capnp8messages6Mining12GetTipParamsENSB_13GetTipResultsEEEEEEEvNS_8TypeListIJEEENS_8PriorityILi1EEERNS_19ClientInvokeContextEOT_PNSt9enable_ifIXsrSt7is_sameIDTcldtfL0p2_3getEENS_7Context7BuilderEE5valueEvE4typeE':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:71:(.text._ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_12GetTipParamsENS5_13GetTipResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi22EEEJRSt8optionalIN10interfaces8BlockRefEEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv[_ZZN2mp12clientInvokeINS_11ProxyClientIN3ipc5capnp8messages6MiningEEEMNS5_6ClientEFN5capnp7RequestINS5_12GetTipParamsENS5_13GetTipResultsEEEN2kj5MaybeINS8_11MessageSizeEEEEJNS_11ClientParamINS_8AccessorINS_13mining_fields7ContextELi17EEEJEEENSJ_INSK_INSL_6ResultELi22EEEJRSt8optionalIN10interfaces8BlockRefEEEEEEEEvRT_RKT0_DpOT1_ENKUlvE_clEv]+0xa0): undefined reference to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(mining.capnp.proxy-client.c++.o):/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-types.h:89: more undefined references to `mp::SetThread(std::map<mp::Connection*, mp::ProxyClient<mp::Thread>, std::less<mp::Connection*>, std::allocator<std::pair<mp::Connection* const, mp::ProxyClient<mp::Thread> > > >&, std::mutex&, mp::Connection*, std::function<mp::Thread::Client ()>)' follow
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(process.cpp.o): in function `ipc::(anonymous namespace)::ProcessImpl::spawn(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, fs::path const&, int&)':
/bitcoin/build/src/ipc/./ipc/process.cpp:36:(.text+0x260): undefined reference to `mp::SpawnProcess(int&, std::function<std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > (int)>&&)'
/usr/lib/gcc/aarch64-alpine-linux-musl/14.2.0/../../../../aarch64-alpine-linux-musl/bin/ld: ipc/libbitcoin_ipc.a(protocol.cpp.o): in function `mp::ProxyClientBase<ipc::capnp::messages::Init, interfaces::Init>::ProxyClientBase(ipc::capnp::messages::Init::Client, mp::Connection*, bool)::{lambda()#2}::operator()() const':
/bitcoin/depends/aarch64-unknown-linux-musl/include/mp/proxy-io.h:409:(.text._ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages4InitEN10interfaces4InitEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data[_ZNSt17_Function_handlerIFvvEZN2mp15ProxyClientBaseIN3ipc5capnp8messages4InitEN10interfaces4InitEEC4ENS6_6ClientEPNS1_10ConnectionEbEUlvE0_E9_M_invokeERKSt9_Any_data]+0x40): undefined reference to `mp::Connection::removeSyncCleanup(std::_List_iterator<std::function<void ()> >)'
collect2: error: ld returned 1 exit status
``` | Build system,interfaces | low | Critical |
2,729,797,132 | bitcoin | Use clang in VS build? | MSVC has many issues, for example:
* non-optimized codegen: https://github.com/bitcoin/bitcoin/pull/29852#issuecomment-2049803970
* compile failure: https://github.com/bitcoin/bitcoin/issues/31303
* legal, but brittle stdlib: https://github.com/bitcoin/bitcoin/pull/31391#issuecomment-2510762011
* unspecified issue: https://github.com/bitcoin/bitcoin/pull/31061#issuecomment-2531244463
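For context, CMake can already select the clang-cl toolset with the Visual Studio generator, so the documented configure step could be as small as this sketch (the generator string depends on the installed VS version):
```
cmake -B build -G "Visual Studio 17 2022" -T ClangCL
```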
Thus, it could make sense to evaluate whether to switch the build docs to clang (possibly with libc++), see https://learn.microsoft.com/en-us/cpp/build/clang-support-msbuild | Brainstorming,Windows,Build system | low | Critical |
2,729,829,558 | rust | STATUS_ACCESS_VIOLATION happens frequently with rust 1.83 under windows x64 environment | I tried this code:
```rust
fn main() {
println!("hihi");
}
```
I expected to see this happen:
*Just a normal "hihi" string* printed out on my console
Instead, this happened:
```
PS C:\Users\maple\Desktop\etc\dev\temp> cargo run
Compiling temp v0.1.0 (C:\Users\maple\Desktop\etc\dev\temp)
error: could not compile `temp` (bin "temp")
Caused by:
process didn't exit successfully: `C:\Users\maple\.rustup\toolchains\1.83-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name temp --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=142 --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --check-cfg cfg(docsrs) --check-cfg "cfg(feature, values())" -C metadata=d54b924dfe4288ad --out-dir C:\Users\maple\Desktop\etc\dev\temp\target\debug\deps -C incremental=C:\Users\maple\Desktop\etc\dev\temp\target\debug\incremental -L dependency=C:\Users\maple\Desktop\etc\dev\temp\target\debug\deps` (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION)
```
### Meta
`rustc --version --verbose`:
```
PS C:\Users\maple\Desktop\etc\dev\temp> rustc --version --verbose
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-pc-windows-msvc
release: 1.83.0
LLVM version: 19.1.1
```
Hi, I used Rust 1.81 before and upgraded to 1.83 today. When I write any code and run it through `cargo run`, the error happens frequently and randomly. I tried the same thing on WSL, and it was fine.
I suspect there may be a bug in the recent Rust compiler (version 1.83) under the Windows x86_64 environment.
| S-needs-repro | low | Critical |
2,729,837,636 | rust | Type inference fails where LUB coercion succeeds | This is probably a known effect, but I'd like confirmation of whether this is intended behavior or a known bug. It sure caught me by surprise.
I tried this code:
```rust
let a1: _ = [b"a", b"a" as &[u8]];
let b1: _ = [b"a" as &[u8], b"a"];
let a2: [_; 2] = [b"a", b"a" as &[u8]];
let b2: [_; 2] = [b"a" as &[u8], b"a"];
let a3: [&[u8]; 2] = [b"a", b"a" as &[u8]];
let b3: [&[u8]; 2] = [b"a" as &[u8], b"a"];
fn unify<T>(_x: T, _y: T) {}
unify(b"a", b"a" as &[u8]); // a4
unify(b"a" as &[u8], b"a"); // b4
if true { b"a" } else { b"a" as &[u8] }; // a5
if true { b"a" as &[u8] } else { b"a" }; // b5
```
I expected to see this happen: Each pair of lines is symmetrical, so intuitively, either they should both compile or both fail typeck.
Instead, this happened:
- `a1` and `b1` both compile
- `a2` *fails to compile*, `b2` compiles
- `a3` and `b3` both compile
- `a4` *fails to compile*, `b4` compiles
- `a5` and `b5` both compile
The reason is that in cases 1, 3, 5, LUB coercion is used, and in cases 2 and 4, a more generic type inference mechanism is used (I think). I can kind of see why `a4` needs to fail compilation, but `a2` failing is extremely counterintuitive.
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (1e4f10ba6 2024-10-29)
binary: rustc
commit-hash: 1e4f10ba6476e48a42a79b9f846a2d9366525b9e
commit-date: 2024-10-29
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
``` | A-inference,C-bug,A-coercions,T-types | low | Critical |
2,729,848,479 | deno | Panic when passing a windows style file path in `patch` field of `deno.json` | The patch field looks like this: `patch: [ "D:\\a\\fresh\\fresh\\" ]`
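A minimal `deno.json` sketch for reproduction (the path is the Windows-style one from our CI; any Windows-style absolute path should trigger it):
```json
{
  "patch": ["D:\\a\\fresh\\fresh\\"]
}
```
Running the project (here `deno run -A main.ts`, matching the args in the report below) then panics: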
```
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: windows x86_64
Version: 2.1.3
Args: ["C:\\hostedtoolcache\\windows\\deno\\2.1.3\\x64\\deno.exe", "run", "-A", "main.ts"]
thread 'main' panicked at C:\Users\runneradmin\.cargo\registry\src\index.crates.io-6f17d22bba15001f\deno_config-0.39.3\src\workspace\discovery.rs:953:56:
called `Result::unwrap()` on an `Err` value: UrlToFilePathError(Url { scheme: "d", cannot_be_a_base: true, username: "", password: None, host: None, port: None, path: "\\a\\fresh\\fresh\\/", query: None, fragment: None })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
``` | cli,config,panic | low | Critical |
2,729,869,108 | three.js | Select if a directional light should contribute to the lighting color contribution, or to a shadow map only ( real time shadowing ) | ### Description
A while ago I hacked together a custom three.js build of the WebGLRenderer to render both baked and real-time lighting,
and I found it really useful to be able to select whether a light contributes to the color lighting output, or **casts shadow only** and contributes solely to the shadow mapping.
**Example: say a scene has two directional lights:**
- one used for baking shadows globally and for the lighting color contribution
- one used for real-time shadowing; this one should not contribute to the color lighting, since that would add a second contribution on top of the first light's, and it also costs performance when only one light is needed for the lighting color contribution
### Solution
Maybe add a boolean property to toggle off the color contribution?
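A minimal sketch of the idea, assuming an existing `scene` (the `contributeColor` name is hypothetical, purely for illustration):
```js
import * as THREE from "three";

// Light used only for real-time shadowing:
const shadowLight = new THREE.DirectionalLight(0xffffff, 1.0);
shadowLight.castShadow = true;
shadowLight.contributeColor = false; // hypothetical flag: render into the shadow map only
scene.add(shadowLight);
```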
### Additional context
_No response_ | Enhancement | low | Minor |
2,729,887,090 | TypeScript | The identifiers field of Sourcefile is not correct | ### 🔍 Search Terms
Reproduction steps:
1. Open or create an ArkTS project and create a new HSP module (shared library).
2. Create a new test.ets file under library->pages and enter the code:
```ts
export function cc(param: string) {}
export function name(params: string) {
}
export const sss = ''
```
3. Create a new test2.ets file and enter the code:
```ts
export function name(param: string) {
}
```
4. Modify the index.ets file under the library module to:
```ts
export {name} from "./src/main/ets/pages/test2"
export { add } from './src/main/ets/utils/Calc'
```
5. Execute undo (Ctrl+Z).
6. Right-click Generate... Declarations in the test.ets file, select all the items to be exported, and click OK. The code of the index.ets file under the library module is then as follows; `name` is not a unique name:
```ts
export {name} from "./src/main/ets/pages/test2"
export { add } from './src/main/ets/utils/Calc'
export { cc, name, sss } from "./src/main/ets/pages/test"
```
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
The identifiers field should be updated in real time according to the text, so that the names of the exported functions in the index.ets file are guaranteed to be unique.
### 📃 Motivating Example
Expected results:
```ts
export {name} from "./src/main/ets/pages/test2"
export { add } from './src/main/ets/utils/Calc'
export { cc, name as name_1, sss } from "./src/main/ets/pages/test"
```
| Needs More Info | low | Minor |
2,729,892,390 | vscode | Getting blue screens. I installed few days back "chicoff.mapfile" extension and working with MapServer, GDAL builds. | Type: <b>Performance Issue</b>
Today I had 4 blue screens. A few days back I installed the "chicoff.mapfile" extension and have been working with MapServer and GDAL builds. It looks like when I open multiple instances of VS Code and work with some git and MapServer files, I get a blue screen.
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-12700H (20 x 2688)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.70GB (18.00GB free)|
|Process Argv|--crash-reporter-id f7840eca-bffe-49de-8fcf-4122e941140d|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 150 21464 code main
0 347 9144 extensionHost [1]
0 92 4664 electron-nodejs (eslintServer.js )
0 54 17316 "C:\Program Files\dotnet\dotnet.EXE" c:\Users\VidmantasDrasutis\.vscode\extensions\tintoy.msbuild-project-tools-0.6.6\language-server\MSBuildProjectTools.LanguageServer.Host.dll
0 7 9220 C:\WINDOWS\system32\conhost.exe 0x4
0 75 18168 c:\Users\VidmantasDrasutis\.vscode\extensions\codeium.codeium-1.24.8\dist\ef806f7cf5ef5903f9da0506527ae0e7897e84db\language_server_windows_x64.exe --api_server_url https://server.codeium.com --manager_dir C:\Users\VIDMAN~1\AppData\Local\Temp\26d72939-2557-4ea5-a77b-35619a3f79b4\codeium\manager --enable_chat_web_server --enable_lsp --inference_api_server_url https://inference.codeium.com --database_dir C:\Users\VidmantasDrasutis\.codeium\database\9c0694567290725d9dcba14ade58e297 --enable_index_service --enable_local_search --search_max_workspace_file_count 5000 --indexed_files_retention_period_days 30 --teams_mode --workspace_id file_c_3A_Dev_M17_DevKit --sentry_telemetry
20 598 22932 c:\Users\VidmantasDrasutis\.vscode\extensions\codeium.codeium-1.24.8\dist\ef806f7cf5ef5903f9da0506527ae0e7897e84db\language_server_windows_x64.exe --api_server_url https://server.codeium.com --manager_dir C:\Users\VIDMAN~1\AppData\Local\Temp\26d72939-2557-4ea5-a77b-35619a3f79b4\codeium\manager --enable_chat_web_server --enable_lsp --inference_api_server_url https://inference.codeium.com --database_dir C:\Users\VidmantasDrasutis\.codeium\database\9c0694567290725d9dcba14ade58e297 --enable_index_service --enable_local_search --search_max_workspace_file_count 5000 --indexed_files_retention_period_days 30 --teams_mode --workspace_id file_c_3A_Dev_M17_DevKit --sentry_telemetry --run_child --limit_go_max_procs 4 --random_port --random_port_dir=C:\Users\VIDMAN~1\AppData\Local\Temp\26d72939-2557-4ea5-a77b-35619a3f79b4\codeium\manager/child_random_port_1733830968465256600_1290787035841887470 --manager_lock_file=C:\Users\VIDMAN~1\AppData\Local\Temp\26d72939-2557-4ea5-a77b-35619a3f79b4\codeium\manager/locks/manager.lock --child_lock_file C:\Users\VIDMAN~1\AppData\Local\Temp\26d72939-2557-4ea5-a77b-35619a3f79b4\codeium\manager/locks/child_lock_1733830968465809500_1747493436358289977
0 7 24240 C:\WINDOWS\system32\conhost.exe 0x4
0 4 19068 electron-nodejs (config.js )
0 7 2752 C:\WINDOWS\system32\conhost.exe 0x4
0 133 20104 electron-nodejs (config.js )
0 76 22592 "c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csdevkit-1.14.14-win32-x64\components\vs-green-server\platforms\win32-x64\node_modules\@microsoft\servicehub-controller-net60.win32-x64/Microsoft.ServiceHub.Controller" 1801aa9783da58ba7dee7ea26e83ed901a63c6a2c377ec24afb551b002ddd37f /ControllerCooldownTimeout:30000 "/TelemetrySession:{\"TelemetryLevel\":\"all\",\"IsOptedIn\":false,\"HostName\":\"Default\",\"AppInsightsInstrumentationKey\":null,\"AsimovInstrumentationKey\":null,\"CollectorApiKey\":\"0c6ae279ed8443289764825290e4f9e2-1a736e7c-1324-4338-be46-fc2a58ae4d14-7255\",\"AppId\":1010,\"UserId\":\"40eb661f-f7b7-4ed7-bfbc-9cdb3bcce547\",\"Id\":\"3c45681f-31d3-476d-8c2e-e03236bfc77e1733830959075\",\"ProcessStartTime\":133783045630391522,\"SkuName\":null,\"VSExeVersion\":null,\"BucketFiltersToEnableWatsonForFaults\":[],\"BucketFiltersToAddDumpsToFaults\":[]}"
0 112 23448 "c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csdevkit-1.14.14-win32-x64\components\vs-green-server\platforms\win32-x64\node_modules\@microsoft\visualstudio-code-servicehost.win32-x64/Microsoft.VisualStudio.Code.ServiceHost.exe" dotnet$C94B8CFE-E3FD-4BAF-A941-2866DBB566FE net.pipe://225922A2F62BC6EB510A6AEC16BC7C9FE8F3B "/TelemetrySession:{\"TelemetryLevel\":\"all\",\"IsOptedIn\":false,\"HostName\":\"Default\",\"AppInsightsInstrumentationKey\":null,\"AsimovInstrumentationKey\":null,\"CollectorApiKey\":\"0c6ae279ed8443289764825290e4f9e2-1a736e7c-1324-4338-be46-fc2a58ae4d14-7255\",\"AppId\":1010,\"UserId\":\"40eb661f-f7b7-4ed7-bfbc-9cdb3bcce547\",\"Id\":\"3c45681f-31d3-476d-8c2e-e03236bfc77e1733830959075\",\"ProcessStartTime\":133783045630391522,\"SkuName\":null,\"VSExeVersion\":null,\"BucketFiltersToEnableWatsonForFaults\":[],\"BucketFiltersToAddDumpsToFaults\":[]}"
0 7 23492 C:\WINDOWS\system32\conhost.exe 0x4
0 179 23484 "c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csdevkit-1.14.14-win32-x64\components\vs-green-server\platforms\win32-x64\node_modules\@microsoft\visualstudio-code-servicehost.win32-x64\Microsoft.VisualStudio.Code.ServiceHost.exe" dotnet.projectSystem$C94B8CFE-E3FD-4BAF-A941-2866DBB566FE net.pipe://225922A2F62BC6EB510A6AEC16BC7C9FE8F3B "/TelemetrySession:{\"TelemetryLevel\":\"all\",\"IsOptedIn\":false,\"HostName\":\"Default\",\"AppInsightsInstrumentationKey\":null,\"AsimovInstrumentationKey\":null,\"CollectorApiKey\":\"0c6ae279ed8443289764825290e4f9e2-1a736e7c-1324-4338-be46-fc2a58ae4d14-7255\",\"AppId\":1010,\"UserId\":\"40eb661f-f7b7-4ed7-bfbc-9cdb3bcce547\",\"Id\":\"3c45681f-31d3-476d-8c2e-e03236bfc77e1733830959075\",\"ProcessStartTime\":133783045630391522,\"SkuName\":null,\"VSExeVersion\":null,\"BucketFiltersToEnableWatsonForFaults\":[],\"BucketFiltersToAddDumpsToFaults\":[]}"
0 7 23500 C:\WINDOWS\system32\conhost.exe 0x4
0 64 23916 "C:\Program Files\dotnet\dotnet.exe" "c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csdevkit-1.14.14-win32-x64\components\CPS\platforms\win32-x64\node_modules\@microsoft\visualstudio-projectsystem-buildhost.win32-x64/Microsoft.VisualStudio.ProjectSystem.Server.BuildHost.dll"
0 7 23924 C:\WINDOWS\system32\conhost.exe 0x4
0 418 19476 c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csharp-2.55.29-win32-x64\.roslyn\Microsoft.CodeAnalysis.LanguageServer.exe --logLevel Information --razorSourceGenerator c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csharp-2.55.29-win32-x64\.razor\Microsoft.CodeAnalysis.Razor.Compiler.dll --razorDesignTimePath c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csharp-2.55.29-win32-x64\.razor\Targets\Microsoft.NET.Sdk.Razor.DesignTime.targets --devKitDependencyPath c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csharp-2.55.29-win32-x64\.roslynDevKit\Microsoft.VisualStudio.LanguageServices.DevKit.dll --sessionId 3c45681f-31d3-476d-8c2e-e03236bfc77e1733830959075 --extension c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csharp-2.55.29-win32-x64\.xamlTools\Microsoft.VisualStudio.DesignTools.CodeAnalysis.dll --extension c:\Users\VidmantasDrasutis\.vscode\extensions\ms-dotnettools.csharp-2.55.29-win32-x64\.xamlTools\Microsoft.VisualStudio.DesignTools.CodeAnalysis.Diagnostics.dll --telemetryLevel all --extensionLogDirectory c:\Users\VidmantasDrasutis\AppData\Roaming\Code\logs\20241210T134238\window1\exthost\ms-dotnettools.csharp
0 88 21476 electron-nodejs (serverMain.js )
0 116 23180 electron-nodejs (main-bundle.js )
0 94 23676 "C:\Users\VidmantasDrasutis\AppData\Local\Programs\Microsoft VS Code\Code.exe" "c:\Users\VidmantasDrasutis\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\json-language-features\server\dist\node\jsonServerMain" --node-ipc --clientProcessId=9144
3 339 24056 c:\Users\VidmantasDrasutis\.vscode\extensions\ms-vscode.cpptools-1.22.11-win32-x64\bin\cpptools.exe
0 7 24108 C:\WINDOWS\system32\conhost.exe 0x4
0 33 9276 crashpad-handler
0 198 12304 gpu-process
0 49 12768 utility-network-service
0 94 14144 fileWatcher [1]
0 150 20124 shared-process
0 273 22488 window [1] (build.sbt - DevKit - Visual Studio Code)
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (build.sbt - DevKit - Visual Studio Code)
| Folder (DevKit): more than 20812 files
| File types: hpp(14715) h(2020) md(851) c(194) py(137) dll(100) lib(98)
| exe(81) csv(75) gfs(35)
| Conf files: cmake(30) sln(1) makefile(1) dockerfile(1);
```
</details>
<details><summary>Extensions (64)</summary>
Extension|Author (truncated)|Version
---|---|---
nugetpackagemanagergui|ali|2.1.1
tailwindcss-extension-pack|and|1.1.4
vscode-json|and|1.5.2
tailwind-docs|aus|2.1.0
color-info|bie|0.7.2
vscode-tailwindcss|bra|0.12.16
path-intellisense|chr|2.10.0
format-json|Cle|1.0.3
codeium|Cod|1.24.8
vscode-eslint|dba|3.0.10
githistory|don|0.6.20
fastendpoints|dri|1.2.1
prettier-vscode|esb|11.0.0
auto-close-tag|for|0.5.15
auto-complete-tag|for|0.1.0
auto-rename-tag|for|0.1.10
gitlab-workflow|Git|5.21.0
go|gol|0.42.1
headwind|hey|1.7.0
rest-client|hum|0.25.1
better-shellscript-syntax|jef|1.10.0
docomment|k--|1.0.0
csharpextensions|kre|1.7.3
tailwind-sass-syntax|mac|1.3.0
rainbow-csv|mec|3.13.0
groovy|Mel|1.0.0
git-graph|mhu|1.30.0
vue-volar-extention-pack|Mis|2.0.8
azure-dev|ms-|0.8.4
vscode-docker|ms-|1.29.3
csdevkit|ms-|1.14.14
csharp|ms-|2.55.29
vscode-dotnet-runtime|ms-|2.2.3
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
vscode-pylance|ms-|2024.12.1
prompty|ms-|0.1.2024060511
remote-containers|ms-|0.388.0
remote-ssh|ms-|0.115.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
makefile-tools|ms-|0.11.13
powershell|ms-|2024.4.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
vetur|oct|0.37.3
fix-json|oli|0.1.2
material-icon-theme|PKi|5.15.0
inline-sql-syntax|quf|2.16.0
startanyshell|rem|0.3.1
scala|sca|0.5.8
vscode-scss-formatter|sib|3.0.0
tailwindcss-transpiler|sud|0.0.8
sass-indented|syl|1.8.31
vscode-h2o|tet|0.2.15
msbuild-project-tools|tin|0.6.6
run-in-powershell|tob|1.2.0
cmake|twx|0.0.17
volar|Vue|2.1.10
five-server|yan|0.3.1
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
| freeze-slow-crash-leak,under-discussion | low | Critical |
2,729,985,247 | pytorch | [Inductor] `ConvTranspose` does not check different `dtypes` with Inductor | ### 🐛 Describe the bug
This occurs on **ConvTranspose1d, 2d, and 3d**.
When the `dtypes` of the ConvTranspose weights and the inputs differ, eager rejects the model, but Inductor seems to skip the check.
Further evidence: when I do the same thing with `Conv`, Inductor also rejects the model.
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.convts = nn.ConvTranspose1d(1, 1, kernel_size=1)
def forward(self, x):
x = self.convts(x)
return x
x = torch.randn(1, 1, 1, dtype=torch.float16)
def run_test(input_tensor, compile_mode: bool, device: str):
model = Model()
if device == 'cuda':
model = model.cuda()
input_tensor = input_tensor.cuda()
if compile_mode:
model = torch.compile(model)
try:
output = model(input_tensor)
print("success")
except Exception as e:
print(e)
run_test(x, False, 'cpu') # expected scalar type Half but found Float
run_test(x, True, 'cpu') # success
run_test(x, False, 'cuda') # Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
run_test(x, True, 'cuda') # success
```
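For comparison, the same dtype mismatch with a plain `Conv` is rejected under both eager and Inductor, per the observation above (a minimal sketch):
```python
import torch

conv = torch.nn.Conv1d(1, 1, kernel_size=1)  # float32 weights by default
x16 = torch.randn(1, 1, 1, dtype=torch.float16)
torch.compile(conv)(x16)  # raises a dtype mismatch error, unlike ConvTranspose
```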
### Error logs
```
expected scalar type Half but found Float
success
Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
success
```
### Versions
torch version: 2.6.0.dev20241205+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241205+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2500.000
BogoMIPS: 5000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241205+cu124
[pip3] torchaudio==2.5.0.dev20241205+cu124
[pip3] torchvision==0.20.0.dev20241205+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241205+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241205+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241205+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,729,990,183 | deno | otel: automatically inject `ContextManager`, `MetricsProvider`, and `TraceProvider` | Right now you have to import `@deno/otel` to hook Deno's built in OpenTelemetry API into `@opentelemetry/api`. `@opentelemetry/api` however has code to read all three of these from the global object. This code is intended to ensure that multiple copies of `@opentelemetry/api` can still work correctly.
I propose we set this global on startup as follows:
```js
globalThis[Symbol.for("opentelemetry.js.api.1")] = {
trace: new Deno.telemetry.TraceProvider(),
context: new Deno.telemetry.ContextManager(),
metrics: new Deno.telemetry.MetricsProvider(),
};
```
This may seem a bit unorthodox, but it is very useful because users do not have to import `@deno/otel` anymore. This is particularly helpful when using frameworks where you may not control the entrypoint (Next.js, Remix, etc).
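With the global set, a minimal sketch of what user code would look like (no `@deno/otel` import; the specifier is the usual npm one):
```js
import { trace } from "npm:@opentelemetry/api@1";

const tracer = trace.getTracer("example");
tracer.startActiveSpan("work", (span) => {
  // The tracer above is resolved through globalThis[Symbol.for("opentelemetry.js.api.1")].
  span.end();
});
```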
I want to run this by the otel-js folks, but I don't think this is really very controversial - one could think of this as bundling `@opentelemetry/api` and automatically invoking `trace.setGlobalTracerProvider` and friends on startup. | feat,otel | low | Minor |
2,730,053,571 | react | Bug: Detached Elements Observed When Toggling Content with ReactDOM.createPortal() | We observed a possible memory leak when using `ReactDOM.createPortal()` to render components. The issue occurs when toggling the portal content between a `RadioGroup` and the text `Loading...`. If the `RadioGroup` is interacted with before toggling, some DOM elements are not properly cleaned up, leaving detached nodes in memory.
However, if no interaction happens with the `RadioGroup`, the content toggles without any memory issues.
React version: 18.3.1
## Steps To Reproduce
1. Visit the [JSFiddle example](https://jsfiddle.net/egz6jbx5/2/#&togetherjs=hoy8etANLl).
2. Click on the Load Content button to render the `RadioGroup`.
3. Interact with the `RadioGroup` by selecting another option (e.g., option B).
4. Click on the Unload Content button to remove the `RadioGroup`.
5. Open the Memory tab in the browser developer tools and trigger Garbage Collection.
6. Select the **Detached Elements** profiling type and take a snapshot of the memory to observe detached nodes.
Link to code example: [JSFiddle example](https://jsfiddle.net/egz6jbx5/2/#&togetherjs=hoy8etANLl)
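For reference, a minimal sketch of the toggle in the fiddle (`RadioGroup` here is a stand-in for the fiddle's component):
```jsx
import React, { useState } from "react";
import ReactDOM from "react-dom";

// Stand-in for the fiddle's RadioGroup component.
function RadioGroup() {
  return (
    <div>
      <label><input type="radio" name="opt" value="a" /> A</label>
      <label><input type="radio" name="opt" value="b" /> B</label>
    </div>
  );
}

function App() {
  const [loaded, setLoaded] = useState(false);
  return (
    <>
      <button onClick={() => setLoaded((v) => !v)}>
        {loaded ? "Unload Content" : "Load Content"}
      </button>
      {/* Toggling the portal content after interacting with the radios
          is what leaves detached nodes behind. */}
      {ReactDOM.createPortal(
        loaded ? <RadioGroup /> : "Loading...",
        document.body
      )}
    </>
  );
}
```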
## The current behavior
Detached elements are present in the memory snapshot after interacting with the `RadioGroup` and toggling it off. This indicates a potential memory leak caused by `ReactDOM.createPortal()`.

## The expected behavior
Detached nodes should not be present after toggling the `RadioGroup` and triggering garbage collection, regardless of interactions. | Status: Unconfirmed | medium | Critical |
2,730,081,057 | kubernetes | Kube scheduler has a confusing error message when scheduling pods that use claims with `ReadWriteOncePod` access mode | ### What happened?
When we deploy a claim like:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ebs-claim
spec:
accessModes:
- ReadWriteOncePod
resources:
requests:
storage: 4Gi
```
And then deploy two pods:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
name: app
spec:
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: ebs-claim
---
apiVersion: v1
kind: Pod
metadata:
name: app2
spec:
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: ebs-claim
```
The second pod is rightfully not scheduled.
It has the following error message:
```
k describe po app2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5s default-scheduler 0/5 nodes are available: 5 node has pod using PersistentVolumeClaim with the same name and ReadWriteOncePod access mode. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
Warning FailedScheduling 2s default-scheduler 0/5 nodes are available: 5 node has pod using PersistentVolumeClaim with the same name and ReadWriteOncePod access mode. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
```
This error message is misleading: there are not 5 nodes that have pods using this PersistentVolumeClaim.
Error message comes from https://github.com/kubernetes/kubernetes/blob/v1.30.5/pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go#L55.
It's set in the following method - https://github.com/kubernetes/kubernetes/blob/v1.30.5/pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go#L280
It's raised upon calling the `Filter` method - https://github.com/kubernetes/kubernetes/blob/v1.30.5/pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go#L314
### What did you expect to happen?
I expect that we get an error message that reads something like:
```
Warning FailedScheduling 5s default-scheduler 0/5 nodes are available: 5 nodes were not able to schedule a pod using PersistentVolumeClaim <PVC-Name> with ReadWriteOncePod access mode. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
```
or a similar message that points the operator to the actual reason the pod was not scheduled.
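Concretely, this could be a one-line change to the message constant in the plugin (a sketch; the constant name is taken from the linked source, the wording is only a suggestion):
```go
// pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go (sketch)
const ErrReasonReadWriteOncePodConflict = "pod was rejected because its PersistentVolumeClaim with ReadWriteOncePod access mode is already used by another pod"
```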
### How can we reproduce it (as minimally and precisely as possible)?
Build a cluster with 3 or more nodes and apply the following manifests:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ebs-claim
spec:
accessModes:
- ReadWriteOncePod
resources:
requests:
storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
name: app
spec:
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: ebs-claim
---
apiVersion: v1
kind: Pod
metadata:
name: app2
spec:
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: ebs-claim
```
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
k version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.30.5
```
</details>
but I believe it's applicable for other versions as well.
### Cloud provider
The message is the same regardless of the provider.
### OS version
<details>
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,needs-triage | low | Critical |
2,730,103,018 | kubernetes | When a pod is removed, running execs stay open but frozen | ### What happened?
When a pod is removed, running execs stay. Their standard streams are not closed.
### What did you expect to happen?
The root processes of the execs are killed when the pod is removed, so the running exec should be terminated by the kube-apiserver, or at least the stdout/stderr streams should be closed.
### How can we reproduce it (as minimally and precisely as possible)?
- create a pod with a single container
- exec an interactive command (like a shell) into that container with `kubectl exec`
- remove that pod
- the `kubectl exec` session is still running but is frozen (concrete commands are sketched below)
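In concrete commands (a sketch; image and pod name are illustrative):
```console
$ kubectl run leak-test --image=busybox --restart=Never -- sleep 3600
$ kubectl exec -it leak-test -- sh
# in a second terminal:
$ kubectl delete pod leak-test
# the exec session in the first terminal stays open but frozen
```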
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.31.0
```
</details>
### Cloud provider
<details>
minikube or GKE
</details>
### OS version
<details>
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,triage/needs-information,triage/not-reproducible,needs-triage | low | Major |
2,730,159,869 | pytorch | `RuntimeError: MKL FFT error: Intel oneMKL DFTI ERROR: Inconsistent configuration parameters` in `TestJitCPU.test_variant_consistency_jit_fft_fft2_cpu_complex64` against mkl 2024* | ### 🐛 Describe the bug
While working on conda-forge packages for PyTorch, I've hit a suspicious test failure. I've been able to reproduce it with PyTorch 2.5.1 (from conda-forge and from pip) and nightly (from pip). When building for conda-forge, I've confirmed that it happens with mkl 2024.1.0 through 2024.2.2; 2024.0.0 failed for other reasons. So the last working version is mkl 2023.2.0.
Output / reproducer:
```
$ pip install -q --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
$ pip install -q hypothesis expecttest pytest-flakefinder pytest-xdist numpy pytest-flakefinder
$ PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_ops_jit.py TestJitCPU.test_variant_consistency_jit_fft_fft2_cpu_complex64
E
======================================================================
ERROR: test_variant_consistency_jit_fft_fft2_cpu_complex64 (__main__.TestJitCPU.test_variant_consistency_jit_fft_fft2_cpu_complex64)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/test/test_ops_jit.py", line 109, in test_variant_consistency_jit
self.indiv_variant_test_jit(
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/test/test_ops_jit.py", line 156, in indiv_variant_test_jit
check_against_reference(
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/testing/_internal/common_jit.py", line 108, in check_against_reference
grads = torch.autograd.grad(allSum(outputs), recording_tensors,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/autograd/__init__.py", line 496, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: MKL FFT error: Intel oneMKL DFTI ERROR: Inconsistent configuration parameters
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1162, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/test/test_ops_jit.py", line 120, in test_variant_consistency_jit
raise Exception(variant_error_info) from e # noqa: TRY002
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception:
Error testing fft.fft2 function variant
with dtype: torch.complex64
with inputs SampleInput(input=tensor([[[ 0.4519-1.6687j, -1.3497+3.1941j, -8.9264-4.0518j, 3.6276-6.9550j,
-8.5880-5.2066j, -2.2049-4.3718j, 6.0923-0.6148j],
[ 0.8153-1.4545j, -5.3751+2.8520j, -2.8915-7.5140j, 0.1147+3.8840j,
-7.9774-5.3941j, -4.7232+6.2016j, -7.0199-1.3300j],
[ 0.6387-2.9121j, 4.9206-4.0014j, 6.7648-2.2684j, -4.0965-2.3907j,
2.3617+8.0499j, -0.9205-1.8945j, 3.0848-3.1877j],
[ 1.0688-2.7264j, -3.7338+2.4207j, -5.2879+7.2414j, -0.4194+2.7186j,
1.6498-7.2553j, -5.6285+0.5283j, 6.1463+6.6689j],
[ 4.5116+5.5040j, -4.5468+3.8238j, 2.0432-4.0802j, 7.7975+3.6276j,
-3.9898+6.4498j, -2.0038-1.4510j, -5.5658-4.9406j],
[ 0.9235-6.0854j, 7.3032-4.9497j, 3.0275+2.7930j, -7.1406+7.8375j,
-2.5508+3.3611j, -7.5744-7.7703j, 3.0497+6.1885j]],
[[ 2.7347+1.4139j, -5.8085-1.2180j, 2.9595+6.3969j, -3.0274+5.5602j,
-4.6561-4.1872j, 6.7226-3.6585j, -5.0018+4.3699j],
[-3.6242-3.0665j, -2.2476+7.2501j, 7.5155-6.1942j, -7.7748-8.6985j,
-8.6626+5.4833j, -7.7189+6.7175j, 5.9785-0.3973j],
[-1.6851-5.0840j, 4.2354+4.9089j, 4.1848-5.6581j, 3.4160+3.6999j,
6.0055+5.5249j, 2.4432-0.0450j, 8.8184-4.5192j],
[ 1.4375-8.2578j, -8.9884-5.9804j, 8.3846+0.4880j, -2.7509-3.0532j,
-0.4755-1.3543j, -7.5389+2.9812j, -4.2917-0.3567j],
[-1.4569+5.7937j, 1.2211-8.2981j, 8.7474-8.8791j, 5.8483+6.9294j,
-4.9306-1.6164j, 3.1223+5.1523j, 7.7248-4.4619j],
[ 7.0505+1.2090j, -4.5486+5.1965j, 8.9690-6.1245j, -4.6085-2.7860j,
-4.8467-5.9044j, 4.2504-6.3633j, -3.9363+3.5810j]],
[[ 1.4855+2.6930j, -1.9112+3.1822j, -1.2487+8.1739j, 0.1605+6.7395j,
4.7378+5.3385j, -5.9193+4.2485j, 0.5862-1.5016j],
[-6.4440+8.0353j, 2.8137-6.1565j, -6.3559+2.8448j, -2.9265+2.0644j,
3.1701-8.6227j, -1.3536+7.4368j, 5.7509+7.1081j],
[-1.3466+7.0001j, -2.8740+3.2901j, -3.5271-7.2450j, -2.2556-8.7708j,
-8.8105+1.7250j, -0.5918+4.0027j, 7.7862-3.4462j],
[ 5.5815+6.4713j, -6.9697+7.0167j, 8.0875-0.2620j, -3.6310+4.7267j,
6.0261+4.6868j, 0.7755+0.4853j, -1.8758+4.8651j],
[ 0.8443-4.4743j, -6.6609+2.1552j, 1.0760+8.3737j, 0.9953+8.9865j,
-3.5583-6.3304j, -1.1792-3.2809j, -1.4970-2.7631j],
[ 8.2300+3.9946j, 1.4586-0.1478j, 8.5900+3.9895j, -2.2568-8.6140j,
-8.6059-3.3950j, 1.5673+1.5325j, 4.2586+4.0916j]],
[[-3.8467+4.3392j, 4.9000+2.3846j, 7.2577-6.4897j, 1.4237-1.9427j,
8.1282-7.0420j, 2.9550-6.4592j, -6.0499-8.1850j],
[ 1.8745+8.2029j, -4.5403+0.6454j, -7.3490-6.4714j, 3.5490-1.2901j,
-1.4631-5.0934j, 5.5921-6.8921j, -4.1412+5.8519j],
[ 3.9626-8.2865j, -8.7425-5.2016j, -8.6528+6.3069j, -4.3739+3.5793j,
-7.3106+2.2135j, 8.3889+5.8287j, 8.0646-1.9299j],
[-1.2084-7.4524j, -3.8084-5.4723j, -0.3318+0.4794j, -2.2864-3.7192j,
8.7766+1.1045j, -8.1575-3.2041j, 4.9697-8.4564j],
[ 8.3740-0.9771j, 3.4598-2.1305j, -5.8572+4.8399j, 2.7278-4.8294j,
3.6917-4.7133j, -6.6076-8.8687j, 6.6301+7.2904j],
[ 6.2127+6.2237j, 8.2466+0.1138j, -8.3478-5.6941j, 5.3688-7.1158j,
-1.1510-0.6876j, -0.2579-8.1676j, 7.4570-1.4829j]],
[[-1.9497-3.2609j, -1.6071-1.4398j, -5.0406+7.4493j, 1.4543+1.0708j,
-6.3619+8.9065j, -2.7798+5.7593j, -4.9274-1.1785j],
[ 0.3750+0.5040j, -1.0892-2.0268j, -5.2186-2.7966j, -6.8724+6.9988j,
-3.4293+0.1416j, -7.5392-6.6634j, -5.8855+3.4055j],
[ 8.3858+0.0659j, 6.9550-8.3678j, -8.9337+3.4571j, -4.9290-4.6410j,
7.1276-0.1551j, 5.7600-3.2788j, 1.0814+8.6245j],
[ 0.7012+5.1418j, 8.0857+7.2150j, 7.5094-4.5757j, 3.3091-0.2827j,
-4.1042-5.0177j, 3.9241+4.4959j, 0.9279+7.5567j],
[ 2.9002+7.3141j, 4.6324-1.4293j, -3.2982-7.4901j, 8.9872-2.4968j,
3.5917+5.0777j, -6.2488+4.3580j, 5.5003-4.6500j],
[-3.4046+1.7749j, 7.6435+8.2865j, -1.8083-4.3086j, 4.2663-1.5021j,
1.4435-7.2924j, 1.0314+2.2837j, -3.1866+1.4575j]]],
requires_grad=True), args=(), kwargs={'s': (3, 10), 'dim': (1, 2), 'norm': 'ortho'}, broadcasts_input=False, name=''):
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3108, in wrapper
method(*args, **kwargs)
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1242, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1621, in wrapper
fn(*args, **kwargs)
File "/var/tmp/conda-bld/work_moved_libtorch-2.5.1-cpu_mkl_h5f89428_106_linux-64/.venv/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1174, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(5, 6, 7), device="cpu", dtype=torch.complex64], args=(), kwargs={'s': '(3,10)', 'dim': '(1,2)', 'norm': "'ortho'"}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_ops_jit.py TestJitCPU.test_variant_consistency_jit_fft_fft2_cpu_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 0.078s
FAILED (errors=1)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241210+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Gentoo Linux (x86_64)
GCC version: (Gentoo 14.2.1_p20241116 p3) 14.2.1 20241116
Clang version: 20.0.0git32f7f001
CMake version: version 3.31.2
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 19 2024, 06:58:55) [GCC 14.2.1 20240921] (64-bit runtime)
Python platform: Linux-6.12.4-gentoo-dist-x86_64-AMD_Ryzen_5_3600_6-Core_Processor-with-glibc2.40
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 3600 6-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 98%
CPU max MHz: 4208,0000
CPU min MHz: 550,0000
BogoMIPS: 7189,90
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241210+cu118
[conda] Could not collect
```
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | triaged,module: mkl,module: linear algebra,module: intel | low | Critical |
2,730,183,094 | deno | Compile a workspace | Version: Deno 2.1.3
Hi,
Let's say I have the following code (you can also find it here https://github.com/tjoskar/deno-workspace-bug-maybe ):
```ts
// lib-a/main.ts
export const a = "Hello from lib-a";
```
```json
// lib-a/package.json
{
"name": "@my-scope/lib-a",
"type": "module",
"version": "1.0.0",
"main": "main.ts",
"exports": "./main.ts"
}
```
```ts
// lib-b/main.ts
import { a } from "@my-scope/lib-a";
console.log(a);
```
```json
// lib-b/package.json
{
"name": "@my-scope/lib-b",
"type": "module",
"version": "1.0.0",
"main": "main.ts",
"exports": "./main.ts"
}
```
```json
// deno.json
{
"workspace": ["./lib-a", "./lib-b"]
}
```
Running `deno run lib-b/main.ts` works fine but if I run `deno check lib-b/main.ts` I get the following error:
```
error: Failed resolving types. [ERR_TYPES_NOT_FOUND] Could not find types for 'file:///deno-compile-workspace/lib-a/index.js' imported from 'file:///deno-compile-workspace/lib-b/main.ts'
at file:///deno-compile-workspace/lib-b/main.ts:1:19
```
And if I run `deno compile --no-check --output test lib-b/main.ts` and then `./test` I get the same error (ish):
```
error: [ERR_MODULE_NOT_FOUND] Cannot find module 'file:///var/folders/n3/3d2w2d152hb6fs7d1hkwyvjc0000gn/T/deno-compile-test/deno-compile-workspace/lib-a/index.js' imported from 'file:///var/folders/n3/3d2w2d152hb6fs7d1hkwyvjc0000gn/T/deno-compile-test/deno-compile-workspace/lib-b/main.ts'
```
Am I missing something? Or isn't this supported?
It works fine if I replace both `package.json` files with corresponding `deno.json` files, but I can't do that for several reasons right now.
Thanks! And only love to Deno!
| bug,compile,workspaces | low | Critical |
2,730,216,897 | go | x/tools/gopls: fix quickfix to add missing import when package name does not match directory name | ### gopls version
Build info
----------
golang.org/x/tools/gopls v0.16.1
golang.org/x/tools/[email protected] h1:1hO/dCeUvjEYx3V0rVvCtOkwnpEpqS29paE+Jw4dcAc=
github.com/BurntSushi/[email protected] h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/google/[email protected] h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/[email protected] h1:2O2DON6y3XMJiQRAS1UWU+54aec2uopH3x7MAiqGW6Y=
golang.org/x/[email protected] h1:5+9lSbEzPSdWkH32vYPBwEpX8KwDbM52Ud9xBUvNlb0=
golang.org/x/[email protected] h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/[email protected] h1:3Wt8mZlbFwG8llny+t18kh7AXxyWePFycXMuVdHxnyM=
golang.org/x/[email protected] h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/[email protected] h1:Kd+Z5Pm6uwYx3T2KEkeHMHUMZxDPb/q6b1m+zEcy62c=
golang.org/x/[email protected] h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/[email protected] h1:9MDAWxMoSnB6QoSqiVr7P5mtkT9pOc1kSxchzPCnqJs=
mvdan.cc/[email protected] h1:G3QvahNDmpD+Aek/bNOLrFR2XC6ZAdo62dZu65gmwGo=
mvdan.cc/xurls/[email protected] h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: go1.23.0
### go env
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/sebastien/.cache/go-build'
GOENV='/home/sebastien/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/sebastien/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/sebastien/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/lib/go-1.23'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/lib/go-1.23/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/sebastien/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/sebastien/dev/test-gopls/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2151940005=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
bar/bar.go:
```go
package notbar
type NotBar struct {}
```
baz/baz.go:
```go
package baz
type Baz struct {}
```
foo/foo.go:
```go
package foo
type foo struct {
bar notbar.NotBar
baz baz.Baz
}
```
Position the cursor in `foo/foo.go`, on the `bar` property and open the code actions.
### What did you see happen?
The `Add import` (quickfix) option is missing.

### What did you expect to see?
The `Add import` (quickfix) option is available, like it is for `baz`.

### Editor and settings
I am using `vim-lsp` with those settings:
```vimscript
au User lsp_setup call lsp#register_server({
\ 'name': 'go',
\ 'cmd': {server_info->['gopls']},
\ 'allowlist': ['go'],
\ })
```
(note: the issue also exists when using `coc.nvim`).
### Logs
```
23:02:01.740 end textDocument/codeAction (+191.519598ms) method="textDocument/codeAction" direction="in" id="#6"
* 23:02:01.740 event (+191.518216ms) label= status.code="OK"
* 23:02:01.549 start queued
* 23:02:01.549 end queued (+87.685µs)
* 23:02:01.549 start lsp.Server.codeAction
* 23:02:01.740 end lsp.Server.codeAction (+191.032031ms)
* 23:02:01.549 start golang.allImportsFixes
* 23:02:01.740 end golang.allImportsFixes (+190.861599ms)
* 23:02:01.549 start cache.importsState.runProcessEnvFunc
* 23:02:01.740 end cache.importsState.runProcessEnvFunc (+190.849156ms)
* 23:02:01.549 start imports.FixImports
* 23:02:01.740 end imports.FixImports (+190.777661ms)
* 23:02:01.558 start imports.addExternalCandidates
* 23:02:01.740 end imports.addExternalCandidates (+181.587673ms)
* 23:02:01.558 start imports.ModuleResolver.scan
* 23:02:01.740 end imports.ModuleResolver.scan (+181.493396ms)
* 23:02:01.740 start cache.forEachPackage packages=0
* 23:02:01.740 end cache.forEachPackage (+11.221µs) packages=0
* 23:02:01.740 start cache.ParseGoSrc file="/home/sebastien/dev/test-gopls/foo/foo.go"
* 23:02:01.740 end cache.ParseGoSrc (+22.131µs) file="/home/sebastien/dev/test-gopls/foo/foo.go"
```
2,730,231,855 | terminal | [Terminal Chat] Unable insert command using keyboard | ### Description of the new feature
There are two things:
1. I'm not able to hit Enter or Space to trigger inserting the command into the Terminal.
2. The mouse cursor should change to a pointer when the mouse hovers over the button.

### Proposed technical implementation details
_No response_ | Product-Terminal,Issue-Task,Needs-Tag-Fix,Area-Chat | low | Minor |
2,730,284,402 | three.js | GLTFLoader: meshes with multiple primitives and multiple instances have non-unique name | ### Description
The GLTF loader (to my understanding) attempts to assign unique names to all nodes it parses.
However I noticed that duplicated names can appear in meshes created for nodes in certain cases. When a GLTF mesh has multiple primitives the GLTF loader will create one THREE.Mesh per mesh primitive under a THREE.Group. When instantiating this GLTF mesh multiple times, the THREE.Group seems to usually receive a unique name, but the meshes under the group have the same names in each instance.
This can be problematic in software that manipulates parts of 3D scenes based on mesh names (e.g. user selected mesh names).
I'm not sure what the intended behaviour from the THREE.js maintainers here is, so apologies if this should have been a feature request.
### Reproduction steps
1. Open https://threejs.org/editor/
2. Import this GLB test file: [MultiInstanceMultiPrimitiveCube.glb.zip](https://github.com/user-attachments/files/18080898/MultiInstanceMultiPrimitiveCube.glb.zip)
3. Inspect the scene hierarchy, particularly the mesh names under `Cube` and `Cube001`
### Code
```js
// code goes here
```
### Live example
* [jsfiddle-latest-release WebGLRenderer](https://jsfiddle.net/3mrkqyea/)
* [jsfiddle-dev WebGLRenderer](https://jsfiddle.net/gcqx26jv/)
* [jsfiddle-latest-release WebGPURenderer](https://jsfiddle.net/mnqr9oj0/)
* [jsfiddle-dev WebGPURenderer](https://jsfiddle.net/xno7bmw0/)
### Screenshots
<img width="354" alt="Bildschirmfoto 2024-12-10 um 15 25 12" src="https://github.com/user-attachments/assets/62e75d82-70b5-4611-bdd3-2443acb74452">
### Version
r171
### Device
_No response_
### Browser
_No response_
### OS
_No response_ | Loaders | low | Minor |
2,730,322,411 | kubernetes | kube-proxy: net.InterfaceAddrs may return error due to a race condition, causing the Nodeport Service to be inaccessible | ### What happened?
In our production environment, which verson is 1.24.4
1. kube-proxy's error log:
2024-12-09T07:35:24.325300135+08:00 E1209 07:35:24.325193 1 proxier.go:1131] "Failed to get node IP address matching nodeport cidr" err="error listing all interfaceAddrs from host, error: route ip+net: no such network interface"
2. Then the nodeport service's backend pod on this node cannot be accessible by the Nodeport Service.
3. On the save time, in this node's /var/log/message, the log is:
Dec 9 07:35:24 VM-96-121-tencentos containerd: 2024-12-09T07:35:24+08:00 [info] cmdDel: {containerId c915c615cb43c0f7711d125e39e722581384bc45b3453e3c3c79ebe5ed994e71, netNs /var/run/netns/cni-278204fe-6400-6354-982f-ae2299bfae17, ifName eth0, args K8S_POD_INFRA_CONTAINER_ID=c915c615cb43c0f7711d125e39e722581384bc45b3453e3c3c79ebe5ed994e71;K8S_POD_UID=0a3c7874-54ae-4ec6-b4a5-b5c3b9d63a12;IgnoreUnknown=1;K8S_POD_NAMESPACE=bkmonitor-operator;K8S_POD_NAME=bcs-blackbox-job-073518-f8n4c, path /opt/cni/bin, stdinData {"capabilities":{"bandwidth":true,"portMappings":true},"cniVersion":"0.3.1","defaultDelegates":"tke-bridge","kubeconfig":"/etc/kubernetes/tke-cni-kubeconfig","logLevel":"info","name":"multus-cni","type":"multus"}}, <nil>, <nil>
### What did you expect to happen?
The Nodeport Service should be accessible.
### How can we reproduce it (as minimally and precisely as possible)?
The latest version also has the same problem.
1. syncProxyRules->GetNodeIPs->net.InterfaceAddrs()->interfaceAddrTable (which is in interface_linux.go for Linux)->syscall.NetlinkRIB(syscall.RTM_GETADDR, syscall.AF_UNSPEC) is called.
2. Then an interface is deleted by CNI.
3. Then interfaceAddrTable (which is in interface_linux.go) calls interfaceTable (which is in interface_linux.go)->syscall.NetlinkRIB(syscall.RTM_GETLINK, syscall.AF_UNSPEC).
The key point is that RTM_GETADDR's result still contains the deleted interface, but RTM_GETLINK's result does not.
4. Then interfaceAddrTable->addrTable will return an error.
<img width="805" alt="Clipboard_Screenshot_1733841188" src="https://github.com/user-attachments/assets/d84160a2-2b10-4a0a-9078-faad3a010076">
5. Then syncProxyRules will not write the correct forwarding configuration, because GetNodeIPs does not return any IP.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
This production environment cluster's version is v1.24.4, but any version has the same problem.
</details>
### Cloud provider
<details>
TKE
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
Centos8
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
containerd
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/network,triage/accepted | low | Critical |
2,730,323,275 | pytorch | Stop special-casing einops in Dynamo | Imports can have unwanted side effects. See https://github.com/pytorch/pytorch/blob/fb529c2c84f876b4897e2a0d7405c00ff94b29b5/torch/_dynamo/decorators.py#L611
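A minimal sketch of the pattern we would want instead: consult `sys.modules` rather than importing (the function name is illustrative):
```python
import sys

def get_einops_if_loaded():
    # Never import einops ourselves (imports can run arbitrary code);
    # only look at the module if the user's program already loaded it.
    return sys.modules.get("einops")
```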
Internal xref: https://fb.workplace.com/groups/257735836456307/posts/804793021750583/?comment_id=805229281706957&reply_comment_id=805232695039949¬if_id=1733795290072385¬if_t=work_group_comment_mention
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,730,345,033 | vscode | Extensions: `workbench.extensions.installExtension(...)` should prompt for confirmation | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
The `workbench.extensions.installExtension(...)` API should prompt for user confirmation.
CC @isidorn | under-discussion | low | Major |
2,730,372,269 | pytorch | Stop importing arbitrary third-party libraries in Dynamo | The problem is that arbitrary third-party libraries can have side effects on import. Instead, we should check if the libraries have already been imported. Example of what we should do: https://github.com/pytorch/pytorch/blob/34127fc6880e25e71c62daf8fc8f909a4efddd2d/torch/_dynamo/variables/dicts.py#L712
Also, part of this issue is figuring out how to add a lint rule (against `import <third_party_library>` in torch/_dynamo) so that this doesn't happen again.
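A hypothetical shape for that lint rule (the banned list, regex, and default path are illustrative only):
```python
import pathlib
import re

BANNED = {"einops", "numpy", "sympy"}  # illustrative, not the real policy
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+(\w+)", re.MULTILINE)

def find_violations(root: str = "torch/_dynamo") -> list[tuple[str, str]]:
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for mod in IMPORT_RE.findall(path.read_text()):
            if mod in BANNED:
                hits.append((str(path), mod))
    return hits
```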
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,730,390,022 | vscode | Editor hover cuts off markdown content in backticks when it exceeds the length of the line | 
Expanded:

Can we tweak `whitespace` for this?
| bug,editor-hover | low | Minor |
2,730,423,252 | vscode | triage for issue #7102 |
Type: <b>Bug</b>
reference case:
https://github.com/code/code-server/issues/7102
triage reason: verify if the help report points to the same repo
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-12650H (16 x 2688)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.71GB (44.99GB free)|
|Process Argv|--crash-reporter-id 68bdc46b-dc78-4e71-8d99-08fc2ab4c4ab|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (1)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-language-pack-es|MS-|1.95.2024102309
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | bug,menus,confirmation-pending | low | Critical |
2,730,452,152 | pytorch | Custom ops are not logged to dynamo_compile/TORCH_TRACE when an error occurs | ### 🐛 Describe the bug
Sample internal log where this happens: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/aps-shuaiyang-25071ebc0f/attempt_0/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
If you compare 13/0 with 13/1, 13/0 reports custom ops, but 13/1 reports nothing. It should be obvious what the problem is from inspecting where we log to dynamo_compile/tlparse.
How to fix? I believe we accumulate the set of custom ops in some global mutable state, so we just need to also consult it in the failure case too.
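A hypothetical sketch of the fix shape (all names invented; the point is that the error path reads the same accumulator as the success path):
```python
_seen_custom_ops: set[str] = set()  # assumed global accumulator

def emit_compile_metrics(metrics: dict, failed: bool) -> None:
    # Previously only the success path attached the custom-op set;
    # consult the same global state when compilation errored, too.
    metrics["custom_ops"] = sorted(_seen_custom_ops)
    metrics["status"] = "error" if failed else "ok"
```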
cc @chauhang @penguinwu @masnesral
### Versions
main | module: logging,triaged,oncall: pt2 | low | Critical |
2,730,453,302 | PowerToys | Allow customization of shortcut icons for Workspaces | ### Description of the new feature / enhancement
Currently, when creating a Workspace shortcut in PowerToys, the shortcut icon is automatically set to the first letter of the shortcut's name. While functional, this default icon may not align with the user's preferences or the applications represented by the Workspace.
It would be beneficial to allow users to customize the shortcut icon when creating or editing a Workspace. This feature could include options such as:
* Uploading a custom image file (e.g., PNG, JPEG, ICO) as the icon.
* Selecting from a predefined set of icons.
This enhancement would improve user experience and make it easier to visually identify specific shortcuts, especially when managing multiple Workspaces.
### Scenario when this would be used?
This feature would be used in scenarios where users manage multiple Workspaces with distinct purposes, such as development, testing, or specific tasks like video editing or design. Customizing the icon helps quickly and visually identify the correct Workspace without relying solely on the shortcut name, saving time and reducing errors.
As a power user, having visually distinct icons streamlines my workflow by allowing me to quickly locate and launch the appropriate Workspace. It enhances organization, reduces cognitive load, and ensures a more efficient multitasking experience, especially when juggling several Workspaces throughout the day.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-Workspaces | low | Critical |
2,730,469,077 | PowerToys | Right-click menu reshuffles each time | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
Hi. Great tool you've made. There is just one annoying thing in use: whenever I right-click an item (file, folder, etc.), the menu reshuffles each time, taking a fraction of a second to build itself. That slows down productivity and makes me wait to be sure that everything is in place.
### โ๏ธ Expected Behavior
The right-click menu should appear as quickly as the standard Windows 11 one.
### โ Actual Behavior
As written above, the right-click menu reshuffles each time.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Area-Context Menu | low | Major |
2,730,535,047 | godot | GPU Particles with Damping set dissapear when Scene Tree is Paused | ### Tested versions
- Reproducible in 4.3
### System information
Godot v4.3.stable.mono - Pop!_OS 22.04 LTS - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Ti - AMD Ryzen 7 7700X 8-Core Processor (16 Threads)
### Issue description
When creating blood splatters that stay for a long period of time, I noticed that when the game is paused they would disappear.
Feels related to https://github.com/godotengine/godot/issues/83258
### Steps to reproduce
1. Create a GPU Particle effect
2. Set Gravity to 0,0,0
3. Set Damping to [1, 1]
4. Make a script to pause the scene tree
5. When paused, the particles disappears
6. The particles will re-appear if the lifetime is set to a small number; if it's set to something like 300, no new particles will appear.
### Minimal reproduction project (MRP)
[gpuParticleCheck.zip](https://github.com/user-attachments/files/18082412/gpuParticleCheck.zip)
| bug,topic:particles | low | Minor |
2,730,556,098 | next.js | dynamicParams = true doesn't work with "output: export" config | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/heuristic-hooks-cndpsr
### To Reproduce
The CodeSandbox template from Next.js for creating minimal reproductions is broken, so my linked sandbox is too.
use the output: export config,
create a page with a dynamic path: `/assessment/results/[id]/page.tsx`
add these contents:
```tsx
export const dynamicParams = true

export function generateStaticParams() {
  return [{ params: { id: '1' } }]
}

// (the page contents don't matter for this repro)
export default function Page() {
  return null
}
```
Run the dev server and visit `http://localhost:3000/assessment/results/6`.
### Current vs. Expected behavior
It should render the page dynamically; instead I get the following error:
```
Error: Page "/assessment/results/[id]/page" is missing param "/assessment/results/6" in "generateStaticParams()", which is required with "output: export" config.
```
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Education
Available memory (MB): 32700
Available CPU cores: 8
Binaries:
Node: 18.17.1
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.3 // An outdated version detected (latest is 15.0.4), upgrade is highly recommended!
eslint-config-next: 14.0.4
react: 18.2.0
react-dom: 18.2.0
typescript: 5.3.3
Next.js Config:
output: export
```
### Which area(s) are affected? (Select all that apply)
Navigation, Output (export/standalone), Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Output (export/standalone),Navigation,Pages Router | low | Critical |
2,730,602,968 | storybook | [Documentation]: Additional configuration required for Remix Run | ### Describe the problem
When adding Storybook to a brand new remix run application, the following error shows up whenever attempting to run Storybook:
```
=> Failed to build the preview
Error: The Remix Vite plugin requires the use of a Vite config file
ย ย at configResolved (./node_modules/@remix-run/dev/dist/vite/plugin.js:744:15)
ย ย at async Promise.all (index 2)
ย ย at resolveConfig (file://./node_modules/vite/dist/node/chunks/dep-CB_7IfJ-.js:66404:3)
ย ย at getOptimizeDeps (./node_modules/@storybook/builder-vite/dist/index.js:69:2932)
ย ย at createViteServer (./node_modules/@storybook/builder-vite/dist/index.js:69:3557)
ย ย at Module.start (./node_modules/@storybook/builder-vite/dist/index.js:69:4465)
ย ย at storybookDevServer (./node_modules/@storybook/core/dist/core-server/index.cjs:36000:11)
ย ย at buildOrThrow (./node_modules/@storybook/core/dist/core-server/index.cjs:35017:12)
ย ย at buildDevStandalone (./node_modules/@storybook/core/dist/core-server/index.cjs:37190:78)
ย ย at withTelemetry (./node_modules/@storybook/core/dist/core-server/index.cjs:35757:12)
WARN Broken build, fix the error above.
WARN You may need to refresh the browser
```
In order to resolve this problem, you have to disable the Remix plugin when Storybook is active. We achieved this using the following:
```
/**
* Update the Vite config to the following
* file: vite.config.ts
 */
import { vitePlugin as remix } from "@remix-run/dev";
import { defineConfig } from "vite";
import tsconfigPaths from "vite-tsconfig-paths";
import type { PluginOption } from "vite";
declare module "@remix-run/node" {
interface Future {
v3_singleFetch: true;
}
}
export default defineConfig(() => {
const plugins: PluginOption[] = [tsconfigPaths()];
if (!process.env.STORYBOOK) {
plugins.push(
remix({
future: {
v3_fetcherPersist: true,
v3_relativeSplatPath: true,
v3_throwAbortReason: true,
v3_singleFetch: true,
v3_lazyRouteDiscovery: true,
},
})
);
}
return {
plugins,
};
});
```
```
/**
* set the STORYBOOK env programmatically from the storybook main
* file: .storybook/main.ts
 */
process.env.STORYBOOK = "true";
import type { StorybookConfig } from "@storybook/react-vite";
const config: StorybookConfig = {
stories: [
"../stories/**/*.mdx",
"../stories/**/*.stories.@(js|jsx|mjs|ts|tsx)",
],
addons: [
"@storybook/addon-onboarding",
"@storybook/addon-essentials",
"@chromatic-com/storybook",
"@storybook/addon-interactions",
],
framework: {
name: "@storybook/react-vite",
options: {},
},
};
export default config;
```
### Additional context
I am using `bun` as my package manager/runtime. Below is my package.json as well as the run commands for setting up a new Remix project (as referenced above) and installing storybook
```sh
# set up new project
bunx create-remix@latest
```
```sh
# install storybook
bunx storybook@latest init
```
`package.json` on the project in question
```json
{
"name": "web",
"private": true,
"sideEffects": false,
"type": "module",
"scripts": {
"build": "remix vite:build",
"dev": "remix vite:dev",
"lint": "eslint --ignore-path .gitignore --cache --cache-location ./node_modules/.cache/eslint .",
"start": "remix-serve ./build/server/index.js",
"typecheck": "tsc",
"storybook": "storybook dev -p 6006",
"build-storybook": "storybook build"
},
"dependencies": {
"@radix-ui/react-slot": "^1.1.0",
"@remix-run/node": "^2.15.0",
"@remix-run/react": "^2.15.0",
"@remix-run/serve": "^2.15.0",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"isbot": "^4.1.0",
"lucide-react": "^0.468.0",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"tailwind-merge": "^2.5.5",
"tailwindcss-animate": "^1.0.7"
},
"devDependencies": {
"@chromatic-com/storybook": "^3.2.2",
"@remix-run/dev": "^2.15.0",
"@storybook/addon-essentials": "^8.4.7",
"@storybook/addon-interactions": "^8.4.7",
"@storybook/addon-onboarding": "^8.4.7",
"@storybook/blocks": "^8.4.7",
"@storybook/react": "^8.4.7",
"@storybook/react-vite": "^8.4.7",
"@storybook/test": "^8.4.7",
"@types/react": "^18.2.20",
"@types/react-dom": "^18.2.7",
"@typescript-eslint/eslint-plugin": "^6.7.4",
"@typescript-eslint/parser": "^6.7.4",
"autoprefixer": "^10.4.19",
"eslint": "^8.38.0",
"eslint-import-resolver-typescript": "^3.6.1",
"eslint-plugin-import": "^2.28.1",
"eslint-plugin-jsx-a11y": "^6.7.1",
"eslint-plugin-react": "^7.33.2",
"eslint-plugin-react-hooks": "^4.6.0",
"eslint-plugin-storybook": "^0.11.1",
"postcss": "^8.4.38",
"storybook": "^8.4.7",
"tailwindcss": "^3.4.4",
"typescript": "^5.1.6",
"vite": "^5.1.0",
"vite-tsconfig-paths": "^4.2.1"
},
"engines": {
"node": ">=20.0.0"
}
}
``` | documentation | low | Critical |
2,730,656,310 | pytorch | Improve `docstring_linter` | ### ๐ The feature, motivation and pitch
The docstring_linter is very useful but here are some practical features that make it more so (after discussion with @amjames).
## "Grandfather list" of existing failures instead of numeric limits
The issue with the numeric limits is that we can create all sorts of cruft below the initially very high limits for undocumented classes and functions (over 1000 lines right now). Instead, if we just write out a list of all the existing undocumented functions and keep the numeric limits low, then new undocumented functions can't be written.
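A sketch of how the grandfather mechanism could work (the file name and format are assumptions):
```python
import json
import pathlib

GRANDFATHER_FILE = pathlib.Path("tools/linter/docstring_grandfather.json")

def lint_docstring(qualname: str, has_docstring: bool) -> str | None:
    grandfathered = set(json.loads(GRANDFATHER_FILE.read_text()))
    if has_docstring or qualname in grandfathered:
        return None
    return f"{qualname} is new and undocumented; please add a docstring"
```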
## Don't count "local functions"
Functions local to a function might be very long but not need documentation.
## Either or both of `__init__` and the class might have a docstring
The issue is that if you write copious documentation on the class, you don't need documentation on `__init__` and vice versa, so it's "unfair" to force the programmer to add them.
----
The plan is to add all these three features as options controlled by command line flags, so we can experiment between the old and new behavior.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke | module: docs,module: lint,triaged,better-engineering | low | Critical |
2,730,688,216 | excalidraw | "glyf: empty gid 410 used as component in glyph 16" right after loading the dev environment in Firefox | 
| bug | low | Minor |
2,730,701,527 | flutter | [a11y][iOS] bottomNavigationBar cannot be navigated using Switch Control | ### Steps to reproduce
1. Enable Switch Control in iOS settings app (https://support.apple.com/en-us/119835).
2. Using the repro app below, attempt to navigate between `NavigationDestination`s within the `NavigationBar`.
### Expected results
It is possible to navigate between `NavigationDestination` tabs.
### Actual results
Focus seems to be stuck on a single `NavigationDestination` tab.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData(useMaterial3: false),
title: 'Flutter Demo',
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
int _selectedIndex = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
void _changeIndex(int index) {
setState(() {
_selectedIndex = index;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title, style: TextStyle(fontFamily: 'ProductSans')),
),
body: Center(
child: Column(
children: [
Text(
'Button tapped $_counter time${_counter == 1 ? '' : 's'}.',
// This optional key is used to uniquely identify this widget in
// the integration test at: test_driver/tap_test.dart.
key: Key('CountText'),
style: TextStyle(fontSize: 24),
),
MaterialButton(
onPressed: () {
print('Button tapped');
},
color: Colors.blue,
child: Text('Test button 1'),
),
MaterialButton(
onPressed: () {
print('Button tapped');
},
color: Colors.blue,
child: Text('Test button 2'),
),
MaterialButton(
onPressed: () {
print('Button tapped');
},
color: Colors.blue,
child: Text('Test button 3'),
),
],
),
),
bottomNavigationBar: NavigationBar(
onDestinationSelected: (index) {
print('Selected destination $index');
_incrementCounter();
_changeIndex(index);
},
selectedIndex: _selectedIndex,
destinations: [
NavigationDestination(label: 'one', icon: Icon(Icons.home)),
NavigationDestination(label: 'two', icon: Icon(Icons.home)),
NavigationDestination(label: 'three', icon: Icon(Icons.home)),
],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/7e246174-37fd-4d5b-a8be-d21c4e9a449b
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel google3)
• Framework revision 1d60fc7843 (2 days ago), 2024-12-08T00:00:00.000
• Engine revision 82d7c1b2c2
• Dart version c7e47c6c5d
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode_16.0.0.app/Contents/Developer
• Build 16A242d
```
</details>
Google bug tracker: b/382363824.
| platform-ios,f: material design,a: accessibility,customer: google,f: focus,has reproducible steps,P1,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.27 | medium | Critical |
2,730,704,928 | react | Bug: `react-hooks/rules-of-hooks` does not allow `_Component` names underscore prefix for 'private naming convention' | Latest version `[email protected]`
## Steps To Reproduce
```js
export function _SomeComponentWeNotWantUsedOutsideThisDirectory() {
const result = useSomeHook();
}
```
## The current behavior
Produces an ESLint error:
> React Hook "useSomeHook" is called in function "_SomeComponentWeNotWantUsedOutsideThisDirectory" that is neither a React function component nor a custom React Hook function. React component names must start with an uppercase letter. React Hook names must start with the word "use". eslint([react-hooks/rules-of-hooks](https://reactjs.org/docs/hooks-rules.html))
## The expected behavior
No eslint warning, because we have clearly indicated it is a component using the uppercase letter after the underscore prefix.
## Proposed solution
https://github.com/facebook/react/blob/4a8fc0f92e0f75257962522b51a938bf4dfda77a/packages/eslint-plugin-react-hooks/src/RulesOfHooks.js#L49
could be easily changed to
```ts
return node.type === 'Identifier' && /^_?[A-Z]/.test(node.name);
```
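For clarity, here is which names the proposed pattern accepts (checked with Python's `re`, whose semantics match the JS regex for this pattern):
```python
import re

pattern = re.compile(r"^_?[A-Z]")
for name in ["SomeComponent", "_SomeComponent", "_helper", "useSomeHook"]:
    print(name, bool(pattern.match(name)))
# SomeComponent True, _SomeComponent True, _helper False, useSomeHook False
```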
I have thought about whether it should be a config option but I'm leaning towards 'less is more'. Either way, I'm willing to make a PR for this along with test coverage, but I need confirmation you will take this into consideration before I put in the work.
2,730,768,303 | vscode | Unable to resize window on wayland when native titlebar used | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.95.3 (Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813)
- OS Version: Linux x64 6.11.10-200.fc40.x86_64 (Fedora Workstation 41)
Also tested on insiders build
- VS Code Version: 1.96.0-insider (Commit: 475acb4b9bdd77d4204224990bf288ed5bfe6b0e)
- OS Version: the same
Steps to Reproduce:
1. Change TitleBar style to `native`;
2. Start VS Code with command `code --ozone-platform=wayland`. With or without options `--ozone-platform-hint` and `--enable-features` it does not matter;
3. Try to resize window with mouse;
4. You don't see any changes in cursor appearance when pointer close to application window edge or right above the edge.
The only ways to change window size are
1. Double click on titlebar to maximize window;
2. By calling `window.resizeTo(w, h);` from developer tools. | bug,upstream,electron,wayland | low | Critical |
2,730,798,892 | godot | [3.x] The scene inheritance was cleared, but Godot still gives warnings on the TileMap node. | ### Tested versions
v3.6.stable.official [de2f0f147]
### System information
w10 64
### Issue description
The Node2D2 scene inherits from the Node2D scene.
In the video, you can see that I remove the inheritance of Node2D2 from Node2D, leaving Node2D2 independent, but even so, the TileMap node still shows warnings.
https://github.com/user-attachments/assets/7dd25d4f-0f47-4617-9970-f1ff6923a243
### Steps to reproduce
Follow the steps shown in the video.
### Minimal reproduction project (MRP)
[test.zip](https://github.com/user-attachments/files/18083776/test.zip)
| bug,topic:2d | low | Minor |
2,730,840,216 | vscode | Incorrect Batch Regex Replacement | ## Environment data
- VS Code version: 1.95.3
- Jupyter Extension version (available under the Extensions sidebar): v2024.10.0
- Python Extension version (available under the Extensions sidebar): v2024.20.0
- OS (Windows | Mac | Linux distro) and version: Mac 15.2 Beta (24C5089c)
- Python and/or Anaconda version: Python 3.12.2 conda 24.9.2
- Type of virtual environment used (N/A | venv | virtualenv | conda | ...): conda
- Jupyter server running: Local
## Expected behaviour
When performing a regex-based batch replacement for the pattern `(\d)\+(\d)` with replacement `$1 + $2`, all occurrences across cells should be updated consistently.
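For reference, plain regex semantics give the following (illustrated with Python's `re`; the notebook batch replace should behave the same way on each cell independently):
```python
import re

cells = ["3+4\n2+3", "2+3\n1+2\n0+1"]
print([re.sub(r"(\d)\+(\d)", r"\1 + \2", c) for c in cells])
# ['3 + 4\n2 + 3', '2 + 3\n1 + 2\n0 + 1']
```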
Input ipynb JSON:
```json
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"3+4\n",
"2+3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"2+3\n",
"1+2\n",
"0+1"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
```
Expected single and batch replacement result:
```json
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"3 + 4\n",
"2 + 3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"2 + 3\n",
"1 + 2\n",
"0 + 1"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
```
## Actual behaviour
When performing batch replacement, the second cell's content is processed incorrectly, resulting in an incorrect update. The output looks as follows:
```json
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"3 + 4\n",
"2 + 3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"3 + 4\n",
"2 + 3\n",
"2 + 3"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
```
## Steps to reproduce:
1. Use the above input ipynb file.
2. Perform regex replacement with pattern `(\d)\+(\d)` and replacement `$1 + $2` in the Jupyter notebook.
3. Compare results for single and batch replacement modes.
## Logs
There is no relevant output in the Jupyter panel.
I donโt know whether this is an issue with VS Code or Jupyter, but this kind of error only seems to occur in .ipynb files. | bug,notebook-find | low | Critical |
2,730,848,507 | vscode | js-debug terminal icon surrounded by $() in Settings editor | Testing https://github.com/microsoft/vscode/issues/123790

I have two js-debug options because I seem to have both stable and nightly, but they both have improper icons. | bug,settings-editor | low | Critical |
2,730,870,750 | flutter | [Proposal] Add an Api for RemoveUntil Route without an overhead to dispose intermediate routes | ### Use case
Quickly go back to a deep route without triggering didPopNext or similar callbacks on the intermediate routes.
### Proposal
like this:
> flutter\packages\flutter\lib\src\widgets\navigator.dart:
```dart
void removeUntil(RoutePredicate predicate) {
_RouteEntry? candidate = _lastRouteEntryWhereOrNull(_RouteEntry.isPresentPredicate);
while (candidate != null) {
if (predicate(candidate.route)) {
return;
}
removeRoute(candidate.route);
candidate = _lastRouteEntryWhereOrNull(_RouteEntry.isPresentPredicate);
}
}
```
so we can add a new function such as:
> get-4.6.6\lib\get_navigation\src\extension_navigation.dart
```dart
void removeUntil(RoutePredicate predicate, {int? id}) {
return global(id).currentState?.removeUntil(predicate);
}
``` | c: new feature,framework,f: routes,c: proposal,P3,team-framework,triaged-framework | low | Minor |
2,730,957,736 | vscode | Outline does not track current position when "follow cursor" is on | Type: <b>Bug</b>
No matter that "follow cursor" is switched on, the Outline doesn't track location.
Here is the proof:



VS Code version: Code 1.90.0 (89de5a8d4d6205e5b11647eb6a74844ca23d2573, 2024-06-04T19:43:07.605Z)
OS version: Linux x64 6.4.6-060406-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) CPU E5-2696 v4 @ 2.20GHz (88 x 1197)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|1, 2, 2|
|Memory (System)|251.76GB (227.01GB free)|
|Process Argv|--crash-reporter-id 27d42247-63fb-4d9e-9cb0-87d9974843dc|
|Screen Reader|no|
|VM|50%|
|DESKTOP_SESSION|ubuntu-xorg|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu-xorg|
|XDG_SESSION_TYPE|x11|
</details><details><summary>Extensions (53)</summary>
Extension|Author (truncated)|Version
---|---|---
numbered-bookmarks|ale|8.4.0
project-manager|ale|12.8.0
zoomer|ant|0.3.1
vscode-django|bat|1.15.0
docs-view|bie|0.1.0
path-intellisense|chr|2.9.0
gitignore|cod|0.9.0
git-extension-pack|don|0.1.3
githistory|don|0.6.20
python-environment-manager|don|1.2.4
python-extension-pack|don|1.7.0
gitlens|eam|15.1.0
code-runner|for|0.12.2
vscode-edit-csv|jan|0.9.1
vscode-icon-theme|jtl|1.6.6
vsc-python-indent|Kev|1.18.0
vscode-checkpoints|mic|1.3.3
vscode-docker|ms-|1.29.1
csdevkit|ms-|1.6.8
csharp|ms-|2.31.19
vscode-dotnet-runtime|ms-|2.0.6
vscodeintellicode-csharp|ms-|2.1.11
black-formatter|ms-|2024.2.0
debugpy|ms-|2024.6.0
pylint|ms-|2023.10.1
python|ms-|2024.8.0
vscode-pylance|ms-|2024.6.1
datawrangler|ms-|1.2.0
jupyter|ms-|2024.5.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.18
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.369.0
remote-ssh|ms-|0.110.1
remote-ssh-edit|ms-|0.86.0
remote-wsl|ms-|0.88.2
vscode-remote-extensionpack|ms-|0.25.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.1
vs-keybindings|ms-|0.2.1
one-dark-theme|msk|1.14.2
autodocstring|njp|0.6.1
git-file-history|pom|1.0.1
gpack|Sey|2.0.0
markdown-preview-enhanced|shd|0.8.13
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
vscode-conventional-commits|viv|1.25.0
jinja|who|0.0.8
markdown-all-in-one|yzh|3.6.2
material-theme|zhu|3.17.2
vscode-open-in-github|ziy|1.3.6
(12 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscoreces:30445986
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonidxpt:30866567
pythonnoceb:30805159
asynctok:30898717
pythontestfixt:30902429
pythonregdiag2:30936856
pythonmypyd1:30879173
pythoncet0:30885854
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
jchc7451:31067544
chatpanelt:31048053
dsvsc021:30996838
9c06g630:31013171
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
pythonprt:31056678
dwnewjupyter:31046869
26j00206:31048877
```
</details>
<!-- generated by issue reporter -->
| bug,notebook-toc-outline | low | Critical |
2,730,959,550 | vscode | "Go To" button is not appearing when running a cell | 
There is one more very annoying problem: only when reporting in THIS extension, the data below is NOT automatically populated (and I reported this as another bug).
## Environment data
Everything latest version.
I won't type this in manually by hands: fix your bugs in the reporting form, please!
When reporting from the VS code, it is NOT automatically populated.
- VS Code version: XXX
- Jupyter Extension version (available under the Extensions sidebar): XXX
- Python Extension version (available under the Extensions sidebar): XXX
- OS (Windows | Mac | Linux distro) and version: XXX
- Python and/or Anaconda version: XXX
- Type of virtual environment used (N/A | venv | virtualenv | conda | ...): XXX
- Jupyter server running: Local | Remote | N/A
## Expected behaviour
XXX
## Actual behaviour
XXX
## Steps to reproduce:
[**NOTE**: Self-contained, minimal reproducing code samples are **extremely** helpful and will expedite addressing your issue]
1. XXX
## Logs
<details>
<summary>Output for <code>Jupyter</code> in the <code>Output</code> panel (<code>View</code>โ<code>Output</code>, change the drop-down the upper-right of the <code>Output</code> panel to <code>Jupyter</code>)
</summary>
<p>
```
XXX
```
</p>
</details>
| bug,notebook-execution | low | Critical |
2,730,967,522 | vscode | keybindings to "find" next/previous occurrences not working? | ### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
Hi, on a regular file (i.e. a Python file) I can use my keybindings to find the next/previous occurrence of the string defined in the find widget (the one appearing in the top-right input box). That works well using:
```
{ "key": "ctrl+n", "command": "editor.action.nextMatchFindAction"},
{ "key": "ctrl+p", "command": "editor.action.previousMatchFindAction"},
```
However, it does not seem to work in a notebook file. I can see with "Keyboard Shortcuts Troubleshooting" that these commands are indeed triggered, but the next/previous occurrence is never focused.
Are these commands supposed to work in notebooks?
If not, which commands should I use?
Thanks in advance; I am trying to set up a config where I use the mouse as little as possible.
### VS Code Version
1.89.1
### Jupyter Extension Version
v2024.4.0
### Jupyter logs
_No response_
### Coding Language and Runtime Version
_No response_
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
None | bug,notebook-cell-editor,notebook-find | low | Minor |
2,730,974,153 | flutter | `matchesGoldenFile` when imported by `integration_test` routes to the host system | Would fix https://github.com/flutter/flutter/issues/143299.
The implementation of [`matchesGoldenFile`](https://api.flutter.dev/flutter/flutter_test/matchesGoldenFile.html) is a wrapper around the current [`goldenFileComparator`](https://api.flutter.dev/flutter/flutter_test/goldenFileComparator.html), and works like this:
```dart
if (autoUpdateGoldenFiles) {
await goldenFileComparator.update(testNameUri, buffer);
return null;
}
try {
final bool success = await goldenFileComparator.compare(buffer, testNameUri);
return success ? null : 'does not match';
} on TestFailure catch (ex) {
return ex.message;
}
```
The default `goldenFileComparator` is [`TrivialComparator`](https://api.flutter.dev/flutter/flutter_test/TrivialComparator-class.html), which is effectively a no-op, but the one most Flutter users get by default is [`LocalFileComparator`](https://api.flutter.dev/flutter/flutter_test/LocalFileComparator-class.html), which uses both `dart:ui` and `dart:io`; the custom comparators written as part of the internal [`flutter_goldens` package](https://github.com/flutter/flutter/blob/master/packages/flutter_goldens/lib/flutter_goldens.dart) also use `dart:ui` (through `flutter_test`) and `dart:io`.
When running as an _integration test_, access to the local (host) file system is not guaranteed (it will happen to work on desktop, but fail on iOS, Android, and Web). We would like (1) a _routing_ `goldenFileComparator` that knows to talk to a remote server (i.e. the driver script) and for (2) the actual implementation of the comparator to live (and run) on the host as a plain Dart-VM script that does not have access to `dart:ui`.
Another option, which is perhaps more palatable, is to continue doing comparisons on the device, and only use the host for IO access (i.e. read/write files, or make calls into Skia Gold). This would be a bit more complicated in some ways and easier in others.
There is something called [`WebGoldenComparator`](https://api.flutter.dev/flutter/flutter_test/WebGoldenComparator-class.html) that claims to do exactly this, so maybe it could conceptually be reused for non-web use cases and renamed `ExternalWebGoldenComparator` (or such).
---
My proposal would be to add this behind an experimental runtime flag in order to iterate without breaking users. | c: proposal,f: integration_test,P1,fyi-web | medium | Critical |
2,730,974,210 | pytorch | If a dynamic value is later specialized, it is not removed from parameter list | ### ๐ Describe the bug
If we make something dynamic, we add the symexpr to the parameter list; if we later specialize it, we do NOT remove it from the parameter list. This looks like an oversight and the symexpr should be removed.
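A purely illustrative sketch of the missing cleanup step (all names hypothetical; the real fix belongs wherever the parameter list is finalized):
```python
def prune_specialized_params(params, specialized_syms):
    # A SymInt placeholder whose symbol was later specialized to a
    # constant should no longer remain a graph input.
    return [
        p for p in params
        if getattr(p, "sym_expr", None) not in specialized_syms
    ]
```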
I have not yet been able to write a local repro but the internal job looks like
When dynamic,
```
def forward(self, a: "bf16[128, 1735, 192][333120, 192, 1]cuda:0", b: "bf16[672, 1735][1735, 1]cuda:0"):
```
When later specialized,
```
def forward(self, a: "bf16[128, 385, 192][73920, 192, 1]cuda:0", b: "Sym(385)", c: "bf16[672, 385][385, 1]cuda:0",
```
(param names anonymized)
Guards point to tensor.view for the specialization. Message me for FB only repro
### Versions
main
cc @chauhang @penguinwu @ezyang @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,731,000,033 | go | net/http: builtin slash-suffix redirect does not persist the raw (encoded) url | ### Go version
go version go1.22.6 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/westley/.cache/go-build'
GOENV='/home/westley/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/westley/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/westley/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.22.6'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build4141769775=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
https://go.dev/play/p/dw5wFNBPzix (go 1.23, sometimes the playground time outs, just rerun it)
The default/builtin slash-suffix redirector does not persist the raw url, specifically a url encoded url.
### What did you see happen?
Got:
* Request: `/api/resource%2fsubdir/info` returns `<a href="/api/resource/subdir/info/">Moved Permanently</a>.`
This returns a URL that is invalid and does not match the original path.
### What did you expect to see?
Expected:
* Request: `/api/resource%2fsubdir/info` should return `<a href="/api/resource%2fsubdir/info/">Moved Permanently</a>.`
which preserves the percent-encoded URL. | NeedsInvestigation | low | Minor |
2,731,004,119 | next.js | MDX Plugin String Format resolution doesn't work with ESM plugins with multiple exports | ### Link to the code that reproduces this issue
https://github.com/wesbos/next-mdx-plugin-string-issue
### To Reproduce
When using strings to import an MDX rehype plugin, the Next.js importer fails.
I believe this happens only when the package has multiple ESM exports, like this package: https://github.com/stefanprobst/rehype-extract-toc/blob/main/package.json
Thanks to @karlhorky for linking me to the possible commit / code. CC @timneutkens
https://github.com/vercel/next.js/pull/72802/files#diff-d5904dff78d88856dc003d263e6f70f0a607166230fba8e4b947a3bae5e4e87cR10
```js
import createMDX from "@next/mdx";
const withMDX = createMDX({
options: {
rehypePlugins: [
["@stefanprobst/rehype-extract-toc"],
["@stefanprobst/rehype-extract-toc/mdx"],
],
},
});
```
We get the Error:
```
Error: No "exports" main defined in /Users/wesbos/Sites/delete-me/mdx-plugin-issue/node_modules/@stefanprobst/rehype-extract-toc/package.json
at Array.map (<anonymous>) {
code: 'ERR_PACKAGE_PATH_NOT_EXPORTED'
}
โจฏ unhandledRejection: Error: No "exports" main defined in /Users/wesbos/Sites/delete-me/mdx-plugin-issue/node_modules/@stefanprobst/rehype-extract-toc/package.json
at Array.map (<anonymous>) {
code: 'ERR_PACKAGE_PATH_NOT_EXPORTED'
}
โจฏ unhandledRejection: Error: No "exports" main defined in /Users/wesbos/Sites/delete-me/mdx-plugin-issue/node_modules/@stefanprobst/rehype-extract-toc/package.json
at Array.map (<anonymous>) {
code: 'ERR_PACKAGE_PATH_NOT_EXPORTED'
}
```
Or
```
Error: Package subpath './mdx' is not defined by "exports" in /Users/wesbos/Sites/delete-me/mdx-plugin-issue/node_modules/@stefanprobst/rehype-extract-toc/package.json
at Array.map (<anonymous>) {
code: 'ERR_PACKAGE_PATH_NOT_EXPORTED'
}
```
This error does not happen if the plugin is imported inside `next.config.mjs` and passed as a JavaScript function, but since Turbopack requires plugins to be passed as strings, I cannot do this.
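For reference, the function form that does work looks like this (assuming both subpaths have default exports):
```js
import createMDX from "@next/mdx";
import extractToc from "@stefanprobst/rehype-extract-toc";
import extractTocExport from "@stefanprobst/rehype-extract-toc/mdx";

const withMDX = createMDX({
  options: {
    rehypePlugins: [extractToc, extractTocExport],
  },
});
```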
The error exists both with and without --turbo.
### Current vs. Expected behavior
_
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 23.1.0
npm: 10.9.0
Yarn: 1.22.22
pnpm: 9.10.0
Relevant Packages:
next: 15.0.4-canary.51 // Latest available version is detected (15.0.4-canary.51).
eslint-config-next: N/A
react: 19.0.0-beta-04b058868c-20240508
react-dom: 19.0.0-beta-04b058868c-20240508
typescript: 5.1.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Markdown (MDX)
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Markdown (MDX) | low | Critical |
2,731,019,914 | react-native | app always crashes on IOS when inputting to a TextInput the word 'near' (lowercase) | ### Description
The app always crashes on iOS when typing the word 'near' into a TextInput; see the attached Snack.
It happens every time you type slowly enough that the autocomplete feature of iOS marks some letters.
If you type really fast, before autocomplete is able to mark anything, you will be able to 'get through' (not crash).
To make it more bizarre, it crashes for any combination of lowercase/uppercase letters, as long as the last letter ('r') is lowercase.
If you do enter it with a capital 'R', even when autocorrect changes it to lowercase 'r', it will 'get through' (not crash).
But when you delete and input it again with a lowercase 'r', it will crash again.
### Steps to reproduce
Here is a snack to reproduce:
[Snack link](https://snack.expo.dev/uRdLAyV2e7gVfKBbGh4-x)
### React Native Version
0.76.3
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: macOS 14.2.1
CPU: (10) arm64 Apple M1 Pro
Memory: 305.55 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 23.3.0
path: /opt/homebrew/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 10.9.1
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.10.07.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12071903
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react: Not Found
react-native: Not Found
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
info React Native v0.76.5 is now available (your project is running on v0.76.3).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.76.5
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.76.3&to=0.76.5
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
```
### Stacktrace or Logs
```text
Using 'npx react-native log-ios', I didn't get any interesting logs:
info Tailing logs for device iPhone SE (3rd generation) (8A123318-1E64-47A1-8581-0ADE9F78E2A1)
NOTE: Most system logs have moved to a new logging system. See log(1) for more information.
Jul 5 03:00:46 ****** bootlog[0] <Notice>: BOOT_TIME 1720141246 903822
```
### Reproducer
https://snack.expo.dev/uRdLAyV2e7gVfKBbGh4-x
### Screenshots and Videos
_No response_ | Platform: iOS,Component: TextInput,Needs: Triage :mag:,Newer Patch Available | low | Critical |
2,731,033,343 | rust | Crater runs for 1.84 | Note: Please do not conduct triage on these runs without discussing how to do so with a release team member first. Thanks! | S-waiting-on-review | medium | Major |
2,731,059,376 | vscode | Missing support for asymmetric visibility when using PHP constructor property promotion | Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.95.3
- OS Version: macOS 15.1.1
Steps to Reproduce:
With asymmetric visibility

Without asymmetric visibility

The scope becomes
```
meta.function.parameters.php
meta.function.php
meta.class.body.php
meta.class.php
source.php
meta.embedded.block.php
text.html.php
```
when using asymmetric visibility instead of
```
storage.modifier.php
meta.function.parameter.promoted-property.php
meta.function.parameters.php
meta.function.php
meta.class.body.php
meta.class.php
source.php
meta.embedded.block.php
text.html.php
```
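For reference, a minimal snippet that exercises the broken path (PHP 8.4 asymmetric visibility on a promoted constructor property; the class and property names are made up):
```php
<?php

class Money
{
    public function __construct(
        // `private(set)` is the asymmetric-visibility modifier; with it
        // present, the promoted property loses its storage.modifier scope
        public private(set) int $amount,
    ) {}
}
```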
| help wanted,feature-request,php,grammar | low | Critical |
2,731,091,063 | storybook | Support showing the sidebar context menu for root entries | Roots should behave exactly like groups and components; the kebab-menu button just needs to show _next_ to the "expand all" button.

| feature request,addon: test | low | Minor |
2,731,102,158 | pytorch | [MPS] `out_channels` <= 2**16 guard for convolution is too broad | ### 🐛 Describe the bug
In https://github.com/sdatkinson/neural-amp-modeler/issues/505, @sdatkinson discovered that the guard against `out_channels` > 2**16 in convolution
https://github.com/pytorch/pytorch/blob/117b6c3e2c477793e3b5f964ec01bd0718313aae/aten/src/ATen/native/mps/operations/Convolution.mm#L168-L173
is checking all output dims, as opposed to just `out_channels`. Assuming the size limitation is only relevant for `out_channels`, we should consider relaxing this guard to check only that dimension.
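For illustration, a minimal sketch of a case that presumably trips the current guard even though `out_channels` is tiny (only the output length exceeds 2**16):
```python
import torch

# out_channels == 1, but the output *length* is > 2**16, which the
# current check treats the same way as an oversized channel dim
conv = torch.nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3).to("mps")
x = torch.randn(1, 1, 2**16 + 16, device="mps")
y = conv(x)  # expected to succeed if the guard only looked at out_channels
```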
### Versions
PyTorch version: 2.6.0a0+gite1196df
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:19:53) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+gite1196df
[pip3] torchaudio==2.5.0a0+ba696ea
[pip3] torchvision==0.20.0a0+e9a3213
[conda] numpy 2.1.3 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+gite1196df dev_0 <develop>
[conda] torchaudio 2.5.0a0+ba696ea dev_0 <develop>
[conda] torchvision 0.20.0a0+e9a3213 dev_0 <develop>
cc @malfet @kulinseth @albanD @DenisVieriu97 @jhavukainen | module: error checking,triaged,module: mps | low | Critical |
2,731,112,363 | vscode | "Modeless" controls for notebook manipulation | I absolutely love the jupyter vscode extension :-)
One feature that I would really appreciate is a way to manipulate notebook contents (add cells, remove cells, change cell type, etc.) using a "modeless" interface. That is to say, it would be nice to have the option to perform these operations using simple keyboard shortcuts, rather than the existing approach, which is to first move into "command mode" by deactivating any active cell and then issue a single-letter command. This latter approach reminds me of another text editor, and I greatly prefer the straightforward user experience of vscode over that other editor! ;-p
I notice that many notebook manipulation commands are already present in the command palette, which goes a long way toward getting me what I'd like with some custom shortcuts. The remaining part would be providing options to effectively disable the modal nature of the notebook editor, leaving it in "insert mode" all of the time.
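For example, something like the following in `keybindings.json` gets partway there (command IDs as they appear in the Command Palette; the `when` clauses are my best guess):
```json
[
  {
    "key": "ctrl+shift+b",
    "command": "notebook.cell.insertCodeCellBelow",
    "when": "notebookEditorFocused"
  },
  {
    "key": "ctrl+shift+m",
    "command": "notebook.cell.changeToMarkdown",
    "when": "notebookEditorFocused"
  }
]
```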
I *think* a very small set of tweaks could make what I want possible without much change. Notably, at the moment executing an existing cell that has a cell below brings the editor back to "command mode", and I don't think there is a command that allows one to avoid that. I think the addition of a command "execute cell and activate below" would allow one to execute cells without entering "command mode". There may well be other edge cases that I haven't stumbled upon that would need tweaks or additions to avoid "command mode", but this is the most noticeable.
While it might be possible to make things effectively "modeless" with a few small changes and a custom shortcut configuration, I wonder whether it might also be of value to have a setting which fully stops cells from being deactivated, thus making it truly modeless.
| feature-request,notebook-commands | low | Minor |
2,731,158,223 | go | x/tools/internal/refactor/inline: reduce call if neither binding decl nor callee body would create name conflicts | Inlining even simple calls such as:
```go
_ = astutil.NodeContains(p.File, typeError.Pos)
```
causes literalization due to the need for a binding decl within the function body:
```go
_ = func() bool { var n ast.Node = p.File; return n.Pos() <= typeError.Pos && typeError.Pos <= n.End() }()
```
But the name `n` is not used in the caller block, so it would be fine to melt the call down to:
```go
var n ast.Node = p.File
_ = n.Pos() <= typeError.Pos && typeError.Pos <= n.End()
```
FWIW, without the `_ = ...` assignment, the inliner does reduce the call:
```
{
var n ast.Node = p.File
_ = n.Pos() <= typeError.Pos && typeError.Pos <= n.End()
}
```
@findleyr | gopls,Tools,Refactoring | low | Critical |
2,731,162,194 | vscode | Terminal toggle size to content width vertical scroll bar is in wrong position | Repro:
1. `ls`
2. Size terminal narrow so lines wrap
3. Press alt+z

5. shift+wheel down

| bug,terminal-layout | low | Minor |
2,731,163,602 | vscode | Terminal toggle size to content width lacks horizontal scroll bar | Repro:
1. `ls`
2. Size terminal narrow so lines wrap
3. Press alt+z

5. shift+wheel down

👉 there's no horizontal scroll bar (vertical scroll bar position is tracked in https://github.com/microsoft/vscode/issues/235776) | bug,terminal-layout | low | Minor |
2,731,177,554 | svelte | Svelte scoped global styles do not work | ### Describe the bug
As discussed in this [closed feature request](https://github.com/sveltejs/svelte/issues/14578), the newly introduced scoped global styles do not work in most cases because of CSS specificity.
Because of CSS specificity, a selector with more classes takes precedence over one with fewer when an element is rendered. So when I want to adjust an element in one of my +page.svelte files to make it work perfectly for that page, I will usually add a class to it (most of my pages have special cases that need custom styles).

However, as you can see, my custom styles are being ignored in favour of the global styles from my +layout.svelte file.
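To illustrate with plain CSS (hypothetical compiled output; the scoping class name is made up):
```css
/* what a scoped global rule like `div :global(.title)` in +layout.svelte
   roughly compiles to: the scoping class bumps its specificity */
div.svelte-abc123 .title { color: red; }  /* specificity (0,2,1) */

/* my override in +page.svelte */
.title.custom { color: blue; }            /* specificity (0,2,0), so it loses */
```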
I did try using IDs instead of classes to select the element, and that does work in some cases, but it would not work if I wanted to override the same class name that is used in the +layout.svelte file (without creating a second selector for every element).

Also, IDs are generally for JavaScript and classes for CSS; I have not used IDs in my design work for a long, long time. It would also require a substantial amount of work to change all classes to IDs.
### Reproduction
Download this repository: https://github.com/Loizzus/SvelteScopedGlobalStylesIssue
npm i
npm run dev
### Logs
_No response_
### System Info
```shell
System:
OS: Windows 11 10.0.22631
CPU: (12) x64 AMD Ryzen 5 3600XT 6-Core Processor
Memory: 14.93 GB / 31.91 GB
Binaries:
Node: 20.11.0 - C:\Program Files\nodejs\node.EXE
npm: 10.2.4 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: Chromium (130.0.2849.80)
Internet Explorer: 11.0.22621.3527
npmPackages:
svelte: ^5.0.0 => 5.10.1
```
### Severity
annoyance | documentation | low | Critical |
2,731,190,977 | deno | Array.indexOf() returns -1 for negative integers which exist in arrays of length >= 48 | Version: Deno 2.1.3
poc:
```ts
for (let value = -2; value < 2; value++) {
for (let fill = -2; fill < 2; fill++) {
if (value == fill) continue;
for (let i = 0; i < 3; i++) {
const length = 5 * Math.pow(10, i);
const index = Math.floor(length / 2);
const array = new Array(length).fill(fill);
array[index] = value;
const indexOf = array.indexOf(value);
console.log({ value, fill, length, index, indexOf });
}
}
}
```
when run under node with:
```sh
docker run -it node node
.editor
```
[copy and paste poc code above]
[ctrl-d]
the `index` and `indexOf` always correctly match
when run under deno with:
`docker run -it denoland/deno repl`
[copy and paste poc code above]
`indexOf` is -1 when the array is of length >= 50 and we're searching for an item whose value is a negative integer
find the array size at which the bug occurs:
```ts
for (let length = 5; length <= 50; length++) {
const index = Math.floor(length / 2);
const array = new Array(length).fill(0);
array[index] = -1;
const indexOf = array.indexOf(-1);
console.log({ length, index, indexOf });
}
```
this works as expected under node but starts getting indexOf===-1 at length 48 in deno.
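In the meantime, a possible workaround is `findIndex`, which takes a callback and should bypass whatever fast path is misbehaving (assuming the bug is limited to `indexOf`):
```ts
const array = new Array(50).fill(0);
array[25] = -1;
console.log(array.indexOf(-1));                // -1 on affected Deno builds
console.log(array.findIndex((x) => x === -1)); // 25
```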
| upstream,v8 | low | Critical |
2,731,197,268 | electron | net.WebSocket | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Our application needs to support web proxies. It also needs to open WebSocket connections from the main process.
The natural choice would be [the `net` module](https://www.electronjs.org/docs/latest/api/net), which "uses Chromium's native networking library instead of the Node.js implementation, offering better support for web proxies." However, it only supports HTTP(S) requests, not WebSocket.
### Proposed Solution
[The `net` module](https://www.electronjs.org/docs/latest/api/net) should expose Chromium's WebSocket implementation like this:
```js
const { net } = require('electron')
new net.WebSocket(...)
```
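Usage might then mirror the browser `WebSocket` pattern; the event names below are entirely hypothetical, for illustration only:
```js
const { net } = require('electron')

// Hypothetical API shape, not an existing Electron interface
const ws = new net.WebSocket('wss://example.com/socket')
ws.on('open', () => ws.send('hello'))
ws.on('message', (data) => console.log('received:', data))
ws.on('close', () => console.log('closed'))
```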
### Alternatives Considered
I have been unable to find any alternative implementation that lets us open WebSocket connections from the main process and support all web proxies. I would also appreciate any advice on alternatives to the Electron `net` module.
I found https://github.com/jfromaniello/mac-ca and https://github.com/ukoloff/win-ca, but
* They are OS-specific. I would like to support all platforms supported by Electron.
* It's unclear to me whether this supports all web proxies. Is the only significant difference between the Node.js and Chromium network stacks their list of root CAs?
### Additional Information
I found [this previous issue](https://github.com/electron/electron/issues/8931) which is very similar. It was closed because "it will take a considerable amount of work since the net layer of chromium doesn't expose it directly as a part of URLRequest." 7.5 years later, perhaps this has changed? | enhancement :sparkles: | low | Minor |
2,731,209,084 | TypeScript | Allow `interface extends object` syntax | ### 🔍 Search Terms
interface extends object
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
I propose to allow the following syntax:
```typescript
interface MyInterface extends object {}
```
This would have identical semantics to the following (currently legal) syntax:
```typescript
type _object = object
interface MyInterface extends _object {}
```
Details:
1. No primitive may be assigned to such an interface or an intersection containing such an interface
2. This requirement is inherited by any extending interfaces
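Under the proposed semantics, assignment would behave like this (sketch):
```ts
interface MyInterface extends object {}

declare const s: string;
const a: MyInterface = s;  // error: primitives are not assignable
const b: MyInterface = {}; // ok: any non-primitive is assignable
```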
I'm requesting this feature mainly because it was the only pain-point mentioned in this [related issue](https://github.com/microsoft/TypeScript/issues/60582). To quote:
> No one is going to turn on this behavior if it means every time they write
> ```
> interface Person {
> name: string
> }
> ```
> they actually need to write
> ```
> interface _PersonFields {
> name: string;
> }
> type Person = object & _PersonFields;
> ```
Regardless of the outcome of that other issue however, it would be nice to be able to `extends object` without going through that kind of hassle.
### 📃 Motivating Example
A small usability improvement to allow interfaces to more easily declare that primitives cannot be assigned to them.
### 💻 Use Cases
1. What do you want to use this for? To avoid non-idiomatic verbosity
2. What shortcomings exist with current approaches? Non-idiomatic verbosity
3. What workarounds are you using in the meantime? Non-idiomatic verbosity
| Suggestion,In Discussion | low | Minor |