id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---
2,689,325,111 | excalidraw | [Feature Request] Add Master Frame/Template functionality similar to PowerPoint master slides | ### Problem Description
When creating multiple frames in Excalidraw+, there's currently no way to maintain consistent elements (like logos, headers, footers) across all frames. Each element needs to be manually copied and placed in each frame individually.
### Proposed Solution
Implement a master frame/template feature similar to PowerPoint's master slides, where:
- Users can define a master frame with common elements
- All new frames automatically inherit these elements
- Updates to master elements reflect across all frames
- Option to override master elements in specific frames if needed
### Use Cases
1. Adding company logos to all frames in a presentation
2. Maintaining consistent header/footer information
3. Applying common background elements or watermarks
4. Ensuring brand consistency across multiple frames
### Benefits
- Saves time in creating consistent multi-frame documents
- Reduces manual work when updating common elements
- Ensures consistency across all frames
- Makes it easier to maintain brand guidelines | E+/presentations | low | Minor |
2,689,334,331 | vscode | Chat response provider with identifier <REDACTED: user-file-path> is already registered. | ```javascript
Error: Chat response provider with identifier <REDACTED: user-file-path> is already registered.
at OPe.registerLanguageModelChat in out-vscode/vs/workbench/contrib/chat/common/vs/workbench/contrib/chat/common/languageModels.ts:287:10
at LAt.$registerLanguageModelProvider in out-vscode/vs/workbench/api/browser/vs/workbench/api/browser/mainThreadLanguageModels.ts:56:45
at Mut.S in src/vs/workbench/services/extensions/common/rpcProtocol.ts:458:17
at Mut.Q in src/vs/workbench/services/extensions/common/rpcProtocol.ts:443:32
at Mut.M in src/vs/workbench/services/extensions/common/rpcProtocol.ts:373:19
at Mut.L in src/vs/workbench/services/extensions/common/rpcProtocol.ts:299:10
at $Z.value in src/vs/workbench/services/extensions/common/rpcProtocol.ts:161:57
at x.B in src/vs/base/common/event.ts:1243:13
at x.fire in src/vs/base/common/event.ts:1274:9
at L6.fire in src/vs/base/parts/ipc/common/ipc.net.ts:652:19
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=90868576241dd25c6c5da64adadc0a09de91a9fe&bH=e7939965-890a-1284-ee6b-48c4d582730c) | bug,error-telemetry,chat | low | Critical |
2,689,335,136 | vscode | TreeError [Chat] Tree element not found: [object Object] | ```javascript
Error: TreeError [Chat] Tree element not found: [object Object]
at Ose.o in src/vs/base/browser/ui/tree/objectTreeModel.ts:313:10
at Ose.expandTo in src/vs/base/browser/ui/tree/objectTreeModel.ts:261:25
at jk.reveal in out-vscode/vs/base/browser/ui/tree/vs/base/browser/ui/tree/abstractTree.ts:3037:14
at PD.sb in out-vscode/vs/workbench/contrib/chat/browser/vs/workbench/contrib/chat/browser/chatWidget.ts:455:14
at PD.layout in out-vscode/vs/workbench/contrib/chat/browser/vs/workbench/contrib/chat/browser/chatWidget.ts:1140:9
at Ije.x in out-vscode/vs/workbench/contrib/inlineChat/browser/vs/workbench/contrib/inlineChat/browser/inlineChatWidget.ts:338:20
at Ije.x in out-vscode/vs/workbench/contrib/inlineChat/browser/vs/workbench/contrib/inlineChat/browser/inlineChatWidget.ts:553:9
at Ije.layout in out-vscode/vs/workbench/contrib/inlineChat/browser/vs/workbench/contrib/inlineChat/browser/inlineChatWidget.ts:319:9
at Wje.E in out-vscode/vs/workbench/contrib/inlineChat/browser/vs/workbench/contrib/inlineChat/browser/inlineChatZoneWidget.ts:161:15
at Wje.w in src/vs/editor/contrib/zoneWidget/browser/zoneWidget.ts:298:9
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=90868576241dd25c6c5da64adadc0a09de91a9fe&bH=f99e8fe7-7324-faa4-723c-d10b0e8be5e4) | bug,error-telemetry,chat | low | Critical |
2,689,337,384 | vscode | Illegal value for lineNumber | ```javascript
Error: Illegal value for lineNumber
at Lw.getLineMaxColumn in src/vs/editor/common/model/textModel.ts:860:10
at Gte.G in src/vs/editor/browser/controller/editContext/native/nativeEditContext.ts:306:42
at Gte.C in src/vs/editor/browser/controller/editContext/native/nativeEditContext.ts:243:33
at Gte.prepareRender in src/vs/editor/browser/controller/editContext/native/nativeEditContext.ts:185:8
at Object.prepareRender in src/vs/editor/browser/view.ts:596:15
at <anonymous> in src/vs/editor/browser/view.ts:529:36
at func in src/vs/editor/browser/view.ts:759:10
at safeInvokeNoArg in src/vs/editor/browser/view.ts:529:4
at SIe.render in src/vs/editor/browser/view.ts:666:9
at <anonymous> in out-vscode/vs/editor/browser/widget/codeEditor/vs/editor/browser/widget/codeEditor/codeEditorWidget.ts:1609:26
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=90868576241dd25c6c5da64adadc0a09de91a9fe&bH=027e98a6-23bc-de8b-56b0-cb5e1d518492) | error-telemetry,editor-edit-context | low | Critical |
2,689,340,711 | PowerToys | In Windows 11, newly created, deleted, copied, or pasted files aren't refreshing in real-time. A manual refresh (F5) or restarting Windows Explorer is required | ### Microsoft PowerToys version
0.73.0
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
[PowerToysReport_2024-11-25-14-16-52.zip](https://github.com/user-attachments/files/17898182/PowerToysReport_2024-11-25-14-16-52.zip)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,689,340,858 | vscode | [140] potential listener LEAK detected, having 178 listeners already. MOST frequent listener (2): | ```javascript
Error
at oci.create in src/vs/base/common/event.ts:933:15
at RHe.q [as onModelAdded] in src/vs/base/common/event.ts:1140:34
at nu.u in out-vscode/vs/workbench/browser/vs/workbench/browser/labels.ts:160:36
at new nu in out-vscode/vs/workbench/browser/vs/workbench/browser/labels.ts:137:8
at Qht.o in src/vs/platform/instantiation/common/instantiationService.ts:162:18
at Qht.createInstance in src/vs/platform/instantiation/common/instantiationService.ts:128:18
at new sFe in out-vscode/vs/workbench/contrib/chat/browser/chatContentParts/vs/workbench/contrib/chat/browser/chatContentParts/chatAttachmentsContentPart.ts:30:70
at Qht.o in src/vs/platform/instantiation/common/instantiationService.ts:162:18
at Qht.createInstance in src/vs/platform/instantiation/common/instantiationService.ts:128:18
at fce.mb in out-vscode/vs/workbench/contrib/chat/browser/vs/workbench/contrib/chat/browser/chatListRenderer.ts:890:36
```
[Go to Errors Site](https://errors.code.visualstudio.com/card?ch=90868576241dd25c6c5da64adadc0a09de91a9fe&bH=30cf9369-5e8a-11fa-f760-8e3586c39324) | error-telemetry | low | Critical |
2,689,668,836 | pytorch | Segmentation fault (core dumped) in `segment_reduce` | ### 🐛 Describe the bug
Under specific inputs, `segment_reduce` triggered a crash.
```python
import torch
data = torch.full((6,9,1,1,8,), 7.17172e+12, dtype=torch.double)
reduce = "max"
lengths = None
indices = None
offsets = torch.full((5,0,0,9,2,), 0, dtype=torch.long)  # note the zero-sized dimensions
axis = 4
unsafe = False
initial = 7.17172e+12
torch.segment_reduce(data=data, reduce=reduce, lengths=lengths, indices=indices, offsets=offsets, axis=axis, unsafe=unsafe, initial=initial)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD | module: crash,triaged,module: python frontend,module: edge cases | low | Critical |
2,689,693,977 | rust | Non pub functions are exported as DLL symbols | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
#[no_mangle]
extern "C" fn foo() {}
```
I built a Windows DLL and use this `foo` function together with another static library written in C.
I think that this function should not be exported as a DLL symbol, but it is.
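For reference, the exported symbols of the resulting DLL can be inspected with the MSVC `dumpbin` tool (assuming the DLL is named `mylib.dll`; the actual file name depends on the crate):
```
dumpbin /exports mylib.dll
```
In my case, `foo` shows up in that list even though it is not `pub`.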
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-pc-windows-msvc
release: 1.82.0
LLVM version: 19.1.1
```
| C-discussion | low | Critical |
2,689,713,224 | pytorch | Can't use DataParallel | ### 🐛 Describe the bug
`output = model(input_data)` produces no output and never returns.
```python
import os
import torch
import torch.nn as nn
from models import encoders  # project-local module

os.environ['CUDA_VISIBLE_DEVICES'] = '3,4'  # restrict to two of the ten GPUs

model = encoders.ResnetEncoder(
    18, True, num_input_images=2)
if torch.cuda.device_count() > 1:
    print("use more GPU...")
    model = nn.DataParallel(model)
model.to('cuda')

input_data = torch.randn(10, 6, 256, 320).to('cuda')
output = model(input_data)  # hangs here and never returns (see traceback below)
print(output.shape)
```
```
use more GPU...
Traceback (most recent call last):
File "/home/daoqwj01/users/hjt/EndoDAC/trainer_flow.py", line 15, in <module>
output = model(input_data)
File "/home/daoqwj01/anaconda3/envs/endo_depth/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/daoqwj01/anaconda3/envs/endo_depth/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/daoqwj01/anaconda3/envs/endo_depth/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 193, in forward
outputs = self.parallel_apply(replicas, inputs, module_kwargs)
File "/home/daoqwj01/anaconda3/envs/endo_depth/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 212, in parallel_apply
return parallel_apply(
File "/home/daoqwj01/anaconda3/envs/endo_depth/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 118, in parallel_apply
thread.join()
File "/home/daoqwj01/anaconda3/envs/endo_depth/lib/python3.10/threading.py", line 1096, in join
self._wait_for_tstate_lock()
File "/home/daoqwj01/anaconda3/envs/endo_depth/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
if lock.acquire(block, timeout):
KeyboardInterrupt
```
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A30
GPU 1: NVIDIA A30
GPU 2: NVIDIA A30
GPU 3: NVIDIA A30
GPU 4: NVIDIA A30
GPU 5: NVIDIA A30
GPU 6: NVIDIA A30
GPU 7: NVIDIA A30
GPU 8: NVIDIA A30
GPU 9: NVIDIA A30
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libopenvino-pytorch-frontend 2024.4.0 h5888daf_2 conda-forge
[conda] mkl 2022.1.0 hc2b9512_224 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_cuda11.8_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py310_cu118 pytorch
[conda] torchtriton 3.1.0 py310 pytorch
[conda] torchvision 0.20.1 py310_cu118 pytorch
[conda] triton 3.1.0 pypi_0 pypi | needs reproduction,triaged,module: data parallel | low | Critical |
2,689,740,860 | transformers | [i18n-zh] Translating docs to Chinese: translate bertology.md into Chinese | I have translated bertology.md into Chinese, and my PR is here: #34908. This is my second contribution, and I hope to do better than last time. Anyone is welcome to review it. | WIP | low | Minor |
2,689,825,285 | svelte | Add minimum browser versions to the documentation | ### Describe the problem
Hello.
We have found that the documentation is not entirely clear about which minimum browser versions svelte@5 requires.
I would like to understand what things need to be polyfilled on older versions of browsers.
For example, on Chrome versions 70 and 71, Svelte does not work because `queueMicrotask` and `getAnimations` are missing.
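For what it's worth, `queueMicrotask` can be polyfilled along the lines of the snippet documented on MDN; a minimal sketch (`getAnimations`, by contrast, would need a full Web Animations polyfill):
```javascript
if (typeof queueMicrotask !== 'function') {
  window.queueMicrotask = (callback) =>
    Promise.resolve()
      .then(callback)
      .catch((e) => setTimeout(() => { throw e; }));
}
```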
### Describe the proposed solution
Supplement the documentation with information on browser support
### Importance
would make my life easier | documentation | low | Major |
2,689,834,318 | TypeScript | Static Index Signature with Generics | ### 🔍 Search Terms
- static mappings with generics
- static generic index signature
- class index signature static
- generic index signature
- https://github.com/microsoft/TypeScript/issues/58258
- https://github.com/microsoft/TypeScript/issues/17867
- https://www.typescriptlang.org/docs/handbook/2/classes.html#:~:text=%5Bs%3A%20string%5D%3A%20boolean%20%7C%20((s%3A%20string)%20%3D%3E%20boolean)%3B
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Currently, static index signatures can be declared for a type under the condition that the mapping is not a literal type or a generic type. My proposal is to allow TypeScript's powerful typing system on static indexing by removing this restriction. As you can see below, there is one utility type that already allows the elimination of a string subset, that being `Uppercase`.
```ts
class Abc{
static readonly [k: number]: number;
}
class Def{
static readonly [k: number | Uppercase<string>]: number; // This is allowed
static hello() {} // Since `'hello' extends Uppercase<string> ? true : false` yields false, this is allowed.
}
```
By allowing generic indexing, the "value" type can be further specified by the "key" type. Currently, static index signatures with a string mapping have the problem of overriding (lowercase) method types, causing the type checker to warn about a conflict. By allowing TS' conventional `T in Types` notation, the set of index keys can be narrowed and shown in IntelliSense, and the value type can be derived from the key set.
### 📃 Motivating Example
TypeScript's powerful type system could then be used in static and non-static index signatures, allowing you to type generic indexing.
### 💻 Use Cases
This can be used to provide great simplifications for API writers by exposing a niche interface and type hinting on a class. Currently this is possible by obfuscating a class with another type, which is not that great, or by using a typed proxy, which is currently the best approach. However, all of these mean that the project's classes have to be split and become harder to maintain. There is no way, other than with `Uppercase`, to limit static string index signatures.
By using generic index signatures, and being able to use them in a static context, a lot of the typing workload can be handled within the class itself.
```ts
type Format = {
    'PlayerJoin': [number, string],
    'PlayerChat': [number, string],
    'PlayerLeave': [number],
}
export class MessageType<F extends keyof Format> {
    // This will allow indexing `MessageType` as a collection of data entries.
    [Index in number]: Format[F][Index];
    // Type System: Format[F][0] ~> [number, ...][0] ~> number
    public get playerId() { return this[0]; }
    // This is already possible, nothing of too much interest
    constructor(messageType: F) { return new Proxy(...) }
}
```
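A hypothetical usage of the class above, under the proposed semantics (the resolved types in the comments are assumptions about how the lookup would behave):
```ts
const join = new MessageType('PlayerJoin');
const id: number = join[0];   // Format['PlayerJoin'][0] ~> number
const name: string = join[1]; // Format['PlayerJoin'][1] ~> string
```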
The fun part comes when you have a collection like `Block` or `Item`, where you not only want to have some internal state, but also want to list all "content" statically on the class.
```ts
const Blocks = [
    { id: 0, name: "air" },
    { id: 1, name: "earth" },
    { id: 2, name: "water" },
] as const; // `as const` keeps the ids as the literal types 0 | 1 | 2
export class Block<Id extends (typeof Blocks)[number]['id']> {
    // Currently, IntelliSense will tell you that using a generic argument from the class itself is not supported. This should be kept as a feature, but newly introduced generic parameters should not be hindered.
    static readonly [SId in (typeof Blocks)[number]['id']]: Block<SId>;
    ...
}
``` | Suggestion,Awaiting More Feedback | low | Minor |
2,689,851,262 | kubernetes | Failure cluster [5c42e849...]: TestApfWatchHandlePanic/post-execute_panic DATA RACE | ### Failure cluster [5c42e84947d532cf1eee](https://go.k8s.io/triage#5c42e84947d532cf1eee)
The test accesses a variable marked as "atomicRead" without using an atomic read...
https://github.com/kubernetes/kubernetes/blob/e4c1f980b76fecece30c2f77885a7117192170a6/staging/src/k8s.io/apiserver/pkg/server/filters/priority-and-fairness_test.go#L188
Most recently touched in https://github.com/kubernetes/kubernetes/pull/127029. Prior attempts to fix this include https://github.com/kubernetes/kubernetes/pull/126574, reverted in https://github.com/kubernetes/kubernetes/pull/127089.
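For reference, a minimal sketch of the usual fix for this pattern, pairing the atomic write with an atomic read (names below are illustrative, not the test's actual code):
```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var postExecutePanics int32 // written concurrently by handler goroutines

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// Write side, analogous to the atomic.AddInt32 in priority-and-fairness.go.
		atomic.AddInt32(&postExecutePanics, 1)
	}()
	wg.Wait()

	// Read side: a plain read of postExecutePanics is what the race detector
	// flags in the test; it must go through sync/atomic as well.
	fmt.Println(atomic.LoadInt32(&postExecutePanics) == 1)
}
```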
cc @tkashem
##### Error text:
```
Failed; Failed
=== RUN TestApfWatchHandlePanic/post-execute_panic
==================
WARNING: DATA RACE
Read at 0x0000055ea734 by goroutine 5917:
k8s.io/apiserver/pkg/server/filters.newApfHandlerWithFilter.func2.1.1()
/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/server/filters/priority-and-fairness_test.go:188 +0x30
runtime.deferreturn()
...
Previous write at 0x0000055fe734 by goroutine 5974:
sync/atomic.AddInt32()
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/cache/mod/golang.org/[email protected]/src/runtime/race_amd64.s:281 +0xb
sync/atomic.AddInt32()
<autogenerated>:1 +0x14
k8s.io/apiserver/pkg/server/filters.(*priorityAndFairnessHandler).Handle.func7()
/home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:206 +0x1a7
k8s.io/apiserver/pkg/server/filters.(*fakeWatchApfFilter).Handle()
```
#### Recent failures:
[11/20/2024, 12:40:47 PM ci-kubernetes-unit](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1859200012680957952)
/kind failing-test
/kind flake
/sig api-machinery
| sig/api-machinery,kind/flake,kind/failing-test,triage/accepted | low | Critical |
2,689,885,855 | rust | thread 'rustc' panicked at compiler/rustc_metadata/src/rmeta/def_path_hash_map.rs:23:54: called `Option::unwrap()` on a `None` value | This is my first time opening an issue in the Rust repository.
I encountered a rustc error for the first time, and since the error message included "this is a bug," I decided to report it here.
I am developing a program where an embedded device communicates with a host PC. The main crates I am using are:
- tokio
- egui, eframe
- postcard-rpc
This project is private, so I cannot share the entire source code. However, I will try to provide a minimal reproducible example if possible. At the moment, I do not have time to create one, nor do I have any idea where to start. I'll read "Rust Bug Minimization Patterns" later... but before reading it (too long!!!! XD), I decided to open an issue first.
I couldn't figure out what is causing the error, so if you could provide any hints or guidance, I would greatly appreciate it.
<!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
Sorry, I can't provide the code right now. I'll do it later!
```Rust
<code>
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-apple-darwin
release: 1.82.0
LLVM version: 19.1.1
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_metadata/src/rmeta/def_path_hash_map.rs:23:54:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x10e76eb26 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hcaf66bc4c0c453df
1: 0x10baf542b - core::fmt::write::hc9c5f1836b413410
2: 0x10e7627f2 - std::io::Write::write_fmt::h49df280499063c09
3: 0x10e7714f8 - std::panicking::default_hook::{{closure}}::h52c0b2f44f6107c5
4: 0x10e771182 - std::panicking::default_hook::h5a6cf31501c161b2
5: 0x10c6c189d - std[cb1cdfafde4df3cf]::panicking::update_hook::<alloc[979052cd25504fde]::boxed::Box<rustc_driver_impl[aca610d65443316e]::install_ice_hook::{closure#0}>>::{closure#0}
6: 0x10e772745 - std::panicking::rust_panic_with_hook::hda4640ee332466e9
7: 0x10e771b32 - std::panicking::begin_panic_handler::{{closure}}::haa3060694b34ea3d
8: 0x10e76efd9 - std::sys::backtrace::__rust_end_short_backtrace::h8eb44913cfe71457
9: 0x10e7717ac - _rust_begin_unwind
10: 0x11138c2fa - core::panicking::panic_fmt::h31edc3d6ff0aadca
11: 0x11138c3a4 - core::panicking::panic::hfd2e4211468d1768
12: 0x11138c248 - core::option::unwrap_failed::h5efa68320d76c7d1
13: 0x10d2163f1 - <rustc_metadata[47ce8b317361ba99]::rmeta::decoder::cstore_impl::provide_cstore_hooks::{closure#0} as core[4a130405b5326760]::ops::function::FnOnce<(rustc_middle[1883ba4f984b676]::query::plumbing::TyCtxtAt, rustc_span[94136502c03fbd56]::def_id::DefPathHash, rustc_span[94136502c03fbd56]::def_id::StableCrateId)>>::call_once
14: 0x10d3fbe2b - <rustc_middle[1883ba4f984b676]::ty::context::TyCtxt>::def_path_hash_to_def_id
15: 0x10dd8751a - rustc_query_impl[abd10f3ee75d8aa1]::plumbing::force_from_dep_node::<rustc_query_impl[abd10f3ee75d8aa1]::DynamicConfig<rustc_query_system[59d7a088f82cb72f]::query::caches::DefIdCache<rustc_middle[1883ba4f984b676]::query::erase::Erased<[u8; 8usize]>>, false, false, false>>
16: 0x10dd18e19 - <rustc_query_impl[abd10f3ee75d8aa1]::plumbing::query_callback<rustc_query_impl[abd10f3ee75d8aa1]::query_impl::type_of::QueryType>::{closure#0} as core[4a130405b5326760]::ops::function::FnOnce<(rustc_middle[1883ba4f984b676]::ty::context::TyCtxt, rustc_query_system[59d7a088f82cb72f]::dep_graph::dep_node::DepNode)>>::call_once
17: 0x10dba8dfe - <rustc_query_system[59d7a088f82cb72f]::dep_graph::graph::DepGraphData<rustc_middle[1883ba4f984b676]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt>
18: 0x10dba8d87 - <rustc_query_system[59d7a088f82cb72f]::dep_graph::graph::DepGraphData<rustc_middle[1883ba4f984b676]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt>
19: 0x10dba8d87 - <rustc_query_system[59d7a088f82cb72f]::dep_graph::graph::DepGraphData<rustc_middle[1883ba4f984b676]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt>
20: 0x10dba8d87 - <rustc_query_system[59d7a088f82cb72f]::dep_graph::graph::DepGraphData<rustc_middle[1883ba4f984b676]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt>
21: 0x10dba8b33 - <rustc_query_system[59d7a088f82cb72f]::dep_graph::graph::DepGraphData<rustc_middle[1883ba4f984b676]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt>
22: 0x10dbf896c - rustc_query_system[59d7a088f82cb72f]::query::plumbing::try_execute_query::<rustc_query_impl[abd10f3ee75d8aa1]::DynamicConfig<rustc_query_system[59d7a088f82cb72f]::query::caches::DefaultCache<rustc_type_ir[a20e815c0cc40cbe]::canonical::Canonical<rustc_middle[1883ba4f984b676]::ty::context::TyCtxt, rustc_middle[1883ba4f984b676]::ty::ParamEnvAnd<rustc_middle[1883ba4f984b676]::ty::predicate::Predicate>>, rustc_middle[1883ba4f984b676]::query::erase::Erased<[u8; 2usize]>>, false, false, false>, rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt, true>
23: 0x10de3a90a - rustc_query_impl[abd10f3ee75d8aa1]::query_impl::evaluate_obligation::get_query_incr::__rust_end_short_backtrace
24: 0x10e553211 - <rustc_infer[48c0d49994c5621c]::infer::InferCtxt as rustc_trait_selection[eda121f7d81737f5]::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation
25: 0x10e55353d - <rustc_infer[48c0d49994c5621c]::infer::InferCtxt as rustc_trait_selection[eda121f7d81737f5]::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation_no_overflow
26: 0x10e5434c1 - <rustc_trait_selection[eda121f7d81737f5]::traits::fulfill::FulfillProcessor>::process_trait_obligation
27: 0x10e54200a - <rustc_trait_selection[eda121f7d81737f5]::traits::fulfill::FulfillProcessor as rustc_data_structures[2abec748834a34ed]::obligation_forest::ObligationProcessor>::process_obligation
28: 0x10cb8ce75 - <rustc_data_structures[2abec748834a34ed]::obligation_forest::ObligationForest<rustc_trait_selection[eda121f7d81737f5]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[eda121f7d81737f5]::traits::fulfill::FulfillProcessor>
29: 0x10cc88b46 - <rustc_trait_selection[eda121f7d81737f5]::traits::fulfill::FulfillmentContext<rustc_trait_selection[eda121f7d81737f5]::traits::FulfillmentError> as rustc_infer[48c0d49994c5621c]::traits::engine::TraitEngine<rustc_trait_selection[eda121f7d81737f5]::traits::FulfillmentError>>::select_where_possible
30: 0x10cd44674 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_argument_types
31: 0x10cd4337a - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_method_argument_types
32: 0x10cdff864 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_kind
33: 0x10cd0e5bf - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
34: 0x10cd53460 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_decl
35: 0x10cd53eee - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_block_with_expected
36: 0x10cd0e5bf - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
37: 0x10cd10687 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_return_expr
38: 0x10cdda71a - rustc_hir_typeck[622e33563e2ec088]::check::check_fn
39: 0x10cdfec36 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_kind
40: 0x10cd0e5bf - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
41: 0x10cd44aa8 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_argument_types
42: 0x10cce3a83 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::confirm_builtin_call
43: 0x10cdf84ab - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_kind
44: 0x10cd0e5bf - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
45: 0x10cd44aa8 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_argument_types
46: 0x10cce3a83 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::confirm_builtin_call
47: 0x10cdf84ab - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_kind
48: 0x10cd0e5bf - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
49: 0x10cdee142 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_kind
50: 0x10cd0e5bf - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
51: 0x10cd53e8e - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_block_with_expected
52: 0x10cd0e5bf - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
53: 0x10cd10687 - <rustc_hir_typeck[622e33563e2ec088]::fn_ctxt::FnCtxt>::check_return_expr
54: 0x10cdda71a - rustc_hir_typeck[622e33563e2ec088]::check::check_fn
55: 0x10cdd46b0 - rustc_hir_typeck[622e33563e2ec088]::typeck
56: 0x10dd8fa2c - rustc_query_impl[abd10f3ee75d8aa1]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[abd10f3ee75d8aa1]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[1883ba4f984b676]::query::erase::Erased<[u8; 8usize]>>
57: 0x10dc2f994 - rustc_query_system[59d7a088f82cb72f]::query::plumbing::try_execute_query::<rustc_query_impl[abd10f3ee75d8aa1]::DynamicConfig<rustc_query_system[59d7a088f82cb72f]::query::caches::VecCache<rustc_span[94136502c03fbd56]::def_id::LocalDefId, rustc_middle[1883ba4f984b676]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt, true>
58: 0x10ddd9b41 - rustc_query_impl[abd10f3ee75d8aa1]::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
59: 0x10caf6aec - rustc_hir_analysis[120c8ebca98fd2c2]::check_crate
60: 0x10d04fd14 - rustc_interface[8c7d73dd21884758]::passes::run_required_analyses
61: 0x10d0526ac - rustc_interface[8c7d73dd21884758]::passes::analysis
62: 0x10dd8fadc - rustc_query_impl[abd10f3ee75d8aa1]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[abd10f3ee75d8aa1]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[1883ba4f984b676]::query::erase::Erased<[u8; 1usize]>>
63: 0x10dbdc0ac - rustc_query_system[59d7a088f82cb72f]::query::plumbing::try_execute_query::<rustc_query_impl[abd10f3ee75d8aa1]::DynamicConfig<rustc_query_system[59d7a088f82cb72f]::query::caches::SingleCache<rustc_middle[1883ba4f984b676]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[abd10f3ee75d8aa1]::plumbing::QueryCtxt, true>
64: 0x10dda82ae - rustc_query_impl[abd10f3ee75d8aa1]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
65: 0x10c6779db - <rustc_interface[8c7d73dd21884758]::queries::QueryResult<&rustc_middle[1883ba4f984b676]::ty::context::GlobalCtxt>>::enter::<core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>, rustc_driver_impl[aca610d65443316e]::run_compiler::{closure#0}::{closure#1}::{closure#5}>
66: 0x10c6c0042 - rustc_interface[8c7d73dd21884758]::interface::run_compiler::<core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>, rustc_driver_impl[aca610d65443316e]::run_compiler::{closure#0}>::{closure#1}
67: 0x10c6b07c7 - std[cb1cdfafde4df3cf]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8c7d73dd21884758]::util::run_in_thread_with_globals<rustc_interface[8c7d73dd21884758]::interface::run_compiler<core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>, rustc_driver_impl[aca610d65443316e]::run_compiler::{closure#0}>::{closure#1}, core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>>
68: 0x10c6c5bca - <<std[cb1cdfafde4df3cf]::thread::Builder>::spawn_unchecked_<rustc_interface[8c7d73dd21884758]::util::run_in_thread_with_globals<rustc_interface[8c7d73dd21884758]::interface::run_compiler<core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>, rustc_driver_impl[aca610d65443316e]::run_compiler::{closure#0}>::{closure#1}, core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[4a130405b5326760]::result::Result<(), rustc_span[94136502c03fbd56]::ErrorGuaranteed>>::{closure#1} as core[4a130405b5326760]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
69: 0x10e77d90b - std::sys::pal::unix::thread::Thread::new::thread_start::h55ff15b5f2276bcd
70: 0x7fff6e0b9109 - __pthread_start
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.82.0 (f6e511eec 2024-10-15) running on x86_64-apple-darwin
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@postcard_rpc::host_client::HostClient<postcard_rpc::standard_icd::WireError>::send_resp<k6e_dcap2_common::icd::ProjectCommandEndpoint>::{closure#0}}: core::marker::Send`
#1 [typeck] type-checking `app::run`
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 TraitSelect(da253ef9e1c0301d-d59ba1c5604ee998)
#1 TraitSelect(995cde0fe76c64d4-2fc80b69fa2bffe)
#2 TraitSelect(c2206d994b90fa71-3a71e4da02677409)
#3 evaluate_obligation(fb2de09ea71bf4e2-6748f5794fb6fb62)
end of try_mark_green dep node stack
```
I also did `RUST_BACKTRACE=1 cargo build` and the message was the same.
| I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,689,923,572 | yt-dlp | Check archive prior to every download | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Using the --download-archive option saves info about already downloaded videos. If the archive.txt file is changed manually while yt-dlp is running, for example by manually adding "site video_id" entries for unavailable videos, yt-dlp ignores the change until the next start and tries to download those videos again. Often the same video appears multiple times in the playlist. Could you please make yt-dlp check this file not only at start but also before every download?
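To illustrate, a rough sketch of the requested behavior, assuming the archive is a plain text file of `extractor video_id` lines (this helper is hypothetical, not yt-dlp's actual API):
```python
def is_in_archive(archive_path, extractor, video_id):
    """Hypothetical check: re-read the archive before every download,
    instead of only loading it once at startup."""
    entry = f'{extractor} {video_id}'
    try:
        with open(archive_path, encoding='utf-8') as f:
            return any(line.strip() == entry for line in f)
    except FileNotFoundError:
        return False
```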
Many thanks in advance
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | enhancement,regression | low | Critical |
2,689,959,333 | PowerToys | FancyZones: Allow exclusion of a specific window rather than just an application | ### Description of the new feature / enhancement
At present, you can only exclude an entire application; I need to be able to exclude a single window, namely Outlook's reminder window, because it keeps snapping to a much larger size than it needs to.
### Scenario when this would be used?
When Outlook pops up its reminder window, I don't want it snapping to half my monitor size; it should remain at its default size.
### Supporting information
_No response_ | Product-FancyZones,Needs-Triage | low | Minor |
2,690,018,872 | pytorch | DISABLED test_weight_norm_bwd_dynamic_shapes_cpu (__main__.DynamicShapesCpuTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_weight_norm_bwd_dynamic_shapes_cpu&suite=DynamicShapesCpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/33457450172).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_weight_norm_bwd_dynamic_shapes_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 11794, in test_weight_norm_bwd
opt_f(m, x)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 11772, in f
def f(m, x):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 744, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1132, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 311, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 100, in g
return f(*args)
^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: std::bad_alloc
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesCpuTests.test_weight_norm_bwd_dynamic_shapes_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,690,056,584 | three.js | LineNodeMaterial uv() & positionGeometry are relative to the segments not the line | ### Description
In `LineNodeMaterial`, `uv()` and `positionGeometry` values are relative to the "instance/segment" of the line.
I expect both, or at least one of them, to return the position along the full line, which would make it easy to create custom effects on the line.
### Reproduction steps
1. [this fiddle](https://jsfiddle.net/Makio64/snq84pke/8/) or code below
### Code
```js
material.lineColorNode = vec3( positionGeometry.y ) // by instance
material.lineColorNode = vec3( uv().y ) // by instance
```
### Live example
https://jsfiddle.net/Makio64/snq84pke/8/
### Screenshots
`material.lineColorNode = vec3( positionGeometry.y ) // by instance`
<img width="185" alt="Screenshot 2024-11-25 at 18 44 09" src="https://github.com/user-attachments/assets/b65a54e2-e7b3-41bc-bba1-bb9690957f8b">
`material.lineColorNode = vec3( uv().y ) // by instance`
<img width="175" alt="Screenshot 2024-11-25 at 18 48 09" src="https://github.com/user-attachments/assets/18add0f1-1b7d-4077-a3af-6d4fda0ec828">
### Version
r170
### Device
_No response_
### Browser
_No response_
### OS
_No response_ | Nodes | low | Minor |
2,690,063,722 | vscode | extensions.autoUpdate setting should be group policy controlled | Continuation from https://github.com/microsoft/vscode/issues/84756
Here's the user request https://github.com/microsoft/vscode/issues/84756#issuecomment-2492115153
By making extensions.autoUpdate enterprise controlled, we would allow enterprises to either disable extension auto-updates, or make sure that their developers are always on the latest versions of extensions (to always have the latest security patches). | feature-request,extensions | low | Minor |
2,690,199,949 | rust | An unsafe const fn being used to compute an array length or const generic is incorrectly described as being an "item". | ### Code
```Rust
const unsafe fn foo() -> usize { 1 }
fn main() {
unsafe {
let _x = [0; foo()];
}
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
warning: unnecessary `unsafe` block
--> src/main.rs:4:5
|
4 | unsafe {
| ^^^^^^ unnecessary `unsafe` block
|
= note: `#[warn(unused_unsafe)]` on by default
error[E0133]: call to unsafe function `foo` is unsafe and requires unsafe function or block
--> src/main.rs:5:22
|
4 | unsafe {
| ------ items do not inherit unsafety from separate enclosing items
5 | let _x = [0; foo()];
| ^^^^^ call to unsafe function
|
= note: consult the function's documentation for information on how to avoid undefined behavior
For more information about this error, try `rustc --explain E0133`.
warning: `playground` (bin "playground") generated 1 warning
error: could not compile `playground` (bin "playground") due to 1 previous error; 1 warning emitted
```
### Desired output
```Shell
Say something other than "items do not inherit unsafety from separate enclosing items"
```
### Rationale and extra context
Given that unsafe blocks apply "through" closures, I find it a bit weird that they don't apply through array lengths or const generics. Maybe this is fine, but at the very least, the error message should not describe the problem as being about "items", since there aren't any relevant items in sight.
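For what it's worth, the call does compile when the unsafe block is written inside the const expression itself; a minimal sketch:
```rust
const unsafe fn foo() -> usize { 1 }

fn main() {
    // The array length is evaluated in its own const context, so the
    // unsafe block has to be placed inside that expression.
    let _x = [0; unsafe { foo() }];
}
```
This suggests the diagnostic could point at the const-expression boundary rather than talking about "items".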
### Other cases
Other similar cases with similar errors:
```rust
const unsafe fn foo() -> usize { 1 }
fn main() {
unsafe {
<[i32; foo()]>::default();
}
}
```
```rust
const unsafe fn foo() -> usize { 1 }
fn lol<const N: usize>() {}
fn main() {
unsafe {
lol::<{foo()}>();
}
}
```
```rust
const unsafe fn foo() -> usize { 1 }
struct Thing<const N: usize>;
fn main() {
unsafe {
let _x: Thing<{foo()}>;
}
}
```
### Rust Version
```Shell
Reproducible on the playground with stable rust version 1.82.0, and nightly rust version `1.85.0-nightly (2024-11-22 a47555110cf09b3ed598)`
```
### Anything else?
_No response_ | A-diagnostics,T-lang,T-compiler,I-lang-nominated,T-types | low | Critical |
2,690,278,057 | react-native | TextInput: No events triggered and no text displayed when typing Korean on Android (React Native 0.73.8) | ### Description
I created a new React Native project using npx react-native init text.
After completing the setup, I ran the app on Android using `yarn android` without any additional configurations.
However, when using the TextInput component, typing Korean does not trigger any events (onChange, onChangeText, etc.) and no text is displayed in the input field.
### Steps to reproduce
1. Create a new React Native project: npx react-native init text.
2. Run the app on an Android device using yarn android.
3. Add a TextInput component with onChange or onChangeText event handlers (see the sketch after this list).
4. Type Korean characters into the TextInput.
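A minimal sketch of the `TextInput` from steps 3-4 (styling is arbitrary; the linked reproducer below contains the full project):
```jsx
import React from 'react';
import {SafeAreaView, TextInput} from 'react-native';

export default function App() {
  return (
    <SafeAreaView>
      {/* Typing Korean here fires neither handler and shows no text */}
      <TextInput
        style={{borderWidth: 1, margin: 16, padding: 8}}
        onChangeText={text => console.log('onChangeText:', text)}
        onChange={e => console.log('onChange:', e.nativeEvent.text)}
      />
    </SafeAreaView>
  );
}
```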
### React Native Version
0.73.8
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6
CPU: (12) arm64 Apple M3 Pro
Memory: 84.44 MB / 18.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.11.0
path: ~/.nvm/versions/node/v20.11.0/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v20.11.0/bin/yarn
npm:
version: 10.2.4
path: ~/.nvm/versions/node/v20.11.0/bin/npm
Watchman:
version: 2024.10.21.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/kimgyeongseok/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
API Levels:
- "31"
- "33"
- "34"
- "34"
- "34"
- "34"
- "35"
Build Tools:
- 33.0.1
- 34.0.0
- 35.0.0
System Images:
- android-35 | Google APIs Intel x86_64 Atom
- android-35 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.21829.142.2421.12409432
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 3.1.6
path: /Users/kimgyeongseok/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
no error console
```
### Reproducer
https://github.com/KimGSeok/react-native-text-input
### Screenshots and Videos
<img width="1035" alt="Screenshot 2024-11-25 7.51.21 PM" src="https://github.com/user-attachments/assets/095010dd-6f96-43f0-a08e-6a3407d9acba">
| Component: TextInput,Platform: Android,Needs: Attention,Type: Unsupported Version | low | Critical |
2,690,287,652 | flutter | flutter doctor should not report an issue for Android Studio if IntelliJ IDEA Ultimate Edition with the Android plugin is installed | ### Steps to reproduce
1. Install IntelliJ IDEA Ultimate Edition
2. Install the "Android" plugin in the IDE
3. Install flutter
4. Install Android SDK and Visual Studio
5. run `flutter doctor`
### Expected results
`flutter doctor` should tell me there are no issues.
IntelliJ IDEA Ultimate Edition is an all-in-one IDE which delivers the functionality of Android Studio using the Android plugin.
### Actual results
`flutter doctor` finds an issue in 1 category. Refer to doctor output at the bottom.
### Code sample
No code needed
### Screenshots or Video
_No response_
### Logs
Not relevant
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.5, on Microsoft Windows [Version 10.0.22621.4460], locale en-GB)
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Chrome - develop for the web
[✓] Visual Studio - develop Windows apps (Visual Studio Build Tools 2022 17.12.1)
[!] Android Studio (not installed)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3)
[✓] Connected device (3 available)
[✓] Network resources
```
</details>
| tool,t: flutter doctor,P3,team-tool,triaged-tool | low | Major |
2,690,372,083 | react-native | Upgrading React Native from 0.74 to 0.76 in an existing project creates a jittering effect | ### Description
I was using Expo SDK 51.0.28 and RN 0.74.5. After upgrading to the latest Expo and RN, I am seeing a lot of jittering in the app.
**Earlier package.json**
```json
{
"name": "vyayam",
"version": "1.0.0",
"main": "expo/AppEntry.js",
"scripts": {
"start": "expo start",
"android": "expo start --android",
"ios": "expo start --ios",
"web": "expo start --web"
},
"dependencies": {
"@expo/vector-icons": "^14.0.4",
"@hookform/resolvers": "^3.9.0",
"@react-native-async-storage/async-storage": "^1.23.1",
"@react-native-community/datetimepicker": "^8.2.0",
"@react-native-community/slider": "4.5.2",
"@react-native-picker/picker": "^2.8.1",
"@react-navigation/bottom-tabs": "^6.6.1",
"@react-navigation/drawer": "^6.7.2",
"@react-navigation/native": "^6.1.18",
"@react-navigation/native-stack": "^6.11.0",
"@reduxjs/toolkit": "^2.2.8",
"axios": "^1.7.7",
"expo": "~51.0.28",
"expo-font": "~12.0.10",
"expo-linear-gradient": "^13.0.2",
"expo-linking": "~6.3.1",
"expo-secure-store": "^13.0.2",
"expo-splash-screen": "~0.27.6",
"expo-status-bar": "~1.12.1",
"lottie-react-native": "6.7.0",
"react": "18.2.0",
"react-hook-form": "^7.53.0",
"react-native": "0.74.5",
"react-native-chart-kit": "^6.12.0",
"react-native-gesture-handler": "~2.16.1",
"react-native-htmlview": "^0.17.0",
"react-native-paper": "^5.12.5",
"react-native-reanimated": "~3.10.1",
"react-native-safe-area-context": "4.10.5",
"react-native-screens": "3.31.1",
"react-native-svg": "^15.8.0",
"react-native-webview": "13.8.6",
"react-redux": "^9.1.2",
"redux-persist": "^6.0.0",
"redux-thunk": "^3.1.0",
"yup": "^1.4.0"
},
"devDependencies": {
"@babel/core": "^7.20.0"
},
"private": true
}
```
**After Upgrade to latest one**
```json
{
"name": "vyayam",
"version": "1.0.0",
"main": "expo/AppEntry.js",
"scripts": {
"start": "expo start",
"android": "expo start --android",
"ios": "expo start --ios",
"web": "expo start --web"
},
"dependencies": {
"@expo/vector-icons": "^14.0.2",
"@hookform/resolvers": "^3.9.0",
"@react-native-async-storage/async-storage": "1.23.1",
"@react-native-community/datetimepicker": "8.2.0",
"@react-native-community/slider": "4.5.5",
"@react-native-picker/picker": "2.9.0",
"@react-navigation/bottom-tabs": "^6.6.1",
"@react-navigation/drawer": "^6.7.2",
"@react-navigation/native": "^6.1.18",
"@react-navigation/native-stack": "^6.11.0",
"@reduxjs/toolkit": "^2.2.8",
"axios": "^1.7.7",
"expo": "~52.0.11",
"expo-font": "~13.0.1",
"expo-linear-gradient": "~14.0.1",
"expo-linking": "~7.0.3",
"expo-secure-store": "~14.0.0",
"expo-splash-screen": "~0.29.13",
"expo-status-bar": "~2.0.0",
"lottie-react-native": "7.1.0",
"react": "18.3.1",
"react-hook-form": "^7.53.0",
"react-native": "0.76.3",
"react-native-chart-kit": "^6.12.0",
"react-native-gesture-handler": "~2.20.2",
"react-native-htmlview": "^0.17.0",
"react-native-paper": "^5.12.5",
"react-native-reanimated": "~3.16.1",
"react-native-safe-area-context": "4.12.0",
"react-native-screens": "~4.1.0",
"react-native-svg": "15.8.0",
"react-native-webview": "13.12.2",
"react-redux": "^9.1.2",
"redux-persist": "^6.0.0",
"redux-thunk": "^3.1.0",
"yup": "^1.4.0"
},
"devDependencies": {
"@babel/core": "^7.25.2"
},
"private": true
}
```
https://github.com/user-attachments/assets/ba5917e8-bf79-4e8e-a3bc-e704a4d9b1f9
https://github.com/user-attachments/assets/5fdd5771-853a-47b0-956a-87cdee25906f
### Steps to reproduce
https://github.com/shreygargofficial/vyayam/tree/upgrade
Clone it, run `npm i`, then `npm start`.
### React Native Version
0.76.3
### Output of `npx react-native info`
```text
⚠️ react-native depends on @react-native-community/cli for cli commands. To fix update your package.json to include:
"devDependencies": {
"@react-native-community/cli": "latest",
}
```
### Screenshots and Videos

| Needs: Attention | medium | Critical |
2,690,373,072 | flutter | Menu Anchor alignment offset does not work when right alignment is set | ### Steps to reproduce
1. flutter create bug
2. paste the code sample
3. flutter run -d chrome
4. press the menu button
5. the open menu is aligned to the right of the screen, but it hugs the edge. The additional offset is ignored.
6. Change the dx offset from `-60` to `60` and use `Alignment.bottomLeft` for the menu style alignment
7. Observe that the offset is applied, along with the alignment
### Expected results
The alignment offset should be applied on top of the general menu style alignment, regardless of the axis (where applicable).
In other words, if the menu offset moves the menu away from the menu alignment direction, towards the center of the FlutterView, this should be allowed.
I.e. the following should be allowed (if the FlutterView has enough space)
- left alignment, but offset towards the right
- right alignment, but offset towards the left
- top alignment, but offset towards the bottom
- bottom alignment, but offset towards the top
### Actual results
The alignment X-offset is ignored for `Alignment.bottomRight` and `Alignment.topRight`.
The alignment X-offset is working as expected for `Alignment.bottomLeft`.
The alignment Y-offset is working as expected.
I did not test any other combinations, so the various Alignments should be tested with all possible combinations of +/- x and y offsets. Maybe there are other cases that don't work?
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const MyApp());
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: Scaffold(
backgroundColor: Colors.white,
body: Align(
alignment: Alignment.topCenter,
child: Container(
height: 72.0,
color: Colors.red,
child: const Align(
alignment: Alignment.centerRight,
child: Padding(
padding: EdgeInsets.only(left: 4, right: 16),
child: CustomMenuAnchor(),
),
),
),
),
),
);
}
}
class CustomMenuAnchor extends StatefulWidget {
const CustomMenuAnchor({super.key});
@override
State<CustomMenuAnchor> createState() => _CustomMenuAnchorState();
}
class _CustomMenuAnchorState extends State<CustomMenuAnchor> {
final FocusNode _buttonFocusNode = FocusNode();
final MenuController _menuController = MenuController();
final ValueNotifier<bool> _isMenuOpenNotifier = ValueNotifier<bool>(false);
@override
void dispose() {
_buttonFocusNode.dispose();
_isMenuOpenNotifier.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return _CustomMenuAnchorImpl(
buttonFocusNode: _buttonFocusNode,
menuController: _menuController,
isMenuOpenNotifier: _isMenuOpenNotifier,
);
}
}
class _CustomMenuAnchorImpl extends StatelessWidget {
const _CustomMenuAnchorImpl({
required this.buttonFocusNode,
required this.isMenuOpenNotifier,
required this.menuController,
});
final FocusNode buttonFocusNode;
final ValueNotifier<bool> isMenuOpenNotifier;
final MenuController menuController;
void _onOpenMenu() {
isMenuOpenNotifier.value = true;
}
void _onCloseMenu() {
isMenuOpenNotifier.value = false;
}
void _toggleMenuOpen() {
if (menuController.isOpen) {
menuController.close();
} else {
menuController.open();
}
}
MenuStyle _getMenuStyleForScreenDimensions(Size screenSize) {
final Size maximumSize;
final EdgeInsets padding;
if (screenSize.width < 360) {
maximumSize = Size(screenSize.width - 16, 400);
} else {
maximumSize = const Size(600, 400);
}
if (screenSize.width < 490) {
padding = const EdgeInsets.symmetric(vertical: 16);
} else {
padding = const EdgeInsets.symmetric(horizontal: 32, vertical: 16);
}
return MenuStyle(
// TODO: not providing the alignment here makes the offset work? It also works for left aligned values.
alignment: Alignment.bottomRight,
maximumSize: WidgetStatePropertyAll<Size>(maximumSize),
padding: WidgetStatePropertyAll<EdgeInsets>(padding),
shape: const WidgetStatePropertyAll<OutlinedBorder?>(
RoundedRectangleBorder(borderRadius: BorderRadius.all(Radius.circular(6))),
),
);
}
Widget _buildMenuButton(BuildContext context, MenuController controller) {
const Color iconColor = Colors.white;
const Widget downIcon = Icon(Icons.arrow_drop_down, key: ValueKey<bool>(true), color: iconColor);
const Widget upIcon = Icon(Icons.arrow_drop_up, key: ValueKey<bool>(false), color: iconColor);
final Widget menuArrowIcon = ValueListenableBuilder<bool>(
valueListenable: isMenuOpenNotifier,
builder: (BuildContext context, bool isOpen, Widget? child) {
return AnimatedSwitcher(
duration: const Duration(milliseconds: 200),
transitionBuilder: (Widget child, Animation<double> anim) => RotationTransition(
turns: Tween<double>(begin: child.key == const ValueKey<bool>(false) ? 0 : 1, end: 0.5).animate(anim),
child: FadeTransition(opacity: anim, child: child),
),
child: isOpen ? downIcon : upIcon,
);
},
);
return Builder(
builder: (BuildContext context) {
final ButtonStyle? style = TextButtonTheme.of(context).style;
return TextButtonTheme(
data: TextButtonThemeData(
style: style?.copyWith(splashFactory: NoSplash.splashFactory),
),
child: Align(
alignment: Alignment.centerRight,
child: TextButton.icon(
label: const Text(
'Some button',
style: TextStyle(color: iconColor),
overflow: TextOverflow.ellipsis,
textAlign: TextAlign.start,
maxLines: 1,
),
icon: menuArrowIcon,
iconAlignment: IconAlignment.end,
focusNode: buttonFocusNode,
onPressed: _toggleMenuOpen,
),
),
);
},
);
}
List<Widget> _buildMenuItems(BuildContext context) {
const TextStyle menuItemStyle = TextStyle(fontSize: 18, color: Colors.black);
return <Widget>[
for (int i = 0; i < 10; i++) ...<Widget>[
MenuItemButton(
onPressed: () {},
child: ConstrainedBox(
constraints: const BoxConstraints(maxWidth: 260),
child: Text(
'Item $i',
style: menuItemStyle,
overflow: TextOverflow.ellipsis,
maxLines: 3,
softWrap: true,
),
),
),
const Divider(thickness: 1, height: 1),
],
MenuItemButton(
onPressed: () {},
child: const Text(
'Some other button',
style: menuItemStyle,
overflow: TextOverflow.ellipsis,
),
),
];
}
@override
Widget build(BuildContext context) {
final MenuStyle menuStyle = _getMenuStyleForScreenDimensions(MediaQuery.sizeOf(context));
return MenuAnchor(
// TODO: the negative x alignment offset does not work, because top/bottom right alignment is specified?
// Move the menu 60 pixels to the left, away from the screen right edge.
// Strangely enough, the y axis offset does work.
alignmentOffset: const Offset(-60, 0),
controller: menuController,
childFocusNode: buttonFocusNode,
style: menuStyle,
onOpen: _onOpenMenu,
onClose: _onCloseMenu,
menuChildren: _buildMenuItems(context),
builder: (BuildContext context, MenuController controller, _) => _buildMenuButton(context, controller),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.5, on macOS 14.6.1 23G93 darwin-x64, locale en-BE)
• Flutter version 3.24.5 on channel stable at /Users/navaronbracke/Documents/flutter
• Upstream repository [email protected]:navaronbracke/flutter.git
• FLUTTER_GIT_URL = [email protected]:navaronbracke/flutter.git
• Framework revision dec2ee5c1f (12 days ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/navaronbracke/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/navaronbracke/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 14.6.1 23G93 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Critical |
2,690,425,028 | rust | Meta tracking issue for release notes PRs | This issue exists to make it easier to find previous release notes PRs (GitHub PR search can be unreliable). It only goes back to 1.50 for the time being.
- 1.84: #134568
- 1.83: #133320
- 1.82: #131137
- 1.81: #128523
- 1.80.1:
- stable: #128635
- master: #129930
- 1.80: #127083
- 1.79: #125470
- 1.78: #123248
- 1.77.2:
- stable: #123681
- master: #123683
- 1.77.1:
- stable: #123105 (yes, indeed)
- master: #123187
- 1.77: #121862
- 1.76: #120004
- 1.75: #118729
- 1.74.1:
- stable: #118607
- master: #119068
- 1.74: #116778
- 1.73: #116083
- 1.72.1:
- stable: #115787
- master: #115978
- 1.72: #114563
- 1.71.1:
- stable: #114284
- master: #115997
- 1.71: #113138
- 1.70: #111006
- 1.69: #110033
- 1.68.2:
- stable: #109644
- master: #109658
- 1.68.1:
- stable: #109335
- master: #109658
- 1.68: #108041
- 1.67.1:
- stable: #107743
- master: #107861
- 1.67: #107142
- 1.66.1:
- stable: #106685
- master: #106687
- 1.66: #104405
- 1.65: #102659
- 1.64: #101093
- 1.63: #99524
- 1.62.1:
- stable: #99299
- master: #99524 (yes, indeed)
- 1.62: #97454
- 1.61: #96539
- 1.60: #94817
- 1.59: #93639
- 1.58.1:
- stable โ1: #93071
- stable โ2: #93110
- master: #93136
- 1.58: #92055
- 1.57: #90832
- 1.56.1:
- stable: #90460
- master: #90462
- 1.56: #88893
- 1.55: #88170
- 1.54: #86696
- 1.53: #85541
- 1.52.1:
- stable: #85097
- master: #85404
- 1.52: #84183
- 1.51: #81917
- 1.50: #80812
---
Also including some search keywords to make *this* issue easier to find: relnotes, release notes | C-tracking-issue,T-release,A-meta,S-tracking-forever | low | Minor |
2,690,465,745 | angular | resource loader param `request` fails to exclude `undefined` from `request` returned value | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
The `request` field in `ResourceLoaderParams` should exclude the `undefined` type from the return value of the `request` function.
```ts
export class ProductViewer {
productId = input.required<number | undefined>();
productResource: ResourceRef<Product> = resource({
request: () => this.productId(), // <= number | undefined
loader: async ({ request: productId, abortSignal }) => {
assertDefined(productId); // number;
const resp = await fetch(`https://dummyjson.com/products/${productId}`);
return resp.json() as Promise<Product>;
},
});
}
```
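Note: `assertDefined` in the snippet above is not an Angular API; a minimal sketch of the assumed helper:

```ts
// Hypothetical helper (not part of Angular): narrows `T | undefined` to `T`.
function assertDefined<T>(value: T | undefined): asserts value is T {
  if (value === undefined) {
    throw new Error('Expected value to be defined');
  }
}
```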
Source: https://github.com/angular/angular/blob/main/packages/core/src/resource/api.ts#L148-L154
```ts
export interface ResourceLoaderParams<R> {
request: Exclude<NoInfer<R>, undefined>;
abortSignal: AbortSignal;
previous: {
status: ResourceStatus;
};
}
```
However, `undefined` is not excluded for certain types. In my experiments, the behavior appears when the type is not a primitive like `string` or `number`, and it can be reproduced by simply changing the type `number` to `Number`, for example:
```ts
export class ProductViewer {
productId = input.required<Number | undefined>();
productResource: ResourceRef<Product> = resource({
request: () => this.productId(), // <= Number | undefined
loader: async ({ request: productId, abortSignal }) => {
assertDefined(productId); // !!! undefined remains !!!
const resp = await fetch(`https://dummyjson.com/products/${productId}`);
return resp.json() as Promise<Product>;
},
});
}
```
This is a type-definition issue, since at runtime the `request` parameter can never actually be `undefined`.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-mp9zqz?file=src%2Fmain.ts
### Please provide the exception or error you saw
```text
```
### Please provide the environment you discovered this bug in (run `ng version`)
```text
Angular CLI: 19.0.0
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 19.0.0
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, router
```
### Anything else?
_No response_ | area: core,cross-cutting: types,bug,core: reactivity | low | Critical |
2,690,470,375 | PowerToys | Workspaces | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce

### ✔️ Expected Behavior


Hello, I have prepared my own workspace. When I launch the workspace, all applications start at the same time, so Google Chrome opens before Spotify has finished opening. This breaks the layout I configured, because I want Spotify in the background and IntelliJ IDEA and Google Chrome split half-and-half on the screen in the foreground.
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,690,506,494 | godot | [3.6] Cannot drop file into exported GDScript variable | ### Tested versions
In Godot 3.6, scripts cannot be dragged into the inspector field created by `export var a: GDScript` to assign a reference.

Version 3.5.2 does not have this problem.
### System information
win10
### Issue description
```gdscript
extends Control

export var a: GDScript

func _ready():
    pass # Replace with function body.
```
### Steps to reproduce
1
### Minimal reproduction project (MRP)
1 | bug,topic:editor,regression | low | Minor |
2,690,527,119 | svelte | Svelte 5 runtime crash inside each block | ### Describe the bug
In our crashlog system I can see the following error
<img width="1305" alt="image" src="https://github.com/user-attachments/assets/fb2531aa-2c37-433a-8907-6cbe537742a3">
### Reproduction
Unfortunately I am unable to figure out what exactly is causing this. I will come back with a repro if I manage to catch it. In the meantime, perhaps one of the maintainers can make a good guess from the stack trace above.
### Logs
_No response_
### System Info
```shell
Svelte 5.2.7
```
### Severity
annoyance | awaiting submitter | low | Critical |
2,690,534,026 | tensorflow | Creating and training a model in a loop with model.fit() leaks memory 100% of the time | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18
### Custom code
Yes
### OS platform and distribution
ubuntu 2.2 or mac m1
### Mobile device
ubuntu 2.2 or mac m1
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I have tried various approaches, but memory is definitely leaking; it seems the release of memory cannot keep up. The logs show periodic memory reclamation, but usage still trends clearly upward over time.
### Standalone code to reproduce the issue
```python
import gc
import keras
import numpy as np
import psutil
from keras.optimizers import Adam
from keras.layers import Dense, Dropout, Input, LSTM
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import time
import json

num_samples = 6
num_features = 3
num_classes = 4
epochs = 50
batch_size = 2
identifier = "test_model"
num_iterations = 500

def build_model(X, num_classes):
    model = Sequential()
    model.add(Input(shape=(X.shape[1], X.shape[2])))
    model.add(LSTM(16, return_sequences=True))
    model.add(LSTM(16))
    model.add(Dropout(0.4))
    model.add(Dense(8, activation='tanh'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy'])
    return model

data_X = np.random.rand(num_samples, num_features)
data_Y = np.random.randint(0, num_classes, size=(num_samples, 1))
data_Y = np.eye(num_classes)[data_Y.flatten()]
print(type(data_X))
scaler = MinMaxScaler()
data_X_scaled = scaler.fit_transform(data_X)
train_X, test_X, train_Y, test_Y = train_test_split(data_X_scaled, data_Y, train_size=0.6, random_state=42)
train_X = np.expand_dims(train_X, axis=1)
test_X = np.expand_dims(test_X, axis=1)
best_loss = np.inf
best_model_data = None
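# NOTE: best_model_data is never assigned inside the loop below, so the reload
# branch at the end never runs; early_stopping is also created but never passed to fit().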
for iteration in range(num_iterations):
    tf.keras.backend.clear_session()
    tf.compat.v1.reset_default_graph()
    model = build_model(train_X, num_classes)
    model_name = f"model_{iteration}"
    early_stopping = EarlyStopping(monitor='loss', patience=10, verbose=0, restore_best_weights=True)
    print(f"Iteration {iteration + 1}/{num_iterations}")
    process = psutil.Process()
    mem_info = process.memory_info()
    print(f"start Current memory usage: {mem_info.rss / (1024 * 1024):.2f} MB")  # RSS - Resident Set Size
    try:
        history = model.fit(train_X, train_Y, epochs=epochs, batch_size=batch_size, shuffle=True,
                            validation_data=(test_X, test_Y), verbose=0)
        current_loss = history.history['loss'][-1]
        print(f"Training model: {model.name}")
        del model
        tf.keras.backend.clear_session()
        gc.collect()
    except Exception as e:
        print("err:", e)
    finally:
        process = psutil.Process()
        mem_info = process.memory_info()
        print(f"end Current memory usage: {mem_info.rss / (1024 * 1024):.2f} MB")  # RSS - Resident Set Size

print("end!")
if best_model_data:
    model_json = best_model_data["model_architecture"]
    model_weights = json.loads(best_model_data["model_weights"], object_hook=lambda d: np.array(d))
    model = tf.keras.models.model_from_json(model_json)
    model.set_weights(model_weights)
    model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy'])
    print("ok")
else:
    print("not found")
```
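One way to check whether the growth comes from process-lifetime TensorFlow state rather than Python objects is to isolate each `fit()` in a subprocess, so the OS reclaims all memory when the worker exits. A minimal sketch, under the assumption that `build_model` and the data/constant setup from the script above live in an importable module:

```python
import multiprocessing as mp

def train_once(train_X, train_Y, test_X, test_Y, queue):
    # build_model (and TensorFlow with it) runs entirely inside this process,
    # so every allocation is reclaimed by the OS when the worker exits.
    model = build_model(train_X, num_classes)
    history = model.fit(train_X, train_Y, epochs=epochs, batch_size=batch_size,
                        validation_data=(test_X, test_Y), verbose=0)
    queue.put(history.history['loss'][-1])

if __name__ == "__main__":
    mp.set_start_method("spawn")  # avoid forking a parent that already loaded TF
    for iteration in range(num_iterations):
        queue = mp.Queue()
        worker = mp.Process(target=train_once,
                            args=(train_X, train_Y, test_X, test_Y, queue))
        worker.start()
        loss = queue.get()
        worker.join()  # memory used by this iteration is returned to the OS here
        print(f"Iteration {iteration + 1}/{num_iterations}: loss={loss:.4f}")
```

If memory stays flat in this setup but grows in the single-process loop, the leak lives in per-process TensorFlow state rather than in the Python objects the script deletes.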
### Relevant log output
```shell
Iteration 1/500
start Current memory usage: 450.69 MB
Training model: sequential
end Current memory usage: 524.41 MB
Iteration 2/500
start Current memory usage: 524.52 MB
Training model: sequential
end Current memory usage: 564.97 MB
Iteration 3/500
start Current memory usage: 564.98 MB
Training model: sequential
end Current memory usage: 598.00 MB
Iteration 4/500
start Current memory usage: 598.03 MB
Training model: sequential
end Current memory usage: 624.69 MB
Iteration 5/500
start Current memory usage: 624.69 MB
Training model: sequential
end Current memory usage: 653.89 MB
Iteration 6/500
start Current memory usage: 653.91 MB
Training model: sequential
end Current memory usage: 679.45 MB
Iteration 7/500
start Current memory usage: 679.45 MB
Training model: sequential
end Current memory usage: 701.59 MB
Iteration 8/500
start Current memory usage: 701.59 MB
Training model: sequential
end Current memory usage: 726.83 MB
Iteration 9/500
start Current memory usage: 726.84 MB
Training model: sequential
end Current memory usage: 749.56 MB
Iteration 10/500
start Current memory usage: 749.56 MB
Training model: sequential
end Current memory usage: 782.56 MB
Iteration 11/500
start Current memory usage: 782.56 MB
Training model: sequential
end Current memory usage: 805.92 MB
Iteration 12/500
start Current memory usage: 805.92 MB
Training model: sequential
end Current memory usage: 833.17 MB
Iteration 13/500
start Current memory usage: 833.17 MB
Training model: sequential
end Current memory usage: 852.84 MB
Iteration 14/500
start Current memory usage: 852.84 MB
Training model: sequential
end Current memory usage: 875.05 MB
Iteration 15/500
start Current memory usage: 875.06 MB
Training model: sequential
end Current memory usage: 901.56 MB
Iteration 16/500
start Current memory usage: 901.56 MB
Training model: sequential
end Current memory usage: 930.62 MB
Iteration 17/500
start Current memory usage: 705.70 MB
Training model: sequential
end Current memory usage: 762.64 MB
Iteration 18/500
start Current memory usage: 762.70 MB
Training model: sequential
end Current memory usage: 798.06 MB
Iteration 19/500
start Current memory usage: 798.17 MB
Training model: sequential
end Current memory usage: 824.98 MB
Iteration 20/500
start Current memory usage: 824.98 MB
Training model: sequential
end Current memory usage: 850.34 MB
Iteration 21/500
start Current memory usage: 850.42 MB
Training model: sequential
end Current memory usage: 876.81 MB
Iteration 22/500
start Current memory usage: 876.81 MB
Training model: sequential
end Current memory usage: 904.02 MB
Iteration 23/500
start Current memory usage: 904.08 MB
Training model: sequential
end Current memory usage: 929.70 MB
Iteration 24/500
start Current memory usage: 929.73 MB
Training model: sequential
end Current memory usage: 952.33 MB
Iteration 25/500
start Current memory usage: 952.34 MB
Training model: sequential
end Current memory usage: 952.28 MB
Iteration 26/500
start Current memory usage: 952.47 MB
Training model: sequential
end Current memory usage: 980.39 MB
Iteration 27/500
start Current memory usage: 978.78 MB
Training model: sequential
end Current memory usage: 999.02 MB
Iteration 28/500
start Current memory usage: 999.05 MB
Training model: sequential
end Current memory usage: 1023.50 MB
Iteration 29/500
start Current memory usage: 1023.53 MB
Training model: sequential
end Current memory usage: 1047.80 MB
Iteration 30/500
start Current memory usage: 1047.83 MB
Training model: sequential
end Current memory usage: 1068.88 MB
Iteration 31/500
start Current memory usage: 1068.94 MB
Training model: sequential
end Current memory usage: 1095.78 MB
Iteration 32/500
start Current memory usage: 1095.78 MB
Training model: sequential
end Current memory usage: 1119.03 MB
Iteration 33/500
start Current memory usage: 1119.03 MB
Training model: sequential
end Current memory usage: 1039.41 MB
Iteration 34/500
start Current memory usage: 1022.78 MB
Training model: sequential
end Current memory usage: 1040.88 MB
Iteration 35/500
start Current memory usage: 1040.70 MB
Training model: sequential
end Current memory usage: 1054.58 MB
Iteration 36/500
start Current memory usage: 1054.58 MB
Training model: sequential
end Current memory usage: 1076.16 MB
Iteration 37/500
start Current memory usage: 1076.19 MB
Training model: sequential
end Current memory usage: 1097.02 MB
Iteration 38/500
start Current memory usage: 1097.03 MB
Training model: sequential
end Current memory usage: 1113.70 MB
Iteration 39/500
start Current memory usage: 1114.12 MB
Training model: sequential
end Current memory usage: 1140.30 MB
Iteration 40/500
start Current memory usage: 1140.33 MB
Training model: sequential
end Current memory usage: 1163.81 MB
Iteration 41/500
start Current memory usage: 1163.86 MB
Training model: sequential
end Current memory usage: 1195.83 MB
Iteration 42/500
start Current memory usage: 1195.83 MB
Training model: sequential
end Current memory usage: 1221.53 MB
Iteration 43/500
start Current memory usage: 1221.55 MB
Training model: sequential
end Current memory usage: 1231.09 MB
Iteration 44/500
start Current memory usage: 1231.14 MB
Training model: sequential
end Current memory usage: 1245.78 MB
Iteration 45/500
start Current memory usage: 1199.55 MB
Training model: sequential
end Current memory usage: 1221.59 MB
Iteration 46/500
start Current memory usage: 1221.59 MB
Training model: sequential
end Current memory usage: 1249.11 MB
Iteration 47/500
start Current memory usage: 1249.22 MB
Training model: sequential
end Current memory usage: 1275.50 MB
Iteration 48/500
start Current memory usage: 1259.83 MB
Training model: sequential
end Current memory usage: 1290.91 MB
Iteration 49/500
start Current memory usage: 1285.67 MB
Training model: sequential
end Current memory usage: 1296.75 MB
Iteration 50/500
start Current memory usage: 1296.75 MB
Training model: sequential
end Current memory usage: 1306.59 MB
Iteration 51/500
start Current memory usage: 1306.59 MB
Training model: sequential
end Current memory usage: 1287.53 MB
Iteration 52/500
start Current memory usage: 1287.53 MB
Training model: sequential
end Current memory usage: 1297.23 MB
Iteration 53/500
start Current memory usage: 1297.25 MB
Training model: sequential
end Current memory usage: 1285.45 MB
Iteration 54/500
start Current memory usage: 1285.45 MB
Training model: sequential
end Current memory usage: 1290.36 MB
Iteration 55/500
start Current memory usage: 1282.14 MB
Training model: sequential
end Current memory usage: 1302.14 MB
Iteration 56/500
start Current memory usage: 1302.14 MB
Training model: sequential
end Current memory usage: 1287.70 MB
Iteration 57/500
start Current memory usage: 1287.75 MB
Training model: sequential
end Current memory usage: 1282.77 MB
Iteration 58/500
start Current memory usage: 1271.38 MB
Training model: sequential
end Current memory usage: 1232.14 MB
Iteration 59/500
start Current memory usage: 1212.70 MB
Training model: sequential
end Current memory usage: 1201.16 MB
Iteration 60/500
start Current memory usage: 1200.53 MB
Training model: sequential
end Current memory usage: 1169.45 MB
Iteration 61/500
start Current memory usage: 1169.45 MB
Training model: sequential
end Current memory usage: 1209.73 MB
Iteration 62/500
start Current memory usage: 1207.19 MB
Training model: sequential
end Current memory usage: 1226.28 MB
Iteration 63/500
start Current memory usage: 1226.28 MB
Training model: sequential
end Current memory usage: 1231.45 MB
Iteration 64/500
start Current memory usage: 1210.11 MB
Training model: sequential
end Current memory usage: 1176.00 MB
Iteration 65/500
start Current memory usage: 1173.97 MB
Training model: sequential
end Current memory usage: 1201.42 MB
Iteration 66/500
start Current memory usage: 1201.42 MB
Training model: sequential
end Current memory usage: 1223.94 MB
Iteration 67/500
start Current memory usage: 1222.50 MB
Training model: sequential
end Current memory usage: 1229.80 MB
Iteration 68/500
start Current memory usage: 1227.14 MB
Training model: sequential
end Current memory usage: 1219.02 MB
Iteration 69/500
start Current memory usage: 1210.48 MB
Training model: sequential
end Current memory usage: 1247.17 MB
Iteration 70/500
start Current memory usage: 1245.94 MB
Training model: sequential
end Current memory usage: 1259.84 MB
Iteration 71/500
start Current memory usage: 1259.86 MB
Training model: sequential
end Current memory usage: 1286.39 MB
Iteration 72/500
start Current memory usage: 1286.53 MB
Training model: sequential
end Current memory usage: 1316.52 MB
Iteration 73/500
start Current memory usage: 1311.53 MB
Training model: sequential
end Current memory usage: 1338.72 MB
Iteration 74/500
start Current memory usage: 1338.75 MB
Training model: sequential
end Current memory usage: 1348.45 MB
Iteration 75/500
start Current memory usage: 1338.30 MB
Training model: sequential
end Current memory usage: 1354.97 MB
Iteration 76/500
start Current memory usage: 1353.83 MB
Training model: sequential
end Current memory usage: 1385.67 MB
Iteration 77/500
start Current memory usage: 1385.69 MB
Training model: sequential
end Current memory usage: 1408.83 MB
Iteration 78/500
start Current memory usage: 1408.88 MB
Training model: sequential
end Current memory usage: 1430.91 MB
Iteration 79/500
start Current memory usage: 1430.94 MB
Training model: sequential
end Current memory usage: 1443.62 MB
Iteration 80/500
start Current memory usage: 1428.00 MB
Training model: sequential
end Current memory usage: 1436.50 MB
Iteration 81/500
start Current memory usage: 1436.64 MB
Training model: sequential
end Current memory usage: 1454.66 MB
Iteration 82/500
start Current memory usage: 1440.91 MB
Training model: sequential
end Current memory usage: 1461.81 MB
Iteration 83/500
start Current memory usage: 1460.47 MB
Training model: sequential
end Current memory usage: 1481.19 MB
Iteration 84/500
start Current memory usage: 1481.19 MB
Training model: sequential
end Current memory usage: 1477.84 MB
Iteration 85/500
start Current memory usage: 1477.84 MB
Training model: sequential
end Current memory usage: 1493.55 MB
Iteration 86/500
start Current memory usage: 1493.58 MB
Training model: sequential
end Current memory usage: 1509.50 MB
Iteration 87/500
start Current memory usage: 1509.50 MB
Training model: sequential
end Current memory usage: 1543.94 MB
Iteration 88/500
start Current memory usage: 1542.83 MB
Training model: sequential
end Current memory usage: 1516.17 MB
Iteration 89/500
start Current memory usage: 1516.20 MB
Training model: sequential
end Current memory usage: 1470.17 MB
Iteration 90/500
start Current memory usage: 1470.22 MB
Training model: sequential
end Current memory usage: 1443.72 MB
Iteration 91/500
start Current memory usage: 1444.36 MB
Training model: sequential
end Current memory usage: 1486.23 MB
Iteration 92/500
start Current memory usage: 1476.41 MB
Training model: sequential
end Current memory usage: 1524.97 MB
Iteration 93/500
start Current memory usage: 1524.97 MB
Training model: sequential
end Current memory usage: 1534.94 MB
Iteration 94/500
start Current memory usage: 1551.98 MB
Training model: sequential
end Current memory usage: 1853.48 MB
Iteration 95/500
start Current memory usage: 1853.48 MB
Training model: sequential
end Current memory usage: 1790.12 MB
Iteration 96/500
start Current memory usage: 1792.27 MB
Training model: sequential
end Current memory usage: 1883.20 MB
Iteration 97/500
start Current memory usage: 1879.05 MB
Training model: sequential
end Current memory usage: 1759.69 MB
Iteration 98/500
start Current memory usage: 1669.66 MB
Training model: sequential
end Current memory usage: 1596.77 MB
Iteration 99/500
start Current memory usage: 1597.12 MB
Training model: sequential
end Current memory usage: 1568.83 MB
Iteration 100/500
start Current memory usage: 1532.98 MB
Training model: sequential
end Current memory usage: 1516.75 MB
Iteration 101/500
start Current memory usage: 1465.98 MB
Training model: sequential
end Current memory usage: 1486.66 MB
Iteration 102/500
start Current memory usage: 1483.34 MB
Training model: sequential
end Current memory usage: 1523.19 MB
Iteration 103/500
start Current memory usage: 1523.14 MB
Training model: sequential
end Current memory usage: 1532.77 MB
Iteration 104/500
start Current memory usage: 1531.14 MB
Training model: sequential
end Current memory usage: 1561.78 MB
Iteration 105/500
start Current memory usage: 1555.67 MB
Training model: sequential
end Current memory usage: 1586.70 MB
Iteration 106/500
start Current memory usage: 1586.75 MB
Training model: sequential
end Current memory usage: 1608.41 MB
Iteration 107/500
start Current memory usage: 1603.81 MB
Training model: sequential
end Current memory usage: 1629.00 MB
Iteration 108/500
start Current memory usage: 1629.05 MB
Training model: sequential
end Current memory usage: 1609.25 MB
Iteration 109/500
start Current memory usage: 1609.31 MB
Training model: sequential
end Current memory usage: 1630.09 MB
Iteration 110/500
start Current memory usage: 1629.20 MB
Training model: sequential
end Current memory usage: 1638.66 MB
Iteration 111/500
start Current memory usage: 1620.30 MB
Training model: sequential
end Current memory usage: 1642.81 MB
Iteration 112/500
start Current memory usage: 1642.94 MB
Training model: sequential
end Current memory usage: 1659.45 MB
Iteration 113/500
start Current memory usage: 1655.17 MB
Training model: sequential
end Current memory usage: 1687.80 MB
Iteration 114/500
start Current memory usage: 1673.33 MB
Training model: sequential
end Current memory usage: 1705.94 MB
Iteration 115/500
start Current memory usage: 1699.95 MB
Training model: sequential
end Current memory usage: 1708.22 MB
Iteration 116/500
start Current memory usage: 1707.88 MB
Training model: sequential
end Current memory usage: 1648.23 MB
Iteration 117/500
start Current memory usage: 1634.03 MB
Training model: sequential
end Current memory usage: 1670.97 MB
Iteration 118/500
start Current memory usage: 1671.97 MB
Training model: sequential
end Current memory usage: 1649.69 MB
Iteration 119/500
start Current memory usage: 1645.14 MB
Training model: sequential
end Current memory usage: 1698.64 MB
Iteration 120/500
start Current memory usage: 1699.69 MB
Training model: sequential
end Current memory usage: 1737.67 MB
Iteration 121/500
start Current memory usage: 1737.67 MB
Training model: sequential
end Current memory usage: 1738.05 MB
Iteration 122/500
start Current memory usage: 1721.47 MB
Training model: sequential
end Current memory usage: 1730.64 MB
Iteration 123/500
start Current memory usage: 1729.53 MB
Training model: sequential
end Current memory usage: 1766.12 MB
Iteration 124/500
start Current memory usage: 1761.22 MB
Training model: sequential
end Current memory usage: 1796.58 MB
Iteration 125/500
start Current memory usage: 1796.73 MB
Training model: sequential
end Current memory usage: 1709.02 MB
Iteration 126/500
start Current memory usage: 1721.67 MB
Training model: sequential
end Current memory usage: 1771.50 MB
Iteration 127/500
start Current memory usage: 1771.50 MB
Training model: sequential
end Current memory usage: 1777.38 MB
Iteration 128/500
start Current memory usage: 1757.58 MB
Training model: sequential
end Current memory usage: 1806.50 MB
Iteration 129/500
start Current memory usage: 1758.81 MB
Training model: sequential
end Current memory usage: 1812.45 MB
Iteration 130/500
start Current memory usage: 1812.86 MB
Training model: sequential
end Current memory usage: 1811.14 MB
Iteration 131/500
start Current memory usage: 1799.61 MB
Training model: sequential
end Current memory usage: 1835.33 MB
Iteration 132/500
start Current memory usage: 1716.38 MB
Training model: sequential
end Current memory usage: 1759.75 MB
Iteration 133/500
start Current memory usage: 1752.44 MB
Training model: sequential
end Current memory usage: 1818.41 MB
Iteration 134/500
start Current memory usage: 1811.42 MB
Training model: sequential
end Current memory usage: 1853.58 MB
Iteration 135/500
start Current memory usage: 1853.70 MB
Training model: sequential
end Current memory usage: 1858.50 MB
Iteration 136/500
start Current memory usage: 1858.56 MB
Training model: sequential
end Current memory usage: 1874.84 MB
Iteration 137/500
start Current memory usage: 1862.92 MB
Training model: sequential
end Current memory usage: 1768.23 MB
Iteration 138/500
start Current memory usage: 1762.73 MB
Training model: sequential
end Current memory usage: 1843.39 MB
Iteration 139/500
start Current memory usage: 1843.52 MB
Training model: sequential
end Current memory usage: 1885.88 MB
Iteration 140/500
start Current memory usage: 1885.95 MB
Training model: sequential
end Current memory usage: 1924.86 MB
Iteration 141/500
start Current memory usage: 1925.05 MB
Training model: sequential
end Current memory usage: 1946.80 MB
Iteration 142/500
start Current memory usage: 1946.69 MB
Training model: sequential
end Current memory usage: 1977.53 MB
Iteration 143/500
start Current memory usage: 1974.27 MB
Training model: sequential
end Current memory usage: 1995.17 MB
Iteration 144/500
start Current memory usage: 1992.41 MB
Training model: sequential
end Current memory usage: 1984.45 MB
Iteration 145/500
start Current memory usage: 1963.42 MB
Training model: sequential
end Current memory usage: 1947.31 MB
Iteration 146/500
start Current memory usage: 1944.47 MB
Training model: sequential
end Current memory usage: 1996.00 MB
Iteration 147/500
start Current memory usage: 1996.08 MB
Training model: sequential
end Current memory usage: 2008.41 MB
Iteration 148/500
start Current memory usage: 1999.69 MB
Training model: sequential
end Current memory usage: 1951.30 MB
Iteration 149/500
start Current memory usage: 1942.98 MB
Training model: sequential
end Current memory usage: 1992.28 MB
Iteration 150/500
start Current memory usage: 1982.86 MB
Training model: sequential
end Current memory usage: 2008.83 MB
Iteration 151/500
start Current memory usage: 2008.83 MB
Training model: sequential
end Current memory usage: 1946.42 MB
Iteration 152/500
start Current memory usage: 1946.92 MB
Training model: sequential
end Current memory usage: 1992.48 MB
Iteration 153/500
start Current memory usage: 1979.52 MB
Training model: sequential
end Current memory usage: 2035.66 MB
Iteration 154/500
start Current memory usage: 2023.91 MB
Training model: sequential
end Current memory usage: 2030.31 MB
Iteration 155/500
start Current memory usage: 1974.39 MB
Training model: sequential
end Current memory usage: 2029.30 MB
Iteration 156/500
start Current memory usage: 1997.42 MB
Training model: sequential
end Current memory usage: 2000.31 MB
Iteration 157/500
start Current memory usage: 1964.38 MB
Training model: sequential
end Current memory usage: 1979.45 MB
Iteration 158/500
start Current memory usage: 1973.12 MB
Training model: sequential
```
| stat:awaiting tensorflower,type:bug,comp:keras,TF 2.18 | low | Critical |
2,690,535,994 | angular | When @defer-loading just one component form a library, Angular doesn't tree-shake unused contents of this library | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
When @defer-loading just one component from a (3rd party) Angular library, the whole library is bundled in the lazy loaded chunk (even if the library has `sideEffects: false` in its `package.json`). Instead, I'd expect the unused content of that library to be tree-shaken.
(_Just for clarity: I'm talking about production builds of libraries done with ng-packgr and production build of an application - with `optimization: true` enabled in `angular.json` then hosted via http server. - NOT vite dev server)_
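For reference, the `sideEffects` flag mentioned above lives in the library's `package.json` (values illustrative):

```json
{
  "name": "lib1",
  "version": "0.0.1",
  "sideEffects": false
}
```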
```html
@defer (on interaction) {
<just-one-component />
} @placeholder {
<div>PLACEHOLDER</div>
}
```
```ts
import { JustOneComponent } from 'lib1';
@Component({
standalone: true,
imports: [JustOneComponent],
...
})
export class AppComponent
```
⚠️ **Now _all unused contents_ of the `lib1` are bundled in the lazy loaded chunk alongside the JustOneComponent**
Note 1: However, the tree-shaking _does_ happen when @defer-loading a _local component_ that wraps a component from a library.
```html
@defer (on interaction) {
<local-wrapper-component />
} @placeholder {
<div>PLACEHOLDER</div>
}
```
```ts
import { LocalWrapperComponent } from './local-wrapper.component';
@Component({
standalone: true,
imports: [LocalWrapperComponent],
...
})
export class AppComponent
```
where the local wrapper component is:
```html
<just-one-component />
```
```ts
import { JustOneComponent } from 'lib1';
@Component({
standalone: true,
imports: [JustOneComponent],
...
})
export class LocalWrapperComponent
```
**Now _only_ the JustOneComponent of the `lib1` (together with the LocalWrapperComponent) are bundled in the lazy loaded chunk, without unused stuff from `lib1`**
Note 2: The tree-shaking _does_ happen when eager-loading a component from a library (no @defer-loading).
To try it out, see the repo linked below:
### Please provide a link to a minimal reproduction of the bug
https://github.com/Platonn/ng19-defer-and-tree-shaking/
### Please provide the exception or error you saw
```text
N/A
```
### Please provide the environment you discovered this bug in (run `ng version`)
```text
Angular CLI: 19.0.1
Node: 20.14.0
Package Manager: npm 10.7.0
OS: darwin arm64
Angular: 19.0.0
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.1
@angular-devkit/build-angular 19.0.1
@angular-devkit/core 19.0.1
@angular-devkit/schematics 19.0.1
@angular/cli 19.0.1
@schematics/angular 19.0.1
ng-packagr 19.0.1
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: core,core: defer | medium | Critical |
2,690,590,953 | rust | Tracking Issue for `io::const_error!` | Feature gate: `#![feature(io_const_error)]`
This is a tracking issue for `const_error!`, a macro to create `io::Error`s from a string literal without allocating.
### Public API
```rust
// std::io
macro const_error($kind:expr, $message:expr $(,)?) { ... }
```
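A usage sketch based only on the signature above (nightly; the name and exact semantics may change before stabilization):

```rust
#![feature(io_const_error)]

use std::io;

fn parse_header(bytes: &[u8]) -> io::Result<()> {
    if bytes.len() < 4 {
        // No allocation: the message is a string literal baked in at compile time.
        return Err(io::const_error!(io::ErrorKind::InvalidData, "header was malformed"));
    }
    Ok(())
}

fn main() {
    println!("{:?}", parse_header(b"ab"));
}
```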
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/205
- [ ] Implementation: #...
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Critical |
2,690,592,901 | react | [Compiler Bug]: React Hook placement prevents memoization of dependent variables | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAHQHYIB4AcIwAuABAGZQZyECWEGxAMtWIQBQCUxwmxxcdLYgBtmJALzEoYBExYcA3Dz4CSAEwCGhdcQlSEAEU3qFmJfwyDcMCIjDTVs8cQDmCQgAVrt+49YiW7Ir0ktIAyhAAtm4AFtQYzgrEAPRJxACyEABuCMSEsWDE0RAQANbEAEYIpAQ5IljE6uVZCKbB5oIYEKoIBRIaWgB0Eeq4rKyd3ZxiAHxcSrztJP6EAJKECBE6xFY2PT6iANoTCAPUqgC6Qby8MG6w9NzB18QDr8cANPPXkFF5cc7IYSiNYbAY-GL-T5PYgAXyusMCrRudxg9FYXwAPABhSL4LAYEjHMBiYBEmHJaZKREYGGYEAwoA
### Repro steps
I noticed an interesting behavior in the compiler. If you place a React hook between two variables that depend on each other, the compiler skips memoizing them. But if you move the hook somewhere else (like above both variables), the memoization works as expected. (A sketch of the pattern follows the table below.)
| Hook between | Hook above |
| - | - |
| *(screenshot)* | *(screenshot)* |
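A hypothetical illustration of the pattern described above (component and prop names invented; the real repro is in the playground link):

```js
import React from 'react';

// `sorted` depends on `filtered`, and a hook sits between the two declarations.
function ItemList({ items }) {
  const filtered = items.filter((item) => item.visible);
  const [query] = React.useState(''); // hook between the two dependent variables
  const sorted = [...filtered].sort((a, b) => a.label.localeCompare(b.label));
  return (
    <ul>
      {sorted.map((item) => (
        <li key={item.id}>{item.label}{query}</li>
      ))}
    </ul>
  );
}

export default ItemList;
```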
### How often does this bug happen?
Every time
### What version of React are you using?
19.0.0-rc.1
### What version of React Compiler are you using?
19.0.0-beta-0dec889-20241115 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,690,641,682 | godot | Cannot look among all items when using Quick Open and Quick Open Script anymore | - *Related to https://github.com/godotengine/godot/issues/98726
and https://github.com/godotengine/godot/pull/56772.*
### Tested versions
- Reproducible in: 4.4.dev5
- Not reproducible in: 4.3.stable
### System information
Godot v4.4.dev (9e6098432) - Fedora Linux 41 (KDE Plasma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 565.57.01) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
I can no longer browse the full list of items when using Quick Open and Quick Open Script. This is useful in small/medium projects where you want to comb through all of them:

As soon as I enter a character, I can see something:

The generic Quick Open dialog is also affected:

The workaround is to search for `.` as this will find all files with an extension (including C# in a C#-enabled build), but we should be able to look for files in the list without entering anything.
In comparison, in 4.3.stable, I can see the list of scripts without having to enter anything in the search field:

This issue doesn't occur with Quick Open Scene, where I can see the list of scenes without needing to enter anything in the search field:

### Steps to reproduce
- Press <kbd>Ctrl + Alt + O</kbd> (Quick Open Script) or <kbd>Alt + Shift + O</kbd> (Quick Open) in the editor.
### Minimal reproduction project (MRP)
https://github.com/godotengine/godot-benchmarks, but any project with several scenes/scripts will do | discussion,topic:editor,usability,regression | low | Minor |
2,690,648,155 | langchain | 🐛 Bug: The aload function, contrary to its name, is not an asynchronous function, so it cannot work concurrently with other asynchronous functions. | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community import document_loaders as dl

async def do_something():
    await asyncio.sleep(1)

async def main():
    loader1 = dl.WebBaseLoader("https://www.fntimes.com/html/view.php?ud=202411242104045546dd55077bc2_18")
    results = await asyncio.gather(loader1.aload(), do_something())
    print(results)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
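Until `aload` is truly asynchronous, one workaround is to push the blocking `load()` call onto a worker thread; a sketch using only stable APIs:

```python
import asyncio

from langchain_community import document_loaders as dl


async def do_something():
    await asyncio.sleep(1)


async def main():
    loader = dl.WebBaseLoader("https://www.fntimes.com/html/view.php?ud=202411242104045546dd55077bc2_18")
    # asyncio.to_thread keeps the event loop free while the synchronous load() runs.
    docs, _ = await asyncio.gather(asyncio.to_thread(loader.load), do_something())
    print(docs)


if __name__ == "__main__":
    asyncio.run(main())
```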
### Error Message and Stack Trace (if applicable)
```bash
python bug_langchain.py
USER_AGENT environment variable not set, consider setting it to identify your requests.
Traceback (most recent call last):
File "/home/dnsgkr23/bug_langchain.py", line 15, in <module>
asyncio.run(main())
File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/dnsgkr23/bug_langchain.py", line 9, in main
results = await asyncio.gather(loader1.aload(), do_something())
^^^^^^^^^^^^^^^
File "/home/dnsgkr23/langchain/libs/community/langchain_community/document_loaders/web_base.py", line 337, in aload
results = self.scrape_all(self.web_paths)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dnsgkr23/langchain/libs/community/langchain_community/document_loaders/web_base.py", line 278, in scrape_all
results = asyncio.run(self.fetch_all(urls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/runners.py", line 186, in run
raise RuntimeError(
RuntimeError: asyncio.run() cannot be called from a running event loop
sys:1: RuntimeWarning: coroutine 'WebBaseLoader.fetch_all' was never awaited
```
### Description
The aload function, contrary to its name, is not an asynchronous function,
so it cannot work concurrently with other asynchronous functions.
### System Info
```bash
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Debian 6.1.115-1 (2024-11-01)
> Python Version: 3.11.2 (main, Sep 14 2024, 03:00:30) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.19
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.142
> langchain_tests: 0.3.4
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> pytest: 7.4.4
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> syrupy: 4.7.2
> tenacity: 9.0.0
> typing-extensions: 4.12.2
``` | 🤖:bug | low | Critical |
2,690,678,481 | TypeScript | JSDoc property modifiers on computed properties | ### 🔎 Search Terms
All declarations of '{property}' must have identical modifiers
JSDoc
Symbol
Computed Property Names
Property Modifiers
### 🕗 Version & Regression Information
This is the behavior in every version I tried (4.X.X - 5.X.X)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.2&filetype=js#code/MYewdgzgLgBA1gIQIYCcYF4YGUCeBbAIxABsAKAIgNXIEoBuAKAeGKQghgDMQQAGGAN4MYImAHoAVBOGiYEmAAEADihBQApsA0ATGaPkKoOJesEwwAV2LEYAHxjQUASzABzGAF89IiWO8BtRFQAXQxzK2I6cTEYAB4AWniYAEFrGG1NVhQkKCdwDhBOGAByQOQUYOKYPAtoGAALJAA3UycMsFzgJBs8EG0nTid1FAgmWVd1WCoUUhpBb1kUSYsUMBgoeqcIMpDGWS9ZSWlZOUUjEzNHF3cD2V9vCEmYadIm7ot1OaET0Q2tnYqYTexA+e1EXi8zFY7C4PAAjPNDlIFgYVGpNDoUWdjKYBOE0vYrm5PCi-LIAPrk6ZhSzWKJiGIJJIAeTgY1EEymqFmiJ+MCWUBWaz+EAAdJTpmCRLcREcsYYcZcoM5iTKfGTRI8uTNgR8vgtfpsxRLUED3uopSSPEA
### 💻 Code
```js
const kBar = Symbol("bar");
class foo0 {
/**
* @protected
* @type { null | string }
*/
[kBar] = null; // <-- All declarations of '[kBar]' must have identical modifiers
get bar() {
return this[kBar];
}
/**
* @type { string }
*/
set bar(value) {
this[kBar] = value;
}
}
```
### 🙁 Actual behavior
`All declarations of '[kBar]' must have identical modifiers`
but if non-symbol key is used, no error occurs
```js
class foo1 {
/**
* @protected
* @type { null | string }
*/
__bar = null; // <-- Ok
get bar() {
return this.__bar;
}
/**
* @type { string }
*/
set bar(value) {
this.__bar = value;
}
}
```
### 🙂 Expected behavior
Symbol keys should behave the same as string keys.
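For comparison, an approximate TypeScript equivalent (accessor types unified to `null | string` to keep the sketch version-agnostic):

```ts
const kBar = Symbol("bar");

class Foo {
  protected [kBar]: null | string = null;

  get bar(): null | string {
    return this[kBar];
  }

  set bar(value: null | string) {
    this[kBar] = value;
  }
}
```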
### Additional information about the issue
Using the exact same annotations on every symbol-property assignment avoids this error, but the property type becomes `any`:
```js
const kBar = Symbol("bar");
class foo0 {
/**
* @protected
* @type { null | string }
*/
[kBar] = null; // <-- Ok
get bar() {
return this[kBar];
}
/**
* @type { string }
*/
set bar(value) {
/**
* @protected
* @type { null | string }
*/
this[kBar] = value;
}
}
``` | Bug,Help Wanted | low | Critical |
2,690,758,343 | vscode | Git commands do not work if the profile folder name does not contain ASCI characters |
Type: <b>Bug</b>
Perform any git command from VS Code (pull/push/clone).

Result:
> git pull --tags origin master
> Could not create directory '/c/Users/\335\344\343\340\360/.ssh' (No such file or directory).
> Failed to add the host to the list of known hosts (/c/Users/\335\344\343\340\360/.ssh/known_hosts).
> [email protected]: Permission denied (publickey).
> fatal: Could not read from remote repository.
> Please make sure you have the correct access rights
> and the repository exists.

The same commands work fine from the console and from other git clients.
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 5800X 8-Core Processor (16 x 3800)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.91GB (17.29GB free)|
|Process Argv|--disable-extensions --crash-reporter-id c8391010-3648-499c-8f2b-043a2f1aee5c|
|Screen Reader|no|
|VM|50%|
</details>
Extensions disabled
<!-- generated by issue reporter --> | bug,git | low | Critical |
2,690,797,533 | TypeScript | JSDoc implements with imported types | ### 🔎 Search Terms
Declaration emit for this file requires using private name '__type' from module '{path}'. An explicit type annotation may unblock declaration emit
implements
import
type
JSDoc
### 🕗 Version & Regression Information
This is the behavior in every version I tried (v5.5.4 - Nightly)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?jsx=0&module=0&ts=5.5.4&filetype=js#code/PTAEAEDMEsBsFMB2BDAtvAXKSB7HA6AKwGcAoYAKgtNAogBcBPAB3gBN5JQBvH7PLMXoAnaIgDmoAL7TQAMTw0KwUuTBQ4SNJlAAjZMKJlKFcNFTMcw+nwU5ZkYTlSgARPmC4CJV7RXkqMwsEdER6Yh47GWVSAGNYZGIIgCEDPhBQAB4AWmzQABF4eINkemgcRFB4VGgbXGFQegALaAiYBFBheABHAFdoLoje4jFJZlEAN1L4UBR0UAByAH0lplYF-FAAQUr4AA9mWGhY2saWGeRERBx6UvLK1GRGUF7EXVgcWIBrUA5i4TuFSqNXo+BooAhXlAAF43AAWABMrgA3KQpEA
### 💻 Code
```js
// @filename: foo.js
/**
* @typedef { { foo: string } } Foo
*/
// @filename: bar.js
/**@import { Foo } from "./foo.js" */
/**@implements {Foo} */
class Bar { // <-- Declaration emit for this file requires using private name '__type'. An explicit type annotation may unblock declaration emit.
foo = "42";
}
```
### 🙁 Actual behavior
`Declaration emit for this file requires using private name '__type'. An explicit type annotation may unblock declaration emit`
### 🙂 Expected behavior
No errors; the same code in TypeScript files works fine.
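A sketch of that TypeScript equivalent, for reference:

```ts
// foo.ts
export type Foo = { foo: string };

// bar.ts
import type { Foo } from "./foo.js";

export class Bar implements Foo {
  foo = "42";
}
```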
### Additional information about the issue
if redefine type in the same file, error can be avoided
```js
// @filename: foo.js
/**
* @typedef { { foo: string } } Foo
*/
// @filename: bar.js
/**
* @import { Foo } from "./foo.js"
* @typedef { Foo } $Foo
*/
/**@implements {$Foo} */
class Bar { // <-- Ok
foo = "42";
}
``` | Bug,Help Wanted | low | Critical |
2,690,799,797 | vscode | Cannot create file or folder in root directory | 

MacBook, macOS Sequoia 15.1.1, Visual Studio Code 1.95.3
After opening the project in VS Code, I am unable to create files or folders in the project root directory, but everything works fine in the other folders within the project. I have tried disabling all extensions, but the issue persists. Additionally, the permissions for the project folders are normal. | info-needed | low | Minor |
2,690,840,167 | opencv | videoio: capture by index/name check is wrong on failure with API preference set | ### System Information
OpenCV version: 4.10.0 (checked: still the same code in 4.x & 5.x branch)
### Detailed description
#22700 introduced a check to print the `backend is generally available but can't be used to capture by index/name` message.
However, this message is also printed when the backend does support the capture mode but couldn't open the capture device for another reason.
I think the proper thing to do is to check the capabilities of the backend before issuing the message, i.e. check for `MODE_CAPTURE_BY_INDEX` and `MODE_CAPTURE_BY_FILENAME` (what about `MODE_WRITER`?).
At first glance there doesn't seem to be an API to query the mode right now, so this might need an extension to `videoio_registry.cpp` to add a `getBackendMode` function or similar.
### Steps to reproduce
1. build with aravis support (`HAVE_ARAVIS=ON`)
2. try to create a new video capture for this: `cv::VideoCapture cap(0, cv::CAP_ARAVIS);` (or `cv::VideoCapture cap("foo", cv::CAP_ARAVIS);`)
3. run this code (w/o an USB3 VISION / GigE VISION camera attached)
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: videoio | low | Critical |
2,690,841,569 | vscode | Git - Git blame editor decoration color | With `git.blame.editorDecoration.enabled`. I was expecting it to pick up my theme's comment color as the default color since that's the closest thing to what it is:

Personally I want it to be very subtle just like comments. | bug,git,ux | low | Minor |
2,690,857,512 | vscode | Git blame editor decoration leverage horizontal space? | What it looks like currently:

If I zoom out though it reveals a lot of unused space:

Instead of cluttering things, perhaps it makes sense to allow users to position them at the ruler(s), which is often where a user wraps their code. | feature-request,scm | low | Minor |
2,690,886,842 | stable-diffusion-webui | [Bug]: Getting an error: RuntimeError: No HIP GPUs are available | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I'm on a completely fresh install of Ubuntu 22.04.2. I followed the steps in "Automatic Installation on Linux". I got the WebUI open, but it gives me an error: `RuntimeError: No HIP GPUs are available`. What am I doing wrong? What do I need to get this thing installed and working? My system is an Intel 9900K, 32 GB RAM and a Radeon 6700 XT. I am new to this stuff and don't know half of the stuff I'm doing, so please be patient with me.
### Steps to reproduce the problem
1. Open a terminal in the folder where I want to install the WebUI
2. Copied this into the terminal:
sudo apt install git python3.10-venv -y
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui && cd stable-diffusion-webui
python3.10 -m venv venv
3. Then I ran it with:
./webui.sh --upcast-sampling --skip-torch-cuda-test
4. The WebUI opens, and when I try to generate an image it spits out:
error: RuntimeError: No HIP GPUs are available
### What should have happened?
It should open the WebUI and use my GPU to generate images...
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-11-25-14-11.json](https://github.com/user-attachments/files/17904136/sysinfo-2024-11-25-14-11.json)
### Console logs
```Shell
serwu@serwu-Z390-AORUS-MASTER:~/Desktop/Ai/stable-diffusion-webui$ ./webui.sh --upcast-sampling --skip-torch-cuda-test
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on serwu user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --upcast-sampling --skip-torch-cuda-test
/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'No HIP GPUs are available', memory monitor disabled
Loading weights [6ce0161689] from /home/serwu/Desktop/Ai/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: /home/serwu/Desktop/Ai/stable-diffusion-webui/configs/v1-inference.yaml
/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 4.8s (import torch: 2.3s, import gradio: 0.5s, setup paths: 0.5s, other imports: 0.2s, load scripts: 0.3s, create ui: 0.3s, gradio launch: 0.5s).
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/sd_models.py", line 693, in get_sd_model
load_model()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/sd_models.py", line 868, in load_model
with devices.autocast(), torch.no_grad():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 228, in autocast
if has_xpu() or has_mps() or cuda_no_autocast():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
device_id = get_cuda_device_id()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
) or torch.cuda.current_device()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 778, in current_device
_lazy_init()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
Stable diffusion model failed to load
Using already loaded model v1-5-pruned-emaonly.safetensors [6ce0161689]: done in 0.0s
*** Error completing request
*** Arguments: ('task(y5cdfr3bjrgz0kp)', <gradio.routes.Request object at 0x7ff2024c1480>, 'woman', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/processing.py", line 847, in process_images
res = process_images_inner(p)
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/processing.py", line 920, in process_images_inner
with devices.autocast():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 228, in autocast
if has_xpu() or has_mps() or cuda_no_autocast():
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
device_id = get_cuda_device_id()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
) or torch.cuda.current_device()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 778, in current_device
_lazy_init()
File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
---
```
### Additional information
_No response_ | bug-report | low | Critical |
2,690,890,201 | angular | linkedSignal / resource: small APIs inconsistency | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
Hey!
I've been playing with `linkedSignal` and `resource` over the last few days. Just a small remark on the new APIs:
```ts
linked = linkedSignal({
source: () => this.params(),
computation: (one, two) => {...},
});
resource = resource({
request: () => this.params(),
loader: ({ request, previous }) => {...},
});
```
Maybe I'm missing something obvious, but ideally I'd like to have computation / loader treated the same way:
```ts
linked = linkedSignal({
source: () => this.params(),
computation: ({ source, previous }) => {...},
});
resource = resource({
request: () => this.params(),
loader: ({ request, previous }) => {...},
});
```
Not a huge deal, but still... I think it would be nice to align the APIs (assuming I'm not missing something else).
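For what it's worth, the alignment can be approximated in userland today. A minimal sketch, assuming the current `(source, previous)` callback signature (the adapter and its name are purely illustrative, not a proposed API):
```ts
import { linkedSignal, WritableSignal } from '@angular/core';

// Illustrative adapter: hands the computation a single { source, previous }
// object, mirroring the shape of resource()'s loader argument.
function linkedSignalAligned<S, D>(opts: {
  source: () => S;
  computation: (ctx: { source: S; previous?: { source: S; value: D } }) => D;
}): WritableSignal<D> {
  return linkedSignal<S, D>({
    source: opts.source,
    computation: (source, previous) => opts.computation({ source, previous }),
  });
}
```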
### Proposed solution
```ts
linked = linkedSignal({
source: () => this.params(),
computation: ({ source, previous }) => {...},
});
resource = resource({
request: () => this.params(),
loader: ({ request, previous }) => {...},
});
``` | area: core,core: reactivity,cross-cutting: signals | low | Major |
2,690,894,541 | three.js | GLTFLoader: Assets with url-encoded UTF8 characters in filenames don't load correctly | ### Description
While tackling various internationalization issues with glTF assets I made a few new test assets:
- https://github.com/KhronosGroup/glTF-Sample-Assets/pull/157
In general, three.js fares second-best of the viewers I tested, but one asset doesn't load at all, thus this report.
### Reproduction steps
See live example or code.
This URL works: https://raw.githubusercontent.com/prefrontalcortex/glTF-Sample-Assets/sample/relativeUris/Models/UriTest/glTF-UriTest-01-RelativeTexture-NoUTF8/RelativeResourcePaths.gltf
This URL does not work: https://raw.githubusercontent.com/prefrontalcortex/glTF-Sample-Assets/sample/relativeUris/Models/UriTest/glTF-UriTest-02-RelativeTexture-UTF8/RelativeResourcePaths.gltf
Here's ZIP files of the assets:
[glTF-UriTest-01-RelativeTexture-NoUTF8.zip](https://github.com/user-attachments/files/17904155/glTF-UriTest-01-RelativeTexture-NoUTF8.zip)
[glTF-UriTest-02-RelativeTexture-UTF8.zip](https://github.com/user-attachments/files/17904156/glTF-UriTest-02-RelativeTexture-UTF8.zip)
Both assets pass the glTF Validator and load in e.g. UnityGLTF or glTFast. Other viewers also have issues with URL-encoded UTF8 characters in path names.
### Code
```js
const loader = new GLTFLoader();
loader.load(
// Working URL (non-UTF8):
// 'https://raw.githubusercontent.com/prefrontalcortex/glTF-Sample-Assets/sample/relativeUris/Models/UriTest/glTF-UriTest-01-RelativeTexture-NoUTF8/RelativeResourcePaths.gltf',
// Broken URL (UTF8):
'https://raw.githubusercontent.com/prefrontalcortex/glTF-Sample-Assets/sample/relativeUris/Models/UriTest/glTF-UriTest-02-RelativeTexture-UTF8/RelativeResourcePaths.gltf',
// called when the resource is loaded
function ( gltf ) {
scene.add( gltf.scene );
}
);
```
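A possible client-side workaround (an untested sketch on my side, assuming the percent-encoding gets lost while the relative URI is resolved) is to normalize the encoding in a `LoadingManager` URL modifier before the fetch:
```js
// Untested sketch: re-encode the last path segment so UTF-8 characters
// survive the loader chain unchanged.
const manager = new THREE.LoadingManager();
manager.setURLModifier( ( url ) => {
	const parts = url.split( '/' );
	const last = parts.length - 1;
	try {
		parts[ last ] = encodeURIComponent( decodeURIComponent( parts[ last ] ) );
	} catch ( e ) {
		// malformed escape sequence: leave the URL untouched
	}
	return parts.join( '/' );
} );
const loader = new GLTFLoader( manager );
```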
### Live example
https://jsfiddle.net/st52uj40/10/
### Screenshots
<img width="719" alt="image" src="https://github.com/user-attachments/assets/2a0a7421-0602-4912-846f-a5fbfbd9f3b5">
### Version
latest
### Device
_No response_
### Browser
_No response_
### OS
_No response_ | Loaders | low | Critical |
2,690,912,884 | vscode | Editor GPU: Bracket pair colorization doesn't update when theme is changed | Repro:
1. editor gpu rendering on
2. Change `workbench.colorCustomizations.editorBracketHighlight.foreground*` settings
3. Save, ๐ existing editors don't update | bug,editor-gpu | medium | Minor |
2,690,991,016 | vscode | Editor GPU: Support non-regular inline decorations | See https://github.com/microsoft/vscode/pull/234579 | plan-item,editor-gpu | low | Minor |
2,691,036,532 | transformers | `transformers.image_transforms.resize` does not work for negative values | ### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.39
- Python version: 3.12.5
- Huggingface_hub version: 0.25.0
- Safetensors version: 0.4.5
- Accelerate version: 1.0.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (False)
- Tensorflow version (GPU?): 2.17.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@qubvel @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
If you try to resize an image using `transformers.image_transforms.resize` after normalization has been applied using `transformers.image_transforms.normalize`, it raises an error.
This was the origin of this problem: https://github.com/huggingface/transformers/pull/34583#issuecomment-2497099400
**Reproduce**
code
```py
import numpy as np
from transformers.image_transforms import resize, normalize, to_pil_image
import torch
# scaled image between 0 and 1
x = np.random.random((3, 64, 64))
x = normalize(x, mean=0.5, std=0.5)
# the raises an error
x = resize(x, size=(400, 400))
# this would work without any error
# x = torch.nn.functional.interpolate(
# torch.from_numpy(x).unsqueeze(0),
# (400, 400), mode='bilinear'
# )
```
error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
---> 11 x = resize(x, size=(400, 400))
File transformers/image_transforms.py:337, in resize(image, size, resample, reducing_gap, data_format, return_numpy, input_data_format)
336 if not isinstance(image, PIL.Image.Image):
--> 337 do_rescale = _rescale_for_pil_conversion(image)
File transformers/image_transforms.py:158, in _rescale_for_pil_conversion(image)
--> 158 raise ValueError(
ValueError: The image to be converted to a PIL image contains values outside the range [0, 1], got [-0.9999031475386853, 0.9996538887265269] which cannot be converted to uint8.
```
**Explanation**
1. In this specific normalization, the resulting array contains negative values.
2. During `resize`, it internally tries to convert the array to a `PIL` image.
3. Before converting, `transformers.image_transforms._rescale_for_pil_conversion` is called, which expects the input to be non-negative and hence raises the exception.
4. Why does it require the image to be non-negative?
**Possible Solution**
I can think of a few possible solutions
- use `torch` instead of `PIL`, or use both with a condition that selects the backend depending on the data, or expose this as an option to the user
- use SciPy interpolation or custom interpolation functions
- scale the image to the range [0, 255], but we can't do that safely without knowing the original mean and std that were used to normalize the image
- keep this function as it is and create another function that can also interpolate negative values, based on one of the above methods
### Expected behavior
`resize` works with normalized data that may contain negative values, as interpolation is possible with negative values. | WIP,bug,Vision,Processing | low | Critical |
2,691,094,330 | kubernetes | DRA: seamless kubelet plugin upgrade | ### What would you like to be added?
When updating the DaemonSet of a DRA driver, ideally we want that to be seamless:
- minimal period of time where the kubelet can't call NodePrepareResources
- no deletion + recreation or updates of ResourceSlices
A DaemonSet supports [rolling updates](https://pkg.go.dev/k8s.io/api/apps/v1#RollingUpdateDaemonSet) such that a new pod is already running on a node in parallel to the old one. It's the default!
We need to ensure that:
- the kubelet can reliably detect stale sockets (covered by https://github.com/kubernetes/kubernetes/issues/128696)
- if two pods register in parallel, the kubelet picks the newer one, regardless of the order in which they register (the kubelet might get restarted and then needs to handle registration in arbitrary order)
- there's clear documentation on how to configure a DRA driver daemonset (which socket paths should it use - the same for both pods or per-pod?) and that the example driver follows those best practices
/sig node
/wg device-management
/cc @SergeyKanzhelev
### Why is this needed?
Less downtime, faster updates. | sig/node,kind/feature,needs-triage,wg/device-management | low | Major |
2,691,131,693 | kubernetes | DRA: version skew testing | ### What would you like to be added?
Upgrading a 1.31 alpha cluster to 1.32 with only the beta API enabled is supported. We should have automated tests for version skew and upgrade scenarios:
- creating a v1alpha3 ResourceClaim, upgrading to 1.32 beta, getting it allocated
- creating a DaemonSet with admin access enabled, upgrading to 1.32 with admin access disabled =>
- scheduled, not-running pods should start with admin access
- not-scheduled pods should not get scheduled (error in scheduler: "feature not enabled"), without adverse affects on the DaemonSet controller (like re-creating pods)
- kubelet from v1.31 with control plane from v1.32 with v1alpha3 API enabled => can start pods with claims
Later, downgrades also need to be tested. At the moment, downgrades from 1.32 to 1.31 are not supported (storage version is v1beta1, not supported by 1.31).
### Why is this needed?
Test coverage. Good for production readiness when considering GA. | sig/node,kind/feature,needs-triage,wg/device-management | low | Critical |
2,691,144,851 | react | [React 19] Internal error: TypeError: Invalid state: ReadableStream is already closed | ## Summary
When I declared a component as an async function and used React.use() to retrieve the parameter of a dynamic route within it, I received the following error:
โจฏ Internal error: TypeError: Invalid state: ReadableStream is already closed
at __node_internal_captureLargerStackTrace (node:internal/errors:496:5)
at new NodeError (node:internal/errors:405:5)
at ReadableByteStreamController.enqueue (node:internal/webstreams/readablestream:1151:13)
at flushCompletedChunks (/home/emu/data/development/private/react/Session9/nextrouter-blog-app/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js:114:57155)
at performWork (/home/emu/data/development/private/react/Session9/nextrouter-blog-app/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js:114:55822)
at Immediate._onImmediate (/home/emu/data/development/private/react/Session9/nextrouter-blog-app/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js:114:21159)
at process.processImmediate (node:internal/timers:478:21)
at process.callbackTrampoline (node:internal/async_hooks:128:17)
digest: "1841623159"
GET /pages/post/1 200 in 1206ms
โจฏ next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js (291:15) @ Error
โจฏ Error: Expected a suspended thenable. This is a bug in React. Please file an issue.
at AsyncResource.runInAsyncScope (node:async_hooks:203:9)
digest: "3140543578"
289 | function getSuspendedThenable() {
290 | if (null === suspendedThenable)
> 291 | throw Error(
| ^
292 | "Expected a suspended thenable. This is a bug in React. Please file an issue."
293 | );
294 | var thenable = suspendedThenable;
โจฏ next/dist/src/server/pipe-readable.ts (144:11) @ pipeToNodeResponse
โจฏ Error: failed to pipe response
at pipeToNodeResponse (webpack://next/dist/src/server/pipe-readable.ts:144:10)
at async FlightRenderResult.pipeToNodeResponse (webpack://next/dist/src/server/render-result.ts:291:4)
at async sendRenderResult (node_modules/next/src/server/send-payload.ts:110:2)
at async DevServer.pipeImpl (node_modules/next/src/server/base-server.ts:1715:6)
at async NextNodeServer.handleCatchallRenderRequest (node_modules/next/src/server/next-server.ts:1034:6)
at async DevServer.handleRequestImpl (node_modules/next/src/server/base-server.ts:1462:8)
at async (node_modules/next/src/server/dev/next-dev-server.ts:514:13)
at async Span.traceAsyncFn (node_modules/next/src/trace/trace.ts:143:13)
at async DevServer.handleRequest (node_modules/next/src/server/dev/next-dev-server.ts:512:19)
at async invokeRender (node_modules/next/src/server/lib/router-server.ts:284:10)
at async handleRequest (node_modules/next/src/server/lib/router-server.ts:530:15)
at async requestHandlerImpl (node_modules/next/src/server/lib/router-server.ts:576:6)
at async Server.requestListener (node_modules/next/src/server/lib/start-server.ts:146:6)
142 | if (isAbortError(err)) return
143 |
> 144 | throw new Error('failed to pipe response', { cause: err })
| ^
145 | }
146 | }
147 | {
[cause]: Error: Expected a suspended thenable. This is a bug in React. Please file an issue.
at Error (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:291:14)
at getSuspendedThenable (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:2271:18)
at retryTask (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:2313:10)
at performWork (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:1160:21) {
digest: '3140543578'
}
}
142 | if (isAbortError(err)) return
143 |
> 144 | throw new Error('failed to pipe response', { cause: err })
| ^
145 | }
146 | }
147 |
GET /pages/post/1 500 in 856ms
โจฏ next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js (291:15) @ Error
โจฏ Error: Expected a suspended thenable. This is a bug in React. Please file an issue.
at AsyncResource.runInAsyncScope (node:async_hooks:203:9)
digest: "3140543578"
289 | function getSuspendedThenable() {
290 | if (null === suspendedThenable)
> 291 | throw Error(
| ^
292 | "Expected a suspended thenable. This is a bug in React. Please file an issue."
293 | );
294 | var thenable = suspendedThenable;
โจฏ next/dist/src/server/pipe-readable.ts (144:11) @ pipeToNodeResponse
โจฏ Error: failed to pipe response
at pipeToNodeResponse (webpack://next/dist/src/server/pipe-readable.ts:144:10)
at async render_result_RenderResult.pipeToNodeResponse (webpack://next/dist/src/server/render-result.ts:291:4)
at async sendRenderResult (node_modules/next/src/server/send-payload.ts:110:2)
at async DevServer.pipeImpl (node_modules/next/src/server/base-server.ts:1715:6)
at async NextNodeServer.handleCatchallRenderRequest (node_modules/next/src/server/next-server.ts:1034:6)
at async DevServer.handleRequestImpl (node_modules/next/src/server/base-server.ts:1462:8)
at async (node_modules/next/src/server/dev/next-dev-server.ts:514:13)
at async Span.traceAsyncFn (node_modules/next/src/trace/trace.ts:143:13)
at async DevServer.handleRequest (node_modules/next/src/server/dev/next-dev-server.ts:512:19)
at async invokeRender (node_modules/next/src/server/lib/router-server.ts:284:10)
at async handleRequest (node_modules/next/src/server/lib/router-server.ts:530:15)
at async requestHandlerImpl (node_modules/next/src/server/lib/router-server.ts:576:6)
at async Server.requestListener (node_modules/next/src/server/lib/start-server.ts:146:6)
142 | if (isAbortError(err)) return
143 |
> 144 | throw new Error('failed to pipe response', { cause: err })
| ^
145 | }
146 | }
147 | {
[cause]: Error: Expected a suspended thenable. This is a bug in React. Please file an issue.
at Error (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:291:14)
at getSuspendedThenable (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:2271:18)
at retryTask (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:2313:10)
at performWork (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js:1160:21) {
digest: '3140543578'
}
}
142 | if (isAbortError(err)) return
143 |
> 144 | throw new Error('failed to pipe response', { cause: err })
| ^
145 | }
146 | }
147 |
โ Compiling /_error ...
โ Compiled /_error in 1381ms (910 modules)
Here's the snippet:
```jsx
// data helper (separate module in the repro; file name not given)
export default function getPosts() {
return [
{ id: "1", title: "Introduction to Next.js", content: "Next.js is a React framework..." },
{ id: "2", title: "Understanding Routing", content: "Routing in Next.js is file-based..." },
{ id: "3", title: "Deploying a Next.js App", content: "You can deploy Next.js on Vercel..." },
]
}
// dynamic route page component (separate file; imports of React, getPosts and Navbar are omitted in the original snippet)
export default async function Post({ params }) {
const { id } = React.use(params)
const posts = getPosts()
const post = posts.find((post) => post.id === id);
return post ? (
<div>
<Navbar />
<h1>{post.title}</h1>
<p>{post.content}</p>
</div>
) : <p>Loading...</p>;
}
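// Side note (my suggestion, not part of the original repro): React.use()
// is only valid in a synchronous function component. Since Post is declared
// async, awaiting the promise instead should avoid the crash:
//   const { id } = await params;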
``` | React 19 | medium | Critical |
2,691,195,418 | rust | The default values for TargetOption violate the target consistency check | `TargetOption` defaults to `relocation_model: Pic` and `dynamic_linking: false` and `position_independent_executables: false`. However, our consistency check that this is an invalid combination of options.
So it seems we should either fix our defaults, or the consistency check is a bit overzealous?
This check was added in https://github.com/rust-lang/rust/pull/100537, Cc @petrochenkov @oli-obk | T-compiler,A-target-specs,C-bug,A-targets | low | Major |
2,691,232,871 | vscode | Not displaying completion items with empty string text labels | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Darwin arm64 23.4.0
Steps to Reproduce:
1. [TypeScript Playground](https://www.typescriptlang.org/play/?#code/MYGwhgzhAEBCCuBzaBvAUNTmBOBTR842ACtgPYAO0AvNAOQDuZ2A1hHRlvd7YwBYBLAC64IFMMFwcu3CnRr1xAEyW4l0rnXm9cAWwpCAnvOicsaAL5o0oSDABKuCiGi4AHiIB2SmAmToZYDJPCCFseGAhZgAKAEpUMxlMCHgKXGw4xKSAemzoTzIGaDDDYrJoCUkoYr5cV30jaFDsAU9kCnI07EaGYT5oACNsCRZcIQgsmSFBCABtLQBdLKsrayA)
```ts
class Bug {
' ' = 'whitespace'
' p' = 'padded'
'' = 'empty'
}
class Repl extends Bug {
constructor() {
super()
// now try to access the empty string property with brackets
this[''] // receives 3 entries from TypeScript, displays only 2
}
}
```

Entries from `extensions/typescript-language-features/src/languageFeatures/completions.ts`
```json
[
{
"name": "",
"kind": "property",
"kindModifiers": "",
"sortText": "11",
},
{
"name": " ",
"kind": "property",
"kindModifiers": "",
"sortText": "11",
},
{
"name": " p",
"kind": "property",
"kindModifiers": "",
"sortText": "11",
}
]
```
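For reference, the same behavior seems reproducible from the extension side with a minimal provider (a sketch; the selector and item labels are illustrative):
```ts
import * as vscode from 'vscode';

// Sketch: returns three items mirroring the TypeScript payload above.
// I'd expect the empty-label item to be accepted by the API but dropped
// by the suggest widget, matching the two-of-three rendering.
vscode.languages.registerCompletionItemProvider('typescript', {
  provideCompletionItems() {
    return [
      new vscode.CompletionItem('', vscode.CompletionItemKind.Property),
      new vscode.CompletionItem(' ', vscode.CompletionItemKind.Property),
      new vscode.CompletionItem(' p', vscode.CompletionItemKind.Property),
    ];
  },
});
```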
| suggest,under-discussion | low | Critical |
2,691,244,818 | storybook | [Bug]: Nuxt init broken | ### Describe the bug
When I create a new Nuxt project and try to initialize Storybook, it says that the initialization is successful but nothing is installed. I would expect it to either install the Nuxt framework or the `vue3-vite` framework, or fail with an error.
### Reproduction link
N/A
### Reproduction steps
```sh
npx nuxi@latest init test-nuxt
cd test-nuxt
npx storybook@next init
```
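For comparison, a successful init on this setup should scaffold roughly the following (a sketch assuming the plain Vite-based Vue 3 framework; a Nuxt-aware init would presumably wire up the community `@storybook-vue/nuxt` framework instead):
```ts
// .storybook/main.ts (expected scaffold sketch, not actual output)
import type { StorybookConfig } from '@storybook/vue3-vite';

const config: StorybookConfig = {
  framework: { name: '@storybook/vue3-vite', options: {} },
  stories: ['../stories/**/*.stories.@(js|ts)'],
};
export default config;
```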
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.1.1
CPU: (10) arm64 Apple M1 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.17.0 - /usr/local/bin/node
npm: 10.8.2 - /usr/local/bin/npm
pnpm: 9.13.2 - ~/Library/pnpm/pnpm <----- active
Browsers:
Chrome: 131.0.6778.86
Safari: 18.1.1
```
### Additional context
_No response_ | bug,cli,vue3,nuxt,sev:S2 | low | Critical |
2,691,254,904 | next.js | Middleware called redirect but the browser is not redirected | ### Link to the code that reproduces this issue
https://github.com/kuanjiahong/nextjs-middleware-redirect-issue-demo
### To Reproduce
1. npm run dev
2. Go to http://localhost:3000
3. Click "Go to Dashboard" on "/" page
4. Click "Set Session Cookie" on "/set-cookie" page
5. Click "Go to Dashboard" on "/" page
6. Click "Remove Session" on "/dashboard" page
7. Click "Call Server Action" on "/dashboard" page
### Current vs. Expected behavior
**Current behavior**
After clicking "Remove Session" and then clicking "My Server Action", the page is not redirected to the "/set-cookie" page.
**Expected behaviour**
After clicking "Remove Session" and then clicking "My Server Action", the page should be redirected to the "/set-cookie" page.
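For context, the middleware involved is roughly the following (my reconstruction based on the steps above, not copied from the repro repo):
```ts
// middleware.ts (reconstruction sketch; route names follow the repro steps)
import { NextResponse, type NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const session = request.cookies.get('session');
  if (!session) {
    // This redirect is returned, but the browser stays on the page when
    // the request was triggered by a server action.
    return NextResponse.redirect(new URL('/set-cookie', request.url));
  }
  return NextResponse.next();
}

export const config = { matcher: ['/dashboard/:path*'] };
```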
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home
Available memory (MB): 16086
Available CPU cores: 8
Binaries:
Node: 22.11.0
npm: 10.9.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.4-canary.27 // Latest available version is detected (15.0.4-canary.27).
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
I tested my reproduction on 15.0.4-canary.27 and also on 15.0.3. The issue is reproducible on both of these versions. | bug,Middleware | low | Minor |
2,691,309,300 | vscode | Let language extensions dynamically configure soft word wrap for long lines | VS Code currently offers a word wrap option, which breaks lines at a certain fixed length in the editor UI.
There are also language-specific autoformatters that understand syntax and do hard line breaks. This is the better option for clearer and more understandable code, and so is very commonly used across most codebases (e.g. using Prettier and/or built-in linters). The drawback is that because the formatting changes are written to disk, the same line length setting must apply to everyone working in a repo regardless of their monitor size and personal preferences.
There's a possible solution that combines both of these by letting custom formatters provide both a hard break (`\n`) _and_ a soft break signal to the editor. The word wrap setting thus doesn't have to be static for the entire editor but can be automatically set per file and per line. The lines themselves can be arbitrarily long on disk, and each end user can configure their own view depending on their setup.
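To make the idea concrete, here is a purely hypothetical API shape (nothing like this exists in VS Code today; it only illustrates the proposal):
```ts
import * as vscode from 'vscode';

// Hypothetical: a formatter-like provider hands the editor soft-break
// positions instead of writing hard '\n' edits to disk.
interface SoftWrapHint {
  line: number;      // zero-based line in the document
  columns: number[]; // preferred soft-break columns, best first
}

interface SoftWrapHintProvider {
  provideSoftWrapHints(document: vscode.TextDocument): SoftWrapHint[];
}
```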
Additional advantages include:
- No surprise formatting of the entire file when changing one line because someone else committed without running the formatter.
- Significantly cleaner diffs.
- Fewer fights between devs working on the same codebase. | feature-request,editor-wrapping | low | Minor |
2,691,446,774 | vscode | Indicate that version property should not be used for reporting telemetry | I have been using the `version` property name to report the version of an extension in my telemetry event, and I discovered that this property name is reserved for the product version. It would be helpful if there were some indication that this property is reserved and cannot be used. | debt,telemetry | low | Minor |
2,691,489,469 | go | x/tools/gopls: "slice bounds out of range" crash in ExtractToNewFile | ```
#!stacks
"goPanicSliceAcap" &&
("golang.ExtractToNewFile:+72" || "golang.ExtractToNewFile:+98")
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
```go
fileStart := pgf.File.FileStart
buf.Write(pgf.Src[start-fileStart : end-fileStart]) // slice bounds out of range
```
This stack `fFKP7w` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-11-22.json):
- `crash/crash`
- [`runtime.gopanic:+69`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/panic.go;l=804)
- [`runtime.goPanicSliceAcap:+2`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/panic.go;l=141)
- [`golang.org/x/tools/gopls/internal/golang.ExtractToNewFile:+72`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/golang/extracttofile.go;l=155)
- [`golang.org/x/tools/gopls/internal/server.(*commandHandler).ExtractToNewFile.func1:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/server/command.go;l=1139)
- [`golang.org/x/tools/gopls/internal/server.(*commandHandler).run.func2:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/server/command.go;l=381)
- [`golang.org/x/tools/gopls/internal/server.(*commandHandler).run:+77`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/server/command.go;l=412)
- [`golang.org/x/tools/gopls/internal/server.(*commandHandler).ExtractToNewFile:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/server/command.go;l=1135)
- [`golang.org/x/tools/gopls/internal/protocol/command.Dispatch:+81`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/protocol/command/command_gen.go;l=199)
- [`golang.org/x/tools/gopls/internal/server.(*server).ResolveCodeAction:+21`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/server/code_action.go;l=247)
- [`golang.org/x/tools/gopls/internal/protocol.serverDispatch:+46`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/protocol/tsserver.go;l=216)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.ServerHandler.func3:+5`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/protocol/protocol.go;l=160)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.handshaker.func4:+52`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/lsprpc/lsprpc.go;l=509)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.MustReplyHandler.func1:+2`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:internal/jsonrpc2/handler.go;l=35)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.AsyncHandler.func2.2:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:internal/jsonrpc2/handler.go;l=104)
- `runtime.goexit:+0`
```
golang.org/x/tools/[email protected] go1.23.3 linux/amd64 vscode (3)
```
Dups: VdbNJw | NeedsInvestigation,gopls,Tools,gopls/telemetry-wins | low | Critical |
2,691,505,687 | godot | Shortcut to restart particles doesn't work when multiple particle systems are selected | ### Tested versions
4.3-stable
### System information
Windows
### Issue description
Selecting multiple particle systems breaks the shortcut to restart them: pressing it does nothing.
### Steps to reproduce
Open the MRP and open the Node3D scene. Select a single particle node, press Ctrl+R, and observe it restart.
Select the two particle nodes and press Ctrl+R: nothing happens.
Ctrl+R should call `restart()` on all particles in the currently open scene.
### Minimal reproduction project (MRP)
[particle-shortcut-mrp.zip](https://github.com/user-attachments/files/17906500/particle-shortcut-mrp.zip)
| topic:editor,topic:particles | low | Minor |
2,691,510,154 | go | x/tools/gopls: refactor.rewrite.removeUnusedParam: avoid literalization for recursive spread call | ### gopls version
Build info
----------
golang.org/x/tools/gopls v0.16.2
golang.org/x/tools/[email protected] h1:K1z03MlikHfaMTtG01cUeL5FAOTJnITuNe0TWOcg8tM=
github.com/BurntSushi/[email protected] h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/google/[email protected] h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/[email protected] h1:2O2DON6y3XMJiQRAS1UWU+54aec2uopH3x7MAiqGW6Y=
golang.org/x/[email protected] h1:utOm6MM3R3dnawAiJgn0y+xvuYRsm1RKM/4giyfDgV0=
golang.org/x/[email protected] h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/[email protected] h1:Wm3cG5X6sZ0RSVRc/H1/sciC4AT6HAKgLCSH2lbpR/c=
golang.org/x/[email protected] h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/[email protected] h1:6bJEg2w2kUHWlfdJaESYsmNfI1LKAZQi6zCa7LUn7eI=
golang.org/x/[email protected] h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/[email protected] h1:9MDAWxMoSnB6QoSqiVr7P5mtkT9pOc1kSxchzPCnqJs=
mvdan.cc/[email protected] h1:G3QvahNDmpD+Aek/bNOLrFR2XC6ZAdo62dZu65gmwGo=
mvdan.cc/xurls/[email protected] h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: go1.23.1
### go env
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/toad/Library/Caches/go-build'
GOENV='/Users/toad/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/toad/work/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/toad/work'
GOPRIVATE=''
GOPROXY='https://goproxy.cn,direct'
GOROOT='/Users/toad/go/go1.23.1/'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/toad/go/go1.23.1/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/toad/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/toad/work/demo10/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/g1/tgmnlrdn3vxgv08kdgh9vpkw0000gn/T/go-build4008461701=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
```
package main
func CC(aa, bb int) { // Remove the unused parameter bb.
CC(BB(aa))
}
func BB(aa int) (int, int) {
return 0, 0
}
```
### What did you expect to see?
report an error.
### What did you see happen?
```
func CC(aa int) {
func(aa, _ int) {
CC(aa)
}(BB(aa))
}
func BB(aa int) (int, int) {
return 0, 0
}
```
I see "A better solution would be to allow for ad-hoc snapshots that expose the full machinery of real snapshots: minimal invalidation, batched type checking, etc. " in the comments of the [inline_all.inlineAllCalls](https://github.com/golang/tools/blob/017e1b8d601912bd35c6e90d4c5321d0888c942c/gopls/internal/golang/inline_all.go#L47).
Maybe we should implement it.
"Extract parameter struct" will also use it.
### Editor and settings
_No response_
### Logs
_No response_ | gopls,Tools,Refactoring | low | Critical |
2,691,630,620 | deno | Deno.dlopen cannot open static asset | Version: Deno 2.1.1
I tried to bundle my native library and script into a single executable file. Of course, I used the --include flag during compilation. After running the executable, I found that Deno.readFile successfully reads the static asset, but Deno.dlopen is unable to open it. Below is an example of the code:
```ts
const data = Deno.readFileSync(
import.meta.dirname + "/static/libexample.dylib",
);
console.log(data.buffer.byteLength); // returns the correct size, 1955522
const lib = Deno.dlopen(
import.meta.dirname + "/static/libexample.dylib",
{
openWebView: {
parameters: [],
result: "void",
nonblocking: false,
},
},
); // but Error: Could not open library: Could not open library
```
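A workaround that seems plausible (my own untested sketch; it assumes `--include` assets live in the compiled binary's virtual filesystem, which `dlopen` apparently cannot read from) is to copy the bytes out to a real file first:
```ts
// Untested sketch: materialize the embedded library on disk, then open it.
const data = Deno.readFileSync(
  import.meta.dirname + "/static/libexample.dylib",
);
const tmpPath = Deno.makeTempFileSync({ suffix: ".dylib" });
Deno.writeFileSync(tmpPath, data);
const lib = Deno.dlopen(tmpPath, {
  openWebView: { parameters: [], result: "void", nonblocking: false },
});
```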
| feat,compile | low | Critical |
2,691,649,295 | vscode | Feature Request: Enhance Edit > Find to a dropdown list | Enhance Edit > Find to a dropdown list
I constantly use Edit > Find with the same two or three queries again and again.
It would be great to have the Edit > Find textbox enhanced into a dropdown list
showing the last five entries. That way, one could simply select a search term from the current history.
Copy/paste doesn't really work when you're switching between multiple search terms and editing code.
Thanks!
Visual Studio Code
Version: 1.95.3 | feature-request,editor-find | low | Minor |
2,691,740,037 | PowerToys | FancyZones - zones created will shift up the height of the taskbar and then shift back down | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones, FancyZones Editor
### Steps to reproduce
Usually it happens on reboot, but sometimes I'll go to move a window into a zone and notice one of two situations: either windows will be behind the taskbar, or they will be exactly twice the taskbar's height from the bottom of the screen. I have taken to creating two versions of the same layout, one "low" and one "high", that compensate for whichever version of this bug is rearing its head. At one point I had my taskbar set to auto-hide (I have an OLED monitor), but I now have it set to static; that's when I first noticed the problem, and the bug has haunted me ever since.

### โ๏ธ Expected Behavior
I expect all my zones to stay where I intended them to be when I first set them up, but I've been stuck with this bug for so long that I actually anticipate it happening and keep two versions of my FZ layout set up to compensate.
### โ Actual Behavior
Zones shift up or down based on how the computer is feeling that day (I honestly can't tell what the trigger is, it seems arbitrary). I have two versions of the same layout that compensate for my computer's mood. Taskbar is currently set to always show, so my desktop remains static and unchanging but Fancy Zones is kind of temperamental about it.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,691,742,353 | react-native | React Native 0.76.3 Eslint Parser Issue with Flow | ### Description
I have recently migrated the app to 0.76.3 following the upgrade helper and also upgraded `@react-native/babel-preset`, which has resulted in an ESLint parsing error for Flow-typed files.
### Steps to reproduce
Using the latest React Native 0.76.3 with `@react-native/babel-preset` version 0.76.3, add some Flow files.
Run `yarn eslint`
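For reference, the parser wiring looks roughly like this (a sketch; the actual repo config isn't shown here, and `requireConfigFile: false` is my assumption):
```js
// .eslintrc.js (sketch)
module.exports = {
  parser: '@babel/eslint-parser',
  parserOptions: {
    requireConfigFile: false,
    babelOptions: {
      presets: ['module:@react-native/babel-preset'],
    },
  },
};
```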
### React Native Version
0.76.3
### Affected Platforms
Other (please specify)
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0.1
CPU: (12) arm64 Apple M2 Pro
Memory: 502.45 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.20.5
path: /usr/local/bin/node
Yarn:
version: 1.22.19
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.2
path: /usr/local/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.15.2
path: /Users/ravindragupta/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
API Levels:
- "31"
- "33"
- "34"
- "35"
Build Tools:
- 29.0.2
- 30.0.2
- 30.0.3
- 31.0.0
- 33.0.0
- 33.0.1
- 34.0.0
- 35.0.0
System Images:
- android-26 | ARM 64 v8a
- android-26 | Google APIs ARM 64 v8a
- android-28 | Google ARM64-V8a Play ARM 64 v8a
- android-30 | Google APIs ARM 64 v8a
- android-30 | Google Play ARM 64 v8a
- android-31 | Google Play ARM 64 v8a
- android-33 | Google APIs ARM 64 v8a
- android-34 | Google APIs ARM 64 v8a
- android-34 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12483815
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.9
path: /usr/bin/javac
Ruby:
version: 3.0.4
path: /Users/ravindragupta/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
Error while parsing /Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/services/syncData/checkConnection.js
Line undefined, column undefined: Cannot read properties of undefined (reading 'forEach')
`parseForESLint` from parser `/Users/ravindragupta/Desktop/Projects/mobilev2/native/node_modules/@babel/eslint-parser/lib/index.cjs` is invalid and will just be ignored
[...the same three-line error repeats verbatim, several times per file, for src/mobileCore/api/oauth.api.js, src/mobileCore/api/types.js, src/mobileCore/api/offline.api.js, src/actions/saveUtils.js, src/mobileCore/api/records.api.js, src/mobileCore/api/urlRoute.js, src/components/helpers/listField.js and src/mobileCore/BaseField.js...]
Error while parsing /Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/offline.api.js
Line undefined, column undefined: Cannot read properties of undefined (reading 'forEach')
`parseForESLint` from parser `/Users/ravindragupta/Desktop/Projects/mobilev2/native/node_modules/@babel/eslint-parser/lib/index.cjs` is invalid and will just be ignored
Error while parsing /Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/services/syncData/checkConnection.js
Line undefined, column undefined: Cannot read properties of undefined (reading 'forEach')
`parseForESLint` from parser `/Users/ravindragupta/Desktop/Projects/mobilev2/native/node_modules/@babel/eslint-parser/lib/index.cjs` is invalid and will just be ignored
Error while parsing /Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/offline.api.js
Line undefined, column undefined: Cannot read properties of undefined (reading 'forEach')
`parseForESLint` from parser `/Users/ravindragupta/Desktop/Projects/mobilev2/native/node_modules/@babel/eslint-parser/lib/index.cjs` is invalid and will just be ignored
Error while parsing /Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/BaseField.js
Line undefined, column undefined: Cannot read properties of undefined (reading 'forEach')
`parseForESLint` from parser `/Users/ravindragupta/Desktop/Projects/mobilev2/native/node_modules/@babel/eslint-parser/lib/index.cjs` is invalid and will just be ignored
Error while parsing /Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/offline.api.js
Line undefined, column undefined: Cannot read properties of undefined (reading 'forEach')
`parseForESLint` from parser `/Users/ravindragupta/Desktop/Projects/mobilev2/native/node_modules/@babel/eslint-parser/lib/index.cjs` is invalid and will just be ignored
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/actions/popupMenu.action.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/actions/saveUtils.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/components/AnswerFields/AnswerRenderer.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/components/Detail/FieldsController.js
178:9 warning Invalid loop. Its body allows only one iteration no-unreachable-loop
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/components/helpers/listField.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/BaseField.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/batch.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/batchList.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/filters.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/filters.operators.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/filters.operators.offline.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/metadata.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/oauth.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/offline.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/prefetch.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/prefetch.diffsync.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/prefetch.etags.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/records.api.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/types.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/api/urlRoute.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/services/realm.js
10:1 warning Dependency cycle detected import/no-cycle
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/services/realmDb/RealmDbUpdates.js
11:1 warning Dependency cycle detected import/no-cycle
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/services/syncData/checkConnection.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
/Users/ravindragupta/Desktop/Projects/mobilev2/native/src/mobileCore/services/syncData/syncDataWrappers.js
0:0 error Parsing error: Cannot read properties of undefined (reading 'forEach')
✖ 24 problems (21 errors, 3 warnings)
```
### Reproducer
https://github.com/test
### Screenshots and Videos
<img width="1135" alt="Screenshot 2024-11-25 at 11 38 55โฏPM" src="https://github.com/user-attachments/assets/8f7e699c-37da-4aee-aa1e-9926750d55d7">
| Flow,Needs: Repro,Needs: Attention | low | Critical |
2,691,745,634 | go | x/tools: Windows build OOM crash | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/go/callgraph/vta" && test == "" && goos == "windows"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730352753718672849)):
PASS
ok golang.org/x/tools/go/callgraph/vta 14.936s
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools | low | Critical |
2,691,798,368 | transformers | Uniformize zero-shot object detection postprocessing methods | ## Uniformizing Zero-Shot Object Detection post-processing
### Introduction
Currently, we have four **zero-shot object detection models** in the Transformers library:
- **OwlVit**
- **OwlV2**
- **Grounding Dino**
- **OmDet Turbo**
Each model uses slightly different **postprocessing arguments** and produces **different output formats**, which complicates user experience and makes it harder to use them in pipelines.
To address these inconsistencies, I propose a **unified postprocessing interface** for all four models. This will enhance usability, reduce confusion, and enable seamless integration with existing pipelines.
## Comparison of Postprocessing Methods
Below is a comparison of the current `postprocessing` methods and their arguments:
| **Model** | **Postprocessing Method** | **Key Arguments** |
|---|---|---|
| **OwlVit / OwlV2** | `post_process_object_detection` | `outputs`, `threshold`, `target_sizes` |
| **Grounding Dino** | `post_process_grounded_object_detection` | `outputs`, `input_ids`, `box_threshold`, `text_threshold`, `target_sizes` |
| **OmDet Turbo** | `post_process_grounded_object_detection` | `outputs`, `classes`, `score_threshold`, `nms_threshold`, `target_sizes`, `max_num_det` |
### Suggested Changes to Arguments
To standardize postprocessing across all models, the following suggestions are proposed:
1. **Standardize Method Naming:**
Use a single method, `post_process_grounded_object_detection`, for all models for text-guided object detection. For backward compatibility, retain additional methods (e.g., OwlVit/OwlV2โs `post_process_object_detection`) with a deprecation cycle.
2. **Unify Required Arguments:**
Make `outputs` the only required argument.
- For **Grounding Dino**, pass `input_ids` inside the `outputs` parameter.
- For **OmDet Turbo**, make `classes` optional to provide additional flexibility.
3. **Rename Threshold Parameters:**
Standardize parameter names (`score_threshold` and `box_threshold`) to a single name: `threshold`. These parameters perform the same function (filtering detections by confidence score), so a uniform name reduces confusion.
4. **Add `text_labels` Argument:**
Introduce an optional `text_labels` parameter to map detected labels (integer IDs) to their corresponding text names.
### Final Unified Method Signature
The new method would look like this:
```python
def post_process_grounded_object_detection(
self,
outputs,
threshold: float = ...,
target_sizes: Optional[Union[TensorType, List[Tuple]]] = None,
text_labels: Optional[Union[List[str], List[List[str]]]] = None,
# ... additional model-specific params
)
```
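For concreteness, below is a hedged usage sketch of the proposed call. This reflects the *proposal*, not the released API (it will only work once the proposal lands), and the checkpoint name and image URL are illustrative assumptions:
```python
# Hedged sketch of the *proposed* unified call; checkpoint and image URL are
# illustrative assumptions, not part of the proposal itself.
import requests
import torch
from PIL import Image
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

checkpoint = "IDEA-Research/grounding-dino-tiny"  # assumed checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="a cat. a remote control.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Under the proposal, `outputs` is the only required argument;
# for Grounding Dino the `input_ids` travel inside it.
results = processor.post_process_grounded_object_detection(
    outputs,
    threshold=0.3,
    target_sizes=[image.size[::-1]],
)
for result in results:
    # Integer ids in `labels`, human-readable names in `text_labels`.
    print(result["scores"], result["labels"], result["boxes"], result["text_labels"])
```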
---
## Postprocessing Outputs
### Current outputs by postprocessing
| **Model** | **Current Output Format** |
|--------------------|------------------------------------------------------------------------------------------------------------------|
| **OwlVit / OwlV2** | `{"scores": score, "labels": label, "boxes": box}`<br> *(labels are integer class IDs)* |
| **Grounding Dino** | `{"scores": score, "labels": label, "boxes": box}`<br> *(labels are text names of detected objects decoded from `input_ids`)* |
| **OmDet Turbo** | `{"scores": score, "classes": class, "boxes": box}`<br> *(classes are text names of detected objects)* |
---
#### Suggested unified output format
The output format will be standardized to:
```python
{
"scores": score,
"labels": label, # Integer class IDs
"boxes": box, # Detected bounding boxes
"text_labels": text # Optional: text labels
}
```
### Detailed Model Changes
#### **OwlVit / OwlV2**
Current:
```python
{"scores": score, "labels": label, "boxes": box}
```
Proposed:
```python
{
"scores": score,
"labels": label,
"boxes": box,
"text_labels": text
}
```
#### **Grounding Dino**
Current:
```python
{"scores": score, "labels": label, "boxes": box}
```
Proposed:
```python
{
"scores": score,
"labels": text, # Will be set to `None` with deprecation cycle
"boxes": box,
"text_labels": text
}
```
#### **OmDet Turbo**
Current:
```python
{"scores": score, "classes": class, "boxes": box}
```
Proposed:
```python
{
"scores": score,
"labels": label, # Add integer labels
"boxes": box,
"text_labels": text, # Copy of current `classes`
"classes": text # Retain temporarily, remove with deprecation cycle
}
```
Feel free to provide feedback on the suggested changes!
### Motivation
This will enhance usability, reduce confusion, and enable integration with existing zero-shot object detection pipelines.
### Your contribution
I will work on this and already have draft PRs. | Feature request,Vision,Processing | low | Minor |
2,691,880,898 | react-native | RN 0.76: std::__1::bad_function_call | ### Description
After upgrading to RN 0.76.x the iOS app started crashing.
### Steps to reproduce
1. Build & start
### React Native Version
0.76.3
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (12) arm64 Apple M2 Max
Memory: 813.77 MB / 96.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.18.0
path: ~/.nvm/versions/node/v20.18.0/bin/node
Yarn:
version: 1.22.19
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v20.18.0/bin/npm
Watchman:
version: 2024.11.04.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
API Levels:
- "34"
- "35"
Build Tools:
- 34.0.0
- 35.0.0
Android NDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 23.0.1
path: /opt/homebrew/opt/openjdk/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.2
wanted: ^15.1.2
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
2024-11-25 20:35:31.281 requestAuthorizationWithOptions: granted nil
2024-11-25 20:35:31.830 applicationDidBecomeActive
2024-11-25 20:35:31.877 PushCredentials: AF703A956C2754F369DAE139A99C9A6E887EF3DC81FB7D55B29732**********
2024-11-25 20:35:36.088 Device token: 32 bytes
libc++abi: terminating due to uncaught exception of type std::__1::bad_function_call: std::exception
```
### Reproducer
https://github.com/react-native-community/reproducer-react-native
### Screenshots and Videos
<img width="1706" alt="Screenshot 2024-11-25 at 20 46 33" src="https://github.com/user-attachments/assets/c13a72ec-478e-4659-abf0-9724d407b20f">
<img width="351" alt="Screenshot 2024-11-25 at 20 47 10" src="https://github.com/user-attachments/assets/eccb025b-7ea1-4c1a-bc8e-282a194d1c94">
| Needs: Author Feedback,Needs: Repro | medium | Critical |
2,691,890,092 | go | runtime: make Windows VirtualAlloc failure more informative | When a Windows build fails due to OOM, the error log looks like this:
https://logs.chromium.org/logs/golang/buildbucket/cr-buildbucket/8730352753718672849/+/u/step/19/log/2
It contains a thread dump of the `go test` command at the moment after VirtualAlloc failed, but I suspect the real culprits here are the test child processes, among which is x/tools/go/ssa, which is known to have a big appetite for memory. Unfortunately that information is somewhat obscure in the log.
The task of this issue is to consider whether there are ways that the Windows OOM crash could be made more informative, for example by including the committed size of the current process. Alternatively, consider whether there are changes we could make to `go test` or the builders that would point the finger of blame more clearly; one builder-side option is sketched below.
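As a purely hypothetical illustration of the builder-side option (not a runtime change; the helper and target package are made up for the example), a wrapper could sample each test child's commit charge and log the peak:
```python
# Hypothetical builder-side wrapper: sample a test child's commit charge so
# the OOM culprit is visible in the log. `pagefile` is Windows-specific, and
# the target package below is only an example.
import subprocess
import sys
import time
import psutil

def run_and_watch(cmd):
    proc = subprocess.Popen(cmd)
    child = psutil.Process(proc.pid)
    peak = 0
    while proc.poll() is None:
        try:
            # On Windows, `pagefile` reports the process commit charge.
            peak = max(peak, child.memory_info().pagefile)
        except psutil.NoSuchProcess:
            break
        time.sleep(0.05)
    print(f"{' '.join(cmd)}: exit={proc.returncode}, "
          f"peak commit ~ {peak / 2**20:.1f} MiB", file=sys.stderr)

run_and_watch(["go", "test", "golang.org/x/tools/go/ssa/..."])
```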
@prattmic | help wanted,OS-Windows,NeedsInvestigation,compiler/runtime | low | Critical |
2,691,898,113 | transformers | Recomputed tensor size does not match when using activation checkpointing when using FSDP and accelerate | ### System Info
```
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.35
- Python version: 3.12.6
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: distributed (`accelerate`)
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-40GB
```
### Who can help?
@muellerz @SunMarc @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running into the following error while trying to use the SFTTrainer with FSDP and the `accelerate` library (full stack trace provided at the very bottom of this post).
```
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass
```
This occurs when I set `gradient_checkpointing: false` and `activation_checkpointing: true`. Curiously, it actually seems to work if I set `gradient_checkpointing: true` and `activation_checkpointing: false`, **but** that produces the following warning message:
```
# When using FSDP full shard, instead of using `gradient_checkpointing` in TrainingArguments, please use `activation_checkpointing` in `fsdp_config`. The former introduces a redundant AllGather operation in backward pass. Reference: https://github.com/huggingface/transformers/issues/30404.
```
There are a few related GitHub issues around that touch on this issue:
1. https://github.com/Lightning-AI/pytorch-lightning/issues/19267
2. https://github.com/huggingface/transformers/issues/28499
3. https://github.com/pytorch/pytorch/issues/124788
4. https://github.com/huggingface/transformers/issues/32073
One of these suggested setting `use_reentrant: true`, but that doesn't resolve the issue for me.
I'm attempting to run this as a SageMaker training job using the official HuggingFace estimator (this amounts to the following command: `torchrun --nnodes 1 --nproc_per_node 8 train.py`. My training script is essentially a lightly adapted version of the official examples. Below is how I'm instantiating the HuggingFace estimator object:
```
huggingface_estimator = HuggingFace(
entry_point = 'train.py', # train script
#entry_point = 'launch.py', # train script
dependencies=['requirements.txt'],
source_dir = './',
instance_type = 'ml.p4d.24xlarge',
instance_count = 1,
max_run = 2*24*60*60,
base_job_name = job_name,
role = role,
volume_size = 1024,
transformers_version = '4.36.0',
pytorch_version = '2.1.0',
py_version = 'py310',
hyperparameters = {
"config_s3_uri": "s3://<foo>
},
#metric_definitions=metric_definitions,
disable_output_compression = True,
distribution={"torch_distributed": {"enabled": True}}, # enables torchrun
environment = {
"HUGGINGFACE_HUB_CACHE": "/tmp/.cache",
"HF_TOKEN": HfFolder.get_token(),
"ACCELERATE_USE_FSDP": "1", # enable FSDP
"FSDP_CPU_RAM_EFFICIENT_LOADING": "0", # enable CPU RAM efficient loading
"FSDP_AUTO_WRAP_POLICY": "TRANSFORMER_BASED_WRAP",
"FSDP_BACKWARD_PREFETCH": "BACKWARD_PRE",
"FSDP_STATE_DICT_TYPE": "FULL_STATE_DICT",
"NCCL_TIMEOUT": "3600", # 1 hour timeout
"NCCL_DEBUG": "WARN",
"NCCL_IB_TIMEOUT": "3600",
"NCCL_SOCKET_TIMEOUT": "3600",
"NCCL_ASYNC_ERROR_HANDLING": "1",
"NCCL_P2P_LEVEL": "NVL",
"CUDA_DEVICE_MAX_CONNECTIONS": "1",
"MAX_JOBS": "1",
"PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:512",
"TORCH_DISTRIBUTED_DEBUG": "DETAIL",
},
checkpoint_s3_uri=f's3://<foo>'
)
```
Below are some of the relevant parameters from my input config.
```
gradient_checkpointing: false
gradient_checkpointing_kwargs:
use_reentrant: true
attn_implementation: "flash_attention_2"
packing: false
bf16: "auto"
fsdp: "full_shard auto_wrap offload"
fsdp_config:
limit_all_gathers: true
backward_prefetch: "backward_pre"
forward_prefetch: "false"
use_orig_params: "false"
min_num_params: 0
activation_checkpointing: "true"
```
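For reference, here is a schematic sketch of how this YAML maps onto `TrainingArguments` (values are copied from the config above; `output_dir` is a placeholder):
```python
# Schematic mapping of the YAML config onto TrainingArguments; values are
# copied from the config above, `output_dir` is a placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",  # placeholder
    bf16=True,  # bf16: "auto" resolves to bf16 on A100
    gradient_checkpointing=False,
    gradient_checkpointing_kwargs={"use_reentrant": True},
    fsdp="full_shard auto_wrap offload",
    fsdp_config={
        "limit_all_gathers": True,
        "backward_prefetch": "backward_pre",
        "forward_prefetch": False,
        "use_orig_params": False,
        "min_num_params": 0,
        "activation_checkpointing": True,  # the setting that triggers the crash
    },
)
```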
*Full Stack Trace*
```
Traceback (most recent call last):
File "train.py", line 224, in <module>
main(cfg)
File "train.py", line 207, in main
main(cfg)
File "train.py", line 207, in main
trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
Traceback (most recent call last):
File "train.py", line 224, in <module>
main(cfg)main(cfg)
File "train.py", line 207, in main
trainer.train()trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
return inner_training_loop(return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step
self.accelerator.backward(loss, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward
Traceback (most recent call last):
File "/opt/ml/code/train.py", line 224, in <module>
main(cfg)
File "/opt/ml/code/train.py", line 207, in main
trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step
self.accelerator.backward(loss, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward
loss.backward(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook
frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 18:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
tensor at position 19:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)}
loss.backward(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
loss.backward(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
self.accelerator.backward(loss, **kwargs)self.accelerator.backward(loss, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward
File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook
frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match
loss.backward(**kwargs)loss.backward(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match
torch.autograd.backward(torch.autograd.backward(
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 18:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)}
tensor at position 19:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)}
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward passVariable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 18:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
tensor at position 19:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)}
frame.check_recomputed_tensors_match(gid)frame.check_recomputed_tensors_match(gid)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match
File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 18:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)}
tensor at position 19:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)}
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 18:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)}
tensor at position 19:
saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)}
recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)}
0%| | 0/100 [00:13<?, ?it/s]
[E ProcessGroupGloo.cpp:138] Rank 5 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[E ProcessGroupGloo.cpp:138] Rank 4 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[E ProcessGroupGloo.cpp:138] Rank 7 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 69 closing signal SIGTERM
[2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 73 closing signal SIGTERM
[2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 74 closing signal SIGTERM
[2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 76 closing signal SIGTERM
[2024-11-25 18:39:47,931] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 70) of binary: /opt/conda/bin/python
```
### Expected behavior
The expected behavior is for the SFTTrainer's `train()` method to run without errors. | bug | low | Critical |
2,692,039,471 | opencv | cap_aravis: support specifying device by GUID/name | ### Describe the feature and motivation
Currently, when using Aravis, you have to specify the ID of the device:
https://github.com/opencv/opencv/blob/65d4112fa52511cd609a5c794a263bee9b5a8d43/modules/videoio/src/cap_aravis.cpp#L206-L225
It'd be much more usable if it were possible to specify the GUID (aka device name) and pass that on to `arv_camera_new`.
A workaround might be to manually use Aravis to find the ID of the device (by looping over the device list and filtering for the GUID) and then pass that to OpenCV (which then loops over the same list again, filtering by ID to get the GUID...). A sketch of this is shown below.
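A minimal sketch of that workaround, assuming the Aravis 0.8 GObject-introspection bindings and an Aravis-enabled OpenCV build (untested; the function names mirror the C API `arv_update_device_list` / `arv_get_n_devices` / `arv_get_device_id`):
```python
# Hypothetical workaround: resolve the device index from its GUID via the
# Aravis GI bindings, then hand the index to OpenCV (untested sketch).
import cv2
import gi
gi.require_version("Aravis", "0.8")
from gi.repository import Aravis

def open_by_guid(guid: str) -> cv2.VideoCapture:
    Aravis.update_device_list()
    for i in range(Aravis.get_n_devices()):
        if Aravis.get_device_id(i) == guid:
            return cv2.VideoCapture(i, cv2.CAP_ARAVIS)
    raise RuntimeError(f"no Aravis device with GUID {guid!r}")
```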
### Additional context
other Aravis-based tools always use the GUID, e.g. [`camera_aravis2` in ROS 2](https://github.com/FraunhoferIOSB/camera_aravis2/blob/468637ef1b76172ca69c3c47b09778829a3ccbda/camera_aravis2/src/camera_aravis_node_base.cpp#L133) | feature,category: videoio(camera) | low | Minor |
2,692,044,837 | pytorch | [AudioLM] Inductor `LoweringException: NotImplementedError: View` | ### ๐ Describe the bug
This is a carryover from https://github.com/pytorch/pytorch/issues/121345#issuecomment-2492686541. The model no longer fails in Dynamo on latest main, but fails in Inductor.
Repro code: https://gist.github.com/ezyang/64c24c9fc5529f3afed4ee4266f6adc5
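For triage, here is an untested sketch distilling the pattern the trace points at (soundstream's complex-valued STFT discriminator: complex conv parameters stored as real views, compiled with Inductor on CPU). Shapes are chosen to match the trace, but this is not verified to reproduce the crash:
```python
# Untested distillation of the failing pattern: complex conv with parameters
# stored as real views, fed by an STFT, under torch.compile. Shapes mirror
# the trace below ([1, 1, 513, 41] spectrogram, [32, 1, 7, 7] weight).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplexConv2d(nn.Module):
    def __init__(self, dim_in, dim_out, kernel_size, padding):
        super().__init__()
        conv = nn.Conv2d(dim_in, dim_out, kernel_size, dtype=torch.complex64)
        self.weight = nn.Parameter(torch.view_as_real(conv.weight))
        self.bias = nn.Parameter(torch.view_as_real(conv.bias))
        self.padding = padding
    def forward(self, x):
        weight, bias = map(torch.view_as_complex, (self.weight, self.bias))
        return F.conv2d(x.to(weight.dtype), weight, bias, padding=self.padding)

def disc(wave):
    spec = torch.stft(wave, n_fft=1024, hop_length=256,
                      window=torch.hann_window(1024), return_complex=True)
    return ComplexConv2d(1, 32, 7, padding=3)(spec.unsqueeze(1))

print(torch.compile(disc)(torch.randn(1, 10240)).shape)
```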
### Error logs
```
2024-11-25 11:49:55 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
training with dataset of 2 samples and validating with randomly splitted 1 samples
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] Graph break from `Tensor.item()`, consider setting:
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] torch._dynamo.config.capture_scalar_outputs = True
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] or:
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] to include these operations in the captured graph.
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0]
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] Graph break: from user code at:
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] File "/home/ryanguo99/.conda/envs/audiolm/lib/python3.10/site-packages/audiolm_pytorch/soundstream.py", line 840, in forward
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] x, indices, commit_loss = self.rq(x)
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] File "/home/ryanguo99/.conda/envs/audiolm/lib/python3.10/site-packages/vector_quantize_pytorch/residual_vq.py", line 498, in forward
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] rand_quantize_dropout_fixed_seed = get_maybe_sync_seed(device) if self.training else None
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] File "/home/ryanguo99/.conda/envs/audiolm/lib/python3.10/site-packages/vector_quantize_pytorch/residual_vq.py", line 50, in get_maybe_sync_seed
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0] return rand_int.item()
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0]
W1125 11:50:00.561000 1134905 torch/_dynamo/variables/tensor.py:869] [0/0]
W1125 11:50:30.329000 1134905 torch/_dynamo/convert_frame.py:915] [23/8] torch._dynamo hit config.cache_size_limit (8)
W1125 11:50:30.329000 1134905 torch/_dynamo/convert_frame.py:915] [23/8] function: 'forward' (/home/ryanguo99/pt/torchaudio/src/torchaudio/transforms/_transforms.py:100)
W1125 11:50:30.329000 1134905 torch/_dynamo/convert_frame.py:915] [23/8] last reason: 23/0: tensor 'L['self']._buffers['window']' size mismatch at index 0. expected 64, actual 1024
W1125 11:50:30.329000 1134905 torch/_dynamo/convert_frame.py:915] [23/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W1125 11:50:30.329000 1134905 torch/_dynamo/convert_frame.py:915] [23/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
Traceback (most recent call last):
File "/home/ryanguo99/pt/scratch/compile.py", line 139, in <module>
trainer.train()
File "/home/ryanguo99/.conda/envs/audiolm/lib/python3.10/site-packages/audiolm_pytorch/trainer.py", line 710, in train
logs = self.train_step()
File "/home/ryanguo99/.conda/envs/audiolm/lib/python3.10/site-packages/audiolm_pytorch/trainer.py", line 578, in train_step
loss, (recon_loss, multi_spectral_recon_loss, adversarial_loss, feature_loss, all_commitment_loss) = self.soundstream(wave, return_loss_breakdown = True)
File "/home/ryanguo99/pt/pytorch-310/torch/nn/modules/module.py", line 1738, in _wrapped_call_impl
return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/eval_frame.py", line 569, in _fn
return fn(*args, **kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ryanguo99/.conda/envs/audiolm/lib/python3.10/site-packages/audiolm_pytorch/soundstream.py", line 840, in forward
x, indices, commit_loss = self.rq(x)
File "/home/ryanguo99/.conda/envs/audiolm/lib/python3.10/site-packages/audiolm_pytorch/soundstream.py", line 956, in torch_dynamo_resume_in_forward_at_840
(stft_real_logits, stft_real_intermediates), (stft_fake_logits, stft_fake_intermediates) = map(partial(self.stft_discriminator, return_intermediates=True), (real, fake))
File "/home/ryanguo99/pt/pytorch-310/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 1414, in __call__
return self._torchdynamo_orig_callable(
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 1198, in __call__
result = self._inner_convert(
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 550, in __call__
return _compile(
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 995, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 717, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/ryanguo99/pt/pytorch-310/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 752, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/bytecode_transformation.py", line 1349, in transform_code_object
transformations(instructions, code_options)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 234, in _fn
return fn(*args, **kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/convert_frame.py", line 665, in transform
tracer.run()
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/symbolic_convert.py", line 2865, in run
super().run()
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/symbolic_convert.py", line 3045, in RETURN_VALUE
self._return(inst)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/symbolic_convert.py", line 3030, in _return
self.output.compile_subgraph(
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/output_graph.py", line 1117, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/output_graph.py", line 1358, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/output_graph.py", line 1408, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/output_graph.py", line 1459, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/output_graph.py", line 1438, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/ryanguo99/pt/pytorch-310/torch/__init__.py", line 2301, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/compile_fx.py", line 1735, in compile_fx
return aot_autograd(
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/backends/common.py", line 73, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/_functorch/aot_autograd.py", line 1103, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/ryanguo99/pt/pytorch-310/torch/_functorch/aot_autograd.py", line 1079, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/ryanguo99/pt/pytorch-310/torch/_functorch/aot_autograd.py", line 527, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/ryanguo99/pt/pytorch-310/torch/_functorch/aot_autograd.py", line 778, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/ryanguo99/pt/pytorch-310/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 655, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/compile_fx.py", line 1548, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/compile_fx.py", line 1617, in _fw_compiler_base
return inner_compile(
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/compile_fx.py", line 600, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/ryanguo99/pt/pytorch-310/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/compile_fx.py", line 757, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/codecache.py", line 1588, in load
compiled_graph = compile_fx_fn(
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/compile_fx.py", line 664, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/compile_fx.py", line 956, in fx_codegen_and_compile
graph.run(*example_inputs)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/graph.py", line 828, in run
return super().run(*args)
File "/home/ryanguo99/pt/pytorch-310/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/graph.py", line 1450, in run_node
result = super().run_node(n)
File "/home/ryanguo99/pt/pytorch-310/torch/fx/interpreter.py", line 228, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/graph.py", line 1097, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/graph.py", line 1087, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/lowering.py", line 401, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/kernel/conv.py", line 581, in convolution
bias.freeze_layout()
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/ir.py", line 6808, in freeze_layout
return self.data.freeze_layout()
File "/home/ryanguo99/pt/pytorch-310/torch/_inductor/ir.py", line 539, in freeze_layout
raise NotImplementedError(type(self).__name__)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: NotImplementedError: View
target: aten.convolution.default
args[0]: TensorBox(
ReinterpretView(
StorageBox(
MultiOutput(
python_kernel_name=None,
name=buf14,
layout=FixedLayout('cpu', torch.float32, size=[1, 1, 513, 41, 2], stride=[42066, 1026, 2, 1026, 1]),
inputs=[FallbackKernel(
python_kernel_name='torch.ops.aten.view_as_real.default',
name=buf13,
layout=MultiOutputLayout(device=device(type='cpu')),
inputs=[MultiOutput(
python_kernel_name=None,
name=buf8,
layout=FixedLayout('cpu', torch.complex64, size=[1, 1, 513, 41], stride=[21033, 513, 1, 513]),
inputs=[FallbackKernel(
python_kernel_name='torch.ops.aten.reshape.default',
name=buf7,
layout=MultiOutputLayout(device=device(type='cpu')),
inputs=[MultiOutput(
python_kernel_name=None,
name=buf6,
layout=FixedLayout('cpu', torch.complex64, size=[1, 513, 41], stride=[21033, 1, 513]),
inputs=[FallbackKernel(
python_kernel_name='torch.ops.aten.permute.default',
name=buf5,
layout=MultiOutputLayout(device=device(type='cpu')),
inputs=[MultiOutput(
python_kernel_name=None,
name=buf4,
layout=FixedLayout('cpu', torch.complex64, size=[1, 41, 513], stride=[21033, 513, 1]),
inputs=[FallbackKernel(
python_kernel_name='torch.ops.aten._fft_r2c.default',
name=buf3,
layout=MultiOutputLayout(device=device(type='cpu')),
inputs=[ComputedBuffer(name='buf2', layout=FixedLayout('cpu', torch.float32, size=[1, 41, 1024], stride=[41984, 1024, 1]), data=Pointwise(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x7f927f995870>, ranges=[1, 41, 1024]))],
constant_args=(2, 0, True),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten._fft_r2c.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=[],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=permute,
origins=OrderedSet([permute])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=permute,
origins=OrderedSet([permute])
)],
constant_args=(1, 1, 513, 41),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten.reshape.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=[],
op_overload=aten.reshape.default,
arg_properties=[{'name': 'self', 'type': Tensor, 'default_value': None}, {'name': 'shape', 'type': List[int], 'default_value': None}],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=view_3,
origins=OrderedSet([view_3])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=view_3,
origins=OrderedSet([view_3])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten.view_as_real.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=[],
op_overload=aten.view_as_real.default,
arg_properties=[{'name': 'self', 'type': Tensor, 'default_value': None}],
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten.view_as_complex.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=[],
op_overload=aten.view_as_complex.default,
arg_properties=[{'name': 'self', 'type': Tensor, 'default_value': None}],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=view_as_complex,
origins=OrderedSet([view_as_complex])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=view_as_complex,
origins=OrderedSet([view_as_complex])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten.view_as_real.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=[],
op_overload=aten.view_as_real.default,
arg_properties=[{'name': 'self', 'type': Tensor, 'default_value': None}],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=view_as_real_1,
origins=OrderedSet([view_as_real_1])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=view_as_real_1,
origins=OrderedSet([view_as_real_1])
)
),
FixedLayout('cpu', torch.float32, size=[32, 1, 7, 7], stride=
unbacked_bindings={},
mutation_outputs=[],
origin_node=view_as_complex_1,
origins=OrderedSet([view_as_complex_1])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.aten.view_as_real.default,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=[],
op_overload=aten.view_as_real.default,
arg_properties=[{'name': 'self', 'type': Tensor, 'default_value': None}],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=view_as_real_2,
origins=OrderedSet([view_as_real_2])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=view_as_real_2,
origins=OrderedSet([view_as_real_2])
)
),
FixedLayout('cpu', torch.float32, size=[32, 1], stride=[2, 1]),
origins=OrderedSet([select_4])
),
size=[32],
reindex=lambda i0: [i0, 0],
origins=OrderedSet([select_4])
)
)
args[3]: [1, 1]
args[4]: [3, 3]
args[5]: [1, 1]
args[6]: False
args[7]: [0, 0]
args[8]: 1
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Python 3.10, da94ab0, CUDA 12.4
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,692,050,386 | vscode | icon is not colored correctly for failing task terminal | 
| bug,tasks,terminal-tabs | low | Minor |
2,692,059,434 | opencv | cap_aravis: no image when not explicitly specifying `cv::CAP_PROP_FOURCC` | ### System Information
OpenCV version: 4.10.0 (with `HAVE_ARAVIS`!)
OS: Ubuntu 24.04 (in WSL 2)
Compiler: CLang 18.1.8
camera & driver from [get cameras / VAimaging / Daheng Imaging](https://www.get-cameras.com/)
### Detailed description
When creating a new Aravis `cv::VideoCapture`, the image will be empty unless `cv::CAP_PROP_FOURCC` is specified (and _just the right kind_).
this is because `CvCaptureCAM_Aravis::retrieveFrame` only runs if one of four pixel formats is set:
https://github.com/opencv/opencv/blob/65d4112fa52511cd609a5c794a263bee9b5a8d43/modules/videoio/src/cap_aravis.cpp#L321-L334
and it seems that this setting is not kept across camera restarts (I had the camera working just fine using the manufacturer-provided GUI).
### Steps to reproduce
prerequisites:
* USB3 VISION camera or GigE VISION camera (admittedly not something you might just have lying around at home; note that these are *not* normal USB webcams)
* [aravis](https://github.com/AravisProject/aravis) installed
* camera-specific drivers installed (needed so that aravis & co. can see the genIcam API of the USB3/GigE VISION cameras)
1. compile OpenCV with `HAVE_ARAVIS`
2. run the following code:
```cpp
#include <iostream>
#include <opencv2/opencv.hpp>
int main() {
cv::VideoCapture cap{0, cv::CAP_ARAVIS, {
cv::CAP_PROP_ARAVIS_AUTOTRIGGER, 1,
cv::CAP_PROP_AUTO_EXPOSURE, 1,
// cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('G','R','E','Y')
}};
if (!cap.isOpened()) {
std::cerr << "failed to open the camera!" << std::endl;
return 1;
}
uint8_t retry = 0;
while (true) {
cv::Mat frame;
cap >> frame;
if (frame.empty()) {
++retry;
if (retry > 10) {
std::cerr << "failed to get a frame 10 consecutive times!" << std::endl;
return 1;
}
continue;
} else {
retry = 0;
}
cv::imshow("Frame | press q to quit", frame);
auto key = cv::waitKey(1);
switch (key) {
case 'f':
flip = !flip;
break;
case 'q':
return 0;
default:
break;
}
}
}
```
With `cv::CAP_PROP_FOURCC` commented out it will fail since all frames are empty. Once you uncomment that line it'll start working.
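For completeness, an equivalent (untested) sketch through the Python bindings, assuming the same Aravis-enabled build:
```python
# Untested Python equivalent: frames stay empty until CAP_PROP_FOURCC is set
# to one of the four pixel formats retrieveFrame() accepts (e.g. GREY).
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_ARAVIS, [
    cv2.CAP_PROP_ARAVIS_AUTOTRIGGER, 1,
    cv2.CAP_PROP_AUTO_EXPOSURE, 1,
    cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"GREY"),  # required
])
ok, frame = cap.read()
print(ok, None if frame is None else frame.shape)
```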
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: videoio | low | Critical |
2,692,187,492 | storybook | [Bug]: Import Absolute Path in MDX files using Blocks | ### Describe the bug
I want to use an absolute path alias instead of a relative path in the MDX file for the documentation.
I have this `button.mdx` file:
```mdx
import { Meta, Canvas } from '@storybook/blocks'
import * as ButtonStories from './button.stories'
import * as DropdownButtonStories from '../../dropdown-button/dropdown-button.stories'
import { Typography } from '../../../components'
<Meta of={ButtonStories} />
....
<Typography variant="heading" fontSize={300} fontWeight={400}>
Related Components
</Typography>
<Canvas
of={DropdownButtonStories.Default}
/>
```
I would like to know if it is possible to use it like this:
```mdx
import { Meta, Canvas } from '@storybook/blocks'
import * as ButtonStories from './button.stories'
import * as DropdownButtonStories from 'components/dropdown-button/dropdown-button.stories'
import { Typography } from 'components'
<Meta of={ButtonStories} />
....
<Typography variant="heading" fontSize={300} fontWeight={400}>
Related Components
</Typography>
<Canvas
of={DropdownButtonStories.Default}
/>
```
I tried modifying the main.js file, but it doesn't work:
```javascript
async viteFinal(config) {
return mergeConfig(config, {
resolve: {
alias: {
'components': resolve(__dirname, '../src/components'),
},
},
plugins: [tsconfigPaths()],
})
},
```
The following error occurs:

### Reproduction link
https://stackblitz.com/edit/github-xgzeyx?file=vite.config.ts
### Reproduction steps
_No response_
### System
```bash
I'm using Storybook v8.4.2
```
### Additional context
_No response_ | bug,mdx | low | Critical |
2,692,208,382 | terminal | x-markdown - right click Select All crashes | ### Steps to reproduce
Load up a markdown document in an `x-markdown` pane. Right click the body text and Select All.
### Actual Behavior
It blows up. | Issue-Bug,Area-UserInterface,Severity-Crash,Product-Terminal | low | Critical |
2,692,253,611 | TypeScript | Dynamically importing JSON should require import attribute with node16/nodenext | ### ๐ Search Terms
dynamic import, import attribute, resolveJsonModule,
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about [TS 5.7's new checks around importing JSON](https://devblogs.microsoft.com/typescript/announcing-typescript-5-7/)
### โฏ Playground Link
https://github.com/kirkwaiblinger/repro-TS-dynamic-json-import-validation
### ๐ป Code
```ts
// module: node16/nodenext, resolveJsonModule
// .cts or .mts, it doesn't matter.
async function main() {
const somethingDynamic = await import('./someThing.json');
console.log('dynamically imported JSON:', somethingDynamic);
}
main();
export {}
```
### ๐ Actual behavior
No error, even though this is a runtime error in nodejs.
### ๐ Expected behavior
Error because dynamic [`import('./someThing.json')` requires import attribute](https://nodejs.org/api/esm.html#json-modules).
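For reference, a sketch of the runtime-valid form in Node that I would expect TypeScript to require here (the `with` attribute syntax is standard Node; the `.default` access follows Node's JSON-module shape):
```ts
async function main() {
  // Node requires the import attribute for JSON modules at runtime:
  const somethingDynamic = await import('./someThing.json', {
    with: { type: 'json' },
  });
  console.log('dynamically imported JSON:', somethingDynamic.default);
}
main();
```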
### Additional information about the issue
Unlike the static import case, this is the case for _both_ commonjs and esm outputs, since [`import()` is available in commonjs modules in node](https://nodejs.org/api/esm.html#import-expressions) and has the same semantics there as the ESM import (and therefore TS doesn't transform the `import()` statement to a `Promise.resolve(require())`). | Needs Investigation,Fix Available | low | Critical |
2,692,267,531 | vscode | terminal mac os web smoke tests failing | There seem to be a few consistent smoke test failures:
see build: https://dev.azure.com/monacotools/Monaco/_build/results?buildId=306954&view=results and https://dev.azure.com/monacotools/Monaco/_build/results?buildId=306866&view=results
```
1) VSCode Smoke Tests (Web)
Terminal
Terminal Input
Auto replies
should automatically reply to a custom entry:
Error: Timeout: get terminal buffer '#terminal .terminal-wrapper' after 20 seconds.
at Code.poll (/Users/runner/work/1/s/test/automation/out/code.js:205:23)
at async Code.waitForTerminalBuffer (/Users/runner/work/1/s/test/automation/out/code.js:176:9)
at async Terminal.waitForTerminalText (/Users/runner/work/1/s/test/automation/out/terminal.js:262:13)
at async Context.<anonymous> (out/areas/terminal/terminal-input.test.js:38:17)
```
see build: https://dev.azure.com/monacotools/Monaco/_build/results?buildId=306898&view=results and https://dev.azure.com/monacotools/Monaco/_build/results?buildId=306921&view=results and some more
Anywhere from 1 to 14 tests fail like below:
```
1) VSCode Smoke Tests (Web)
Terminal
Terminal Shell Integration
Process-based tests
Decorations
terminal.integrated.shellIntegration.decorationsEnabled should determine gutter and overview ruler decoration visibility
"before each" hook for "never":
Error: Timeout: is active element ' .monaco-editor[data-uri$="settings.json"] .native-edit-context' after 20 seconds.
at Code.poll (/Users/runner/work/1/s/test/automation/out/code.js:205:23)
at async Code.waitForActiveElement (/Users/runner/work/1/s/test/automation/out/code.js:167:9)
at async Editor.waitForEditorFocus (/Users/runner/work/1/s/test/automation/out/editor.js:68:9)
at async SettingsEditor.openUserSettingsFile (/Users/runner/work/1/s/test/automation/out/settings.js:51:9)
at async SettingsEditor.clearUserSettings (/Users/runner/work/1/s/test/automation/out/settings.js:42:9)
at async Context.<anonymous> (out/areas/terminal/terminal-shellIntegration.test.js:54:25)
2) VSCode Smoke Tests (Web)
Terminal
Terminal Shell Integration
Process-based tests
"after all" hook in "Process-based tests":
Error: Timeout: is active element ' .monaco-editor[data-uri$="settings.json"] .native-edit-context' after 20 seconds.
at Code.poll (/Users/runner/work/1/s/test/automation/out/code.js:205:23)
at async Code.waitForActiveElement (/Users/runner/work/1/s/test/automation/out/code.js:167:9)
at async Editor.waitForEditorFocus (/Users/runner/work/1/s/test/automation/out/editor.js:68:9)
at async SettingsEditor.openUserSettingsFile (/Users/runner/work/1/s/test/automation/out/settings.js:51:9)
at async SettingsEditor.clearUserSettings (/Users/runner/work/1/s/test/automation/out/settings.js:42:9)
at async Context.<anonymous> (out/areas/terminal/terminal-shellIntegration.test.js:33:17)
3) VSCode Smoke Tests (Web)
Terminal
Terminal Shell Integration
Write data-based tests
"before all" hook in "Write data-based tests":
Error: Timeout: is active element ' .monaco-editor[data-uri$="settings.json"] .native-edit-context' after 20 seconds.
at Code.poll (/Users/runner/work/1/s/test/automation/out/code.js:205:23)
at async Code.waitForActiveElement (/Users/runner/work/1/s/test/automation/out/code.js:167:9)
at async Editor.waitForEditorFocus (/Users/runner/work/1/s/test/automation/out/editor.js:68:9)
at async SettingsEditor.openUserSettingsFile (/Users/runner/work/1/s/test/automation/out/settings.js:51:9)
at async SettingsEditor.addUserSettings (/Users/runner/work/1/s/test/automation/out/settings.js:36:9)
at async setTerminalTestSettings (out/areas/terminal/terminal-helpers.js:9:5)
at async Context.<anonymous> (out/areas/terminal/terminal-shellIntegration.test.js:89:17)
4) VSCode Smoke Tests (Web)
Terminal
Terminal Shell Integration
Write data-based tests
"after all" hook in "Write data-based tests":
Error: Timeout: is active element ' .monaco-editor[data-uri$="settings.json"] .native-edit-context' after 20 seconds.
at Code.poll (/Users/runner/work/1/s/test/automation/out/code.js:205:23)
at async Code.waitForActiveElement (/Users/runner/work/1/s/test/automation/out/code.js:167:9)
at async Editor.waitForEditorFocus (/Users/runner/work/1/s/test/automation/out/editor.js:68:9)
at async SettingsEditor.openUserSettingsFile (/Users/runner/work/1/s/test/automation/out/settings.js:51:9)
```
cc. @lszomoru
| smoke-test-failure | low | Critical |
2,692,282,149 | react-native | [iOS] Some weird animation happening on inputs and buttons when switching screens | ### Description
When switching between screens using simple unmount/mount logic like this:
```
{activeTab === 1 && <TabOne />}
{activeTab === 2 && <TabTwo />}
```
it seems like React Native is trying to "reuse" some inputs or components in some way.
**Result:** This causes strange animations or transitions between screens (and screen components like TextView or Button)
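For context, a minimal self-contained sketch of the component structure described above (component names and contents are assumptions; the Snack linked below is the authoritative repro):
```jsx
import React, { useState } from 'react';
import { Button, TextInput, View } from 'react-native';

// Two trivially different screens, each with its own input and button.
const TabOne = () => (
  <View>
    <TextInput placeholder="Tab one input" />
    <Button title="Tab one button" onPress={() => {}} />
  </View>
);

const TabTwo = () => (
  <View>
    <TextInput placeholder="Tab two input" />
    <Button title="Tab two button" onPress={() => {}} />
  </View>
);

export default function App() {
  const [activeTab, setActiveTab] = useState(1);
  return (
    <View>
      <Button title="TAB ONE" onPress={() => setActiveTab(1)} />
      <Button title="TAB TWO" onPress={() => setActiveTab(2)} />
      {activeTab === 1 && <TabOne />}
      {activeTab === 2 && <TabTwo />}
    </View>
  );
}
```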
### Steps to reproduce
1. Run the provided example code (Expo Snack)
2. Click between "TAB ONE" and "TAB TWO"
### React Native Version
0.76.3
### Affected Platforms
Runtime - iOS
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
I'm not entirely sure how to run `npx react-native info` on Expo Snack, so I downloaded the .zip from Expo Snack and ran this command...
System:
OS: macOS 15.1.1
CPU: (12) arm64 Apple M4 Pro
Memory: 131.67 MB / 24.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.16.0
path: ~/.nvm/versions/node/v20.16.0/bin/node
Yarn: Not Found
npm:
version: 10.8.1
path: ~/.nvm/versions/node/v20.16.0/bin/npm
Watchman:
version: 2024.11.18.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.2
wanted: latest
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
No errors in the console
```
### Reproducer
https://snack.expo.dev/@mbackonja/animation-issue
### Screenshots and Videos
https://github.com/user-attachments/assets/195804c8-e187-4221-9df2-3ae8f9838e9e
| Platform: iOS,Component: Button,Component: Switch,Needs: Author Feedback,Type: New Architecture | low | Critical |
2,692,336,769 | material-ui | [Tooltip] Stays visible when scrolling away | ### Steps to reproduce
Steps:
1. [Open this link to live example](https://stackblitz.com/edit/react-44fyjx?file=LastName.js)
2. Hover over one of the tooltips in the Last Name column
3. Scroll away (without moving the mouse)
4. You'll notice the tooltip stays open
### Current behavior
The tooltip stays open on scroll.
### Expected behavior
The tooltip should close when scrolled away from.
### Context
We are using a tooltip inside of a data grid. When a user hovers over the tooltip, an API call fires to populate its contents; this call takes milliseconds to complete. When a user hovers over the tooltip and scrolls in the DataGrid, the tooltip stays open even though the user's cursor is no longer over it.
Order ID: 80452
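A sketch of a possible workaround using a controlled tooltip that closes on any scroll (`open`/`onOpen`/`onClose` are the real `Tooltip` props; the capture-phase listener is an assumption needed so scrolls inside the DataGrid are also caught):
```jsx
import * as React from 'react';
import Tooltip from '@mui/material/Tooltip';

function ClosingOnScrollTooltip({ title, children }) {
  const [open, setOpen] = React.useState(false);

  React.useEffect(() => {
    // Capture phase so scroll events from nested containers (e.g. the DataGrid) are seen too.
    const close = () => setOpen(false);
    window.addEventListener('scroll', close, true);
    return () => window.removeEventListener('scroll', close, true);
  }, []);

  return (
    <Tooltip title={title} open={open} onOpen={() => setOpen(true)} onClose={() => setOpen(false)}>
      {children}
    </Tooltip>
  );
}
```
It would then be used like a plain `Tooltip`, e.g. `<ClosingOnScrollTooltip title={content}>{cellContent}</ClosingOnScrollTooltip>`.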
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: Tooltip, stays open, tooltip hover | bug ๐,component: tooltip,package: material-ui | low | Major |
2,692,353,149 | flutter | Rebuild Autocomplete options at arbitrary time | ### Use case
When the user finishes typing in the field, an app using Autocomplete might make two network requests: one to quickly fetch some approximate options and another, longer request to get more detailed results. Autocomplete should be able to update the options with the result of each request. Currently, the optionsViewBuilder is only called in response to the completion of the future returned by optionsBuilder.
### Proposal
The hacky solution is probably to introduce a controller with a method to imperatively update the options: `AutocompleteController.setOptions`.
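A hypothetical usage sketch of that controller (neither `AutocompleteController` nor `setOptions` exists in Flutter today; the fetch helpers are placeholders):
```dart
// Hypothetical API; nothing here exists in Flutter today.
final controller = AutocompleteController<Person>();

Future<void> onQueryChanged(String query) async {
  // Quick, approximate options first...
  controller.setOptions(await fetchQuickOptions(query));
  // ...then replace them with the detailed results when they arrive.
  controller.setOptions(await fetchDetailedOptions(query));
}
```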
A better solution might require a breaking change. Really it seems like maybe the options should just be passed into Autocomplete as a parameter? That would also require the ability to listen to changes in the field.
### Related
There are other request to be able to display things in the options view at certain times, for example while loading: https://github.com/flutter/flutter/issues/147377. Solving this issue would probably also fix that. | a: text input,P2,team-text-input,triaged-text-input | low | Minor |
2,692,373,221 | flutter | Implement WidgetStateProperty Styling for Tab and TabBar Widgets | ### Use case
In a TabBar, you can control certain visual aspects of the tabs through overlayColor, unselectedLabelStyle, and labelStyle. However, this is very limiting in what you can do, and it also doesn't consider other states such as Focused or Disabled.
### Proposal
My proposal would be to update Tab and TabBar widgets to implement WidgetStateProperty stylings (like FilledButton and many other widgets). This would both allow for a reduced set of properties as well as flexibility in customization for visual states. | c: new feature,framework,f: material design,c: proposal,P3,customer: castaway,team-design,triaged-design | low | Minor |
2,692,378,488 | TypeScript | rewriteRelativeImportExtensions does not rewrite the extension for an import expressions unless the file is detected to be a module | ### ๐ Search Terms
rewriteRelativeImportExtensions, import expression
### ๐ Version & Regression Information
- This changed in version 5.7
### โฏ Playground Link
https://www.typescriptlang.org/play/?rewriteRelativeImportExtensions=true#code/PTAEEsFsAcHsCcAuoDkA6YAzWs2IM4oDcAsAFBRxIAU6WOehAlOUA
### ๐ป Code
Test One
```ts
import('./foo.ts')
```
Test Two
```ts
import './foo.ts';
import('./foo.ts')
```
### ๐ Actual behavior
JS Output
Test One
```ts
import('./foo.ts') // โ
```
Test Two
```ts
import './foo.js'; // โ
import('./foo.js') // โ
```
### ๐ Expected behavior
JS Output
Test One
```ts
import('./foo.js') // โ
```
Test Two
```ts
import './foo.js'; // โ
import('./foo.js') // โ
```
### Additional information about the issue
_No response_ | Needs Investigation | low | Major |
2,692,394,084 | kubernetes | Add configuration property to delete cronjobs if it succeeds after retry | ### What would you like to be added?
Add a boolean cronjob property `deleteFailedJobsOnSuccess` that, when set to `true`, will effectively delete the failed jobs history of that cronjob when it succeeds based on retries dependent on `backoffLimit`.
So, e.g., with `backoffLimit: 2`, if the first run fails but the second succeeds, the first (failed) job should not be kept in the history.
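A sketch of where the proposed field would live in the spec (`deleteFailedJobsOnSuccess` does not exist today; the job name, image, and command are placeholders, everything else is the standard CronJob API):
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-sync
spec:
  schedule: "0 2 * * *"
  deleteFailedJobsOnSuccess: true   # proposed field
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: sync
              image: busybox
              command: ["sh", "-c", "run-sync"]
```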
### Why is this needed?
Sometimes cronjobs fail because of temporary issues, e.g. a database being unreachable for a short amount of time; the cronjob then succeeds when it automatically runs again, but the failed job is still in the job history.
Setting `failedJobsHistoryLimit: 0` is not a solution as jobs are also not kept when the cronjob fails all the time.
Setting `ttlSecondsAfterFinished` (if possible for cronjobs) is also not a solution: the time is highly dependent on the schedule and would have to be calculated to keep a failed job around until the next scheduled run. Also, the kept failed job might still be unnecessary, since the cronjob could have succeeded after a direct retry. | kind/feature,sig/apps,needs-triage | low | Critical |
2,692,394,295 | excalidraw | Arrow navigation in canvas search moves text cursor | https://github.com/user-attachments/assets/a150489a-fca7-4b10-af40-a33cbe0b2228
| bug | low | Minor |
2,692,404,262 | TypeScript | Javascript function type inference breaks across a property accessor but not a function call | Type: <b>Bug</b>
I am writing a TypeScript declaration file `declarations.d.ts` that describes some interfaces and a global object. The declarations will be used to assist JavaScript developers who are writing scripts for an embedded V8 engine that offers global objects like the one being described.
There is a global variable `Global` of interface type `SomeGlobal`, which in turn has a property `someType` of interface type `SomeType`, and finally `SomeType` has a `callback` property of type function like `(other:OtherType) => void`. Here is the entire file: `declarations.d.ts`
``` typescript
interface OtherType
{
stringProperty: string;
}
interface SomeType
{
callback: (other:OtherType) => void;
}
interface SomeGlobal
{
someType: SomeType;
getSomeType(): SomeType;
}
declare var Global: SomeGlobal;
// This is not part of the declarations, it's here to show you that the type of `o` is properly inferred through the property accessor `someType`
Global.someType.callback = (o) => {o.stringProperty}
```
and the `jsconfig.json` being used to ensure that code in main.js can see these types:
``` json
{
"compilerOptions": {
"target": "es6",
"lib": ["es6"],
"strict": true
},
"include": [
"*.js",
"declarations.d.ts"
]
}
```
From here on, I am working in a `main.js` that looks like this:
``` javascript
Global.someType.callback = (other) => {};
Global.getSomeType().callback = (other) => {};
```
In VSCode when you attempt to assign the `callback` property of `SomeType` and open the parentheses to start building the arrow function like so:
``` javascript
Global.someType.callback = (
```
I would expect to see autocomplete help like `callback(other: OtherType): void`, prompting me to complete the statement like:
``` javascript
Global.someType.callback = (other) => {/* do something*/};
```
But this is not what happens. If I hover over `callback`, VS Code will display the appropriate signature for the property, `(property) SomeType.callback: (other: OtherType) => void`, but inside the arrow function's parameter list or body, the `other` param just shows up as type `any`.
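As a workaround sketch (an explicit JSDoc annotation so contextual inference isn't needed; this doesn't explain the inference difference, it just sidesteps it):
```js
/**
 * Workaround: annotate the handler explicitly.
 * SomeType['callback'] is a TypeScript indexed-access type, which JSDoc @type supports.
 * @type {SomeType['callback']}
 */
const onCallback = (other) => {
  console.log(other.stringProperty); // `other` is correctly typed as OtherType here
};
Global.someType.callback = onCallback;
```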
What's interesting is that if I go through a function call instead of a property:
``` javascript
Global.getSomeType().callback = (
```
Then there is correct function type inference for `callback` and I get the intellisense autocomplete you'd expect.
Am I just missing something silly here, or have I found a bug?
I have attached the 3 demo files for you to inspect. Any help would be most appreciated!
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22635
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz (20 x 3312)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.68GB (33.97GB free)|
|Process Argv|. --crash-reporter-id 4c3d0139-457c-4b56-b000-09d11a0fb061|
|Screen Reader|no|
|VM|40%|
</details><details><summary>Extensions (57)</summary>
Extension|Author (truncated)|Version
---|---|---
terraform|4op|0.2.5
vscode-base64|ada|0.1.0
github-markdown-preview|bie|0.3.0
markdown-checkbox|bie|0.4.0
markdown-emoji|bie|0.3.0
markdown-footnotes|bie|0.1.1
markdown-mermaid|bie|1.27.0
markdown-preview-github-styles|bie|2.1.0
markdown-yaml-preamble|bie|0.1.0
vscode-markdownlint|Dav|0.57.0
vscode-eslint|dba|3.0.10
gitlens|eam|16.0.4
go|gol|0.42.1
terraform|has|2.34.0
vscode-drawio|hed|1.6.6
vscode-ansi|ili|1.1.7
svg|joc|1.5.4
compare-folders|mos|0.25.1
vscode-docker|ms-|1.29.3
csdevkit|ms-|1.13.9
csharp|ms-|2.55.29
vscode-dotnet-runtime|ms-|2.2.3
vscode-kubernetes-tools|ms-|1.3.18
debugpy|ms-|2024.12.0
isort|ms-|2023.10.1
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.3
jupyter|ms-|2024.10.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.388.0
remote-ssh|ms-|0.115.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
azurecli|ms-|0.6.0
cpptools|ms-|1.22.11
makefile-tools|ms-|0.11.13
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
fabric8-analytics|red|0.9.5
java|red|1.36.0
vscode-yaml|red|1.15.0
LiveServer|rit|5.7.9
p5-vscode|sam|1.2.16
rewrap|stk|1.16.3
code-spell-checker|str|4.0.21
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
f3je6385:31013174
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl1:31139838
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
[tssandbox.zip](https://github.com/user-attachments/files/17909324/tssandbox.zip)
<!-- generated by issue reporter --> | Suggestion,Help Wanted,Experience Enhancement | low | Critical |
2,692,433,598 | vscode | Git - Git blame editor decoration appears before the folding ... one | The `...` (folding) decoration should appear before the Git blame one.

| bug,git | low | Minor |
2,692,571,008 | svelte | Svelte 5: Preserve local state during HMR | ### Describe the bug
With a brand new SvelteKit project on Svelte 5, I realized that all my state is reset whenever HMR updates.
Whenever I change anything in the HTML part of a Svelte file, HMR kicks in and resets all my `$state`, which makes it very cumbersome to develop a large and complex form.
I read that earlier versions supported directives such as `@hmr:keep-all` or `preserveLocalState`, but according to https://github.com/sveltejs/kit/issues/12985, these are outdated and no longer used.
So I am unsure what to do now to preserve states between HMR reloads - is this a bug or a feature request?
### Reproduction
Reproduction link in SvelteLab: https://www.sveltelab.dev/63iv5zkf3ed2806
Steps to reproduce:
1. Check the checkbox
2. Click the button
3. Change something in the HTML, like adding more text to the paragraph
4. All states get reset: the checkbox gets unchecked, and `text` resets to its initial value ๐ฉ
I understand that SvelteLab may not reflect real-world environments, but I tried it on my laptop inside WSL and the "bug" is still there
### Logs
_No response_
### System Info
```shell
System:
OS: Linux 5.15 Ubuntu 22.04.4 LTS 22.04.4 LTS (Jammy Jellyfish)
CPU: (8) x64 AMD Ryzen 7 8845HS w/ Radeon 780M Graphics
Memory: 13.97 GB / 14.90 GB
Container: Yes
Shell: 5.1.16 - /bin/bash
Binaries:
Node: 23.3.0 - ~/.local/share/mise/installs/node/23/bin/node
npm: 10.9.0 - ~/.local/share/mise/installs/node/23/bin/npm
pnpm: 9.14.2 - ~/.local/share/mise/installs/node/23/bin/pnpm
npmPackages:
svelte: ^5.1.13 => 5.1.13
```
### Severity
annoyance | feature request | low | Critical |
2,692,653,252 | rust | `avr-rjmp-offset` is flaky on `x86_64-mingw` | > This test seems to be failing somewhat often:
>
> * 2024-11-25 Hang: https://github.com/rust-lang/rust/pull/133465#issuecomment-2499233485
> * 2024-11-20 Error: https://github.com/rust-lang/rust/pull/132629#issuecomment-2490054965
> * 2024-11-13 Error: https://github.com/rust-lang/rust/pull/132997
> * 2024-11-08 Error: https://github.com/rust-lang/rust/pull/132757#issuecomment-2465793359
> * 2024-10-26 Error: https://github.com/rust-lang/rust/pull/131527#issuecomment-2439511330
> * 2024-10-23 Error: https://github.com/rust-lang/rust/pull/123550#issuecomment-2434164699
All on `x86_64-mingw`. Can we maybe disable it on `x86_64-mingw` until it is fixed?
_Originally posted by @ehuss in https://github.com/rust-lang/rust/issues/131755#issuecomment-2499254372_
I think the linker is crashing...
| A-LLVM,A-testsuite,T-compiler,O-windows-gnu,C-bug,O-AVR,A-run-make,A-linkers,C-external-bug,I-flaky-test | low | Critical |
2,692,708,355 | react-native | Bug in the height of the text input when it is multiline | ### Description
When specifying the "multiline" and "numberOfLines" properties, the height of the text input acts differently in the new and old architecture.
In the new architecture the height remains the same as if the properties were not specified, while in the old architecture the height increases.
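For reference, a minimal sketch of the input in question (the prop values are assumptions; the reproducer repo below is authoritative):
```jsx
import { TextInput } from 'react-native';

// Old architecture: the height grows to fit numberOfLines.
// New architecture: the height stays as if these props were not specified.
const Repro = () => <TextInput multiline numberOfLines={4} placeholder="Type here" />;
```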
### Steps to reproduce
1. Clone the sample repository.
2. Enable or disable the new architecture.
3. Run "yarn android".
4. Display the text input on the device.
### React Native Version
0.76.3
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: Windows 11 10.0.22631
CPU: "(12) x64 AMD Ryzen 5 5600G with Radeon Graphics "
Memory: 4.79 GB / 29.79 GB
Binaries:
Node:
version: 20.14.0
path: ~\AppData\Local\Temp\yarn--1732581775916-0.8361455801386266\node.CMD
Yarn:
version: 1.22.22
path: ~\AppData\Local\Temp\yarn--1732581775916-0.8361455801386266\yarn.CMD
npm:
version: 10.7.0
path: C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK:
API Levels:
- "28"
- "30"
- "33"
- "34"
- "35"
Build Tools:
- 30.0.3
- 33.0.0
- 34.0.0
- 35.0.0
- 35.0.0
System Images:
- android-23 | Google APIs Intel x86 Atom
- android-29 | Google Play Intel x86 Atom
- android-34 | Google Play Intel x86_64 Atom
Android NDK: Not Found
Windows SDK:
AllowDevelopmentWithoutDevLicense: Enabled
Versions:
- 10.0.19041.0
- 10.0.22000.0
- 10.0.22621.0
IDEs:
Android Studio: AI-241.18034.62.2411.12169540
Visual Studio:
- 17.10.34928.147 (Visual Studio Community 2022)
Languages:
Java: 20.0.2
Ruby: Not Found
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
It is not necessary, the application does not crash or fail.
```
### Reproducer
https://github.com/cervisebas/react-native-bug-text-input
### Screenshots and Videos
### New Architecture

### Old Architecture

| Issue: Author Provided Repro,Type: New Architecture | low | Critical |
2,692,771,737 | svelte | `$state(undefined)` is typed as never on svelte-check | ### Describe the bug
On a project generated by `[email protected]`, running a type check on the following code reports that `p` is typed as `never`.
```html
<script lang="ts">
type Person = { name: string }
let p: Person | undefined = $state(undefined)
$effect(() => {
// load person
setTimeout(() => {
p = { name: "bob" }
}, 1000);
});
let displayName = $derived(p?.name ?? "anon") // => Error: Property 'name' does not exist on type 'never'. (ts)
</script>
<div>{ displayName }</div>
```
This is considered a bug because it does not occur when the following changes are made.
```diff
- let p: Person | undefined = $state(undefined) // NG
+ let p: Person | undefined = $state() // OK
```
### Reproduction
https://github.com/ykrods/svelte-reproduction-state-type
### Logs
```shell
$ npm run check
> [email protected] check
> svelte-check --tsconfig ./tsconfig.json && tsc -p tsconfig.node.json
====================================
Loading svelte-check in workspace: /home/ykrods/work/svelte-reproduction-state-type
Getting Svelte diagnostics...
/home/ykrods/work/svelte-reproduction-state-type/src/App.svelte:13:33
Error: Property 'name' does not exist on type 'never'. (ts)
let displayName = $derived(p?.name ?? "anon")
</script>
====================================
svelte-check found 1 error and 0 warnings in 1 file
```
### System Info
```shell
System:
OS: Linux 6.10 Manjaro Linux
CPU: (16) x64 AMD Ryzen 7 5700U with Radeon Graphics
Memory: 8.43 GB / 14.98 GB
Container: Yes
Shell: 5.9 - /bin/zsh
Binaries:
Node: 22.8.0 - /usr/bin/node
npm: 10.8.3 - /usr/bin/npm
Browsers:
Chromium: 131.0.6778.85
npmPackages:
svelte: ^5.1.3 => 5.2.8
```
### Severity
annoyance | types / typescript | low | Critical |
2,692,779,664 | flutter | Replace use of ATK with accessing AT-SPI directly | The Linux embedder currently uses the ATK library to provide accessibility. [GTK4 switched from using this library to accessing AT-SPI directly over D-Bus](https://gitlab.gnome.org/GNOME/gtk/-/commit/c63087a5631e72cd1c45bdc5a41bf605195be64c); this may also be a good move for Flutter, as it removes the usage of an old library and removes a complex layer that makes using accessibility complicated. This would be a significant amount of work to implement. | engine,a: accessibility,platform-linux,c: proposal,P2,team-linux,triaged-linux | low | Minor |
2,692,880,188 | vscode | Visual glitches in editor when hardware acceleration is disabled | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Debian 12
Visual glitches appear when using enter/backspace keys to create or delete lines in a document, from lines 2-9.
Also, although inconsistent, switching lines using the mouse is laggy when going to a higher column position on another row (below line 10).
These bugs only appear when the --disable-gpu option is set, or the disable-hardware-acceleration option is true.
Steps to Reproduce:
1. Open VS Code in Debian (KDE Plasma as desktop environment, not sure if relevant) with the following options: --unity-launch --enable-features=UseOzonePlatform --ozone-platform=wayland --disable-gpu
2. Create a new document
3. Use enter key and backspace key to create/delete lines. Visual glitches with the cursor should appear, and line numbers may not show up.
(I am on line 3 in the image)

| bug,linux,editor-rendering,chromium,wayland | low | Critical |
2,692,944,159 | transformers | Add support for causal language modeling for `DistilBertModel` | ### Feature request
HuggingFace Transformers currently supports causal language modeling (CLM) fine-tuning for BERT using `BertLMHeadModel`, [as shown in the docs](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertLMHeadModel). My request is simply to extend this support to `DistilBertModel`.
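A hypothetical sketch of what this could look like (`DistilBertLMHeadModel` does not exist today; the name and the `is_decoder` flag are assumed by analogy with `BertLMHeadModel`):
```python
from transformers import DistilBertLMHeadModel  # hypothetical class

model = DistilBertLMHeadModel.from_pretrained(
    "distilbert/distilbert-base-multilingual-cased",
    is_decoder=True,  # mirrors how BertLMHeadModel is configured for CLM
)
```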
### Motivation
I want to use a distilBERT model to initialize an `EncoderDecoderModel`, but I am getting an error message that says it does not support CLM.
```python
from transformers import EncoderDecoderModel
EncoderDecoderModel.from_encoder_decoder_pretrained(
encoder_pretrained_model_name_or_path="distilbert/distilbert-base-multilingual-cased",
decoder_pretrained_model_name_or_path="distilbert/distilbert-base-multilingual-cased",
)
```
Here is the error message:
```
ValueError: Unrecognized configuration class <class 'transformers.models.distilbert.configuration_distilbert.DistilBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GemmaConfig, Gemma2Config, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
```
### Your contribution
I'm happy to contribute. | Feature request | low | Critical |
2,692,946,499 | pytorch | [Inductor] [CPU] `GroupNorm` triggers inconsistency when using Inductor | ### ๐ Describe the bug
`GroupNorm` does not seem to be optimized correctly; I think this is related to #129823.
A reproduction is provided below.
BTW, it only prints `False` on **CPU**; although the error on **CUDA** is not **0**, it is within the tolerance range of `torch.allclose`.
```python
import torch
import torch.nn as nn
torch.manual_seed(0)
class Model(nn.Module):
def __init__(self):
super().__init__()
self.gn = nn.GroupNorm(num_groups=32, num_channels=32)
def forward(self, x):
x = self.gn(x)
return x
model = Model().eval()
c_model = torch.compile(model)
x = torch.randn(1, 32, 128, 128, 128)
inputs = [x]
output = model(*inputs)
c_output = c_model(*inputs)
print(torch.max(torch.abs(output - c_output)))
print(torch.allclose(output, c_output, 1.3e-6, 1e-5))
```
### Error logs
on CPU
```
tensor(7.0095e-05, grad_fn=<MaxBackward1>)
False
```
on CUDA
```
tensor(1.4305e-06, device='cuda:0', grad_fn=<MaxBackward1>)
True
```
### Versions
Brief information about my environment:
OS: Ubuntu 20.04.6 LTS (x86_64)
torch version: 2.6.0.dev20241115+cu124
<details>
<summary>Click here for detailed env</summary>
```
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.994
BogoMIPS: 4999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241115+cu124
[pip3] torchaudio==2.5.0.dev20241115+cu124
[pip3] torchvision==0.20.0.dev20241115+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241115+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241115+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241115+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor,module: pt2 accuracy | low | Critical |
2,692,957,167 | tensorflow | Aborted (core dumped) in `RaggedBincount` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17.0 tf2.16.1
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When the value of `size` is close to the maximum value for the dtype, an integer overflow error occurs in the following multiplication calculations.
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
splits = tf.constant([0, 3, 5, 9], dtype=tf.int64)
values = tf.constant(1, shape=[3,3], dtype=tf.int64)
size = tf.constant(6522107765268123892, dtype=tf.int64)
weights = tf.constant(1, shape=[3,3], dtype=tf.float32)
counts = tf.raw_ops.RaggedBincount(splits=splits, values=values, size=size, weights=weights)
```
### Relevant log output
```shell
Status: INVALID_ARGUMENT: Encountered overflow when multiplying 3 with 6522107765268123892, result: -1
Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,692,984,193 | PowerToys | Enhancement Request: Introduce a Variable to Fetch Parent Folder Name in PowerRename | ### Description of the new feature / enhancement
We propose the addition of a new variable in the PowerRename tool that allows users to dynamically retrieve the name of the parent folder for any given file or folder. This feature would be particularly useful for users who frequently perform batch renaming operations based on the context of their file hierarchy.
The new variable, which we suggest naming **${ParentFolderName}**, would be integrated into the PowerRename's search and replace functionality. When a user initiates a rename operation, PowerRename would automatically populate this variable with the name of the folder that contains the selected files or folders. This would enable users to include the parent folder's name in their renaming patterns, providing a more contextual and organized approach to file management.
For example, if a user has a folder named "ProjectX" with several subfolders and files, they could use ${ParentFolderName} to prepend or append the parent folder's name to each file's name, resulting in filenames like "ProjectX_File1.txt" or "File1_ProjectX.txt".
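A sketch of how this could look in the PowerRename dialog (`$1` is PowerRename's existing regex capture syntax; `${ParentFolderName}` is the proposed, not-yet-existing variable):
```
Search for:   (.*)
Replace with: ${ParentFolderName}_$1
Result:       File1.txt  ->  ProjectX_File1.txt   (for a file inside a folder named "ProjectX")
```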
### Scenario when this would be used?
The **${ParentFolderName}** variable would be particularly beneficial in scenarios where users need to maintain a clear and organized file structure, especially in collaborative environments or when working with large projects that span multiple directories.
1. Project Management: When organizing files for different projects, being able to include the parent folder's name in the filename can help quickly identify the project a file belongs to without opening it.
2. Data Analysis: In data analysis, where datasets are often categorized by location or department, the ability to incorporate the parent folder's name into filenames can streamline the sorting and analysis process.
3. Backup and Archive: For users who create backups or archives of files, having the parent folder's name as part of the filename can help in quickly restoring files to their original locations.
4. Legal and Compliance: In legal or compliance-related work, where files need to be organized by case or matter, the %ParentFolderName% variable can assist in maintaining a structured naming convention that aligns with case management systems.
By implementing this feature, PowerRename would offer users a more robust and flexible tool for managing their files, enhancing productivity and organization in various professional and personal contexts.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,692,996,489 | pytorch | ShardedGradScaler is not documented on the website. | ### ๐ The doc issue
The `ShardedGradScaler` function is not documented on the website. I want to ask whether this function is indeed public or unexpectedly exposed.
### Suggest a potential alternative/fix
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @brycebortree @sekyondaMeta @AlannaBurke @zhaojuanmao @mrshenli @rohan-varma @chauhang | oncall: distributed,module: docs,module: fsdp,topic: docs | low | Minor |
2,693,007,846 | vscode | Inconsistent decorations behavior in explorer panel | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Sonoma 14.7.1
Steps to Reproduce:
1. The problem decorations in the Explorer panel behave in an inconsistent and erratic way, as shown in the attached pictures. The error decorations (both color and badge) appear and disappear erratically when the file is opened/closed or when its name is selected/deselected in the Explorer panel.
2. Errors and warnings are correctly reported and underlined in each file when they are opened.
3. This is observed for all file types (tested on .py, .ipynb, .json, .yaml, .js)
4. If the directory is in a Git repository, the usual decorations for git status (green for unmodified, yellow for modified, etc...) do not appear at all.
5. Problem still occurs with all extensions disabled. I have changed diverse settings such as `"problems.decorations.enabled": true`, `"explorer.decorations.colors": true`, without any effect.



| bug,file-explorer,file-decorations | low | Critical |
2,693,023,198 | kubernetes | statefulset can not recreate the lost replicas in case of node loss | ### What happened?
kube-state-metrics was deployed as a StatefulSet in multiple-shard mode.
Shards 1-3 were lost during a disaster, but they never get recreated.

### What did you expect to happen?
the lost replicas should be recreated automatically
### How can we reproduce it (as minimally and precisely as possible)?
1. Create a kind cluster with multiple nodes.
2. Create a StatefulSet with replicas spread over these nodes.
3. Delete one or more nodes and check.
### Anything else we need to know?
I'm not sure whether this is by design or a known issue waiting for a fix.
### Kubernetes version
happened on 1.20.7
assumed it still exists in lastest release
### Cloud provider
not specific to any cloud provider
### OS version
not specific to any OS version
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/apps,needs-triage | low | Major |
2,693,109,381 | deno | thread 'worker-2' panicked ... called `Result::unwrap()` on an `Err` value | Deno 2.1.1
```
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: linux x86_64
Version: 2.1.1
Args: ["/usr/bin/deno", "test", "-A", "--env-file=.env.test", "--deny-read=.env", "--junit-path=./deno-test.xml", "--coverage=cov_profile"]
thread 'worker-2' panicked at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/deno_core-0.321.0/modules/map.rs:544:40:
called `Result::unwrap()` on an `Err` value: Global { data: 0x78d7d48da020, isolate_handle: IsolateHandle(IsolateAnnex { isolate: 0x78d7d440e000, isolate_mutex: Mutex { data: (), poisoned: false, .. } }) }
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: core::result::unwrap_failed
3: deno_core::modules::map::ModuleMap::new_synthetic_module
4: deno_core::modules::map::ModuleMap::new_module
5: deno_runtime::web_worker::run_web_worker::{{closure}}
6: tokio::runtime::task::raw::poll
```
This is happening consistently in my CI runner after switching to a different machine. Logs: https://gitlab.com/soapbox-pub/ditto/-/jobs/8469492115 | bug,panic | low | Critical |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.