id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,804,595,679 | material-ui | Rollup failed to resolve import "@mui/system/RtlProvider" | ### Steps to reproduce
Steps:
1. just run npm install
### Current behavior

### Expected behavior
To successfully install the package.
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: Rollup failed to resolve import "@mui/system/RtlProvider"
| status: waiting for author | low | Critical |
2,804,613,078 | deno | "Move to new file" quickfix does not work for `import * as ...` | Version: Deno 2.1.6
```ts
// main.ts
import * as a from 'data:application/json,{"a":1}' with { type: "json" };
const y = a;
```
When you try to execute the "Move to a new file" quick fix in VS Code in the `main.ts` file, the LSP fails to respond to the `codeAction/resolve` request.
It works if the file referenced from the import does not exist. It fails if it does exist (data url, local file, npm specifier all fail). | lsp,tsc | low | Minor |
2,804,630,849 | PowerToys | Make default log base in Run calculator a setting | ### Description of the new feature / enhancement
Writing `log(5)` in the PowerToys Run calculator used to calculate the natural logarithm, but this was changed following #14687 to use base 10 instead. As discussed in that issue, the notation is inherently ambiguous, and different users may have different, but very strong, expectations as to what the base should be. Anyone with a math/physics/programming background would surely expect the natural logarithm.
There are now also `log10`, `log2` and `ln`, so the question comes down to which one of these `log` should be an alias for. It would be great if this could be a setting, so users can choose themselves which convention to follow.
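The convention described above is easy to verify directly in Python's standard library, where `log` is the natural logarithm and the other common bases get dedicated functions (a quick illustration, not part of the original report):

```python
import math

# One-argument `log` is the natural logarithm (base e);
# base 10 and base 2 have their own functions.
print(math.log(5))      # ≈ 1.609 (ln 5)
print(math.log10(5))    # ≈ 0.699 (base 10)
print(math.log2(5))     # ≈ 2.322 (base 2)
print(math.log(5, 10))  # explicit base, matches log10 up to float rounding
```

This mirrors the C++, numpy, and JavaScript APIs cited below.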
### Scenario when this would be used?
Logarithms are common and expectations for the standard base can be strong.
### Supporting information
- [Wikipedia](https://en.wikipedia.org/wiki/Logarithm#Particular_bases) lists `log` as an alias for bases 2, e and 10. (Though I haven't seen base 10 except in handheld calculators and, by extension, applications mimicking those.)
- In programming languages, `log` usually refers to the natural logarithm (e.g. [C++](https://cplusplus.com/reference/cmath/log/), [Python](https://docs.python.org/3/library/math.html#math.log)/[numpy](https://numpy.org/doc/stable/reference/generated/numpy.log.html), [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/log), [Matlab](https://www.mathworks.com/help/matlab/ref/double.log.html)).
- In those same languages, `log10` and `log2` are present to refer to logarithms of the other common bases, so the presence of those may lead users to expect the same convention to be followed.
- `ln` on the other hand is not usually present. (Its presence may lead users to expect the other ISO notations `lg` and `lb` to also exist, which they don't. Maybe adding those aliases is also worthwhile.) | Needs-Triage | low | Minor |
2,804,648,948 | react | Bug: eagerly adding `wheel` events to app root opts into changing input `type="number"` value on scroll | <!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
React packages to make sure your issue has not already been fixed.
-->
React version: 17+
## Steps To Reproduce
1. Render `<input type="number" defaultValue={0} />`
2. Focus input
3. Scroll mouse wheel over input
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than React. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
Link to code example: https://stackblitz.com/edit/react-wheel-event-number-input-mouse-wheel?file=src%2Fmain.tsx&terminal=dev
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
## The current behavior
The value of the input changes when you scroll the mouse wheel over the number input. This behavior is _only_ enabled in browsers if you add a `wheel` event to the input or an ancestor. React eagerly listens for a `wheel` event on the app's root, whether you use one in app code or not.
Therefore React is forcibly opting you into behavior that is undesirable and error-prone. There is no way to opt out. You must instead add a clunky workaround (that may not even work correctly because the events are passive).
This codepen demonstrates how you get opted into the behavior outside of a React context: https://codepen.io/WickyNilliams/pen/xbKyVQE
## The expected behavior
The input's value should not change on mouse wheel, unless I have added an event listener explicitly. This aligns with actual browser behaviour outside of React
| Status: Unconfirmed | low | Critical |
2,804,654,789 | flutter | iOS release crash : `figTimebaseCopyTargetTimebase` : `pthread_mutex_lock$VARIANT$mp` | ### Steps to reproduce
The crash happened in release mode for a live user. I have no way to reproduce it, as I don't know what it is and never encountered it during development.
Unfortunately, I have no way to contact this user either.
I am guessing from the [StackTrace](https://github.com/user-attachments/files/18507520/trace.txt) that this is a crash from the `video_player` plugin, but I am definitely not sure.
I only see `AVFCore` and `CoreMedia`, so I am guessing.
### Expected results
The app should not crash
### Actual results
The app is crashing
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Crashed: com.apple.root.default-qos
0 libsystem_pthread.dylib 0xae18 pthread_mutex_lock$VARIANT$mp + 8
1 CoreMedia 0x2b8ac figTimebaseCopyTargetTimebase + 48
2 CoreMedia 0x2bec8 FigTimebaseCopyUltimateMasterClockAndHeight + 28
3 CoreMedia 0x2cd58 CMTimebaseGetTimeClampedAboveAnchorTime + 84
4 AVFCore 0x3d614 AVTimebaseObserver_figTimebaseGetTime + 44
5 AVFCore 0xb048 -[AVPeriodicTimebaseObserver _handleTimeDiscontinuity] + 76
6 AVFCore 0xac5c __AVTimebaseObserver_timebaseNotificationCallback_block_invoke + 112
7 libdispatch.dylib 0x63094 _dispatch_call_block_and_release + 24
8 libdispatch.dylib 0x64094 _dispatch_client_callout + 16
9 libdispatch.dylib 0x6858 _dispatch_queue_override_invoke + 720
10 libdispatch.dylib 0x13b94 _dispatch_root_queue_drain + 340
11 libdispatch.dylib 0x1439c _dispatch_worker_thread2 + 172
12 libsystem_pthread.dylib 0x1dc4 _pthread_wqthread + 224
13 libsystem_pthread.dylib 0x192c start_wqthread + 8
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.2, on macOS 15.2 24C101 darwin-x64, locale fr-FR)
    • Flutter version 3.27.2 on channel stable at /Users/foxtom/Desktop/flutter
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision 68415ad1d9 (9 days ago), 2025-01-13 10:22:03 -0800
    • Engine revision e672b006cb
    • Dart version 3.6.1
    • DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
    • Android SDK at /Users/foxtom/Library/Android/sdk
    • Platform android-35, build-tools 35.0.0
    • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Build 16C5032a
    • CocoaPods version 1.16.2
[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      🔨 https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] VS Code (version 1.96.4)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension can be installed from:
      🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (5 available)
    • moto g 8 power (mobile) • adb-ZY22BNDW2C-yyjDO7._adb-tls-connect._tcp • android-arm64 • Android 11 (API 30)
    • Now You See Me (mobile) • 00008020-001204401E78002E • ios • iOS 18.2.1 22C161
    • macOS (desktop) • macos • darwin-x64 • macOS 15.2 24C101 darwin-x64
    • Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[✓] Network resources
    • All expected network resources are available.
• No issues found!
```
</details>
| c: crash,platform-ios,waiting for customer response,engine,a: production,needs repro info,team-engine,fyi-ios | low | Critical |
2,804,701,893 | pytorch | distributed.new_group with backend GLOO hangs when distributed.split_group was called before | ### 🐛 Describe the bug
A call to `distributed.new_group` with backend GLOO hangs if `distributed.split_group` was called before and not all ranks are part of a new ProcessGroup (whether in `new_group` and/or `split_group`).
Reproducer:
```python
#!/usr/bin/env python3
import os
import torch
import torch.distributed as dist
LOCAL_RANK = int(os.getenv("LOCAL_RANK"))
torch.distributed.init_process_group(backend='cpu:gloo,cuda:nccl', device_id=torch.device("cuda", LOCAL_RANK))
WORLD_SIZE = dist.get_world_size()
# hang in v2.5.1 and 2.7.0.dev20250120+cu126.
ranks_split = [ list(range(WORLD_SIZE-1)) ]
ranks_new = list(range(WORLD_SIZE))
# hang in v2.5.1, crash in tear down in 2.7.0.dev20250120+cu126.
# ranks_split = [ list(range(WORLD_SIZE)) ]
# ranks_new = list(range(WORLD_SIZE-1))
# hang in v2.5.1, crash in tear down in 2.7.0.dev20250120+cu126.
# ranks_split = [ list(range(WORLD_SIZE-1)) ]
# ranks_new = list(range(WORLD_SIZE-1))
# works
# ranks_split = [ list(range(WORLD_SIZE)) ]
# ranks_new = list(range(WORLD_SIZE))
dist.split_group(split_ranks=ranks_split)
print("new_group ...")
dist.new_group(ranks=ranks_new, backend=dist.Backend.GLOO) # hang occurs here
print("done")
dist.barrier()
```
Run with: `torchrun --nproc-per-node 2 ./torch-split-group-repro.py`
### Versions
Reproducible with PyTorch 2.5.1 and latest 2.7.0.dev20250120+cu126.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,804,709,801 | pytorch | Inductor autograd raises an error on the second run, possibly because of the FX graph cache | ### 🐛 Describe the bug
```python
import torch
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
@torch.compile
def func(x):
    return x * x
x = torch.tensor(0.0, device="cuda", requires_grad=True)
func(x).backward()
print(x.grad)
```
Running the code twice produces a Triton error on the second run.
```
Traceback (most recent call last):
  File "/root/dev/temp/tst.py", line 14, in <module>
    func(x).backward()
  File "/root/dev/pytorch/torch/_tensor.py", line 648, in backward
    torch.autograd.backward(
  File "/root/dev/pytorch/torch/autograd/__init__.py", line 353, in backward
    _engine_run_backward(
  File "/root/dev/pytorch/torch/autograd/graph.py", line 815, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/root/dev/pytorch/torch/autograd/function.py", line 307, in apply
    return user_fn(self, *args)
  File "/root/dev/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1958, in backward
    return impl_fn()
  File "/root/dev/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1944, in impl_fn
    out = CompiledFunction._backward_impl(ctx, all_args)
  File "/root/dev/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2079, in _backward_impl
    out = call_func_at_runtime_with_args(
  File "/root/dev/pytorch/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
    out = normalize_as_list(f(args))
  File "/root/dev/pytorch/torch/_inductor/output_code.py", line 464, in __call__
    return self.current_callable(inputs)
  File "/root/dev/pytorch/torch/_inductor/utils.py", line 2228, in run
    return model(new_inputs)
  File "/tmp/torchinductor_root/ra/crazrzms2jyia4lhreqvggnuhmqpq44ag44s5qjmcvsbwhbd2hdr.py", line 95, in call
    triton_poi_fused_add_mul_0.run(tangents_1, primals_1, buf0, 1, grid=grid(1), stream=stream0)
  File "/root/dev/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 961, in run
    return launcher(
  File "<string>", line 6, in launcher
  File "/usr/local/lib/python3.10/dist-packages/triton/backends/nvidia/driver.py", line 435, in __call__
    self.launch(*args, **kwargs)
RuntimeError: Triton Error [CUDA]: an illegal memory access was encountered
```
Setting `TORCHINDUCTOR_FX_GRAPH_CACHE=0` fixes it.
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git62ce3e6
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 20.0.0git (https://github.com/llvm/llvm-project.git ece4e1276e2140d84b05b8c430a0e547a1f23210)
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 551.61
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 3
BogoMIPS: 5376.02
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Vulnerable: No microcode
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0a0+git62ce3e6
[pip3] torch-xla==2.5.0+git3d860bf
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang @gchanan @zou3519 @msaroufim @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @atalman @malfet @ptrblck @nWEIdia @xwang233 | high priority,triaged,bug,oncall: pt2,module: inductor | low | Critical |
2,804,711,534 | react-native | Memory Leak in Animated API | ### Description
A memory leak has been identified in the Animated API, specifically in the `_children` array within the AnimatedWithChildren class. The issue arises when rendering animated components using transforms and interpolation. Over time, the children array grows indefinitely, leading to increased memory usage and performance degradation.
The memory leak only exists on web; iOS and Android don't seem to leak (see the snack below).
### Steps to reproduce
1. Implement a component that periodically renders Animated elements using transform and interpolate, as these trigger the issue more frequently.
2. Monitor performance degradation visually as animations run.
3. Use tools like Chrome DevTools to confirm memory growth over time, by taking heap snapshots.
### React Native Version
0.76.6
### Affected Platforms
Runtime - Web
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
  OS: macOS 15.1.1
  CPU: (10) arm64 Apple M1 Pro
  Memory: 128.17 MB / 16.00 GB
  Shell:
    version: "5.9"
    path: /bin/zsh
Binaries:
  Node:
    version: 18.20.3
    path: ~/.nvm/versions/node/v18.20.3/bin/node
  Yarn:
    version: 1.22.22
    path: ~/.nvm/versions/node/v18.20.3/bin/yarn
  npm:
    version: 10.7.0
    path: ~/.nvm/versions/node/v18.20.3/bin/npm
  Watchman: Not Found
Managers:
  CocoaPods:
    version: 1.16.2
    path: /Users/christophermiles/.asdf/shims/pod
SDKs:
  iOS SDK: Not Found
  Android SDK: Not Found
IDEs:
  Android Studio: Not Found
  Xcode:
    version: /undefined
    path: /usr/bin/xcodebuild
Languages:
  Java: Not Found
  Ruby:
    version: 2.7.2
    path: /Users/christophermiles/.asdf/shims/ruby
npmPackages:
  "@react-native-community/cli":
    installed: 15.1.3
    wanted: ^15.1.3
  react:
    installed: 18.3.1
    wanted: 18.3.1
  react-native:
    installed: 0.76.6
    wanted: 0.76.6
  react-native-macos: Not Found
npmGlobalPackages:
  "*react-native*": Not Found
Android:
  hermesEnabled: Not found
  newArchEnabled: Not found
iOS:
  hermesEnabled: Not found
  newArchEnabled: Not found
info React Native v0.77.0 is now available (your project is running on v0.76.6).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.77.0
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.76.6&to=0.77.0
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
```
### Stacktrace or Logs
```text
See memory and heap snapshots in screenshots below.
```
### Reproducer
https://snack.expo.dev/@miles121/smelly-blue-croissant
### Screenshots and Videos
<img width="600" alt="Image" src="https://github.com/user-attachments/assets/0031d466-6e6c-42cf-bc60-1a6af71e57ab" />
<br>
<img width="600" alt="Image" src="https://github.com/user-attachments/assets/07032ec0-a0ac-4e54-a5ae-da33e1ef970c" /> | Issue: Author Provided Repro,API: Animated | low | Major |
2,804,712,308 | ui | [bug]: Error adding shadcn for Turborepo | ### Describe the bug
Unable to add shadcn using turborepo.
`npx shadcn@canary init`
✔ Preflight checks.
✖ Verifying framework.
We could not detect a supported framework at /app-name.
Visit https://ui.shadcn.com/docs/installation/manual to manually configure your project.
Once configured, you can use the cli to add components.
### Affected component/components
Not installing
### How to reproduce
1. Create monorepo using turborepo
2. Run shadcn/ui CLI to add for monorepo
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
System, pnpm
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,804,716,187 | electron | [Bug] unable to get `context-menu` event when using `-webkit-app-region: drag` | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.2.7
### What operating system(s) are you using?
Linux
### Operating System Version
Ubuntu 22.04
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
Was probably never supported on Linux.
### Expected Behavior
An element with `-webkit-app-region: drag` should still receive context menu events.
### Actual Behavior
No `context-menu` event fires.
### Testcase Gist URL
https://gist.github.com/bpasero/b04271deaacf0e099b0a0fa2289e8d28
### Additional Information
//cc @codebytere @deepak1556 | platform/linux,component/menu,bug :beetle:,has-repro-gist,32-x-y,33-x-y,34-x-y | low | Critical |
2,804,726,746 | kubernetes | [Flaking Test][sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] | ### Which jobs are flaking?
master-blocking
- gce-ubuntu-master-containerd
### Which tests are flaking?
E2eNode Suite.[It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-crio-cgroupv1-node-e2e-conformance/1881671364948004864)
[Triage](https://storage.googleapis.com/k8s-triage/index.html?test=Container%20Runtime%20blackbox%20test%20on%20terminated%20container%20should%20report%20termination%20message%20from%20file%20when%20pod%20succeeds%20and%20TerminationMessagePolicy%20FallbackToLogsOnError%20is%20set)
### Since when has it been flaking?
[1/10/2025, 6:57:34 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-crio-cgroupv1-node-e2e-conformance/1877882125122801664)
[1/21/2025, 5:54:07 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-crio-cgroupv1-node-e2e-conformance/1881671364948004864)
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#ci-crio-cgroupv1-node-e2e-conformance
### Reason for failure (if possible)
```
{ failed [FAILED] Timed out after 300.001s.
Expected
<v1.PodPhase>: Failed
to equal
<v1.PodPhase>: Succeeded
In [It] at: k8s.io/kubernetes/test/e2e/common/node/runtime.go:157 @ 01/21/25 12:18:04.935
}
```
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig node
cc @kubernetes/release-team-release-signal | sig/node,kind/flake,triage/accepted | low | Critical |
2,804,730,726 | pytorch | Loading weights using `torch.distributed.checkpoint` leads to large loss values | ### 🐛 Describe the bug
Using different init methods leads to losses with different scales:
```python
# NOTE: This will produce loss in range [3, 5]
return init_with_meta(self, auto_wrap_policy)
# NOTE: This will produce normal loss in range [0.4, 1]
return init_with_hf(self, auto_wrap_policy)
```
However, I have checked that the `distcp` checkpoints should be correct (I converted the distcp to safetensors and checked the generations are reasonable). Is there anything I am missing?
The complete code to reproduce:
```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from functools import cached_property


class Dataset:
    def __init__(self, dialogues: list[list[dict[str, str]]], tokenizer):
        self.dialogues = [self.process_history(dialogue, tokenizer) for dialogue in dialogues]

    def process_history(self, history: list[dict[str, str]], tokenizer):
        if len(history) == 0:
            raise ValueError("History is empty")
        standard_history = []
        for message in history:
            if "from" in message:
                message["role"] = message.pop("from")
            if "value" in message:
                message["content"] = message.pop("value")
            assert "role" in message and "content" in message
            message["role"] = message["role"].lower()
            standard_history.append(message)
        generation_prompt = "<|start_header_id|>assistant<|end_header_id|>\n\n<|begin_of_thought|>\n\n"
        # Apply chat template, tokenize, and get labels
        prev, input_ids, attn_mask, labels = "", [], [], []
        for index in range(len(standard_history)):
            templated = tokenizer.apply_chat_template(
                standard_history[: index + 1],
                tokenize=False,
                add_generation_prompt=False
            )
            if templated.endswith(generation_prompt):
                templated = templated[:-len(generation_prompt)]
            assert templated.startswith(prev), (templated, prev)
            prev, current_templated = templated, templated[len(prev) :]
            tokenized = tokenizer(current_templated, add_special_tokens=False)
            ids, mask = tokenized.input_ids, tokenized.attention_mask
            input_ids.extend(ids)
            attn_mask.extend(mask)
            if standard_history[index].get("calculate_loss") is not None:
                if standard_history[index]["calculate_loss"]:
                    lbl = [x for x in ids]
                else:
                    lbl = [-100] * len(ids)
            elif standard_history[index]["role"] != "assistant":
                lbl = [-100] * len(ids)
            else:
                lbl = [x for x in ids]
            labels.extend(lbl)
        return {
            "input_ids": torch.tensor(input_ids, dtype=torch.long),
            "attention_mask": torch.tensor(attn_mask, dtype=torch.long),
            "labels": torch.tensor(labels, dtype=torch.long),
        }

    def __len__(self):
        return len(self.dialogues)

    def __getitem__(self, idx: int):
        return self.dialogues[idx]


def zero_pad_sequences(sequences: list[torch.Tensor], side: str = "left", value=0, max_len: int | None = None):
    assert side in ("left", "right")
    if max_len is not None:
        sequences = [x[..., :max_len] for x in sequences]
    else:
        max_len = max(seq.size(-1) for seq in sequences)
    padded_sequences = []
    for seq in sequences:
        pad_len = max_len - seq.size(-1)
        padding = (pad_len, 0) if side == "left" else (0, pad_len)
        padded_sequences.append(F.pad(seq, padding, value=value))
    return torch.stack(padded_sequences, dim=0)


class Exp:
    model_path: str = "/checkpoints/Meta-Llama-3.1-8B-Instruct/"
    distcp_path: str = "/checkpoints/Meta-Llama-3.1-8B-Instruct/distcp"
    data_path: str = "/data/sft_data_sample.json"
    num_epochs: int = 1

    def run(self):
        from tqdm import tqdm
        for epoch in range(self.num_epochs):
            pbar = tqdm(self.dataloader, desc=f"Epoch {epoch+1}/{self.num_epochs}")
            losses, max_loss_counts = [], 5
            for batch in pbar:
                input_ids = batch["input_ids"].cuda()
                attention_mask = batch["attention_mask"].cuda()
                labels = batch["labels"].cuda()
                logits = self.model(input_ids, attention_mask=attention_mask).logits
                logits = logits[:, :-1, :].contiguous()
                labels = labels[:, 1:].contiguous()
                loss = self.criterion(logits.view(-1, logits.size(-1)), labels.view(-1))
                loss.backward()
                self.optimizer.step()
                losses.append(loss.item())
                if len(losses) > max_loss_counts:
                    losses.pop(0)
                mean_loss = sum(losses) / len(losses)
                pbar.set_postfix({"avg. loss": mean_loss})

    @cached_property
    def criterion(self):
        import torch.nn as nn
        return nn.CrossEntropyLoss(ignore_index=-100)

    @cached_property
    def dataloader(self):
        import json
        from torch.utils.data import DistributedSampler, DataLoader

        def collate_fn(batch: list[dict]) -> dict:
            input_ids = zero_pad_sequences(
                [x["input_ids"] for x in batch],
                side="right",
                value=self.tokenizer.pad_token_id,
                max_len=self.max_seq_len
            )
            attention_mask = zero_pad_sequences(
                [x["attention_mask"] for x in batch],
                side="right",
                value=0,
                max_len=self.max_seq_len
            )
            labels = zero_pad_sequences(
                [x["labels"] for x in batch],
                side="right",
                value=-100,
                max_len=self.max_seq_len
            )
            return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

        with open(self.data_path, "r") as f:
            dialogues = json.load(f)
        dataset = Dataset(dialogues, self.tokenizer)
        sampler = DistributedSampler(dataset, num_replicas=self.world_size, rank=self.rank)
        return DataLoader(dataset, batch_size=1, sampler=sampler, collate_fn=collate_fn)

    @cached_property
    def model(self):
        import torch.distributed.checkpoint as dcp
        from functools import partial
        from transformers import LlamaForCausalLM, AutoConfig
        from transformers.models.llama.modeling_llama import LlamaDecoderLayer
        from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType
        from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

        def init_with_meta(self, auto_wrap_policy):
            with torch.device("meta"):
                model = LlamaForCausalLM(
                    AutoConfig.from_pretrained(
                        self.model_path,
                        torch_dtype=torch.bfloat16,
                        device_map="cuda",
                        attn_implementation="flash_attention_2",
                    )
                )
            model.gradient_checkpointing_enable()
            model = model.to(torch.bfloat16)
            fsdp_model = FSDP(
                model,
                auto_wrap_policy=auto_wrap_policy,
                device_id=self.rank,
                param_init_fn=lambda x: x.to_empty(device=torch.cuda.current_device(), recurse=False)
            )
            with FSDP.state_dict_type(fsdp_model, StateDictType.SHARDED_STATE_DICT):
                state_dict = {"model": fsdp_model.state_dict()}
                dcp.load(
                    state_dict,
                    storage_reader=dcp.FileSystemReader(self.distcp_path),
                )
                fsdp_model.load_state_dict(state_dict["model"])
            fsdp_model = fsdp_model.to(torch.bfloat16)
            return fsdp_model

        def init_with_hf(self, auto_wrap_policy):
            model = LlamaForCausalLM.from_pretrained(
                self.model_path,
                torch_dtype=torch.bfloat16,
                device_map="cuda",
                attn_implementation="flash_attention_2",
            )
            model.gradient_checkpointing_enable()
            fsdp_model = FSDP(
                model,
                auto_wrap_policy=auto_wrap_policy,
                device_id=self.rank,
                param_init_fn=lambda x: x.to_empty(device=torch.cuda.current_device(), recurse=False)
            )
            return fsdp_model

        auto_wrap_policy = partial(
            transformer_auto_wrap_policy,
            transformer_layer_cls={LlamaDecoderLayer},
        )
        # NOTE: This will produce loss in range [3, 5]
        return init_with_meta(self, auto_wrap_policy)
        # NOTE: This will produce normal loss in range [0.4, 1]
        return init_with_hf(self, auto_wrap_policy)

    @cached_property
    def optimizer(self):
        from torch.optim import AdamW
        optimizer = AdamW(self.model.parameters(), lr=1e-5)
        return optimizer

    @cached_property
    def tokenizer(self):
        from transformers import AutoTokenizer
        tokenizer = AutoTokenizer.from_pretrained(self.model_path)
        return tokenizer

    @cached_property
    def rank(self):
        return dist.get_rank()

    @cached_property
    def world_size(self):
        return dist.get_world_size()


if __name__ == "__main__":
    dist.init_process_group()
    exp = Exp()
    torch.cuda.set_device(exp.rank)
    exp.run()
    dist.destroy_process_group()
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-95
Off-line CPU(s) list: 96-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchvision==0.19.0+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.0+cu121 pypi_0 pypi
[conda] torchaudio 2.4.0+cu121 pypi_0 pypi
[conda] torchvision 0.19.0+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,804,736,173 | PowerToys | Enhanced Functionality of "Always on Top" with Transparency | ### Description of the new feature / enhancement
Allow the user to apply opacity to a window that has the Always on Top feature applied to it.
Customizable settings allowing specific apps/windows to be affected differently or not at all.
### Scenario when this would be used?
### Enhanced Functionality of "Always on Top" with Transparency
#### **Improved Multitasking Efficiency**
- **Reference Without Obstruction:** Overlay a window (e.g., calculator or reference document) while interacting with content underneath, ideal for single-display setups.
- **Streamlined Workflows:** Designers, coders, and content creators can use transparent overlays to reference designs or follow guidelines without interrupting their workflow.
#### **Space Optimization for Single-Display Setups**
- **Eliminates Resizing Hassles:** Maintain workspace layouts without rearranging or resizing windows.
- **Facilitates Full-Screen Applications:** Overlay transparent windows on full-screen apps like video editing software, gaming dashboards, or virtual meeting platforms.
#### **Better Accessibility for Cross-Referencing**
- **Educational Use Cases:** Keep tutorials or training videos visible while working in a secondary window.
- **Live Monitoring:** Use overlays for live logs, error messages, or monitoring tools while maintaining primary workspace access.
#### **Enhanced Visual Collaboration**
- **Design Adjustments:** Overlay grids, palettes, or guides directly on projects without obstructing the main canvas.
- **Code and Debugging:** Debug efficiently with overlaid snippets or API documentation.
#### **Improved Workflow for Content Creators**
- **Streaming and Recording:** Keep chat, analytics, or timers visible without obstructing broadcast content.
- **Video Editing and Review:** Overlay subtitles or notes during playback for real-time adjustments.
#### **Broader Accessibility for Niche Use Cases**
- **Data Entry and Forms:** Reference documents seamlessly during repetitive tasks like manual data entry.
- **Medical and Technical Applications:** Overlay scans or schematics for cross-referencing while inputting notes or edits.
#### **Customization and User Experience**
- **User-Controlled Transparency Levels:** Adjustable transparency allows users to balance visibility with unobstructed workflows.
- **Consistency in UI/UX:** Aligns with modern design principles, offering flexibility for diverse workflows.
#### **Potential Limitations Addressed**
- **Avoids Screen Clutter:** Transparency reduces clutter from overlapping opaque windows, ensuring an organized display.
- **Minimizes Disruptions:** Keeps full-screen content visible and reduces window switching or resizing needs.
### Supporting information
_No response_ | Needs-Triage | medium | Critical |
2,804,740,502 | langchain | PERF: Langchain imports stat too many files causing 12 second import time | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Here are the stats for a single import:
```python
import cProfile
import pstats

cProfile.run("from langchain_core.documents import Document", "langchainImport")
p = pstats.Stats("langchainImport")
p.sort_stats("tottime").print_stats(5)
```
output:
```
ncalls tottime percall cumtime percall filename:lineno(function)
2227 1.461 0.001 1.461 0.001 {built-in method posix.stat}
520 0.624 0.001 0.624 0.001 {built-in method io.open_code}
520 0.234 0.000 0.234 0.000 {method 'read' of '_io.BufferedReader' objects}
549 0.183 0.000 0.183 0.000 {method '__exit__' of '_io._IOBase' objects}
37/34 0.115 0.003 0.124 0.004 {built-in method _imp.create_dynamic}
```
As you can see, this import calls `posix.stat` 2227 times, causing the import to take 1.4 seconds to complete.
Other langchain imports have similarly egregious results revolving around `posix.stat`
What's causing all these file IO operations? Is there any way for them to be deferred or removed to improve performance?
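As a caller-side mitigation (my own sketch, not a langchain fix), the stdlib `importlib.util.LazyLoader` recipe defers a module's execution — and all of its import-time file stat-ing — until the first attribute access:

```python
import importlib.util
import sys


def lazy_import(name: str):
    """Register `name` in sys.modules, but defer executing the module
    (and its import-time file I/O) until the first attribute access."""
    spec = importlib.util.find_spec(name)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)  # returns immediately; body runs lazily
    return module


# The import cost is paid on first use, not at program start:
json_mod = lazy_import("json")
print(json_mod.dumps({"a": 1}))  # → {"a": 1}
```

This only moves the cost to first use; it doesn't reduce langchain's own eager submodule imports.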
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm expecting imports to take under 100ms
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.3.0
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.5
> langchainhub: 0.1.21
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.60.0
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> pytest: 8.3.4
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: 13.9.4
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2
> zstandard: 0.23.0
``` | โฑญ: core | low | Critical |
2,804,768,206 | vscode | very many error |
Type: <b>Bug</b>
very many error
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-7300U CPU @ 2.60GHz (4 x 2712)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.88GB (9.27GB free)|
|Process Argv|--crash-reporter-id 4391a7a7-6178-4b86-b201-49cde4eb170e|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (4)</summary>
Extension|Author (truncated)|Version
---|---|---
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
remote-wsl|ms-|0.88.5
powershell|ms-|2025.0.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyonecf:30548226
vscrp:30673768
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
cf1a2727:31215809
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,804,775,547 | pytorch | torch.logit works incorrectly when input < eps after torch.compile | ### ๐ Describe the bug
According to the doc https://pytorch.org/docs/stable/special.html#torch.special.logit, when input < eps, the actual computation is: `ln(eps/(1-eps))`. But this is not what `torch.compile` (with inductor backend) does.
```python
import torch
input = torch.tensor(0.3, dtype=torch.float64)
eps = torch.tensor(0.9, dtype=torch.float64)
compiled = torch.compile(torch.logit)
print(f"compiled: {compiled(input, eps)}")
print(f"expected: {torch.log(eps / (1 - eps))}")
```
```
compiled: -2.1972245773362196
expected: 2.1972245773362196
```
When using `aot_eager` to compile `torch.logit`, the compiled API's result is expected. So I think the issue lies in the inductor backend.
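A scalar reference of the documented semantics (my own sketch, based only on the doc quote above, not PyTorch code) reproduces the eager result:

```python
import math


def logit_ref(x: float, eps: float) -> float:
    # Documented behavior: clamp x into [eps, 1 - eps] before the log,
    # so for x < eps the result is ln(eps / (1 - eps)).
    if x < eps:
        x = eps
    elif x > 1 - eps:
        x = 1 - eps
    return math.log(x / (1 - x))


print(logit_ref(0.3, 0.9))  # 2.1972245773362196 — matches eager, not inductor
```

The inductor value, -2.1972245773362196, equals `ln((1 - eps) / eps)`, which suggests the compiled kernel clamps to the wrong bound in this degenerate `eps > 0.5` case.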
### Error logs
```
compiled: -2.1972245773362196
expected: 2.1972245773362196
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @SherlockNoMad @zou3519 @bdhirsh @yf225 | triaged,bug,oncall: pt2,module: decompositions,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,804,790,179 | vscode | problem in vs code |
Type: <b>Performance Issue</b>
I am unable to select a folder from the system.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-11320H @ 3.20GHz (8 x 3187)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.79GB (1.05GB free)|
|Process Argv|C:\\Users\\LENOVO\\AppData\\Local\\Packages\\Microsoft.WindowsTerminal_8wekyb3d8bbwe\\LocalState\\settings.json --crash-reporter-id 0173ccc9-12b7-4d77-9e88-3a123b2ff560|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 159 19484 code main
0 124 4556 shared-process
0 90 5008 fileWatcher [1]
0 181 7632 gpu-process
0 202 8496 window [1] (Visual Studio Code)
0 157 9656 extensionHost [1]
0 89 8456 "C:\Users\LENOVO\AppData\Local\Programs\Microsoft VS Code\Code.exe" "c:\Users\LENOVO\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\json-language-features\server\dist\node\jsonServerMain" --node-ipc --clientProcessId=9656
0 35 15940 crashpad-handler
0 50 17384 utility-network-service
```
</details>
<details>
<summary>Workspace Info</summary>
```
;
```
</details>
<details><summary>Extensions (22)</summary>
Extension|Author (truncated)|Version
---|---|---
emojisense|bie|0.10.0
prettier-vscode|esb|11.0.0
code-runner|for|0.12.2
codespaces|Git|1.17.3
vscode-pull-request-github|Git|0.102.0
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
java|red|1.39.0
LiveServer|rit|5.7.9
cmake|twx|0.0.17
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyonecf:30548226
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed,triage-needed | low | Critical |
2,804,799,342 | kubernetes | prober_probe_total stability | The `prober_probe_total` has been marked as Alpha since 3a5091779523a02278ad1ea334df7119ab4b2e5f (part of 1.16). Is it intended to be promoted or removed? | sig/instrumentation,triage/accepted | low | Minor |
2,804,806,333 | kubernetes | Scheduler topologySpreadConstraint to account for device plugin requests | ### What happened?
In my cluster, I have two separate nodegroups:
- "regular" set of worker nodes that span 3 Availability Zones (`topology.kubernetes.io/zone`)
- gpu nodes with nvidia device plugin (Allocatable: `nvidia.com/gpu`) that span 2 Availability Zones
I have a Deployment with the following PodSpec:
```yaml
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: main
          resources:
            limits:
              cpu: 3500m
              memory: 16G
              nvidia.com/gpu: "1"
            requests:
              cpu: 200m
              memory: 5G
              nvidia.com/gpu: "1"
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: foo
              app.kubernetes.io/instance: bar
          matchLabelKeys:
            - pod-template-hash
          maxSkew: 1
          nodeTaintsPolicy: Honor
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
```
Given that I have **not** defined `nodeSelector` or `affinity.nodeAffinity`, the scheduler takes all available nodes into account when calculating the minimum topology domains. This leads to one of the pods failing to schedule.
```
status:
  availableReplicas: 2
  conditions:
    - lastTransitionTime: "2025-01-22T13:56:55Z"
      lastUpdateTime: "2025-01-22T14:22:01Z"
      message: ReplicaSet "REDACATED-dfc668c69" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    - lastTransitionTime: "2025-01-22T15:17:02Z"
      lastUpdateTime: "2025-01-22T15:17:02Z"
      message: Deployment does not have minimum availability.
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
  observedGeneration: 55
  readyReplicas: 2
  replicas: 3
  unavailableReplicas: 1
  updatedReplicas: 3
```
This behavior matches the [topologySpreadConstraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) documentation
> nodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are:
> - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
### What did you expect to happen?
Given that the workload is requesting `nvidia.com/gpu: "1"`, the scheduler should only consider nodes that have allocatable `nvidia.com/gpu` when calculating the topology domains.
### How can we reproduce it (as minimally and precisely as possible)?
Deployment
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-bar
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: foo
      app.kubernetes.io/instance: bar
  template:
    metadata:
      labels:
        app.kubernetes.io/name: foo
        app.kubernetes.io/instance: bar
    spec:
      containers:
        - name: main
          image: nginx:1.14.2
          resources:
            limits:
              cpu: 1
              memory: 2G
              nvidia.com/gpu: "1"
            requests:
              cpu: 100m
              memory: 1G
              nvidia.com/gpu: "1"
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: foo
              app.kubernetes.io/instance: bar
          matchLabelKeys:
            - pod-template-hash
          maxSkew: 1
          nodeTaintsPolicy: Honor
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
```
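A common mitigation (my suggestion, not part of the report): pin the pods to the GPU nodegroup with `nodeAffinity`, so that the default `nodeAffinityPolicy: Honor` excludes non-GPU zones from the domain calculation. The node label below is hypothetical — substitute whatever label marks your GPU nodes:

```yaml
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: accelerator          # hypothetical GPU-nodegroup label
                    operator: In
                    values:
                      - nvidia
```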
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
Client Version: v1.30.5
Server Version: v1.30.6
```
</details>
### Cloud provider
<details>
aws
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux ip-10-72-XXX-XXX.REDACTED 6.8.0-1021-aws #23~22.04.1-Ubuntu SMP Tue Dec 10 16:50:46 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
CONTAINERD_VERSION=1.7.24
CRITOOLS_VERSION=1.30.1
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
aws vpc cni v1.19.0
CNI_VERSION=1.6.2
</details>
| sig/scheduling,kind/feature,needs-triage | low | Critical |
2,804,815,817 | go | fmt: is %% a literal percent, or a degenerate conversion? | During the review of the printf format-string parser used by gopls (extracted from the one in vet's printf checker), the question arose as to whether %% is nothing more than a literal percent sign, dealt with by the printf scanner, or a degenerate conversion supporting width, flags, precision, and so on.
Empirically, the surprising answer is that it is very much a conversion. Observe that it accepts a width, even indirectly, and updates the argument index accordingly (though it doesn't actually left-pad the output):
https://go.dev/play/p/Z4s7oV2HIM7
```go
var unused any
width := 456
v := 789
fmt.Printf("%[2]*[4]% %v", unused, width, unused, v) // "% 789"
```
(Either way this seems wrong: the argument at index 4 should either be rejected or consumed by the %%; instead it becomes the operand of %v.)
Aside: I checked C for precedent, and found that it too empirically treats %% as a conversion:
```
$ cat a.c
#include <stdio.h>
int main() { printf("%10.2%%d\n", 4); }
$ gcc a.c && ./a.out
%4
```
Notice the left-padding to ten spaces. However, this program has undefined behavior according to [POSIX.1](https://pubs.opengroup.org/onlinepubs/009695399/functions/printf.html):
> Print a '%' character; no argument is converted. The complete conversion specification shall be %%.
What is the intended behavior for Go?
| NeedsInvestigation | low | Minor |
2,804,818,621 | rust | Tracking Issue for APC #316: In-place case change methods for String | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(string_make_uplowercase)]`
This is a tracking issue for [APC #316](https://github.com/rust-lang/libs-team/issues/316)
This APC proposes that a new API is added to String to change cases and do so efficiently by consuming self and reusing the buffer, not allocating in most cases.
The exact implementation remains to be discussed, but the idea is that, where possible, the case change happens in place. When that isn't possible, an auxiliary deque can be used to store bytes temporarily.
### Public API
This would add the following methods (names to be determined)
```rust
impl String {
    fn into_uppercase(&mut self);
    fn into_lowercase(&mut self);
}
```
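For contrast, the stable route today always allocates a fresh `String` (a sketch of the status quo; the method names above are the proposal's and are not yet in std):

```rust
// Today: `str::to_uppercase` walks the string and allocates a new String,
// even when the result would fit in (or equal) the original buffer.
fn uppercased(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    println!("{}", uppercased(String::from("héllo wörld"))); // HÉLLO WÖRLD
}
```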
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [ ] Implementation: #135888
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"krtab"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | T-libs-api,C-tracking-issue | low | Minor |
2,804,820,295 | vscode | Extend VSIX options for downloading other versions | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
You can since 1.96.4 download the VSIX from a plugin by RMB and then select Download VSIX or the pre-released version.
For example from Python.
Extend this menu to download also older VSIX versions
And then be able to select a specific older version.
Reason for asking is that our Linux environment is very strict and closed with no internet connection.
VSIX are allowed to install.
But our CODE version is not always the latest one.
So with 1.96.4 I would like to download the 1.89.1 version of the VSIX from Python.
This is currently not possible
So something like this:
 | feature-request,extensions | low | Minor |
2,804,853,646 | flutter | Allow depending on `ModalRoute.popDispositionOf(context)` | ### Use case
I want to implement a custom widget that can close the current route using a gesture. One of my requirements is that the gesture is not accepted, if the current route can't be popped using a gesture (aka its `popDisposition` is `.doNotPop`).
Unfortunately, I found no way to keep my widget updated, since depending on `ModalRoute.of(context)` doesn't trigger a rebuild when the `popDisposition` changes, and its Notifications bubble upwards, so the route's contents can't receive them.
### Proposal
Add `popDisposition` as an aspect to the respective `InheritedModel` so that we can depend on it reliably. | c: new feature,framework,f: routes,c: proposal,team-framework | low | Minor |
2,804,854,006 | angular | Signals with custom change methods | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
By default a signal comes equipped with generic `.set` / `.update` methods. However, many signals used in practice represent a piece of state that can / should only change in a particular way - as an example, a counter can be incremented, decremented and reset. In other words, there are many cases where we want to _enforce constraints_ on the possible state changes.
Ideally we could collocate _state_ and _change methods_ in one cohesive unit. We do recognize the need in this area and see a couple of patterns used in practice. Opening this issue to capture the patterns we are aware of, judge interest, solicit more ideas etc.
### signal creation method with custom mutation methods on a getter
Example from https://github.com/angular/angular/issues/59615:
```typescript
export function createCounter(initialValue: number = 0): WritableSignal<number> & {increase: () => void; decrease: () => void; reset: () => void} {
  const c = signal(initialValue);
  return Object.assign(c, {
    increase: () => {
      c.update(x => x + 1);
    },
    decrease: () => {
      c.update(x => x - 1);
    },
    reset: () => {
      c.set(initialValue);
    }
  });
}

// usage
@Component()
export class SomeCmp {
  c = createCounter();

  onClick() {
    this.c.increase();
    this.c.decrease();
    this.c.reset();
    this.c.set(100);
    this.c.update(x => x + 2);
  }
}
```
Pros:
* feels natural as have a similar API shape as a regular signal
Cons:
* somewhat awkward to write
* doesn't account for hiding / removing the default `set` / `update` methods
### signal wrapped in a class with custom mutation methods
Example:
```typescript
class CounterState {
  private _state = signal(0);
  public state = this._state.asReadonly();

  increment() {
    this._state.update(v => v + 1);
  }

  decrement() {
    this._state.update(v => v - 1);
  }

  reset() {
    this._state.set(0);
  }
}

// usage
@Component()
export class SomeCmp {
  c = new CounterState();

  onClick() {
    this.c.increment();
    this.c.decrement();
    this.c.reset();

    // default mutation methods removed from the public API
    // this.c.set(100);
    // this.c.update(x => x + 2);
  }
}
```
Pros:
* fairly straightforward
* can control if default set / update mutations are exposed
Cons:
* somewhat verbose to write
* requires additional property access (`c.state()` instead of `c()`) to get to the value
* doesn't account for hiding / removing the default `set` / `update` methods | area: core,canonical,core: reactivity,cross-cutting: signals | medium | Major |
2,804,858,304 | vscode | Automatically zoom the window level |
Type: <b>Bug</b>
The window automatically zooms to about 40-60% when I open a new instance of VS Code.
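One thing worth checking (my assumption — the report doesn't say): a persisted zoom level in `settings.json` is re-applied to every new window, which would look like automatic zooming:

```jsonc
// settings.json — a negative window.zoomLevel shrinks every new window
{
  "window.zoomLevel": -3
}
```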
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | feature-request,zoom | low | Critical |
2,804,874,139 | material-ui | [Chip] Variant typescript error | ### Steps to reproduce
```jsx
<Chip variant="outlined" color="warning" />
```
### Current behavior
No overload matches this call.
Overload 1 of 2, '(props: { component: ElementType<any, keyof IntrinsicElements>; } & ChipOwnProps & CommonProps & Omit<any, "className" | ... 15 more ... | "variant">): Element | null', gave the following error.
Type '"outlined"' is not assignable to type '"positive" | "negative" | undefined'.
Overload 2 of 2, '(props: DefaultComponentProps<ChipTypeMap<{}, "div">>): Element | null', gave the following error.
Type '"outlined"' is not assignable to type '"positive" | "negative" | undefined'.
### Expected behavior
no error
### Context
_No response_
### Your environment
System:
OS: Windows 10 10.0.19045
Binaries:
Node: 22.11.0 - C:\Program Files\nodejs\node.EXE
npm: 10.9.0 - C:\Program Files\nodejs\npm.CMD
pnpm: Not Found
Browsers:
Chrome: Not Found
Edge: Chromium (127.0.2651.74)
npmPackages:
@emotion/react: ^11.13.5 => 11.13.5
@emotion/styled: ^11.13.5 => 11.13.5
@mui/base: 5.0.0-beta.40
@mui/core-downloads-tracker: 5.16.14
@mui/icons-material: ^5.16.14 => 5.16.14
@mui/lab: 5.0.0-alpha.173
@mui/material: ^5.16.14 => 5.16.14
@mui/private-theming: 5.16.14
@mui/styled-engine: 5.16.14
@mui/system: 5.16.14
@mui/types: 7.2.19
@mui/utils: 5.16.14
@mui/x-date-pickers: 5.0.20
@types/react: ^18.3.12 => 18.3.12
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^4.9.5 => 4.9.5
**Search keywords**: chip | component: chip,status: waiting for author,typescript | low | Critical |
2,804,885,786 | electron | useSystemPicker when toggled on/off makes getDisplayMedia hang/crash | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.3.1
### What operating system(s) are you using?
macOS
### Operating System Version
15.2
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
N/A
### Expected Behavior
We should be able to change the `setDisplayMediaRequestHandler` as many times as we wish, with different options, without it breaking `getDisplayMedia`.
Sometimes, but not always, it triggers a SIGTRAP:
```
*** Terminating app due to uncaught exception 'NSGenericException', reason: '*** Collection <__NSArrayM: 0x10c051e0f00> was mutated while being enumerated.'
*** First throw call stack:
(
0 CoreFoundation 0x0000000190062e80 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x000000018fb4acd8 objc_exception_throw + 88
2 CoreFoundation 0x0000000190109c34 -[__NSSingleObjectEnumerator init] + 0
3 ScreenCaptureKit 0x00000001f31da658 -[SCContentSharingPicker contentPickerDidSelectFilter:forStream:] + 436
4 ReplayKit 0x00000001da3c7208 -[RPScreenRecorder contentPickerDidSelectFilter:forStream:] + 84
5 ReplayKit 0x00000001da3d5944 -[RPDaemonProxy contentPickerDidSelectFilter:forStream:] + 224
6 CoreFoundation 0x000000018ffd0f94 __invoking___ + 148
7 CoreFoundation 0x000000018ffd0e0c -[NSInvocation invoke] + 428
8 ReplayKit 0x00000001da3d0ea8 -[RPDaemonProxy connection:handleInvocation:isReply:] + 316
9 Foundation 0x0000000191b685a4 -[NSXPCConnection _decodeAndInvokeMessageWithEvent:reply:flags:] + 1108
10 Foundation 0x0000000191b69e10 message_handler_message + 88
11 Foundation 0x0000000191184840 message_handler + 152
12 libxpc.dylib 0x000000018fc173ac _xpc_connection_call_event_handler + 144
13 libxpc.dylib 0x000000018fc15b00 _xpc_connection_mach_event + 1120
14 libdispatch.dylib 0x000000018fd55674 _dispatch_client_callout4 + 20
15 libdispatch.dylib 0x000000018fd71c88 _dispatch_mach_msg_invoke + 464
16 libdispatch.dylib 0x000000018fd5ca38 _dispatch_lane_serial_drain + 352
17 libdispatch.dylib 0x000000018fd729dc _dispatch_mach_invoke + 456
18 libdispatch.dylib 0x000000018fd5ca38 _dispatch_lane_serial_drain + 352
19 libdispatch.dylib 0x000000018fd5d764 _dispatch_lane_invoke + 432
20 libdispatch.dylib 0x000000018fd689a0 _dispatch_root_queue_drain_deferred_wlh + 288
21 libdispatch.dylib 0x000000018fd681ec _dispatch_workloop_worker_thread + 540
22 libsystem_pthread.dylib 0x000000018ff043d8 _pthread_wqthread + 288
23 libsystem_pthread.dylib 0x000000018ff030f0 start_wqthread + 8
)
```
### Actual Behavior
After toggling `useSystemPicker` on/off/on again, all calls to `getDisplayMedia` using the system picker hang indefinitely. It is important that this feature works, as it's used for loopback audio until Chrome eventually implements it.
### Testcase Gist URL
_No response_
### Additional Information
Toggling...
```
session.defaultSession.setDisplayMediaRequestHandler(() => {}, {useSystemPicker: true});
getDisplayMedia()
^ -- end it
session.defaultSession.setDisplayMediaRequestHandler(() => {}, {useSystemPicker: false});
getDisplayMedia()
^ -- end it
session.defaultSession.setDisplayMediaRequestHandler(() => {}, {useSystemPicker: true});
getDisplayMedia()
^ -- hangs or crashes
``` | platform/macOS,bug :beetle:,33-x-y,34-x-y,35-x-y | low | Critical |
2,804,891,282 | rust | Tracking Issue for `VecDeque::pop_front_if` & `VecDeque::pop_back_if` | Feature gate: `#![feature(vec_deque_pop_if)]`
Similar to #122741, but this time with `VecDeque` instead of `Vec`.
### Public API
```rust
impl<T> VecDeque<T> {
pub fn pop_front_if(&mut self, predicate: impl FnOnce(&mut T) -> bool) -> Option<T>;
pub fn pop_back_if(&mut self, predicate: impl FnOnce(&mut T) -> bool) -> Option<T>;
}
```
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/208
- [ ] Implementation: #135890
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- ~`Vec::pop_if` uses an explicit generic parameter for the predicate. But that might change (https://github.com/rust-lang/rust/issues/122741#issuecomment-2607734786). If it does not, probably change these methods instead to be consistent.~
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html | T-libs-api,C-tracking-issue | low | Minor |
2,804,912,082 | langchain | Using LangChain's ContextualCompressionRetriever with Milvus and Voyage AI Models | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
vector_store = Milvus(
embedding_function=embedding,
collection_name="test",
connection_args={
"uri": app.config['MILVUS_URI'],
"db_name": "test",
"token": app.config["MILVUS_TOKEN"]
},
enable_dynamic_field=True
)
retriever = vector_store.as_retriever(
search_type="similarity",
search_kwargs={"k": 20, "param": {"ef": 50}},
consistency_level="Strong"
)
compressor = VoyageAIRerank(model="rerank-2", top_k=12)
compression_retriever = CustomContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm currently working with LangChain's ContextualCompressionRetriever and leveraging the following setup:
1. Base Retriever: Milvus Database integrated with the Voyage AI Large 2 Instruct Model for embeddings.
2. Base Compressor: Voyage AI Re-Ranker 2 Model for document filtering and ranking.
This setup successfully retrieves relevant documents, and the results have been promising. However, I now need to analyze the token count for the following:
- Re-Ranker model token usage: How many tokens are processed during the re-ranking step?
- Query embedding token usage: How many tokens are used to generate the query embedding?
Questions:
1. Is there a straightforward way to track and calculate the token count in this setup?
2. Are there any specific LangChain utilities or Voyage model APIs that provide this information?
If anyone has insights or has faced a similar challenge, I'd greatly appreciate your guidance. Thank you!
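To make the question concrete, here is the kind of accounting I'm after. This is only a sketch: `count_tokens` stands in for whatever tokenizer the provider's SDK exposes, and the (query + document) pair rule for the re-ranker is my assumption, not a documented VoyageAI behavior:

```python
# Sketch only: `count_tokens` is a placeholder for a real tokenizer from the
# provider SDK; the (query + document) pair rule for re-rankers is an assumption.
def count_stage_tokens(query, documents, count_tokens):
    """Return (embedding_tokens, rerank_tokens) for one retrieval round."""
    embedding_tokens = count_tokens(query)
    # Re-rankers typically score (query, document) pairs, so count the query
    # once per document alongside each document.
    rerank_tokens = sum(count_tokens(query) + count_tokens(doc) for doc in documents)
    return embedding_tokens, rerank_tokens

# Usage with a naive whitespace "tokenizer" as a placeholder:
naive = lambda text: len(text.split())
emb, rer = count_stage_tokens("what is RAG", ["doc one", "a second doc here"], naive)
print(emb, rer)  # 3 12
```

Swapping the naive tokenizer for the provider's own counting function would give the real numbers, if such a function exists.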
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langsmith: 0.2.10
> langchain_aws: 0.2.10
> langchain_milvus: 0.1.7
> langchain_text_splitters: 0.3.5
> langchain_voyageai: 0.1.4
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> boto3: 1.35.99
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.4
> pymilvus: 2.5.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> typing-extensions: 4.12.2
> voyageai: 0.3.2
> zstandard: Installed. No version info available. | Ɱ: vector store | low | Critical |
2,804,915,777 | TypeScript | Design Meeting Notes, 1/21/2025 | # Flag for Erasable Syntax Only
* Flag in light of strippable type support for TypeScript in Node.js.
* `--erasableSyntaxOnly`
* What is it?
* No enums, namespaces, parameter properties.
* Allow all these in ambient contexts (`declare` keywords).
* Can we get it in in the next week?
* Probably.
* `--verbatimModuleSyntax`?
* Compilers can do this independently?
* Module elision is not in line with what Node itself supports.
* Also, purists might not really be into module elision in the first place.
* Unclear
* What about uninstantiated namespaces?
* They should be allowed...but Node.js disallows them.
* ```ts
      // uninstantiated - should be allowed, currently is NOT in Node.js ⚠️
export namespace foo1 {
export interface I {}
}
// ambient - should be allowed, currently is in Node.js
export declare namespace foo2 {
export interface I {}
}
// instantiated - should *not* be allowed, and is currently disallowed in Node.js
export namespace foo3 {
export interface I {}
1 + 2;
}
```
* Arguable that being forced to write `declare` is a bit of a footgun to force everyone to write: https://github.com/microsoft/TypeScript/issues/59601#issuecomment-2296958990
* Nice that `declare` makes it clear there's no JS
* On the other hand, `declare` implies something is a little bit odd about the surrounding environment.
* We think we'll allow uninstantiated namespaces, may restrict more later if we really made a mistake. New usage of this code is fairly low regardless. | Design Notes | low | Minor |
2,804,920,178 | react | Bug: `Suspense` components rendered by `renderToReadableStream()` cause render abort when served with Bun | React version: `19.0.0`
## Steps To Reproduce
1. Install [Bun](https://bun.sh/) and [Node](https://nodejs.org/en/download)
2. Clone my [reproduction repo](https://github.com/jmuzina/react-repro)
3. Install dependencies with Bun: `bun install`
4. Run React server with Node: `bun run ssr:node`. Open the [app](http://localhost:3000) and see that the suspended Button component resolves after 2 seconds and no errors are thrown in the serverside or clientside consoles.
5. Run React server with Bun; `bun run ssr:bun`. Open the [app](http://localhost:3000) and see that the Button component remains suspended after 2 seconds, is resolved in the serverside console but not the clientside console, and the following error is thrown in the server console:
```
8169 | try {
8170 | var abortableTasks = request.abortableTasks;
8171 | if (0 < abortableTasks.size) {
8172 | var error =
8173 | void 0 === reason
8174 | ? Error("The render was aborted by the server without a reason.")
^
error: The render was aborted by the server without a reason.
at abort (/home/jmuzina/software/work/canonical/repos/react-repro/node_modules/react-dom/cjs/react-dom-server.bun.development.js:8174:13)
at cancel (/home/jmuzina/software/work/canonical/repos/react-repro/node_modules/react-dom/cjs/react-dom-server.bun.development.js:8262:17)
at close (3:17)
```
6. Cause an app re-render by clicking the "Increment" button. See that the lazy button is resolved after 2 seconds (the "click me" button is rendered and the clientside log reports the lazy component has been resolved).
Link to code example: https://github.com/jmuzina/react-repro
## The current behavior
- Node can stream responses compatible with React.Suspense by using `react-dom.renderToPipeableStream()`.
- Bun can stream responses with `react-dom.renderToReadableStream()`, but any Suspense boundaries are not resolved until after a re-render, and the first render of a Suspense boundary is aborted on the serverside with no reason.
## The expected behavior
Node and Bun can both stream responses compatible with React.Suspense.
| Status: Unconfirmed | medium | Critical |
2,804,963,839 | pytorch | flaky test issues should close themselves if the test doesn't exist anymore | I've been going through the pt2 flaky test issues and some of the tests look like they've been deleted. It would be nice for this to be automated.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @clee2000 @wdvr | module: ci,triaged,module: flaky-tests,module: infra | low | Minor |
2,804,967,430 | pytorch | Windows Pytorch compiler crash some version of cl.exe. Fix provided | ### ๐ Describe the bug
Hi.
In _cpp_builder.py / function 'check_compiler_exist_windows'_ you check for the existence of the cl C++ compiler by calling it with a '/help' option.
However for some versions of cl.exe, the header of the help message contains some invisible invalid utf8 char (here a single xff):
_Compilateur d\'optimisation Microsoft (R) C/C++ version\xff19.35.32216.1_
This causes the following crash:
```torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 54: invalid start byte'
```
The solution would be to remove the decode line; since this is only an existence test, you don't need to process the help message:
```
@functools.lru_cache(None)
def check_compiler_exist_windows(compiler: str) -> None:
"""
Check if compiler is ready, in case end user not activate MSVC environment.
"""
try:
output_msg = (
subprocess.check_output([compiler, "/help"], stderr=subprocess.STDOUT)
.strip()
#.decode(*SUBPROCESS_DECODE_ARGS)
)
```
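If the help text ever does need to be decoded, a lenient decode would also sidestep the crash. A minimal illustration of the failure mode (this reuses the banner bytes from the report above; it is not the proposed patch):

```python
# The cl.exe banner from the report, as raw bytes with the stray \xff:
banner = b"Compilateur d'optimisation Microsoft (R) C/C++ version\xff19.35.32216.1"

try:
    banner.decode("utf-8")  # strict decode, as cpp_builder effectively does today
except UnicodeDecodeError as exc:
    print("strict decode fails:", exc.reason)  # invalid start byte

# Lenient decode never raises; the bad byte becomes U+FFFD.
text = banner.decode("utf-8", errors="replace")
```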
### Versions
not needed
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu | module: windows,triaged,oncall: pt2 | low | Critical |
2,804,967,554 | vscode | Code Not complete properly |
Type: <b>Performance Issue</b>
Response times are very slow, and if a script contains a lot of code, the AI stops working. Please fix it.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 5 3500X 6-Core Processor (6 x 3593)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.95GB (3.03GB free)|
|Process Argv|C:\\Users\\Asfack Nabil\\Desktop\\DDFW\\Server-Data\\resources\\[new]\\GZ-ChillCafe --crash-reporter-id 058810eb-24d8-45d7-89f0-d855b6c6c093|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
2 122 11732 code main
0 288 9828 window [1] (cl_chillcafe.lua - GZ-ChillCafe - Visual Studio Code)
0 107 11772 gpu-process
0 103 11912 ptyHost
0 78 13540 fileWatcher [1]
0 28 14252 crashpad-handler
0 335 15532 extensionHost [1]
0 95 17612 shared-process
0 43 20316 utility-network-service
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (cl_chillcafe.lua - GZ-ChillCafe - Visual Studio Code)
| Folder (GZ-ChillCafe): 31 files
| File types: png(15) lua(11) md(1)
| Conf files:;
```
</details>
<details><summary>Extensions (12)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-html-css|ecm|2.0.12
codespaces|Git|1.17.3
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
discord-vscode|icr|5.8.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
LiveServer|rit|5.7.9
code-snapshot|rob|0.2.1
cmake|twx|0.0.17
errorlens|use|3.22.0
(2 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
3d9ag387:31215808
6074i472:31201624
dwoutputs:31217127
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,804,977,080 | ollama | Ollama stops giving outputs after a few runs | ### What is the issue?
I've been trying to run "smallthinker" and "llama3.2:1b", but after around 30 runs, the models stop giving outputs. However, ollama is running with 100% CPU in the background on my Mac.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7 | bug | low | Minor |
2,804,980,114 | go | maps: make Equal O(1) for pointer-identical maps | ### Go version
go1.23
### Output of `go env` in your module/workspace:
```shell
GOARCH=amd64
GOOS=linux
```
### What did you do?
Ran this benchmark:
```
var m = func() map[int]string {
m := make(map[int]string)
for i := range 1000000 {
m[i] = strconv.Itoa(i)
}
return m
}()
func Benchmark(b *testing.B) {
for range b.N {
maps.Equal(m, m)
}
}
```
### What did you see happen?
```
Benchmark-32 75 15531818 ns/op
```
### What did you expect to see?
```
Benchmark-32 1000000000 0.1882 ns/op
```
As a trivial test, Equal could use unsafe to check whether the map pointers are identical; in that case, the maps are equal.
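As an analogy only (Python, not Go): CPython's dict comparison already takes a per-element identity fast path, which shows exactly how identity-based equality and element-wise equality diverge once NaN values are involved:

```python
# Python analogy for the proposed fast path: CPython's dict equality uses an
# identity shortcut per element, which is exactly what makes NaN values tricky.
nan = float("nan")
d = {1: nan}

assert nan != nan              # NaN breaks reflexivity of ==
assert d == d                  # ...yet the identity shortcut says equal
assert d == dict(d)            # a shallow copy shares the *same* NaN object
assert d != {1: float("nan")}  # a distinct NaN object compares unequal
```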
Unfortunately, we need to skip this optimization if the key and value types could possibly contain floating point values, which violate the reflexive property of equality for NaNs. | Performance,NeedsInvestigation,LibraryProposal | low | Major |
2,805,001,155 | next.js | Error when using namespace/compound components in React Server Component | ### Link to the code that reproduces this issue
https://github.com/DanielGiljam/nextjs-rsc-jsx-issue
### To Reproduce
1. Start the application in development (`next dev`)
2. See the error
```
React.jsx: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
```
### Current vs. Expected behavior
Current behavior: you get the error described above.
Expected behavior: namespace/compound components would work like they do in client components.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 23.3.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.14.2
Relevant Packages:
next: 15.2.0-canary.19 // Latest available version is detected (15.2.0-canary.19).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Module Resolution, Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | linear: next,Module Resolution | low | Critical |
2,805,002,807 | godot | Typed Dictionary cannot use subclass of type in key with Invalid index type for a base type | ### Tested versions
v4.4.beta1.official [d33da79d3]
### System information
Godot v4.4.beta1 - Windows 10 (build 19045) - Multi-window, 3 monitors - Vulkan (Forward+) - dedicated GeForce GTX 1050 - Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 threads)
### Issue description
A typed Dictionary with index of type X cannot use a variable of type Y (that extends X) in keys.
### Steps to reproduce
1) Create a base class
2) Create another class that extends first class
3) Create a typed dictionary. Use the base class as the key type
4) Create a variable of the subtype
5) attempt to use that variable as a key in dictionary
Expected: can use any subclass of index type as a key
What happens: Parser Error: Invalid index type "(subclass name)" for a base of type "Dictionary[(base type), String]"
### Minimal reproduction project (MRP)
[mpr-typed-dictionary-subclass-index.zip](https://github.com/user-attachments/files/18509677/mpr-typed-dictionary-subclass-index.zip) | discussion,topic:gdscript,documentation | low | Critical |
2,805,029,010 | pytorch | DISABLED test_view_of_slice_cuda (__main__.TestUnbackedSymintsCUDA) | Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_unbacked_symints.py%3A%3ATestUnbackedSymintsCUDA%3A%3Atest_view_of_slice_cuda%22%5D)).
This seems to be an mi300-specific failure.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: rocm,triaged,skipped | low | Critical |
2,805,039,550 | PowerToys | Stop to work | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
My laptop just stopped connecting to my computer. I tried restarting, uninstalling, and reinstalling, but it still does not work.
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,805,042,208 | opencv | 16-bit APNG Reading and Writing Problems | ### System Information
OpenCV version: 4.12.0-dev
### Detailed description
The `imreadanimation` function currently fails to properly read specific 16-bit APNG files, including those with 2 channels (grayscale + alpha). Sample images for testing can be found in the APNGKit repository:
[033.png](https://github.com/onevcat/APNGKit/blob/master/Tests/APNGKitTests/Resources/SpecTesting/033.png)
[034.png](https://github.com/onevcat/APNGKit/blob/master/Tests/APNGKitTests/Resources/SpecTesting/034.png)
[035.png](https://github.com/onevcat/APNGKit/blob/master/Tests/APNGKitTests/Resources/SpecTesting/035.png)
A possible solution could be to read 2-channel APNG images as 4-channel images, similar to how imread (as far as I know) processes 2-channel standard PNGs as 4-channel images.
Additionally, the `imwriteanimation` function currently converts 16-bit frames to 8-bit before saving. Would it be feasible to support saving 16-bit APNGs if there is a real need for it? (I did not find any 16-bit APNG files except for the mentioned test files.)
I am planning to work on this issue, and any feedback would be helpful.
### Steps to reproduce
```
import cv2 as cv
import numpy as np
animation = cv.Animation()
filename = "034.png"
success, animation = cv.imreadanimation(filename)
cv.imshow("Frame", animation.frames[0])
cv.waitKey()
```
Also take a look at the related [test code](https://github.com/opencv/opencv/blob/4.x/modules/imgcodecs/test/test_animation.cpp#L363-L406).
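The proposed 2-channel (gray + alpha) to 4-channel expansion can be sketched with NumPy on synthetic data. This is only an illustration of the idea, not the decoder's actual code:

```python
import numpy as np

# Synthetic 16-bit gray+alpha frame (stand-in for a decoded APNG frame).
ga = np.zeros((4, 4, 2), dtype=np.uint16)
ga[..., 0] = 1234    # gray channel
ga[..., 1] = 65535   # alpha channel

# Expand to 4 channels the way imread handles 2-channel PNGs: B = G = R = gray.
gray, alpha = ga[..., 0], ga[..., 1]
bgra = np.dstack([gray, gray, gray, alpha])
print(bgra.shape, bgra.dtype)  # (4, 4, 4) uint16
```

The point is that the expansion preserves the 16-bit depth, so reading these frames as 4-channel images would not lose precision.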
### Issue submission checklist
- [x] I report the issue, it's not a question
- [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [x] I updated to the latest OpenCV version and the issue is still there
- [x] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: imgcodecs | low | Minor |
2,805,047,673 | pytorch | create DISABLED issues for specific runner labels | ### ๐ The feature, motivation and pitch
ROCm CI runners are a mix of MI200 and MI300 systems. At the time of writing this issue, the MI200 runners are used pre-merge and the MI300 runners are only used post-merge.
- rocm / linux-focal-rocm6.3-py3.10 / test (default, 1, 6, linux.rocm.gpu.mi300.2) [post-merge]
- rocm / linux-focal-rocm6.3-py3.10 / test (default, 1, 6, linux.rocm.gpu.2) [pre-merge]
Other HW vendors might also support different runner labels for the same flows.
We are seeing tests getting DISABLED as flaky because they pass on mi200 pre-merge then fail on mi300 post-merge. Unfortunately, the DISABLED issues are disabling both mi200 and mi300 runner labels for the same flows which means we are losing the mi200 signal unnecessarily.
Is it possible to create DISABLED issues that can also specify the runner label?
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged | low | Minor |
2,805,047,969 | vscode | terminal is not working |
Type: <b>Bug</b>
The terminal is not working. I have Windows 10 installed, but VS Code is reporting Windows version 8.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 6.0.6002
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Celeron(R) CPU N3060 @ 1.60GHz (2 x 1600)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: disabled_off<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|3.83GB (0.34GB free)|
|Process Argv|--crash-reporter-id fb6c5a50-8a7a-4cc7-8401-bc15b680d067|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (10)</summary>
Extension|Author (truncated)|Version
---|---|---
bracket-pair-color-dlw|Bra|0.0.6
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.10.0
vscode-eslint|dba|3.0.13
prettier-vscode|esb|11.0.0
auto-rename-tag|for|0.1.10
powershell|ms-|2025.0.0
LiveServer|rit|5.7.9
es7-react-js-snippets|rod|1.9.3
JavaScriptSnippets|xab|1.8.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,805,062,011 | godot | hint_range emit wrong error | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
### System information
Apple M1 Pro, Sequoia 15.2
### Issue description
hint_range with a non-literal value produces the error: `Expected an integer constant after!.`
hint_range works with floats but does not work with constants, so the error should most likely be `Expected a literal after`.
### Steps to reproduce
In any shader, add a line:
```glsl
uniform float foo : hint_range(1.1, PI) = 0.0;
```
However if you just specify a float, it compiles with no errors:
```glsl
uniform float foo : hint_range(1.1, 2.2) = 0.0;
```
### Minimal reproduction project (MRP)
It is easily reproducible. Let me know if it is absolutely necessery. | bug,confirmed,topic:shaders | low | Critical |
2,805,084,927 | rust | Tracking Issue for `atomic_try_update` |
Feature gate: `#![feature(atomic_try_update)]`
This is a tracking issue for an infallible version of `AtomicT::fetch_update` as well as a new name for the existing fallible version.
<!--
Include a short description of the feature.
-->
### Public API
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
```rust
impl AtomicT {
// same as `fetch_update`
pub fn try_update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(T) -> Option<T>,
) -> Result<T, T>;
pub fn update(
&self,
set_order: Ordering,
fetch_order: Ordering,
f: impl FnMut(T) -> T,
) -> T;
}
```
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/490
- [x] Design requested by t-libs-api in https://github.com/rust-lang/rust/pull/133829#issuecomment-2590524152
- [ ] Implementation: #133829
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- https://github.com/rust-lang/rust/pull/133829#issuecomment-2600217185
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,805,096,920 | ollama | Add support for the AI HAT+ | Add support for the new AI HAT+ that can be added on to Raspberry Pi 5 [info here](https://www.raspberrypi.com/products/ai-hat/) to enable speedups. | feature request | low | Minor |
2,805,103,134 | PowerToys | Hotkey to Refresh Connections in Mouse Without Borders | ### Description of the new feature / enhancement
It would be useful to have a Hotkey command to refresh connections in Mouse Without Borders. I have noticed that when my laptop connects or disconnects from the VPN I need for work, MWB loses connection to it and I have to go into the settings to hit the Refresh Connections button. It happens frequently enough that the extra steps add up to a lot of time.
### Scenario when this would be used?
I have noticed that when my laptop connects or disconnects from the VPN I need for work, MWB loses connection to it and I have to go into the settings to hit the Refresh Connections button. It happens frequently enough that the extra steps add up to a lot of time. The Mouse Without Borders functionality works well in both the VPN connected or disconnected states, but the systems lose connection with each other when switching between them (including if there is any disruption and it automatically reconnects.)
### Supporting information
I haven't experimented with other VPN clients, but with the Cisco AnyConnect client, I have this behavior. I suspect it will happen with other VPN clients as well. | Needs-Triage | low | Minor |
2,805,103,881 | godot | Visual shader editor stuck in pan mode | ### Tested versions
Godot v4.3.stable
v4.4.beta1.official [d33da79d3]
### System information
Fedora Linux 41 (KDE Plasma) - Wayland - Vulkan
### Issue description
If you pan with Space + left mouse button while the mouse is over an interactive control, you get stuck in pan mode.
You can get back to the normal state by pressing space again.
https://github.com/user-attachments/assets/bb987a8e-a35b-45f5-bd4a-afa978f8d067
### Steps to reproduce
1. Create a visual shader
2. Add a node with controls that can be interacted with
3. Place the mouse cursor over the control
4. Use Space + left mouse button to pan
### Minimal reproduction project (MRP)
[shader.zip](https://github.com/user-attachments/files/18510225/shader.zip) | bug,topic:editor,topic:input | low | Minor |
2,805,109,517 | godot | Wayland Cursor custom image doesn't update when using Image instead of Texture2D | ### Tested versions
- v4.4.beta.gentoo [4ce466d7f]
### System information
Linux Gentoo using Wayland
### Issue description
When using the Wayland DisplayServer you can only update the custom mouse cursor once if you are using Image instead of Texture2D resources and the images have the same hotspot.
The function `Input.set_custom_mouse_cursor` claims that it is okay to use either an Image or a Texture2D as a custom cursor image. The caching code in `DisplayServerWayland::cursor_set_custom_image` checks for cursor reuse using `get_rid()` on the provided resource. Since an Image does not have a RID, any image will always enter the branch `// We have a cached cursor. Nice.`.
Bug is visible since #96647 was fixed, although the underlying bug was already older than that.
### Workarounds
* Use Texture2D instead of image resources - may cause undesired overhead to convert back to image
* Reset the cursor in between - may cause cursor flickering
* Move the hotspot locations to force a cache miss - unwanted extra work to make images suitable with different hotspots
### Steps to reproduce
* Force editor and game to use the Wayland display server
* Make sure the referenced `img1` and `img2` are imported as Image and not as Texture.
```gdscript
Input.set_custom_mouse_cursor(img1, Input.CURSOR_ARROW)
# Correctly shows img1 as the cursor
Input.set_custom_mouse_cursor(img2, Input.CURSOR_ARROW)
# Still shows img1 as the cursor
```
### Minimal reproduction project (MRP)
[wayland-cursor-reproducer.zip](https://github.com/user-attachments/files/18510307/wayland-cursor-reproducer.zip) | bug,platform:linuxbsd,topic:porting,topic:input | low | Critical |
2,805,111,523 | neovim | More structured health checks | ### Problem
Neovim provides a `:checkhealth` feature for plugins and Neovim itself to show "self-diagnostics". This allows users to
1. quickly see detectable problems and misconfigurations ("runtime is missing", "parsers are broken", "server not executable", "TERM misconfigured")
2. information about the internal state (loaded treesitter parsers, active and inactive LSP configurations)
While this information is very useful for bug reports and understanding why your Neovim behaves as it does, the increasing amount of information (especially around (nvim-)treesitter and LSP) makes it increasingly hard to navigate. So we need to add ways of structuring and navigating this information.
### Expected behavior
There are several (independent) improvements that can be made, roughly in order of ascending complexity:
* [ ] Add a table of contents (`gO`, already supported for `help` buffers, which `checkhealth` inherits from).
* [ ] Add section navigation via `]]` (which would also be useful for `help` buffers)
* [ ] Add color to section headers (white/green for "all clear" or info only, orange if the section contains a warning, red if the section contains an error)
* [ ] Add folding to sections (foldtext should be section header, colored as above). This is a bit controversial but a net win if
- `foldenable` is respected (so people can opt out of all folding);
- sections are either all folded or not folded by default;
- folding is not enabled if only a single section is shown (e.g., `:check lsp`).
* [ ] Only show the most important, actionable, health checks by default and reserve the full information for an "extended view" that can be toggled by either pressing, say, `D` in a `checkhealth` buffer or calling `:checkhealth!`.
(This could be implemented by adding a flag to `vim.health.info` to indicate that the message should only be shown in extended mode. Bonus points for evaluating this lazily -- "Treesitter parsers", I'm thinking of you!) | checkhealth | low | Critical |
2,805,115,049 | go | runtime: SIGBUS in go/types in TestVet | ```
#!watchflakes
default <- pkg == "cmd/vet" && test == "TestVet/method"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725028079657686513)):
=== RUN TestVet/method
=== PAUSE TestVet/method
=== CONT TestVet/method
vet_test.go:195: error check failed:
Unmatched Errors:
unexpected fault address 0x10769e80
fatal error: fault
[signal SIGBUS: bus error code=0x2 addr=0x10769e80 pc=0x10769e80]
goroutine 1 gp=0xc000002380 m=2 mp=0xc00004c808 [running]:
runtime.throw({0x10663d90?, 0x28?})
...
../../runtime/proc.go:435 +0xce fp=0xc000043f38 sp=0xc000043f18 pc=0x1038d26e
runtime.gcBgMarkWorker(0xc0000ca620)
../../runtime/mgc.go:1423 +0xe9 fp=0xc000043fc8 sp=0xc000043f38 pc=0x10339389
runtime.gcBgMarkStartWorkers.gowrap1()
../../runtime/mgc.go:1339 +0x25 fp=0xc000043fe0 sp=0xc000043fc8 pc=0x10339265
runtime.goexit({})
../../runtime/asm_amd64.s:1700 +0x1 fp=0xc000043fe8 sp=0xc000043fe0 pc=0x10394601
created by runtime.gcBgMarkStartWorkers in goroutine 1
../../runtime/mgc.go:1339 +0x105
--- FAIL: TestVet/method (4.09s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,805,122,330 | flutter | Cupertino navbars apply too much top padding within a sheet | When using a navbar within a CupertinoSheetRoute, the navbar applies unnecessary safe area padding at the top. Ideally the safe area padding would be reduced either by default or easily done manually.
| Expected | Actual |
| --- | --- |
| <img width="206" alt="Image" src="https://github.com/user-attachments/assets/c5711111-ca66-4f70-b6af-d0bdc1fa545d" /> | <img width="200" alt="Image" src="https://github.com/user-attachments/assets/237459e7-ae05-426c-ad3f-0a711caf630c" /> |
See comments [here](https://github.com/flutter/flutter/pull/157568#issuecomment-2592535332) and [here](https://github.com/flutter/flutter/pull/157568#issuecomment-2590955571). | a: fidelity,f: cupertino,P2,team-design,triaged-design | low | Minor |
2,805,132,253 | flutter | [SwiftPM] Improve error message if app needs to raise its minimum deployment | ### Background
Swift Package Manager strictly enforces a package's platform requirements. For example, if your Flutter iOS app has a minimum deployment of iOS 12.0 and it depends on a plugin that requires iOS 15.0, you'll get an error like:
```
Failed to build iOS app
Target Integrity (Xcode): The package product 'my-plugin' requires minimum platform version 15.0 for the iOS platform, but this target supports 12.0
```
### Solution
We should improve this error message. For example:
```
Failed to build iOS app
The plugin 'my_plugin' requires iOS 15.0, but this app supports 12.0.
To use 'my_plugin', update your app's minimum deployment to 15.0:
1. Open `ios/Runner.xcworkspace` using Xcode
2. Open Runner target > Build Settings > Minimum Deployments
3. Set iOS to `15.0`.
```
| platform-ios,tool,P3,team-ios,triaged-ios | low | Critical |
2,805,137,433 | tauri | [feat] Improve mobile app version management | ### Describe the problem
For iOS, Tauri synchronizes both `CFBundleVersion` and `CFBundleShortVersionString` with the version property in `tauri.conf.json`. A similar issue exists for Android builds.
This means we cannot distinguish between the _user-facing_ version visible in stores and the _developer-facing_ build version. [Expo](https://docs.expo.dev/build-reference/app-versions/) for React Native handles this well by defining the following configuration properties:
| Property | Description |
| -------- | ------- |
| `version` | The user-facing version visible in stores. On Android, it represents `versionName` name in android/app/build.gradle. On iOS, it represents `CFBundleShortVersionString` in Info.plist. |
| `android.versionCode` | The developer-facing build version for Android. It represents `versionCode` in android/app/build.gradle. |
| `ios.buildNumber` | The developer-facing build version for iOS. It represents `CFBundleVersion` in Info.plist. |
### Describe the solution you'd like
Implement a similar approach in `tauri.conf.json` for defining the _developer-facing_ build version for Android/iOS.
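A hypothetical shape for this configuration, mirroring the Expo properties in the table above. The property names below are illustrative assumptions, not an existing Tauri schema:

```json
{
  "version": "1.2.0",
  "bundle": {
    "android": { "versionCode": 42 },
    "iOS": { "buildNumber": "42" }
  }
}
```

Here `version` would remain the user-facing store version, while the two nested fields would map to `versionCode` in android/app/build.gradle and `CFBundleVersion` in Info.plist respectively.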
### Alternatives considered
N/A
### Additional context
#10944. | type: feature request | low | Minor |
2,805,149,749 | TypeScript | rewriteRelativeImportExtensions & enforce consistent extensions | ### ๐ Search Terms
"rewriteRelativeImportExtensions", "extensions"
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
An option to force relative-path `import`s to point to files that exist in the sources rather than in the build output; in short, allow only `.ts` extensions (not `.js`) when importing a TypeScript file.
### 📃 Motivating Example
`rewriteRelativeImportExtensions` is a great new feature and it works well. It makes it possible to create hybrid projects that are executed directly by recent Node.js versions but are also buildable by tsc for distribution. But now that we can use `.ts` extensions in `import`s, because they are rewritten at compile time, we can still also use `.js` extensions in parallel, pointing at built files.
As everyone knows, Node.js requires relative `import`s to have an extension and to point to an existing file, so extensions have to be `.ts` (or `.mts`/`.cts`) to import other TypeScript files.
It would probably be great for TypeScript to report an error when an import points at an unreachable file in this context, much like its near-exact opposite, `TS5097: An import path can only end with a .ts extension when allowImportingTsExtensions is enabled`, which fires when `.ts` extensions are not allowed by the configuration.
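To make the contrast concrete, here is a minimal sketch (the file name `util.ts` and the `helper` export are illustrative). With `rewriteRelativeImportExtensions` enabled, both import styles below compile today, but only the first one resolves when Node.js executes the sources directly:

```ts
// tsconfig.json (fragment): { "compilerOptions": { "rewriteRelativeImportExtensions": true } }

// Consistent style: resolves under Node.js and is rewritten to "./util.js" by tsc at emit.
import { helper } from "./util.ts";

// Also accepted today, but points at the built file and fails when running sources with Node.js:
// import { helper } from "./util.js";

helper();
```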
### 💻 Use Cases
1. What do you want to use this for?
In projects where Node.js is used for any reason (tests, various scripts) and the code has to be built to be distributed over npm or another registry: to catch bad `import` extensions at coding time rather than at Node.js runtime, and to keep the code base consistent.
2. What shortcomings exist with current approaches?
Only their permissiveness, which allows inconsistent `import` styles that may not run in Node.js, without detecting them at coding time.
3. What workarounds are you using in the meantime?
It's not a bug, so no workaround is needed, only vigilance.
2,805,169,621 | pytorch | [EXPORT AOTI] `aoti_compile_and_package` custom_ops dependecies | ### ๐ Describe the bug
I was trying to `export` and `aoti_compile_and_package` a model with this custom op:
https://github.com/state-spaces/mamba/pull/651
`aoti_load_package` is working correctly on the same export env.
But it is not going to work in a fresh env when I don't have the custom ops dependency installed (e.g. `selective_scan_cuda.cpython-311-x86_64-linux-gnu.so`).
In this case we get `Error during testing: Could not find schema for custom_ops::selective_scan_fwd`.
Is this because the custom op's `.so` isn't included in the package produced by `aoti_compile_and_package`?
If yes, is it expected behavior by design?
/cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @zou3519
### Versions
nightly | oncall: pt2,export-triaged,oncall: export | low | Critical |
2,805,176,620 | angular | `HttpFeature` only accepts type `Provider` as providers | ### Which @angular/* package(s) are the source of the bug?
common
### Is this a regression?
No
### Description
Since `HttpFeature` providers passed to `provideHttpClient()` are type-cast to `EnvironmentProviders`, is there a reason why they only accept `Provider[]` and not `(Provider | EnvironmentProviders)[]`?
```ts
export interface HttpFeature<KindT extends HttpFeatureKind> {
  ɵkind: KindT;
  ɵproviders: Provider[]; // <-- why does this not accept `EnvironmentProviders`?
}
```
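As a self-contained illustration of the widening being asked about, here is a minimal sketch with simplified stand-in types; these are not Angular's real definitions:

```typescript
// Illustrative stand-ins only, not the actual framework types.
type Provider = { provide: string; useValue: unknown };
type EnvironmentProviders = { ɵbrand: "EnvironmentProviders"; providers: Provider[] };

interface HttpFeature<KindT extends string> {
  ɵkind: KindT;
  // Widened from `Provider[]` to also admit EnvironmentProviders:
  ɵproviders: (Provider | EnvironmentProviders)[];
}

// A feature mixing both shapes now type-checks:
const feature: HttpFeature<"Interceptors"> = {
  ɵkind: "Interceptors",
  ɵproviders: [
    { provide: "TOKEN", useValue: 1 },
    { ɵbrand: "EnvironmentProviders", providers: [] },
  ],
};

console.log(feature.ɵproviders.length);
```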
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
```
### Anything else?
_No response_ | feature,area: common/http | low | Critical |
2,805,183,052 | vscode | [wsl] bring back "Reopen Folder in WSL" to the remote menu | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Whoever did this, you probably inconvenienced millions of people. Good one. | feature-request,WSL | low | Minor |
2,805,183,388 | flutter | Create benchmarking app for flutter. It should have minimal assets, ios only, (optional) with endless scrolling. | null | platform-ios,f: scrolling,team: benchmark,team-ios | low | Minor |
2,805,184,029 | flutter | Create equivalent app for SwiftUI. | null | platform-ios,team: benchmark,team-ios | low | Minor |
2,805,191,903 | flutter | Time to first frame | null | platform-ios,c: rendering,perf: speed,team: benchmark,team-ios | low | Minor |
2,805,191,965 | flutter | Scrolling | null | platform-ios,f: scrolling,team: benchmark,team-ios | low | Minor |
2,805,206,824 | rust | `f16` creates doc-link ambiguity on stable | I `cargo doc`ed this code:
```rust
#[allow(non_camel_case_types)]
pub struct f16 {}
/// Blah blah blah [`f16`]
pub fn foo() -> f16 {
f16 {}
}
```
I got:
```
warning: `f16` is both a struct and a primitive type
--> src/lib.rs:4:22
|
4 | /// Blah blah blah [`f16`]
| ^^^ ambiguous link
|
= note: `#[warn(rustdoc::broken_intra_doc_links)]` on by default
help: to link to the struct, prefix with `struct@`
|
4 | /// Blah blah blah [`struct@f16`]
| +++++++
help: to link to the primitive type, prefix with `prim@`
|
4 | /// Blah blah blah [`prim@f16`]
```
and the link was missing in the generated documentation. Instead, because `f16` is unstable, it should have no effect and there should be no warning.
Additionally, even when `f16` is stable, the link should not be considered ambiguous since primitive `f16` is being shadowed. (Otherwise, currently valid code will get broken links when `f16` stabilizes.)
### Meta
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (ed43cbcb8 2025-01-21)
binary: rustc
commit-hash: ed43cbcb882e7c06870abdd9305dc1f17eb9bab9
commit-date: 2025-01-21
host: aarch64-apple-darwin
release: 1.86.0-nightly
LLVM version: 19.1.7
```
Also occurs on stable 1.84. | T-rustdoc,C-bug,A-intra-doc-links | low | Critical |
2,805,207,325 | TypeScript | TypeScript 5.8 Iteration Plan | This document outlines our focused tasks for TypeScript 5.8. It minimally indicates intent to investigate tasks or contribute to an implementation. Nothing is set in stone, but we will strive to complete these tasks in a reasonable timeframe.
Date | Event
---------------|-------------------------
2024-11-22 | TypeScript 5.7 Release
2025-01-24 | Create 5.8 Beta (5.8.0) Build for Testing
2025-01-28 | **TypeScript 5.8 Beta Release**
2025-02-07 | Create 5.8 RC (5.8.1) Build for Testing
2025-02-11 | **TypeScript 5.8 RC Release**
2025-02-21 | Create 5.8 Final (5.8.2) Build for Testing
2025-02-25 | **TypeScript 5.8 Final Release** ๐
```mermaid
gantt
dateFormat YYYY-MM-DD
TypeScript 5.7 Stabilization Period : 2024-11-18, 2024-11-22
TypeScript 5.8 Beta Development : 2024-11-18, 2025-01-24
TypeScript 5.8 RC Development : 2025-01-25, 2025-02-07
TypeScript 5.8 Stabilization Period : 2025-02-08, 2025-06-21
todayMarker stroke-width:5px,stroke:#0f0,opacity:0.5
```
# Compiler and Language
* [Checked Return Statements for Conditional and Indexed Access Types](https://github.com/microsoft/TypeScript/pull/56941)
* [`--module node18`](https://github.com/microsoft/TypeScript/pull/60722) and [`--module node20`](https://github.com/microsoft/TypeScript/pull/60761)
* [`--erasableSyntaxOnly` Option](https://github.com/microsoft/TypeScript/issues/59601)
* [Investigate Preserving Computed Properties with Entity Names for Declaration Emit](https://github.com/microsoft/TypeScript/pull/60052)
* [Introduce `--libReplacement`](https://github.com/microsoft/TypeScript/pull/60829)
* [`lib.d.ts` Updates](https://github.com/microsoft/TypeScript/issues/60985)
# Editor and Language Service
* [Iterate on Expandable Hovers](https://github.com/microsoft/TypeScript/pull/59940)
# Performance
* [Normalize Paths with Fewer Allocations](https://github.com/microsoft/TypeScript/pull/60812)
* [Skip `tsconfig.json` Checks on Program Updates](https://github.com/microsoft/TypeScript/pull/60754)
# Website and Docs
* [Simplify and Refactor Website for Faster Builds](https://github.com/microsoft/TypeScript-Website/issues/2730) | Planning | low | Major |
2,805,226,175 | go | runtime: -msan / -asan stack corruption with CPU profiling and SetCgoTraceback context callback | [`msancall`](https://cs.opensource.google/go/go/+/master:src/runtime/msan_amd64.s;l=71;drc=b7c630dc3ac3f43b2294f803f26f512d75a54fc6) and [`asancall`](https://cs.opensource.google/go/go/+/master:src/runtime/asan_amd64.s;l=73;drc=5f7a40856372142372d3b67c9dd737373932f088) are used to call into the MSAN and ASAN C runtimes, respectively.
These wrappers need to handle stack switching, similar to `asmcgocall`.
If the caller is running on `g0`, they just perform the call; otherwise they switch SP to `g0.sched.sp` and then make the call. This is normally fine, but in a signal context we will be on `gsignal` (not `g0`!), while the code the signal interrupted may itself have been on `g0`. By using `g0.sched.sp`, the MSAN/ASAN call will scribble all over the stack that the interrupted code is using.
As far as I know, MSAN/ASAN calls are possible from signal context in only one case:
* [`runtime.cgoContextPCs`](https://cs.opensource.google/go/go/+/master:src/runtime/traceback.go;l=1646;drc=d90ce588eac7b9105c0ca556a7c6e975fd5c1eca) contains `msanwrite`/`asanwrite` calls.
* `runtime.cgoContextPCs` is reachable from the SIGPROF signal handler: `runtime.sigprof` -> `runtime.tracebackPCs` -> `runtime.(*unwinder).cgoCallers` -> `runtime.cgoContextPCs`.
* This is only reachable if the application has registered cgo traceback via `runtime.SetCgoTraceback`. Note that both the `traceback` and `context` handlers must be registered. The latter is required because `runtime.cgoContextPCs` only calls the traceback function if `gp.cgoCtxt` is active, which requires a context handler.
* Note that the popular https://pkg.go.dev/github.com/ianlancetaylor/cgosymbolizer _does not_ set a context handler, so it cannot trigger this bug.
https://go.dev/cl/643875 contains a reproducer. The allocator runs portions on the system stack, so with MSAN/ASAN plus profiling, we see crashes due to stack corruption in the allocator.
```
$ GOFLAGS=-msan CC=clang go test -run CgoTracebackContextProfile -v runtime
=== RUN TestCgoTracebackContextProfile
=== PAUSE TestCgoTracebackContextProfile
=== CONT TestCgoTracebackContextProfile
crash_test.go:172: running /usr/local/google/home/mpratt/src/go/bin/go build -o /tmp/go-build4253652554/testprogcgo.exe
crash_test.go:194: built testprogcgo in 1.417734407s
crash_cgo_test.go:292: /tmp/go-build4253652554/testprogcgo.exe TracebackContextProfile: exit status 2
crash_cgo_test.go:295: expected "OK\n" got SIGSEGV: segmentation violation
PC=0x50d8e2 m=7 sigcode=1 addr=0x1b
goroutine 0 gp=0xc0003021c0 m=7 mp=0xc000300008 [idle]:
runtime.callers.func1()
/usr/local/google/home/mpratt/src/go/src/runtime/traceback.go:1100 +0xc2 fp=0x7f6637ffed40 sp=0x7f6637ffec78 pc=0x50d8e2
msancall()
/usr/local/google/home/mpratt/src/go/src/runtime/msan_amd64.s:87 +0x2d fp=0x7f6637ffed50 sp=0x7f6637ffed40 pc=0x525c2d
goroutine 24 gp=0xc000103180 m=7 mp=0xc000300008 [running, locked to thread]:
runtime.systemstack_switch()
/usr/local/google/home/mpratt/src/go/src/runtime/asm_amd64.s:479 +0x8 fp=0xc00051abb0 sp=0xc00051aba0 pc=0x522728
runtime.callers(0x7f6684100788?, {0xc00030e000?, 0x219cd20?, 0x7f6684e18470?})
/usr/local/google/home/mpratt/src/go/src/runtime/traceback.go:1097 +0x92 fp=0xc00051ac18 sp=0xc00051abb0 pc=0x5215f2
runtime.mProf_Malloc(0xc000300008, 0xc000330880, 0x80)
/usr/local/google/home/mpratt/src/go/src/runtime/mprof.go:447 +0x74 fp=0xc00051ac98 sp=0xc00051ac18 pc=0x4db374
runtime.profilealloc(0xc000300008?, 0xc000330880?, 0x80?)
/usr/local/google/home/mpratt/src/go/src/runtime/malloc.go:1802 +0x9b fp=0xc00051acc8 sp=0xc00051ac98 pc=0x4be47b
runtime.mallocgcSmallNoscan(0xc000330800?, 0x80?, 0x0?)
/usr/local/google/home/mpratt/src/go/src/runtime/malloc.go:1327 +0x23c fp=0xc00051ad20 sp=0xc00051acc8 pc=0x4bd61c
runtime.mallocgc(0x80, 0x688f80, 0x1)
/usr/local/google/home/mpratt/src/go/src/runtime/malloc.go:1055 +0xb9 fp=0xc00051ad58 sp=0xc00051ad20 pc=0x51b4f9
runtime.makeslice(0x0?, 0xc000103180?, 0x4b3c45?)
/usr/local/google/home/mpratt/src/go/src/runtime/slice.go:116 +0x49 fp=0xc00051ad80 sp=0xc00051ad58 pc=0x51f449
main.TracebackContextProfileGoFunction(...)
/usr/local/google/home/mpratt/src/go/src/runtime/testdata/testprogcgo/tracebackctxt.go:176
_cgoexp_b32fe38f1ae6_TracebackContextProfileGoFunction(0x0?)
_cgo_gotypes.go:868 +0x27 fp=0xc00051adb0 sp=0xc00051ad80 pc=0x658227
runtime.cgocallbackg1(0x658200, 0x7f6637ffedd0, 0x1)
/usr/local/google/home/mpratt/src/go/src/runtime/cgocall.go:444 +0x28b fp=0xc00051ae68 sp=0xc00051adb0 pc=0x4b3b8b
runtime.cgocallbackg(0x658200, 0x7f6637ffedd0, 0x1)
/usr/local/google/home/mpratt/src/go/src/runtime/cgocall.go:350 +0x133 fp=0xc00051aed0 sp=0xc00051ae68 pc=0x4b3833
runtime.cgocallbackg(0x658200, 0x7f6637ffedd0, 0x1)
<autogenerated>:1 +0x29 fp=0xc00051aef8 sp=0xc00051aed0 pc=0x526cc9
runtime.cgocallback(0xc00051af58, 0x51a8f5, 0x662270)
/usr/local/google/home/mpratt/src/go/src/runtime/asm_amd64.s:1084 +0xcc fp=0xc00051af20 sp=0xc00051aef8 pc=0x5244ec
cFunction
tracebackctxt.go:65792 pc=0x100
cFunction
tracebackctxt.go:256 pc=0x100
runtime.systemstack_switch()
/usr/local/google/home/mpratt/src/go/src/runtime/asm_amd64.s:479 +0x8 fp=0xc00051af30 sp=0xc00051af20 pc=0x522728
runtime.cgocall(0x662270, 0xc00051af90)
/usr/local/google/home/mpratt/src/go/src/runtime/cgocall.go:185 +0x75 fp=0xc00051af68 sp=0xc00051af30 pc=0x51a8f5
main._Cfunc_TracebackContextProfileCallGo()
_cgo_gotypes.go:267 +0x3a fp=0xc00051af90 sp=0xc00051af68 pc=0x6478fa
main.TracebackContextProfile.func1()
/usr/local/google/home/mpratt/src/go/src/runtime/testdata/testprogcgo/tracebackctxt.go:161 +0x7e fp=0xc00051afe0 sp=0xc00051af90 pc=0x6574be
runtime.goexit({})
/usr/local/google/home/mpratt/src/go/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00051afe8 sp=0xc00051afe0 pc=0x524741
created by main.TracebackContextProfile in goroutine 1
/usr/local/google/home/mpratt/src/go/src/runtime/testdata/testprogcgo/tracebackctxt.go:158 +0x10e
...
```
I haven't tested older versions, but this code hasn't changed in a while, so I suspect that 1.22 and 1.23 are also affected. | NeedsFix,compiler/runtime,BugReport | low | Critical |
2,805,227,385 | kubernetes | Add conversion function to translate between `corev1.LabelSelector` and apimachinery `labels.Selector` | ### What would you like to be added?
While writing k8s controllers, I have found a need to translate between the K8s API model type `LabelSelector` and the API Machinery `labels.Selector` type. Searching online yields a [stack overflow issue](https://stackoverflow.com/questions/77870908/convert-kubernetes-labels-selector-to-metav1-labelselector) which recommends translating to a string and parsing that string, but it would be beneficial for the community to have a method which handles this translation.
### Why is this needed?
String serialization/de-serialization has the potential to fail silently if the logic behind the serialization changes which is why having a method to handle this would be better. | sig/api-machinery,kind/feature,needs-triage | low | Minor |
2,805,247,209 | storybook | [Bug]: Storybook v8.5 First Test Fails on Cypress | ### Describe the bug
When running Storybook v8.5 with Cypress for end-to-end (E2E) tests, the first test always fails, irrespective of the test content. Subsequent tests execute and pass as expected. The failure occurs because the Storybook server does not appear to be running for the first test, displaying a white screen. However, the server is loaded for the second test onward.
### Reproduction link
N/A
### Reproduction steps
1. Use the following nx script for running E2E tests:
```
"targets": {
"e2e": {
"executor": "nx:run-commands",
"options": {
"command": "start-server-and-test 'nx run shared-react-design-system:storybook' http-get://localhost:4400 'nx run shared-react-design-system-e2e:e2e:cypress'"
}
},
"e2e:cypress": {
"executor": "@nx/cypress:cypress",
"options": {
"cypressConfig": "libs/shared/react/design-system-e2e/cypress.config.ts",
"testingType": "e2e"
},
"configurations": {
"ci": {
"devServerTarget": "shared-react-design-system:storybook:ci"
}
}
}
}
```
2. Run the e2e target.
3a. The first test fails, displaying a white screen when running from localhost on port 4400.
3b. Logged errors:
```
CypressError: `cy.visit()` failed trying to load:
We attempted to make an http request to this URL but the request failed without a response.
Error: ESOCKETTIMEDOUT
```
4. The subsequent tests complete successfully.
### System
```bash
macOS: "15.0.1"
nx: "20.2.2"
cypress: "13.17.0"
```
### Additional context
This still occurs when the first test is removed and the next available test is bumped up to run first.
Updating just the storybook package to v8.5 is fine; this issue occurs when updating the storybook addon packages to:
```
"@storybook/addon-a11y": "8.5.0",
"@storybook/addon-essentials": "8.5.0",
"@storybook/core-server": "8.5.0",
"@storybook/nextjs": "8.5.0",
"@storybook/react": "8.5.0",
``` | question / support,compatibility with other tools | low | Critical |
2,805,250,527 | flutter | [Impeller] Re-enable PowerVR GPU support. | Most of the remaining severe crashes or rendering issues on Vulkan Impeller are specific to certain PowerVR GPUs. For example, we have flickering when rendering on the Google TV streamer, or a complete black screen on an Oppo-Something. Both are 9000 series (though I don't have a fully understanding of how the model numbers work yet).
For now we've disabled PowerVR. This issue tracks figuring out:
1. Are there sets or subsets of PowerVR GPUs / drivers that work OK?
2. Are there reasonable workarounds we can do to make them work?
3. Are there existing known issues we can use to bootstrap this discovery process. | P3,e: impeller,team-engine,triaged-engine | low | Critical |
2,805,251,424 | flutter | [pigeon]pigeon documentation is missing task queue usage | ### Use case
I'm trying to understand how pigeon's task queue works. It looks like there's no such example in either https://github.com/flutter/packages/tree/main/packages/pigeon or https://github.com/flutter/packages/blob/main/packages/pigeon/example/README.md
### Proposal
An example usage of the TaskQueue API would be helpful for me to understand it | package,team-ecosystem,p: pigeon,P3,triaged-ecosystem | low | Major |
2,805,269,782 | go | x/build: run platform/arch specific builders when platform/arch constrained files are changed | When making changes to platform or arch constrained files (.go/.s) it would be nice for LUCI to automatically include builders of those types in the set that is run. Currently this has to be done manually, which is not always particularly obvious, and can lead to changes appearing to pass builders when they would fail in the specific builders were included.
This doesn't seem to be easily done with location filters regexps in the luci main.star, since we want to be more specific when there is a platform suffix (e.g. if `example_linux_amd64.go` changes, we only want to run the linux-amd64 builder), and less specific when there is only an arch suffix (e.g. if `example_amd64.go` changes, we want to run all of the amd64 builders). I fiddled around with this and couldn't find an obvious way to accomplish it.
One possible option is to add a special mode to golangbuild which scans the file names changed in a CL, and does this matching itself, spawning an additional set of builders.
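The file-name matching described here could be sketched as follows. golangbuild itself is Go; TypeScript is used purely for illustration, and the GOOS/GOARCH lists are abridged:

```typescript
// Abridged platform lists; the real sets come from `go tool dist list`.
const GOOSES = ["linux", "darwin", "windows", "freebsd"];
const GOARCHES = ["amd64", "arm64", "386", "riscv64"];

// Builders implied by one changed file name:
// - a GOOS_GOARCH suffix pins exactly one builder,
// - a bare GOARCH suffix fans out to every builder of that arch,
// - a bare GOOS suffix fans out to every builder of that OS,
// - anything else implies no extra builders.
function buildersFor(file: string, allBuilders: string[]): string[] {
  const m = file.match(/_([a-z0-9]+)(?:_([a-z0-9]+))?\.(?:go|s)$/);
  if (!m) return [];
  const [, a, b] = m;
  if (b !== undefined && GOOSES.includes(a) && GOARCHES.includes(b)) {
    return allBuilders.filter((x) => x === `${a}-${b}`);
  }
  const suffix = b ?? a;
  if (GOARCHES.includes(suffix)) return allBuilders.filter((x) => x.endsWith(`-${suffix}`));
  if (GOOSES.includes(suffix)) return allBuilders.filter((x) => x.startsWith(`${suffix}-`));
  return [];
}

const builders = ["linux-amd64", "linux-arm64", "darwin-amd64", "windows-amd64"];
console.log(buildersFor("example_linux_amd64.go", builders));
console.log(buildersFor("example_amd64.go", builders));
console.log(buildersFor("example_linux.go", builders));
```

This covers only the file-name convention; `//go:build` constraints would still need the gerrit-side analysis discussed below.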
Ideally we'd also do this for build constrained files which use the `//go:build` syntax, but this seems a lot more complex, as it's not particularly easy to extract this information via the gerrit API (as far as I can tell). For a first pass, it'd probably be fine to just do the file name based approach, since that covers the majority of use cases.
cc @golang/release | Builders,NeedsInvestigation | low | Minor |
2,805,275,613 | PowerToys | Please add folder comparison function | ### Description of the new feature / enhancement
Compare two folders: the number of files inside, whether each file has a corresponding file in the other folder, MD5 verification, etc.
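The comparison described above can be sketched in a few lines. This is illustrative TypeScript, not PowerToys code; a real implementation would recurse into subfolders and stream large files:

```typescript
import { mkdtempSync, readdirSync, readFileSync, writeFileSync } from "node:fs";
import { createHash } from "node:crypto";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Map each file name in a folder (non-recursive) to the MD5 of its contents.
function hashFolder(dir: string): Map<string, string> {
  const out = new Map<string, string>();
  for (const name of readdirSync(dir)) {
    out.set(name, createHash("md5").update(readFileSync(join(dir, name))).digest("hex"));
  }
  return out;
}

// Report files missing on either side and files whose contents differ.
function compareFolders(a: string, b: string) {
  const ha = hashFolder(a);
  const hb = hashFolder(b);
  return {
    onlyInA: [...ha.keys()].filter((k) => !hb.has(k)),
    onlyInB: [...hb.keys()].filter((k) => !ha.has(k)),
    differing: [...ha.keys()].filter((k) => hb.has(k) && hb.get(k) !== ha.get(k)),
  };
}

// Demo on two throwaway temp folders.
const a = mkdtempSync(join(tmpdir(), "cmp-a-"));
const b = mkdtempSync(join(tmpdir(), "cmp-b-"));
writeFileSync(join(a, "same.txt"), "hello");
writeFileSync(join(b, "same.txt"), "hello");
writeFileSync(join(a, "diff.txt"), "x");
writeFileSync(join(b, "diff.txt"), "y");
writeFileSync(join(a, "only-a.txt"), "z");
const report = compareFolders(a, b);
console.log(report);
```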
### Scenario when this would be used?
Development!!!
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,805,276,189 | pytorch | torch.compile has different numerics for var_mean | ### ๐ Describe the bug
```
import torch
from torch._dynamo.utils import same
def foo(x):
return torch.ops.aten.var_mean.correction(x, [1], correction = 0, keepdim = True)
inp = torch.rand([112958, 384], device="cuda", dtype=torch.float16)
print(same(foo(inp), torch.compile(foo)(inp)))
```
> [ERROR]:Accuracy failed: allclose not within tol=0.0001
Maybe this is a numerics-sensitive op, but it surfaces when bisecting and is a general pain.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang
### Versions
master | oncall: pt2,module: inductor | low | Critical |
2,805,277,698 | PowerToys | Key swap sticky issue | ### Microsoft PowerToys version
v0.81.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
this bug happens when I _**switched on Keyboard Manager and remapped CapsLock to CTRL and vice versa.**_
I am an architect who deals with Rhino most of the time; the bug occurs after PowerToys has been on for a while (following a reboot) and Rhino has been running for a while. The duration is uncertain: it can be minutes, hours, or days, depending on the heaviness of the Rhino model.
_**The symptom is that the remapped CTRL key becomes sticky, behaving like a CapsLock key but functioning as a CTRL key.**_
### โ๏ธ Expected Behavior
No malfunction between the desired key and the remapped key.
### โ Actual Behavior
The CTRL key remapped onto CapsLock acts like a CapsLock key but with CTRL-key functions.
(i.e., unintentional multiselection in Windows Explorer, unintentional viewtype orientation in certain software)
### Other Software
Rhino 8
| Issue-Bug,Needs-Triage | low | Critical |
2,805,289,310 | ollama | I should not have to write the full model name | If I want to run mistral, and mistral is the only model I have starting with an "m", i should just have to type "ollama run m"
If there's something called "mestral", then I can type "ollama run mi", and so on. | feature request | low | Major |
2,805,303,566 | rust | `extern` symbols are always mangled on `wasm32-unknown-emscripten` | Attempting to compile a program with `EMCC_CFLAGS="-s ERROR_ON_UNDEFINED_SYMBOLS=0" cargo build --target wasm32-unknown-emscripten --release` results in all `extern` functions having unpredictable names, whether or not they have `#[no_mangle]`.
I tried this code:
```rust
extern "C" {
fn foo() -> f64;
}
fn main() {
println!("{}", unsafe { foo() });
}
```
I expected to see this happen: A generated `.js` loader with a stub in `wasmImports["foo"]`.
Instead, this happened: The js loader renames the function to a practically random symbol. In this case:
```js
function _foo() {
abort("missing function: foo")
}
_foo.stub = true;
var wasmImports = {
// .. more
y: _foo,
// .. more
};
```
### Meta
Tested against 1.83.0 and latest nightly.
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: aarch64-apple-darwin
release: 1.83.0
LLVM version: 19.1.1
``` | T-compiler,C-bug,O-emscripten,A-name-mangling | low | Critical |
2,805,311,074 | godot | Wrong window height in project conversion dialogue | ### Tested versions
reproducible in version 4.4 beta 1
not reproducible in version 4.3 stable
### System information
Windows 11 (build 26100) - Multi-window, 2 monitors
### Issue description
When converting an old Godot project from version 3 to version 4.4 beta 1, there is a bug where the height of the window displaying the conversion dialogue is too large, so the window controls extend beyond the screen's dimensions. This happens on a 4K monitor; at full HD resolution this bug does not appear.
<img width="432" alt="Image" src="https://github.com/user-attachments/assets/95ea56ae-5afb-4b07-adb3-3113573af8ec" />
On Godot version 4.3 stable, the window is also very tall, but its height is still reasonable, so the control elements can be used.
<img width="437" alt="Image" src="https://github.com/user-attachments/assets/e880feff-a359-453c-844e-20f525db8a5a" />
### Steps to reproduce
Import the example project and follow the dialogues until the conversion dialogue appears.
This must be done on a 4K monitor.
### Minimal reproduction project (MRP)
[ExampleProject.zip](https://github.com/user-attachments/files/18511407/ExampleProject.zip) | bug,topic:editor,regression | low | Critical |
2,805,340,531 | langchain | ChatPerplexity does not implement `bind_tools` for structured output | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
ChatPerplexity(
temperature=0,
model="llama-3.1-sonar-huge-128k-online",
pplx_api_key=config.perplexity_key,
).with_structured_output(schema=OutputSchema)
```
returns a Not Implemented Error
### Error Message and Stack Trace (if applicable)
```
backend-glen_ai-1 | File "/usr/local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 1242, in with_structured_output
backend-glen_ai-1 | llm = self.bind_tools([schema], tool_choice="any")
backend-glen_ai-1 | File "/usr/local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 1119, in bind_tools
backend-glen_ai-1 | raise NotImplementedError
backend-glen_ai-1 | NotImplementedError
```
### Description
* Documentation says `with_structured_output` is supported (https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.perplexity.ChatPerplexity.html#langchain_community.chat_models.perplexity.ChatPerplexity.with_structured_output)
* Perplexity has structured output support (https://docs.perplexity.ai/guides/structured-outputs)
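The stack trace above follows from how the base chat model delegates: the default `with_structured_output` routes through `bind_tools`, which the integration does not override. A stripped-down sketch of that delegation (class and method bodies below are illustrative, not the actual LangChain source):

```python
class BaseChatModelSketch:
    def bind_tools(self, tools, tool_choice=None):
        # integrations are expected to override this
        raise NotImplementedError

    def with_structured_output(self, schema):
        # default implementation routes through tool calling
        return self.bind_tools([schema], tool_choice="any")


class ChatPerplexitySketch(BaseChatModelSketch):
    pass  # no bind_tools override, so structured output fails


try:
    ChatPerplexitySketch().with_structured_output(schema=dict)
except NotImplementedError:
    print("NotImplementedError, as in the reported trace")
```

Fixing the bug would mean either overriding `bind_tools` or overriding `with_structured_output` to use Perplexity's native structured-output API.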
### System Info
langchain-community = "^0.3.15"
langchain = "^0.3.15"
| โฑญ: models | low | Critical |
2,805,359,023 | rust | `..` should be suggested when attempting to ignore a private enum variant with `_` | ### Code
```Rust
mod foo {
struct Bar;
pub enum Foo {
#[allow(private_interfaces)]
A(Bar),
}
}
fn foo_bar(v: foo::Foo) {
match v {
foo::Foo::A(_) => {}
}
}
```
### Current output
```Shell
rustc: error: type `jobs::purge_old_event_summaries::foo::Bar` is private
--> platform/spec-proxy-sidekick/src/jobs/purge_old_event_summaries.rs:454:21
|
454 | foo::Foo::A(_) => {}
| ^ private type
```
### Desired output
```Shell
rustc: you might want to ignore the private value by using `..`
```
### Rationale and extra context
On ignoring a pattern in a match, the [reference](https://doc.rust-lang.org/book/ch18-03-pattern-syntax.html#ignoring-values-in-a-pattern) reads:
> There are a few ways to ignore entire values or parts of values in a pattern: using the _ pattern (which you’ve seen), using the _ pattern within another pattern, using a name that starts with an underscore, or using .. to ignore remaining parts of a value. Let’s explore how and why to use each of these patterns.
Their detailed sections read, in part:
> We can also use _ inside another pattern to ignore just part of a value, for example, when we want to test for only part of a value but have no use for the other parts in the corresponding code we want to run.
and
> With values that have many parts, we can use the .. syntax to use specific parts and ignore the rest, avoiding the need to list underscores for each ignored value. The .. pattern ignores any parts of a value that we haven’t explicitly matched in the rest of the pattern.
All of this makes ignoring a value via `_` and `..` seem functionally equivalent, and generally this is true, except for when you are ignoring a value whose type is private. In that case, `_` causes an error, while `..` is fine.
I would have assumed that the compiler was able to entirely ignore your "use" of the private type in either case, but given that it isn't so, it would be nice to have a helpful message pointing in that direction. It seems like `_` sufficiently expresses the intent to ignore the private value such that it would justify suggesting `..` as an alternative. This would be a reasonable suggestion for 1-tuple data enums and for N-tuple data enums, where assigning a private type to `_` can also be replaced with `..`.
A real world example of where this would have been useful can be seen here: https://github.com/mlua-rs/mlua/issues/502, where I opened an Issue with the mlua project asking for a workaround to allow exhaustive pattern matching in a public enum that had been updated to contain a private type. The library was updated to make the type public but undocumented, but another commenter later pointed out that `..` would have also been an option.
It would also be nice to provide similar hints for structs, for example:
```rust
mod foo {
struct Bar;
pub struct Baz {
#[allow(private_interfaces)]
pub a: Bar,
}
}
fn struct_baz(v: foo::Baz) {
let foo::Baz { a: _ } = v;
}
```
In addition to the current error of:
```
rustc: error: type `jobs::purge_old_event_summaries::foo::Bar` is private
--> platform/spec-proxy-sidekick/src/jobs/purge_old_event_summaries.rs:467:23
|
467 | let foo::Baz { a: _ } = v;
| ^ private type
```
To add something like: "you might want to ignore the field 'a' entirely by using `..`"
### Other cases
```Rust
```
### Rust Version
```Shell
โฏ rustc --version --verbose
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,805,404,851 | bitcoin | `DEFAULT_TRANSACTION_MAXFEE` is 0.1 ₿ | ### Please describe the feature you'd like to see added.
I was reviewing a pull request when I noticed that `DEFAULT_TRANSACTION_MAXFEE` is set to 0.1 × COIN.
`src/wallet/wallet.h:constexpr CAmount DEFAULT_TRANSACTION_MAXFEE{COIN / 10};`
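For scale, COIN is 100,000,000 satoshis, so the default cap is 10,000,000 sat; the exchange rate below is a placeholder assumption, not data from this issue:

```python
COIN = 100_000_000  # satoshis per bitcoin
DEFAULT_TRANSACTION_MAXFEE = COIN // 10  # 0.1 BTC, as in wallet.h

assumed_usd_per_btc = 100_000  # placeholder; substitute the current rate
max_fee_usd = DEFAULT_TRANSACTION_MAXFEE / COIN * assumed_usd_per_btc
print(DEFAULT_TRANSACTION_MAXFEE, max_fee_usd)
```

At that placeholder rate, the default maximum fee works out to a five-figure dollar amount, which is the motivation for lowering it.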
### Describe the solution you'd like
It seems to me that given the current exchange rate we might want to lower that by about an order of magnitude. | Brainstorming,Wallet,TX fees and policy | low | Minor |
2,805,413,524 | flutter | [Impeller] Playground and golden generator generate different images | ### Steps to reproduce
When I generate goldens for some Impeller playground tests they are scaled differently than they are in playground and as a result their content can be clipped. This is on MacOS.
For example, try running a golden output for `Play/AiksTest.MaskBlurVariantTestOuterTranslucent/Metal` and compare it to its playground view. The playground view looks correct, but the golden output is scaled larger than what I see in playground and as a result some of the content is clipped.
I noticed this first when looking at some golden failures for this test on a PR. The local goldens didn't indicate any errors because the differences were in the clipped-out part in my local tests. So, it looks like the golden checks in PR pre-submits are fine as they don't exhibit this "scaling and clipping" issue, but it makes debugging those errors difficult locally because the problem happens locally on my MBP.
### Expected results
The output of the two mechanisms should match and the golden results should not be clipped.
### Actual results
The content looks correct in playground, but over-scaled and clipped in the locally produced golden images.
### Code sample
`impeller_unittests --gtest_filter="Play/AiksTest.MaskBlurVariantTestOuterTranslucent/Metal" --enable_playground`
vs
`impeller_golden_tests --gtest_filter="Play/AiksTest.MaskBlurVariantTestOuterTranslucent/Metal" --working_dir=<...>`
### Screenshots or Video
<details>
<summary>Playground image</summary>
<img width="1017" alt="Image" src="https://github.com/user-attachments/assets/2307891d-70a7-4683-ac98-f00a7942f26b" />
</details>
<details>
<summary>Golden image</summary>

</details> | engine,P3,e: impeller,team-engine,triaged-engine | low | Critical |
2,805,432,857 | godot | Floating window causing errors when running with --headless | ### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated AMD Radeon RX 7900 XT (Advanced Micro Devices, Inc.; 32.0.11037.4004) - AMD Ryzen Threadripper 7970X 32-Cores (64 Threads)
### Issue description
Exporting with --headless when your cached editor layout has a floating window will cause an error:
`ERROR: Condition "!is_window_available()" is true.`
This error is pretty much benign except being a red herring when generating builds. It smells similar to [this issue](https://github.com/godotengine/godot/issues/86806), which I'm also hitting. I'm sure both issues can be fixed by just adding a headless check somewhere, but I'm not sure where.
### Steps to reproduce
Run command and view error in output:
```
Godot_v4.3-stable_win64_console.exe --export-debug --headless Web-Debug "output\out.html"
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
savepack: begin: Packing steps: 102
savepack: step 2: Storing File: res://.godot/imported/out.apple-touch-icon.png-a0cd21a521c42b6c1e93d3f601d82855x
savepack: step 2: Storing File: res://output/out.apple-touch-icon.png.import
savepack: step 22: Storing File: res://.godot/imported/out.icon.png-333c99a988fc5e116d9b8646f4d13f35.ctex
savepack: step 22: Storing File: res://output/out.icon.png.import
savepack: step 42: Storing File: res://.godot/imported/out.png-aef36046165722104faf40ac0d340d9c.ctex
savepack: step 42: Storing File: res://output/out.png.import
savepack: step 62: Storing File: res://.godot/imported/icon.svg-218a8f2b3041327d8a5756f3a245f83b.ctex
savepack: step 62: Storing File: res://icon.svg.import
savepack: step 82: Storing File: res://.godot/exported/133200997/export-14584830dbc22d3f76a596eed5f4948e-node_3n
savepack: step 82: Storing File: res://node_3d.tscn.remap
savepack: step 82: Storing File: res://.godot/global_script_class_cache.cfg
savepack: step 82: Storing File: res://icon.svg
savepack: step 82: Storing File: res://.godot/uid_cache.bin
savepack: step 82: Storing File: res://project.binary
savepack: end
ERROR: Condition "!is_window_available()" is true.
at: restore_window_from_saved_position (editor/window_wrapper.cpp:222)
```
### Minimal reproduction project (MRP)
To make the minimal build, all I did was create and set a root scene node for startup, create a default web export (I'd assume it's an issue with all exports), make the Node window floating, and close the editor. I then ran the command and triggered the error.
[MinimumTest.zip](https://github.com/user-attachments/files/18512057/MinimumTest.zip)
Note: Anybody who downloads this will have to make the Node window floating, otherwise they won't get the error. | bug,topic:editor,needs testing | low | Critical |
2,805,441,316 | rust | Missing safety blocks in solid/io.rs | ### Location
- Second impl block for `BorrowedFd<'_>`, safety is in the first one: https://github.com/rust-lang/rust/blob/dee7d0e730a3a3ed98c89dd33c4ac16edc82de8a/library/std/src/os/solid/io.rs#L125-L126
- Near impl safety block of `OwnedFd`: https://github.com/rust-lang/rust/blob/dee7d0e730a3a3ed98c89dd33c4ac16edc82de8a/library/std/src/os/solid/io.rs#L170-L171
### Summary
This is my first issue so maybe there is no issue.
I'm checking some unsafe blocks just in case something is missing, and these are ones without safety comments.
Since I saw a different file with a safety block in the `drop` function (see: https://github.com/rust-lang/rust/blob/dee7d0e730a3a3ed98c89dd33c4ac16edc82de8a/library/core/src/cell/lazy.rs#L150-L152) I thought it may be missing here.
What guide could I check to make sure?
Thanks in advance. | C-cleanup,T-libs | low | Minor |
2,805,457,070 | next.js | Turbopack Error: "Next.js package not found" during pnpm dev | ### Link to the code that reproduces this issue
https://github.com/mjeanbosco19/events-mis
### To Reproduce
1. Clone the Repository: https://github.com/mjeanbosco19/events-mis
2. Use pnpm to install all project dependencies
3. Run the development server using Turbopack: pnpm dev
4. Observe the Error

### Current vs. Expected behavior
Current Behavior: when running the development server with pnpm dev, the process fails, and the following error message is displayed in the terminal:
FATAL: An unexpected Turbopack error occurred.
Error [TurbopackInternalError]: Next.js package not found
Debug info:
- Execution of get_entrypoints_with_issues failed
- Execution of Project::entrypoints failed
- Execution of AppProject::routes failed
- Execution of directory_tree_to_entrypoints_internal failed
- Execution of directory_tree_to_loader_tree failed
- Execution of *FileSystemPath::join failed
- Execution of get_next_package failed
- Next.js package not found
No additional logs are generated in the provided log file path:
C:\Users\pc\AppData\Local\Temp\next-panic-4bc82ebe23177e94ca24c26cf186725.log.
This prevents the development server from starting and halts all development work.
Expected Behavior: the development server should start successfully using Turbopack when running the pnpm dev command. Expected output in the terminal should look like:
▲ Next.js 15.1.4 (Turbopack)
- Local: http://localhost:3000
- Network: http://192.168.1.81:3000
- Environments: .env
✓ Starting...
The application should be accessible at the specified local and network URLs without errors.
### Provide environment information
```bash
$ npx --no-install next info
'yarn' is not recognized as an internal or external command,
operable program or batch file.
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 12013
Available CPU cores: 8
Binaries:
Node: 22.13.0
npm: 10.9.2
Yarn: N/A
pnpm: 9.15.4
Relevant Packages:
next: 15.2.0-canary.20 // Latest available version is detected (15.2.0-canary.20).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
The issue occurs locally while running the development server using pnpm dev. No deployment platform is involved; this error arises during local development. | Turbopack,linear: turbopack | low | Critical |
2,805,474,723 | excalidraw | Remote element drag updates are slow | **Reproduction**
1. Connect two excalidraw clients (A and B) through a shared scene
1. Create freedraw element on client A
1. Drag the freedraw element on client A
1. Observe the updates being reflected in slow-mo speeds on client B
**Technical details**
The issue seems to lie in shape cache misses, likely due to remote elements being always new instances and hence always missing the cache & re-generating from scratch.
**Notes**
- Probably does not affect just freedraw, though the difference there might be the most noticeable.
- See the related screenshots, capturing the bottleneck (freedraw shape re-generation) which repeats on every single remote update.

 | bug,performance โก๏ธ | low | Major |
2,805,485,385 | PowerToys | error | ### Microsoft PowerToys version
0.87.1.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Version: 0.87.1.0
OS Version: Microsoft Windows NT 10.0.22631.0
IntPtr Length: 8
x64: True
Date: 2025/01/23 7:34:19
Exception:
System.Runtime.InteropServices.COMException (0x80263001): {Desktop composition is disabled} The operation could not be completed because desktop composition is disabled. (0x80263001)
at Standard.NativeMethods.DwmExtendFrameIntoClientArea(IntPtr hwnd, MARGINS& pMarInset)
at System.Windows.Appearance.WindowBackdropManager.UpdateGlassFrame(IntPtr hwnd, WindowBackdropType backdropType)
at System.Windows.Appearance.WindowBackdropManager.ApplyBackdrop(IntPtr hwnd, WindowBackdropType backdropType)
at System.Windows.Appearance.WindowBackdropManager.SetBackdrop(Window window, WindowBackdropType backdropType)
at System.Windows.ThemeManager.ApplyStyleOnWindow(Window window, Boolean useLightColors)
at System.Windows.ThemeManager.ApplyFluentOnWindow(Window window)
at System.Windows.ThemeManager.OnWindowThemeChanged(Window window, ThemeMode oldThemeMode, ThemeMode newThemeMode)
at System.Windows.Window.CreateSourceWindow(Boolean duringShow)
at System.Windows.Window.ShowHelper(Object booleanBox)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,805,489,863 | godot | Error spam "target doesn't match" with OpenGL and global uniform of type `sampler2DArray` with no value | ### Tested versions
- Reproducible in v4.4.beta1.official [d33da79d3]
### System information
Godot v4.4.beta1 - Pop!_OS 22.04 LTS on X11 - X11 display driver, Single-window, 3 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4070 Ti - AMD Ryzen 9 7950X 16-Core Processor (32 threads)
### Issue description
With the compatibility (OpenGL) renderer, if you add a global uniform of type `sampler2DArray` (with no value) and make a scene with a shader that uses that global uniform, the following message will be spammed to the console every frame:
```
ERROR: GL ERROR: Source: OpenGL Type: Error ID: 1282 Severity: High Message: GL_INVALID_OPERATION error generated. Target doesn't match the texture's target.
at: _gl_debug_print (drivers/gles3/rasterizer_gles3.cpp:189)
```
In fact, this will happen for any texture type other than `sampler2D` (including `sampler3D` and `samplerCube`).
What I suspect is happening, is that it's using the wrong default texture, but the correct target type. If I assign a value to the global uniform then the error stops.
The MRP below will start spamming the message just on opening the project.
### Steps to reproduce
Using the MRP is easiest - just open it!
However, if you want to set it up manually:
- In **Project Settings** add a shader global of type `sampler2DArray` - I'm going to use the name "my_test_global" here. Don't assign a value to it (the issue won't show if it has a value, which is why I think this is about using the wrong default texture)
- Create a scene with a **MeshInstance3D** with a **BoxMesh**
- Create a new shader material in the surface material override, for example:
```
shader_type spatial;
global uniform sampler2DArray my_test_global;
void fragment() {
ALBEDO = vec3(1.0, 0.0, 0.0) * texture(my_test_global, vec3(UV, 0.0)).r;
}
```
- Enjoy the error spam!
### Minimal reproduction project (MRP)
[opengl-global-uniform-sampler2darray.zip](https://github.com/user-attachments/files/18512431/opengl-global-uniform-sampler2darray.zip) | bug,topic:rendering | low | Critical |
2,805,526,447 | PowerToys | Color Picker | ### Description of the new feature / enhancement
Is it possible to get a history feature/option for the Color Picker function?
### Scenario when this would be used?
It would be nice to go back and look at the colors captured over a period of time.
### Supporting information
Nothing to add here. | Needs-Triage | low | Minor |
2,805,533,453 | flutter | Unresolved reference: StringListObjectInputStream | ### What package does this bug report belong to?
shared_preferences
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
_flutterfire_internals:
dependency: transitive
description:
name: _flutterfire_internals
sha256: e4f2a7ef31b0ab2c89d2bde35ef3e6e6aff1dce5e66069c6540b0e9cfe33ee6b
url: "https://pub.dev"
source: hosted
version: "1.3.50"
animations:
dependency: "direct main"
description:
name: animations
sha256: d3d6dcfb218225bbe68e87ccf6378bbb2e32a94900722c5f81611dad089911cb
url: "https://pub.dev"
source: hosted
version: "2.0.11"
archive:
dependency: transitive
description:
name: archive
sha256: "6199c74e3db4fbfbd04f66d739e72fe11c8a8957d5f219f1f4482dbde6420b5a"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: "direct main"
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
base32:
dependency: "direct main"
description:
name: base32
sha256: ddad4ebfedf93d4500818ed8e61443b734ffe7cf8a45c668c9b34ef6adde02e2
url: "https://pub.dev"
source: hosted
version: "2.1.3"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
checked_yaml:
dependency: transitive
description:
name: checked_yaml
sha256: feb6bed21949061731a7a75fc5d2aa727cf160b91af9a3e464c5e3a32e28b5ff
url: "https://pub.dev"
source: hosted
version: "2.0.3"
cli_util:
dependency: transitive
description:
name: cli_util
sha256: ff6785f7e9e3c38ac98b2fb035701789de90154024a75b6cb926445e83197d1c
url: "https://pub.dev"
source: hosted
version: "0.4.2"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
cloud_firestore_platform_interface:
dependency: transitive
description:
name: cloud_firestore_platform_interface
sha256: c3a2987addea08273c582a91f5fb173ca81916ef6d7f8e1a6760c3a8a3a53fc7
url: "https://pub.dev"
source: hosted
version: "6.6.2"
cloud_firestore_web:
dependency: "direct main"
description:
name: cloud_firestore_web
sha256: "4c1bc404d825c68153660b12fd937b90b75cf3aa622cc077da5308ccaec17a9e"
url: "https://pub.dev"
source: hosted
version: "4.4.2"
collection:
dependency: "direct main"
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
community_charts_common:
dependency: transitive
description:
name: community_charts_common
sha256: d997ade57f15490346de46efbe23805d378a672aafbf5e47e19517964b671009
url: "https://pub.dev"
source: hosted
version: "1.0.4"
community_charts_flutter:
dependency: "direct main"
description:
name: community_charts_flutter
sha256: "4614846b99782ab79b613687704865e5468ecada3f0ad1afe1cdc3ff5b727f72"
url: "https://pub.dev"
source: hosted
version: "1.0.4"
convert:
dependency: transitive
description:
name: convert
sha256: b30acd5944035672bc15c6b7a8b47d773e41e2f17de064350988c5d02adb1c68
url: "https://pub.dev"
source: hosted
version: "3.1.2"
country_flags:
dependency: "direct main"
description:
name: country_flags
sha256: dbc4f76e7c801619b2d841023e0327191ba00663f1f1b4770394e9bc6632c444
url: "https://pub.dev"
source: hosted
version: "2.2.0"
cross_file:
dependency: transitive
description:
name: cross_file
sha256: "7caf6a750a0c04effbb52a676dce9a4a592e10ad35c34d6d2d0e4811160d5670"
url: "https://pub.dev"
source: hosted
version: "0.3.4+2"
crypto:
dependency: transitive
description:
name: crypto
sha256: "1e445881f28f22d6140f181e07737b22f1e099a5e1ff94b0af2f9e4a463f4855"
url: "https://pub.dev"
source: hosted
version: "3.0.6"
csslib:
dependency: transitive
description:
name: csslib
sha256: "09bad715f418841f976c77db72d5398dc1253c21fb9c0c7f0b0b985860b2d58e"
url: "https://pub.dev"
source: hosted
version: "1.0.2"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
curved_navigation_bar:
dependency: "direct main"
description:
name: curved_navigation_bar
sha256: bb4ab128fcb6f4a9f0f1f72d227db531818b20218984789777f049fcbf919279
url: "https://pub.dev"
source: hosted
version: "1.0.6"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
ffi:
dependency: transitive
description:
name: ffi
sha256: "16ed7b077ef01ad6170a3d0c57caa4a112a38d7a2ed5602e0aca9ca6f3d98da6"
url: "https://pub.dev"
source: hosted
version: "2.1.3"
file:
dependency: transitive
description:
name: file
sha256: a3b4f84adafef897088c160faf7dfffb7696046cb13ae90b508c2cbc95d3b8d4
url: "https://pub.dev"
source: hosted
version: "7.0.1"
file_picker:
dependency: "direct main"
description:
name: file_picker
sha256: c904b4ab56d53385563c7c39d8e9fa9af086f91495dfc48717ad84a42c3cf204
url: "https://pub.dev"
source: hosted
version: "8.1.7"
firebase_core:
dependency: "direct main"
description:
name: firebase_core
sha256: d851c1ca98fd5a4c07c747f8c65dacc2edd84a4d9ac055d32a5f0342529069f5
url: "https://pub.dev"
source: hosted
version: "3.10.1"
firebase_core_platform_interface:
dependency: transitive
description:
name: firebase_core_platform_interface
sha256: d7253d255ff10f85cfd2adaba9ac17bae878fa3ba577462451163bd9f1d1f0bf
url: "https://pub.dev"
source: hosted
version: "5.4.0"
firebase_core_web:
dependency: transitive
description:
name: firebase_core_web
sha256: fbc008cf390d909b823763064b63afefe9f02d8afdb13eb3f485b871afee956b
url: "https://pub.dev"
source: hosted
version: "2.19.0"
firebase_messaging:
dependency: "direct main"
description:
name: firebase_messaging
sha256: e20ea2a0ecf9b0971575ab3ab42a6e285a94e50092c555b090c1a588a81b4d54
url: "https://pub.dev"
source: hosted
version: "15.2.1"
firebase_messaging_platform_interface:
dependency: transitive
description:
name: firebase_messaging_platform_interface
sha256: c57a92b5ae1857ef4fe4ae2e73452b44d32e984e15ab8b53415ea1bb514bdabd
url: "https://pub.dev"
source: hosted
version: "4.6.1"
firebase_messaging_web:
dependency: transitive
description:
name: firebase_messaging_web
sha256: "83694a990d8525d6b01039240b97757298369622ca0253ad0ebcfed221bf8ee0"
url: "https://pub.dev"
source: hosted
version: "3.10.1"
fixnum:
dependency: transitive
description:
name: fixnum
sha256: b6dc7065e46c974bc7c5f143080a6764ec7a4be6da1285ececdc37be96de53be
url: "https://pub.dev"
source: hosted
version: "1.1.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_launcher_icons:
dependency: "direct dev"
description:
name: flutter_launcher_icons
sha256: "526faf84284b86a4cb36d20a5e45147747b7563d921373d4ee0559c54fcdbcea"
url: "https://pub.dev"
source: hosted
version: "0.13.1"
flutter_plugin_android_lifecycle:
dependency: transitive
description:
name: flutter_plugin_android_lifecycle
sha256: "615a505aef59b151b46bbeef55b36ce2b6ed299d160c51d84281946f0aa0ce0e"
url: "https://pub.dev"
source: hosted
version: "2.0.24"
flutter_secure_storage:
dependency: "direct main"
description:
name: flutter_secure_storage
sha256: "9cad52d75ebc511adfae3d447d5d13da15a55a92c9410e50f67335b6d21d16ea"
url: "https://pub.dev"
source: hosted
version: "9.2.4"
flutter_secure_storage_linux:
dependency: transitive
description:
name: flutter_secure_storage_linux
sha256: bf7404619d7ab5c0a1151d7c4e802edad8f33535abfbeff2f9e1fe1274e2d705
url: "https://pub.dev"
source: hosted
version: "1.2.2"
flutter_secure_storage_macos:
dependency: transitive
description:
name: flutter_secure_storage_macos
sha256: "6c0a2795a2d1de26ae202a0d78527d163f4acbb11cde4c75c670f3a0fc064247"
url: "https://pub.dev"
source: hosted
version: "3.1.3"
flutter_secure_storage_platform_interface:
dependency: transitive
description:
name: flutter_secure_storage_platform_interface
sha256: cf91ad32ce5adef6fba4d736a542baca9daf3beac4db2d04be350b87f69ac4a8
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_secure_storage_web:
dependency: transitive
description:
name: flutter_secure_storage_web
sha256: f4ebff989b4f07b2656fb16b47852c0aab9fed9b4ec1c70103368337bc1886a9
url: "https://pub.dev"
source: hosted
version: "1.2.1"
flutter_secure_storage_windows:
dependency: transitive
description:
name: flutter_secure_storage_windows
sha256: b20b07cb5ed4ed74fc567b78a72936203f587eba460af1df11281c9326cd3709
url: "https://pub.dev"
source: hosted
version: "3.1.2"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
freezed_annotation:
dependency: transitive
description:
name: freezed_annotation
sha256: c2e2d632dd9b8a2b7751117abcfc2b4888ecfe181bd9fca7170d9ef02e595fe2
url: "https://pub.dev"
source: hosted
version: "2.4.4"
google_fonts:
dependency: "direct main"
description:
name: google_fonts
sha256: b1ac0fe2832c9cc95e5e88b57d627c5e68c223b9657f4b96e1487aa9098c7b82
url: "https://pub.dev"
source: hosted
version: "6.2.1"
html:
dependency: transitive
description:
name: html
sha256: "1fc58edeaec4307368c60d59b7e15b9d658b57d7f3125098b6294153c75337ec"
url: "https://pub.dev"
source: hosted
version: "0.15.5"
http:
dependency: "direct main"
description:
name: http
sha256: fe7ab022b76f3034adc518fb6ea04a82387620e19977665ea18d30a1cf43442f
url: "https://pub.dev"
source: hosted
version: "1.3.0"
http_interceptor:
dependency: "direct main"
description:
name: http_interceptor
sha256: "288c6ded4a2c66de2730a16b30cbd29d05d042a5e61304d9b4be0e16378f4082"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
image:
dependency: transitive
description:
name: image
sha256: "8346ad4b5173924b5ddddab782fc7d8a6300178c8b1dc427775405a01701c4a6"
url: "https://pub.dev"
source: hosted
version: "4.5.2"
intl:
dependency: "direct main"
description:
name: intl
sha256: d6f56758b7d3014a48af9701c085700aac781a92a87a62b1333b46d8879661cf
url: "https://pub.dev"
source: hosted
version: "0.19.0"
jovial_misc:
dependency: transitive
description:
name: jovial_misc
sha256: "4b10a4cac4f492d9692e97699bff775efa84abdba29909124cbccf3126e31cea"
url: "https://pub.dev"
source: hosted
version: "0.9.0"
jovial_svg:
dependency: transitive
description:
name: jovial_svg
sha256: ca14d42956b9949c36333065c9141f100e930c918f57f4bd8dd59d35581bd3fc
url: "https://pub.dev"
source: hosted
version: "1.1.24"
js:
dependency: transitive
description:
name: js
sha256: f2c445dce49627136094980615a031419f7f3eb393237e4ecd97ac15dea343f3
url: "https://pub.dev"
source: hosted
version: "0.6.7"
json_annotation:
dependency: transitive
description:
name: json_annotation
sha256: "1ce844379ca14835a50d2f019a3099f419082cfdd231cd86a142af94dd5c6bb1"
url: "https://pub.dev"
source: hosted
version: "4.9.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
logging:
dependency: transitive
description:
name: logging
sha256: c8245ada5f1717ed44271ed1c26b8ce85ca3228fd2ffdb75468ab01979309d61
url: "https://pub.dev"
source: hosted
version: "1.3.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
material_symbols_icons:
dependency: "direct main"
description:
name: material_symbols_icons
sha256: "89aac72d25dd49303f71b3b1e70f8374791846729365b25bebc2a2531e5b86cd"
url: "https://pub.dev"
source: hosted
version: "4.2801.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
mime:
dependency: transitive
description:
name: mime
sha256: "41a20518f0cb1256669420fdba0cd90d21561e560ac240f26ef8322e45bb7ed6"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
nested:
dependency: transitive
description:
name: nested
sha256: "03bac4c528c64c95c722ec99280375a6f2fc708eec17c7b3f07253b626cd2a20"
url: "https://pub.dev"
source: hosted
version: "1.0.0"
otp:
dependency: "direct main"
description:
name: otp
sha256: fcb7f21e30c4cd80a0a982c27a9b75151cc1fe3d8f7ee680673c090171b1ad55
url: "https://pub.dev"
source: hosted
version: "3.1.4"
package_info_plus:
dependency: "direct main"
description:
name: package_info_plus
sha256: "739e0a5c3c4055152520fa321d0645ee98e932718b4c8efeeb51451968fe0790"
url: "https://pub.dev"
source: hosted
version: "8.1.3"
package_info_plus_platform_interface:
dependency: transitive
description:
name: package_info_plus_platform_interface
sha256: a5ef9986efc7bf772f2696183a3992615baa76c1ffb1189318dd8803778fb05b
url: "https://pub.dev"
source: hosted
version: "3.0.2"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
path_provider:
dependency: "direct main"
description:
name: path_provider
sha256: "50c5dd5b6e1aaf6fb3a78b33f6aa3afca52bf903a8a5298f53101fdaee55bbcd"
url: "https://pub.dev"
source: hosted
version: "2.1.5"
path_provider_android:
dependency: transitive
description:
name: path_provider_android
sha256: "4adf4fd5423ec60a29506c76581bc05854c55e3a0b72d35bb28d661c9686edf2"
url: "https://pub.dev"
source: hosted
version: "2.2.15"
path_provider_foundation:
dependency: transitive
description:
name: path_provider_foundation
sha256: "4843174df4d288f5e29185bd6e72a6fbdf5a4a4602717eed565497429f179942"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
path_provider_linux:
dependency: transitive
description:
name: path_provider_linux
sha256: f7a1fe3a634fe7734c8d3f2766ad746ae2a2884abe22e241a8b301bf5cac3279
url: "https://pub.dev"
source: hosted
version: "2.2.1"
path_provider_platform_interface:
dependency: transitive
description:
name: path_provider_platform_interface
sha256: "88f5779f72ba699763fa3a3b06aa4bf6de76c8e5de842cf6f29e2e06476c2334"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
path_provider_windows:
dependency: transitive
description:
name: path_provider_windows
sha256: bd6f00dbd873bfb70d0761682da2b3a2c2fccc2b9e84c495821639601d81afe7
url: "https://pub.dev"
source: hosted
version: "2.3.0"
petitparser:
dependency: transitive
description:
name: petitparser
sha256: c15605cd28af66339f8eb6fbe0e541bfe2d1b72d5825efc6598f3e0a31b9ad27
url: "https://pub.dev"
source: hosted
version: "6.0.2"
platform:
dependency: transitive
description:
name: platform
sha256: "5d6b1b0036a5f331ebc77c850ebc8506cbc1e9416c27e59b439f917a902a4984"
url: "https://pub.dev"
source: hosted
version: "3.1.6"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
posix:
dependency: transitive
description:
name: posix
sha256: a0117dc2167805aa9125b82eee515cc891819bac2f538c83646d355b16f58b9a
url: "https://pub.dev"
source: hosted
version: "6.0.1"
provider:
dependency: "direct main"
description:
name: provider
sha256: c8a055ee5ce3fd98d6fc872478b03823ffdb448699c6ebdbbc71d59b596fd48c
url: "https://pub.dev"
source: hosted
version: "6.1.2"
share_plus:
dependency: "direct main"
description:
name: share_plus
sha256: fce43200aa03ea87b91ce4c3ac79f0cecd52e2a7a56c7a4185023c271fbfa6da
url: "https://pub.dev"
source: hosted
version: "10.1.4"
share_plus_platform_interface:
dependency: transitive
description:
name: share_plus_platform_interface
sha256: cc012a23fc2d479854e6c80150696c4a5f5bb62cb89af4de1c505cf78d0a5d0b
url: "https://pub.dev"
source: hosted
version: "5.0.2"
shared_preferences:
dependency: "direct main"
description:
name: shared_preferences
sha256: a752ce92ea7540fc35a0d19722816e04d0e72828a4200e83a98cf1a1eb524c9a
url: "https://pub.dev"
source: hosted
version: "2.3.5"
shared_preferences_android:
dependency: transitive
description:
name: shared_preferences_android
sha256: "138b7bbbc7f59c56236e426c37afb8f78cbc57b094ac64c440e0bb90e380a4f5"
url: "https://pub.dev"
source: hosted
version: "2.4.2"
shared_preferences_foundation:
dependency: transitive
description:
name: shared_preferences_foundation
sha256: "6a52cfcdaeac77cad8c97b539ff688ccfc458c007b4db12be584fbe5c0e49e03"
url: "https://pub.dev"
source: hosted
version: "2.5.4"
shared_preferences_linux:
dependency: transitive
description:
name: shared_preferences_linux
sha256: "580abfd40f415611503cae30adf626e6656dfb2f0cee8f465ece7b6defb40f2f"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
shared_preferences_platform_interface:
dependency: transitive
description:
name: shared_preferences_platform_interface
sha256: "57cbf196c486bc2cf1f02b85784932c6094376284b3ad5779d1b1c6c6a816b80"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
shared_preferences_web:
dependency: transitive
description:
name: shared_preferences_web
sha256: d2ca4132d3946fec2184261726b355836a82c33d7d5b67af32692aff18a4684e
url: "https://pub.dev"
source: hosted
version: "2.4.2"
shared_preferences_windows:
dependency: transitive
description:
name: shared_preferences_windows
sha256: "94ef0f72b2d71bc3e700e025db3710911bd51a71cefb65cc609dd0d9a982e3c1"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
simple_gesture_detector:
dependency: transitive
description:
name: simple_gesture_detector
sha256: ba2cd5af24ff20a0b8d609cec3f40e5b0744d2a71804a2616ae086b9c19d19a3
url: "https://pub.dev"
source: hosted
version: "0.2.1"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
sodium:
dependency: "direct main"
description:
name: sodium
sha256: "0149888bdf1b4ba32d368036294d312f3a1c3049578498485bbfbb098402cdeb"
url: "https://pub.dev"
source: hosted
version: "3.3.0"
sodium_libs:
dependency: "direct main"
description:
name: sodium_libs
sha256: ee77a8b0290d46554b10843301d5809d7c6e6be8a84316ff3ff9d656004a186e
url: "https://pub.dev"
source: hosted
version: "3.2.0+1"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
sprintf:
dependency: transitive
description:
name: sprintf
sha256: "1fc9ffe69d4df602376b52949af107d8f5703b77cda567c4d7d86a0693120f23"
url: "https://pub.dev"
source: hosted
version: "7.0.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
synchronized:
dependency: transitive
description:
name: synchronized
sha256: "69fe30f3a8b04a0be0c15ae6490fc859a78ef4c43ae2dd5e8a623d45bfcf9225"
url: "https://pub.dev"
source: hosted
version: "3.3.0+3"
table_calendar:
dependency: "direct main"
description:
name: table_calendar
sha256: b2896b7c86adf3a4d9c911d860120fe3dbe03c85db43b22fd61f14ee78cdbb63
url: "https://pub.dev"
source: hosted
version: "3.1.3"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
timezone:
dependency: transitive
description:
name: timezone
sha256: "2236ec079a174ce07434e89fcd3fcda430025eb7692244139a9cf54fdcf1fc7d"
url: "https://pub.dev"
source: hosted
version: "0.9.4"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
url_launcher:
dependency: "direct main"
description:
name: url_launcher
sha256: "9d06212b1362abc2f0f0d78e6f09f726608c74e3b9462e8368bb03314aa8d603"
url: "https://pub.dev"
source: hosted
version: "6.3.1"
url_launcher_android:
dependency: transitive
description:
name: url_launcher_android
sha256: "6fc2f56536ee873eeb867ad176ae15f304ccccc357848b351f6f0d8d4a40d193"
url: "https://pub.dev"
source: hosted
version: "6.3.14"
url_launcher_ios:
dependency: transitive
description:
name: url_launcher_ios
sha256: "16a513b6c12bb419304e72ea0ae2ab4fed569920d1c7cb850263fe3acc824626"
url: "https://pub.dev"
source: hosted
version: "6.3.2"
url_launcher_linux:
dependency: transitive
description:
name: url_launcher_linux
sha256: "4e9ba368772369e3e08f231d2301b4ef72b9ff87c31192ef471b380ef29a4935"
url: "https://pub.dev"
source: hosted
version: "3.2.1"
url_launcher_macos:
dependency: transitive
description:
name: url_launcher_macos
sha256: "17ba2000b847f334f16626a574c702b196723af2a289e7a93ffcb79acff855c2"
url: "https://pub.dev"
source: hosted
version: "3.2.2"
url_launcher_platform_interface:
dependency: transitive
description:
name: url_launcher_platform_interface
sha256: "552f8a1e663569be95a8190206a38187b531910283c3e982193e4f2733f01029"
url: "https://pub.dev"
source: hosted
version: "2.3.2"
url_launcher_web:
dependency: transitive
description:
name: url_launcher_web
sha256: "772638d3b34c779ede05ba3d38af34657a05ac55b06279ea6edd409e323dca8e"
url: "https://pub.dev"
source: hosted
version: "2.3.3"
url_launcher_windows:
dependency: transitive
description:
name: url_launcher_windows
sha256: "3284b6d2ac454cf34f114e1d3319866fdd1e19cdc329999057e44ffe936cfa77"
url: "https://pub.dev"
source: hosted
version: "3.1.4"
uuid:
dependency: transitive
description:
name: uuid
sha256: a5be9ef6618a7ac1e964353ef476418026db906c4facdedaa299b7a2e71690ff
url: "https://pub.dev"
source: hosted
version: "4.5.1"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web_socket:
dependency: transitive
description:
name: web_socket
sha256: "3c12d96c0c9a4eec095246debcea7b86c0324f22df69893d538fcc6f1b8cce83"
url: "https://pub.dev"
source: hosted
version: "0.1.6"
web_socket_channel:
dependency: "direct main"
description:
name: web_socket_channel
sha256: "0b8e2457400d8a859b7b2030786835a28a8e80836ef64402abef392ff4f1d0e5"
url: "https://pub.dev"
source: hosted
version: "3.0.2"
weekday_selector:
dependency: "direct main"
description:
name: weekday_selector
sha256: "783954997aa30a8890b24196784543752dbf179282ca4d9139ecd70d63fea99e"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
win32:
dependency: transitive
description:
name: win32
sha256: "154360849a56b7b67331c21f09a386562d88903f90a1099c5987afc1912e1f29"
url: "https://pub.dev"
source: hosted
version: "5.10.0"
xdg_directories:
dependency: transitive
description:
name: xdg_directories
sha256: "7a3f37b05d989967cdddcbb571f1ea834867ae2faa29725fd085180e0883aa15"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
xml:
dependency: transitive
description:
name: xml
sha256: b015a8ad1c488f66851d762d3090a21c600e479dc75e68328c52774040cf9226
url: "https://pub.dev"
source: hosted
version: "6.5.0"
yaml:
dependency: transitive
description:
name: yaml
sha256: b9da305ac7c39faa3f030eccd175340f968459dae4af175130b3fc47e40d76ce
url: "https://pub.dev"
source: hosted
version: "3.1.3"
sdks:
dart: ">=3.5.0 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
Build Android project
### Expected results
Successful build
### Actual results
Build fails with unresolved reference: StringListObjectInputStream
### Code sample
<details open><summary>Code sample</summary>
```dart
Ran pub upgrade, which pulled in the latest shared_preferences packages (see pubspec.lock above)
```
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib\main.dart on Pixel 7 in debug mode...
Running Gradle task 'assembleDebug'...
e: file:///C:/Users/Rick/AppData/Local/Pub/Cache/hosted/pub.dev/shared_preferences_android-2.4.2/android/src/main/kotlin/io/flutter/plugins/sharedpreferences/SharedPreferencesPlugin.kt:424:18 Unresolved reference: StringListObjectInputStream
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':shared_preferences_android:compileDebugKotlin'.
> A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
> Compilation error. See log for more details
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 1m 57s
Error: Gradle task assembleDebug failed with exit code 1
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.19045.5371], locale en-US)
โข Flutter version 3.24.3 on channel stable at C:\Users\Rick\flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 2663184aa7 (4 months ago), 2024-09-11 16:27:48 -0500
โข Engine revision 36335019a8
โข Dart version 3.5.3
โข DevTools version 2.37.3
[โ] Windows Version (Installed version of Windows is version 10 or higher)
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at C:\Users\Rick\AppData\Local\Android\sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: C:\Program Files\Android\Android Studio1\jbr\bin\java
โข Java version OpenJDK Runtime Environment (build 17.0.9+0--11185874)
โข All Android licenses accepted.
[โ] Chrome - develop for the web
โข Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[โ] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.3.6)
โข Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
โข Visual Studio Community 2022 version 17.3.32929.385
โข Windows 10 SDK version 10.0.19041.0
[โ] Android Studio (version 2024.1)
โข Android Studio at C:\Program Files\Android\Android Studio1
โข Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.9+0--11185874)
[โ] Connected device (4 available)
โข Pixel 7 (mobile) โข 2A101FDH2001YA โข android-arm64 โข Android 15 (API 35)
โข Windows (desktop) โข windows โข windows-x64 โข Microsoft Windows [Version 10.0.19045.5371]
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 131.0.6778.265
โข Edge (web) โข edge โข web-javascript โข Microsoft Edge 128.0.2739.67
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
Process finished with exit code 0
```
</details>
| platform-android,p: shared_preferences,package,team-ecosystem,P2,fyi-android | low | Critical |
2,805,554,279 | vscode | c# dev kit is downloading dotnet 8.0 version and failing to open the solution. | Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96.4
- OS Version: MAC Darwin x64 24.2.0
Steps to Reproduce:
1. The .NET Install Tool output shows: ms-dotnettools.csdevkit requested to download the .NET ASP.NET Runtime.
Downloading .NET version(s) 8.0.12~x64~aspnetcore ........... Done!
.NET 8.0.12~x64~aspnetcore executable path: /Users/mpadmaos10/Library/Application Support/Code/User/globalStorage/ms-dotnettools.vscode-dotnet-runtime/.dotnet/8.0.12~x64~aspnetcore/dotnet
2. While building the solution, it throws the error below.
Activating the "Microsoft.VisualStudio.CpsProjectIconSourceService (0.1)" service failed.
| triage-needed | low | Critical |
2,805,567,817 | vscode | Updating issue | The message "Restart Visual Studio Code to apply the latest update" keeps popping up even after restarting. | info-needed | low | Minor |
2,805,571,274 | pytorch | aot inductor intermediate tensor debug printing (setting 2) not working | ### ๐ Describe the bug
Code:
```python
from torch._inductor.fuzzer import ConfigFuzzer, visualize_results #, create_simple_test_model_gpu
import torch
def create_simple_test_model_gpu():
"""Create a simple test model function for demonstration."""
batch_size = 32
seq_length = 50
hidden_size = 768
def test_fn():
inp = torch.randn(batch_size, seq_length, hidden_size, device="cuda")
weight = torch.randn(hidden_size, hidden_size, device="cuda")
matmul_output = inp @ weight
final_output = torch.nn.LayerNorm(hidden_size, device="cuda")(matmul_output)
return True
return test_fn
tf = create_simple_test_model_gpu()
comp = torch.compile(options={"aot_inductor.debug_intermediate_value_printer": "2"})(tf)
comp()
```
Error msg:
```
Traceback (most recent call last):
File "/home/gabeferns/org/debug/fuzzer-0/bug.py", line 23, in <module>
comp()
File "/home/gabeferns/pt-envs/fuzzer/torch/_dynamo/eval_frame.py", line 566, in _fn
return fn(*args, **kwargs)
File "/home/gabeferns/org/debug/fuzzer-0/bug.py", line 11, in test_fn
def test_fn():
File "/home/gabeferns/pt-envs/fuzzer/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/aot_autograd.py", line 1199, in forward
return compiled_fn(full_args)
File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 326, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 687, in inner_fn
outs = compiled_fn(args)
File "/home/gabeferns/pt-envs/fuzzer/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 493, in wrapper
return compiled_fn(runtime_args)
File "/home/gabeferns/pt-envs/fuzzer/torch/_inductor/output_code.py", line 457, in __call__
return self.current_callable(inputs)
File "/tmp/torchinductor_gabeferns/us/cusdgx2jfgdi7skkxb27i4l7xuwe2afa2blsn3kgbqsuldogqqin.py", line 133, in call
_print_debugging_tensor_value_info("inductor: before_launch - triton_poi_fused_randn_0 - 0", 0)
File "/home/gabeferns/pt-envs/fuzzer/torch/_inductor/codegen/debug_utils.py", line 26, in _print_debugging_tensor_value_info
numel = arg.float().numel()
AttributeError: 'int' object has no attribute 'float'
```
I have a fix incoming.
### Versions
git hash: 40e27fbcf2b
cc @chauhang @penguinwu | triaged,oncall: pt2 | low | Critical |
2,805,612,037 | pytorch | XPU - UserWarning: Failed to initialize XPU devices. when run on a machine without an Intel GPU driver | ### 🐛 Describe the bug
Based on the following issue: https://github.com/pytorch/pytorch/issues/145290#issuecomment-2606374858
Tested on Linux x86 without the Intel driver installed:
```
pip install --pre torch --index-url https://download.pytorch.org/whl/test/xpu
Looking in indexes: https://download.pytorch.org/whl/test/xpu
Collecting torch
Obtaining dependency information for torch from https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl.metadata
Downloading https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl.metadata (27 kB)
Collecting jinja2
Downloading https://download.pytorch.org/whl/test/Jinja2-3.1.4-py3-none-any.whl (133 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 133.3/133.3 kB 28.6 MB/s eta 0:00:00
Collecting typing-extensions>=4.10.0
Downloading https://download.pytorch.org/whl/test/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting networkx
Downloading https://download.pytorch.org/whl/test/networkx-3.3-py3-none-any.whl (1.7 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.7/1.7 MB 114.9 MB/s eta 0:00:00
Collecting intel-cmplr-lib-rt==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_cmplr_lib_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (45.9 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 45.9/45.9 MB 57.1 MB/s eta 0:00:00
Collecting tcmlib==1.2.0
Downloading https://download.pytorch.org/whl/test/xpu/tcmlib-1.2.0-py2.py3-none-manylinux_2_28_x86_64.whl (4.2 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 4.2/4.2 MB 117.4 MB/s eta 0:00:00
Collecting pytorch-triton-xpu==3.2.0
Obtaining dependency information for pytorch-triton-xpu==3.2.0 from https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl.metadata
Downloading https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl.metadata (1.3 kB)
Collecting sympy==1.13.1
Downloading https://download.pytorch.org/whl/test/sympy-1.13.1-py3-none-any.whl (6.2 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 6.2/6.2 MB 134.6 MB/s eta 0:00:00
Collecting fsspec
Downloading https://download.pytorch.org/whl/test/fsspec-2024.6.1-py3-none-any.whl (177 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 177.6/177.6 kB 47.3 MB/s eta 0:00:00
Collecting filelock
Downloading https://download.pytorch.org/whl/test/filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting intel-cmplr-lib-ur==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_cmplr_lib_ur-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (25.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 25.1/25.1 MB 88.2 MB/s eta 0:00:00
Collecting umf==0.9.1
Downloading https://download.pytorch.org/whl/test/xpu/umf-0.9.1-py2.py3-none-manylinux_2_28_x86_64.whl (161 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 161.6/161.6 kB 43.3 MB/s eta 0:00:00
Collecting intel-pti==0.10.0
Downloading https://download.pytorch.org/whl/test/xpu/intel_pti-0.10.0-py2.py3-none-manylinux_2_28_x86_64.whl (651 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 651.8/651.8 kB 104.1 MB/s eta 0:00:00
Collecting intel-cmplr-lic-rt==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_cmplr_lic_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (18 kB)
Collecting intel-sycl-rt==2025.0.2
Downloading https://download.pytorch.org/whl/test/xpu/intel_sycl_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (12.4 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 12.4/12.4 MB 135.9 MB/s eta 0:00:00
Collecting packaging
Downloading https://download.pytorch.org/whl/test/packaging-22.0-py3-none-any.whl (42 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 42.6/42.6 kB 14.9 MB/s eta 0:00:00
Collecting mpmath<1.4,>=1.1.0
Downloading https://download.pytorch.org/whl/test/mpmath-1.3.0-py3-none-any.whl (536 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 536.2/536.2 kB 47.2 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0
Downloading https://download.pytorch.org/whl/test/MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Downloading https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl (1029.5 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 1.0/1.0 GB 694.9 kB/s eta 0:00:00
Downloading https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl (348.4 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 348.4/348.4 MB 3.0 MB/s eta 0:00:00
Using cached https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl (1029.5 MB)
Using cached https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl (348.4 MB)
Installing collected packages: tcmlib, mpmath, intel-pti, intel-cmplr-lic-rt, intel-cmplr-lib-rt, umf, typing-extensions, sympy, packaging, networkx, MarkupSafe, fsspec, filelock, pytorch-triton-xpu, jinja2, intel-cmplr-lib-ur, intel-sycl-rt, torch
Successfully installed MarkupSafe-2.1.5 filelock-3.13.1 fsspec-2024.6.1 intel-cmplr-lib-rt-2025.0.2 intel-cmplr-lib-ur-2025.0.2 intel-cmplr-lic-rt-2025.0.2 intel-pti-0.10.0 intel-sycl-rt-2025.0.2 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 packaging-22.0 pytorch-triton-xpu-3.2.0 sympy-1.13.1 tcmlib-1.2.0 torch-2.6.0+xpu typing-extensions-4.12.2 umf-0.9.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@b90276700e5d:/# python
Python 3.10.16 (main, Jan 14 2025, 05:29:27) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
/usr/local/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
>>> torch.__version__
'2.6.0+xpu'
>>> exit()
root@b90276700e5d:/# git clone https://github.com/pytorch/pytorch.git
Cloning into 'pytorch'...
remote: Enumerating objects: 1047258, done.
remote: Counting objects: 100% (959/959), done.
remote: Compressing objects: 100% (445/445), done.
remote: Total 1047258 (delta 717), reused 565 (delta 514), pack-reused 1046299 (from 3)
Receiving objects: 100% (1047258/1047258), 955.33 MiB | 47.36 MiB/s, done.
Resolving deltas: 100% (838919/838919), done.
Updating files: 100% (17938/17938), done.
root@b90276700e5d:/# cd pytorch/.ci/pytorch/
root@b90276700e5d:/pytorch/.ci/pytorch# cd smoke_test/
root@b90276700e5d:/pytorch/.ci/pytorch/smoke_test# pip install numpy
Collecting numpy
Downloading numpy-2.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 16.4/16.4 MB 103.5 MB/s eta 0:00:00
Installing collected packages: numpy
Successfully installed numpy-2.2.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[notice] A new release of pip is available: 23.0.1 -> 24.3.1
[notice] To update, run: pip install --upgrade pip
root@b90276700e5d:/pytorch/.ci/pytorch/smoke_test# python smoke_test.py --package torchonly
torch: 2.6.0+xpu
ATen/Parallel:
at::get_num_threads() : 4
at::get_num_interop_threads() : 4
OpenMP 201511 (a.k.a. OpenMP 4.5)
omp_get_max_threads() : 4
Intel(R) oneAPI Math Kernel Library Version 2025.0.1-Product Build 20241031 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 4
Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
std::thread::hardware_concurrency() : 8
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
Skip version check for channel None as stable version is None
Testing smoke_test_conv2d
Testing smoke_test_linalg on cpu
Testing smoke_test_compile for cpu and torch.float16
/usr/local/lib/python3.10/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:54.)
return torch._C._xpu_getDeviceCount()
/usr/local/lib/python3.10/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:54.)
return torch._C._xpu_getDeviceCount()
Testing smoke_test_compile for cpu and torch.float32
Testing smoke_test_compile for cpu and torch.float64
Picked CPU ISA VecAVX512 bit width 512
Testing smoke_test_compile with mode 'max-autotune' for torch.float32
```
The following warning is generated when testing torch.compile:
```
Testing smoke_test_compile for cpu and torch.float16
/usr/local/lib/python3.10/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:54.)
```
Instead of generating the warning every time, could we perhaps display it once during torch import, and fall back to CPU during the torch.compile call?
### Versions
2.6.0 and 2.7.0
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,805,634,028 | node | Add Option to Skip Initial Run in Watch Mode | ### What is the problem this feature will solve?
Currently, when running a Node.js application in `--watch` mode, the application executes immediately upon startup. While this is useful in many scenarios, there are cases where skipping the initial run would be beneficial, such as:
- When setting up for long-running tasks that do not need immediate execution.
- When using watch mode solely for monitoring changes without requiring an initial execution.
- To avoid redundant initialization when the program has already been run before starting the watch mode.
### What is the feature you are proposing to solve the problem?
Introduce a new flag, such as `--skip-initial-run`, to the `--watch` mode. When this flag is enabled:
1. The application does not execute immediately upon startup.
2. The watcher waits for the first file change to trigger an execution.
### Example Usage
```bash
node --watch --skip-initial-run app.js
```
### Benefits
- Provides greater flexibility in watch mode usage.
- Avoids unnecessary initialization or resource usage when immediate execution is not needed.
### Alternatives
- Users can implement workarounds in their code (e.g., using a custom flag in the application), but this increases complexity and is not a clean solution.
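The proposed flag's semantics can be illustrated with a tiny watcher loop (Python sketch for illustration only; Node's `--watch` implementation is obviously different):

```python
# Hypothetical model of the proposed behavior: with skip_initial_run=True,
# the callback fires only on file-change events, never at startup.
def watch(events, callback, skip_initial_run=False):
    runs = []
    if not skip_initial_run:
        runs.append(callback("startup"))  # current --watch behavior
    for ev in events:
        runs.append(callback(ev))         # re-run on each change
    return runs

run = lambda reason: f"ran: {reason}"
assert watch(["change1"], run) == ["ran: startup", "ran: change1"]
assert watch(["change1"], run, skip_initial_run=True) == ["ran: change1"]
```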
### Additional Context
This feature would align with similar functionality in other tools like nodemon, which supports skipping the initial run using a configuration option.
Would this be a feasible addition? Happy to discuss and contribute!
Thank you for considering this feature request!
### What alternatives have you considered?
_No response_ | feature request | low | Minor |
2,805,677,144 | pytorch | seg fault in aot_inductor_package on arm GPU with 2.6.0 RC | ### 🐛 Describe the bug
When running internal tests for the 2.6.0 RC ARM wheels (https://download.pytorch.org/whl/test/torch/) on Grace Hopper (1 GPU), we get seg faults/bus errors that occur alternately on the test below.
Reproduced errors on both CUDA and CPU wheels.
```
python test/inductor/test_aot_inductor_package.py -k test_add -k cpu
```
Error:
```
Running only test/inductor/test_aot_inductor_package.py::TestAOTInductorPackage_cpu::test_add
Running 1 items in this shard
Fatal Python error: Segmentation fault
```
Backtrace:
```
(gdb) bt
#0 0x0000ef67c4019c54 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
from /tmp/KFSXvp/data/aotinductor/model/cnxl5jak4cmycxqpoiy3wdbyygqqgbph4tl5wjzolu24zpiqo25v.so
#1 0x0000ef67c40312f4 in torch::aot_inductor::AOTInductorModelContainer::AOTInductorModelContainer(unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const&) ()
from /tmp/KFSXvp/data/aotinductor/model/cnxl5jak4cmycxqpoiy3wdbyygqqgbph4tl5wjzolu24zpiqo25v.so
#2 0x0000ef67c4017744 in AOTInductorModelContainerCreateWithDevice ()
from /tmp/KFSXvp/data/aotinductor/model/cnxl5jak4cmycxqpoiy3wdbyygqqgbph4tl5wjzolu24zpiqo25v.so
#3 0x0000ef6b33cbc464 in torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#4 0x0000ef6b33cbcf80 in torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long) () from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#5 0x0000ef6b33cbd038 in torch::inductor::(anonymous namespace)::create_aoti_runner_cpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
```
Output from our nightly CI with cuda-gdb
```
Program terminated with signal SIGBUS, Bus error.
#0 0x0000eca2a34b7628 in ?? () from /usr/lib/aarch64-linux-gnu/libc.so.6
[Current thread is 1 (Thread 0xeca2a37b4580 (LWP 706))]
(cuda-gdb) where
#0 0x0000eca2a34b7628 in ?? () from /usr/lib/aarch64-linux-gnu/libc.so.6
#1 0x0000eca2a346cb3c in raise () from /usr/lib/aarch64-linux-gnu/libc.so.6
#2 <signal handler called>
#3 0x0000ec9e30089c54 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() () from /tmp/6cr6fe/data/aotinductor/model/csovzderskxoxbxohbxsgppmjvvjbbnermfydfa4ubnngqepcq2c.so
#4 0x0000ec9e300a12f4 in torch::aot_inductor::AOTInductorModelContainer::AOTInductorModelContainer(unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const&) () from /tmp/6cr6fe/data/aotinductor/model/csovzderskxoxbxohbxsgppmjvvjbbnermfydfa4ubnngqepcq2c.so
#5 0x0000ec9e30087744 in AOTInductorModelContainerCreateWithDevice () from /tmp/6cr6fe/data/aotinductor/model/csovzderskxoxbxohbxsgppmjvvjbbnermfydfa4ubnngqepcq2c.so
#6 0x0000eca292c1c7e8 in torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#7 0x0000eca292c1d2b8 in torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#8 0x0000eca292c1d368 in torch::inductor::(anonymous namespace)::create_aoti_runner_cpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#9 0x0000eca292c19f2c in torch::inductor::AOTIModelPackageLoader::AOTIModelPackageLoader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_cpu.so
#10 0x0000eca2976da644 in pybind11::cpp_function::initialize<pybind11::detail::initimpl::constructor<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>::execute<pybind11::class_<torch::inductor::AOTIModelPackageLoader>, , 0>(pybind11::class_<torch::inductor::AOTIModelPackageLoader>&)::{lambda(pybind11::detail::value_and_holder&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)#1}, void, pybind11::detail::value_and_holder&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::inductor::AOTIModelPackageLoader>&&, void (*)(pybind11::detail::value_and_holder&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) ()
from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_python.so
#11 0x0000eca29719e430 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_python.so
#12 0x00000000005041c4 in ?? ()
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @atalman @malfet @ptrblck @nWEIdia @xwang233
### Versions
Reproduced with plain ubuntu 24.04 container with 2.6.0 RC wheel | high priority,module: crash,triaged,oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,805,677,144 | ollama | Ollama does not perform structured output correctly. | ### What is the issue?
request:
```python
{
    "model": "llama3.2",
    "messages": datas + [
        {
            "role": "user",
            "content": input_data,
            "images": [screenshot_base64]
        }
    ],
    "stream": False,
    "format": {
        "type": "object",
        "reply": {"type": "string"},
        "properties": {
            "operations": {
                "type": "array",
                "instruct": {
                    "type": "object",
                    "functions": {
                        "type": "object",
                        "function_name": {"type": "string"},
                        "parameter": {"type": "array", "items": {"type": "string"}}
                    }
                }
            }
        }
    },
    "required": ["reply"]
}
```
reply:
`{"operations": []}`
Ollama did not reply with the required "reply".
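A likely cause: in standard JSON Schema, field definitions must live under `properties`, and `required` must sit inside the schema object itself. In the request above, `reply` is a direct child of `format` and `required` is a sibling of `format`, so a schema-following implementation would ignore both. A hedged sketch of how the `format` value might be restructured (field names kept from the request, with `instruct` interpreted as `items`; not verified against Ollama):

```python
# Hypothetical restructuring of the request's "format" value as standard
# JSON Schema: every field lives under "properties", and "required"
# belongs to the same schema object. Not verified against Ollama.
format_schema = {
    "type": "object",
    "properties": {
        "reply": {"type": "string"},
        "operations": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "function_name": {"type": "string"},
                    "parameter": {"type": "array", "items": {"type": "string"}},
                },
            },
        },
    },
    "required": ["reply"],
}

assert "reply" in format_schema["properties"]       # field is now discoverable
assert format_schema["required"] == ["reply"]        # constraint sits inside the schema
```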
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7 | bug | low | Minor |
2,805,681,909 | ui | [chore]: Upgrade to TailwindCSS v4 | ### Feature description
Request to upgrade the project dependency to TailwindCSS v4 to ensure compatibility, leverage new features, and improve performance.
Follow the [TailwindCSS Upgrade Guide](https://tailwindcss.com/docs/upgrade-guide#using-the-upgrade-tool) to streamline the update process.
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues and PRs | area: request,tailwind | high | Major |
2,805,693,005 | pytorch | [dynamo] `random.Random` gives wrong result on second call | `random.Random` calls return incorrect results from the second call onward. This is because Dynamo handles RNG calls on `random.Random` objects by creating a new object and applying the RNG state captured at trace time, which is not correct in general.
Example failing test:
```python
def test_random_object_repeat(self):
    def fn(x, rng):
        return x + rng.randint(1, 100)

    inp = torch.randn(3, 3)
    opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
    rng1 = random.Random(0)
    rng2 = random.Random(0)
    self.assertEqual(fn(inp, rng1), opt_fn(inp, rng2))
    with torch.compiler.set_stance("fail_on_recompile"):
        self.assertEqual(fn(inp, rng1), opt_fn(inp, rng2))
        self.assertEqual(fn(inp, rng1), opt_fn(inp, rng2))
    self.assertEqual(rng1.getstate(), rng2.getstate())
```
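The failure mode can be illustrated with plain `random` (a hypothetical sketch of the mechanism, not Dynamo's actual code): re-applying a state snapshot captured at trace time freezes the stream, so only the first call can agree with eager execution.

```python
import random

# Sketch: snapshot the RNG state at "trace time" and replay it on every
# call. The stream never advances, which is why only the first compiled
# call can match eager execution.
rng = random.Random(0)
traced_state = rng.getstate()          # state captured during tracing

def frozen_draw():
    r = random.Random()
    r.setstate(traced_state)           # replayed on every call
    return r.randint(1, 100)

draws = [frozen_draw() for _ in range(3)]
assert draws[0] == draws[1] == draws[2]    # frozen stream repeats itself

eager = random.Random(0)
assert eager.randint(1, 100) == draws[0]   # first call agrees with eager
eager.randint(1, 100)                      # eager state advances...
assert eager.getstate() != traced_state    # ...so later calls must diverge
```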
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,805,694,771 | pytorch | [dynamo] fix graph break on random.random | We used to not graph break on `random.random`, but we do now.
```python
import random
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    return x + random.random()

fn(torch.ones(5, 5))
```
This does not happen to other supported random functions - `randint`, `randrange`, `uniform`, listed in https://github.com/pytorch/pytorch/blob/d95a6babcc581ff06d1b914ee9f92c81b2e850e2/torch/_dynamo/variables/user_defined.py#L743.
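Until then, one hypothetical workaround is `random.uniform(0.0, 1.0)`, since `uniform` is on the supported list above and, in CPython, is implemented as `a + (b - a) * self.random()`, so with these arguments it consumes the same underlying stream:

```python
import random

# Hypothetical workaround (not an official recommendation): replace
# random.random() with random.uniform(0.0, 1.0), which is on Dynamo's
# supported list. In CPython, uniform(0.0, 1.0) reduces exactly to
# 0.0 + 1.0 * random(), so the drawn values are identical.
x = random.uniform(0.0, 1.0)
assert 0.0 <= x <= 1.0

r1 = random.Random(0)
r2 = random.Random(0)
assert r1.uniform(0.0, 1.0) == r2.random()  # same stream, same value
```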
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,module: regression,oncall: pt2,module: dynamo | low | Minor |
2,805,697,173 | react | [DevTools Bug]: react-devtools not start up | ### Website or app
Sorry, I don't know how to reproduce this.
### Repro steps
npm install -g react-devtools
react-devtools
react-devtools does not start up on Ubuntu 24: nothing happens, and there is no error message at all.
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
```text
```
### Error component stack (automated)
```text
```
### GitHub query string (automated)
```text
``` | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,805,705,762 | ui | [feat]: Switch to radix-ui mono package | ### Feature description
Radix-ui has released a [new package](https://www.npmjs.com/package/radix-ui) which exposes the latest version of all Radix Primitives from a single place. Using it would avoid conflicting or duplicate dependencies.
### Affected component/components
All components
### Additional Context
Additional details here...
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues and PRs | area: request | low | Minor |