id (int64, 393k to 2.82B) | repo (stringclasses, 68 values) | title (stringlengths, 1 to 936) | body (stringlengths, 0 to 256k) | labels (stringlengths, 2 to 508) | priority (stringclasses, 3 values) | severity (stringclasses, 3 values) |
---|---|---|---|---|---|---|
2,749,974,999 | ollama | Unable to install Ollama on Macbook Air running MacOS Sequoia 15.2 | ### What is the issue?
I am using macOS 15.2 and downloaded the Ollama installer for Mac. After downloading, when I tried to install, it asked me to move the package to the Applications folder instead of the Downloads folder. I did that, and then while installing from the Applications folder, the splash screen opens up for installing the command line. On clicking Install I am prompted for the Administrator password. Once I enter the password, nothing happens.
I opened the package contents and ran the Ollama executable in the terminal. This is what I got...
Last login: Thu Dec 19 15:34:50 on ttys000
mymacbook-Air ~ % /Applications/Ollama.app/Contents/MacOS/Ollama ; exit;
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
2024/12/19 15:34:50 routes.go:1259: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/bheeshma/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2024-12-19T15:34:50.852+05:30 level=INFO source=images.go:757 msg="total blobs: 0"
time=2024-12-19T15:34:50.852+05:30 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
time=2024-12-19T15:34:50.853+05:30 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2024-12-19T15:34:50.853+05:30 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[metal cpu_avx cpu_avx2]"
time=2024-12-19T15:34:50.880+05:30 level=INFO source=types.go:131 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="5.3 GiB" available="5.3 GiB"
2024-12-19 15:34:51.342 Ollama[18214:481886] +[IMKClient subclass]: chose IMKClient_Modern
2024-12-19 15:34:51.342 Ollama[18214:481886] +[IMKInputSession subclass]: chose IMKInputSession_Modern
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.4 | bug | low | Critical |
2,749,985,853 | ui | [bug]: Combobox selected style is incorrect | ### Describe the bug

When I select Astro and reopen the popover, the selected-item background highlight is not shown on Astro; instead it is displayed on Next.js.
### Affected component/components
Combobox
### How to reproduce
1. Select Astro
2. Reopen the popover
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
macos 15.1
chrome 131.0.6778.140
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,750,006,342 | next.js | `next lint` creates .eslintrc.json instead of eslint.config.mjs after deleting initial ESLint config | ### Link to the code that reproduces this issue
https://github.com/uwoobeat/nextjs-eslint-config-bug
### To Reproduce
1. Clone the repository (created via `npx create-next-app@canary`)
2. Install dependencies: `npm install`
3. Delete `eslint.config.mjs`
4. Run `npm run lint` (or `pnpm lint`)
5. Observe that `.eslintrc.json` is created instead of regenerating `eslint.config.mjs`
### Current vs. Expected behavior
Current behavior:
- After deleting eslint.config.mjs, running `next lint` creates a new .eslintrc.json file with legacy ESLint configuration format, even though the project was initially set up with ESLint v9's config
Expected behavior:
- When running `next lint` after deleting the config file, it should regenerate the same type of configuration file that was initially created (eslint.config.mjs), maintaining consistency with the project's initial ESLint setup
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 18:40:14 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 23.4.0
npm: 10.9.2
Yarn: 1.22.22
pnpm: 9.15.0
Relevant Packages:
next: 15.1.1-canary.13 // Latest available version is detected (15.1.1-canary.13).
eslint-config-next: 15.1.1-canary.13
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Linting
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
While #71218 added support for ESLint v9, examining `runLintCheck` and `writeDefaultConfig` shows that Next.js still creates .eslintrc.json when regenerating config files. It's unclear whether this is intentional or if the flat config support was meant to be implemented for config regeneration as well.
I noticed that while Next.js detects and properly uses flat config files (eslint.config.*) during the linting process, the default config generation still creates legacy-format files. I would like to know if this behavior is intended or if it needs to be updated to maintain consistency with the initial flat config setup from create-next-app. | create-next-app,Linting | low | Critical |
2,750,011,788 | rust | No way to get mutable reference to Vec member without expressing uniqueness over the full Vec | I've ran into the following standard library defect. There appears to be no possible sound way to go from a `Vec<T>` to a `&mut T` without expressing uniqueness over the full `Vec`. `Vec::as_mut_ptr` does not suffice because it takes a `&mut self` which expresses unique access over the full vec. A function which goes from `*mut Vec<T> -> *mut T` is necessary, but missing. The same is true for `*mut [T] -> *mut T`.
---
Consider the following example, where I have a `v: Vec<Vec<u32>>` with `v.len() == n` and `v[i].len() == t` for all `i`. I have `t` threads and I now want to process this array mutably in parallel, transposed. For example, suppose I want to do the following in-place cumulative sum:
```rust
(0..t).into_par_iter().map(|thread_idx| {
    let mut cumsum = 0;
    for i in 0..n {
        // This obviously doesn't work, but you can't write this soundly with pointers either.
        let x: &mut u32 = &mut v[i][thread_idx];
        cumsum += *x;
        *x = cumsum;
    }
})
```
As far as I can tell, there is no way to write this soundly without introducing (potentially significant) overhead **at all**. The only way to write this soundly is to first create a temporary array of pointers and to modify the inner loop to use those pointers:
```rust
let ptrs: Vec<*mut u32> = v.iter_mut().map(|vi| vi.as_mut_ptr()).collect();
// ... then, inside the per-thread loop from above:
let x: &mut u32 = unsafe { &mut *ptrs[i].add(thread_idx) };
```
I really think the standard library should offer some kind of way to write this loop soundly without the unnecessary overhead of creating a pointer array. The simplest solution I think is to add functions which go from `*mut Vec<T> -> *mut T` and `*mut [T] -> *mut T` without creating an intermediate unique reference to the entire `Vec` or slice. | A-collections,T-libs-api,C-feature-request,needs-acp | low | Minor |
2,750,014,171 | next.js | adjustFontFallback is not working with Google Fonts in Next.js 15 | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/sweet-moon-yzv9yt
### To Reproduce
1. Import any font from fonts/google
2. Set adjustFontFallback to false
3. Apply className of font to body
4. Check if font fallback is applied in the inspector
### Current vs. Expected behavior
**Current behavior:** font fallback is always provided regardless of the adjustFontFallback value
**Expected behavior:** font fallback should only be applied when adjustFontFallback is true. When it's false it should not be applied
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 16272
Available CPU cores: 12
Binaries:
Node: 20.17.0
npm: 10.9.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.1.1-canary.1 // Latest available version is detected (15.1.1-canary.1).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Font (next/font)
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
adjustFontFallback is working correctly in Next.js 14. However, after updating and not touching the font import and config code, it's not working. | Font (next/font) | low | Minor |
2,750,027,870 | vscode | Make editor dimming a core workbench feature | The dim unfocused feature is currently built using some CSS overrides to accomplish the majority of what users wanted without much work. For example:
https://github.com/microsoft/vscode/blob/35be9bf683eace09796e59d54f1f225bbc3a7866/src/vs/workbench/contrib/accessibility/browser/unfocusedViewDimmingContribution.ts#L38-L62
Since this relies upon internal CSS classes, though, it's prone to break. When this feature went through the test phase, a collection of problems was reported, mostly around certain edge cases not working. I'm merging all those issues into this one to track any further development on the feature if we choose to do so:
- #190102
- #191608
- #191615
- #191616
- #191671
- #191714
- #191743
- #191744
- #191757
- #191775 | feature-request,workbench-dim-unfocused | low | Minor |
2,750,056,673 | pytorch | Issues linking to libtorch on M2 mac | ### ๐ Describe the bug
I am following the minimal example for compiling libtorch provided here: https://pytorch.org/cppdocs/installing.html
I am using libtorch for mac downloaded from here: https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.5.1.zip
This looks like it should be for arm64 and is the latest version from the PyTorch homepage.
I can run CMake fine, but am encountering issues when it comes to building and linking the c++.
```
[user@Mac build]$ cmake --build . --config Release
[ 50%] Building CXX object CMakeFiles/example.dir/example.cpp.o
[100%] Linking CXX executable example
Undefined symbols for architecture arm64:
"at::_ops::rand::call(c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>)", referenced from:
_main in example.cpp.o
"at::print(std::ostream&, at::Tensor const&, long long)", referenced from:
_main in example.cpp.o
"c10::TensorImpl::set_autograd_meta(std::unique_ptr<c10::AutogradMetaInterface, std::default_delete<c10::AutogradMetaInterface>>)", referenced from:
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
"c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>> const&)", referenced from:
c10::fromIntArrayRefSlow(c10::ArrayRef<long long>) in example.cpp.o
ld: symbol(s) not found for architecture arm64
collect2: error: ld returned 1 exit status
make[2]: *** [example] Error 1
make[1]: *** [CMakeFiles/example.dir/all] Error 2
make: *** [all] Error 2
```
This seems to be similar to what happened when there was no arm binary available and we had to build libtorch from source (see #110810), but more recently binaries have been available and worked fine.
I have been looking through various issues and can't find anything similar, so apologies if I've missed something, but to my knowledge I am just following the example from the website.
Note that this is using gcc-14 installed via homebrew which has worked fine in the past.
If I try to build without specifying a CXX or C compiler, it defaults to AppleClang 16.0.0.16000026 and I get a different error:
```
[jwa34@Mac build]$ cmake --build . --config Release
[ 50%] Building CXX object CMakeFiles/example.dir/example.cpp.o
In file included from /Users/jwa34/libtorch_test/example.cpp:1:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/api/include/torch/torch.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/api/include/torch/all.h:7:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/api/include/torch/autograd.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/autograd.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/variable.h:6:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/cpp_hook.h:2:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/function_hook.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/ATen/Tensor.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/ATen/core/Tensor.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/ATen/core/TensorBody.h:11:
In file included from /Users/jwa34/libtorch_test/libtorch/include/c10/core/Device.h:3:
/Users/jwa34/libtorch_test/libtorch/include/c10/core/DeviceType.h:10:10: fatal error: 'cstddef' file not found
10 | #include <cstddef>
| ^~~~~~~~~
1 error generated.
make[2]: *** [CMakeFiles/example.dir/example.cpp.o] Error 1
make[1]: *** [CMakeFiles/example.dir/all] Error 2
make: *** [all] Error 2
```
I have seen this sort of thing before where the default AppleClang becomes out of date, hence my decision to use gcc.
Note further that this behaviour was observed first in a larger project, but I decided to use the minimal example to try and pin down where things were going wrong.
The same error is occurring in the CI for that larger project and can be seen at e.g. https://github.com/Cambridge-ICCS/FTorch/actions/runs/12399866549/job/34615725295?pr=164
Many thanks for your assistance.
### Versions
Using libtorch 2.5.1 downloaded from https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.5.1.zip
cc @malfet @snadampal @milpuz01 | triaged,module: arm | low | Critical |
2,750,071,135 | rust | `Don't know how to soften fpowi to fpow` on x86 uefi targets | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
#[no_mangle]
fn f(x: f32, n: i32) -> f32 {
x.powi(n)
}
```
```
cargo build --target x86_64-unknown-uefi
```
I expected to see this happen: *it compiles*
Instead, this happened:
```
error: Don't know how to soften fpowi to fpow
```
Same error on `i686-unknown-uefi`, but not on `aarch64-unknown-uefi`.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
``` | A-LLVM,T-compiler,O-UEFI,C-external-bug | low | Critical |
2,750,072,340 | pytorch | cuda graphs produce two additional kernel calls | ### ๐ Describe the bug
When using CUDA graph capture, the replay() function produces two additional kernel calls before the launchGraph call.
The additional calls are to fillFunctor, probably the result of replay_prologue(), line 229 in https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cuda/CUDAGraph.cpp.
This is unexpected behavior that makes graphs a non-viable option for smaller code sections.
Run `nsys profile -t cuda python file.py`
on the following code to see the problem (a `torch.profiler`-based check is also sketched after the snippet).
```
import torch
N, D_in, H, D_out = 640, 4096, 2048, 1024
model = torch.nn.Linear(D_in, H).cuda()
# Placeholders used for capture
static_input = torch.randn(N, D_in, device='cuda')
static_target = torch.randn(N, D_out, device='cuda')
# warmup
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for i in range(3):
        y_pred = model(static_input)
torch.cuda.current_stream().wait_stream(s)
# capture
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_y_pred = model(static_input)
real_inputs = [torch.rand_like(static_input) for _ in range(100)]
real_targets = [torch.rand_like(static_target) for _ in range(100)]
for data, target in zip(real_inputs, real_targets):
    static_input.copy_(data)
    static_target.copy_(target)
    g.replay()
```
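For completeness, a minimal way to confirm the extra launches without nsys is to count the CUDA kernel events around a single `replay()` with `torch.profiler`. This is only an illustrative sketch, assuming the capture code above has already run (so `g`, `static_input`, and `real_inputs` exist); it is not part of the original report.
```python
import torch
from torch.profiler import profile, ProfilerActivity

# List the CUDA kernels launched by one replay(); with the reported behavior,
# fill kernels should show up in addition to the graph launch itself.
static_input.copy_(real_inputs[0])
with profile(activities=[ProfilerActivity.CUDA]) as prof:
    g.replay()
    torch.cuda.synchronize()

for evt in prof.key_averages():
    print(evt.key, evt.count)
```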
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 3500 Ada Generation Laptop GPU
Nvidia driver version: 553.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] nvtx==0.2.10
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.1
[pip3] torchmetrics==1.6.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @mcarilli @ezyang @eellison @penguinwu | triaged,module: cuda graphs | low | Critical |
2,750,079,829 | flutter | [DatePickerThemeData] doesn't allow changes to the sub-header styling | ### Steps to reproduce
1. Create a Flutter app with a DatePicker widget (the fastest way is to copy-paste the code from the docs [https://api.flutter.dev/flutter/material/showDatePicker.html])
2. Style your DatePicker using DatePickerThemeData in the MaterialApp theme
3. There's no way to change the sub-header color
### Expected results
I should be able to change the sub-header color from a DatePickerThemeData attribute.
### Actual results
The only way to change the sub-header attribute is from the colorScheme section, specifying the desired color in the onSurface parameter (in the example, set to orange).
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const DatePickerApp());
class DatePickerApp extends StatelessWidget {
const DatePickerApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
restorationScopeId: 'app',
home: DatePickerExample(restorationId: 'main'),
theme: ThemeData(
colorScheme: ColorScheme.light(onSurface: Colors.orange),
datePickerTheme: DatePickerThemeData(
backgroundColor: Colors.white,
// Add button style to set text color to black
cancelButtonStyle: ButtonStyle(
foregroundColor: WidgetStatePropertyAll(Colors.black),
),
confirmButtonStyle: ButtonStyle(
backgroundColor: WidgetStatePropertyAll(Colors.blue),
foregroundColor: WidgetStatePropertyAll(Colors.white),
),
dayBackgroundColor: WidgetStateProperty.resolveWith((states) {
if (states.contains(WidgetState.selected)) {
return Colors.blue;
}
return Colors.transparent;
}),
dayForegroundColor: WidgetStateProperty.resolveWith((states) {
if (states.contains(WidgetState.selected)) {
return Colors.white;
}
return Colors.black;
}),
headerBackgroundColor: Colors.blue,
headerForegroundColor: Colors.white,
inputDecorationTheme: InputDecorationTheme(
labelStyle: TextStyle(
color: Colors.black,
fontSize: 16,
fontWeight: FontWeight.w500,
),
isDense: true,
floatingLabelBehavior: FloatingLabelBehavior.auto,
border: OutlineInputBorder(),
),
todayBackgroundColor: WidgetStateProperty.resolveWith(
(states) {
if (states.contains(WidgetState.selected)) {
return Colors.blue;
}
return Colors.transparent;
},
),
todayForegroundColor: WidgetStateProperty.resolveWith(
(states) {
if (states.contains(WidgetState.selected)) {
return Colors.white;
}
return Colors.blue;
},
),
weekdayStyle: const TextStyle(color: Colors.black),
yearBackgroundColor: WidgetStateProperty.resolveWith(
(states) {
if (states.contains(WidgetState.selected)) {
return Colors.blue;
}
return Colors.transparent;
},
),
yearForegroundColor: WidgetStateProperty.resolveWith(
(states) {
if (states.contains(WidgetState.selected)) {
return Colors.white;
}
return Colors.black;
},
),
yearOverlayColor: WidgetStatePropertyAll(Colors.blue),
),
),
);
}
}
class DatePickerExample extends StatefulWidget {
const DatePickerExample({super.key, this.restorationId});
final String? restorationId;
@override
State<DatePickerExample> createState() => _DatePickerExampleState();
}
/// RestorationProperty objects can be used because of RestorationMixin.
class _DatePickerExampleState extends State<DatePickerExample>
with RestorationMixin {
// In this example, the restoration ID for the mixin is passed in through
// the [StatefulWidget]'s constructor.
@override
String? get restorationId => widget.restorationId;
final RestorableDateTime _selectedDate =
RestorableDateTime(DateTime(2021, 7, 25));
late final RestorableRouteFuture<DateTime?> _restorableDatePickerRouteFuture =
RestorableRouteFuture<DateTime?>(
onComplete: _selectDate,
onPresent: (NavigatorState navigator, Object? arguments) {
return navigator.restorablePush(
_datePickerRoute,
arguments: _selectedDate.value.millisecondsSinceEpoch,
);
},
);
@pragma('vm:entry-point')
static Route<DateTime> _datePickerRoute(
BuildContext context,
Object? arguments,
) {
return DialogRoute<DateTime>(
context: context,
builder: (BuildContext context) {
return DatePickerDialog(
restorationId: 'date_picker_dialog',
initialEntryMode: DatePickerEntryMode.calendarOnly,
initialDate: DateTime.fromMillisecondsSinceEpoch(arguments! as int),
firstDate: DateTime(2021),
lastDate: DateTime(2022),
);
},
);
}
@override
void restoreState(RestorationBucket? oldBucket, bool initialRestore) {
registerForRestoration(_selectedDate, 'selected_date');
registerForRestoration(
_restorableDatePickerRouteFuture, 'date_picker_route_future');
}
void _selectDate(DateTime? newSelectedDate) {
if (newSelectedDate != null) {
setState(() {
_selectedDate.value = newSelectedDate;
ScaffoldMessenger.of(context).showSnackBar(SnackBar(
content: Text(
'Selected: ${_selectedDate.value.day}/${_selectedDate.value.month}/${_selectedDate.value.year}'),
));
});
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: OutlinedButton(
onPressed: () {
_restorableDatePickerRouteFuture.present();
},
child: const Text('Open Date Picker'),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| framework,f: material design,f: date/time picker,c: proposal,good first issue,P3,team-design,triaged-design | low | Major |
2,750,083,945 | vscode | Unable to uninstall vscode: Windows cannot find unins000.exe | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.1
- OS Version: 23H2
Steps to Reproduce:
1. I was programming Agda using the agda-mode extension. Then I couldn't compile my code anymore using Ctrl+C Ctrl+L.
2. Then I closed VS Code to open it again, but it didn't open.
3. Then I restarted my computer. After that I couldn't open VS Code.
4. Then I tried to uninstall VS Code to install it again; now I cannot uninstall VS Code.
5. 
| bug,install-update | low | Critical |
2,750,104,131 | next.js | `next.config.ts` breaks when using import aliases | ### Link to the code that reproduces this issue
https://github.com/Larsv94/tsconfig-alias-import-repro
### To Reproduce
1. Start the application in development (`next dev`) or build the application (`next build`)
### Current vs. Expected behavior
When building or starting the application with `next dev`, the commands fail with "Error: Cannot find module './src/config/utils/someUtils'" because `someConfig.ts` imports `someUtils.ts` via an import alias.
It does work when `someConfig.ts` uses a relative import (included in the example).
Import aliases are expected to work as well.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32476
Available CPU cores: 28
Binaries:
Node: 20.18.1
npm: 10.8.2
Yarn: 4.5.3
pnpm: N/A
Relevant Packages:
next: 15.1.2 // Latest available version is detected (15.1.2).
eslint-config-next: 15.1.2
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, TypeScript
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | TypeScript | low | Critical |
2,750,113,279 | rust | compiletest: FileCheck-based tests conflate FileCheck prefixes with compiletest revisions | > Auto-registering revision names as check prefix is a bit sketchy, and that having to pass `--allow-unused-prefix` is an unfortunate side-effect of not knowing whether the test author actually wanted revision-specific check prefixes or not.
>
> -- https://github.com/rust-lang/rust/pull/134463#discussion_r1890184927
With the current scheme where compiletest `//@ revisions` imply FileCheck prefixes of the same names, we can't distinguish between whether the author wanted to:
1. Use compiletest revisions but ONLY for compiletest directive purposes.
2. Use compiletest revisions but ONLY for filecheck prefix purposes.
3. Use compiletest revisions for BOTH purposes.
We might want to investigate splitting these orthogonal concepts and have something like `//@ filecheck-prefixes: XXX` register FileCheck prefixes separately.
Marked as E-hard as this will take quite a bit of investigation/design, and E-tedious as this probably involves updating a whole bunch of tests, and I don't have the bandwidth to mentor. | C-cleanup,A-testsuite,E-hard,T-compiler,T-bootstrap,A-compiletest,E-tedious | low | Major |
2,750,144,331 | TypeScript | Computed property invalid in type declaration emitted in d.ts file | ### ๐ Search Terms
computed property unique symbols declaration emit
### 🕗 Version & Regression Information
- This is the behavior in every version I tried
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.2#code/MYGwhgzhAEAq0G8BQ1XQgFzBglsaATgKZgAmA9gHYgCe0A2gEQaMC6AXNAK6U4COXIuhoBbAEbkQAbiQBfJCCIZoAD07I0DWExasO0Sl3FECM2TMXK6AXkQo09bczb6AjHKRA
### 💻 Code
```ts
class T {
static readonly ["t"]: unique symbol;
}
let x: {
[T["t"]]: number;
};
let y = {
[T["t"]]: 1
}
```
### 🙁 Actual behavior
The type of `x` has an error saying: `A computed property name in a type literal must refer to an expression whose type is a literal type or a 'unique symbol' `
The type of `y` is emitted in declaration files just like the type of `x` and causes an error in the d.ts file
### 🙂 Expected behavior
The type of `x` should be valid. If we use a non-computed property name, the type would be valid (`let x: { [T.t]: number; }`, [Playground Link](https://www.typescriptlang.org/play/?ts=5.7.2#code/MYGwhgzhAEAq0G8BQ1XQgFzBglsaATgKZgAmA9gHYgCe0A2gEQaMC6AXNAK6U4COXIuhoBbAEbkQAbiQBfJCCIZoAD07I0DWADoMHaJS7iiBGbKlA)). Declaration emit should not emit invalid code.
### Additional information about the issue
Probably fixed by #60052 since the limitation on computed property names in types goes away. | Help Wanted,Possible Improvement | low | Critical |
2,750,157,397 | PowerToys | PowerToys Run - System.InvalidOperationException: Cyclic reference found while evaluating the Style property on element | ### Microsoft PowerToys version
0.87.0.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
The exception occurs in the middle of other work, even when no PowerToys tools are being used at the time.
This is a new problem with version 0.87.0.0; it never happened before.
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
Version: 0.87.0.0
OS Version: Microsoft Windows NT 10.0.26100.0
IntPtr Length: 8
x64: True
Date: 19.12.2024 13:14:49
Exception:
System.InvalidOperationException: Cyclic reference found while evaluating the Style property on element 'System.Windows.Controls.ScrollViewer'.
at System.Windows.FrameworkElement.UpdateStyleProperty()
at System.Windows.TreeWalkHelper.InvalidateStyleAndReferences(DependencyObject d, ResourcesChangeInfo info, Boolean containsTypeOfKey)
at System.Windows.TreeWalkHelper.OnResourcesChanged(DependencyObject d, ResourcesChangeInfo info, Boolean raiseResourceChangedEvent)
at System.Windows.TreeWalkHelper.OnResourcesChangedCallback(DependencyObject d, ResourcesChangeInfo info, Boolean visitedViaVisualTree)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.VisitNode(DependencyObject d, Boolean visitedViaVisualTree)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.StartWalk(DependencyObject startNode, Boolean skipStartNode)
at System.Windows.TreeWalkHelper.InvalidateOnResourcesChange(FrameworkElement fe, FrameworkContentElement fce, ResourcesChangeInfo info)
at System.Windows.ResourceDictionary.NotifyOwners(ResourcesChangeInfo info)
at System.Windows.ResourceDictionary.OnMergedDictionariesChanged(Object sender, NotifyCollectionChangedEventArgs e)
at System.Collections.ObjectModel.ObservableCollection`1.OnCollectionChanged(NotifyCollectionChangedEventArgs e)
at System.Windows.ThemeManager.AddOrUpdateThemeResources(ResourceDictionary rd, ResourceDictionary newDictionary)
at System.Windows.ThemeManager.ApplyFluentOnWindow(Window window)
at System.Windows.ThemeManager.OnSystemThemeChanged()
at System.Windows.SystemResources.SystemThemeFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Severity-High,Needs-Triage | low | Major |
2,750,198,826 | pytorch | Torch elastic restart fails with torch-2.6.0 nightly build: NCCL unhandled system error | ### ๐ Describe the bug
**Summary:**
Run multi-gpu training with torch elastic run, backend is nccl,
Torch 2.5.1, job restarted successfully
Nightly torch 2.6.0, same nccl version as above, after job restarted, NCCL error is reported:
`[rank1]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5`
**Reproduce:**
Here is a minimal bug repro.
```
import os
import random
import sys
import time
import torch
def main():
    local_rank = int(os.environ['LOCAL_RANK'])
    device = torch.device('cuda')
    torch.cuda.set_device(local_rank)
    torch.distributed.init_process_group(backend='nccl', init_method='env://')
    rank = torch.distributed.get_rank()
    if rank == 0:
        print("#### NEW RUN ###")
    device_ids = [local_rank]
    torch.distributed.barrier(device_ids=device_ids)
    torch.distributed.destroy_process_group()
    sys.exit(123)  # force torchrun restart
if __name__ == "__main__":
    main()
```
1. Inside a container, for example: pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
2. Install cuSPARSELt when necessary (not related to this issue)
3. Install torch 2.6.0 without changing other libraries: `pip install --no-deps torch-2.6.0.dev20241218+cu124-cp311-cp311-linux_x86_64.whl`
4. Run: `torchrun --nproc-per-node=2 --max-restarts=1 ./repro.py`
**Output:**
Expected result:
2 runs exit with code 123
What happens (sometimes 2-3 attempts are needed):
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/workspace/./repro.py", line 57, in <module>
[rank1]: main()
[rank1]: File "/workspace/./repro.py", line 50, in main
[rank1]: torch.distributed.barrier(device_ids=device_ids)
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank1]: work = group.barrier(opts=opts)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank1]: ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
[rank1]: Last error:
[rank1]: socketStartConnect: Connect to 172.17.0.2<35553> failed : Software caused connection abort
```
### Versions
The environment is pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
Without modifying any packages, the original script works as expected (it restarts once and exits with code 123 twice).
Below is the failing environment; the only changes are:
1. cuSPARSELt 0.6.3 installed to enable torch import
2. pip install --no-deps torch-2.6.0.dev20241218+cu124-cp311-cp311-linux_x86_64.whl
(from https://download.pytorch.org/whl/nightly/torch/)
```
PyTorch version: 2.6.0.dev20241218+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.92
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0.dev20241218+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0.dev20241218+cu124 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: nccl | low | Critical |
2,750,199,598 | vscode | NodeJS code has a strange problem when debugging with the npm run dev command! And there is also a strange phenomenon in Remote SSH! | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
Version: 1.96.1 (system setup)
Commit: 42b266171e51a016313f47d0c48aca9295b9cbb2
Date: 2024-12-17T17:50:05.206Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
- OS Version:
OS: Windows_NT x64 10.0.17763
Version: Windows Server 2019 Datacenter
Version number: 1809
Installation Date: 11/7/2024
OS Build : 17763.6532
- File structure and all code

Steps to Reproduce:
1. I don't have any third-party plugins installed, except for the Chinese language pack.
2. In the above configuration, press F5 to debug.
3. The terminal prints "npm run dev" and then waits for about 15 seconds before continuing to print the result (123).
4. There is no output in the debug console, breakpoints are not hit, and the code cannot be debugged.
5. After pressing Shift+F5 to stop debugging, manually entering "npm run dev" in the terminal and pressing Enter produces the output almost immediately.
6. Manually entering "node index.js" in the terminal and pressing Enter outputs "123" instantly.
7. The same code and VS Code work fine on another Windows 11 machine.
8. When I use VS Code's Remote SSH extension from a local Windows 11 machine to open this test project remotely, the same problem occurs. Running local code is fine, and connecting to other servers remotely is also fine.
9. There is another odd symptom: when I connect to this server from a local computer, I have to turn off VS Code's remote.SSH.useExecServer setting in order to connect normally; otherwise it can never connect. This is not needed when connecting to other servers.
I really don't know what went wrong. Please help take a look, thank you!
| info-needed | low | Critical |
2,750,201,703 | tensorflow | How to run TFLite benchmark with QNN delegate in Android | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.15.0
### Custom code
No
### OS platform and distribution
macOS 15.2
### Mobile device
One Plus 7 Pro, Android 11
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I have built, installed, and run the TFLite benchmark following this [instruction](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark#on-android) for Android, and used TensorFlow 2.15.0 according to [issue#66015](https://github.com/tensorflow/tensorflow/issues/66015). I tested the benchmark via the following commands and the output result looks correct.
```shell
adb push /Users/handleychen/Github/tensorflow/tensorflow/bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model /data/local/tmp
adb shell chmod +x /data/local/tmp/benchmark_model
adb shell "mkdir /data/local/tmp/models"
adb push /Users/handleychen/Github/tensorflow/models/*.tflite /data/local/tmp/models
adb shell /data/local/tmp/benchmark_model --graph=/data/local/tmp/models/mobilenet_v1_1.0_224.tflite --num_threads=4 --enable_op_profiling=true
adb shell /data/local/tmp/benchmark_model --graph=/data/local/tmp/models/mobilenet_v1_1.0_224.tflite --use_gpu=true --enable_op_profiling=true
adb shell /data/local/tmp/benchmark_model --graph=/data/local/tmp/models/mobilenet_v1_1.0_224.tflite --use_nnapi=true --enable_op_profiling=true
```
[benchmark result.txt](https://github.com/user-attachments/files/18197819/benchmark.result.txt)
Now I want to run the benchmark with the QNN delegate. I [set up the on-device environment](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/TfLite-Delegate_setup.html#on-device-environment-setup) and [ran a QNN delegate using an external delegate](https://docs.qualcomm.com/bundle/publicresource/topics/80-70015-54/sample-applications.html#run-a-qnn-delegate-using-an-external-delegate). The [model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/task_library/image_classification/android_java/mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite) being tested comes from the TFLite example [image_classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification). I tested the benchmark using the following commands, but the result was a failure.
```shell
adb shell "mkdir /data/local/tmp/qnn_delegate"
adb push /Users/handleychen/Github/quic/SDK/qairt/2.26.0.240828/lib/aarch64-android/* /data/local/tmp/qnn_delegate
adb shell
cd /data/local/tmp
export LD_LIBRARY_PATH=/data/local/tmp/qnn_delegate
export ADSP_LIBRARY_PATH="/data/local/tmp/qnn_delegate"
./benchmark_model --graph=/data/local/tmp/models/mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite --external_delegate_path=/data/local/tmp/qnn_delegate/libQnnTFLiteDelegate.so --external_delegate_options='backend_type:gpu;'
./benchmark_model --graph=/data/local/tmp/models/mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite --external_delegate_path=/data/local/tmp/qnn_delegate/libQnnTFLiteDelegate.so --external_delegate_options='backend_type:htp;'
# I also tried setting htp_precision:1, but the result was the same.
./benchmark_model --graph=/data/local/tmp/models/mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite --external_delegate_path=/data/local/tmp/qnn_delegate/libQnnTFLiteDelegate.so --external_delegate_options='backend_type:htp;htp_precision:1'
```
```shell
# for gpu delegate
โฆโฆ
INFO: Though EXTERNAL delegate is explicitly applied, the model graph will not be executed by the delegate.
โฆโฆ
# for npu delegate
INFO: STARTING!
INFO: Log parameter values verbosely: [0]
INFO: Graph: [/data/local/tmp/models/mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite]
INFO: External delegate path: [/data/local/tmp/qnn_delegate/libQnnTFLiteDelegate.so]
INFO: External delegate options: [backend_type:htp;htp_precision:1]
INFO: Loaded model /data/local/tmp/models/mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite
INFO: Initialized TensorFlow Lite runtime.
INFO: EXTERNAL delegate created.
ERROR: [QNN Delegate] Failed to create device_handle for Backend ID 6, error=1008
ERROR: Restored original execution plan after delegate application failure.
ERROR: Failed to apply EXTERNAL delegate.
ERROR: Benchmarking failed.
```
The full output is attached. [benchmarkQNN result.txt](https://github.com/user-attachments/files/18206356/benchmarkQNN.result.txt)
I have also tested it on Android phones equipped with Snapdragon 855 and Gen 3 chips, and the results were the same.
Could anyone tell me how to deal with this?
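For reference, a hypothetical Python-side sketch of the same external-delegate invocation is below. It is not from the report: the file paths assume the delegate library and model used in the commands above are reachable on the host, and the options mirror the `backend_type:htp` string passed to `benchmark_model`.
```python
import numpy as np
import tensorflow as tf

# Load the QNN TFLite delegate as an external delegate (illustrative paths).
qnn = tf.lite.experimental.load_delegate(
    "libQnnTFLiteDelegate.so", options={"backend_type": "htp"})

interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite",
    experimental_delegates=[qnn])
interpreter.allocate_tensors()

# Run one inference with a zero-filled input to confirm the delegate initializes.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
```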
### Standalone code to reproduce the issue
```shell
as described above
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:bug,comp:lite | low | Critical |
2,750,260,983 | langchain | sql_agent demo Action and Final Answer appear together to cause an exception | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# llm = ChatOpenAI()
db = SQLDatabase.from_uri("sqlite:///../somedb/db.sqlite")
agent_executor = create_sql_agent(
llm=llm,
db=db,
verbose=True,
)
agent_executor.handle_parsing_errors=True
resp = agent_executor.invoke("ๆฅ่ฏขไธๅ็ๅฝไธญ็ๆ้ซ็็ๅ")
print(resp)
```
### Error Message and Stack Trace (if applicable)
langsmith:
https://smith.langchain.com/public/e8196299-7b02-4314-a38e-bbae8f2f857e/r
### Description
I am running the sql_agent example. While the agent is running, the run fails and the program cannot continue.
My LLM's output sometimes contains both an Action and a Final Answer. In that case `includes_answer = FINAL_ANSWER_ACTION in text` evaluates to `True`, and the output parser then raises `FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE`.
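A possible workaround (a sketch, assuming `create_sql_agent` still accepts `agent_executor_kwargs` and that the import paths below match a recent `langchain_community`; I have not confirmed this avoids the error) is to pass the flag at creation time instead of setting it afterwards:
```python
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.utilities import SQLDatabase

# llm = ChatOpenAI()  # same LLM as in the example above
db = SQLDatabase.from_uri("sqlite:///../somedb/db.sqlite")
agent_executor = create_sql_agent(
    llm=llm,
    db=db,
    verbose=True,
    # forwarded to the underlying AgentExecutor, so a combined
    # Action + Final Answer output is retried instead of raising
    agent_executor_kwargs={"handle_parsing_errors": True},
)
```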
### System Info
See the LangSmith trace linked above. | 🤖:bug,investigate | low | Critical |
2,750,284,763 | vscode | Environment variable (.env) values only update after closing VS Code |
Type: <b>Bug</b>
After updating VS Code to the version released on 19/12/2024 (1.96), it started showing problems with environment variables that I update in my project.
Example: I updated a variable from True to False. However, the system kept initializing that variable as True. Only after closing VS Code and reopening it did the system pick up the updated value of the variable. This keeps happening for any change I make.
Tested at least 10 times.
Create a boolean environment variable with the value True
Print the variable's value
Change the value to False
Print the variable's value (it will still be True)
Close VS Code and open it again
Print the variable's value (now it will be False); a minimal sketch of these steps follows below
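A minimal sketch of the steps above, assuming a Python project and the `python-dotenv` package (an assumption; the actual project is bigger):
```python
# .env contains: MY_FLAG=True   (later edited to MY_FLAG=False)
import os
from dotenv import load_dotenv

load_dotenv()
# Keeps printing the old value on every run until VS Code is closed and reopened.
print(os.getenv("MY_FLAG"))
```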
VS Code version: Code 1.96.1 (42b266171e51a016313f47d0c48aca9295b9cbb2, 2024-12-17T17:50:05.206Z)
OS version: Linux x64 6.8.0-49-generic snap
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i7-1355U (12 x 4008)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: disabled_off<br>webnn: unavailable_software|
|Load (avg)|5, 5, 4|
|Memory (System)|15.31GB (7.29GB free)|
|Process Argv|--no-sandbox --force-user-env --crash-reporter-id 5fa2d761-1817-41d3-b1d1-33fbb4c5f66a|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu|
|XDG_SESSION_TYPE|wayland|
</details><details><summary>Extensions (35)</summary>
Extension|Author (truncated)|Version
---|---|---
project-manager|ale|12.8.0
cucumberautocomplete|ale|3.0.5
python-environment-manager|don|1.2.4
es7-react-js-snippets|dsz|4.4.3
gitlens|eam|16.0.5
prettier-vscode|esb|11.0.0
code-runner|for|0.12.2
todo-tree|Gru|0.0.226
csharpextensions|kre|1.7.3
vscode-docker|ms-|1.29.3
vscode-language-pack-pt-BR|MS-|1.96.2024121109
csdevkit|ms-|1.14.14
csharp|ms-|2.60.26
vscode-dotnet-runtime|ms-|2.2.3
vscodeintellicode-csharp|ms-|2.2.3
python|ms-|2024.22.0
vscode-pylance|ms-|2024.12.1
cpptools|ms-|1.22.11
live-server|ms-|0.4.15
material-icon-theme|PKi|5.16.0
java|red|1.37.0
LiveServer|rit|5.7.9
vscode-coverage-gutters|rya|2.12.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-boot-dev-pack|vmw|0.2.1
vscode-spring-boot|vmw|1.59.0
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
vscode-spring-boot-dashboard|vsc|0.14.0
vscode-spring-initializr|vsc|0.11.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed,translation-required-portuguese-brazil,triage-needed | low | Critical |
2,750,302,014 | tauri | [feat] Support for native webview clipboard methods without security prompt | ### Describe the problem
The webview has native clipboard methods like `navigator.clipboard.read`, `navigator.clipboard.write` and `navigator.clipboard.readText`, `navigator.clipboard.writeText` but accessing these triggers a security prompt in the webview with an ugly dialog.
Using the clipboard-manager plugin to read a UHD image from the clipboard takes several seconds, because the clipboard content is transferred to the webview as an uncompressed 100MB+ bitmap.
With the native `navigator.clipboard.read`, the JavaScript app can read the clipboard content as image/png, and the image is available instantly.
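For reference, a minimal sketch of the kind of code this would enable from the webview (error handling omitted):
```ts
// Read an image from the clipboard via the webview's native API.
// The webview hands back a compressed PNG blob, so no 100MB+ bitmap transfer is needed.
async function readClipboardImage(): Promise<Blob | null> {
  const items = await navigator.clipboard.read();
  for (const item of items) {
    if (item.types.includes("image/png")) {
      return await item.getType("image/png");
    }
  }
  return null;
}
```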
### Describe the solution you'd like
Tauri should support using these native webview clipboard methods without showing the security prompt (or, alternatively, allow the permission to be configured in the Tauri config), like Socket Runtime does: https://github.com/socketsupply/socket
### Alternatives considered
The "clipboard-manager" plugin allows limited access to the clipboard but has currently many drawbacks:
- images are always uncompressed bitmaps and reading/writing these to the clipboard is slow and inefficient
- not all formats like html are fully supported
### Additional context
The native clipboard APIs have more features and are already available in the native webviews. If technically possible, Tauri should support web-standard APIs. This makes porting apps to Tauri easier and more accessible. | type: feature request | low | Major |
2,750,321,289 | rust | Rustdoc failure in `armv7` targets with `-neon` with `target_feature(enable = "neon")` | <!--
Thank you for filing a bug report! ๐ Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
#![feature(arm_target_feature)]
#[target_feature(enable = "neon")]
unsafe fn foo() {}
```
This always compiles, but in `armv7` targets that have `-neon` target-feature, running `rustdoc` on it produces the error
```
error: target feature `neon` cannot be toggled with `#[target_feature]`: unsound on hard-float targets because it changes float ABI
--> bug.rs:3:18
|
5 | #[target_feature(enable = "neon")]
|
```
even if the target has `+soft-float` (e.g. `armv7-unknown-linux-gnueabi`).
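For completeness, a sketch of commands that should reproduce it (approximate; the target needs to be installed for the nightly toolchain):
```shell
rustup target add --toolchain nightly armv7-unknown-linux-gnueabi
# Compiling succeeds:
rustc +nightly --target armv7-unknown-linux-gnueabi --crate-type lib bug.rs
# Documenting the same file fails with the error above:
rustdoc +nightly --target armv7-unknown-linux-gnueabi bug.rs
```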
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (9e136a30a 2024-12-19)
binary: rustc
commit-hash: 9e136a30a965bf4e63f03095c57df7257bf96fd6
commit-date: 2024-12-19
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
```
Related: #133417 | T-rustdoc,O-Arm,C-bug,A-target-feature | low | Critical |
2,750,328,232 | ui | [bug]: Primary colors used in New York style tooltips | ### Describe the bug
Here's the tooltip in the New York style:
```tsx
"use client";
import * as React from "react";
import * as TooltipPrimitive from "@radix-ui/react-tooltip";
import { cn } from "@/lib/utils";
const TooltipProvider = TooltipPrimitive.Provider;
const Tooltip = TooltipPrimitive.Root;
const TooltipTrigger = TooltipPrimitive.Trigger;
const TooltipContent = React.forwardRef<
React.ElementRef<typeof TooltipPrimitive.Content>,
React.ComponentPropsWithoutRef<typeof TooltipPrimitive.Content>
>(({ className, sideOffset = 4, ...props }, ref) => (
<TooltipPrimitive.Portal>
<TooltipPrimitive.Content
ref={ref}
sideOffset={sideOffset}
className={cn(
"z-50 overflow-hidden rounded-md bg-primary px-3 py-1.5 text-xs text-primary-foreground animate-in fade-in-0 zoom-in-95 data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=closed]:zoom-out-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2",
className
)}
{...props}
/>
</TooltipPrimitive.Portal>
));
TooltipContent.displayName = TooltipPrimitive.Content.displayName;
export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider };
```
If you notice, it uses the primary colors. So if I switch the theme to blue or some other colorful theme, the tooltip background also changes to blue, which looks pretty odd.
Under the default theme it uses popover colors, which still looks decent. Am I missing something?
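A possible local workaround (not verified as the intended fix) would be to swap the primary tokens for the popover ones in the copied component; only the changed `className` is shown:
```tsx
// Local override: use popover tokens so colorful themes (e.g. blue) don't tint the tooltip.
className={cn(
  "z-50 overflow-hidden rounded-md bg-popover px-3 py-1.5 text-xs text-popover-foreground animate-in fade-in-0 zoom-in-95 data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=closed]:zoom-out-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2",
  className
)}
```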
### Affected component/components
tooltip
### How to reproduce
Check the code of tooltip component
### Codesandbox/StackBlitz link
https://ui.shadcn.com/docs/components/tooltip
### Logs
_No response_
### System Info
```bash
Chrome
MacOS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,750,372,475 | TypeScript | Computed property name is not escaped in declaration emit | ### ๐ Search Terms
computed property symbol declaration emit
### ๐ Version & Regression Information
- This changed between versions 3.7 and 3.8
### โฏ Playground Link
https://www.typescriptlang.org/play/?ts=3.7.5#code/MYGwhgzhAEAq0G8BQ1XQgFzBglsaATgKZgAmA9gHYgCe0A2gEQA6GjAugFzQCulOARx5F0NALYAjciADcSAL5IQRDNAAe0ALyIUaerCasOXaAEYFMoA
### ๐ป Code
```ts
class T {
static readonly ["\t"]: unique symbol;
}
let x = {
[T["\t"]]: 1
};
```
### ๐ Actual behavior
The type of `x` is emitted as `{ [T["\t"]]: number; }` in declaration files, making the file invalid since the property does not exist on `T`
### ๐ Expected behavior
The escape should be preserved in the property name.
### Additional information about the issue
Was introduced by https://github.com/microsoft/TypeScript/pull/18860/commits/8bb7230729f43f1c134f895a535e73c4cd58af3e
| Bug | low | Minor |
2,750,426,826 | three.js | Replacing an attribute of a geometry instanced by another new attribute with a superior length break the renderer | ### Description
On a previous version of Three.js (r162; I haven't tested the versions between then and now), we were able to replace an entire instanced geometry attribute with a new one of a different length and set instanceCount to a larger value.
This is valuable for performance when working with huge, dynamic instanced geometries whose count grows over time: you can allocate a smaller buffer first and then grow it by a safe multiple to limit the buffer size. Losing this capability also greatly limits the flexibility of building geometries (the sketch below shows the pattern).
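Here is the pattern reduced to its core (a sketch of what used to work; the `offset` attribute name is only a placeholder, the real code is in the `switchArray` function of the fiddle below):
```js
import * as THREE from 'three';

// Start with a small instanced buffer.
const geometry = new THREE.InstancedBufferGeometry();
geometry.setAttribute('offset', new THREE.InstancedBufferAttribute(new Float32Array(100 * 3), 3));
geometry.instanceCount = 100;

// Later, replace the attribute with a larger buffer (grown by a safe multiple)
// and raise instanceCount accordingly.
geometry.setAttribute('offset', new THREE.InstancedBufferAttribute(new Float32Array(200 * 3), 3));
geometry.instanceCount = 200;
```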
In both the WebGL and WebGPU backends, this now breaks the renderer:
<img width="523" alt="Screenshot 2024-12-19 at 15 02 09" src="https://github.com/user-attachments/assets/450d9260-3e84-41e3-9708-52bdd25a13be" />
Look at the `switchArray` function in the live example below.
[Live example](https://jsfiddle.net/ar6kdvzp/)
| Bug | low | Major |
2,750,445,769 | pytorch | [ONNX] Output order is switched when exporting model phi-2 with or without input cache | ### ๐ Describe the bug
The cache is not flattened in the same order when the cache is given as one of the inputs at export time. The issue seems to be introduced by one of the passes run by the exporter.
With no input cache, the model (2 layers only) outputs ``key_cache_0, value_cache_0, key_cache_1, value_cache_1``; in the exported graph this becomes ``return (view_34, _to_copy_7, transpose_3, _to_copy_10, transpose_8)``.
Last line of the FX graph with no decomposition: ``return (linear_12, to_7, transpose_3, to_10, transpose_8)``

With the cache given as an input, the model outputs: ``key_cache_0, key_cache_1, value_cache_0, value_cache_1``
Last line of the FX graph with no decomposition: ``return (linear_12, to_7, cat_6, to_10, cat_12)``, but in the exported graph it becomes ``return (view_34, _to_copy_7, _to_copy_10, cat_6, cat_12)``

### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241218+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241218+cu126
[pip3] torchaudio==2.6.0.dev20241218+cu126
[pip3] torchvision==0.22.0.dev20241218+cu126
[conda] Could not collect
``` | module: onnx,triaged | low | Critical |
2,750,481,369 | deno | ext/node: Do not expose `fs.promises.fstat` | We currently expose `fs.promises.fstat` from `node:fs`, but that doesn't seem to be a public API of Node https://nodejs.org/api/fs.html
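For comparison, Node's documented promise APIs go through `stat(path)` or `FileHandle.prototype.stat()` rather than an fd-based `fstat`; a small sketch:
```js
import { open, stat } from "node:fs/promises";

const viaPath = await stat("./package.json");

const handle = await open("./package.json", "r");
const viaHandle = await handle.stat(); // the FileHandle equivalent of fstat
await handle.close();

console.log(viaPath.size === viaHandle.size);
```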
(`fs.promises` namespace generally doesn't expose `f` prefixed APIs) | bug,node compat | low | Minor |
2,750,522,753 | flutter | Swipe, Scroll and other animations are not smooth | ### Steps to reproduce
This is a simple new app.
## Swipe to Right in PageView widget
|  |  |  |
|---------------------------|---------------------------|---------------------------|
| 1 | 2 | 3 |
## Swipe to Left in PageView widget
|  |  |  |
|---------------------------|---------------------------|---------------------------|
| 1 | 2 | 3 |
It looks like there is a wave animation happening.
## Scrolling Up in ListView: it's very bad in complex lists
|  |  |  |
|---------------------------|---------------------------|---------------------------|
| 1 | 2 | 3 |
## DevTool
|  |  |  |
|---------------------------|---------------------------|---------------------------|
| side by side | bad | good |
When I record with a screen recorder app, the resulting video does not show the glitches and looks good, but from my point of view it is really bad.
I disabled Impeller and the same problem persists.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
pageTransitionsTheme: const PageTransitionsTheme(builders: {
TargetPlatform.android: CupertinoPageTransitionsBuilder(),
}),
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text("v3.27.1"),
),
body: PageView(
children: [
ListViewScreenTest(),
Container(
color: Colors.limeAccent,
child: const Center(
child: Text("Text"),
),
),
Container(
color: Colors.greenAccent,
child: const Center(
child: Text("Text"),
),
),
],
),
floatingActionButton: FloatingActionButton(
onPressed: () {
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => const Scaffold(
backgroundColor: Colors.black,
),
),
);
},
child: const Icon(Icons.window),
),
);
}
}
class ListViewScreenTest extends StatelessWidget {
const ListViewScreenTest({super.key});
@override
Widget build(BuildContext context) {
return ListView(
children: [
ListTile(
title: Container(color: Colors.red, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.teal, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.black, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.blue, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.pink, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.yellow, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.amber, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.green, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.orange, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.lime, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.lightBlue, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.purple, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.indigo, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.brown, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.grey, height: 16),
subtitle: Text("some text"),
),
ListTile(
title: Container(color: Colors.black, height: 16),
subtitle: Text("some text"),
),
],
);
}
}
```
</details>
### Performance profiling on master channel
- [X] The issue still persists on the master channel
### Timeline Traces
<details open><summary>Timeline Traces JSON</summary>
```json
[it's large (30 MB), so I uploaded it to Google Drive]
```
https://drive.google.com/file/d/1VuVfLAEFa7BoZdYl0piSrx1sE92L4w9s/view?usp=share_link
</details>
### Video demonstration
<details open>
<summary>Video demonstration</summary>
https://drive.google.com/file/d/1e6EHClmUA9BOs_v7NqymqeTJDH6R6ZI6/view?usp=share_link
</details>
### What target platforms are you seeing this bug on?
Android
### OS/Browser name and version | Device information
SAMSUNG Galaxy A20, SM A205FN (android-arm64)
|  |  |  |
|---------------------------|---------------------------|---------------------------|
| CPU | System | GPU |
### Does the problem occur on emulator/simulator as well as on physical devices?
Unknown
### Is the problem only reproducible with Impeller?
N/A
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on SM A205FN in profile mode...
โ Built build/app/outputs/flutter-apk/app-profile.apk (13.7MB)
I/flutter (20852): [IMPORTANT:flutter/shell/platform/android/android_context_vk_impeller.cc(60)] Using the Impeller rendering backend (Vulkan).
I/SurfaceControl(20852): nativeRelease nativeObject s[520716953856]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716953856]
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): nativeRelease nativeObject s[525662631200]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662631200]
I/SurfaceControl(20852): nativeRelease nativeObject s[525662630816]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662630816]
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)0 dur=11 res=0x1 s={true 522959650816} ch=false fn=2
I/ViewRootImpl@f29f789[MainActivity](20852): updateBoundsLayer: shouldReparent = false t = android.view.SurfaceControl$Transaction@2ca18f0 sc = Surface(name=Bounds for - com.example.application_test/com.example.application_test.MainActivity@0)/@0xd45ee69 frame = 2
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 1 1
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
V/InputMethodManager(20852): Starting input: tba=com.example.application_test ic=null mNaviBarColor -16711423 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager(20852): startInputInner - Id : 0
I/InputMethodManager(20852): startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(20852): Input channel constructed: 'ClientS', fd=104
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
V/InputMethodManager(20852): Starting input: tba=com.example.application_test ic=null mNaviBarColor -16711423 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager(20852): startInputInner - Id : 0
I/SurfaceControl(20852): nativeRelease nativeObject s[520716955008]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716955008]
I/SurfaceControl(20852): nativeRelease nativeObject s[520716953856]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716953856]
D/ProfileInstaller(20852): Installing profile for com.example.application_test
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/SurfaceControl(20852): nativeRelease nativeObject s[522959076544]
I/SurfaceControl(20852): nativeRelease nativeObject e[522959076544]
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632256]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632256]
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632160]
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_RESIZED_REPORT: frame=(0,0,720,1560) ci=(0,53,0,0) vi=(0,0,0,0) or=1
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] dp(1) 1 android.view.ViewRootImpl.reportNextDraw:10957 android.view.ViewRootImpl.access$1200:256 android.view.ViewRootImpl$ViewRootHandler.handleMessage:6101
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954144]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954144]
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): nativeRelease nativeObject s[522959076640]
I/SurfaceControl(20852): nativeRelease nativeObject e[522959076640]
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)0 dur=11 res=0x1 s={true 522959650816} ch=false fn=3
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pd() Asnyc report
W/libEGL (20852): EGLNativeWindowType 0x79c2d30010 disconnect failed
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pdf(0) 1 android.view.ViewRootImpl.lambda$performDraw$1$ViewRootImpl:4668 android.view.-$$Lambda$ViewRootImpl$DJd0VUYJgsebcnSohO6h8zc_ONI.run:6 android.os.Handler.handleCallback:938
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] rdf()
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 0 1
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954144]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954144]
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 1 1
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
V/InputMethodManager(20852): Starting input: tba=com.example.application_test ic=null mNaviBarColor -16711423 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager(20852): startInputInner - Id : 0
I/InputMethodManager(20852): startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(20852): Input channel constructed: 'ClientS', fd=105
D/InputTransport(20852): Input channel destroyed: 'ClientS', fd=104
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 0 1
I/SurfaceControl(20852): nativeRelease nativeObject s[523117998208]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117998208]
I/SurfaceControl(20852): nativeRelease nativeObject s[523118000128]
I/SurfaceControl(20852): nativeRelease nativeObject e[523118000128]
I/SurfaceControl(20852): nativeRelease nativeObject s[523117999456]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117999456]
I/SurfaceView(20852): onWindowVisibilityChanged(8) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceView(20852): surfaceDestroyed callback.size 1 #2 io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560}
I/SurfaceView(20852): remove() from RT android.view.SurfaceView$2@34d9b87 Surface(name=SurfaceView - com.example.application_test/com.example.application_test.MainActivity@a4dde97@0)/@0x87cd33
I/SurfaceControl(20852): nativeRelease nativeObject s[522959974208]
I/SurfaceControl(20852): nativeRelease nativeObject e[522959974208]
I/SurfaceControl(20852): nativeRelease nativeObject s[522959964512]
I/SurfaceControl(20852): nativeRelease nativeObject e[522959964512]
W/libEGL (20852): EGLNativeWindowType 0x79c2d30010 disconnect failed
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632160]
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0x7ae16dd / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1810 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): nativeRelease nativeObject s[522959964608]
I/SurfaceControl(20852): nativeRelease nativeObject e[522959964608]
I/SurfaceControl(20852): nativeRelease nativeObject s[523118000224]
I/SurfaceControl(20852): nativeRelease nativeObject e[523118000224]
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)8 dur=39 res=0x5 s={false 0} ch=true fn=4
I/SurfaceView(20852): windowStopped(true) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
D/InputTransport(20852): Input channel destroyed: 'ClientS', fd=105
I/ViewRootImpl@f29f789[MainActivity](20852): stopped(true) old=false
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632160]
I/SurfaceView(20852): onWindowVisibilityChanged(4) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 G.E...... ......I. 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0x7ae16dd / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1810 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)4 dur=29 res=0x1 s={false 0} ch=false fn=-1
I/ViewRootImpl@f29f789[MainActivity](20852): stopped(false) old=true
I/ViewRootImpl@f29f789[MainActivity](20852): stopped(false) old=false
I/SurfaceView(20852): onWindowVisibilityChanged(0) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)0 dur=18 res=0x7 s={true 522959650816} ch=true fn=-1
I/SurfaceView(20852): windowStopped(false) true io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceView(20852): surfaceCreated 1 #1 io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560}
I/SurfaceView(20852): surfaceChanged (720,1560) 1 #1 io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560}
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] dp(1) 1 android.view.SurfaceView.updateSurface:1311 android.view.SurfaceView.setWindowStopped:343 android.view.SurfaceView.surfaceCreated:1835
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pdf(0) 1 android.view.SurfaceView.notifyDrawFinished:577 android.view.SurfaceView.performDrawFinished:564 android.view.SurfaceView.lambda$TWz4D2u33ZlAmRtgKzbqqDue3iM:0
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] rdf()
I/ViewRootImpl@f29f789[MainActivity](20852): updateBoundsLayer: shouldReparent = true t = android.view.SurfaceControl$Transaction@2ca18f0 sc = Surface(name=Bounds for - com.example.application_test/com.example.application_test.MainActivity@1)/@0x891319e frame = 1
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] dp(1) 1 android.view.ViewRootImpl.reportNextDraw:10957 android.view.ViewRootImpl.performTraversals:3845 android.view.ViewRootImpl.doTraversal:2618
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pd() Asnyc report
I/SurfaceView(20852): setParentSpaceRectangle: useBLAST = false position = Rect(0, 0 - 720, 1560) frameNumber = 1 t = android.view.SurfaceControl$Transaction@f446ca2
I/SurfaceView(20852): applySurfaceTransforms: t = android.view.SurfaceControl$Transaction@f446ca2 surfaceControl = Surface(name=SurfaceView - com.example.application_test/com.example.application_test.MainActivity@a4dde97@1)/@0x124747f frame = 1
I/SurfaceView(20852): applySurfaceTransforms: postScaleX = 1.0 postScaleY = 1.0
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pdf(0) 1 android.view.ViewRootImpl.lambda$performDraw$1$ViewRootImpl:4668 android.view.-$$Lambda$ViewRootImpl$DJd0VUYJgsebcnSohO6h8zc_ONI.run:6 android.os.Handler.handleCallback:938
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] rdf()
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 1 1
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
V/InputMethodManager(20852): Starting input: tba=com.example.application_test ic=null mNaviBarColor -16711423 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager(20852): startInputInner - Id : 0
I/InputMethodManager(20852): startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(20852): Input channel constructed: 'ClientS', fd=106
I/SurfaceControl(20852): nativeRelease nativeObject s[523117994656]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117994656]
I/SurfaceControl(20852): nativeRelease nativeObject s[523117993984]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117993984]
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 0
I/ViewRootImpl@f29f789[MainActivity](20852): ViewPostIme pointer 1
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
V/InputMethodManager(20852): Starting input: tba=com.example.application_test ic=null mNaviBarColor -16711423 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager(20852): startInputInner - Id : 0
I/InputMethodManager(20852): startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(20852): Input channel constructed: 'ClientS', fd=107
D/InputTransport(20852): Input channel destroyed: 'ClientS', fd=106
I/ViewRootImpl@f29f789[MainActivity](20852): stopped(true) old=false
I/SurfaceView(20852): windowStopped(true) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceView(20852): surfaceDestroyed callback.size 1 #1 io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560}
I/SurfaceView(20852): remove() io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560} Surface(name=SurfaceView - com.example.application_test/com.example.application_test.MainActivity@a4dde97@1)/@0x124747f
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954336]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954336]
I/SurfaceControl(20852): nativeRelease nativeObject s[523118000224]
I/SurfaceControl(20852): nativeRelease nativeObject e[523118000224]
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954240]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954240]
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632160]
I/SurfaceView(20852): onWindowVisibilityChanged(8) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 G.E...... ......I. 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
W/libEGL (20852): EGLNativeWindowType 0x79c2d30010 disconnect failed
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0x7ae16dd / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1810 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): nativeRelease nativeObject s[523118005120]
I/SurfaceControl(20852): nativeRelease nativeObject e[523118005120]
I/SurfaceControl(20852): nativeRelease nativeObject s[523117997632]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117997632]
I/SurfaceControl(20852): nativeRelease nativeObject s[523117997056]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117997056]
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)8 dur=38 res=0x5 s={false 0} ch=false fn=-1
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 0 1
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954336]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954336]
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954240]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954240]
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632160]
I/SurfaceView(20852): onWindowVisibilityChanged(4) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 G.E...... ......I. 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0x7ae16dd / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1810 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): nativeRelease nativeObject s[523118005216]
I/SurfaceControl(20852): nativeRelease nativeObject e[523118005216]
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)4 dur=12 res=0x1 s={false 0} ch=false fn=-1
I/ViewRootImpl@f29f789[MainActivity](20852): stopped(false) old=true
I/ViewRootImpl@f29f789[MainActivity](20852): stopped(false) old=false
I/SurfaceView(20852): onWindowVisibilityChanged(0) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632160]
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)0 dur=33 res=0x7 s={true 522982289408} ch=true fn=-1
I/SurfaceView(20852): windowStopped(false) true io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceView(20852): surfaceCreated 1 #1 io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560}
I/SurfaceView(20852): surfaceChanged (720,1560) 1 #1 io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ......ID 0,0-720,1560}
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] dp(1) 1 android.view.SurfaceView.updateSurface:1311 android.view.SurfaceView.setWindowStopped:343 android.view.SurfaceView.surfaceCreated:1835
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pdf(0) 1 android.view.SurfaceView.notifyDrawFinished:577 android.view.SurfaceView.performDrawFinished:564 android.view.SurfaceView.lambda$TWz4D2u33ZlAmRtgKzbqqDue3iM:0
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] rdf()
I/ViewRootImpl@f29f789[MainActivity](20852): updateBoundsLayer: shouldReparent = true t = android.view.SurfaceControl$Transaction@2ca18f0 sc = Surface(name=Bounds for - com.example.application_test/com.example.application_test.MainActivity@2)/@0xc4f9511 frame = 1
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] dp(1) 1 android.view.ViewRootImpl.reportNextDraw:10957 android.view.ViewRootImpl.performTraversals:3845 android.view.ViewRootImpl.doTraversal:2618
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pd() Asnyc report
I/SurfaceView(20852): setParentSpaceRectangle: useBLAST = false position = Rect(0, 0 - 720, 1560) frameNumber = 1 t = android.view.SurfaceControl$Transaction@f446ca2
I/SurfaceView(20852): applySurfaceTransforms: t = android.view.SurfaceControl$Transaction@f446ca2 surfaceControl = Surface(name=SurfaceView - com.example.application_test/com.example.application_test.MainActivity@a4dde97@2)/@0x973d676 frame = 1
I/SurfaceView(20852): applySurfaceTransforms: postScaleX = 1.0 postScaleY = 1.0
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pdf(0) 1 android.view.ViewRootImpl.lambda$performDraw$1$ViewRootImpl:4668 android.view.-$$Lambda$ViewRootImpl$DJd0VUYJgsebcnSohO6h8zc_ONI.run:6 android.os.Handler.handleCallback:938
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] rdf()
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954240]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954240]
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_RESIZED_REPORT: frame=(0,0,720,1560) ci=(0,53,0,0) vi=(0,53,0,0) or=1
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] dp(1) 1 android.view.ViewRootImpl.reportNextDraw:10957 android.view.ViewRootImpl.access$1200:256 android.view.ViewRootImpl$ViewRootHandler.handleMessage:6101
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_RESIZED_REPORT: frame=(0,0,720,1560) ci=(0,53,0,0) vi=(0,53,0,0) or=1
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pd() Asnyc report
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] pdf(0) 1 android.view.ViewRootImpl.lambda$performDraw$1$ViewRootImpl:4668 android.view.-$$Lambda$ViewRootImpl$DJd0VUYJgsebcnSohO6h8zc_ONI.run:6 android.os.Handler.handleCallback:938
I/ViewRootImpl@f29f789[MainActivity](20852): [DP] rdf()
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 1 1
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
V/InputMethodManager(20852): Starting input: tba=com.example.application_test ic=null mNaviBarColor -16711423 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager(20852): startInputInner - Id : 0
I/InputMethodManager(20852): startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(20852): Input channel constructed: 'ClientS', fd=105
D/InputTransport(20852): Input channel destroyed: 'ClientS', fd=107
D/InputMethodManager(20852): prepareNavigationBarInfo() DecorView@af5b8b7[MainActivity]
D/InputMethodManager(20852): getNavigationBarColor() -16711423
V/InputMethodManager(20852): Starting input: tba=com.example.application_test ic=null mNaviBarColor -16711423 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false
D/InputMethodManager(20852): startInputInner - Id : 0
I/InputMethodManager(20852): startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(20852): Input channel constructed: 'ClientS', fd=106
D/InputTransport(20852): Input channel destroyed: 'ClientS', fd=105
I/ViewRootImpl@f29f789[MainActivity](20852): stopped(true) old=false
I/SurfaceView(20852): windowStopped(true) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
I/SurfaceView(20852): surfaceDestroyed callback.size 1 #1 io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560}
I/SurfaceView(20852): remove() io.flutter.embedding.android.FlutterSurfaceView{a4dde97 V.E...... ........ 0,0-720,1560} Surface(name=SurfaceView - com.example.application_test/com.example.application_test.MainActivity@a4dde97@2)/@0x973d676
I/SurfaceControl(20852): nativeRelease nativeObject s[523117997056]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117997056]
I/SurfaceControl(20852): nativeRelease nativeObject s[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662632160]
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954336]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954336]
I/SurfaceControl(20852): nativeRelease nativeObject s[523118005216]
I/SurfaceControl(20852): nativeRelease nativeObject e[523118005216]
I/SurfaceView(20852): onWindowVisibilityChanged(8) false io.flutter.embedding.android.FlutterSurfaceView{a4dde97 G.E...... ......I. 0,0-720,1560} of ViewRootImpl@f29f789[MainActivity]
W/libEGL (20852): EGLNativeWindowType 0x79c42c7010 disconnect failed
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0x7ae16dd / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1810 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): assignNativeObject: nativeObject = 0 Surface(name=null)/@0xcc3f884 / android.view.SurfaceControl.readFromParcel:1117 android.view.IWindowSession$Stub$Proxy.relayout:1820 android.view.ViewRootImpl.relayoutWindow:9005 android.view.ViewRootImpl.performTraversals:3360 android.view.ViewRootImpl.doTraversal:2618 android.view.ViewRootImpl$TraversalRunnable.run:9971 android.view.Choreographer$CallbackRecord.run:1010 android.view.Choreographer.doCallbacks:809 android.view.Choreographer.doFrame:744 android.view.Choreographer$FrameDisplayEventReceiver.run:995
I/SurfaceControl(20852): nativeRelease nativeObject s[520716955200]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716955200]
I/SurfaceControl(20852): nativeRelease nativeObject s[525662633696]
I/SurfaceControl(20852): nativeRelease nativeObject e[525662633696]
I/SurfaceControl(20852): nativeRelease nativeObject s[520716955296]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716955296]
I/ViewRootImpl@f29f789[MainActivity](20852): Relayout returned: old=(0,0,720,1560) new=(0,0,720,1560) req=(720,1560)8 dur=17 res=0x5 s={false 0} ch=false fn=-1
I/ViewRootImpl@f29f789[MainActivity](20852): MSG_WINDOW_FOCUS_CHANGED 0 1
I/SurfaceControl(20852): nativeRelease nativeObject s[523118005216]
I/SurfaceControl(20852): nativeRelease nativeObject e[523118005216]
I/SurfaceControl(20852): nativeRelease nativeObject s[523117997056]
I/SurfaceControl(20852): nativeRelease nativeObject e[523117997056]
I/SurfaceControl(20852): nativeRelease nativeObject s[520716954336]
I/SurfaceControl(20852): nativeRelease nativeObject e[520716954336]
W/pplication_tes(20852): Reducing the number of considered missed Gc histogram windows from 141 to 100
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.6.1 23G93 darwin-arm64, locale en-DZ)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] IntelliJ IDEA Community Edition (version 2023.2)
[✓] VS Code (version 1.96.0)
[✓] Connected device (5 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-android,c: performance,a: quality,P2,team-android,triaged-android | low | Critical |
2,750,561,228 | flutter | define `ThemeExtension` outside of material, with static `.of` and an `InheritedWidget` or `InheritedModel` | ### Use case
I am building a design system outside of material. I don't want to use the `MaterialApp` or `Theme` widget to define the UI. I would like the ability to use `ThemeExtension`s without depending on `material`'s `Theme[Data]`.
### Proposal
I would like
- `ThemeExtension` to be defined and exported from `package:flutter/widgets.dart`
- a new encapsulation of ThemeExtensions in an InheritedWidget or InheritedModel
- a new `.of` static method that "feels" somewhere between `ThemeData.extension` and the typical style of `InheritedWidget.of`
- the ability to introduce ThemeExtensions in a `WidgetsApp` (and thus have them available in `CupertinoApp`) if not in its own InheritedWidget
example:
```dart
import 'package:flutter/widgets.dart';
void main() {
return runApp(
// new: some inherited model or widget for ThemeExtensions
ThemeExtensions(
child: WidgetsApp(home: Home(), ...),
extensions: const [
MyColor(brandColor: Color(0xFFE53935)),
],
),
);
}
class Home extends StatelessWidget {
@override
Widget build(BuildContext context) {
// new: lookup my extension MyColor with ".of"
final myColor = ThemeExtension.of<MyColor>(context);
    return ColoredBox(color: myColor.brandColor);
}
}
```
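For completeness, the `MyColor` extension referenced above could be defined along these lines (a sketch that follows the shape of the existing material `ThemeExtension` API; under this proposal the same class would simply extend the `widgets`-level `ThemeExtension`):
```dart
class MyColor extends ThemeExtension<MyColor> {
  const MyColor({required this.brandColor});

  final Color brandColor;

  @override
  MyColor copyWith({Color? brandColor}) =>
      MyColor(brandColor: brandColor ?? this.brandColor);

  @override
  MyColor lerp(ThemeExtension<MyColor>? other, double t) {
    if (other is! MyColor) return this;
    return MyColor(brandColor: Color.lerp(brandColor, other.brandColor, t)!);
  }
}
```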
some possibly related issues:
- https://github.com/flutter/flutter/issues/102993
- https://github.com/flutter/flutter/issues/154740 | c: new feature,c: proposal,P3,team-design,triaged-design,f: theming | low | Minor |
2,750,566,547 | TypeScript | Bug: settings in `tsconfig.json#paths` and `package.json#imports` conflicts | ### ๐ Search Terms
"package.json imports path"
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about this issue
### โฏ Playground Link
_No response_
### ๐ป Code
[My repository](https://github.com/KostyaTretyak/package-json-imports-bug) is intended to reproduce a bug that manifests when the following three settings are combined (sketched below):
1. `"rootDir": "."` in the `tsconfig.json` file, which is required only for VS Code and `eslint.config.mjs`;
2. `paths` in the `tsconfig.json` file creates an alias for the path `#lib/*`, pointing to `./src/*`;
3. `imports` in the `package.json` file creates an alias for the path `#lib/*`, pointing to `./dist/*`.
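A rough sketch of this combination (the file contents here are reconstructed from the description above, not copied from the repository):
`tsconfig.json`:
```json
{
  "compilerOptions": {
    "rootDir": ".",
    "paths": {
      "#lib/*": ["./src/*"]
    }
  },
  "include": ["src", "test"]
}
```
`package.json`:
```json
{
  "imports": {
    "#lib/*": "./dist/*"
  }
}
```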
In such cases, TypeScript suggestions merge content from both the `./dist/*` and `./src/*` directories:
---

---
### ๐ Actual behavior
As you can see, `file1.js`, `file2.js`, and `index.js` appear at the same level as the `src` and `test` folders, which is incorrect.
### ๐ Expected behavior
1. The setting described in point 2 should take higher precedence for TypeScript over the setting in point 3.
2. In any case, the settings in points 2 and 3 should not be merged.
### Additional information about the issue
If in point 1 the setting is changed to `"rootDir": "src"`, the bug disappears. However, in this case, you cannot use `"include": ["test"]` in the `tsconfig.json` file, which is very inconvenient. Instead, it is more practical to use the `"rootDir": "src"` setting in the `tsconfig.build.json` file. | Bug,Help Wanted | low | Critical |
2,750,570,672 | ui | [bug]: Can't find use-mobile hook in shadcn doc | ### Describe the bug
Because of a restricted firewall at my company, I can't use the CLI to add components since it points to an external registry, so I'm installing the sidebar component manually and adding its requirements one by one. That worked until I wanted to add `@/components/hooks/use-mobile`, but I can't find its source in the documentation.
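For context, I expect the hook to look roughly like this (an assumption based on typical React breakpoint hooks, not necessarily the official source):
```tsx
import * as React from "react"

const MOBILE_BREAKPOINT = 768

// Assumed shape of the hook: watch a max-width media query and expose a boolean.
export function useIsMobile() {
  const [isMobile, setIsMobile] = React.useState<boolean | undefined>(undefined)

  React.useEffect(() => {
    const mql = window.matchMedia(`(max-width: ${MOBILE_BREAKPOINT - 1}px)`)
    const onChange = () => setIsMobile(window.innerWidth < MOBILE_BREAKPOINT)
    mql.addEventListener("change", onChange)
    setIsMobile(window.innerWidth < MOBILE_BREAKPOINT)
    return () => mql.removeEventListener("change", onChange)
  }, [])

  return !!isMobile
}
```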
### Affected component/components
Sidebar
### How to reproduce
1. Go to shadcn website
2. Add Sidebar with Manual Installation
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,750,608,502 | pytorch | `assert_size_stride` failing in `_inductor/utils.py` `return model(new_inputs)` | ### ๐ Describe the bug
I was trying to create a custom_op for Mamba `selective_scan`, as suggested by @ezyang at https://github.com/pytorch/pytorch/issues/130150#issuecomment-2211312921
So I've prepared https://github.com/state-spaces/mamba/pull/651 to extend the original test to the `custom_op` version. The custom op tests pass correctly, just like the original impl tests, but the `torch.compile` version of the `custom_op` generates the errors below.
To reproduce on the PR, just run the compiled tests:
`pytest -k compile tests/ops/test_selective_scan.py`
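For context, the general pattern for registering such an op with `torch.library` looks like this (a heavily simplified, hypothetical sketch; the real `selective_scan` signature, backward, and CUDA kernel call are in the PR):
```python
import torch

# Hypothetical, simplified registration; the real op wraps the CUDA kernel
# and also registers an autograd backward.
@torch.library.custom_op("mamba::selective_scan", mutates_args=())
def selective_scan(u: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    # placeholder for the actual CUDA kernel call
    return u * delta


@selective_scan.register_fake
def _(u: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    # fake (meta) implementation used by torch.compile for shape propagation
    return torch.empty_like(u)
```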
### Error logs
```python
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-128-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==8192 at dim=0; expected size 4==4, stride 16==2048 at dim=1; expected size 1==128, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-256-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==16384 at dim=0; expected size 4==4, stride 16==4096 at dim=1; expected size 1==256, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-512-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==32768 at dim=0; expected size 4==4, stride 16==8192 at dim=1; expected size 1==512, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-1024-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==65536 at dim=0; expected size 4==4, stride 16==16384 at dim=1; expected size 1==1024, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-2048-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==131072 at dim=0; expected size 4==4, stride 16==32768 at dim=1; expected size 1==2048, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-4096-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 128==262144 at dim=0; expected size 4==4, stride 32==65536 at dim=1; expected size 2==4096, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-128-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==8192 at dim=0; expected size 4==4, stride 16==2048 at dim=1; expected size 1==128, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-256-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==16384 at dim=0; expected size 4==4, stride 16==4096 at dim=1; expected size 1==256, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-512-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==32768 at dim=0; expected size 4==4, stride 16==8192 at dim=1; expected size 1==512, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-1024-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==65536 at dim=0; expected size 4==4, stride 16==16384 at dim=1; expected size 1==1024, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-2048-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==131072 at dim=0; expected size 4==4, stride 16==32768 at dim=1; expected size 1==2048, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-4096-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 128==262144 at dim=0; expected size 4==4, stride 32==65536 at dim=1; expected size 2==4096, stride 16==16 at dim=2
```
The failure origin is quite similar across all the failing tests; here is one of them:
```python
op_impl = <function selective_scan_fn_custom_op at 0x7f64c61bdee0>, is_variable_B = True, is_variable_C = True, varBC_groups = 2, has_D = True, has_z = True, has_delta_bias = True, delta_softplus = True, return_last_state = True, seqlen = 2048, itype = torch.float32, wtype = torch.float32
@pytest.mark.parametrize(
"op_impl",
[
selective_scan_fn,
selective_scan_fn_custom_op,
torch.compile(selective_scan_fn_custom_op),
],
ids=["original", "custom", "compiled"],
)
# @pytest.mark.parametrize('wtype', [torch.float32, torch.complex64])
@pytest.mark.parametrize("wtype", [torch.float32])
# @pytest.mark.parametrize('itype', [torch.float32, torch.float16, torch.bfloat16])
@pytest.mark.parametrize("itype", [torch.float32])
# @pytest.mark.parametrize('seqlen', [8, 16, 32, 64, 128, 256, 372, 512, 784, 1024, 1134, 2048, 4096])
@pytest.mark.parametrize("seqlen", [128, 256, 512, 1024, 2048, 4096])
# @pytest.mark.parametrize("return_last_state", [False, True])
@pytest.mark.parametrize("return_last_state", [True])
# @pytest.mark.parametrize('has_delta_bias', [False, True])
@pytest.mark.parametrize("has_delta_bias", [True])
# @pytest.mark.parametrize('delta_softplus', [False, True])
@pytest.mark.parametrize("delta_softplus", [True])
# @pytest.mark.parametrize('has_z', [False, True])
@pytest.mark.parametrize("has_z", [True])
# @pytest.mark.parametrize('has_D', [False, True])
@pytest.mark.parametrize("has_D", [True])
@pytest.mark.parametrize("varBC_groups", [1, 2])
# @pytest.mark.parametrize("varBC_groups", [1])
# @pytest.mark.parametrize("is_variable_C", [False, True])
@pytest.mark.parametrize("is_variable_C", [True])
# @pytest.mark.parametrize("is_variable_B", [False, True])
@pytest.mark.parametrize("is_variable_B", [True])
def test_selective_scan(
op_impl,
is_variable_B,
is_variable_C,
varBC_groups,
has_D,
has_z,
has_delta_bias,
delta_softplus,
return_last_state,
seqlen,
itype,
wtype,
):
if varBC_groups > 1 and (not is_variable_B or not is_variable_C):
pytest.skip() # This config is not applicable
device = "cuda"
rtol, atol = (6e-4, 2e-3) if itype == torch.float32 else (3e-3, 5e-3)
if itype == torch.bfloat16:
rtol, atol = 3e-2, 5e-2
rtolw, atolw = (1e-3, 1e-3)
if has_z: # If we have z, the errors on the weights seem higher
rtolw = max(rtolw, rtol)
atolw = max(atolw, atol)
# set seed
torch.random.manual_seed(0)
batch_size = 2
dim = 4
dstate = 8
is_complex = wtype == torch.complex64
A = (-0.5 * torch.rand(dim, dstate, device=device, dtype=wtype)).requires_grad_()
if not is_variable_B:
B_shape = (dim, dstate)
elif varBC_groups == 1:
B_shape = (batch_size, dstate, seqlen if not is_complex else seqlen * 2)
else:
B_shape = (
batch_size,
varBC_groups,
dstate,
seqlen if not is_complex else seqlen * 2,
)
B = torch.randn(
*B_shape,
device=device,
dtype=wtype if not is_variable_B else itype,
requires_grad=True,
)
if not is_variable_C:
C_shape = (dim, dstate)
elif varBC_groups == 1:
C_shape = (batch_size, dstate, seqlen if not is_complex else seqlen * 2)
else:
C_shape = (
batch_size,
varBC_groups,
dstate,
seqlen if not is_complex else seqlen * 2,
)
C = torch.randn(
*C_shape,
device=device,
dtype=wtype if not is_variable_C else itype,
requires_grad=True,
)
if has_D:
D = torch.randn(dim, device=device, dtype=torch.float32, requires_grad=True)
else:
D = None
if has_z:
z = torch.randn(
batch_size, dim, seqlen, device=device, dtype=itype, requires_grad=True
)
else:
z = None
if has_delta_bias:
delta_bias = (
0.5 * torch.rand(dim, device=device, dtype=torch.float32)
).requires_grad_()
else:
delta_bias = None
u = torch.randn(
batch_size, dim, seqlen, device=device, dtype=itype, requires_grad=True
)
delta = (
0.5 * torch.rand(batch_size, dim, seqlen, device=device, dtype=itype)
).requires_grad_()
A_ref = A.detach().clone().requires_grad_()
B_ref = B.detach().clone().requires_grad_()
C_ref = C.detach().clone().requires_grad_()
D_ref = D.detach().clone().requires_grad_() if D is not None else None
z_ref = z.detach().clone().requires_grad_() if z is not None else None
u_ref = u.detach().clone().requires_grad_()
delta_ref = delta.detach().clone().requires_grad_()
delta_bias_ref = (
delta_bias.detach().clone().requires_grad_() if delta_bias is not None else None
)
out, *rest = op_impl(
u,
delta,
A,
B,
C,
D,
z=z,
delta_bias=delta_bias,
delta_softplus=delta_softplus,
return_last_state=return_last_state,
)
if return_last_state:
state = rest[0]
out_ref, *rest = selective_scan_ref(
u_ref,
delta_ref,
A_ref,
B_ref,
C_ref,
D_ref,
z=z_ref,
delta_bias=delta_bias_ref,
delta_softplus=delta_softplus,
return_last_state=return_last_state,
)
if return_last_state:
state_ref = rest[0]
# dA = torch.exp(torch.einsum('bdl,dn->bdln', delta, A))
# dt_u = delta * u
print(f"Output max diff: {(out - out_ref).abs().max().item()}")
print(f"Output mean diff: {(out - out_ref).abs().mean().item()}")
assert torch.allclose(out, out_ref, rtol=rtol, atol=atol)
if return_last_state:
print(f"State max diff: {(state - state_ref).abs().max().item()}")
assert torch.allclose(state, state_ref, rtol=rtol, atol=atol)
g = torch.randn_like(out)
out_ref.backward(g)
> out.backward(g)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/conda/lib/python3.11/site-packages/torch/_tensor.py:648: in backward
torch.autograd.backward(
/opt/conda/lib/python3.11/site-packages/torch/autograd/__init__.py:347: in backward
_engine_run_backward(
/opt/conda/lib/python3.11/site-packages/torch/autograd/graph.py:823: in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
/opt/conda/lib/python3.11/site-packages/torch/autograd/function.py:307: in apply
return user_fn(self, *args)
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py:1958: in backward
return impl_fn()
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py:1944: in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py:2079: in _backward_impl
out = call_func_at_runtime_with_args(
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py:126: in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:751: in _fn
return fn(*args, **kwargs)
/opt/conda/lib/python3.11/site-packages/torch/_inductor/output_code.py:465: in __call__
return self.current_callable(inputs)
/opt/conda/lib/python3.11/site-packages/torch/_inductor/utils.py:2191: in run
return model(new_inputs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = []
def call(args):
primals_2, primals_5, primals_6, primals_7, primals_8, primals_10, primals_11, primals_12, primals_13, primals_16, primals_18, primals_1, primals_3, primals_4, primals_9, primals_14, primals_15, primals_17, primals_19, getitem_2, getitem_3, tangents_1, tangents_2 = args
args.clear()
s0 = primals_2
s1 = primals_5
s2 = primals_6
s3 = primals_7
s4 = primals_8
s5 = primals_10
s6 = primals_11
s7 = primals_12
s8 = primals_13
s9 = primals_16
s10 = primals_18
assert_size_stride(primals_1, (4, ), (1, ))
assert_size_stride(primals_3, (2, 4, s0), (4*s0, s0, 1))
assert_size_stride(primals_4, (4, ), (1, ))
assert_size_stride(primals_9, (s1, s2, s3, s4), (s2*s3*s4, s3*s4, s4, 1))
assert_size_stride(primals_14, (s5, s6, s7, s8), (s6*s7*s8, s7*s8, s8, 1))
assert_size_stride(primals_15, (4, 8), (8, 1))
assert_size_stride(primals_17, (2, 4, s9), (4*s9, s9, 1))
assert_size_stride(primals_19, (2, 4, s10), (4*s10, s10, 1))
assert_size_stride(getitem_2, (2, 4, s10), (4*s10, s10, 1))
> assert_size_stride(getitem_3, (2, 4, s10, 16), (64*s10, 16*s10, 16, 1))
E AssertionError: expected size 2==2, stride 64==131072 at dim=0; expected size 4==4, stride 16==32768 at dim=1; expected size 1==2048, stride 16==16 at dim=2
/tmp/torchinductor_root/gz/cgzkum44b45xqpnatebwkxq45ixpx4p4cpxq7ucx3tkpkwahog3p.py:56: AssertionError
```
### Versions
stable and nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh | triaged,module: custom-operators,oncall: pt2,module: inductor,module: pt2-dispatcher | low | Critical |
2,750,637,555 | flutter | [espresso] (feature request) doesNotExist assertion for widgets | ### Use case
I'm trying to add an Android integration test to one of the packages, using Espresso for it, as suggested in [Plugin-Tests.md](https://github.com/flutter/flutter/blob/master/docs/ecosystem/testing/Plugin-Tests.md). I need to check that a widget is not rendered on the screen; after that I'll perform an action and then verify the existence of the widget again (this time it should be visible).
In the native Espresso library we can do that with `ViewAssertions.doesNotExist`, but there's no equivalent in Espresso for Flutter - we only have `isExisting` here.
### Proposal
I'd suggest adding another static function (`doesNotExist`) to the `FlutterMatchers` class. This function would return a matcher that checks whether a given Flutter widget exists; if it does, it should throw an exception.
Example code:
```java
WidgetInteraction playButton = onFlutterWidget(withValueKey("Play"));
playButton.check(matches(doesNotExist()));
```
| c: new feature,platform-android,package,c: proposal,P3,p: espresso,team-android,triaged-android | low | Minor |
2,750,637,701 | pytorch | RuntimeError: expect_autograd_hooks_ INTERNAL ASSERT FAILED at "../torch/csrc/distributed/c10d/reducer.cpp" | ### ๐ Describe the bug
When using `DistributedDataParallel` (DDP) with `static_graph=True` and multiple backward passes on the same forward pass within a `no_sync()` context, a runtime error may occur. Specifically, if the very first forward/backward call sequence on the model is made within a `no_sync()` block and involves calling `backward(retain_graph=True)` (also occurs when calling with `retain_graph=False`) on one loss followed by a second backward call on another loss derived from the same forward pass, an internal PyTorch assertion error can be triggered. This issue does not occur if a normal forward/backward pass is performed first (outside of `no_sync()`), and it also does not happen if `no_sync()` is never used.
Run the scripts below with `torchrun script_name.py`.
## Reproduces the error
```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp
def func(model):
im = torch.empty(1, 3, 224, 224, device="cuda")
seq = torch.randint(0, 1000, (1, 128), device="cuda").long()
loss, speculative_loss = model(im, seq)
loss.backward(retain_graph=True)
speculative_loss.backward()
def worker(rank, world_size):
dist.init_process_group("nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = torch.nn.Linear(3*224*224, 10)
def forward(self, im, seq):
out = self.lin(im.flatten(1))
loss = out.mean()
return loss, loss
model = Model().to(rank)
model = DDP(model, device_ids=[rank], static_graph=True)
# This scenario triggers the error
with model.no_sync():
func(model)
if __name__ == "__main__":
worker(0, 1)
```
expected output
```bash
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspaces/data-engine/temp/script_mixed_sync.py", line 39, in <module>
[rank0]: worker(0, 1)
[rank0]: File "/workspaces/data-engine/temp/script_mixed_sync.py", line 32, in worker
[rank0]: func(model)
[rank0]: File "/workspaces/data-engine/temp/script_mixed_sync.py", line 10, in func
[rank0]: loss.backward(retain_graph=True)
[rank0]: File "/workspaces/data-engine/jobs/extractor-training/.venv/lib/python3.10/site-packages/torch/_tensor.py", line 581, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/workspaces/data-engine/jobs/extractor-training/.venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/workspaces/data-engine/jobs/extractor-training/.venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: RuntimeError: expect_autograd_hooks_ INTERNAL ASSERT FAILED at "../torch/csrc/distributed/c10d/reducer.cpp":1603, please report a bug to PyTorch.
```
## Demonstration of it working without `no_sync`
```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp
def func(model):
im = torch.empty(1, 3, 224, 224, device="cuda")
seq = torch.randint(0, 1000, (1, 128), device="cuda").long()
loss, speculative_loss = model(im, seq)
loss.backward(retain_graph=True)
speculative_loss.backward()
def worker(rank, world_size):
dist.init_process_group("nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = torch.nn.Linear(3*224*224, 10)
def forward(self, im, seq):
out = self.lin(im.flatten(1))
loss = out.mean()
return loss, loss
model = Model().to(rank)
model = DDP(model, device_ids=[rank], static_graph=True)
# No no_sync() context - works without error
func(model)
if __name__ == "__main__":
worker(0, 1)
```
## Demonstration of it working when running without `no_sync` first
```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp
def func(model):
im = torch.empty(1, 3, 224, 224, device="cuda")
seq = torch.randint(0, 1000, (1, 128), device="cuda").long()
loss, speculative_loss = model(im, seq)
loss.backward(retain_graph=True)
speculative_loss.backward()
def worker(rank, world_size):
dist.init_process_group("nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = torch.nn.Linear(3*224*224, 10)
def forward(self, im, seq):
out = self.lin(im.flatten(1))
loss = out.mean()
return loss, loss
model = Model().to(rank)
model = DDP(model, device_ids=[rank], static_graph=True)
func(model)
with model.no_sync():
func(model)
if __name__ == "__main__":
worker(0, 1)
```
[scripts.zip](https://github.com/user-attachments/files/18199631/scripts.zip)
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.36
Python version: 3.10.13 (main, Mar 12 2024, 12:22:40) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 1 MiB (2 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: ddp | low | Critical |
2,750,662,826 | vscode | Support collapsing/expanding notebook cell comments | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96.1
- OS Version: macOS 15.1.1 (24B91)
Steps to Reproduce:
1. Create a review comment on a notebook cell.
2. Press the collapse button.
Expected:
Collapses the comment somehow.
Actual:
Nothing happens.
-----
Looks like this was never implemented for notebooks: https://github.com/microsoft/vscode/blob/8467007af3f3dca1e9653a476fba3c5536f8d0cc/src/vs/workbench/contrib/notebook/browser/view/cellParts/cellComments.ts#L74
I guess one challenge in implementing is coming up with a UX for collapsed comments and how to uncollapse them, given there is no line gutter icon which is used on text files for that. Maybe we could have a similar red icon in the bottom margin that also has the language picker on the far right. There is some space there, and it is relatively close to where the comments would be...
What do you think, @rebornix?
| feature-request,notebook | low | Critical |
2,750,666,993 | go | x/tools/gopls: fails to detect tests starting with an underscore | <!--
For asking questions, see:
- [Stack Overflow](https://stackoverflow.com/questions/tagged/go+visual-studio-code)
- [GitHub Discussions (Help)](https://github.com/golang/vscode-go/discussions/categories/help)
- [`#vscode` channel in Gophers Slack](https://invite.slack.golangbridge.org/messages/vscode)
Before filing an issue, please review our troubleshooting guides
* [Troubleshooting problems with debugging](https://github.com/golang/vscode-go/wiki/debugging#troubleshooting)
* [Troubleshooting other problems](https://github.com/golang/vscode-go/wiki/troubleshooting)
Please answer these questions before submitting your issue. Thanks!
-->
### What version of Go, VS Code & VS Code Go extension are you using?
<details><summary>Version Information</summary><br>
* Run `go version` to get version of Go from _the VS Code integrated terminal_.
- go1.23.4 linux/amd64
* Run `gopls -v version` to get version of Gopls from _the VS Code integrated terminal_.
- golang.org/x/tools/gopls v0.17.0-pre.4
* Run `code -v` or `code-insiders -v` to get version of VS Code or VS Code Insiders.
- 1.96.0 138f619c86f1199955d53b4166bef66ef252935c x64
* Check your installed extensions to get the version of the VS Code Go extension
- 0.43.4 (pre-release)
* Run Ctrl+Shift+P (Cmd+Shift+P on Mac OS) > `Go: Locate Configured Go Tools` command.
- <Paste the output here>
</details>
### Share the Go related settings you have added/edited
Run `Preferences: Open Settings (JSON)` command to open your settings.json file.
Share all the settings with the `go.` or `["go"]` or `gopls` prefixes.
### Describe the bug
The new experimental Go Companion test infrastructure ignores tests whose name (after the `Test` prefix) starts with an underscore, while `go test` does not.
### Steps to reproduce the behavior:
1. Create a new empty package with a test file in it
2. Create a test named `Test_foo`
3. Run `go test -v ./...` within that package -- observe the test is run
4. Enable the Go Companion experimental test explorer and code lens
5. Observe that the companion doesn't detect the test, and so the code lens is hidden, and the package (lacking any other tests) is omitted from the test explorer
6. Change the test name to `TestX_foo`
7. Observe that Go Companion sees the test now
8. Extra fun: rename it back. Observe that at least sometimes Go Companion still sees the test with the _old_ name as existing, and shows the code lens and the entry in the test explorer, but since it tries to run it with the old name, it doesn't work / produces no results.
The `Test_foo` naming pattern in my case came from the "generate tests for function" helper for a non-exported method on a non-exported type, so the test name I have is `Test_typeName_methodName`.
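For illustration, the test from step 2 can be as small as this (a hypothetical `foo_test.go`):
```go
package example

import "testing"

// `go test -run Test_foo ./...` finds and runs this test,
// but the experimental test explorer does not list it.
func Test_foo(t *testing.T) {
	t.Log("visible to go test, missing from the explorer")
}
```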
Spun out of a prior discussion with @firelizzard18: https://github.com/golang/vscode-go/issues/1636#issuecomment-2540066996
This feels like it might be a bug in gopls, if the extension is just asking it to list discovered tests, but I'm not sure how to confirm that.
### Screenshots or recordings
N/A
| gopls,Tools,gopls/analysis | low | Critical |
2,750,698,125 | rust | compiletest: more granular `--nocapture` for `run-make` tests | > Would it be possible to add a way to revert back to the old behavior? For cg_clif this makes testing the rustc test suite very verbose. I have to use `--nocapture` to see any test failures in the first place as `panic=abort` means that without it compiletest aborts before it gets a chance to print the test failures, but I would like to not have my terminal getting spammed with the output of all passing tests.
_Originally posted by @bjorn3 in https://github.com/rust-lang/rust/issues/134111#issuecomment-2554625017_
| T-compiler,T-bootstrap,C-bug,A-compiletest,A-run-make,A-test-infra | low | Critical |
2,750,756,379 | go | go/printer: bit operations on multiple lines were not aligned during formatting. | ### gopls version
golang.org/x/tools/gopls v0.17.0
golang.org/x/tools/[email protected] h1:yiwvxZX6lAQzZtJyDhKbGUiCepoGOEVw7E/i31JUcLE=
github.com/BurntSushi/[email protected] h1:pxW6RcqyfI9/kWtOwnv/G+AzdKuy2ZrqINhenH4HyNs=
github.com/google/[email protected] h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/[email protected] h1:1P7xPZEwZMoBoz0Yze5Nx2/4pxj6nw9ZqHWXqP0iRgQ=
golang.org/x/[email protected] h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/[email protected] h1:fEo0HyrW1GIgZdpbhCRO0PkJajUS5H9IFUztCgEo2jQ=
golang.org/x/[email protected] h1:TCDqnvbBsFapViksHcHySl/sW4+rTGNIAoJJesHRuMM=
golang.org/x/[email protected] h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug=
golang.org/x/[email protected] h1:dFDhAo0DFSbmpMYZcvCfIQK9q/wH3fMI8V18Gbcnm9E=
golang.org/x/[email protected] h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/[email protected] h1:4bH5o3b5ZULQ4UrBmP+63W9r7qIkqJClEA9ko5YKx+I=
mvdan.cc/[email protected] h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU=
mvdan.cc/xurls/[email protected] h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: go1.23.4
### go env
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE=''
GOENV=''
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='*'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='*'
GOPRIVATE=''
GOPROXY='https://goproxy.cn,direct'
GOROOT='*'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='*'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='*'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/sn/5swgsgnd0590drsv3qz933dr0000gp/T/go-build2086004526=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Call format.
```
func GetCapabilities() uint64 {
return uint64(
protobufs.ServerCapabilities_ServerCapabilities_AcceptsStatus |
protobufs.ServerCapabilities_ServerCapabilities_OffersRemoteConfig |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsEffectiveConfig |
protobufs.ServerCapabilities_ServerCapabilities_OffersPackages |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsPackagesStatus |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsConnectionSettingsRequest,
)
}
```
### What did you see happen?
```
func GetCapabilities() uint64 {
return uint64(
protobufs.ServerCapabilities_ServerCapabilities_AcceptsStatus |
protobufs.ServerCapabilities_ServerCapabilities_OffersRemoteConfig |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsEffectiveConfig |
protobufs.ServerCapabilities_ServerCapabilities_OffersPackages |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsPackagesStatus |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsConnectionSettingsRequest,
)
}
```
### What did you expect to see?
```
func GetCapabilities() uint64 {
return uint64(
protobufs.ServerCapabilities_ServerCapabilities_AcceptsStatus |
protobufs.ServerCapabilities_ServerCapabilities_OffersRemoteConfig |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsEffectiveConfig |
protobufs.ServerCapabilities_ServerCapabilities_OffersPackages |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsPackagesStatus |
protobufs.ServerCapabilities_ServerCapabilities_AcceptsConnectionSettingsRequest,
)
}
```
We should also support `Split parameters into separate lines` and `Join parameters into one line`.
### Editor and settings
_No response_
### Logs
_No response_ | NeedsInvestigation | low | Critical |
2,750,783,908 | ollama | qwen 2.5 coder stuck "Stopping" | ### What is the issue?
I have an ollama server alone on a server with an L4 Nvidia card:
Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-122-generic x86_64)
NVIDIA-SMI 550.107.02 Driver Version: 550.107.02 CUDA Version: 12.4
The only environment variable I've configured is Environment="OLLAMA_KEEP_ALIVE=360m" (tried various values)
And the ollama server hosts only one model
qwen2.5-coder:latest 2b0496514337 4.7 GB 27 hours ago
ollama version is 0.5.4
This ollama server is used for an internal copilot tool in my company (mainly with the continue plugin). We also use it for embeddings (which may be the problem, as you'll see).
Sometimes the server stops handling new queries. There is no reply at all, and the continue plugin waits indefinitely for an answer.
"ollama ps" shows this
NAME ID SIZE PROCESSOR UNTIL
qwen2.5-coder:latest 2b0496514337 6.0 GB 100% GPU Stopping...
The only method I've found is to restart the service.
Since yesterday, I've had a crontab that checks the status every minute and restarts ollama if it seems stuck. I keep a log of the history.
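The detector is essentially the following (a simplified sketch of the behavior described above; the real script may contain more checks):
```bash
#!/bin/bash
# Restart ollama when "ollama ps" reports a model stuck in "Stopping...".
LOG=/root/ollama_detector.log

if ollama ps | grep -q "Stopping"; then
    echo "$(date) - Detected 'Stopping...' status. Restarting the server..." >> "$LOG"
    systemctl restart ollama
else
    echo "$(date) - Server is running normally." >> "$LOG"
fi
```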
It happened again today, so I looked at the ollama log and may have found something.
My log is this:
Thu Dec 19 12:47:01 UTC 2024 - Server is running normally.
Thu Dec 19 12:48:01 UTC 2024 - Detected 'Stopping...' status. Restarting the server...
So I've looked at what happened before, and I've got this
```
Dec 19 12:43:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:43:01 | 200 | 28.31ยตs | 127.0.0.1 | HEAD "/"
Dec 19 12:43:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:43:01 | 200 | 31.81ยตs | 127.0.0.1 | GET "/api/ps"
Dec 19 12:44:01 copilot CRON[44051]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:44:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:44:01 | 200 | 29.14ยตs | 127.0.0.1 | HEAD "/"
Dec 19 12:44:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:44:01 | 200 | 25.74ยตs | 127.0.0.1 | GET "/api/ps"
Dec 19 12:45:01 copilot CRON[44068]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:45:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:45:01 | 200 | 27.819ยตs | 127.0.0.1 | HEAD "/"
Dec 19 12:45:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:45:01 | 200 | 27.72ยตs | 127.0.0.1 | GET "/api/ps"
Dec 19 12:45:29 copilot ollama[3122]: [GIN] 2024/12/19 - 12:45:29 | 200 | 30m40s | 145.239.103.2 | POST "/api/embed"
Dec 19 12:46:01 copilot CRON[44083]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:46:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:01 | 200 | 32.771ยตs | 127.0.0.1 | HEAD "/"
Dec 19 12:46:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:01 | 200 | 28.599ยตs | 127.0.0.1 | GET "/api/ps"
Dec 19 12:46:10 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:10 | 200 | 31m22s | 145.239.103.2 | POST "/api/embed"
Dec 19 12:46:46 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:46 | 200 | 31m57s | 145.239.103.2 | POST "/api/embed"
Dec 19 12:47:01 copilot CRON[44098]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:47:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:47:01 | 200 | 29.859ยตs | 127.0.0.1 | HEAD "/"
Dec 19 12:47:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:47:01 | 200 | 31.32ยตs | 127.0.0.1 | GET "/api/ps"
Dec 19 12:48:01 copilot CRON[44115]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:48:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:48:01 | 200 | 26.699ยตs | 127.0.0.1 | HEAD "/"
Dec 19 12:48:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:48:01 | 200 | 22.65ยตs | 127.0.0.1 | GET "/api/ps"
Dec 19 12:48:01 copilot systemd[1]: Stopping Ollama Service...
```
So, it seems that the server was "ok" for 5 minutes before my cron detected something.
But actually, the previous line is :
`Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.483Z level=ERROR source=routes.go:479 msg="embedding generation failed" error="context canceled"
`
And before this line, from 12:42:13 up to the previous line, the log is filled with 23 493 lines of log (!)
It looks like this
```
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.479Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: message repeated 6 times: [ time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"]
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.479Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.479Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: message repeated 55 times: [ time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"]
```
Seems like something is wrong, don't you think?
By the way, when my crontab restarted ollama, I got thousands more log lines like this before seeing the server startup log.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4 | bug | low | Critical |
2,750,800,412 | vscode | [html] support `goto definition` on built-in symbols inside of `script` | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
Does this issue occur when all extensions are disabled?: Yes
Version: 1.96.1 (user setup)
Commit: 42b266171e51a016313f47d0c48aca9295b9cbb2
OS: Windows_NT x64 10
Steps to Reproduce:
1. Create new file with `html` type
2. Put code
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<script>
const b = new Blob([]);
</script>
</body>
</html>
```
3. Press `ctrl` and hover the cursor over `Blob` to see the go-to-definition underline
4. No underline appears below `Blob`, and clicking on it doesn't do anything
| help wanted,feature-request,html | low | Critical |
2,750,804,116 | TypeScript | if-statements allow multiple arguments and don't check for constant conditions after the first argument | ### ๐ Search Terms
"if statement", "constant condition"
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about common bugs
### โฏ Playground Link
https://www.typescriptlang.org/play/?#code/MYewdgzgLgBBIFsCmBhAFk4BrAlmA5gGICuYwUO4MAvDABQAOAhgE5MICMAXHFC3vgA0MZmwQAmHtH4FhIBhXBMANgAVW7AMw0YfYkgCUNAHwwA3gCgY1mAHpbMACYhexAGZurNlkijEWYLos+gDcFgC+Fhb2MCoA7kwAnhAwbioQSMI+EMTKsCBucIioGNgCJGSKgfFJKTj4YCA+jhY4bnTwyOiYuAQV5JRgdABEouwcw8KjGhLDBsJpyhlGljYwEVExjkgMSGDbYPmB2bn5hZ0lPeWkA+Ct7RfdZX03VSNjnJMw02LiX4vLFZeawRIA
### ๐ป Code
```ts
const someCheckingFunction = (param1: string, param2: string, optionalParam3 = true) => {
// do stuff
return true;
}
// always false, result of someCheckingFunction always ignored
if(someCheckingFunction("param1", "param2"), false) {
}
// dependent on result of someCheckingFunction
if(someCheckingFunction("param1", "param2", false)) {
}
```
### ๐ Actual behavior
Due to a mistake in my bracket placement as seen in the example, I accidentally learned that multiple arguments in an if statement are possible.
Currently that means:
1. only the last argument is used for the if statement check
2. even for constant conditions no error is shown
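For clarity, in the first `if` above both expressions are evaluated but only the last one decides the branch, so it behaves like this sketch:
```ts
// equivalent behavior of `if(someCheckingFunction("param1", "param2"), false)`:
someCheckingFunction("param1", "param2"); // result discarded
if (false) {
}
```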
### ๐ Expected behavior
What I would expect:
1. don't allow multiple arguments, this only happens because someone made an error
2. at least show an error if there is a constant condition
### Additional information about the issue
_No response_ | Suggestion,Awaiting More Feedback | low | Critical |
2,750,808,558 | pytorch | [CI] XPU linux ci test has flaky error with sccache service | Noticed that there are some torch xpu ci test failure with sccache caused by the S3 token permission expired, refer https://github.com/pytorch/pytorch/actions/runs/12374649660/job/34540161843#step:14:3461
```
sccache: error: Server startup failed: cache storage failed to read: Unexpected (permanent) at read => S3Error { code: "ExpiredToken", message: "The provided token has expired.", resource: "", request_id: "NMJ4H2V91GQ7S2BZ" }
```
The workflow is https://github.com/pytorch/pytorch/blob/main/.github/workflows/_xpu-test.yml runs on self-hosted runner `linux.idc.xpu` with docker containers
cc @seemethere @malfet @pytorch/pytorch-dev-infra @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | module: ci,triaged,module: infra,intel | low | Critical |
2,750,846,646 | flutter | AVIF Image doesn't work on Chrome browsers | ### Steps to reproduce
Trying to render any AVIF in Chrome fails, but it works fine on Safari and Firefox.
```dart
Image.network('https://raw.githubusercontent.com/link-u/avif-sample-images/refs/heads/master/kimono.avif')
```
### Expected results
Image rendered
### Actual results
```
โโโก EXCEPTION CAUGHT BY IMAGE RESOURCE SERVICE โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
The following ImageCodecException was thrown resolving an image codec:
Failed to decode image using the browser's ImageDecoder API.
Image source: encoded image bytes
Original browser error: InvalidStateError: Failed to retrieve track metadata.
When the exception was thrown, this was the stack
Image provider:
NetworkImage("https://raw.githubusercontent.com/link-u/avif-sample-images/refs/heads/master/kimono.avif",
scale: 1.0)
Image key:
NetworkImage("https://raw.githubusercontent.com/link-u/avif-sample-images/refs/heads/master/kimono.avif",
scale: 1.0)
```
### Code sample
```dart
import 'dart:math' as math;
import 'package:flutter/material.dart';
const int maxSeeds = 250;
void main() {
runApp(const Sunflower());
}
class Sunflower extends StatefulWidget {
const Sunflower({super.key});
@override
State<StatefulWidget> createState() {
return _SunflowerState();
}
}
class _SunflowerState extends State<Sunflower> {
int seeds = maxSeeds ~/ 2;
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData(
brightness: Brightness.dark,
appBarTheme: const AppBarTheme(elevation: 2),
),
debugShowCheckedModeBanner: false,
home: Scaffold(
appBar: AppBar(
title: const Text('Sunflower'),
),
body: Center(
child: Column(
crossAxisAlignment: CrossAxisAlignment.center,
children: [
Image.network('https://raw.githubusercontent.com/link-u/avif-sample-images/refs/heads/master/kimono.avif')
],
),
),
),
);
}
}
class SunflowerWidget extends StatelessWidget {
static const tau = math.pi * 2;
static const scaleFactor = 1 / 40;
static const size = 600.0;
static final phi = (math.sqrt(5) + 1) / 2;
static final rng = math.Random();
final int seeds;
const SunflowerWidget(this.seeds, {super.key});
@override
Widget build(BuildContext context) {
final seedWidgets = <Widget>[];
for (var i = 0; i < seeds; i++) {
final theta = i * tau / phi;
final r = math.sqrt(i) * scaleFactor;
seedWidgets.add(AnimatedAlign(
key: ValueKey(i),
duration: Duration(milliseconds: rng.nextInt(500) + 250),
curve: Curves.easeInOut,
alignment: Alignment(r * math.cos(theta), -1 * r * math.sin(theta)),
child: const Dot(true),
));
}
for (var j = seeds; j < maxSeeds; j++) {
final x = math.cos(tau * j / (maxSeeds - 1)) * 0.9;
final y = math.sin(tau * j / (maxSeeds - 1)) * 0.9;
seedWidgets.add(AnimatedAlign(
key: ValueKey(j),
duration: Duration(milliseconds: rng.nextInt(500) + 250),
curve: Curves.easeInOut,
alignment: Alignment(x, y),
child: const Dot(false),
));
}
return FittedBox(
fit: BoxFit.contain,
child: SizedBox(
height: size,
width: size,
child: Stack(children: seedWidgets),
),
);
}
}
class Dot extends StatelessWidget {
static const size = 5.0;
static const radius = 3.0;
final bool lit;
const Dot(this.lit, {super.key});
@override
Widget build(BuildContext context) {
return DecoratedBox(
decoration: BoxDecoration(
color: lit ? Colors.orange : Colors.grey.shade700,
borderRadius: BorderRadius.circular(radius),
),
child: const SizedBox(
height: size,
width: size,
),
);
}
}
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Chrome:

Safari:

</details>
### Logs
<details open><summary>Logs</summary>
```console
══╡ EXCEPTION CAUGHT BY IMAGE RESOURCE SERVICE ╞════════════════════════════════════════════════════
The following ImageCodecException was thrown resolving an image codec:
Failed to decode image using the browser's ImageDecoder API.
Image source: encoded image bytes
Original browser error: InvalidStateError: Failed to retrieve track metadata.
When the exception was thrown, this was the stack
Image provider:
NetworkImage("https://raw.githubusercontent.com/link-u/avif-sample-images/refs/heads/master/kimono.avif",
scale: 1.0)
Image key:
NetworkImage("https://raw.githubusercontent.com/link-u/avif-sample-images/refs/heads/master/kimono.avif",
scale: 1.0)
════════════════════════════════════════════════════════════════════════════════════════════════════
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.22.2, on macOS 15.1.1 24B91 darwin-arm64, locale en-PL)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[!] Xcode - develop for iOS and macOS (Xcode 16.1)
! iOS 18.1 Simulator not installed; this may be necessary for iOS and macOS development.
To download and install the platform, open Xcode, select Xcode > Settings > Platforms,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-simulator-runtimes
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.2)
[✓] VS Code (version 1.96.0)
[✓] Connected device (3 available)
[✓] Network resources
! Doctor found issues in 1 category.
```
</details>
| platform-web,a: images,has reproducible steps,P2,browser: chrome-desktop,team-web,triaged-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,750,884,727 | vscode | Automatically adjust minimap scale according to display scale |
I have a 27" 4K monitor as my main display and a 24" 1080p monitor as a secondary display. To make everything have consistent sizes between the two displays, I have set my main display to 175% scale. The majority of VS Code scales accordingly, but the minimap does not, which results in it being very small and difficult to use. I can change `editor.minimap.scale` to 2 to account for this on my main display, but then if I move a window over to my secondary display, the minimap is far too large.
I would like the minimap to automatically account for display scaling (or have a way to set a different scale per display) such that it is a consistent size on my two displays. This may require supporting non-integer scales (#84168), or it could round to the nearest integer (which may not be ideal for 150% scale). | bug,editor-minimap,under-discussion | low | Minor |
2,750,932,124 | flutter | [go_router] Support for Handling Action-Based Deep Links Without Navigation | ### Use case
**Problem**
The `go_router` package provides a great mechanism for handling deep links through the `redirect` callback. However, there are scenarios where a deep link should trigger an action (e.g., saving a referral code, showing a snack bar) without navigating to any route or modifying the current screen.
Currently, the `redirect` function requires returning either `null` (to proceed to the intended route) or another string (to redirect to a different route). There is no clean way to "do nothing" while still handling the deep link logic.
**Use Case**
An example is a referral link (`/referral?code=XYZ123`) that should:
1. Save the referral code locally.
2. Show a snack bar confirming the referral code was saved.
3. Prevent navigation, keeping the user on their current screen.
### Proposal
1. Introduce a separate `onDeepLink` callback that is triggered for every incoming deep link. This callback could be used to handle action-based links independently of routing.
```dart
GoRouter(
onDeepLink: (context, state) {
if (state.uri.path == '/referral') {
// Perform the action here
return true; // Handled successfully
}
return false; // Continue with routing
},
// Other configurations...
);
```
2. Provide a mechanism to block/prevent navigation dynamically based on deep link conditions.
```dart
redirect: (context, state) {
if (state.host.isNotEmpty && state.uri.path == '/referral') {
// Perform the action and block navigation
}
return null;
}
``` | c: new feature,package,c: proposal,P2,p: go_router,team-go_router,triaged-go_router | low | Minor |
2,750,932,221 | angular | output-migration Removes types from output | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
When running the migration `ng generate @angular/core:output-migration`, my `@Output()` properties lose their type when converted to `output()`, causing errors everywhere.

Instead of turning this
`@Output() EventMoveMovimientos: EventEmitter<resupuesta> = new EventEmitter();`
into
`readonly EventMoveMovimientos = output();`
It should be turned into this
`readonly EventMoveMovimientos = output<resupuesta>();`
By the way, the emit call is this:
```
this.EventMoveMovimientos.emit({
idInstrumento: nemo.idNemotecnico,
idCuenta: nemo.idCuenta,
});
```
And the interface is this:
```
export interface resupuesta {
idInstrumento: number;
idCuenta: number;
}
```
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
X [ERROR] TS2345: Argument of type '{ idInstrumento: any; idCuenta: any; }' is not assignable to parameter of type 'void'. [plugin angular-compiler]
src/app/modules/shared-module/tabla-instrumento/forward/forward.component.ts:97:35:
97 │ this.EventMoveMovimientos.emit({
╵ ^
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
     _                      _                 ____ _     ___
    / \   _ __   __ _ _   _| | __ _ _ __     / ___| |   |_ _|
   / △ \ | '_ \ / _` | | | | |/ _` | '__|   | |   | |    | |
  / ___ \| | | | (_| | |_| | | (_| | |      | |___| |___ | |
 /_/   \_\_| |_|\__, |\__,_|_|\__,_|_|       \____|_____|___|
                |___/
Angular CLI: 19.0.6
Node: 20.13.1
Package Manager: npm 10.5.2
OS: win32 x64
Angular: 19.0.5
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... platform-server, router, service-worker
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.6
@angular-devkit/build-angular 19.0.6
@angular-devkit/core 19.0.6 (cli-only)
@angular-devkit/schematics 19.0.6
@angular/cdk 19.0.4
@angular/cli 19.0.6
@angular/flex-layout 15.0.0-beta.42
@angular/material 19.0.4
@schematics/angular 19.0.6
rxjs 7.8.1
typescript 5.5.3
zone.js 0.15.0
```
### Anything else?
_No response_ | core: inputs / outputs,area: migrations | low | Critical |
2,750,989,853 | vscode | theme-color is not updated when the PWA is inactive. | - VS Code Version: 1.97.0-insider
- OS Version: PWA (Mac OS, but should apply to all)
Steps to Reproduce:
1. Set the colour `titleBar.inactiveBackground` to something significantly different to the `activeBackground` to easily distinguish the focused app.
2. Focus on any other app.

Notice that even when the PWA is inactive, it still takes the theme-color from the active titlebar background.
https://github.com/microsoft/vscode/blob/main/src/vs/workbench/browser/style.ts#L27-L41
I think that ideally the theme-color would match the titlebar colour, regardless of whether the app is active or not.
| bug,titlebar,confirmed,web | low | Minor |
2,750,992,867 | transformers | [`Mamba2`] Varlen implementation | ### Feature request
Use varlen implementations (cu_seq_lens) of mamba2 and conv1d when requirements are met, i.e. mostly version dependencies.
### Motivation
It's similar to how FA2 works with varlen, and it should boost performance while guaranteeing correct behavior on batched inputs.
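For illustration, a minimal sketch (my own, not an existing transformers API) of how cumulative sequence lengths are typically built from per-sequence lengths before calling a varlen kernel:
```python
import torch
import torch.nn.functional as F
# Hypothetical lengths of 3 variable-length sequences packed into one batch.
seq_lens = torch.tensor([5, 2, 7], dtype=torch.int32)
# cu_seq_lens holds the start offset of each sequence in the flattened batch,
# plus the total length at the end -> tensor([0, 5, 7, 14], dtype=torch.int32)
cu_seq_lens = F.pad(torch.cumsum(seq_lens, dim=0, dtype=torch.int32), (1, 0))
```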
### Your contribution
I can make a PR; not sure when I'll get to it, probably after the Christmas holidays. | Feature request | low | Major |
2,751,010,535 | flutter | CupertinoTextField Does Not Follow Primary Color for Text Selection During Autocorrect | ### Steps to reproduce
1. Create a Flutter app with a `CupertinoTextField` and set a custom `primaryColor` in the `CupertinoThemeData`.
2. Type text into the `CupertinoTextField` and observe the behavior during autocorrect.
3. Compare it with the native iOS `TextField` behavior.
### Expected results
- The selection handles and marked text during autocorrect in `CupertinoTextField` should follow the `primaryColor` defined in `CupertinoThemeData`.
- This behavior should match the native iOS `TextField`, which dynamically applies the app's primary color.
### Actual results
- `CupertinoTextField` does not respect the `primaryColor` for selection handles or marked text while autocorrecting or typing.
- On iOS, the native `TextField` dynamically uses the primary color, ensuring visual consistency.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
void main() {
runApp(CupertinoTestApp());
}
class CupertinoTestApp extends StatelessWidget {
const CupertinoTestApp({super.key});
@override
Widget build(BuildContext context) {
return const CupertinoApp(
theme: CupertinoThemeData(
primaryColor: Colors.red, // Set the primary color to red
),
home: MyScreen(),
);
}
}
class MyScreen extends StatelessWidget {
const MyScreen({super.key});
@override
Widget build(BuildContext context) {
return CupertinoPageScaffold(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
crossAxisAlignment: CrossAxisAlignment.center,
children: [
CupertinoTextField(),
],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
| **Native iOS TextField** | **Flutter CupertinoTextField**
|----------------------|----------------------|
|  |  |
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.1, on macOS 14.5 23F79 darwin-arm64, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✗] Chrome - develop for the web (Cannot find Chrome executable at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Android Studio (version 2023.3)
[✓] VS Code (version 1.96.0)
[✓] Connected device (4 available)
[✓] Network resources
! Doctor found issues in 1 category.```
</details>
| a: text input,platform-ios,f: cupertino,has reproducible steps,P2,c: parity,team-text-input,triaged-text-input,found in release: 3.27,found in release: 3.28 | low | Minor |
2,751,018,724 | godot | Editor sub-windows are slow to respond when displayed in "tabbed" mode in Sway (Wayland) | ### Tested versions
Experienced in v4.3.stable.arch_linux.
### System information
`Godot v4.3.stable unknown - Arch Linux #1 ZEN SMP PREEMPT_DYNAMIC Mon, 09 Dec 2024 14:30:31 +0000 - Wayland - Vulkan (Forward+) - integrated Intel(R) Iris(R) Xe Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i5-1235U (12 Threads)`
### Issue description
Sub-windows - such as the "Create New Node" dialog - displayed in Sway's "tabbed" mode respond slowly. It's unclear whether this is a slow response to input (it applies to both mouse and keyboard input) or just a laggy interface in general resulting in high latency. The issue does not occur in "split" modes.
This video demonstrates both cases:
https://github.com/user-attachments/assets/73df318b-7311-4f8c-92db-63a9158d9778
### Steps to reproduce
Open any sub-window in tabbed mode while using the Sway Wayland compositor. Attempt to make inputs. This can be performed in a fresh project.
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:porting | low | Major |
2,751,026,797 | TypeScript | Computed property type is invalid in type emitted if it is a unique symbol coming from a property that has a space in the name | ### ๐ Search Terms
computed property unique symbols declaration emit
### ๐ Version & Regression Information
- This changed between versions 3.7 and 3.8
### โฏ Playground Link
[Playground Link](https://www.typescriptlang.org/play/?ts=5.7.2#code/MYGwhgzhAEAq0G8BQ1XQgFzBglsaATgKZgAmA9gHYgCe0A2gOQCC0YjAugFzQCulOAI68i6GgFsARuRABuJAF8kIIhmgAPaAF5EKNADNy5LrCat2HRWxjAqmWUA)
### ๐ป Code
```ts
class T {
static readonly ['A a']: unique symbol;
}
let x = {
foo:T['A a']
} as const;
```
### ๐ Actual behavior
The type of `x` is emitted as `{ readonly foo: typeof T.A a; }`, which is invalid.
### ๐ Expected behavior
Declaration emit should be valid
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Minor |
2,751,028,230 | vscode | Should we offer a config to control if we set theme-color in a PWA | This is related to #236615
Should we offer a setting which, when enabled, would prevent VS Code from setting the `theme-color` meta tag and instead have VS Code honour the user's OS preference for titlebars? | feature-request,themes,web | low | Minor |
2,751,073,668 | godot | Closing PopupMenu of OptionButton via code on window resize error : '_sub_window_update: Condition "index == -1" is true' | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Windows 11 - Godot 4.3 - Compatibility
### Issue description
What is the proper way to close the popup menu of an OptionButton?
There is `self.show_popup()` to show it, but I do not see a close option. Am I missing something?
Trying to do (on "size_changed" signal)
```
self.get_popup().visible = false
```
hides it but logs an error
```
E 0:00:07:0991 _sub_window_update: Condition "index == -1" is true.
<C++ Source> scene/main/viewport.cpp:310 @ _sub_window_update()
```
Note that the docs state that using the visible property should be okay:
```
PopupMenu get_popup() const
Returns the PopupMenu contained in this button.
Warning: This is a required internal node, removing and freeing it may cause a crash. If you wish to hide it or any of its children, use their Window.visible property.
```
### Steps to reproduce
1. Create new project
2. Create new main scene (Control)
3. Add OptionButton as child with at least 1 option
4. Attach script to main scene
```
extends Control
@onready var option_button: OptionButton = $OptionButton
func _ready() -> void:
get_tree().get_root().connect("size_changed", _on_root_size_changed)
func _on_root_size_changed() -> void:
option_button.get_popup().visible = false
```
5. Start the project
6. Click on option button to open popup
7. Resize the window (trigger signal)
8. Read error in console
### Minimal reproduction project (MRP)
[new-game-project.zip](https://github.com/user-attachments/files/18201550/new-game-project.zip)
| bug,topic:gui | low | Critical |
2,751,079,480 | godot | Terminal output is ordered by instance number when running multiple instances of a game | ### Tested versions
Found on v4.3.stable.official [77dcf97d8]
### System information
Arch - v4.3.stable.official [77dcf97d8]
### Issue description
When running 2 or more instances of a game from the editor (using Debug > Customize Run Instances > Run Multiple Instances), the output produced by `print()` in those instances is ordered by instance number, not by time of execution.
This is mainly problematic when trying to debug multiplayer games and trying to figure out in which order things actually happen on the server and clients.
I assume the issue occurs because the output is buffered without accounting for the actual order in which it was produced.
### Steps to reproduce
General description:
Run 2 instances of a game from the editor.
Make one instance a server and another a client.
Run the following code:
```
print("before rpc")
message.rpc_id(1,"time traveler")
print("after rpc")
```
Depending on which instance is first and which is the second, 'time traveler' will be printed before or after 'before rpc' and 'after rpc' respectively, at times making it seem like the rpc got executed before it was called.
MRP:
Run 2 instances. Press the 'start server' button in one and 'start client' in another; after a second, the output of the code above will be printed in an order that depends on which instance is first and which is second.
### Minimal reproduction project (MRP)
[output_ordering_mrp.zip](https://github.com/user-attachments/files/18201508/output_ordering_mrp.zip)
| bug,topic:editor,topic:network | low | Critical |
2,751,127,223 | pytorch | Full BFGS optimizer | ### ๐ The feature, motivation and pitch
Currently, the torch optimizer supports [LBFGS](https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html), which is a limited-memory version of the full [BFGS](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm#:~:text=In%20numerical%20optimization%2C%20the%20Broyden,solving%20unconstrained%20nonlinear%20optimization%20problems.) optimizer. Admittedly, the BFGS optimizer requires O(n^2) memory for storing the running Hessian, so it is totally impractical for training moderately sized neural networks. However, I'm currently working on some social science projects which require running regression models where the dataset is large but the number of regression coefficients (trainable weights) is small, usually under 100. In this case, I think the BFGS optimizer is the perfect fit, because:
- The optimization space is low-dimensional
- The problem is usually convex
- Stability of convergence and precision of the (local) minimum is very important
Actually, I think in this case the full BFGS optimizer can even be more efficient compared to LBFGS.
JAX currently has an implementation of [BFGS](https://github.com/jax-ml/jax/blob/main/jax/_src/scipy/optimize/bfgs.py), which I'm using, but the JAX ecosystem is very awkward and I'd personally prefer using torch for everything.
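For reference, a minimal sketch (with made-up data) of the kind of small convex regression meant here, fit today with the existing `torch.optim.LBFGS`; raising `history_size` toward the parameter count is the closest current approximation of full BFGS:
```python
import torch
# Toy linear regression: large dataset, few coefficients.
X = torch.randn(100_000, 20)
y = X @ torch.randn(20) + 0.1 * torch.randn(100_000)
w = torch.zeros(20, requires_grad=True)
opt = torch.optim.LBFGS([w], max_iter=100, history_size=20, line_search_fn="strong_wolfe")
def closure():
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(X @ w, y)
    loss.backward()
    return loss
opt.step(closure)  # a full BFGS optimizer could expose the same closure-based API
```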
### Alternatives
_No response_
### Additional context
This is potentially not a good fit for torch because I believe it's really meant for training neural networks, and not solving convex regression problems. However, if BFGS can reuse components from the existing LBFGS implementation, maybe it's worth pursuing.
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar | module: optimizer,triaged | low | Minor |
2,751,131,868 | vscode | Spinner appears on every paste after initial copy, sometimes not completing |
Type: <b>Bug</b>
> Issue troubleshooting has identified that the issue is with Visual Studio Code.
- Copy text from editor
- Try to paste that text back in the same file or another file
Expected: text is pasted
Actual: spinner appears, text usually never gets pasted
Can sometimes get past the issue by taking other actions before trying to paste.
VS Code version: Code 1.96.0 (Universal) (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|24, 17, 12|
|Memory (System)|64.00GB (5.83GB free)|
|Process Argv|--crash-reporter-id a6648a3f-85df-458f-a45c-8947c29596ff|
|Screen Reader|no|
|VM|0%|
</details>Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
| info-needed | medium | Critical |
2,751,197,437 | rust | Tracking issue for rustdoc `--extract-doctests` command-line flag | This option is needed by the Rust-for-Linux project. The goal is to allow them to extract doctests so they can modify them and run them however they want to simplify their testing pipeline.
### Steps
- [ ] Implement the feature (https://github.com/rust-lang/rust/pull/134531)
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
### Unresolved Questions
None currently. | T-rustdoc,C-tracking-issue,A-doctests,A-rust-for-linux | low | Minor |
2,751,217,284 | pytorch | TCPStore crash when initializing from multiple threads | ### ๐ Describe the bug
There's a bug in pybind which is causing TCPStore to crash on deletion when instantiating it from multiple threads.
```
terminate called after throwing an instance of 'std::runtime_error'
what(): pybind11_object_dealloc(): Tried to deallocate unregistered instance!
```
Full repro and stack traces are at: https://gist.github.com/d4l3k/24fd4ac1994ceb4b5a063b125ace1fe3
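The gist is the authoritative repro; roughly, its shape is something like this sketch (my paraphrase, with arbitrary ports):
```python
import threading
from torch.distributed import TCPStore
def create_and_drop(port: int) -> None:
    # Each thread creates its own master store and lets it be destroyed.
    store = TCPStore("127.0.0.1", port, world_size=1, is_master=True)
    del store
threads = [threading.Thread(target=create_and_drop, args=(29500 + i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```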
### Versions
Python 2.5.1, main
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | oncall: distributed,module: c10d,bug | low | Critical |
2,751,222,617 | rust | Regression: source code location is bad in nightly (doctests) | ### Code
I tried this code in xso/src/from_xml_doc.md:
#### Example without codec
```rust
# // extern crate alloc;
# use xso::FromXml;
#[derive(FromXml, Debug, PartialEq)]
#[xml(namespace = "urn:example", name = "foo")]
struct Foo {
#[xml(text)]
a: String,
};
let foo: Foo = xso::from_bytes(b"<foo xmlns='urn:example'>hello</foo>").unwrap();
assert_eq!(foo, Foo {
a: "hello".to_string(),
});
```
(I commented out `extern crate alloc`, but originally it was simply not there)
I expected to see this happen: the error reported as coming from xso/src/from_xml_doc.md at line ~600
Instead, this happened: the error is reported as coming from xso/src/lib.rs at line ~600 (a location that does not exist)
```
failures:
---- xso/src/lib.rs - FromXml (line 648) stdout ----
error[E0433]: failed to resolve: could not find `alloc` in the list of imported crates
--> xso/src/lib.rs:652:10
|
7 | #[derive(FromXml, Debug, PartialEq)]
| ^^^^^^^ could not find `alloc` in the list of imported crates
|
= note: this error originates in the derive macro `FromXml` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this module
|
3 + use std::string;
|
error[E0433]: failed to resolve: could not find `alloc` in the list of imported crates
--> xso/src/lib.rs:652:10
|
7 | #[derive(FromXml, Debug, PartialEq)]
| ^^^^^^^ could not find `alloc` in the list of imported crates
|
= note: this error originates in the derive macro `FromXml` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this struct
|
3 + use std::string::String;
|
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0433`.
Couldn't compile the test.
failures:
xso/src/lib.rs - FromXml (line 648)
test result: FAILED. 9 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.39s
error: doctest failed, to rerun pass `--doc`
```
### Version it worked on
Works on stable:
```
---- xso/src/from_xml_doc.md - FromXml (line 590) stdout ----
error[E0433]: failed to resolve: could not find `alloc` in the list of imported crates
--> xso/src/from_xml_doc.md:594:10
|
7 | #[derive(FromXml, Debug, PartialEq)]
| ^^^^^^^ could not find `alloc` in the list of imported crates
|
= note: this error originates in the derive macro `FromXml` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this module
|
3 + use std::string;
|
error[E0433]: failed to resolve: could not find `alloc` in the list of imported crates
--> xso/src/from_xml_doc.md:594:10
|
7 | #[derive(FromXml, Debug, PartialEq)]
| ^^^^^^^ could not find `alloc` in the list of imported crates
|
= note: this error originates in the derive macro `FromXml` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this struct
|
3 + use std::string::String;
|
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0433`.
Couldn't compile the test.
failures:
xso/src/from_xml_doc.md - FromXml (line 590)
test result: FAILED. 9 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 1.04s
error: doctest failed, to rerun pass `--doc`
```
### Version with regression
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (4ba4ac612 2024-12-18)
binary: rustc
commit-hash: 4ba4ac612d36e3409e8e1e31e12acc028808f85f
commit-date: 2024-12-18
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
```
| T-rustdoc,C-bug,A-doctests,E-needs-mcve,I-prioritize,regression-untriaged | low | Critical |
2,751,255,143 | node | Slow streams over network connection on windows share | ### Version
v23.4.0
### Platform
```text
Microsoft Windows NT 10.0.19045.0 x64
```
### Subsystem
streams
### What steps will reproduce the bug?
```
const fs = require('node:fs');
const readline = require('node:readline');
const sourcePath = "G:\\file.txt"; // 700 MB text file with 547 characters long lines on a windows share
const targetPath = "G:\\file.txt.copy";
const myInterface = readline.createInterface({
input: fs.createReadStream(sourcePath).setEncoding('binary')
});
fs.closeSync(fs.openSync(targetPath, 'w'));
const output = fs.createWriteStream(targetPath);
output.setDefaultEncoding('utf-8');
myInterface.on('line', function (line) {
output.write(line);
});
myInterface.on('close', function (line) {
output.end();
console.log(`Saved!`); // In node, at about 3 or 4 MB over network it shows this "Saved!", but continues copying?!
});
```
### How often does it reproduce? Is there a required condition?
every time
### What is the expected behavior? Why is that the expected behavior?
Faster stream copy
### What do you see instead?
On a 1 Gbit Ethernet connection over a Windows share, with Node v23.4.0 -> 513 seconds. That seemed like a lot to me, so I tried Bun.
Same connection and share, with Bun 1.1.39 -> 13 seconds.
So Node is about 39x slower.
It's also strange that at about 3 or 4 MB into file.txt.copy, Node shows the "Saved!" message but continues copying.
### Additional information
On local disk, node is a bit faster than bun.
So it seems that problem is over network connection | stream | low | Critical |
2,751,283,609 | ui | [bug]: NavigationMenu content does not show right undernith of a NavigationMenuItem | ### Describe the bug
NavigationMenu content does not show up right underneath its NavigationMenuItem. It just sticks to the left side.
[Screencast from 2024-12-20 01-39-29.webm](https://github.com/user-attachments/assets/e427dfa5-caf2-43a8-99ed-ebc2a1f60965)
### Affected component/components
NavigationMenu
### How to reproduce
This is the component:
```typescript
"use client";
import * as React from "react";
import Link from "next/link";
import { cn } from "@/lib/utils";
import {
NavigationMenu,
NavigationMenuItem,
NavigationMenuList,
NavigationMenuContent,
NavigationMenuLink,
NavigationMenuTrigger,
} from "@/components/shadcn-ui/navigation-menu";
import { usePathname } from "next/navigation";
import { Separator } from "@/components/shadcn-ui/separator";
interface Brand {
brandId: string;
categoryId: string;
brand: {
id: string;
name: string;
};
}
interface ProductSubcategory {
id: string;
name: string;
categoryId?: string;
}
interface Category {
id: string;
name: string;
productSubcategory: ProductSubcategory[];
brands: Brand[];
}
export default function NavbarMenu({ categories }: { categories: Category[] }) {
const router = usePathname();
const shouldHide = router.includes("/dashboard");
if (shouldHide) {
return null;
}
return (
<section className="hidden md:block w-full border-b bg-white/95 backdrop-blur relative z-50">
<div className="flex min-h-14 items-center justify-center">
<NavigationMenu className="max-w-screen-2xl">
<NavigationMenuList className="flex flex-wrap gap-1">
{categories?.map((category) => (
<NavigationMenuItem key={category.id} className="">
<NavigationMenuTrigger className="">
{category.name}
</NavigationMenuTrigger>
<NavigationMenuContent className="">
<div className="grid gap-3 p-6 md:w-[400px] lg:w-[500px] lg:grid-cols-[.75fr_1fr]">
<div className="row-span-3">
<NavigationMenuLink asChild>
<Link
className="flex h-full w-full select-none flex-col justify-end rounded-md bg-gradient-to-b from-muted/50 to-muted p-6 no-underline outline-none focus:shadow-md"
href={`/search?categoryId=${category.id}`}
>
<div className="mb-2 mt-4 text-lg font-medium">
{category.name}
</div>
<p className="text-sm leading-tight text-muted-foreground">
Explore all {category.name} products
</p>
</Link>
</NavigationMenuLink>
</div>
<SubcategoryList category={category} />
</div>
</NavigationMenuContent>
</NavigationMenuItem>
))}
</NavigationMenuList>
</NavigationMenu>
</div>
<Separator />
</section>
);
}
const SubcategoryList = ({ category }: { category: Category }) => {
return (
<div className="">
<ul className="grid gap-3 p-2 md:w-[200px] md:grid-cols-1 lg:w-[300px] lg:grid-cols-2 ">
{category.productSubcategory.map((subcategory) => (
<Link
key={subcategory.id}
href={`/search?categoryId=${category.id}&subcategoryId=${subcategory.id}`}
>
<ListItem title={subcategory.name} />
</Link>
))}
</ul>
{category.brands.length > 0 && (
<ul className="grid gap-3 p-2 md:w-[200px] md:grid-cols-1 lg:w-[300px] lg:grid-cols-2">
{category.brands.map((brandItem) => (
<Link
key={brandItem.brand.id}
href={`/search?categoryId=${brandItem.categoryId}&brandId=${brandItem.brand.id}`}
>
<ListItem title={brandItem.brand.name} />
</Link>
))}
</ul>
)}
</div>
);
};
const ListItem = React.forwardRef<
React.ElementRef<"a">,
React.ComponentPropsWithoutRef<"a">
>(({ className, title, children, ...props }, ref) => {
return (
<li>
<NavigationMenuLink asChild>
<span
ref={ref}
className={cn(
"block select-none space-y-1 rounded-md p-3 leading-none no-underline outline-none transition-colors hover:bg-accent hover:text-accent-foreground focus:bg-accent focus:text-accent-foreground",
className
)}
{...props}
>
<div className="text-sm font-medium leading-none">{title}</div>
<p className="line-clamp-2 text-sm leading-snug text-muted-foreground">
{children}
</p>
</span>
</NavigationMenuLink>
</li>
);
});
ListItem.displayName = "ListItem";
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Firefox, Chrome, Brave
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,751,284,870 | godot | "Tool button action "null::null" is an invalid callable" when using @export_tool_button | ### Tested versions
Reproducible in v4.4.dev3.official [f4af8201b] and v4.4.dev6.official [1f47e4c4e]. These are the only ones I tested.
### System information
Godot v4.4.dev6 - Windows 10.0.22631 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 SUPER (NVIDIA; 32.0.15.6590) - Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz (8 threads)
### Issue description
(I would leave a video but github didn't accept it)
I made an `@export_tool_button` in one of my scripts. It worked perfectly fine. Then I made another scene and tried to use it there. That gave me two issues.
Issue one was `The value of property "insertVarHere" is Nil, but Callable was expected.` (#97834), and the issue I am reporting now, which is `Tool button action "null::null" is an invalid callable`.
In the Steps to reproduce section I mention that reloading the project worked, but that only worked in the MRP, not in my main project.
### Steps to reproduce
To get the error `Tool button action "null::null" is an invalid callable`, comment out the `= funky` and restart the project.
Then uncomment it and press the button (a sketch of the kind of script involved is shown below).
To fix it, reload the project while everything is uncommented.
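A minimal sketch of the kind of script involved (reconstructed from the description above, not taken from the attached MRP; names are placeholders):
```
@tool
extends Node
# Commenting out "= funky", restarting, then uncommenting it again and
# pressing the button triggers the reported error.
@export_tool_button("Run funky") var run_funky = funky
func funky() -> void:
	print("funky pressed")
```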
### Minimal reproduction project (MRP)
[toolbuttonbug.zip](https://github.com/user-attachments/files/18202509/toolbuttonbug.zip)
| bug,topic:core | low | Critical |
2,751,302,609 | tauri | [bug] dmg background DPI issues | ### Describe the bug
Currently in order to get a custom background working, you must make your image pixel dimensions match the window dimensions. However, this doesn't take DPI into account.
The default size is 660x400. I made an image that size, and it worked. However the image looks bad since it's not displaying at the display's higher DPI: that image is being stretched over an area of 1320x800 physical pixels.
It would be nice if you could give an image with the correct aspect ratio at 2x or 3x of the window dimensions, so it can look more crisp. If you try that now, it will just cut off most of the image. I think making the window non-resizable and having the background image stretch to the window size would help.
It would also be nice if the docs told you what the default window dimensions and recommended image size were.
### Reproduction
1) Create a `dmg` high res background image with 1320x800 pixels.
2) Create an `dmg` bundle with the image:
```
"dmg": {
"background": "../../app-icons/DmgBackground.png",
"windowSize": {
"height": 400,
"width": 660
}
}
```
3) Observe the created window cuts off the high-res image and shows it as too large and blurry.
### Expected behavior
The image is not cut off and is displayed crisply on high-DPI displays.
### Full `tauri info` output
```text
[โ] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
โ Xcode Command Line Tools: installed
โ rustc: 1.82.0 (f6e511eec 2024-10-15)
โ cargo: 1.82.0 (8f40fc59f 2024-08-21)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.8.0
- pnpm: 9.1.4
- npm: 10.8.2
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.0
- tao ๐ฆ: 0.30.8
- @tauri-apps/api ๎: not installed!
- @tauri-apps/cli ๎: 2.1.0
[-] Plugins
- tauri-plugin-single-instance ๐ฆ: 2.0.1
- @tauri-apps/plugin-single-instance ๎: not installed!
- tauri-plugin-window-state ๐ฆ: 2.0.2
- @tauri-apps/plugin-window-state ๎: not installed!
- tauri-plugin-shell ๐ฆ: 2.0.2
- @tauri-apps/plugin-shell ๎: not installed!
- tauri-plugin-os ๐ฆ: 2.0.1
- @tauri-apps/plugin-os ๎: not installed!
- tauri-plugin-deep-link ๐ฆ: 2.0.1
- @tauri-apps/plugin-deep-link ๎: not installed!
- tauri-plugin-notification ๐ฆ: 2.2.0
- @tauri-apps/plugin-notification ๎: not installed!
- tauri-plugin-log ๐ฆ: 2.0.2
- @tauri-apps/plugin-log ๎: not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:5173/
- bundler: Rollup
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,751,351,946 | kubernetes | MutatingAdmissionPolicy mutation ordering issue | ### What happened?
This works:
```
- patchType: "JSONPatch"
jsonPatch:
expression: >
[
JSONPatch{
op: "add", path: "/spec/initContainers",
value: []
},
JSONPatch{
op: "add", path: "/spec/initContainers/-",
value: Object.spec.initContainers{
name: "mesh-proxy",
image: "mesh-proxy/v1.0.0",
restartPolicy: "Always"
}
}
]
```
But, this fails for a pod:
```
- patchType: "JSONPatch"
jsonPatch:
expression: >
[
JSONPatch{
op: "add", path: "/spec/initContainers",
value: []
}
]
- patchType: "JSONPatch"
jsonPatch:
expression: >
[
JSONPatch{
op: "add", path: "/spec/initContainers/-",
value: Object.spec.initContainers{
name: "mesh-proxy",
image: "mesh-proxy/v1.0.0",
restartPolicy: "Always"
}
}
]
```
with:
```
denied request: JSON Patch: add operation does not apply: doc is missing path: "/spec/initContainers/-": missing value
```
It seems like the ordering is wrong, or the output of one mutation isn't fed into the next.
### What did you expect to happen?
Both should work
### How can we reproduce it (as minimally and precisely as possible)?
Try chaining the mutations together in the same policy object, with each one relying on the previous, as above.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.32.0
```
</details>
### Cloud provider
NA
### OS version
minikube
### Install tools
minikube
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Major |
2,751,363,319 | godot | Insane loading time on mobile devices (espesially on poor performance devices) | ### Tested versions
You can probably reproduce it in a lot of versions; I used 4.3.
### System information
(Samsung) Wear OS (One UI version) 6.0 Watch; System version: 14; Wear OS version 5.0; Tested: (Vulkan Mobile, Compatibility)
### Issue description
I have a big problem with games and apps made in Godot: their loading time is insane. My 3D game loads in around 17 seconds.
I tried to make a blank app with text that changes, and it still takes around 9 seconds to load. My smartwatch isn't that bad: it can run that 3D game at around 46 fps (not an accurate measurement), and it has around 1.5 GB of RAM. It's worth noting that this nearly empty app took 9 seconds to load. Why does it take so long for a blank app to open?
Other games and apps open in a few seconds.
There should be an option, such as an export template, that strips everything not needed on those devices, plus options for very-low-performance devices.
### Steps to reproduce
1. Make a new project
2. Build Android APK
3. Transfer it to Smartwatch, (Via ADB debugging)
4. Done
### Minimal reproduction project (MRP)
N/A; it works like that in every project | enhancement,topic:core,topic:porting,needs testing,performance | low | Critical |
2,751,382,416 | transformers | A warning message showing that `MultiScaleDeformableAttention.so` is not found in `/root/.cache/torch_extensions` if `ninja` is installed with `transformers` | ### System Info
* `transformers`: `4.47.1`
* `torch`: `2.5.1`
* `timm`: `1.0.12`
* `ninja`: `1.11.1.3`
* `python`: `3.10.14`
* `pip`: `23.0.1`
* CUDA runtime installed by `torch`: `nvidia-cuda-runtime-cu12==12.4.127`
* OS (in container): Debian GNU/Linux 12 (bookworm)
* OS (native device): Windows 11 Enterprise 23H2 (`10.0.22631 Build 22631`)
* Docker version: `27.3.1, build ce12230`
* NVIDIA Driver: `565.57.02`
### Who can help?
I am asking help for [`DeformableDetrModel`](https://huggingface.co/docs/transformers/v4.47.1/en/model_doc/deformable_detr#transformers.DeformableDetrModel)
vision models: @amyeroberts, @qubvel
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Start a new docker container by
```sh
docker run --gpus all -it --rm --shm-size=1g python:3.10-slim bash
```
2. Install dependencies
```sh
pip install transformers[torch] requests pillow timm
```
3. Run the following script (copied from [the document](https://huggingface.co/docs/transformers/v4.47.1/en/model_doc/deformable_detr#transformers.DeformableDetrModel.forward.example)), it works fine and does not show any message.
```python
from transformers import AutoImageProcessor, DeformableDetrModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrModel.from_pretrained("SenseTime/deformable-detr")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
```
4. Install ninja:
```sh
pip install ninja
```
5. Run [the same script](https://huggingface.co/docs/transformers/v4.47.1/en/model_doc/deformable_detr#transformers.DeformableDetrModel.forward.example) again, this time, the following warning messages will show
```text
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
warnings.warn(WRONG_COMPILER_WARNING.format(
Could not load the custom kernel for multi-scale deformable attention: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
Could not load the custom kernel for multi-scale deformable attention: /root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/MultiScaleDeformableAttention.so: cannot open shared object file: No such file or directory
```
Certainly, `/root/.cache/torch_extensions/py310_cu124/MultiScaleDeformableAttention/` is empty.
The issue happens only when both `ninja` and `transformers` are installed. I believe that the following issue may be related to this issue:
https://app.semanticdiff.com/gh/huggingface/transformers/pull/32834/overview
### Expected behavior
It seems that ninja makes `DeformableDetrModel` emit unexpected error messages (even though the script still works). That may be because I am using a container without any compiler or CUDA toolkit preinstalled (the CUDA runtime is installed by `pip`).
I think there should be a check that automatically turns off the `ninja`-related functionality, even if `ninja` is installed by `pip`, whenever requirements like the compiler version or CUDA path are not fulfilled.
| bug | low | Critical |
2,751,385,480 | vscode | Explore switching back to absolute imports in core | As part of the ESM migration, we switched all of the imports to relative paths. This makes it more difficult to know where a file is coming from and also makes it harder to move code around. I'd like to see if we can switch back to absolute paths instead.
In https://github.com/microsoft/vscode/pull/236640 I'm switching us to use `nodenext` for module resolution to fix a few other issues (such as no errors on extensionless imports). This will also let us use import path mapping.
For this, we just need to add this to the `package.json`:
```json
"imports": {
"#vs/*.js": "./src/vs/*.js"
},
```
Then we can write `import { ILogService } from '#vs/platform/log/common/log.js'` and TS will be able to resolve the right file
We still need to figure out how to get our loader to understand these paths.
| debt | low | Critical |
2,751,394,641 | next.js | `use cache` + `cacheLife` unexpectedly requires Suspense boundary | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/dawn-voice-vrxzm9
### To Reproduce
1. Visit the reproduction app
2. Click the `cacheLife("minutes")` link. Notice that it works and does not cause errors.
3. Go back, and click the `cacheLife("seconds")` link. Notice the following server-side error:
> [ Server ] Error: Route "/seconds": A component accessed data, headers, params, searchParams, or a short-lived cache without a Suspense boundary nor a "use cache" above it. We don't have the exact line number added to error messages yet but you can see which component in the stack below. See more info: https://nextjs.org/docs/messages/next-prerender-missing-suspense
4. Go back, and click the `cacheLife({ expire: 299 })` link. Notice the following server-side error:
> [ Server ] Error: Route "/expire299": A component accessed data, headers, params, searchParams, or a short-lived cache without a Suspense boundary nor a "use cache" above it. We don't have the exact line number added to error messages yet but you can see which component in the stack below. See more info: https://nextjs.org/docs/messages/next-prerender-missing-suspense
### Current vs. Expected behavior
The current behavior demands a Suspense boundary when the developer changes a `cacheLife("minutes")` call to a `cacheLife("seconds")` call. I expect such a change to be local and isolated, and to not require any other changes to the app.
Likewise for changing a `cacheLife({ expire: 300 })` call to `cacheLife({ expire: 299 })` (or, generally, from >= 300 to < 300).
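For context, a minimal sketch (my own, not the exact CodeSandbox reproduction; the `unstable_cacheLife` import reflects my understanding of the current canary API) of the pattern being changed:
```tsx
// app/seconds/page.tsx
import { unstable_cacheLife as cacheLife } from "next/cache";
async function getData() {
  "use cache";
  cacheLife("seconds"); // switching this from "minutes" is what triggers the error
  return Date.now();
}
export default async function Page() {
  const data = await getData();
  return <p>{data}</p>;
}
```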
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.1.1-canary.13 // Latest available version is detected (15.1.1-canary.13).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
The cause of this behavior is this code:
https://github.com/vercel/next.js/blob/4fda39ce1069efabd2c7a68da3963cf1840572e7/packages/next/src/server/use-cache/use-cache-wrapper.ts#L645-L658
I discussed this code a bit in https://github.com/vercel/next.js/pull/72145#discussion_r1826169507. I understand the intention behind the code, however, it results in poor DX. It's as if I, the app developer, have entered into an agreement with Next.js, wherein I add `'use cache'` + revalidation as appropriate, and in exchange I get improved performance. But then Next.js suddenly decides to do extra work that I didn't agree to, and demands more payment in return (a Suspense boundary).
To be clear: I _do not_ think the solution is documentation. Even if this behavior were documented, it would be poor DX. | dynamicIO | low | Critical |
2,751,461,286 | flutter | Color deprecations should have a dart fix | Deprecations from https://github.com/flutter/engine/pull/54737 should have a dart fix.
I noticed user-created content out in the wild trying to help folks migrate:

Dart fix has not been enabled for `dart:ui` and others, following up in https://github.com/dart-lang/sdk/issues/59764
WIP PR to add this support in https://github.com/flutter/flutter/pull/160616
For reference, the migration guide: https://docs.flutter.dev/release/breaking-changes/wide-gamut-framework | engine,c: proposal,P3,team-engine,triaged-engine | low | Minor |
2,751,504,722 | flutter | Unable To Run Flutter App With Apple Watch Extension After Upgrading 3.27.0 | ### Steps to reproduce
My app has an Apple Watch extension and was working without any problem before upgrading to Flutter 3.27.
But after the upgrade I cannot run my app from VS Code, although I can run both the app itself and the Apple Watch extension from Xcode without any problem.
I know you will ask for a reproducible example, but I can only provide what I have for now. I will try to create a new project as soon as possible.
### Expected results
I expect to run without any problem
### Actual results
Build fails on VSCode
### Code sample
<details open><summary>Code sample</summary>
```dart
// no code available
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on iPhone 16 Pro in debug mode...
Xcode build done. 40.4s
Failed to build iOS app
Swift Compiler Error (Xcode): 'StateObject' is only available in iOS 14.0 or newer
/Users/bahadirarslan/Development/Flutter/myproject/ios/MyProject%20Watch%20App/MainView.swift:10:5
Swift Compiler Error (Xcode): 'main()' is only available in iOS 14.0 or newer
/Users/bahadirarslan/Development/Flutter/myproject/ios/MyProject%20Watch%20App/MyProjectApp.swift:9:0
Swift Compiler Error (Xcode): 'Scene' is only available in iOS 14.0 or newer
/Users/bahadirarslan/Development/Flutter/myproject/ios/MyProject%20Watch%20App/MyProjectApp.swift:12:19
Swift Compiler Error (Xcode): Type 'WatchViewModel' does not conform to protocol 'WCSessionDelegate'
/Users/bahadirarslan/Development/Flutter/myproject/ios/MyProject%20Watch%20App/ViewModel/WatchViewModel.swift:79:0
Uncategorized (Xcode): Command SwiftCompile failed with a nonzero exit code
Could not build the application for the simulator.
Error launching application on iPhone 16 Pro.
Exited (1).
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.1 24B83 darwin-arm64, locale en-TR)
• Flutter version 3.27.0 on channel stable at /Users/bahadirarslan/Development/SDKS/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8495dee1fd (9 days ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/bahadirarslan/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• iPhone 16 Pro (mobile) • B5611E87-5F6A-4178-ABAD-7B837493DE55 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-2 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1 24B83 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1 24B83 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
! Error: Browsing on the local area network for Bahadır's iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for Bahadir Arslan's iPad. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for Bahadır's iPad. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-ios,tool,P2,team-ios,triaged-ios,fyi-tool | low | Critical |
2,751,508,425 | kubernetes | DRA: Using All allocation mode will schedule to nodes with zero devices | ### What happened?
I created a resource claim template to get "All" GPUs on a node:
```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: all-gpus
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com
        allocationMode: All
```
I then created a deployment that had a Pod that used that claim. The Pod was scheduled to a node. However, my DRA driver on that node was not running, so there were no resource slices for that node.
### What did you expect to happen?
I expected the pod to not schedule, since there were no available devices meeting the request. "All" should mean "at least one".
### How can we reproduce it (as minimally and precisely as possible)?
Create the resource claim template as shown and a deployment, with no DRA driver running. The pod will still schedule.
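For illustration, a deployment along the lines of the sketch below should trigger it. This is a hypothetical manifest, not the one from the original setup; the names and image are placeholders, and the `resourceClaims`/`resources.claims` field shapes are assumed from the v1.32 DRA API.
```yaml
# Hypothetical deployment referencing the "all-gpus" claim template above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: all-gpus-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: all-gpus-test
  template:
    metadata:
      labels:
        app: all-gpus-test
    spec:
      resourceClaims:
      - name: gpu
        resourceClaimTemplateName: all-gpus
      containers:
      - name: ctr
        image: registry.k8s.io/pause:3.9
        resources:
          claims:
          - name: gpu
```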
### Anything else we need to know?
/wg device-management
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.32.0-gke.1358000
```
</details>
### Cloud provider
<details>
GKE
</details>
### OS version
<details>
```console
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux rodete"
NAME="Debian GNU/Linux rodete"
VERSION_CODENAME=rodete
ID=debian
HOME_URL="https://go/glinux"
SUPPORT_URL="https://go/techstop"
BUG_REPORT_URL="https://go/techstop"
$ uname -a
Linux jbelamaric.c.googlers.com 6.10.11-1rodete2-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.10.11-1rodete2 (2024-10-16) x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,priority/important-soon,triage/accepted,wg/device-management | low | Critical |
2,751,510,101 | kubernetes | Use test-only API types for tests that depend on API graduations | https://github.com/kubernetes/kubernetes/pull/128279#discussion_r1890549751 and https://github.com/kubernetes/kubernetes/pull/128279/files?diff=unified&w=1#r1882520074 broke when bumping the master branch to 1.33 due to tests that depend on buildin APIs that meet specific graduation criteria of what is being tested, and stop working once the minor version of kubernetes increased to the point that the criteria falls out of our support/emulation windows.
We should instead modify these tests to depend on a test-only API that will stay valid forever, and don't depend on the k8s minor version.
This should be feasible to do. Here's an existing test that has some test-only APIs that we could crib from:
https://github.com/kubernetes/kubernetes/blob/9274a584b8a30262f05694f773a2523e4ec52920/staging/src/k8s.io/apiserver/pkg/server/genericapiserver_test.go#L184-L212
/sig api-machinery
/cc @Jefftree @siyuanfoundation @liggitt | sig/api-machinery,triage/accepted | low | Minor |
2,751,525,190 | langchain | DOC: Documentation for testing documentation changes does not work | ### URL
https://python.langchain.com/docs/contributing/how_to/documentation/setup/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The command:
> poetry install --with lint,docs --no-root
produces the output:
> Group(s) not found: docs (via --with)
### Idea or request for content:
_No response_ | ๐ค:docs | low | Minor |
2,751,534,284 | PowerToys | Option to add pixel values custom_layouts.json | ### Description of the new feature / enhancement
To add entries in custom-layouts.json for pixel sizes
### Scenario when this would be used?
For a user to manually edit a layout using measurements in pixels as well as, or instead of, percentages.
In the layout editor the sizes are shown in pixels, but in the JSON they are stored in "rows-percentage".

But even those percentages don't make sense at first glance, because the values are in the thousands.
Is this a workaround to avoid retrieving resolution information from the display? (See the conversion sketch after the JSON below.)
```
"info": {
"rows": 2,
"columns": 3,
"rows-percentage": [
5000,
5000
],
"columns-percentage": [
5000,
2500,
2500
],
"cell-child-map": [
[
0,
2,
3
],
[
1,
2,
3
]
],
"show-spacing": true,
"spacing": 0,
"sensitivity-radius": 20
}
```
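The rows-percentage and columns-percentage values above each sum to 10000, so they look like fractions of the work area expressed in 1/10000ths rather than raw pixels. Assuming that interpretation (it is an assumption, not something confirmed by the editor), a rough conversion back to pixels would look like this:
```python
# Hypothetical conversion, assuming each value is 1/10000th of the work area
# (the rows and columns values above both sum to 10000).
def to_pixels(fractions, total_pixels):
    return [round(total_pixels * f / 10000) for f in fractions]

# Example with an assumed 2560x1440 work area:
print(to_pixels([5000, 5000], 1440))         # rows    -> [720, 720]
print(to_pixels([5000, 2500, 2500], 2560))   # columns -> [1280, 640, 640]
```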
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,751,537,518 | kubernetes | [Compatibility Version]Improve hack/verify-featuregates.sh script | ### What would you like to be added?
Improve the code in https://github.com/kubernetes/kubernetes/tree/master/test/featuregates_linter
1. since all the feature gates have been migrated to the versioned feature gate, we can remove the code to check no new feature is added to the unversioned feature gate. And prevent the usage of the unversioned `DefaultMutableFeatureGate.Add()` with a golangci-lint (https://github.com/kubernetes/kubernetes/issues/126893)
2. we need to add checks to verify that a feature can only be removed after being locked for at least 3 releases (a small illustrative sketch of this check follows the list).
   * We support EmulationVersion down to N-3. So if a feature is locked starting at N, the feature gate could still be set with `BinaryVersion=N+1, EmulationVersion=N-1 or N-2`, and with `BinaryVersion=N+2, EmulationVersion=N-1`. So the feature gate can only be removed starting with `BinaryVersion=N+3`.
3. rename the go script to something more appropriate like featuregate-tracker rather than linter, because it is not really a linter.
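The removal-window check from point 2 could look roughly like the sketch below. This is illustrative Python only, not the actual featuregates_linter code (which is written in Go); the three-release window is hard-coded here as an assumption.
```python
# Illustrative only -- not the real verify-featuregates.sh / featuregates_linter logic.
# A feature gate locked (to its default) in minor release N may only be removed in
# N+3 or later, because emulated versions can reach back three minor releases.
def can_remove_feature(locked_at_minor: int, binary_minor: int) -> bool:
    return binary_minor >= locked_at_minor + 3

assert not can_remove_feature(locked_at_minor=32, binary_minor=34)  # still settable via emulation
assert can_remove_feature(locked_at_minor=32, binary_minor=35)
```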
### Why is this needed?
To improve code health and developer experience wrt Compatibility Version. | kind/feature,needs-sig,needs-triage | low | Minor |
2,751,551,165 | vscode | getting different results depending on prefix | These should yield the same suggestions
<img width="567" alt="Image" src="https://github.com/user-attachments/assets/518f3c17-522b-4657-9967-729ddb3b4883" />
<img width="634" alt="Image" src="https://github.com/user-attachments/assets/a96cb086-30d9-4ed6-8ec2-21b64aee550c" />
Note that this has always been the case, see behavior in pwsh in VS Code stable 1.95
<img width="720" alt="Image" src="https://github.com/user-attachments/assets/16196f25-1611-40fe-8805-39e6bb082e42" />
<img width="495" alt="Image" src="https://github.com/user-attachments/assets/f824f735-7c0f-42c0-bbd5-ece87d99da12" />
| bug,terminal-suggest | low | Minor |
2,751,562,166 | flutter | RawAutocomplete options don't relayout if the field layout algorithm changes the field's width | ### Steps to reproduce
1. Wrap a `RawAutocomplete` in an `Align` with `alignment: .centerLeft`.
2. Wrap the field in `RawAutocomplete.fieldViewBuilder` in an `AnimatedBuilder` to animate the field width.
### Expected results
The options width should always match the field width, and so should grow/shrink in sync with the field.
### Actual results
The options width doesn't grow/shrink while the field width animates.
### Code sample
<details><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
/// Flutter code sample for [Autocomplete].
void main() => runApp(const AutocompleteExampleApp());
class AutocompleteExampleApp extends StatelessWidget {
const AutocompleteExampleApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Autocomplete Basic'),
),
body: CustomScrollView(
slivers: [
const SliverToBoxAdapter(child: SizedBox(height: 1000)),
SliverToBoxAdapter(
child: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
'Type below to autocomplete the following possible results: ${AutocompleteBasicExample._kOptions}.'),
const Padding(
padding: EdgeInsets.symmetric(horizontal: 32.0),
child: AutocompleteBasicExample(),
),
],
),
),
),
const SliverToBoxAdapter(child: SizedBox(height: 1000)),
],
),
),
);
}
}
class AutocompleteBasicExample extends StatefulWidget {
const AutocompleteBasicExample({super.key});
static const List<String> _kOptions = <String>[
'aardvark',
'bobcat',
'chameleon',
];
@override
State<AutocompleteBasicExample> createState() =>
_AutocompleteBasicExampleState();
}
class _AutocompleteBasicExampleState extends State<AutocompleteBasicExample>
with TickerProviderStateMixin {
late final AnimationController _controller = AnimationController(
duration: const Duration(seconds: 10),
vsync: this,
)..repeat();
TextEditingController textEditingController = TextEditingController();
final FocusNode focusNode = FocusNode();
bool disposedController = false;
@override
void dispose() {
_controller.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Align(
alignment: Alignment.centerLeft,
child: RawAutocomplete<String>(
focusNode: focusNode,
textEditingController: textEditingController,
fieldViewBuilder: (BuildContext buildContext,
TextEditingController controller,
FocusNode focusNode,
VoidCallback onSubmitted) {
return AnimatedBuilder(
animation: _controller,
child: TextField(
controller: controller,
focusNode: focusNode,
onSubmitted: (String value) => onSubmitted(),
),
builder: (BuildContext context, Widget? child) {
return SizedBox(
width: 500 * _controller.value,
child: child,
);
},
);
},
optionsViewBuilder: (BuildContext context,
AutocompleteOnSelected<String> onSelected,
Iterable<String> options) {
return Align(
alignment: Alignment.topLeft,
child: Material(
elevation: 4.0,
child: SizedBox(
height: 200.0,
child: ListView.builder(
padding: const EdgeInsets.all(8.0),
itemCount: options.length,
itemBuilder: (BuildContext context, int index) {
final String option = options.elementAt(index);
return GestureDetector(
onTap: () {
onSelected(option);
},
child: ListTile(
title: Text(option),
),
);
},
),
),
),
);
},
optionsBuilder: (TextEditingValue textEditingValue) {
if (textEditingValue.text == '') {
return const Iterable<String>.empty();
}
return AutocompleteBasicExample._kOptions.where((String option) {
return option.contains(textEditingValue.text.toLowerCase());
});
},
onSelected: (String selection) {
debugPrint('You just selected $selection');
},
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/97f9fa11-fbf8-43ae-b206-874d008665f1
</details>
### Logs
_No response_
### Flutter Doctor output
<details><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel [user-branch], 3.27.0-1.0.pre.149, on macOS 15.1.1 24B91 darwin-arm64,
locale en)
! Flutter version 3.27.0-1.0.pre.149 on channel [user-branch] at
/Users/victorsanni/development/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official
channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at
https://flutter.dev/setup.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[โ] Xcode - develop for iOS and macOS (Xcode 16.1)
[โ] Chrome - develop for the web
[โ] Android Studio (version 2023.3)
[โ] VS Code (version 1.88.1)
[โ] Connected device (4 available)
! Error: Browsing on the local area network for Victorโs iPhone. Ensure the device is
unlocked and attached with a cable or associated with the same local area network as
this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
```
</details>
| framework,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Critical |
2,751,562,918 | godot | SpinBox `update_on_text_changed = true` regression: Can't input decimals. | ### Tested versions
- Reproducible in: v4.3.stable, v4.4.dev6 [1f47e4c4e]
- Not reproducible in: v4.2.stable
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz (8 Threads)
### Issue description
Setting `Update on Text Changed` to true prevents decimals from being input. It seems to be similar to what happened with #81989. You can copy and paste decimal numbers into the SpinBox just fine, but trying to type a decimal will do nothing. Changing `step` to a decimal value < 1 still doesn't allow a decimal to be typed. This is true for both GDscript and C# in 4.3 & 4.4-dev6, but not in 4.2 (as #81989 fixed it in that version)
### Steps to reproduce
1. Create a new scene
2. Add a SpinBox node
3. Set `Update on Text Changed` to true
4. Run the current scene
5. Type decimal (.) into the SpinBox

### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Minor |
2,751,589,044 | flutter | Cloning from `main` is broken | Cloning the `main` branch of Flutter now produces a tree that doesn't work:
```
$ cd $(mktemp -d)
$ git clone --depth=1000 -b main https://github.com/flutter/flutter
Cloning into 'flutter'...
remote: Enumerating objects: 45912, done.
remote: Counting objects: 100% (45912/45912), done.
remote: Compressing objects: 100% (22762/22762), done.
remote: Total 45912 (delta 25470), reused 36603 (delta 19754), pack-reused 0 (from 0)
Receiving objects: 100% (45912/45912), 42.65 MiB | 23.93 MiB/s, done.
Resolving deltas: 100% (25470/25470), done.
$ flutter/bin/flutter --version
fatal: Not a valid object name upstream/master
```
This is currently breaking Zulip's CI:
* https://github.com/zulip/zulip-flutter/issues/1177
because we clone the repo at the `main` branch.
The issue looks similar to #160558, and I expect the fix will involve the same pair of scripts that were edited in #160574 to fix that issue.
In the Flutter repo `main` always points to the same commit as `master`, and this is the first time Zulip has run into any sort of incompatibility by just always using `main` as the name. So it'd be good to fix this issue and return to the state where `main` works equally well to `master`.
(In the meantime we have a workaround: https://github.com/zulip/zulip-flutter/pull/1186, adding a step `git update-ref refs/remotes/origin/master origin/main` after clone.)
| team-infra,P2,triaged-infra,monorepo | low | Critical |
2,751,589,322 | yt-dlp | Add Support for video.infosec.exchange | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Europe
### Example URLs
https://video.infosec.exchange/w/p/5044b454-2043-485c-8832-eee872a0251b
### Provide a description that is worded well enough to be understood
Add support for video.infosec.exchange please.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ ~/yt-dlp -vU https://video.infosec.exchange/w/p/5044b454-2043-485c-8832-eee872a0251b
[debug] Command-line config: ['-vU', 'https://video.infosec.exchange/w/p/5044b454-2043-485c-8832-eee872a0251b']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp (linux_exe)
[debug] Python 3.11.11 (CPython x86_64 64bit) - Linux-6.1.0-28-amd64-x86_64-with (OpenSSL 3.1.7 3 Sep 2024)
[debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.44.2, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[generic] Extracting URL: https://video.infosec.exchange/w/p/5044b454-2043-485c-8832-eee872a0251b
[generic] 5044b454-2043-485c-8832-eee872a0251b: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 5044b454-2043-485c-8832-eee872a0251b: Extracting information
[debug] Looking for embeds
[debug] Identified a twitter:player iframe
[generic] Extracting URL: https://video.infosec.exchange/video-playlists/embed/5044b454-2043-485c-8832-eee872a0251b
[generic] 5044b454-2043-485c-8832-eee872a0251b: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 5044b454-2043-485c-8832-eee872a0251b: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://video.infosec.exchange/video-playlists/embed/5044b454-2043-485c-8832-eee872a0251b
Traceback (most recent call last):
File "yt_dlp/YoutubeDL.py", line 1624, in wrapper
File "yt_dlp/YoutubeDL.py", line 1759, in __extract_info
File "yt_dlp/extractor/common.py", line 742, in extract
File "yt_dlp/extractor/generic.py", line 2553, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://video.infosec.exchange/video-playlists/embed/5044b454-2043-485c-8832-eee872a0251b
```
| site-request,triage | low | Critical |
2,751,610,862 | flutter | Monorepo Progress Updates: Day N+2 | Hello Flutter Community!
**We will use this issue to communicate further progress.**
**Day 1** saw the merging of three repositories and then some fast-follow updates to kick the tires on presubmits and the Merge Queue building production artifacts.
**Day 2** was landing quick updates to re-add some missing files and make our update scripts aware of [multiple origins](https://github.com/flutter/flutter/issues/160558). It was rough going through the MQ as our PR, which passed all pre-submits, was blocked by infrastructure failures. We were sure our changes would work and so we guided the PR through CI till it landed.
**Day N+2**: You are here
* We've fixed some scheduling errors on the dashboard - backfilling appears to be working.
* We've landed our format-the-world PRs
* **The tree is open.**
If you have a PR that was created **before the merge**, you will run into some problems. We believe these are related to the PR structure on GitHub and are investigating. The workaround is:
* Create a new PR after the monorepo merge
* Run dart format over your changed files
* Now's a good time to turn on "auto format on save" in your editor of choice.
If you have any problems with presubmit tests, the merge queue, or generally about the monorepo; please [file an issue and use the <monorepo> label](https://github.com/flutter/flutter/issues/new?labels=monorepo). | monorepo | medium | Critical |
2,751,620,733 | transformers | SinkCache (StreamLLM) implemented over Post-RoPE Key cache might result in confused position for inference | ### System Info
The current implementation of SinkCache may produce confused positions for the attention computation.
1. The HF Key cache stores keys post-RoPE.
2. After pre-filling over a sequence of length N:
   - Post-RoPE Key cache of length N, with positions [0, 1, 2, …, N-1].
3. During the first token generation (the current query & key position is N, since the KV cache size is N), StreamLLM updates the cache as follows (Sink_size = S, Recent_window = R):
   - Initial post-RoPE Key cache: [0, 1, …, S-1] + [N-R+1, …, N-1] + [N]; note len([N-R+1, …, N-1]) = R-1.
   - Rotate: HF applies a rotation to the R-1 keys at positions (N-R+1, …, N-1) so that their positions become (S, S+1, …, S+R-2), and keeps these in the StreamLLM KV cache.
   - Updated StreamLLM Key cache positions: [0, 1, …, S-1] + [S, …, S+R-2] + [N]; that is, the last, (S+R)-th, element keeps its actual position N, since len([S, …, S+R-2]) = R-1.
4. Continue with the next token prediction:
   - The current query and key position depend on the StreamLLM KV cache size = S+R.
   - StreamLLM Key cache position update:
     - Initial: [0, 1, …, S-1] + [S+1, …, S+R-2, N] + [S+R] (note: N is the position of the (S+R-1)-th element).
     - Rotate (all kept positions minus 1): [0, 1, …, S-1] + [S, …, S+R-3, N-1] + [S+R]; position S+R is not involved in the rotation (please refer to `https://github.com/huggingface/transformers/blob/main/src/transformers/cache_utils.py`, SinkCache, row: 1038), and len([S, …, S+R-3, N-1]) = R-1.
Now we get an S+R Key cache with positions = [0, 1, …, S-1] + [S, …, S+R-3, N-1] + [S+R], while the current query position is S+R. For long-context inference, N-1 >> S+R, which means the query will attend to a key with a future position.
**Note**: all the numbers in [] are the positions used for token generation.
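To make the walkthrough above concrete, here is a toy bookkeeping of the described update rule in Python. It only tracks integer positions (not key tensors), it is not the actual SinkCache code, and S, R and N are arbitrary toy values:
```python
S, R, N = 4, 8, 1000  # toy sink size, recent window, prefill length

positions = list(range(N))  # post-RoPE key positions after prefill

def sinkcache_step(positions, new_pos):
    """One generation step following the update rule described above."""
    sinks = positions[:S]
    kept = positions[-(R - 1):]           # the R-1 most recent keys survive eviction
    shift = kept[0] - S                   # re-rotation moves the kept keys so they start at S
    rerotated = [p - shift for p in kept]
    return sinks + rerotated + [new_pos]  # the newly appended key is not re-rotated

for _ in range(2):  # two generation steps
    positions = sinkcache_step(positions, new_pos=len(positions))

print(positions)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 999, 12] -> a key at 999 vs. query position 12
```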
### Who can help?
@gante @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Post RoPE key cache:
```
https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py (row: 277 --283)
cos, sin = position_embeddings
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
if past_key_value is not None:
    # sin and cos are specific to RoPE models; cache_position needed for the static cache
    cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
    key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
```
Inference stage:
next token position = Key cache length
```
https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L489 (Row: 556 - 563)
if cache_position is None:
    past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
    cache_position = torch.arange(
        past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
    )
if position_ids is None:
    position_ids = cache_position.unsqueeze(0)
```
### Expected behavior
During inference, the query position should be larger than or equal to the key position. | bug | low | Minor |
2,751,637,393 | tauri | [bug] Android 'getting started' guide steps fail with no error message under OSX 15 | ### Describe the bug
Hello, I decided to try out tauri today, but I have not been able to make the android 'getting started' tauri guide work. The android development mode server fails to start with no error output describing what is wrong beyond "failed to assemble APK".
### Reproduction
$ cargo create-tauri-app
(pick typescript/javascript; npm; vanilla; typescript)
$ cd tauri-app
$ npm install
$ npm run tauri android init
> [email protected] tauri
> tauri android init
Generating Android Studio project...
Info "/Users/dthurn/Desktop/tmp/tauri-app/src-tauri" relative to "/Users/dthurn/Desktop/tmp/tauri-app/src-tauri/gen/android/tauri_app" is "../../../"
victory: Project generated successfully!
Make cool apps! ๐ป ๐ ๐
$ npm run tauri android dev
...
Execution failed for task ':app:rustBuildArm64Debug'.
> A problem occurred starting process 'command 'npm''
Failed to assemble APK: command [...]
Full command output here: https://gist.github.com/thurn/2ad2dd7d9a0d1783d1c7b5a00e1962c2
$ echo $JAVA_HOME
/Users/dthurn/Applications/Android Studio.app/Contents/jbr/Contents/Home
$ echo $ANDROID_HOME
/Users/dthurn/Library/Android/sdk
$echo $NDK_HOME
/Users/dthurn/Library/Android/sdk/ndk/28.0.12674087
### Expected behavior
'npm run tauri android dev' should clearly describe why it failed to start.
### Full `tauri info` output
```text
> [email protected] tauri
> tauri info
[โ] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
โ Xcode Command Line Tools: installed
โ rustc: 1.83.0 (90b35a623 2024-11-26)
โ cargo: 1.83.0 (5ffbef321 2024-10-29)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 23.4.0
- npm: 11.0.0
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- tauri-cli ๐ฆ: 1.5.14
- @tauri-apps/api ๎: 2.1.1
- @tauri-apps/cli ๎: 2.1.0
[-] Plugins
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,751,656,054 | vscode | "command" content interpretation broken in tasks.json, possible arg tokenization problem? | After latest update to VSCode Insiders, interpretation of `tasks.json` appears to have changed in a way that breaks how I'm currently using it.
A task containing a command of the form:
```
"env": {
"PYTHON": "tools/vscode/scripts/python.sh",
},
...
"command": "$PYTHON -m tools.vscode.scripts.compdb_service"
```
Results in terminal output of:
```
/bin/bash: $PYTHON -m tools.vscode.scripts.compdb_service: No such file or directory
```
After a bunch of testing, it appears that VSCode changed how "command" is interpreted. Previously it seemed to be treated like `bash -c "$COMMAND"`, whereas now it is executed as `bash "$COMMAND"`.
However attempts to pass args separately via:
```
"args": ["-m", "tools.vscode.scripts.compdb_service"]
```
...resulted in the same failure indicating a deeper problem. It seems to imply it's executing something like `bash "$COMMAND $ARGS"`
Attempts to manually expand env var PYTHON also resulted in same failure.
The exact same `tasks.json` works just fine on the latest normal VSCode.
<!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes (sort of? I ran `open -a /Applications/Visual\ Studio\ Code\ -\ Insiders.app --args --disable-extensions` but the extensions still seem to be running, and I am running Remote Development so if they didn't I really wouldn't be able to test) Attempted biset and it did almost nothing before saying: `Extension Bisect is done but no extension has been identified. This might be a problem with Code - Insiders.`
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.97.0-insider (Universal)
Commit: 225d1ca870a984369bde1a7fcd75f863fc69fee1
- OS Version: MacOS 14.7/Ubuntu
Steps to Reproduce:
1. add task in tasks.json with a command with spaces between args
2. trigger the execution
3. see output say something like "/bin/bash: $PYTHON -m tools.vscode.scripts.compdb_service: No such file or directory"
| bug,tasks | low | Critical |
2,751,732,951 | godot | Adding points to Path3D curve does not respect grid snap | ### Tested versions
v4.4.dev7.official [46c8f8c5c]
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 (NVIDIA; 32.0.15.6603) - AMD Ryzen 7 3700X 8-Core Processor (16 threads)
### Issue description
If you want to place curve points on grid, it's currently impossible. If you have grid snap enabled and add a point, it adds a point off grid in an arbitrary location. If you try to move that point, it snap incrementally, so you can never get it on grid.
### Steps to reproduce
- Make a 3D scene.
- Turn on "Use snap (Y)"
- Add a Path 3D.
- Ctrl+click or use the "add point" tool and click
- Note that the points are not on grid
- Try moving the points and note they do not snap to grid
### Minimal reproduction project (MRP)
N/A. | bug,topic:editor,topic:3d | low | Minor |
2,751,745,113 | godot | PanelContainer does not auto-resize to smaller size when inner RichTextLabel text gets shorter | ### Tested versions
Tested in Godot 4.3.stable (Linux). Also tested in 4.4 Dev 7.
### System information
Godot v4.4.dev7 - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Mon, 09 Dec 2024 11:58:37 +0000 on X11 - X11 display driver, Multi-window, 2 monitors - Vulkan (Forward+) - integrated AMD Radeon Graphics (RADV RENOIR) - AMD Ryzen 7 4800H with Radeon Graphics (16 threads)
### Issue description
A PanelContainer fits its size to a containting RichTextLabel (Fit Content ON, AutoWrap Smart) but does NOT fit its size once the richtextlabel text gets smaller.
Both the PanelContainer and the RichTextLabel have a custom minimum size set (though behaviour is the same without a custom minimum size).
Looking at the remote view, the RichTextLabel has a ScrollBar child even if "scroll active" is set to false (RichtTextLabel) and the scrollbar takes up the whole height of the PanelContainer (not sure if this is relevant).
https://github.com/user-attachments/assets/f7b87601-2aa4-484d-91a8-0a63af3a92ed
Having something like a VBoxContainer between the PanelContainer and the RichTextLabel will properly resize the VBoxContainer but will still not resize the PanelContainer. (Video Example has the following setup: PanelContainer (Grey) -> VBoxContainer -> MarginContainer -> PanelContainer (Red) -> RichTextLabel). Issue persists even without the MarginContainer...
https://github.com/user-attachments/assets/9ff68c0b-fa0c-4fa6-b355-bbc2432cc50c
I have a proper project with a more complex issue (probably related to this one) where upon instantiating a scene with RichTextLabel has wrong dimensions for the first frame after add_child().
### Steps to reproduce
Create a PanelContainer, add child RichTextLabel, set "fit content" to ON.
In-game set RichTextLabel.text to something larger, then set it back to something smaller. The PanelContainer does not resize back.
### Minimal reproduction project (MRP)
Not quite MRP (there is some variants on the issue I did to narrow down the issue): [PanelContainerResize.zip](https://github.com/user-attachments/files/18204838/PanelContainerResize.zip)
Files are now for 4.4 Dev 7 (auto converted) | bug,topic:gui | low | Minor |
2,751,747,341 | neovim | Treesitter crashes on fresh install of Neovim with no other plugins? | ### Problem
NOTE: I am coming from the Treesitter github. They denied that the issue was on their end and suggested I submit the bug on the Neovim github.
I am currently going insane trying to figure out why this is happening. Treesitter seems to indiscriminately crash on any file that has had a parser downloaded for it. Note: I am on Windows 11 and install neovim with winget. I am running neovim in PowerShell 7, but I have the same result in CMD prompt
The last thing I did to test and isolate was the following steps: I uninstalled neovim, deleted the nvim-data folder, removed all plugins except for packer and treesitter. I reinstalled neovim, I loaded my packer config file and ran :PackerSync. All that installed was packer and treesitter.
Unless I'm missing anything that should be a fresh install. I have no other configs loaded except my treesitter config which I will include.
I open Cargo.toml (the file doesn't matter I've tested at least a dozen different files, but the behavior is mostly the same), it opens fine and treesitter auto installs and compiles the tree sitter for *.toml. No crash yet as the parser isn't running.
I close neovim and reopen Cargo.toml. Instant crash, no information is provided.
:checkhealth Treesitter is good
There are no logs so far as I can find.
My attempts to debug with lldb go like so:
nvim ./ #in the same directory as Cargo.toml --- no crash
lldb -p $(nvim pid)
I try to open Cargo.toml from inside of Neovim #Instant freeze or crash
I've gotten two results in lldb
exception code: 0xc0000409
and
User-mode data execution prevention (DEP) violation at location 0x00000000 #I've only gotten this once
I just tried again using main.rs and was able to run lldb before the parser compiled. As the parser was compiling, it threw this error:
Exception 0xc0000005 encountered at address 0x7ffd8ed315d1: Access violation reading location 0x00000200
Idk why this is happening as I've never had issues with Treesitter before.
Here's my tree sitter config:
```
require 'nvim-treesitter.configs'.setup {
  sync_install = false,
  auto_install = true,
  ignore_install = { "lua" },
  highlight = {
    enable = true,
    additional_vim_regex_highlighting = false,
  },
}
```
At this point I'm convinced it must be cpu architecture or OS related as I have the exact same setup on my remote linux and don't have this issue.
### Steps to reproduce
With TOML Files
1. I open Cargo.toml. it opens fine and treesitter auto installs and compiles the tree sitter for *.toml. No crash yet as the parser isn't running.
2. I close neovim and reopen Cargo.toml. Instant crash, no information is provided.
3. Trying to reopen Cargo.toml results in an instant crash with no information
With RS Files
1. I open main.rs. It opens fine and Treesitter begins to automatically download the parser. No issues. While attempting to compile Neovim freezes.
2. Trying to open any Rust file if the parser was pre-installed causes the terminal to freeze.
### Expected behavior
Neovim opens and Treesitter initializes without issue
### Nvim version (nvim -v)
0.10.2
### Vim (not Nvim) behaves the same?
n/a
### Operating system/version
Windows 11 23H2
### Terminal name/version
Powershell 7.5.0
### $TERM environment variable
n/a
### Installation
winget | platform:windows,needs:repro,bug-crash,treesitter | low | Critical |
2,751,768,678 | tauri | [bug] | ### Describe the bug
When I use the latest version of Tauri with a frontend stack of Vue3 + Vite, and utilize the plugin '@tauri-apps/plugin-http', this error occurs.

### Reproduction
_No response_
### Expected behavior

### Full `tauri info` output
```text
[โ] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
โ WebView2: 131.0.2903.99
โ MSVC: Visual Studio ็ๆๅทฅๅ
ท 2022
โ rustc: 1.83.0 (90b35a623 2024-11-26)
โ cargo: 1.83.0 (5ffbef321 2024-10-29)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 18.18.0
- npm: 9.8.1
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- @tauri-apps/api ๎: not installed!
- @tauri-apps/cli ๎: 2.1.0
[-] Plugins
- tauri-plugin-fs ๐ฆ: 2.2.0
- @tauri-apps/plugin-fs ๎: not installed!
- tauri-plugin-http ๐ฆ: 2.2.0
- @tauri-apps/plugin-http ๎: 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,751,772,085 | flutter | [tech debt] Move url_launcher into the Flutter SDK | Let's move `package:url_launcher` into the Flutter SDK. Links are essential parts of building client apps. You can tell by the likes and download counter on [pub](https://pub.dev/packages/url_launcher):
<img width="228" alt="Screenshot 2024-12-19 at 6 07 08โฏPM" src="https://github.com/user-attachments/assets/53eb4b6f-7519-479b-ac85-141734841413" />
Additionally, the Flutter team is slower to react when a fix is needed in the package. For example, https://github.com/flutter/packages/pull/6711 has been waiting for the stable Flutter SDK for months. It is also a separate PR from the engine and framework PRs. The current setup is simply more costly for what would otherwise be a quick one-PR fix. | P2,team-framework,triaged-framework | low | Major |
2,751,775,268 | godot | Path3D Curve Tilt lerped incorrectly | ### Tested versions
v4.4.dev7.official [46c8f8c5c] (happens in earlier builds as well)
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 (NVIDIA; 32.0.15.6603) - AMD Ryzen 7 3700X 8-Core Processor (16 threads)
### Issue description
When using the tilt aspect of a spline, the rotation turns in and out very abruptly, so I graphed the values and noticed that the up vector is lerped directly, when really the angle should be lerped.

This is from a path where the tilt goes from 0 to 180 degrees. The red line is a graph of the up.y. The orange is a graph of the angle. Note how the angle slope jumps up, flattens out, then jumps up again at the end in kind of an S shape.
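A toy calculation (not Godot code) showing why a linearly interpolated up vector produces that steep-flat-steep angle curve for a 0-to-180-degree tilt:
```python
import math

# If up.y is lerped linearly from cos(0) = 1 to cos(pi) = -1 (the red line),
# the implied tilt angle acos(up.y) (the orange line) changes quickly near both
# ends of the path and slowly in the middle, instead of at a constant rate.
for i in range(11):
    t = i / 10
    up_y = 1.0 - 2.0 * t                                        # linear lerp of up.y
    angle = math.degrees(math.acos(max(-1.0, min(1.0, up_y))))  # implied tilt angle
    print(f"t={t:.1f}  up.y={up_y:+.2f}  angle={angle:6.1f} deg")
```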
### Steps to reproduce
Add a Path3D. Add 2 points. Rotate the tilt on the end point. Have something follow the path and note that the rotational speed is inconsistent while traveling the path.
### Minimal reproduction project (MRP)
[test_spline_up_lerp.zip](https://github.com/user-attachments/files/18205147/test_spline_up_lerp.zip)
| bug,topic:3d | low | Minor |
2,751,793,274 | rust | Tracking issue for release notes of #134367: Stabilize `feature(trait_upcasting)` |
This issue tracks the release notes text for #134367.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Language
- [Stabilize `feature(trait_upcasting)`](https://github.com/rust-lang/rust/pull/134367)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
# Trait upcasting
This release includes a long awaited feature โ the ability to upcast trait objects.
If a trait has a [supertrait](https://doc.rust-lang.org/reference/items/traits.html#supertraits) you can coerce trait object of that trait to a trait object of the super trait:
```rust
trait Trait: Supertrait {}
trait Supertrait {}
fn upcast(x: &dyn Trait) -> &dyn Supertrait {
x
}
```
The same would work with any other kind of (smart)-pointer, like `Arc<dyn Trait> -> Arc<dyn Supertrait>` or `*const dyn Trait -> *const dyn Supertrait`.
Note that this adds a new _safety invariant_ to raw pointers โ leaking a raw pointer to a trait object with an invalid vtable for that trait into safe code may lead to undefined behavior, since trait upcasting works on raw pointers and requires a valid vtable.
Trait upcasting may be especially useful with the `Any` trait, as it allows upcasting your trait object to `dyn Any` to call the downcast methods. Whereas before you'd have to write workarounds for this or use external crates.
```rust
use std::any::Any;
trait MyAny: Any {}
impl dyn MyAny {
fn downcast_ref<T>(&self) -> Option<&T> {
(self as &dyn Any).downcast_ref()
}
}
```
You can [learn more about trait upcasting in the Rust reference](https://doc.rust-lang.org/reference/type-coercions.html#unsized-coercions).
````
cc @WaffleLapkin, @compiler-errors -- origin issue/PR authors and assignees for starting to draft text
| T-lang,relnotes,F-trait_upcasting,relnotes-tracking-issue | low | Critical |
2,751,816,874 | next.js | Next15 + Turbo dev cause `instantiated because it was required from module but the module factory is not available.It might have been deleted in an HMR update.` | ### Link to the code that reproduces this issue
https://github.com/arvinxx/antd-next-cssvar
### To Reproduce
Just `git clone` and run `npm run dev`, then open `/`.
### Current vs. Expected behavior
If run using `next dev --turbo`, it will cause the module instantiation error below.
```
Error: Module [project]/node_modules/.pnpm/[email protected][email protected][email protected][email protected]/node_modules/next/dist/compiled/react/jsx-dev-runtime.js [app-client] (ecmascript) was instantiated because it was required from module [project]/app/Theme.tsx [app-client] (ecmascript), but the module factory is not available. It might have been deleted in an HMR update.
at instantiateModule (http://localhost:3000/_next/static/chunks/_26785d._.js:650:15)
at getOrInstantiateModuleFromParent (http://localhost:3000/_next/static/chunks/_26785d._.js:624:12)
at esmImport (http://localhost:3000/_next/static/chunks/_26785d._.js:142:20)
at [project]/app/Theme.tsx [app-client] (ecmascript) (http://localhost:3000/_next/static/chunks/app_c03c0b._.js:67:308)
at http://localhost:3000/_next/static/chunks/_26785d._.js:693:27
at runModuleExecutionHooks (http://localhost:3000/_next/static/chunks/_26785d._.js:738:9)
at instantiateModule (http://localhost:3000/_next/static/chunks/_26785d._.js:691:9)
at getOrInstantiateModuleFromParent (http://localhost:3000/_next/static/chunks/_26785d._.js:624:12)
at commonJsRequire (http://localhost:3000/_next/static/chunks/_26785d._.js:157:20)
at requireModule (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_79b2d4._.js:2676:29)
at initializeModuleChunk (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_79b2d4._.js:3218:25)
at readChunk (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_79b2d4._.js:3123:17)
at react-stack-bottom-frame (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:13468:20)
at beginWork (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:5343:77)
at runWithFiberInDEV (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:631:20)
at performUnitOfWork (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:7955:97)
at workLoopConcurrent (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:7951:58)
at renderRootConcurrent (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:7933:71)
at performWorkOnRoot (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:7565:175)
at performWorkOnRootViaSchedulerTask (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_react-dom_f83a73._.js:8394:9)
at MessagePort.performWorkUntilDeadline (http://localhost:3000/_next/static/chunks/2a6fd_next_dist_compiled_79b2d4._.js:2353:64)
```
But if this project is run in webpack mode (`next dev`), it runs correctly.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 22.12.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.4.0
Relevant Packages:
next: 15.1.2 // Latest available version is detected (15.1.2).
eslint-config-next: 15.1.2
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I have tested 15.1.1-canary.14. It still fails. | Turbopack,linear: turbopack | low | Critical |
2,751,825,866 | react | Bug: React 19 Build Error Report | # React 19 Build Error Report
## React version
I'm using React 19
## Steps To Reproduce
1. First, I cloned the React 19 source code from its official repository using the standard git clone command.
2. After that, I navigated to the cloned directory in the terminal and ran the yarn install command to install all the necessary dependencies for the project.
3. Then, I executed the yarn run build command to start the build process of the React 19 source code.
However, during the build process, an error occurred. The detailed error message is as follows:
<img width="1099" alt="image" src="https://github.com/user-attachments/assets/16ef52c3-286e-45f0-9aad-75ce7e4582a7" />
## Link to code example
Since it's the React 19 source code itself that I'm building, I'm unable to provide a separate external code example link at the moment. The error occurs during the standard build process of the cloned React 19 source code following the steps mentioned above.
## The current behavior
When running the yarn run build command after cloning the code and installing dependencies, the build process fails with the SyntaxError shown above. Specifically, the rollup-plugin-flow-remove-types plugin encounters an issue while parsing the react-dom-client.development.js file, deeming an expression at line 68, column 5 as invalid. This halts the entire build process for React 19, preventing the successful generation of the expected build output.
## The expected behavior
The expected behavior is that when I follow these steps (cloning the code, installing dependencies, and running yarn run build), the build process for React 19 should complete without any errors. The rollup-plugin-flow-remove-types plugin should be able to correctly parse the react-dom-client.development.js file and perform its intended operations smoothly, allowing the build to finish successfully and produce the necessary files for React 19 as intended. | Status: Unconfirmed | low | Critical |
2,751,847,669 | opencv | Null pointer dereference | Hi all,
This is the Qianxin CodeSafe Team; we found a suspicious issue.
Suppose 'd' < 'n_detections'; then the outer loop is entered at
https://github.com/opencv/opencv/blob/6a0affdbcea8c0048fd82264086678acb0907c40/modules/gapi/src/3rdparty/vasot/src/components/ot/mtt/objects_associator.cpp#L138
Suppose 't' < 'n_tracklets'; then the inner loop is entered at
https://github.com/opencv/opencv/blob/6a0affdbcea8c0048fd82264086678acb0907c40/modules/gapi/src/3rdparty/vasot/src/components/ot/mtt/objects_associator.cpp#L140
Assume that the value of tracking_per_class_ is false at
https://github.com/opencv/opencv/blob/6a0affdbcea8c0048fd82264086678acb0907c40/modules/gapi/src/3rdparty/vasot/src/components/ot/mtt/objects_associator.cpp#L141
The null pointer "(tracklets[t]->GetRgbFeatures())" is then dereferenced at
https://github.com/opencv/opencv/blob/6a0affdbcea8c0048fd82264086678acb0907c40/modules/gapi/src/3rdparty/vasot/src/components/ot/mtt/objects_associator.cpp#L146
where GetRgbFeatures() is defined at \opencv-4.9.0\modules\gapi\src\3rdparty\vasot\src\components\ot\mtt:
https://github.com/opencv/opencv/blob/6a0affdbcea8c0048fd82264086678acb0907c40/modules/gapi/src/3rdparty/vasot/src/components/ot/tracklet.cpp#L100
https://github.com/opencv/opencv/blob/6a0affdbcea8c0048fd82264086678acb0907c40/modules/gapi/src/3rdparty/vasot/src/components/ot/tracklet.cpp#L101
Therefore, this results in a null pointer dereference. | category: g-api / gapi | low | Minor |
2,751,852,204 | tauri | [feat] Add `TAURI_SIGNING_PUBLIC_KEY` environment variable | ### Describe the problem
Currently, I cannot specify my public signing key from an environment variable, and I have to hard-code it in `tauri.conf.json`.
### Describe the solution you'd like
I would like to set the public signing key via a `TAURI_SIGNING_PUBLIC_KEY` environment variable.
### Alternatives considered
I have considered dynamically generating a custom `tauri.conf.json` during build, but that seems unnecessarily excessive.
### Additional context
_No response_ | type: feature request | low | Minor |
2,751,946,514 | transformers | Any plans to add AIMv2 in the model? | ### Model description
Is there a plan to add AIMv2 (https://huggingface.co/collections/apple/aimv2-6720fe1558d94c7805f7688c) as a Hugging Face model? AIMv2 showed better performance than SigLIP. I think it would be helpful for the community to have AIMv2 available as a Hugging Face model.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Model implementation: https://github.com/apple/ml-aim
Model weights: https://huggingface.co/collections/apple/aimv2-6720fe1558d94c7805f7688c | New model | low | Major |
2,751,964,152 | PowerToys | [File Management :: Peek] change the DEFAULT activation short cut. | ### Description of the new feature / enhancement
`Ctrl-space` is widely used for switching IMEs (input methods), particularly on Windows; apps should never use such a shortcut as a default.
Making `Ctrl-space` the default is thoughtless and ignorant.
### Scenario when this would be used?
always
### Supporting information
At least let the user decide whether they would like to keep the `Ctrl-space` assignment, or detect the current IME setting and use a fallback shortcut.
If you are going to implement this, please also be aware that some users are used to `ctrl-space` and don't want it changed. A better solution is to change the default for new installations, and ask existing users if they want to keep the `ctrl-space` assignment.
Also, developers could prevent the user from setting `ctrl-space` if they read the IME settings (from the registry or wherever they are stored) and find a conflict. | Needs-Triage | low | Minor |
2,751,979,133 | kubernetes | [Bug] Unexpected scheduling results due to mismatch between the inter-pod affinity rule implementation and the doc | ### What happened?
There is a special rule in the scheduler's `pod affinity` plugin for scheduling a group of pods with inter-pod affinity to themselves. However, the current implementation does not match the doc and the comment, causing unexpected scheduling results.
> [[doc]](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
> #### Scheduling a group of pods with inter-pod affinity to themselves
> If the current pod being scheduled is the first in a series that have affinity to themselves, it is allowed to be scheduled if it passes all other affinity checks. This is determined by verifying that no other pod in the cluster matches the namespace and selector of this pod, that the pod matches its own terms, and the chosen node matches all requested topologies. This ensures that there will not be a deadlock even if all the pods have inter-pod affinity specified.
#### The Inconsistency
In the current version of the [documentation](https://github.com/kubernetes/website/blob/8a1bb4cf42fdd9c8065d66a1d2ebace4731ba414/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md?plain=1#L292C16-L292C90) and the current version of the [code comment](https://github.com/kubernetes/kubernetes/blob/a4b8a3b2e33a3b591884f69b64f439e6b880dc40/pkg/scheduler/framework/plugins/interpodaffinity/filtering.go#L352C55-L353C67), both say that "**no other pod in the cluster** matches the namespace and selector of this pod," which implies that the scheduler will check **all pods**.
However, after investigating the implementation, we found that the scheduler actually checks **all pods on nodes with at least one topology key matched**, instead of **all pods**. (For more details, please see "Anything else we need to know?".)
As a result, the current implementation leads to unexpected scheduling results.
#### The Original Intent
We have investigated the history of this special rule, and it shows:
1. [At the very beginning](https://github.com/kubernetes/kubernetes/blob/587d164307de060d271f10f2386f39153360fba9/plugin/pkg/scheduler/algorithm/predicates/predicates.go#L836C4-L848C5), the code and the comment were consistent, both executing/stating that the scheduler would check **all pods in the cluster**.
2. [Later](https://github.com/kubernetes/kubernetes/blob/587d164307de060d271f10f2386f39153360fba9/plugin/pkg/scheduler/algorithm/predicates/predicates.go#L836C4-L848C5), previous developers introduced a mechanism for pre-calculating some data structures and using them to filter pod affinity. The newly added code became inconsistent with the comment:
- The code was checking **all pods on nodes with at least one topology key matched**.
- The comment, however, still stated **all pods in the cluster**.
At this point, the scheduler had fallback logic to the original code if the pre-calculated data didn't exist. Therefore, the scheduler had two routes simultaneously: one consistent and the other inconsistent.
3. [Finally](https://github.com/kubernetes/kubernetes/blob/a4b8a3b2e33a3b591884f69b64f439e6b880dc40/pkg/scheduler/framework/plugins/interpodaffinity/filtering.go#L350C2-L360C3), previous developers removed both the fallback logic and the original code. The current implementation only uses the pre-calculated data structures, which are inconsistent with the comment.
### What did you expect to happen?
According to the history of this rule, we assume the original intent was to check **all pods** in the cluster. Because of the newly added data structure, the implementation became wrong, as it checks **all pods on nodes with at least one topology key matched**.
But we think this still needs developers' help to confirm the original / ideal intent of this rule.
1. If the intent is "check all pods" -> the implementation should be fixed.
2. If the intent is "only check pods on nodes with at least one topology key matched", then the comment in kubernetes/kubernetes and documentation in kubernetes/website should be updated.
### How can we reproduce it (as minimally and precisely as possible)?
#### Steps:
The incoming pod affinity's selector matches itself and also the existing pod.
The incoming pod's pod affinity has 2 terms with 2 different topology keys:
- node-0 has 2 of these topology keys
- node-1 has 1 of these topology keys
- node-2 has 0 of these topology keys
1. add 3 nodes using below yaml file.
`kubectl apply -f nodes.yaml`
2. add the existing pod first, it will land on node-2.
`kubectl apply -f existing_pod.yaml`
3. add the incoming pod, it can be scheduled, although at this time there is a pod that also matches these selectors in the cluster. Because that pod is on node-2, and node-2 doesn't match any topology key.
`kubectl apply -f incoming_pod.yaml`
4. delete all pods.
`kubectl delete pod --all`
5. change the existing pod's `nodeSelector` into `node-name: node-1`, add the existing pod, it will land on node-1.
`(change the existing pod's nodeSelector)`
`kubectl apply -f existing_pod.yaml`
6. add the incoming pod, it will not be scheduled, this time the pod that also matches these selectors is on node-1, and node-1 matches 1 topology key.
`kubectl apply -f incoming_pod.yaml`
#### Nodes:
```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    label1: value1
    label2: value2
    node-name: node-0
  name: node-0
  namespace: default
status:
  allocatable:
    cpu: '10000'
    memory: 64T
    pods: '100000'
  capacity:
    cpu: '10000'
    memory: 64T
    pods: '10000'
---
apiVersion: v1
kind: Node
metadata:
  labels:
    label2: value2
    label3: value3
    node-name: node-1
  name: node-1
  namespace: default
status:
  allocatable:
    cpu: '10000'
    memory: 64T
    pods: '100000'
  capacity:
    cpu: '10000'
    memory: 64T
    pods: '10000'
---
apiVersion: v1
kind: Node
metadata:
  labels:
    label3: value3
    node-name: node-2
  name: node-2
  namespace: default
status:
  allocatable:
    cpu: '10000'
    memory: 64T
    pods: '100000'
  capacity:
    cpu: '10000'
    memory: 64T
    pods: '10000'
```
#### Existing pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
pod-label1: value1
name: pod-0
namespace: default
spec:
containers:
- image: nginx
name: test-container
nodeSelector:
node-name: node-2
```
#### Incoming pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
pod-label1: value1
name: test-pod
namespace: default
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: pod-label1
operator: In
values:
- value1
topologyKey: label1
- labelSelector:
matchExpressions:
- key: pod-label1
operator: In
values:
- value1
topologyKey: label2
containers:
- image: nginx
name: test-container
```
### Anything else we need to know?
#### Why does the current implementation check pods on nodes with at least one matched topology key?
The `state.affinityCounts` is a map that maps "topology key-value pairs" to the "number of pods in the topology domain that match the namespace and selector." Below is the [code](https://github.com/kubernetes/kubernetes/blob/a4b8a3b2e33a3b591884f69b64f439e6b880dc40/pkg/scheduler/framework/plugins/interpodaffinity/filtering.go#L104C1-L125C1) related to this rule:
```go
// Check the special rule: no other pod in the cluster matches the namespace and selector of this pod.
if len(state.affinityCounts) == 0 &&...
// update state.affinityCounts
func (m topologyToMatchedTermCount) updateWithAffinityTerms(
terms []framework.AffinityTerm, pod *v1.Pod, node *v1.Node, value int64) {
if podMatchesAllAffinityTerms(terms, pod) {
for _, t := range terms {
m.update(node, t.TopologyKey, value)
}
}
}
func (m topologyToMatchedTermCount) update(node *v1.Node, tk string, value int64) {
if tv, ok := node.Labels[tk]; ok {
pair := topologyPair{key: tk, value: tv}
m[pair] += value
// value could be negative, hence we delete the entry if it is down to zero.
if m[pair] == 0 {
delete(m, pair)
}
}
}
```
In the last two functions, we can see that an existing pod is counted in `state.affinityCounts` only if it is on a node that has at least one of the topology keys required by the incoming pod.
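To make the effect concrete for the reproduction above, here is a minimal Python sketch (our own illustration, not Kubernetes code) that mirrors the Go logic: a matching existing pod is only counted for topology keys its node actually carries, so a matching pod on node-2 leaves `state.affinityCounts` empty and the special rule treats the cluster as having no matching pods.
```python
# Minimal illustration (not scheduler code) of the counting behaviour above,
# applied to the reproduction scenario: the incoming pod's two affinity terms
# use topology keys "label1" and "label2", and the existing pod matches both
# terms' selectors, so only the node's labels decide what gets counted.
def affinity_counts(required_topology_keys, node_labels):
    counts = {}
    for key in required_topology_keys:
        # Mirrors topologyToMatchedTermCount.update: keys absent from the
        # node are silently skipped, so the pod is never counted for them.
        if key in node_labels:
            pair = (key, node_labels[key])
            counts[pair] = counts.get(pair, 0) + 1
    return counts

required = ["label1", "label2"]
node2_labels = {"label3": "value3", "node-name": "node-2"}   # no required key
node1_labels = {"label2": "value2", "label3": "value3", "node-name": "node-1"}

print(affinity_counts(required, node2_labels))  # {} -> special rule fires, incoming pod schedules
print(affinity_counts(required, node1_labels))  # {('label2', 'value2'): 1} -> rule does not fire
```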
/sig scheduling
### Kubernetes version
1.32.0
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,needs-triage | low | Critical |
2,752,007,357 | transformers | Maybe the way SequenceClassification Model calculates the last non-pad token is not reasonable. | ### System Info
None
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When a `pad_token_id` is defined in the configuration, the SequenceClassification model finds the last non-padding token in each row by calculating the position of the first pad token.
The code is at https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L951:
```
sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
```
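As a quick illustration of what this line computes (a toy sketch with made-up token ids, not transformers code), consider a right-padded batch:
```python
import torch

pad_token_id = 0
input_ids = torch.tensor([
    [11, 12, 13, 0, 0],    # 3 real tokens followed by padding
    [21, 22, 23, 24, 25],  # no padding at all
])

# Same expression as in modeling_llama.py: position of the first pad token, minus one.
sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1
print(sequence_lengths)  # tensor([ 2, -1])
# Row 0: first pad is at index 3, so the last real token is at index 2 -- correct.
# Row 1: no pad exists, argmax of an all-zero row is 0, giving -1, which indexing
# resolves to the final token -- still effectively correct.
# The problem appears when the pad token id also occurs inside the sequence,
# e.g. when the eos token is reused as the pad token (see below).
```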
When the pad token is the eos token, this method calculates the position of the first eos token instead. However, in some LLM chat templates the eos token is appended at the end of every turn, including the prompt. Also, when a model such as Llama does not define a default pad token, using the eos token as the pad token is common practice.
For example, in Llama the eos token is `<|eot_id|>`:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("meta-llama/Llama-3.2-1B-Instruct", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer.eos_token, model.config.pad_token_id, tokenizer.pad_token
# ('<|eot_id|>', None, None)
```
Using the chat template to encode the QA pairs, we get:
```
prompt = "what is 1+1?"
response1 = "1+1=2"
response2 = "1+1=3"
conv1 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response1}]
conv2 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response2}]
conv1_tokenized = tokenizer.apply_chat_template(conv1, tokenize=True, return_tensors="pt")
conv2_tokenized = tokenizer.apply_chat_template(conv2, tokenize=True, return_tensors="pt")
# conv1_tokenized(input_ids)
# (tensor([[128000, 128006, 882, 128007, 271, 12840, 374, 220, 16,
# 10, 16, 30, 128009, 128006, 78191, 128007, 271, 16,
# 10, 16, 28, 17, 128009, 128006, 78191, 128007, 271]]),
# conv1_tokenized(tokens)
# '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nwhat is 1+1?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n1+1=2<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n')
```
In the Llama chat template, the eos_token is added at the end of each turn, including the prompt. If we do not pad the input_ids and do not set pad_token_id, the score is correct:
```
with torch.no_grad():
score1 = model(conv1_tokenized).logits[0][0].item()
score2 = model(conv2_tokenized).logits[0][0].item()
print(f"Score for response 1: {score1}")
print(f"Score for response 2: {score2}")
# Score for response 1: 1.7297050952911377
# Score for response 2: 1.43972647190094
```
If we set pad_token_id = eos_token_id, we get the same score for different QA pairs that share the same prompt:
```
model.config.pad_token_id = tokenizer.eos_token_id
with torch.no_grad():
score1 = model(conv1_tokenized).logits[0][0].item()
score2 = model(conv2_tokenized).logits[0][0].item()
print(f"Score for response 1: {score1}")
print(f"Score for response 2: {score2}")
# Score for response 1: -1.857212781906128
# Score for response 2: -1.857212781906128
```
This is because the score of the SequenceClassification model is the logits of the last non-pad token, and with this method the last non-pad token is resolved to the token right before the first eos token. This is incorrect, especially when training reward models with preference pairs that share the same prompt.
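To make this concrete, here is a quick sketch (our own illustration) that reuses the conv1 input_ids printed above and applies the same expression from modeling_llama.py:
```python
import torch

# conv1 input_ids copied from the output above; the first <|eot_id|> (128009)
# is the one that closes the *prompt*, not the end of the whole sequence.
input_ids = torch.tensor([[128000, 128006, 882, 128007, 271, 12840, 374, 220, 16,
                           10, 16, 30, 128009, 128006, 78191, 128007, 271, 16,
                           10, 16, 28, 17, 128009, 128006, 78191, 128007, 271]])
pad_token_id = 128009  # eos token id reused as pad

sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1
print(sequence_lengths)  # tensor([11]) -> the token just before the prompt's <|eot_id|>
# conv1 and conv2 share the same prompt, so both are scored at position 11
# and therefore receive identical logits.
```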
The complete reproduction script is as follows:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("meta-llama/Llama-3.2-1B-Instruct", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
prompt = "what is 1+1?"
response1 = "1+1=2"
response2 = "1+1=3"
conv1 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response1}]
conv2 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response2}]
conv1_tokenized1 = tokenizer.apply_chat_template(conv1, tokenize=True, return_tensors="pt")
conv2_tokenized1 = tokenizer.apply_chat_template(conv2, tokenize=True, return_tensors="pt")
model.config.pad_token_id = tokenizer.eos_token_id
with torch.no_grad():
score1 = model(conv1_tokenized1).logits[0][0].item()
score2 = model(conv2_tokenized1).logits[0][0].item()
print(f"Score for response 1: {score1}")
print(f"Score for response 2: {score2}")
```
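For comparison, here is a minimal sketch of the attention-mask-based alternative suggested below (our own assumption of a possible fix, not existing transformers code); it assumes right padding and a correctly built `attention_mask`:
```python
import torch

# Sketch of deriving the last non-pad position from the attention mask
# instead of searching for the first pad token (assumes right padding).
def last_non_pad_index(attention_mask: torch.Tensor) -> torch.Tensor:
    # attention_mask is 1 for real tokens and 0 for padding,
    # so the number of real tokens minus one is the last real index.
    return attention_mask.sum(dim=-1) - 1

attention_mask = torch.tensor([
    [1, 1, 1, 1, 0, 0],  # 4 real tokens -> last index 3
    [1, 1, 1, 1, 1, 1],  # no padding    -> last index 5
])
print(last_non_pad_index(attention_mask))  # tensor([3, 5])
# Unlike the pad-token search, this stays correct even when the pad token id
# also appears inside the sequence (e.g. eos reused as pad).
```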
### Expected behavior
Maybe we should prioritize using the attention mask to calculate the position of the last non-pad token. | bug | low | Minor |
2,752,017,034 | PowerToys | Progress bar disappears | ### Microsoft PowerToys version
0.0.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
1-3: (screenshots showing the update download progress bar and the tab switching)
### ✔️ Expected Behavior
The update status should not be reset after switching to any other tab; once the progress bar appears while the update is being downloaded, it should remain visible.
### ❌ Actual Behavior
The update status is reset (the progress bar disappears) if you switch to any other tab after the download starts.
### Other Software
_No response_ | Issue-Bug,Product-Settings,Needs-Triage | low | Minor |