id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,767,582,284 | deno | Support "https" dependencies in package.json | ```
$ deno --version
deno 2.1.4 (stable, release, aarch64-apple-darwin)
v8 13.0.245.12-rusty
typescript 5.6.2
```
My `package.json` contains
```
{
// ...
"devDependencies": {
// ...
"odh-dashboard-frontend": "https://github.com/opendatahub-io/odh-dashboard.git#semver:v2.29.0&path:/frontend",
```
This package installs fine with `pnpm`, but with `deno install`, it is not installed, and I get a Warning about it.
```
$ deno install
Warning Not implemented scheme 'https'
at file:///Users/jdanek/IdeaProjects/notebooks/tests/deno/package.json
```
Possibly related issue
* https://github.com/denoland/deno/issues/19158 | suggestion,node compat | low | Minor |
2,767,596,533 | PowerToys | Crashing on start: System.TypeInitializationException | ### Microsoft PowerToys version
0.87.1.0
### Installation method
WinGet
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Just trying to start. I did install a few plugins.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
[2025-01-03.txt](https://github.com/user-attachments/files/18300231/2025-01-03.txt)
| Issue-Bug,Needs-Triage | low | Critical |
2,767,611,547 | flutter | [Proposal]Add support for fixed or sticky headers in Table | ### Use case
The Table widget in Flutter does not currently support fixing all columns while scrolling horizontally. This feature is essential for scenarios where users need to navigate large tables while keeping all data in view and aligned.
### Proposal
Enhance the `Table` widget to support fixing all columns during horizontal scrolling.
**Key Features:**
- Allow all columns in the `Table` widget to remain fixed while scrolling vertically.
- Maintain alignment between headers and rows during scrolling.
- Provide seamless integration with existing table features such as styling and data binding.
**Visualization**
A table where all columns remain fixed in place while the user scrolls vertically. This ensures that all data remains visible, aligned, and accessible at all times. | c: new feature,framework,would be a good package,c: proposal,P3,workaround available,team-framework,triaged-framework | low | Major |
2,767,643,050 | ollama | Ollama should avoid calling hallucinated tools | ### What is the issue?
Sometimes the model seems to hallucinate and call a tool on the client that doesn't exist. Since Ollama has the list of callable tools, it should, in my opinion, check that the tool being called is in that list before calling it.
This is also described here:
https://github.com/langchain4j/langchain4j/issues/1052
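A sketch of the kind of client-side guard this suggests (the names and message shapes here are illustrative, not Ollama's actual API): before dispatching a tool call returned by the model, check its name against the tool list that was sent with the request.

```python
# Hypothetical guard: validate a model's tool call against the tool list that
# was actually declared in the request, instead of dispatching blindly.
# Names and dict shapes are illustrative, not Ollama's real API.

def validate_tool_call(tool_call: dict, declared_tools: list[dict]) -> bool:
    """Return True only if the called tool exists in the declared tool list."""
    known_names = {
        t["function"]["name"] for t in declared_tools if "function" in t
    }
    return tool_call.get("function", {}).get("name") in known_names


declared = [{"type": "function", "function": {"name": "get_weather"}}]

real_call = {"function": {"name": "get_weather", "arguments": {"city": "Oslo"}}}
hallucinated = {"function": {"name": "get_stock_price", "arguments": {}}}

assert validate_tool_call(real_call, declared)
assert not validate_tool_call(hallucinated, declared)
```

A server-side check like this would reject the hallucinated call before it ever reaches the client's tool dispatcher.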
### OS
Linux, Docker
### GPU
Other
### CPU
Intel
### Ollama version
0.5.4 | bug | low | Major |
2,767,650,756 | godot | Inheriting from Translation class in gdscript crashes the Editor & Game on startup | ### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable - macOS 15.1.1 - Vulkan (Mobile) - integrated Apple M3 Pro - Apple M3 Pro (12 Threads)
### Issue description
I am trying to implement my own Translation class in GDScript to do some data processing in the background. But when I try to use my own class (even when it's an empty class that just has a class_name and extends Translation) and add it to ProjectSettings->Localization, the Godot Editor crashes when I reopen the project or when I start the game.
I can't get a log from the editor but the game when launched returns this crash in the log:
```
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
Vulkan 1.2.283 - Forward Mobile - Using Device #0: Apple - Apple M3 Pro
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.3.stable.official (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] 1 libsystem_platform.dylib 0x000000018ed40184 _sigtramp + 56
[2] GDScriptParser::parse_identifier() (in Godot) + 36
[3] GDScriptParser::parse_class_name() (in Godot) + 228
[4] GDScriptParser::parse_program() (in Godot) + 1404
[5] GDScriptParser::parse(String const&, String const&, bool, bool) (in Godot) + 1956
[6] GDScriptParserRef::raise_status(GDScriptParserRef::Status) (in Godot) + 724
[7] GDScriptCache::get_parser(String const&, GDScriptParserRef::Status, Error&, String const&) (in Godot) + 520
[8] GDScriptCache::get_shallow_script(String const&, Error&, String const&) (in Godot) + 644
[9] GDScriptCache::get_full_script(String const&, Error&, String const&, bool) (in Godot) + 220
[10] ResourceFormatLoaderGDScript::load(String const&, String const&, Error*, bool, float*, ResourceFormatLoader::CacheMode) (in Godot) + 84
[11] ResourceLoader::_load(String const&, String const&, String const&, ResourceFormatLoader::CacheMode, Error*, bool, float*) (in Godot) + 696
[12] ResourceLoader::_thread_load_function(void*) (in Godot) + 580
[13] ResourceLoader::_load_start(String const&, String const&, ResourceLoader::LoadThreadMode, ResourceFormatLoader::CacheMode) (in Godot) + 1608
[14] ResourceLoaderBinary::load() (in Godot) + 816
[15] ResourceFormatLoaderBinary::load(String const&, String const&, Error*, bool, float*, ResourceFormatLoader::CacheMode) (in Godot) + 784
[16] ResourceLoader::_load(String const&, String const&, String const&, ResourceFormatLoader::CacheMode, Error*, bool, float*) (in Godot) + 696
[17] ResourceLoader::_thread_load_function(void*) (in Godot) + 580
[18] ResourceLoader::_load_start(String const&, String const&, ResourceLoader::LoadThreadMode, ResourceFormatLoader::CacheMode) (in Godot) + 1608
[19] ResourceLoader::load(String const&, String const&, ResourceFormatLoader::CacheMode, Error*) (in Godot) + 84
[20] TranslationServer::_load_translations(String const&) (in Godot) + 296
[21] TranslationServer::load_translations() (in Godot) + 52
[22] Main::setup2(bool) (in Godot) + 11608
[23] Main::setup(char const*, int, char**, bool) (in Godot) + 55888
[24] main (in Godot) + 300
[25] 25 dyld 0x000000018e988274 start + 2840
-- END OF BACKTRACE --
================================================================
```
### Steps to reproduce
Create a GDScript class that extends Translation, create a Resource instance of it, and set it under ProjectSettings->Localization.
```
@tool
class_name TranslationGoogleSheet
extends Translation
```
A simple class like this is enough.
### Minimal reproduction project (MRP)
This project contains the simplest possible Translation class and an instance of the resource set as localization. When you try to open the project, the Godot Editor immediately crashes. To open the project, remove the localization setting from the project.godot file.
```locale/translations=PackedStringArray("res://test.translation")```
[godot_localization_bug.zip](https://github.com/user-attachments/files/18300495/godot_localization_bug.zip)
| bug,topic:core,needs testing,crash | low | Critical |
2,767,671,507 | tensorflow | Tensorflow not supported on Windows + ARM CPUs | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18
### Custom code
No
### OS platform and distribution
Windows 11
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I can't import tensorflow
### Standalone code to reproduce the issue
```shell
I can't import tensorflow. Installation is successful. I uninstalled and reinstalled
```
### Relevant log output
```shell
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~\anaconda3\Lib\site-packages\tensorflow\python\pywrap_tensorflow.py:70
69 try:
---> 70 from tensorflow.python._pywrap_tensorflow_internal import *
71 # This try catch logic is because there is no bazel equivalent for py_extension.
72 # Externally in opensource we must enable exceptions to load the shared object
73 # by exposing the PyInit symbols with pybind. This error will only be
74 # caught internally or if someone changes the name of the target _pywrap_tensorflow_internal.
75
76 # This logic is used in other internal projects using py_extension.
ImportError: DLL load failed while importing _pywrap_tensorflow_internal: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
Cell In[9], line 1
----> 1 import tensorflow as tf
File ~\anaconda3\Lib\site-packages\tensorflow\__init__.py:40
37 _os.environ.setdefault("ENABLE_RUNTIME_UPTIME_TELEMETRY", "1")
39 # Do not remove this line; See https://github.com/tensorflow/tensorflow/issues/42596
---> 40 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow # pylint: disable=unused-import
41 from tensorflow.python.tools import module_util as _module_util
42 from tensorflow.python.util.lazy_loader import KerasLazyLoader as _KerasLazyLoader
File ~\anaconda3\Lib\site-packages\tensorflow\python\pywrap_tensorflow.py:85
83 sys.setdlopenflags(_default_dlopen_flags)
84 except ImportError:
---> 85 raise ImportError(
86 f'{traceback.format_exc()}'
87 f'\n\nFailed to load the native TensorFlow runtime.\n'
88 f'See https://www.tensorflow.org/install/errors '
89 f'for some common causes and solutions.\n'
90 f'If you need help, create an issue '
91 f'at https://github.com/tensorflow/tensorflow/issues '
92 f'and include the entire stack trace above this error message.')
ImportError: Traceback (most recent call last):
File "C:\Users\dhima\anaconda3\Lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 70, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed while importing _pywrap_tensorflow_internal: A dynamic link library (DLL) initialization routine failed.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors for some common causes and solutions.
If you need help, create an issue at https://github.com/tensorflow/tensorflow/issues and include the entire stack trace above this error message.
```
| stat:awaiting tensorflower,type:feature,type:build/install,subtype:windows,TF 2.18 | medium | Critical |
2,767,675,146 | storybook | [Bug]: automigrate fails on EXDEV error | ### Describe the bug
When running the automigrate command on Windows, Storybook tries to move the log file from the user's temp folder to the current project folder by renaming it. This fails if the temp folder and the project folder are on different drives/partitions, with the error:
`Error: EXDEV: cross-device link not permitted, rename 'C:\Users\[USER]\AppData\Local\Temp\435cc8918d4e493a2eb400fc4351782f\migration-storybook.log' -> '.\migration-storybook.log'`
### Reproduction link
NONE
### Reproduction steps
_No response_
### System
```bash
Storybook Environment Info:
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 AMD Ryzen 7 PRO 7840U w/ Radeon 780M Graphics
Binaries:
Node: 22.12.0 - C:\Program Files\nodejs\node.EXE
npm: 11.0.0 - ~\AppData\Roaming\npm\npm.CMD <----- active
Browsers:
Edge: Chromium (128.0.2739.79)
npmPackages:
@storybook/addon-a11y: ^8.4.7 => 8.4.6
@storybook/addon-coverage: ^1.0.4 => 1.0.4
@storybook/addon-docs: ^8.4.7 => 8.4.6
@storybook/addon-essentials: ^8.4.7 => 8.4.6
@storybook/addon-interactions: ^8.4.7 => 8.4.6
@storybook/addon-jest: ^8.4.7 => 8.4.6
@storybook/addon-links: ^8.4.7 => 8.4.6
@storybook/angular: ^8.4.7 => 8.4.6
@storybook/core-server: ^8.4.7 => 8.4.6
@storybook/test-runner: 0.19.0 => 0.19.0
storybook: ^8.4.7 => 8.2.8
npmGlobalPackages:
@storybook/cli: 8.4.7
```
### Additional context
_No response_ | bug,help wanted,windows,automigrations | low | Critical |
2,767,700,983 | tensorflow | KeyError: "There is no item named 'PetImages\\Cat\\0.jpg' in the archive" When Running TensorFlow Locally(CPU) on Anaconda in VS Code. | ### Issue type
Documentation Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.17.1
### Custom code
Yes
### OS platform and distribution
window11
### Mobile device
_No response_
### Python version
3.10.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I am a beginner and encountering an issue while trying to run TensorFlow locally using Anaconda in VS Code. The same code runs smoothly on Google Colab, but when executed locally, it fails during the dataset download process with the following error.
What I've tried:
- Re-downloading TensorFlow and TensorFlow Datasets to ensure they are up to date.
- Manually unzipping the dataset and verifying that 'PetImages/Cat/0.jpg' exists in the archive.
- Re-adjusting the Python, TensorFlow, and TensorFlow Datasets versions to match those in Colab by using Python 3.10.11 instead of Python 3.10.12.
- Recreating the virtual environment in Anaconda to ensure a clean setup.
- Downloading the Cats vs Dogs dataset from different sources, but the issue persists.
- Asking ChatGPT for assistance, but the issue remains unresolved.

Additional information:
- On Google Colab, the same code runs without any issues, and the dataset downloads successfully.
- In VS Code, the error consistently occurs during the dataset download process, indicating that 'PetImages/Cat/0.jpg' is missing from the archive.
- Network stability: I have a stable internet connection, and downloads complete without interruption, but the error persists.

Questions:
- Why does the KeyError occur in VS Code but not in Google Colab?
- Could this be related to the way the dataset is being downloaded or unzipped locally?
- Are there any compatibility issues between the Python/TensorFlow versions and the dataset?

I would greatly appreciate any guidance or suggestions on how to resolve this issue. Thank you in advance for your assistance!
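One detail worth noting: the ZIP format stores member names with forward slashes, so a lookup using a Windows-style name like `PetImages\Cat\0.jpg` raises `KeyError` even when the file is present in the archive. A small standalone demonstration (independent of TensorFlow Datasets):

```python
# ZIP archives store member names with forward slashes. Looking a member up
# with backslashes raises KeyError even though the file exists, which matches
# the error in this report. Self-contained demonstration:
import io
import zipfile

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as writer:
    writer.writestr("PetImages/Cat/0.jpg", b"fake image bytes")

zf = zipfile.ZipFile(buffer)

# The exact stored name works...
assert zf.open("PetImages/Cat/0.jpg").read() == b"fake image bytes"

# ...but the backslash variant raises KeyError.
try:
    zf.open("PetImages\\Cat\\0.jpg")
    found = True
except KeyError:
    found = False
assert not found

# One workaround when names come from Windows paths: normalize separators.
fname = "PetImages\\Cat\\0.jpg".replace("\\", "/")
assert zf.getinfo(fname).filename == "PetImages/Cat/0.jpg"
```

So the local/Colab difference is consistent with the archive member name being built with `os.sep` somewhere on the Windows path, rather than with a corrupt download.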
### Standalone code to reproduce the issue
```shell
import tensorflow_datasets as tfds
import tensorflow as tf
import numpy as np
CatsVsDogs_OrgData, info=tfds.load(name='cats_vs_dogs', with_info=True,
split=tfds.Split.TRAIN)
```
### Relevant log output
```shell
PS C:\Users\jbb86\桌面\圖樣辨識> & C:/Users/jbb86/桌面/圖樣辨識/.venv/Scripts/python.exe c:/Users/jbb86/桌面/圖樣辨識/.venv/CatsVsDogs.py
2025-01-03 22:38:53.599253: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-01-03 22:38:54.247816: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Downloading and preparing dataset Unknown size (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\jbb86\tensorflow_datasets\cats_vs_dogs\4.0.1...
Dl Size...: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 824887076/824887076 [00:00<00:00, 803489819418.28 MiB/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 974.06 url/s]
Generating splits...: 0%| 2025-01-03 22:38:55.958212: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "c:\Users\jbb86\桌面\圖樣辨識\.venv\CatsVsDogs.py", line 7, in <module>
CatsVsDogs_OrgData, info=tfds.load(name='cats_vs_dogs', with_info=True,
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\logging\__init__.py", line 176, in __call__
return function(*args, **kwargs)
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\load.py", line 661, in load
_download_and_prepare_builder(dbuilder, download, download_and_prepare_kwargs)
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\load.py", line 517, in _download_and_prepare_builder
dbuilder.download_and_prepare(**download_and_prepare_kwargs)
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\logging\__init__.py", line 176, in __call__
return function(*args, **kwargs)
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 756, in download_and_prepare
self._download_and_prepare(
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 1752, in _download_and_prepare
split_infos = self._generate_splits(dl_manager, download_config)
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 1727, in _generate_splits
future = split_builder.submit_split_generation(
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\split_builder.py", line 436, in submit_split_generation
return self._build_from_generator(**build_kwargs)
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\core\split_builder.py", line 496, in _build_from_generator
for key, example in utils.tqdm(
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tqdm\std.py", line 1181, in __iter__
for obj in iterable:
File "C:\Users\jbb86\桌面\圖樣辨識\.venv\lib\site-packages\tensorflow_datasets\image_classification\cats_vs_dogs.py", line 117, in _generate_examples
new_fobj = zipfile.ZipFile(buffer).open(fname)
File "C:\Users\jbb86\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1516, in open
zinfo = self.getinfo(name)
File "C:\Users\jbb86\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1443, in getinfo
raise KeyError(
KeyError: "There is no item named 'PetImages\\Cat\\0.jpg' in the archive"
```
| type:others,2.17 | low | Critical |
2,767,736,985 | rust | Compiling -> "thread 'coordinator' panicked" | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
Just the whole project. If needed I can share it, but I have no idea where the bug is.
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.79.0 (129f3b996 2024-06-10)
binary: rustc
commit-hash: 129f3b9964af4d4a709d1383930ade12dfe7c081
commit-date: 2024-06-10
host: x86_64-pc-windows-msvc
release: 1.79.0
LLVM version: 18.1.7
```
### Error output
```
thread 'coordinator' panicked at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081\compiler\rustc_codegen_ssa\src\back\write.rs:1638:29:
/rustc/129f3b9964af4d4a709d1383930ade12dfe7c081\compiler\rustc_codegen_ssa\src\back\write.rs:1638:29: worker thread panicked
stack backtrace:
0: 0x7ffb57ea3c48 - std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\..\..\backtrace\src\backtrace\dbghelp64.rs:91
1: 0x7ffb57ea3c48 - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: 0x7ffb57ea3c48 - std::sys_common::backtrace::_print_fmt
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\sys_common\backtrace.rs:68
3: 0x7ffb57ea3c48 - std::sys_common::backtrace::_print::impl$0::fmt
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\sys_common\backtrace.rs:44
4: 0x7ffb57ed4f09 - core::fmt::rt::Argument::fmt
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\core\src\fmt\rt.rs:165
5: 0x7ffb57ed4f09 - core::fmt::write
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\core\src\fmt\mod.rs:1157
6: 0x7ffb57e9a3e1 - std::io::Write::write_fmt<std::sys::pal::windows::stdio::Stderr>
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\io\mod.rs:1832
7: 0x7ffb57ea3a26 - std::sys_common::backtrace::print
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\sys_common\backtrace.rs:34
8: 0x7ffb57ea6b88 - std::panicking::default_hook::closure$1
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\panicking.rs:271
9: 0x7ffb57ea67f7 - std::panicking::default_hook
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\panicking.rs:298
10: 0x7ffb49f9fdfe - __longjmp_internal
11: 0x7ffb57ea71b7 - alloc::boxed::impl$50::call
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\alloc\src\boxed.rs:2036
12: 0x7ffb57ea71b7 - std::panicking::rust_panic_with_hook
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\panicking.rs:799
13: 0x7ffb4b100734 - ar_archive_writer[96f46fe156109c5a]::archive_writer::write_symbols
14: 0x7ffb4b0f6029 - ar_archive_writer[96f46fe156109c5a]::archive_writer::write_symbols
15: 0x7ffb4b0e3c13 - ar_archive_writer[96f46fe156109c5a]::archive_writer::write_symbols
16: 0x7ffb4b195c6d - rustc_middle[9a5583d0f984d223]::util::bug::bug_fmt
17: 0x7ffb4b1767bd - rustc_middle[9a5583d0f984d223]::ty::consts::const_param_default
18: 0x7ffb4b1765fd - rustc_middle[9a5583d0f984d223]::ty::consts::const_param_default
19: 0x7ffb4b195ba2 - rustc_middle[9a5583d0f984d223]::util::bug::bug_fmt
20: 0x7ffb48d153a6 - <rustc_interface[5039fedd6fed528e]::passes::LintStoreExpandImpl as rustc_expand[8a362ccb6e11b5d0]::base::LintStoreExpand>::pre_expansion_lint
21: 0x7ffb45d79bfb - llvm::DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llvm::IRMover::StructTypeKeyInfo,llvm::detail::DenseSetPair<llvm::StructType * __ptr64> >::~DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llvm::IRMover::StructTypeKeyI
22: 0x7ffb45d8568d - llvm::DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llvm::IRMover::StructTypeKeyInfo,llvm::detail::DenseSetPair<llvm::StructType * __ptr64> >::~DenseMap<llvm::StructType * __ptr64,llvm::detail::DenseSetEmpty,llvm::IRMover::StructTypeKeyI
23: 0x7ffb57eb822d - alloc::boxed::impl$48::call_once
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\alloc\src\boxed.rs:2022
24: 0x7ffb57eb822d - alloc::boxed::impl$48::call_once
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\alloc\src\boxed.rs:2022
25: 0x7ffb57eb822d - std::sys::pal::windows::thread::impl$0::new::thread_start
at /rustc/129f3b9964af4d4a709d1383930ade12dfe7c081/library\std\src\sys\pal\windows\thread.rs:52
26: 0x7ffbb8b07374 - BaseThreadInitThunk
27: 0x7ffbb965cc91 - RtlUserThreadStart
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.79.0 (129f3b996 2024-06-10) running on x86_64-pc-windows-msvc
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
error: cached cgu 58qpz0sqetpnq1gt should have an object file, but doesn't
error: could not compile `test-arca-ws` (bin "test-arca-ws") due to 1 previous error
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
Running it with "RUST_BACKTRACE=1" gives exactly the same message.
```
</p>
</details>
| I-ICE,T-compiler,A-incr-comp,C-bug | low | Critical |
2,767,743,713 | rust | Trait bound not satisfied | ### Code
```Rust
pub trait Backend {}
pub struct Shape<const D: usize>([usize; D]);
pub struct Tensor<B: Backend, const D: usize, T> {
backend: B,
shape: Shape<D>,
data: Vec<T>,
}
impl<B: Backend, const D: usize, T> Tensor<B, D, T> {
pub fn empty(backend: B, shape: impl Into<Shape<D>>) -> Tensor<B, D, T> {
let shape: Shape<D> = shape.into();
let size = shape.0.iter().product();
Tensor {
backend,
shape,
data: Vec::with_capacity(size),
}
}
pub fn as_slice(&self) -> &[T] {
&self.data.as_slice()
}
}
impl<const D: usize> From<[usize; D]> for Shape<D> {
fn from(value: [usize; D]) -> Self {
Shape(value)
}
}
impl<const D: usize> From<u64> for Shape<D> {
fn from(value: u64) -> Self {
Shape([value as usize; D])
}
}
pub trait Op {}
pub trait ConvImpl<B: Backend, const D: usize, const D2: usize, T> {
fn conv(
&self,
tensor: &Tensor<B, D, T>,
weights: &Tensor<B, D2, T>,
stride: Shape<D2>,
) -> Tensor<B, D, T>;
}
pub trait Conv<B: Backend, const D: usize, const D2: usize, T> {
fn conv(&self, weights: &Tensor<B, D2, T>, stride: impl Into<Shape<D>>) -> Tensor<B, D, T>;
}
impl<B: Backend + ConvImpl<B, D, D2, T>, const D: usize, const D2: usize, T> Conv<B, D, D2, T>
for Tensor<B, D, T>
{
fn conv(&self, weights: &Tensor<B, D2, T>, stride: impl Into<Shape<D2>>) -> Tensor<B, D, T> {
self.backend.conv(&self, &weights, stride.into())
}
}
```
### Current output
```Shell
error[E0277]: the trait bound `Shape<D2>: From<impl Into<Shape<D2>>>` is not satisfied
--> crates/afterburner-core/src/lib.rs:58:61
|
58 | fn conv(&self, weights: &Tensor<B, D2, T>, stride: impl Into<Shape<D2>>) -> Tensor<B, D, T> {
| ^^^^^^^^^^^^^^^ the trait `From<impl Into<Shape<D2>>>` is not implemented for `Shape<D2>`, which is required by `impl Into<Shape<D2>>: Into<Shape<D2>>`
|
= note: required for `impl Into<Shape<D2>>` to implement `Into<Shape<D2>>`
note: the requirement `impl Into<Shape<D2>>: Into<Shape<D2>>` appears on the `impl`'s method `conv` but not on the corresponding trait's method
--> crates/afterburner-core/src/lib.rs:52:8
|
51 | pub trait Conv<B: Backend, const D: usize, const D2: usize, T> {
| ---- in this trait
52 | fn conv(&self, weights: &Tensor<B, D2, T>, stride: impl Into<Shape<D>>) -> Tensor<B, D, T>;
| ^^^^ this trait's method doesn't have the requirement `impl Into<Shape<D2>>: Into<Shape<D2>>`
```
### Desired output
```Shell
note: the requirement `impl Into<Shape<D2>>: Into<Shape<D2>>` appears on the `impl`'s method `conv` but not on the corresponding trait's method
--> crates/afterburner-core/src/lib.rs:52:8
|
51 | pub trait Conv<B: Backend, const D: usize, const D2: usize, T> {
| ---- in this trait
52 | fn conv(&self, weights: &Tensor<B, D2, T>, stride: impl Into<Shape<D>>) -> Tensor<B, D, T>;
| ^^^^ this trait's method doesn't have the requirement `impl Into<Shape<D2>>: Into<Shape<D2>>`
```
### Rationale and extra context
I can't find a good reason why Rust would tell me that the trait bound isn't satisfied, when in reality the impl's and the trait's method signatures differ. Changing them to match resolves the error. The note section already gives the correct clue; I would have expected that to be the actual error rather than just a note.
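For what it's worth, aligning the two signatures does make the error go away. A minimal sketch (types pared down from the report, so this is not the full Tensor API) showing the trait and the impl agreeing on `impl Into<Shape<D2>>`:

```rust
// Minimal sketch: the trait and the impl must use the same `Into` target.
// Types are simplified from the report; names are illustrative.
pub struct Shape<const D: usize>([usize; D]);

impl<const D: usize> From<[usize; D]> for Shape<D> {
    fn from(value: [usize; D]) -> Self {
        Shape(value)
    }
}

pub trait Conv<const D2: usize> {
    // Declared with `Shape<D2>`...
    fn conv(&self, stride: impl Into<Shape<D2>>) -> usize;
}

pub struct Tensor;

impl<const D2: usize> Conv<D2> for Tensor {
    // ...and implemented with the same `Shape<D2>`, so E0277 no longer fires.
    fn conv(&self, stride: impl Into<Shape<D2>>) -> usize {
        let shape: Shape<D2> = stride.into();
        shape.0.iter().product()
    }
}

fn main() {
    let n = <Tensor as Conv<2>>::conv(&Tensor, [3usize, 4]);
    assert_eq!(n, 12);
    println!("product = {n}");
}
```

In the original code the trait declares `impl Into<Shape<D>>` while the impl writes `impl Into<Shape<D2>>`, which is exactly the mismatch the note points at.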
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: aarch64-apple-darwin
release: 1.81.0
LLVM version: 18.1.7
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,767,748,581 | godot | OS_LinuxBSD::get_processor_name fails on aarch64 Linux systems | ### Tested versions
Reproducible from 4.0-stable to 4.3-stable and current master. From a quick git blame, it's always been that way since `get_processor_name` was added.
### System information
Godot v4.3.stable.mono - Fedora Linux Asahi Remix 41 (KDE Plasma) - Wayland - Vulkan (Forward+) - integrated Apple M1 Max (G13C C0) - 8 × Apple Firestorm (M1 Max), 2 × Apple Icestorm (M1 Max) (10 Threads)
### Issue description
When using `OS.get_processor_name()` on an arm64 Linux host, be it in a custom script or via Help > Copy System Info, an empty string is returned instead of the actual processor name.
For easier comparison, this is the system info I get from Help > Copy System Info
```
Godot v4.3.stable.mono - Fedora Linux Asahi Remix 41 (KDE Plasma) - Wayland - Vulkan (Forward+) - integrated Apple M1 Max (G13C C0) - (10 Threads)
```
vs the actual processor name(s) Apple Icestorm and Firestorm in the provided system info
```
Godot v4.3.stable.mono - Fedora Linux Asahi Remix 41 (KDE Plasma) - Wayland - Vulkan (Forward+) - integrated Apple M1 Max (G13C C0) - 8 × Apple Firestorm (M1 Max), 2 × Apple Icestorm (M1 Max) (10 Threads)
```
From a quick Google search, this also affects current [Raspberry Pis](https://www.chrisrcook.com/2023/11/22/cat-proc-cpuinfo-for-a-raspberry-pi-5-8gb-model-b-rev-1-0/) which might be easier to get and test with than an M1/M2.
Complete output of `cat /proc/cpuinfo`:
```
processor : 0
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x028
CPU revision : 0
processor : 1
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x028
CPU revision : 0
processor : 2
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
processor : 3
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
processor : 4
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
processor : 5
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
processor : 6
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
processor : 7
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
processor : 8
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
processor : 9
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
CPU implementer : 0x61
CPU architecture: 8
CPU variant : 0x2
CPU part : 0x029
CPU revision : 0
```
Note that this is not dependent on the Mono version, although I happened to be running it at the time I reproduced the issue.
### Steps to reproduce
1. Open the Godot Editor
2. Help > Copy System Info
3. The error should occur
Alternatively, I included a minimal reproduction project you can run as well
### Minimal reproduction project (MRP)
[test-project.zip](https://github.com/user-attachments/files/18301078/test-project.zip)
| bug,platform:linuxbsd,topic:porting | low | Critical |
2,767,759,387 | go | proposal: change GORISCV64=rva20u64 to include compressed instructions | ### Proposal Details
When the proposal for `GORISCV64` was being discussed, the situation regarding compressed instructions for RISC-V was somewhat unclear and there were discussions about potentially reusing the encoding space. As such, the decision was to make the default `GORISCV64` value be `rva20u64` but explicitly exclude compressed instructions, effectively giving `rv64g`:
https://github.com/golang/go/issues/61476#issuecomment-1782053089
There has since been a decision regarding the future of the C extension, with it remaining a mandatory part of RVA20 and RVA22:
https://lists.riscv.org/g/tech-profiles/topic/rvi_bod_decision_regarding/102522954
This means that all general purpose RISC-V hardware that Go will run on must support compressed instructions.
This proposal is to change Go's meaning of `GORISCV64=rva20u64` to include compressed instructions, instead of continuing to prohibit them. | Proposal | low | Major |
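
For reference, the profile is selected per build via the `GORISCV64` environment variable; an illustrative cross-compilation invocation (build-configuration fragment only — requires a Go toolchain and a riscv64 target to actually run):

```shell
# Cross-compile for 64-bit RISC-V with the RVA20U64 profile.
# Under this proposal, the compiler could also emit compressed (C) instructions here.
GOOS=linux GOARCH=riscv64 GORISCV64=rva20u64 go build -o app .
```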
2,767,790,965 | PowerToys | powertoys - workspaces - onenote app layout overrides the workspaces layout? | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Workspaces
### Steps to reproduce
Create a workspace that includes onenote
Manually launch onenote
Arrange the manually launched onenote in a size/position that is different from the workspace layout size/position
Close onenote
Launch the workspace
Workspace will launch onenote, but the size/position will be determined by onenote's last size/position and NOT by the workspace layout size/position
UPDATED NOTE: I'm actually not sure how the OneNote size/position is determined. It does not seem to be OneNote's last size/position after all. All I know is that the Workspace layout does NOT seem to fully work
### ✔️ Expected Behavior
I expect the workspace layout size/position to win out over the size/position that onenote seems to remember.
See screenshot of how I set the layout in Workspaces

### ❌ Actual Behavior
Onenote seems to remember the size/position it was in last, and this size/position seems to override the workspace layout
See screenshot of what actually happens

### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,767,809,021 | neovim | :checkhealth can create a bug report | # Problem
Bug reports are sometimes missing info. It should be easier to create a well-formed bug report using the info available to `:checkhealth`.
# Expected behavior
Some sort of command, probably buffer-local in a `:checkhealth` buffer, like `gX`? Or `:checkhealth bug`.
The command starts a GitHub bug report by visiting a URL like the one below, which contains all relevant info:
https://github.com/neovim/neovim/issues/new?labels=bug&title=user+docs+HTML%3A+gui.txt+&body=%23%23%20Problem%3A%0D%0D%23%23%20Steps%20to%20reproduce:%0D%0D%60%60%60%0DTODO%0D%60%60%60%0D%0D%23%23%20Expected%20behavior:%0D%0D%23%23%20System%20info:%0D%0D
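
A minimal sketch of how such a command could assemble the prefilled URL (the helper and constant names are hypothetical; only `urllib.parse.urlencode` is a real API):

```python
from urllib.parse import urlencode

# Hypothetical constant -- the issue-template endpoint used in the example above.
NEW_ISSUE_URL = "https://github.com/neovim/neovim/issues/new"

def build_bug_report_url(title: str, body: str, labels: str = "bug") -> str:
    # urlencode() percent-encodes each prefilled field, producing a URL of the
    # same shape as the example above (labels, title, body as query params).
    return NEW_ISSUE_URL + "?" + urlencode({"labels": labels, "title": title, "body": body})

url = build_bug_report_url(
    "checkhealth: example report",
    "## Problem:\n\n## Steps to reproduce:\n\n## System info:\n",
)
```

The `:checkhealth` buffer would fill `body` from its collected system info before opening the URL.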
| enhancement,plugin,runtime | low | Critical |
2,767,810,384 | pytorch | `torch.device(0)` makes CUDA init fail in subprocess since `2.5.0` | ### 🐛 Describe the bug
```python
from multiprocessing import Process
import torch
torch.device(0) # Note that torch.device('cuda') or torch.device('cuda:0') do not trigger the issue
def cuda_init():
torch.Tensor([0]).cuda()
p = Process(target=cuda_init)
p.start()
p.join()
assert p.exitcode == 0
```
This code snippet succeeds on PyTorch `2.4.1` and fails on `2.5.0`:
```
RuntimeError: CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
Indeed, since `2.5.0`, `torch.device(0)` calls `at::getAccelerator`, which ends up calling `cudaGetDeviceCount`, thus initializing CUDA in the parent process and preventing forked subprocesses from using it.
It seems to be directly linked with:
- https://github.com/pytorch/pytorch/pull/131811
(especially the change in `torch/csrc/utils/python_arg_parser.h`)
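
For illustration only — a minimal, pure-Python sketch (no torch) of the lazy-initialization behavior `torch.device(0)` had before 2.5.0: constructing the device merely records the index without touching the backend, so a later `fork()` stays safe. All names here (`FakeCudaRuntime`, `Device`, `Tensor`) are hypothetical stand-ins, not torch internals:

```python
class FakeCudaRuntime:
    """Stand-in for the CUDA runtime; the real one cannot survive a fork()."""
    def __init__(self):
        self.initialized = False

    def init(self):
        self.initialized = True

RUNTIME = FakeCudaRuntime()

class Device:
    def __init__(self, index: int):
        # Just record the index -- do NOT touch the runtime here.
        self.index = index

class Tensor:
    def cuda(self, device: Device) -> "Tensor":
        # Backend is initialized lazily, at first real use.
        RUNTIME.init()
        return self

d = Device(0)                   # analogous to torch.device(0)
assert not RUNTIME.initialized  # still safe to fork at this point
Tensor().cuda(d)                # first real use triggers initialization
assert RUNTIME.initialized
```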
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.9 (main, Feb 3 2023, 11:29:04) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1028-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.0.221
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2820.130
BogoMIPS: 5599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @ptrblck @eqy @albanD @guangyey @EikanWang | high priority,module: cuda,triaged,module: regression,module: accelerator | low | Critical |
2,767,823,837 | flutter | failed `flutter build aar` in version flutter 3.27.1 | ### Steps to reproduce
1. create a flutter app
2. remove the android and ios folders
3. add the module config in pubspec.yaml
4. run `flutter build aar --no-debug --no-profile`
### Expected results
flutter build aar success
### Actual results
flutter build aar failed
FAILURE: Build failed with an exception.
* What went wrong:
Could not receive a message from the daemon.
Could not stop root project 'android_generated'.
java.lang.OutOfMemoryError: Metaspace
Could not stop project ':firebase_core'.
java.lang.OutOfMemoryError: Metaspace
FAILURE: Build failed with an exception.
* What went wrong:
Metaspace
### Logs
<details open><summary>Logs</summary>
```console
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
FAILURE: Build failed with an exception.
* What went wrong:
Could not receive a message from the daemon.
Could not stop root project 'android_generated'.
java.lang.OutOfMemoryError: Metaspace
Could not stop project ':firebase_core'.
java.lang.OutOfMemoryError: Metaspace
FAILURE: Build failed with an exception.
* What went wrong:
Metaspace
```
```console
You are applying Flutter's main Gradle plugin imperatively using the apply script method, which is deprecated and will be removed in a future release. Migrate to applying Gradle plugins with the declarative plugins block: https://flutter.dev/to/flutter-gradle-plugin-apply
Warning: SDK processing. This version only understands SDK XML versions up to 3 but an SDK XML file of version 4 was encountered. This can happen if you use versions of Android Studio and the command-line tools that were released at different times.
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Note: /Users/chagee/.pub-cache/hosted/pub.dev/flutter_plugin_engagelab-1.2.7/android/src/main/java/com/engagelab/privates/flutter_plugin_engagelab/MTApplication.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: /Users/chagee/.pub-cache/hosted/pub.dev/flutter_plugin_engagelab-1.2.7/android/src/main/java/com/engagelab/privates/flutter_plugin_engagelab/FlutterPluginEngagelabPlugin.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Note: /Users/chagee/.pub-cache/hosted/pub.dev/tencent_cloud_uikit_core-1.7.3/android/src/main/java/com/tencent/cloud/uikit/core/utils/ToastUtil.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
WARN: The registry key 'java.correct.class.type.by.place.resolve.scope' accessed, but not loaded yet
FAILURE: Build completed with 27 failures.
1: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':flutter_plugin_android_lifecycle:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> java.lang.reflect.InvocationTargetException (no error message)
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
2: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':flutter_plugin_engagelab:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
3: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':fluttertoast:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
4: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':geocoding_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
5: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':geolocator_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
6: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':google_maps_flutter_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
7: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':google_sign_in_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
8: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':image_cropper:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
9: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':image_picker_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
10: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':just_audio:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
11: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':open_file_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
12: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':package_info_plus:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
13: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':path_provider_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
14: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':permission_handler_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
15: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':photo_manager:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
16: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':screen_protector:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
17: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':sensors_analytics_flutter_plugin:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
18: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':sensors_plus:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
19: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':shared_preferences_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
20: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':sign_in_with_apple:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
21: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':sqflite:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
22: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':tencent_cloud_chat_sdk:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
23: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':tencent_cloud_uikit_core:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
24: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':url_launcher_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
25: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':video_player_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
26: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':wakelock_plus:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
==============================================================================
27: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':webview_flutter_android:javaDocReleaseGeneration'.
> A failure occurred while executing com.android.build.gradle.tasks.JavaDocGenerationTask$DokkaWorkAction
> Metaspace
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
$ flutter doctor -v
[!] Flutter (Channel [user-branch], 3.27.1, on macOS 14.6 23G80 darwin-arm64,
locale zh-Hans-CN)
! Flutter version 3.27.1 on channel [user-branch] at
/Users/chagee/FlutterSDK/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an
official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions
at https://flutter.dev/setup.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss
this error.
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
• If those were intentional, you can disregard the above warnings; however
it is recommended to use "git" directly to perform update checks and
upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/chagee/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/chagee/Library/Android/sdk
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Community Edition (version 2024.1.4)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.96.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• Bask’s iPhone (mobile) • 00008120-001945A81147401E • ios
• iOS 18.2 22C152
• macOS (desktop) • macos • darwin-arm64
• macOS 14.6 23G80 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin
• macOS 14.6 23G80 darwin-arm64
• Chrome (web) • chrome •
web-javascript • Google Chrome 131.0.6778.205
! Error: Browsing on the local area network for MonkeySun. Ensure the device
is unlocked and attached with a cable or associated with the same local
area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
! Error: Browsing on the local area network for 小殷的iPhone . Ensure the
device is unlocked and attached with a cable or associated with the same
local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
! Error: Browsing on the local area network for YGCHXBM. Ensure the device
is unlocked and attached with a cable or associated with the same local
area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| waiting for customer response,in triage | low | Critical |
2,767,824,271 | rust | Upstream LLVM libunwind patch for SGX | Our LLVM fork currently carries a single patch, which adds support for SGX to libunwind: https://github.com/rust-lang/llvm-project/commit/29e82b2592450c43f7d0db2a18a7793c55c4957e
This patch should be upstreamed to LLVM (in some form that is acceptable for upstream).
cc @jethrogb @AdrianCX | C-cleanup,A-LLVM,O-SGX | low | Minor |
2,767,849,379 | tauri | [bug] WebviewWindow is not created at position on macOS | ### Describe the bug
Creating a new `WebviewWindow` at specific x-y coordinates on a second display does not place the window at that position.
Instead, it is created on the primary display (at coordinates x = 0, y = 0).
E.g., assuming the primary display is 3840 x 2160 (4K), we would expect the `WebviewWindow` to be drawn fullscreen on the second display. This works correctly on Windows but not on macOS.
```js
const window = new WebviewWindow('second-window', {
title: 'Second Window',
url: '/second-window',
x: 3840,
y: 0,
fullscreen: true,
});
```
### Reproduction
See above snippet.
### Expected behavior
The `WebviewWindow` should be drawn fullscreen on the second display.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.12.0
- pnpm: 9.15.2
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1 (outdated, latest: 2.2.0)
- @tauri-apps/cli : 2.1.0 (outdated, latest: 2.2.1)
[-] Plugins
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : 2.2.0
- tauri-plugin-http 🦀: 2.2.0
- @tauri-apps/plugin-http : 2.2.0
- tauri-plugin-upload 🦀: 2.2.1
- @tauri-apps/plugin-upload : 2.2.1
- tauri-plugin-os 🦀: 2.2.0
- @tauri-apps/plugin-os : 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-log 🦀: 2.2.0
- @tauri-apps/plugin-log : 2.0.1 (outdated, latest: 2.2.0)
- tauri-plugin-sql 🦀: 2.2.0
- @tauri-apps/plugin-sql : 2.2.0
[-] App
- build-type: bundle
- CSP: default-src 'self' ipc: http://ipc.localhost; img-src 'self' asset: http://asset.localhost
- frontendDist: ../dist
- devUrl: http://localhost:5173/
- framework: Vue.js
- bundler: Rollup
```
### Stack trace
_No response_
### Additional context
#10420
#11170 | type: bug,platform: macOS,status: needs triage | low | Critical |
2,767,856,271 | go | x/net/websocket: unclear whether Conn.Read documentation refers to frame or message | ### Go version
any
### Output of `go env` in your module/workspace:
```shell
N/A
```
### What did you do?
The behavior of `websocket.Conn` is not clear.
### What did you see happen?
In https://pkg.go.dev/golang.org/x/net/websocket#Conn.Read, we have:
> Read implements the io.Reader interface: it reads data of a frame from the WebSocket connection. if msg is not large enough for the frame data, it fills the msg and next Read will read the rest of the frame data. it reads Text frame or Binary frame.
Does `Read` read a frame or a message? The text keeps saying "frame", but the behavior it describes sounds like reading a message; otherwise we would have no way to get messages. Meanwhile, the parameter is named `msg`.
There's more confusion in the documentation. In one of the examples we have:
```go
// receive text frame
var message string
websocket.Message.Receive(ws, &message)
```
Is it a frame or a message?
### What did you expect to see?
In Websockets frames and messages are two different things with [well defined meaning in the RFC](https://datatracker.ietf.org/doc/html/rfc6455#section-5.4). This needs to be clarified.
| Documentation,NeedsInvestigation | low | Minor |
2,767,883,269 | godot | Exported arrays in extended scripts of inherited scenes all have the type of the first array declared (yes, again) | ### Tested versions
- Reproducible in 4.3-stable
### System information
MacOS Sonoma 14.4.1 (MacBook Pro M1)
### Issue description
The issue in #81526 hasn't been resolved, even though the supposed fix is part of 4.3. In summary, if an inherited scene with an extended script declares more than one typed array as an exported variable, the resulting `.tscn` file will incorrectly type all of the arrays to match the first one that was declared.
### Steps to reproduce
The MRP consists of two minimal scenes (one Node2D and nothing else): `foo.tscn` and `bar.tscn`. `bar.tscn` inherits from `foo.tscn` and has the following script:
```gdscript
extends "res://foo.gd"
@export var bool_array: Array[bool] = []
@export var str_array: Array[String] = []
```
This causes `bar.tscn` to incorrectly save `str_array` as an array of `bool`, as can be seen here:
<img width="600" alt="Image" src="https://github.com/user-attachments/assets/d7d10113-407b-4bc5-b051-1d0cd62441f9" />
If you change the order of the declarations and save the scene again, then both will be typed as `Array[String]` in `bar.tscn`.
I think it's also relevant to mention that non-inherited scenes don't even save empty arrays to the `.tscn` file.
**UPDATE Nº 1**: It's not even necessary for `bar.gd` to extend `foo.gd` for the bug to surface; scene inheritance is enough.
**UPDATE Nº 2**: If you remove the script attached to `foo.tscn`, the issue goes away: the empty arrays are not even saved in `bar.tscn`. The issue only seems to affect scenes that inherit from another scene and override or extend an existing script.
### Minimal reproduction project (MRP)
[array-repro.zip](https://github.com/user-attachments/files/18301829/array-repro.zip) | bug,topic:gdscript,needs testing | low | Critical |
2,767,915,936 | flutter | [Flutter GPU] add compute support | ### Use case
This is needed for AI, post-processing and WebGPU.
### Proposal
This is the tracking issue for adding compute support to the new Flutter GPU API, which currently looks limited without it.
2,767,918,117 | vscode | No hint shown when hovering over type in debugger variables view |
Type: <b>Bug</b>
In #214315, the debugger variables view was changed to show the types of variables. However, if the variable name plus the type name becomes too long for the variables window (this happens easily with scoped types that include multiple modules, and also with parametrized types), the value of the variable can of course no longer be displayed, and the type itself might also not be displayed fully.
I would now expect that hovering over the visible part of the type shows a hint with both the full type and the name of the variable. But instead, no hint appears at all, not even for the type itself. I have to resize the variables pane until it is wide enough to show at least the first character of the value, which is both annoying and can sometimes take half of the screen or more.
In contrast to this, hovering over the variable name shows the variable name; and hovering over the value (if it is visible) shows the value. I would rather favor an implementation in which the hint shows the value of the full row instead of just the column the mouse is over - who knows, maybe the variable name will also be too long, so that even the type is not shown?
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.22621
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 x 2304)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.75GB (32.00GB free)|
|Process Argv|--crash-reporter-id a63676af-726b-415c-a109-15abbaf1887c|
|Screen Reader|no|
|VM|67%|
Connection to 'SSH: pi' could not be established Connecting with SSH timed out
</details><details><summary>Extensions (50)</summary>
Extension|Author (truncated)|Version
---|---|---
pythoncpp-debug|ben|0.3.0
composer-php-vscode|DEV|1.54.16574
intelli-php-vscode|DEV|0.12.15062
phptools-vscode|DEV|1.54.16574
profiler-php-vscode|DEV|1.54.16574
sync-scroll|dqi|1.3.1
gitlens|eam|16.1.1
texlab|efo|5.21.0
prettier-vscode|esb|11.0.0
linter-gfortran|for|3.2.0
msys2|fou|0.10.0
vscode-cpython-extension-pack|fra|0.1.0
codespaces|Git|1.17.3
remotehub|Git|0.64.0
vscode-github-actions|git|0.27.0
jbockle-format-files|jbo|3.4.0
language-julia|jul|1.83.2
regionfolder|map|1.0.22
language-matlab|Mat|1.3.0
git-graph|mhu|1.30.0
debugpy|ms-|2024.15.2024121701
isort|ms-|2023.10.1
python|ms-|2024.22.1
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-powertoys|ms-|0.1.1
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.394.0
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
azure-repos|ms-|0.40.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
hexeditor|ms-|1.11.1
makefile-tools|ms-|0.11.13
remote-explorer|ms-|0.4.3
remote-repositories|ms-|0.42.0
unicode-latex|oij|1.2.0
ninja|sur|0.0.1
cython|tcw|0.1.0
pdf|tom|1.2.2
cmake|twx|0.0.17
php-debug|xde|1.35.0
explicit-folding|zok|0.24.1
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
vscrpc:30673769
a9j8j154:30646983
962ge761:30959799
9b8hh234:30694863
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,debug | low | Critical |
2,767,919,243 | go | x/tools/gopls: include implementations in definition results for interfaces and interface methods | **Is your feature request related to a problem? Please describe.**
Using the cmd+click/option+click flow, I always go to the interface instead of going to the source for the implementation.
**Describe the solution you'd like**
I would like a setting that opens the "Go to Implementations" dialog instead or might even go to the first implementation automatically.
**Describe alternatives you've considered**
https://github.com/microsoft/vscode/blob/e1de2a458dfb770545489daf499131fd328924e7/extensions/typescript-language-features/package.json#L1485 as a reference for how the TypeScript extension deals with this. I've tried to see if I could add this behavior myself, but for now I just rebound the cmd+f12 keybind instead.
**Additional context**
N/A | FeatureRequest,gopls,Tools,gopls/survey-candidate | low | Major |
2,767,929,785 | godot | [2D] `RemoteTransform2D` does not account for Camera offset when Target is Under a Separate `CanvasLayer` | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.19044 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6603) - AMD Ryzen 9 5900X 12-Core Processor (24 Threads)
### Issue description
When the target for a `RemoteTransform2D` is under a separate CanvasLayer with `follow_viewport_enabled = false`, the transformation for the target is incorrect if the camera is at a position different from the origin.
https://github.com/user-attachments/assets/2a92b072-affa-47e5-a4b5-dac5266daa14
### Steps to reproduce
1. Download and open the MRP
2. Run the main scene.
3. Scale the window or move the camera.
4. Observe the malfunction.
### Minimal reproduction project (MRP)
[MrpRemoteTransform2dError.zip](https://github.com/user-attachments/files/18302112/MrpRemoteTransform2dError.zip)
| bug,topic:2d | low | Critical |
2,767,938,447 | ollama | Some Models seem to be crashing while using with JSON Schema mode | ### What is the issue?
OS: MacOS Sequoia | Linux
Processor | GPU: M3, M4 Pro, i7 with RTX 4070 [Same issue across various devices]
ollama version is 0.5.4
When I run a batch with a specified format (JSON Schema), processing randomly stops partway through the batch. For example, with 100 images it might stop at item 15 or 16, but when I restart the same batch it does not pause/stop at the same point as before. With `ollama serve` running, I am not able to see any errors. Without the JSON Schema it does not fail, which suggests the issue lies with certain models and how they handle the JSON Schema.
I used the following models, with vision:
Llama 3.2-vision (never crashed or paused)
Llava-llama3 (crashes/stops occasionally with the JSON Schema)
Minicpm-v (crashes/stops occasionally with the JSON Schema)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.4 | bug | low | Critical |
2,767,954,911 | kubernetes | projected serviceAccountToken do not honour defaultMode or readOnly: true (tested in 1.30) | ### What happened?
We have a container which needs to start as root today (because we install packages, mount a docker socket, and the like), but then it changes uid to a lower-privilege user for the remainder of its lifetime.
That user needs access to a projected serviceAccountToken to access another service. The user cannot read the file because its mode is rw-------.
So, we set defaultMode on the projected volume definition, but this has no effect.
We also had readOnly: true in the mount definition, and that seems to not have effect either, as the mode was still rw------- not r--------.
<details>
<summary>Sample Pod definitions</summary>
```
apiVersion: v1
kind: Pod
metadata:
name: with-mode
namespace: robertc-scratch
spec:
containers:
- command:
- sleep
- "604800"
image: ubuntu
imagePullPolicy: IfNotPresent
name: test
securityContext:
allowPrivilegeEscalation: false
runAsUser: 0
volumeMounts:
- mountPath: /secrets/token
name: token
volumes:
- name: token
projected:
defaultMode: 292
sources:
- serviceAccountToken:
path: token
---
apiVersion: v1
kind: Pod
metadata:
name: with-readonly
namespace: robertc-scratch
spec:
containers:
- command:
- sleep
- "604800"
image: ubuntu
imagePullPolicy: IfNotPresent
name: test
securityContext:
allowPrivilegeEscalation: false
runAsUser: 0
volumeMounts:
- mountPath: /secrets/token
name: token
readOnly: true
volumes:
- name: token
projected:
sources:
- serviceAccountToken:
path: token
```
</details>
The test I did while making an SSCCE was to run this:
```
kubectl --namespace robertc-scratch exec -ti with-mode -- ls -l /secrets/token/..data/
total 4
-rw------- 1 root root 1414 Jan 3 17:45 token
```
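One decimal/octal pitfall worth ruling out (a quick sanity check, not a fix): `defaultMode` is expressed as a decimal integer in YAML manifests, and 292 decimal is 0444 octal, so the manifest above really does request a world-readable token:

```python
# defaultMode values in manifests are decimal integers; 292 decimal is 0444
# octal (r--r--r--), while the observed rw------- mode is 0600, i.e. 384.
print(oct(292))       # 0o444
print(int("600", 8))  # 384
```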
We checked the [documentation ](https://kubernetes.io/docs/concepts/storage/projected-volumes/#introduction) and it describes defaultMode with no restrictions on its relevance to difference sources. The API reference is no more useful.
### What did you expect to happen?
I expected the defaultMode setting to apply to the serviceAccountToken. All the same reasons one might want a different mode for a projected secret or config map also apply to a projected serviceAccountToken.
### How can we reproduce it (as minimally and precisely as possible)?
See above.
### Anything else we need to know?
Probably not ;) .
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.27.16
Kustomize Version: v5.0.1
Server Version: v1.30.5-gke.1699000
```
</details>
### Cloud provider
<details>
Google GKE
</details>
### OS version
<details>
I don't have access to the node OS itself, sorry.
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Major |
2,767,974,421 | flutter | [flutter_tools] Evaluate CMake BINARY_NAME instead of parsing the literal to allow conditional/dynamic value | Currently the following code is used to parse the BINARY_NAME variable's value and use it to run the built executable.
```dart
/// Extracts the `BINARY_NAME` from a project's CMake file.
///
/// Returns `null` if it cannot be found.
String? getCmakeExecutableName(CmakeBasedProject project) {
if (!project.cmakeFile.existsSync()) {
return null;
}
final RegExp nameSetPattern = RegExp(r'^\s*set\(BINARY_NAME\s*"(.*)"\s*\)\s*$');
for (final String line in project.cmakeFile.readAsLinesSync()) {
final RegExpMatch? match = nameSetPattern.firstMatch(line);
if (match != null) {
return match.group(1);
}
}
return null;
}
```
https://github.com/flutter/flutter/blob/dbf9e32879140d484c9c184e580883ffab668410/packages/flutter_tools/lib/src/cmake.dart#L11C1-L26C2
I believe this behavior is undesirable because someone (e.g., me) might want to set `BINARY_NAME` dynamically based on environment or other CMake variables, like this:
```cmake
if (DEFINED ENV{MY_APP_IS_PRO_VERSION} AND $ENV{MY_APP_IS_PRO_VERSION} STREQUAL true)
set(BINARY_NAME "my_app_pro")
else()
set(BINARY_NAME "my_app_free")
endif()
```
This works as expected for the whole build process but not for running the executable at the end of `flutter run` since `getCmakeExecutableName` only parses the first occurance of `set(BINARY_NAME "xxxxxx")`.
Also setting BINARY_NAME from another variable will not work because it just parses the value as is without evaluating any variables.
I suggest either evaluating environment variables if `BINARY_NAME` is set to something like `"ENV{MY_ENV_BIN_NAME}"` or use a variation of `cmake -L` to get cached values of variables after the build to get the exact value of `BINARY_NAME` | tool,c: proposal,P2,team-tool,triaged-tool | low | Major |
2,767,975,167 | ollama | ollama should use `/usr/local` to store models in Linux | ### What is the issue?
Hello,
right now, ollama uses `/usr/share/ollama` as its path to store models and other things when running at the system level (with systemd).
If you care about the FHS (https://refspecs.linuxfoundation.org/FHS_3.0/index.html), you should put it in `/usr/local/ollama` by default. Why? Because /usr is for the system to manage: stuff that comes from the system's repos.
In this case, ollama is external.
Alternatively (not my taste), you could use `/opt/ollama` to store everything ollama related. I'd much rather you adhere to the FHS.
And, that said, I started a conversation in the Fedora forums about where should models live in a system: https://discussion.fedoraproject.org/t/where-do-we-put-models-if-we-take-into-account-the-fhs/141302
Definitely not `/var` if you ask me since they're not variable.
In any case, feel free to join the conversation.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4 | bug | low | Major |
2,767,995,089 | godot | CapsuleShape2D does not allow to set programatically Radius and Height | ### Tested versions
-Reproducible in version 4.3
### System information
Windows 11 - Godot Engine v4.3.stable.official.77dcf97d8 - OpenGL API 3.3.0 NVIDIA 546.18 - NVIDIA GeForce RTX 4060 Laptop GPU
### Issue description
When you set up a CapsuleShape2D programmatically, it does not let you set Radius and Height independently: if you set the Radius first, it is overridden when you set the Height later, and vice versa if you set the Height first, in such a way that Height = 2*Radius, leaving a circular collision shape.
### Steps to reproduce
-Set a basic 2D Scene containing an StaticBody2D as parent, and Sprite2D and CollisionShape2D as children.
-For this issue, Sprite can be whatever (or even unnecesary, I am just replicating the configuration of the scene I created).
-Set the CollisionShape2D to be a CapsuleShape in the editor.
-Create the following script for the StaticBody2D:
```gdscript
extends StaticBody2D
# Instantiating the Sprite2D and the CollisionShape2D
@onready var sprite: Sprite2D = $Sprite2D
@onready var collision_shape: CollisionShape2D = $CollisionShape2D
# Setting the initialization of the Scene
func _ready() -> void:
configure_col_shape()
# Setting the texture and the CollisionShape from the Resource
func configure_col_shape():
	# Configure the CollisionShape2D
if collision_shape and collision_shape.shape is CapsuleShape2D:
collision_shape.shape.radius = 100
collision_shape.shape.height = 20
print("Radio: ", collision_shape.shape.radius, " Altura: ", collision_shape.shape.height)
```
-According to what happened to me, `collision_shape.shape.radius` ends up equal to 10 and `collision_shape.shape.height` equal to 20, even though the radius was assigned 100 (presumably the later height assignment clamps the radius to height/2).
### Minimal reproduction project (MRP)
I do not have right now an MRP, but if instructions above are followed, the bug is easily replicable. | documentation,topic:physics | low | Critical |
2,768,031,584 | pytorch | Allow generic python data structure input for torch.autograd.Function | ### 🚀 The feature, motivation and pitch
I have a custom C++ function that takes in dicts of inputs (which can contain different tensors depending on the mode) and returns the gradients in the backward pass. Currently, torch.autograd.Function does not support dict input. It would be nice if it could accept dict/list/tuple inputs, traverse them internally, and allow the output gradients to have the same structure as the input. There are workarounds for list and tuple, such as exploiting the `*` operator, but not for dict. Here is an example of the desired usage:
```
import torch
from custom_cpp_impl import CustomClass
cls = CustomClass()
class MyCustomFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, input_dict):
ctx.save_for_backward(input_dict)
output = cls.forward(input_dict)
return output
@staticmethod
def backward(ctx, grad_output):
input_dict = ctx.saved_tensors
grad_dict = cls.backward(input_dict, grad_output) # Keys would be same as input_dict
return grad_dict
# Run in "xy" mode
input_dict0 = {'x': torch.tensor(2.0, requires_grad=True),
'y': torch.tensor(3.0, requires_grad=True)}
output0 = MyCustomFunction.apply(input_dict0)
output0.backward()
print(input_dict0['x'].grad)
print(input_dict0['y'].grad)
# Run in "yz" mode
input_dict1 = {'y': torch.tensor(3.0, requires_grad=True),
'z': torch.tensor(4.0, requires_grad=True)}
output1 = MyCustomFunction.apply(input_dict1)
output1.backward()
print(input_dict1['y'].grad)
print(input_dict1['z'].grad)
```
The returned gradients dict from `backward()` could have the same keys as the input dict, or be prefixed with `'grad_'` for example.
Currently when you try something like this you get `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`. I believe there are many users (such as the user in [this](https://discuss.pytorch.org/t/custom-autograd-function-list-dict-input/21862) discussion) who could benefit from this to write more concise custom autograd functions rather than making a different function for each case of the dict input which can be very lengthy and redundant.
### Alternatives
The only alternative I can think of right now is to make a different autograd function for each case of the dict input. For example, if the dict can contain 'x' and 'y' or 'y' and 'z' that get processed in the C++ code differently you would have to do this:
```
import torch
from custom_cpp_impl import CustomClass
cls = CustomClass()
class MyCustomFunctionXY(torch.autograd.Function):
@staticmethod
def forward(ctx, x, y):
output = cls.forward({"x": x, "y": y})
ctx.save_for_backward(x, y)
return output
@staticmethod
def backward(ctx, grad_output):
x, y = ctx.saved_tensors
grads = cls.backward({"x": x, "y": y}, grad_output)
return grads["grad_x"], grads["grad_y"]
class MyCustomFunctionYZ(torch.autograd.Function):
@staticmethod
def forward(ctx, y, z):
output = cls.forward({"y": y, "z": z})
ctx.save_for_backward(y, z)
return output
@staticmethod
def backward(ctx, grad_output):
y, z = ctx.saved_tensors
grads = cls.backward({"y": y, "z": z}, grad_output)
return grads["grad_y"], grads["grad_z"]
input_dict0 = {'x': torch.tensor(2.0, requires_grad=True),
'y': torch.tensor(3.0, requires_grad=True)}
input_dict1 = {'y': torch.tensor(3.0, requires_grad=True),
'z': torch.tensor(4.0, requires_grad=True)}
output0 = MyCustomFunctionXY.apply(input_dict0['x'], input_dict0['y'])
output0.backward()
output1 = MyCustomFunctionYZ.apply(input_dict1['y'], input_dict1['z'])
output1.backward()
```
which is not very concise considering the C++ code is written to take in different combinations of dict input.
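Until such support exists, one workaround that avoids writing one Function per key combination is to flatten the dict at the `apply` boundary into a key tuple plus positional tensors. This is a hedged sketch under stated assumptions: the in-place `sum` in `forward` stands in for the real C++ call, and `apply_dict` is a hypothetical helper, not PyTorch API:

```python
import torch

class DictFunction(torch.autograd.Function):
    # Hedged workaround sketch (not PyTorch API): flatten the dict into a
    # fixed key order plus positional tensors, so one Function covers every
    # key combination. A plain sum stands in for the real C++ forward.
    @staticmethod
    def forward(ctx, keys, *tensors):
        ctx.keys = keys  # a real backward could use this to rebuild a dict
        ctx.save_for_backward(*tensors)
        out = tensors[0]
        for t in tensors[1:]:
            out = out + t
        return out

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient slot per forward argument: None for the non-tensor
        # `keys`, then d(sum)/d(t) == grad_output for each input tensor.
        return (None,) + tuple(grad_output.clone() for _ in ctx.saved_tensors)

def apply_dict(d):
    # Hypothetical helper: fix a key order and pass the tensors positionally.
    keys = tuple(d)
    return DictFunction.apply(keys, *(d[k] for k in keys))

x = torch.tensor(2.0, requires_grad=True)
y = torch.tensor(3.0, requires_grad=True)
apply_dict({"x": x, "y": y}).backward()
print(x.grad, y.grad)  # tensor(1.) tensor(1.)
```

The same `DictFunction` then also handles the `{"y": ..., "z": ...}` case without a second class.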
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan | module: autograd,triaged | low | Critical |
2,768,036,195 | go | internal/trace: failure to parse due to inconsistent status for proc | ```
#!watchflakes
default <- pkg == "internal/trace" && log ~ "inconsistent status for proc"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726805723477811937)):
```
=== RUN TestTraceCPUProfile/Default
reader_test.go:112: unexpected error while reading the trace: inconsistent status for proc 1: old Syscall vs. new Running
trace_test.go:627: found bad trace; dumping to test log...
trace_test.go:638: Trace Go1.23
EventBatch gen=1 m=28747 time=2687627497990 size=65459
ProcStart dt=365 p=2 p_seq=1
GoStart dt=223 g=1 g_seq=1
HeapAlloc dt=839 heapalloc_value=4194304
GoStop dt=301 reason_string=16 stack=8
ProcStop dt=54
...
String id=138
data="runtime.traceStartReadCPU.func1"
String id=139
data="/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/tracecpu.go"
String id=140
data="runtime.traceLocker.ProcStart"
String id=141
data="runtime.acquirep"
--- FAIL: TestTraceCPUProfile/Default (18.13s)
```
— [watchflakes](https://go.dev/wiki/Watchflakes)
| WaitingForInfo,NeedsInvestigation,compiler/runtime | low | Critical |
2,768,055,276 | deno | Creates duplicates in lockfiles | Version: Deno 2.x.x (all)
Something's up with peer deps. My lockfile has 4 versions of `@vitest/ui`, and 3 versions of `vitest`.
https://github.com/lishaduck/effect-utils/blob/44843de95a2f80cb6277f9760869c05d1ca97fa0/deno.lock
This happens whenever Deno edits the lockfile (upgrade, outdated install), but it works when I manually edit it away. Sometimes, these duplicates cause type errors. | bug,node compat | low | Critical |
2,768,062,253 | godot | RichTextLabel and Label don't display changed text | ### Tested versions
- Reproducible in: v4.3.stable.mono.official [77dcf97d8]
- v4.4.dev7
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Mobile) - integrated Intel(R) Iris(R) Xe Graphics (Intel Corporation; 31.0.101.5186) - 13th Gen Intel(R) Core(TM) i5-1335U (12 Threads)
### Issue description
When changing the text of a Label or RichTextLabel at runtime, the displayed text only updates when it is assigned in `_ready()` or changed manually. Even though the displayed text does not change, calling `get_text()` and printing the result returns the newly assigned value.
### Steps to reproduce
1. Open the MRP
2. Run the project and press the button. It will print get_text() from the richtextlabel and visually show that text does not change.
### Minimal reproduction project (MRP)
[project.zip](https://github.com/user-attachments/files/18303019/project.zip)
| needs testing,topic:gui | low | Minor |
2,768,068,711 | godot | Ctrl-Click on enum value/member does nothing | ### Tested versions
- reproducible in: 4.4.dev7
- reproducible on master (tested with a 03-Jan-2025 build - `v4.4.dev.gh-101012 [c37ada786]`)
- works in: 4.4.dev6
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6636) - 13th Gen Intel(R) Core(TM) i7-13700KF (24 threads)
### Issue description
I mentioned this here: https://github.com/godotengine/godot/issues/100680#issuecomment-2557966392
While general control-click was restored with https://github.com/godotengine/godot/pull/100707 the enum issue remains.
CTRL-clicking on an enum value/member does nothing (well, it actually navigates to the beginning of the line).
E.g. enum like this:
```
class_name E extends Object
enum Team {
None,
One,
}
func test():
var t = E.Team.None
```
and CTRL-clicking on the `None` of `E.Team.None` does not take you to the `None` under `enum Team`.
### Steps to reproduce
Open enum.gd in MRP - same code as in description.
### Minimal reproduction project (MRP)
[44dev7enum.zip](https://github.com/user-attachments/files/18303044/44dev7enum.zip)
| bug,topic:gdscript,topic:editor,regression | low | Minor |
2,768,115,973 | terminal | Launch size, Launch position should each have a "as current window" button | ### Description of the new feature
There are two settings that govern the size and position of the Terminal window. Both of these allow manual entry of values, so thank you for that.
Settings > Startup > Launch size
Settings > Startup > Launch position
However, using the *current size and position* of the *current terminal window* would be a super-convenient way to set either or both of these settings.
### Proposed technical implementation details
I suggest adding some affordance to the UI in both of those sections, such as a button, to "set from current window." That's the whole feature ask. | Issue-Feature,Needs-Triage,Needs-Tag-Fix | low | Minor |
2,768,127,218 | go | cmd/go: "cannot access the file because it is being used by another process" flakes on Windows | ```
#!watchflakes
default <- pkg == "cmd/go" && test == "TestScript" && `The process cannot access the file because it is being used by another process.`
```
I am getting a lot of these flakes on the Windows TryBots, three just in the last day, and they are getting kinda tedious.
```
=== RUN TestScript
vcs-test.golang.org rerouted to http://127.0.0.1:49858
https://vcs-test.golang.org rerouted to https://127.0.0.1:49859
go test proxy running at GOPROXY=http://127.0.0.1:49860/mod
script_test.go:54: remove C:\b\s\w\ir\x\t\cmd-go-test-3977021774\vcstest3518644654\auth\or401\.access: The process cannot access the file because it is being used by another process.
```
https://ci.chromium.org/ui/p/golang/builders/try/gotip-windows-amd64/b8726767142354977169/overview
https://ci.chromium.org/ui/p/golang/builders/try/gotip-windows-386/b8726850438465216129/overview
https://ci.chromium.org/ui/p/golang/builders/try/gotip-windows-amd64/b8726734771386763217/overview
AFAICT, there is no tracking issue, just a sea of TestScript testflakes issues, with this specific message showing up about 3 weeks ago in golang/go#66337, which has a catch-all `pkg == "cmd/go" && test == "TestScript"` testflakes definition.
https://github.com/golang/go/issues/66337#issuecomment-2540131223
https://github.com/golang/go/issues/66337#issuecomment-2541947109
https://github.com/golang/go/issues/66337#issuecomment-2563772528
https://github.com/golang/go/issues/66337#issuecomment-2565929105 | OS-Windows,NeedsInvestigation,GoCommand,BugReport | low | Major |
2,768,129,288 | go | x/build: update gopls version requirements as part of the Go toolchain release workflow | In https://go.dev/cl/640035, we apparently added an iterator use in gopls that made it susceptible to #70035. Thankfully, this was caught due to our legacy Go version builders. (Aside: this is a use-case for legacy builders that I'd not considered; apart from verifying integration with older Go commands, they also confirm that our minimum toolchain version is adequate).
We'll fix this by updating our minimum toolchain version, but that begs the question: why don't we *always* update our toolchain requirement to point to the latest patch release? Could we update the Go release workflow to send a CL against gopls? By analogy, we update VS Code following a gopls release.
The only reason that I can think of not to do this is that it may cause users to download more toolchain versions, or cause more friction for users that don't have GOTOOLCHAIN=auto. But given the frequency that we release new gopls versions, this seems acceptable. Am I missing something?
CC @adonovan @h9jiang @dmitshur | Builders,NeedsDecision | low | Minor |
2,768,160,480 | go | x/tools/gopls: sigpanic in persistent.(*mapNode).forEach | ```
#!stacks
"runtime.sigpanic" && "persistent.(*mapNode).forEach:+1"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
This stack looks wrong, perhaps another example of the off-by-one problems in the traceback?
(Is it possible that somehow the tree has become a cyclic graph and what we are seeing is a stack overflow? The line number supports that, but I would expect the failure in that case to occur in `runtime.morestack_noctxt`.)
```go
func (node *mapNode) forEach(f func(key, value any)) {
if node == nil { // <--- sigpanic
return
}
node.left.forEach(f)
f(node.key, node.value.value)
node.right.forEach(f)
}
```
This stack `dVh6PA` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-12-31.json):
- `crash/crash`
- [`runtime.throw:+9`](https://cs.opensource.google/go/go/+/go1.23.2:src/runtime/panic.go;l=1067)
- [`runtime.sigpanic:+33`](https://cs.opensource.google/go/go/+/go1.23.2:src/runtime/signal_unix.go;l=914)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=169)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+6`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=174)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+6`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=174)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+6`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=174)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+6`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=174)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+6`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=174)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
- [`golang.org/x/tools/gopls/internal/util/persistent.(*mapNode).forEach:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0:gopls/internal/util/persistent/map.go;l=172)
```
golang.org/x/tools/[email protected] go1.23.2 linux/amd64 vscode (1)
```
| NeedsInvestigation,gopls,Tools,gopls/telemetry-wins,BugReport | low | Critical |
2,768,201,277 | rust | `core::iter::from_fn` and `core::iter::successors` documentation does not explain callback function signature | ### Location
[`core::iter::from_fn`](https://doc.rust-lang.org/core/iter/fn.from_fn.html)
[`core::iter::successors`](https://doc.rust-lang.org/core/iter/fn.successors.html)
### Summary
Both of these functions take callback functions that return an `Option<T>`. However, it is nowhere actually documented that this `Option<T>` is used to detect the end of the iterator (i.e. return `Some` while there are items, return `None` when at the end). I was eventually able to deduce how things worked by looking at the examples, but I was definitely a bit confused at first.
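For what it's worth, the Some/None protocol these callbacks follow can be illustrated with a small Python analogue (a sketch of the semantics only, not the Rust API): the callback returns a value while items remain, and `None` to end the iterator.

```python
def from_fn(f):
    """Analogue of core::iter::from_fn: call f() repeatedly,
    yielding each result until f() returns None (the end marker)."""
    while (item := f()) is not None:
        yield item

def successors(first, f):
    """Analogue of core::iter::successors: yield `first`, then keep
    applying f to the previous item until f returns None."""
    item = first
    while item is not None:
        yield item
        item = f(item)

count = 0
def next_count():
    global count
    count += 1
    return count if count <= 3 else None  # None signals the end

print(list(from_fn(next_count)))                                 # [1, 2, 3]
print(list(successors(1, lambda x: x * 2 if x <= 8 else None)))  # [1, 2, 4, 8, 16]
```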
There should be a clear description on what the callback is expected to return, not just when the callback is called. | A-docs,T-libs | low | Major |
2,768,205,321 | material-ui | [Autocomplete] Accessibility issues with keyboard navigation Autocomplete | ### Steps to reproduce
A) In Autocomplete, the Clear button is not accessible by keyboard. To reproduce:
1. Visit [Combo box example](https://mui.com/material-ui/react-autocomplete/?srsltid=AfmBOoow6C13-i92jh2hvdlccm30DWW7pWd_E09QJDiPA51stJqhRdHz#combo-box)
2. Select an option, then press tab
Expected: Focus moves to the now-visible Clear button
Current: Focus moves out of the example
My workaround:
```
slotProps={{
clearIndicator: {
tabIndex: 0,
// In order to make the Clear Button focusable, disable the default behavior of hiding it except on hover
sx: {
visibility: 'visible',
},
},
}}
```
However, with this workaround, the clear button can only be activated with Space. Ideally, it would be activated with either Space or Enter.
B) In Autocomplete with Multi, pressing Enter when focused on a selected dropdown element causes the element to get deleted. To reproduce:
1. Visit [Multiple Values example](https://mui.com/material-ui/react-autocomplete/?srsltid=AfmBOooMMc3DR1oOQKTCPYW8_jUur1VWJ0LRJTPEhcvGh2942nSnIMGH#multiple-values), press Reset Focus button
2. Press Tab then down-arrow to open the combobox
3. Press Enter
Current: The selected option is deleted from the selection.
Expected: My expectation is that the dropdown would close, and the selected option would stay selected. (But perhaps this is intended behavior?)
C) In Autocomplete with Multi, when navigating with Voiceover, after deleting an option, focus returns to the document body. To reproduce:
1. Visit [Multiple Values example](https://mui.com/material-ui/react-autocomplete/?srsltid=AfmBOooMMc3DR1oOQKTCPYW8_jUur1VWJ0LRJTPEhcvGh2942nSnIMGH#multiple-values)
2. Enter an option or two.
3. Using VO, ctrl-option(or capslock)-left to select an option and delete.
Current: Focus shifts to the document body
Expected: Focus remains on the autocomplete component. (Perhaps this a bug with VO, not MUI?)
D) In Autocomplete with Multi, users should be able to tab around the selected options. To reproduce:
1. Visit [Multiple Values example](https://mui.com/material-ui/react-autocomplete/?srsltid=AfmBOoow6C13-i92jh2hvdlccm30DWW7pWd_E09QJDiPA51stJqhRdHz#multiple-values)
2. Select multiple options, then press shift-tab.
Expected: Focus moves to the option chips
Current: Focus moves out of the example. Option chips can only be focused using left-right arrows. Relates to other issues with chip accessibility https://github.com/mui/material-ui/issues/20470, but I think this is a separate problem specific to chips within Autocomplete.
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
Hi MUI maintainers! While my team was auditing our app's accessibility, we came across a few things that we thought were accessibility problems within the Autocomplete component. Since we couldn't find existing issues for these, we wanted to create an issue to discuss with you. Let us know, which ones of these are desired behavior, which you'd consider bugs, and which may be fixed with the upcoming work on https://github.com/mui/material-ui/issues/25365!
I'll add the caveat that we are not screenreader users, ourselves.
This is my first time creating an issue in the MUI repo. Please let me know if anything else would be helpful. Thanks!
Related issues:
- https://github.com/mui/material-ui/issues/25365
- https://github.com/mui/material-ui/issues/20470
### Your environment
Tested using examples in the current MUI docs (see links above), using VoiceOver on Chrome.
**Search keywords**: accessibility autocomplete | accessibility,package: material-ui,component: autocomplete | low | Critical |
2,768,207,624 | tauri | [bug] App does not initialize when debugging from Xcode | ### Describe the bug
When debugging a Tauri app in an iOS simulator or device from Xcode, the app _deploys_ but does not _initialize_. An empty white webview is show.
### Reproduction
Clone the example app in the `plugins-workspace` [repository](https://github.com/tauri-apps/plugins-workspace).
Confirm the app builds/deploys using the CLI to an iOS simulator.
Run the following command:
`pnpm tauri ios dev --open`

### Expected behavior
App should build, deploy and run on the iOS simulator.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.12.0
- pnpm: 9.15.2
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1 (outdated, latest: 2.2.0)
- @tauri-apps/cli : 2.1.0 (outdated, latest: 2.2.1)
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,768,222,109 | tauri | [feat] Improve IDE toolchain and debugging on iOS | ### Describe the problem
When developing Tauri applications on iOS there is considerable friction with the Xcode toolchain and debugging.
- Xcode does not display all source files, including Rust/Swift code, making it challenging to see the project structure.
- Xcode does not recognize Swift packages that are direct dependencies of the Tauri application.
- It is impossible to inspect or set breakpoints in plugin code, hindering in-depth debugging.
- Log outputs are not easily accessible, which complicates diagnosing runtime issues.
These limitations collectively slow down development and debugging cycles, especially when working with platform-specific features or debugging plugins.
<img width="1245" alt="Screenshot 2025-01-03 at 16 53 45" src="https://github.com/user-attachments/assets/bad9a8a8-449b-4c65-a815-c769d0587b55" />
### Describe the solution you'd like
1. Ensure that Xcode can load and display all relevant source files, including Rust and Swift.
2. Make Swift packages that are direct dependencies of the Tauri app visible and manageable within Xcode.
3. Add the ability to inspect and set breakpoints in Tauri plugin code, whether written in Swift or Rust.
4. Provide a way to stream and display log outputs more effectively, directly integrated with Xcode’s debugging tools.
5. Reduce build/deploy times.
### Alternatives considered
N/A
### Additional context
#10197
#12172
| type: feature request | low | Critical |
2,768,225,976 | tauri | [feat] Improve IDE toolchain and debugging on Android | ### Describe the problem
When developing Tauri applications on Android there is considerable friction with the Android Studio toolchain and debugging.
- Android Studio does not display all source files, including Rust/Kotlin code, making it challenging to see the project structure.
- It is impossible to inspect or set breakpoints in plugin code, hindering in-depth debugging.
- Log outputs are not easily accessible, which complicates diagnosing runtime issues.
These limitations collectively slow down development and debugging cycles, especially when working with platform-specific features or debugging plugins.
### Describe the solution you'd like
1. Ensure that Android Studio can load and display all relevant source files, including Rust and Kotlin.
2. Add the ability to inspect and set breakpoints in Tauri plugin code, whether written in Kotlin or Rust.
3. Provide a way to stream and display log outputs more effectively, directly integrated with Android Studio's debugging tools.
4. Reduce build/deploy times.
### Alternatives considered
N/A
### Additional context
_No response_ | type: feature request | low | Critical |
2,768,229,716 | godot | Auto interface scaling for the editor does not work on a per-monitor basis on Wayland. | ### Tested versions
Tested in 4.3.stable
### System information
Arch Linux with [Hyprland](https://hyprland.org)
### Issue description
I am using a Framework 13 laptop that is set to scale the laptop display to 2x. My monitor is set to scale to 1x. When the Godot Editor is set to "Auto" scale, it only uses 200%, even on the 1x monitor.
### Steps to reproduce
1. Using Wayland, set one monitor to 1x scale, and another to 2x scale.
2. Set Godot to prefer Wayland (run/platforms/linuxbsd/prefer_wayland in editor settings)
3. Set Godot's display scale to Auto (interface/editor/display_scale in editor settings)
4. Restart Godot
5. On both monitors, Godot scales to 200%.
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:gui | low | Major |
2,768,234,400 | ui | [bug]: Sidebar hides in mobile view | ### Describe the bug
I have implemented the sidebar with a collapse button. The collapse button is inside the sidebar, and the sidebar hides entirely in mobile view, so there is no way to access it. I want it to collapse to icons instead of hiding.
This can also be seen on some shadcn blocks.
https://ui.shadcn.com/view/styles/new-york/sidebar-13
sidebar in dialog
https://ui.shadcn.com/view/styles/new-york/sidebar-15
right sidebar in this case

### Affected component/components
Sidebar
### How to reproduce
1. Create a sidebar with trigger inside the sidebar
2. reduce the size of the screen below 768px
### Codesandbox/StackBlitz link
https://ui.shadcn.com/view/styles/new-york/sidebar-13
### Logs
_No response_
### System Info
```bash
Windows 11, NextJS 15, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,768,239,151 | godot | Setter called twice if script is extended in instance | ### Tested versions
4.4 dev7 and earlier
### System information
W10
### Issue description
Given this simple script:
```GDScript
extends Node
@export var skin: String:
set(s):
print(">", s)
skin = s
```
There are cases when the setter will run twice (as evidenced by the double print). This happens when you attach the script to a node and save it as a scene, then instantiate that scene in another scene, set the property to a different value in the Inspector, and extend the script. When you run the scene, the setter will be called first with the default value and then with your exported value.
### Steps to reproduce
Run MRP.
### Minimal reproduction project (MRP)
[SetterBug.zip](https://github.com/user-attachments/files/18304274/SetterBug.zip)
| bug,topic:gdscript | low | Critical |
2,768,257,338 | godot | TextEdit strips \r characters and doesn't remember that they were there, causing data loss | ### Tested versions
4.4.dev7
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - AMD Radeon RX 6800 (Advanced Micro Devices, Inc.; 32.0.12033.1030) - AMD Ryzen 5 7600X 6-Core Processor (12 threads)
### Issue description
When assigning the text of a `TextEdit` node, it completely loses all information about whether the assigned string originally contained any `\r` characters at all. It can't even differentiate a consistently `\r\n` buffer from a `\n` buffer, and the API doesn't provide a way to recover this information. This happens even if you assign individual lines containing `\r` characters with `set_line`.
**This also means that any files edited with Godot's built-in script editor will immediately lose their `\r`s upon being saved, causing data loss.** In some cases, e.g. with text file processing test cases, this is a problem even if you're ideologically opposed to `\r\n` linefeeds.
Some amount of information loss here is understandable, because TextEdit is (apparently) based on an array of single-line strings without line terminators, but it should probably at least attempt to detect LF vs. CRLF and restore that information when generating the output of `get_text()`, at least in simple cases like when `\n` never appears without a preceding `\r`. A way to query and override this detection would also be nice.
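For that simple case, the detection heuristic can be sketched in a few lines (illustrative Python, not Godot API — a buffer counts as CRLF only if every `\n` has a preceding `\r`):

```python
def detect_line_ending(text: str) -> str:
    """Return "\r\n" only when the buffer uses CRLF consistently,
    i.e. every "\n" is preceded by "\r"; otherwise fall back to "\n"."""
    lf = text.count("\n")
    crlf = text.count("\r\n")
    return "\r\n" if lf > 0 and lf == crlf else "\n"

def rejoin_lines(lines, original_text):
    # Rejoin an array of terminator-less lines using the detected ending,
    # so a get_text()-style call could round-trip a consistently-CRLF buffer.
    return detect_line_ending(original_text).join(lines)

assert detect_line_ending("a\r\nb\r\nc") == "\r\n"  # consistently CRLF
assert detect_line_ending("a\nb\r\n") == "\n"       # mixed: fall back to LF
```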
Also, `\r` characters should never be silently stripped from the **middles** of lines. Such characters can be the result of e.g. terminal capture logging, where `\r` literally means "go back to the start of the line and start typing from there again".
### Steps to reproduce
Assign `"\r\n\r\nasdf\rasdf\rasdf\rasdf"` to a TextEdit's `text` field and note that all the `\r` characters completely disappear (e.g. by running `print(editor.text.c_escape())`).
### Minimal reproduction project (MRP)
N/A | discussion,topic:gui | low | Critical |
2,768,297,947 | go | x/website, x/pkgsite, x/build/cmd/relui, vscode-go, x/telemetry: vulnerability GHSA-3xgq-45jj-v275/CVE-2024-21538 in cross-spawn dependency version 7.0.3 | ### Go version
1.23.4
### Output of `go env` in your module/workspace:
```shell
N/A - Container-based (docker image) scan.
```
### What did you do?
Anchore scans run periodically.
### What did you see happen?
Vulnerability scanners (such as Anchore) are detecting GHSA-3xgq-45jj-v275/CVE-2024-21538 in `cross-spawn` 7.0.3. That dependency needs to be upgraded to 7.0.5 or higher. Thank you.
Note: This was reported as Issue 71114, but that was closed with `not planned`. It is not a duplicate according to issue search, so asking for an explanation.
### What did you expect to see?
Clean scan results. | Security,NeedsInvestigation | low | Major |
2,768,299,715 | tauri | [feat] EdgeToEdge on Android / iOS | ### Describe the problem
On Android, there is a feature to allow the app to hide the Notification bar and the Navigation bar, otherwise called Edge to Edge.
[Documented Here](https://developer.android.com/develop/ui/views/layout/edge-to-edge#kotlin)
I know iOS has a similar API, but I am on android myself so this will be targeted mostly towards that.
### Describe the solution you'd like
It would be nice if we can programmatically change this, like the documentation says in the android dev site, or get a `tauri.conf.json` line about this as well.
### Alternatives considered
A plugin could be written, but this seems more "first-party" to me.
### Additional context
_No response_ | type: feature request | low | Minor |
2,768,303,990 | flutter | WASM + video_player + ShellRoute offset bug | ### Steps to reproduce
When building a video player with a GoRouter `ShellRoute` and a left navigation bar, the video player is shifted to the right if the app is built with the `--wasm` flag.
### Expected results
The video player does not shift to the right.
### Actual results
The video player shifts to the right.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'package:video_player/video_player.dart';
void main() {
runApp(const MyApp());
}
// 1. We use MaterialApp.router with a GoRouter
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
debugShowCheckedModeBanner: false,
routerConfig: _router,
);
}
}
// 2. Define our GoRouter with a top-level '/' and a ShellRoute
final _router = GoRouter(
routes: [
// Top-level route (no shell)
GoRoute(
path: '/',
builder: (context, state) => const HomePage(),
),
// ShellRoute that surrounds the child route '/video'
ShellRoute(
builder: (context, state, child) {
return Scaffold(
// A row that has a left nav area and the main content
body: Row(
children: [
// Left nav bar
SizedBox(
width: 160,
// For styling, we give it a background color
child: _MyLeftNavBar(),
),
// The "main content" is whichever child route we navigate to
Expanded(
child: child,
),
],
),
);
},
routes: [
// The child route (the "main content" that goes to the Expanded section)
GoRoute(
path: '/video',
builder: (context, state) => const VideoPage(),
),
],
),
],
);
// 3. A simple “Home” page with a button to show we can navigate to the shell route
class HomePage extends StatelessWidget {
const HomePage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Home (No Shell)')),
body: Center(
child: ElevatedButton(
onPressed: () => context.go('/video'),
child: const Text('Go to Video'),
),
),
);
}
}
// 4. Our left nav bar widget
// This is displayed by the shell, so it’s visible whenever we're on '/video' route.
class _MyLeftNavBar extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Container(
color: Colors.blueGrey.shade100,
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
const Text('Left Nav', style: TextStyle(fontSize: 18)),
const SizedBox(height: 20),
ElevatedButton(
onPressed: () => context.go('/'), // navigate to Home
child: const Text('Go Home'),
),
const SizedBox(height: 20),
ElevatedButton(
onPressed: () => context.go('/video'), // navigate to /video
child: const Text('Go Video'),
),
],
),
);
}
}
// 5. The child route that shows a VideoPlayer
class VideoPage extends StatefulWidget {
const VideoPage({super.key});
@override
State<VideoPage> createState() => _VideoPageState();
}
class _VideoPageState extends State<VideoPage> {
late VideoPlayerController _controller;
late Future<void> _initializeVideoPlayerFuture;
@override
void initState() {
super.initState();
// A sample 5-second MP4 link. Replace with your own if desired
_controller = VideoPlayerController.networkUrl(
Uri.parse('https://samplelib.com/lib/preview/mp4/sample-5s.mp4'),
);
_initializeVideoPlayerFuture = _controller.initialize().then((_) {
_controller.setLooping(true);
_controller.setVolume(0); // Mute to avoid autoplay blocking
_controller.play();
setState(() {});
});
}
@override
Widget build(BuildContext context) {
return Center(
child: FutureBuilder<void>(
future: _initializeVideoPlayerFuture,
builder: (context, snapshot) {
if (snapshot.connectionState == ConnectionState.done) {
final ratio = _controller.value.aspectRatio;
return AspectRatio(
aspectRatio: ratio > 0 ? ratio : 16 / 9,
child: VideoPlayer(_controller),
);
} else {
return const CircularProgressIndicator();
}
},
),
);
}
@override
void dispose() {
_controller.dispose();
super.dispose();
}
}
```
</details>
### Screenshots or Video
<img width="1191" alt="Regular build" src="https://github.com/user-attachments/assets/7586d4d2-fe3f-4a78-ac66-bc50ba4f3563" />
<img width="1191" alt="WASM build" src="https://github.com/user-attachments/assets/6ad5cb07-60ae-432f-b95d-6d33ebf1096f" />
### Logs
<details open><summary>Logs</summary>
```console
flutter run -d chrome
Launching lib/main.dart on Chrome in debug mode...
Waiting for connection from debug service on Chrome... 9.7s
This app is linked to the debug service: ws://127.0.0.1:50602/IhdCtZcSa2M=/ws
Debug service listening on ws://127.0.0.1:50602/IhdCtZcSa2M=/ws
🔥 To hot restart changes while running, press "r" or "R".
For a more detailed help message, press "h". To quit, press "q".
A Dart VM Service on Chrome is available at: http://127.0.0.1:50602/IhdCtZcSa2M=
The Flutter DevTools debugger and profiler on Chrome is available at: http://127.0.0.1:9102?uri=http://127.0.0.1:50602/IhdCtZcSa2M=
Application finished.
flutter run -d chrome --wasm
Launching lib/main.dart on Chrome in debug mode...
Compiling lib/main.dart for the Web... 941ms
✓ Built build/web
🔥 To hot restart changes while running, press "r" or "R".
For a more detailed help message, press "h". To quit, press "q".
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.5 23F79 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /Users/anh/Dev/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0-rc2)
• Android SDK at /Users/anh/Library/Android/sdk
• Platform android-33, build-tools 33.0.0-rc2
• Java binary at: /Applications/Android Studio.app/Contents/jre/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
! iOS 18.2 Simulator not installed; this may be necessary for iOS and macOS development.
To download and install the platform, open Xcode, select Xcode > Settings > Components,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-simulator-runtimes
• CocoaPods version 1.14.3
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2021.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-web,p: video_player,has reproducible steps,P2,p: go_router,e: wasm,e: web_skwasm,team-web,triaged-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,768,306,920 | node | `tlsSocket.getPeerCertificate()` doesn't document format of `valid_from` / `valid_to` | ### Affected URL(s)
https://nodejs.org/api/tls.html#tlssocketgetpeercertificatedetailed
### Description of the problem
The "Certificate object" returned from `getPeerCertificate` is merely documented as,
> `valid_from <string>` The date-time the certificate is valid from.
> `valid_to <string>` The date-time the certificate is valid to.
First, these would be better as `Date`s, but I'm assuming that such a change cannot be made, as it would be a breaking change.
In lieu of that … the documentation should specify what format these strings are in. Unfortunately, it appears that these strings are in the format:
```
Mar 25 18:18:52 2025 GMT
```
which isn't in any standardized format I'm aware of. [It appears to be](https://github.com/nodejs/node/blob/01554f316c8647e1f893338f822b1116a937473d/deps/ncrypto/ncrypto.cc#L871) the output of [`ASN1_TIME_print`](https://docs.openssl.org/3.0/man3/ASN1_TIME_set/) from OpenSSL.
It would be nice to note at least that much; maybe, an example of the output?
---
Since this necessitates anyone wanting to use the data for more than printing to write a parser, I'll also note the following:
OpenSSL's documentation _also_ doesn't really document this format, instead only saying,
> The ASN1_TIME_print(), ASN1_UTCTIME_print() and ASN1_GENERALIZEDTIME_print() functions print the time structure s to BIO b in human readable format. It will be of the format MMM DD HH:MM:SS YYYY [GMT], for example "Feb 3 00:55:52 2015 GMT", which does not include a newline. If the time structure has invalid format it prints out "Bad time value" and returns an error. The output for generalized time may include a fractional part following the second.
Note that the example in the documentation is wrong; AFAICT from experiment, it would actually output `Feb  3 00:55:52 2015 GMT` (two spaces before the space-padded day), not `Feb 3 00:55:52 2015 GMT`.
`ASN1_TIME_print` seems to be equivalent to calling `ASN1_TIME_print_ex` with `flags` set to `ASN1_DTFLGS_RFC822` (but don't let the name fool you, it's not RFC 822 format, which is like `25 Mar 25`, which matches nothing seen thus far…)
So `DD` = "space padded" day number, and the `[GMT]` is concerning, so I dug into OpenSSL's code to see in what circumstances it would be omitted.
The exact code is in `ossl_asn1_time_print_ex` in OpenSSL, but to save anyone the time, one of two format strings get used:
```
GeneralizedTime = "%s %2d %02d:%02d:%02d%.*s %d%s",
UTC time = "%s %2d %02d:%02d:%02d %d%s"
```
With those replacements being 3-char month, day, hour:min:sec, (fractional sec, only for GeneralizedTime), year, and then " GMT" or ""; if the datetime is in GMT, then " GMT", if it's not, then "" — (and the string is thus mangled; AFAICT the underlying data has a UTC offset, it's just not printed!)
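In the meantime, parsing the string is straightforward once you assume GMT. A minimal sketch (in Python purely for illustration; the same strptime-style format applies in any language, and `parse_asn1_time` is just a name I made up):

```python
from datetime import datetime, timezone

def parse_asn1_time(s: str) -> datetime:
    """Parse the output of OpenSSL's ASN1_TIME_print, e.g.
    'Mar 25 18:18:52 2025 GMT' or 'Feb  3 00:55:52 2015 GMT'
    (the day is space-padded).  Only the GMT case is accepted here:
    non-GMT strings drop their offset entirely, so they are ambiguous.
    Fractional seconds (GeneralizedTime) are not handled."""
    dt = datetime.strptime(s, '%b %d %H:%M:%S %Y GMT')
    return dt.replace(tzinfo=timezone.utc)

print(parse_asn1_time('Feb  3 00:55:52 2015 GMT'))  # → 2015-02-03 00:55:52+00:00
```

(Python's `strptime` happens to treat a literal space in the format as "one or more whitespace characters", so the space-padded day parses without special-casing.)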
Cf. https://github.com/openssl/openssl/issues/26313 — I've asked if OpenSSL can't also document this a little more, too. | doc | low | Critical |
2,768,330,994 | godot | AudioStreamPlayer doesn't loop with parameters/looping set to true in web export | ### Tested versions
- Reproducible in 4.3.stable, 4.4.dev7
### System information
Godot v4.4.dev7 - macOS Sequoia (15.1.1) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Apple M1 Pro - Apple M1 Pro (8 threads)
### Issue description
Looping an audio stream in an AudioStreamPlayer only works when the file is imported with the `loop` property set to true in web exports. Using the `parameters/looping` property of the AudioStreamPlayer node does not loop the audio, whereas this works as expected on desktop exports and in the editor. This occurs both with threads enabled and disabled and has been tested in Firefox 133.0.3 and Chrome 131.0.6778.205.
### Steps to reproduce
1. Create new scene
2. Add AudioStreamPlayer node, add an AudioStream (with the `loop` import property set to false), set `autoplay` to true, set `parameters/looping` to true
3. Run project in web browser
4. Observe that the stream does not loop
Alternatively:
1. Open minimal reproduction project
2. Run project in web browser
### Minimal reproduction project (MRP)
[web-audio-loop-bug.zip](https://github.com/user-attachments/files/18304864/web-audio-loop-bug.zip)
| bug,platform:web,topic:audio | low | Critical |
2,768,368,992 | terminal | Enable Clickable Directory Paths in Windows Terminal to Seamlessly Open in File Explorer - Just Like URLs | ### Description of the new feature
Windows Terminal currently displays URLs in a clickable format, allowing users to quickly navigate to web pages directly from the terminal. However, directory paths, such as those displayed in commands like `winget --info` or during output logs, are not clickable. This feature request proposes making directory paths clickable, enabling users to open the corresponding folder in File Explorer with a single click.
# Feature-Use-Benefit Analysis:
## Ease of Access and Workflow Optimization:
Clicking directory paths directly would save users the time and effort of manually copying the path, opening File Explorer, and pasting it in the address bar.
This would be especially beneficial for developers, system administrators, and power users who frequently work with file paths, logs, and configurations, as shown in the screenshot where paths to DiagOutputDir or settings.json are displayed.
## Consistency with Existing Functionality:
Windows Terminal already supports clickable URLs, which enhance usability and workflow efficiency. Extending this functionality to directory paths would create a cohesive and intuitive user experience.
## Improved Error Resolution and Debugging:
For users troubleshooting logs or debugging application issues, being able to instantly open a directory reduces the cognitive load and speeds up the process of locating files or folders referenced in the terminal.
## Broader Use Cases and Adoption:
This feature aligns with the needs of diverse user groups, from beginner users navigating the file system to advanced users leveraging tools like winget, as shown in the screenshot.

### Proposed technical implementation details
# Implementation Timeline
## Phase 1: Path Detection
- Develop and test regex for detecting paths.
- Integrate path detection into the rendering pipeline.
## Phase 2: Clickable Rendering
- Implement tokenization and styling for clickable paths.
- Add configuration options for enabling/disabling the feature.
## Phase 3: Event Handling
- Implement mouse click handling and command execution.
- Add error handling and user feedback mechanisms.
## Phase 4: Optimization and Accessibility
- Optimize performance for large outputs.
- Add keyboard navigation and screen reader support.
## Phase 5: Testing and Release
- Conduct extensive testing.
- Release the feature as part of a Windows Terminal Preview build.
# Proposed Technical Implementation
## 1. Directory Path Detection
### Regex for Path Matching:
Create a robust regular expression to detect valid directory paths within terminal output. The regex must:
- Identify Windows-style paths (`C:\Users\username\Documents`).
- Handle spaces, special characters, and network paths (`\\server\share`).
- Consider relative paths like `./subfolder` and `../parentfolder`.
#### Example Regex:
```
(?:(?:[a-zA-Z]:\\)|(?:\\\\[^\s\\]+\\[^\s\\]+))[^<>:"/\\|?*\n\r]*[^<>:"/\\|?*\n\r\\]
```
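As a quick sanity check, here is a sketch of a pattern along these lines (in Python, with an illustrative `PATH_RE` name). The path-body class is adjusted to admit backslashes so multi-segment paths are captured whole; the trade-off is that whitespace is excluded, so paths with embedded spaces ("Program Files") would still need extra handling in production:

```python
import re

# Illustrative variant of the detection pattern: the path-body class must
# admit '\\' or multi-segment paths get truncated after the first segment;
# whitespace is excluded instead, which rejects paths containing spaces
# and would need refinement in a real implementation.
PATH_RE = re.compile(
    r'(?:[a-zA-Z]:\\|\\\\[^\s\\]+\\[^\s\\]+)'  # drive-letter or UNC prefix
    r'[^<>:"/|?*\s]*'                          # path body, backslashes allowed
    r'[^<>:"/|?*\s\\]'                         # must not end on a separator
)

line = r'Logs were written to C:\Users\alice\AppData\Local\Temp\DiagOutputDir on exit'
print(PATH_RE.search(line).group(0))  # → C:\Users\alice\AppData\Local\Temp\DiagOutputDir
```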
### Integration into Output Rendering:
- Modify the terminal's rendering pipeline to scan each output line for matches using the regex.
- Detected paths should be tagged with metadata (e.g., `isClickable: true`).
## 2. Rendering Clickable Paths
### Tokenization and Styling:
- Tokenize the terminal output into clickable and non-clickable segments.
- Highlight clickable paths using underline styling or a distinct color, similar to clickable URLs already supported in Windows Terminal.
- Use the existing hyperlink styling engine to render these paths with visual feedback.
### Context Awareness:
- Ensure that clickable paths are contextually accurate:
- Validate the detected path (e.g., check if it exists).
- Only render paths as clickable if they are valid on the user’s file system.
## 3. Event Handling for Click Actions
### Mouse Click Handling:
- Extend the existing event-handling system to respond to clicks on clickable paths:
- Capture the click event and determine if it occurred on a clickable path.
- Extract the full path string from the metadata associated with the clickable segment.
### Execution of the `explorer` Command:
- Trigger the `explorer` command to open the directory in File Explorer:
```cmd
explorer "C:\path\to\directory"
```
- Use secure asynchronous system calls to avoid freezing the terminal during command execution.
### Error Handling:
- If the directory does not exist or the `explorer` command fails, display an appropriate error message in the terminal.
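A minimal sketch of the intended behaviour (illustrative Python; the real implementation would live in the terminal's codebase, and `explorer` only exists on Windows):

```python
import os
import subprocess

def open_in_explorer(path: str) -> bool:
    """Launch File Explorer on `path` without blocking the terminal.
    Returns False (after printing a message) when the directory is
    missing, instead of raising into the rendering loop."""
    if not os.path.isdir(path):
        print(f'Cannot open "{path}": directory does not exist')
        return False
    # Popen returns immediately, so the terminal never freezes while
    # Explorer starts up.
    subprocess.Popen(['explorer', path])
    return True

print(open_in_explorer(r'Z:\no\such\dir'))  # → False (plus the error message)
```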
## 4. Configuration Options
### Enable/Disable Clickable Paths:
- Allow users to turn the feature on or off.
#### Example setting in `settings.json`:
```json
"clickablePaths": true
```
### Behavior on Click:
- Define default behavior for clicks:
- Open in File Explorer.
- Copy the path to the clipboard.
- Open the folder in a new terminal tab.
#### Example configuration:
```json
"pathClickBehavior": "openInExplorer" // Options: "openInExplorer", "copyToClipboard", "openInTerminal"
```
## 5. Keyboard Accessibility
### Keyboard Navigation:
- Allow users to navigate between clickable paths using keyboard shortcuts (e.g., Tab or arrow keys).
- Use Enter or a custom key combination (e.g., Ctrl+Enter) to open the selected path.
### Accessibility Considerations:
- Ensure the feature is accessible to screen readers by providing descriptive tooltips or metadata for each clickable path.
## 6. Performance Optimization
### Efficient Path Matching:
- Optimize regex processing to avoid slowing down terminal rendering, especially for outputs with large amounts of text.
- Implement lazy evaluation or process output lines incrementally.
### Caching:
- Cache the results of path validation to minimize repeated checks for the same paths.
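The validation cache can be prototyped with `functools.lru_cache` (sketch; the cache size is an arbitrary choice, and stale entries for deleted directories are a known trade-off that would need invalidation):

```python
import os
from functools import lru_cache

@lru_cache(maxsize=4096)
def is_valid_dir(path: str) -> bool:
    # Repeated occurrences of the same path in scrollback hit the cache
    # instead of the filesystem.  (A deleted directory stays "valid"
    # until the cache is cleared; invalidation is left out of this sketch.)
    return os.path.isdir(path)

cwd = os.getcwd()
is_valid_dir(cwd)   # first call: real filesystem check
is_valid_dir(cwd)   # second call: answered from the cache
print(is_valid_dir.cache_info().hits)  # 1
```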
## 7. Testing and Validation
### Unit Tests:
- Test regex patterns against a variety of valid and invalid paths to ensure accuracy.
- Validate behavior for edge cases, such as extremely long paths or invalid characters.
### Integration Tests:
- Simulate real-world terminal outputs to verify rendering and click behavior.
- Ensure compatibility across different themes, fonts, and screen resolutions.
### User Testing:
- Collect feedback from beta testers to refine usability and performance.
| Issue-Feature,Needs-Triage,Needs-Tag-Fix | low | Critical |
2,768,375,865 | node | Implement iterator/async iterator support for web crypto digest | ### What is the problem this feature will solve?
Per https://github.com/w3c/webcrypto/pull/390 ...
The `crypto.subtle.digest(...)` method will soon accept either an iterator or an async iterator in addition to a `BufferSource`. The implementation will need to be updated to support this. Eventually `crypto.subtle.sign(...)` and `crypto.subtle.verify(...)` will also support iterator inputs.
### What is the feature you are proposing to solve the problem?
Modifying the web crypto impl to support streaming inputs.
### What alternatives have you considered?
_No response_ | crypto,feature request,webcrypto | low | Minor |
2,768,381,486 | node | Support webcrypto hash algorithm arguments in node.js crypto hash apis | ### What is the problem this feature will solve?
In Node.js, we would do `crypto.createHash('sha256')` ... in web crypto we can do either `crypto.subtle.digest('sha256', ...)` or `crypto.subtle.digest({ name: 'sha256' }, ...)` (string or object). For consistency, it would be helpful to support the webcrypto-style arguments in the Node.js APIs as well.
### What is the feature you are proposing to solve the problem?
```js
const { createHash } = require('node:crypto');
const algorithm = { name: 'sha256' };
// Support the same argument types in createHash/createHmac that are supported in web crypto..
const hash = createHash(algorithm);
// ...
```
### What alternatives have you considered?
_No response_ | crypto,feature request,webcrypto | low | Minor |
2,768,382,668 | pytorch | [inductor] `index_copy` looses the shape check on inductor | ### 🐛 Describe the bug
**symptom**: eager throws an error when the shapes of x and y differ, but Inductor appears to apply some special handling: instead of raising, it broadcasts y to the same shape as x.
**device**: both cuda and CPU
```python
import torch
import torch.nn as nn
torch.manual_seed(0)
torch.set_grad_enabled(False)
from torch._inductor import config
config.fallback_random = True
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x, y, indices):
x = torch.index_copy(x, 0, indices, y)
return x
model = Model().cuda()
x = torch.randn(1, 2).cuda()
y = torch.randn(1, 1).cuda()
indices = torch.tensor([0]).cuda()
print(x)
print(y)
inputs = [x, y, indices]
try:
model(*inputs)
except Exception as e:
print(f"fail on eager: {e}")
try:
c_model = torch.compile(model)
output = c_model(*inputs)
print(output)
except Exception as e:
print(f"fail on inductor: {e}")
```
### Error logs
```
tensor([[ 1.5410, -0.2934]], device='cuda:0')
tensor([[-2.1788]], device='cuda:0')
fail on eager: index_copy_(): Source/destination tensor must have same slice shapes. Destination slice shape: 2 at dimension 0 and source slice shape: 1 at dimension 0.
tensor([[-2.1788, -2.1788]], device='cuda:0')
```
### Versions
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,768,389,844 | godot | The editor UI navigation experience is poor when using the `Tab` key | ### Tested versions
v4.4.dev7.official [46c8f8c5c]
### System information
Godot v4.4.dev7 - Linux Mint 22 (Wilma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1050 Ti (nvidia; 535.183.01) - Intel(R) Core(TM) i5-7300HQ CPU @ 2.50GHz (4 threads)
### Issue description
1. `MenuButton` is not navigable (its `focus_mode` defaults to `None`). Although the `focus` style is set, it does not seem to be used.
https://github.com/godotengine/godot/blob/bdf625bd54958c737fa6b7213b07581cc91059ad/scene/theme/default_theme.cpp#L276
https://github.com/godotengine/godot/blob/bdf625bd54958c737fa6b7213b07581cc91059ad/editor/themes/editor_theme_manager.cpp#L758
2. Some buttons use the `FlatMenuButton` theme variation, which is based on `MenuButton`. Navigation is possible, but the focused button is not highlighted, so it is unclear where the focus currently is.
https://github.com/godotengine/godot/blob/bdf625bd54958c737fa6b7213b07581cc91059ad/scene/theme/default_theme.cpp#L369
### Steps to reproduce
https://github.com/user-attachments/assets/49439e98-290d-468f-a54b-b024418e80a2
https://github.com/user-attachments/assets/6565aa22-292c-4015-b40a-f8ac66a98d9e
### Minimal reproduction project (MRP)
N/A | bug,discussion,topic:editor,usability | low | Minor |
2,768,393,602 | yt-dlp | YouTube Shorts subscriptions feed is broken, a third of videos missing in the feed | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
First and foremost, this is an issue with the YouTube Shorts subscriptions feed itself; it comes up when just using <https://www.youtube.com/feed/subscriptions/shorts>.
I've noticed for a while now how sometimes when scrolling down to a part where I already was some time before, there are videos which I definitely haven't seen in the feed before. One would expect a subscriptions feed to be complete and chronological. This is not the case for the shorts subscriptions feed.
I have spent some time playing around now, and I could see that the behaviour is triggered by new videos appearing (not by day or time of day, as far as I can tell); at that moment, the sliding window of visible shorts shifts slightly.
Here are some numbers: I kept downloading the first 2600 entries and saving each unique ID I came across. After a bunch of requests (whenever an actual new video was included in the feed) and approximately 9 actual new videos compared to the first request, the video that was in the first disappearing cycle I came across reappeared. At this point, counting the total number of entries, I was not at the expected 2600 encountered IDs, but at **3911**!
Here's an example diff of the resulting video IDs when querying the newest 35 videos of the feed. My example video IDs don't encode any meaning besides serving as a unique identifier.
```diff
+xxxxxxxxx90 -- new video compared to initial request
xxxxxxxxx01
xxxxxxxxx02
xxxxxxxxx03
xxxxxxxxx04
xxxxxxxxx05
xxxxxxxxx06
xxxxxxxxx07
xxxxxxxxx08
xxxxxxxxx09
xxxxxxxxx10
xxxxxxxxx11
xxxxxxxxx12
xxxxxxxxx13
xxxxxxxxx14
xxxxxxxxx15
+xxxxxxxxx80 -- a video that's not "new" but only now became visible
-xxxxxxxxx16 -- this one disappeared now
xxxxxxxxx17
xxxxxxxxx18
xxxxxxxxx19
xxxxxxxxx20
xxxxxxxxx21
xxxxxxxxx22
xxxxxxxxx23
xxxxxxxxx24
xxxxxxxxx25
xxxxxxxxx26
xxxxxxxxx27
xxxxxxxxx28
xxxxxxxxx29
xxxxxxxxx30
xxxxxxxxx31
+xxxxxxxxx81 -- another video that's not "new" but only now became visible
-xxxxxxxxx32 -- this one also disappeared
xxxxxxxxx33
xxxxxxxxx34
-xxxxxxxxx35 -- this one's just out of range now and not returned anymore, nothing special
```
If there's a second new video compared to the first time you checked, another one disappears and a "new" one appears at each of these special points; in this example it would be the video IDs xx..15 and xx..31. Rinse and repeat: once approximately 9 actual new videos have appeared at the top of the feed compared to the first time, each video that disappeared after the first actual new video entered the feed (so in this example xx..16 and xx..32) has come back again. (Of course there are now ~8 different videos, the ones above these at each of the magic points, which were initially visible and are not visible anymore. The number of videos that are not visible seems approximately constant. Don't pin me down on the specific number of 8 or 9 or something else; there may have been some off-by-one errors, especially since sometimes, when there's an extremely new video, the sliding window moves even though the new video itself hasn't appeared at the top of the feed yet.)
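To make the comparison concrete, the bookkeeping I did between two feed snapshots boils down to a set diff that preserves feed order. A sketch (Python; the two snapshot lists are hypothetical, modelled on the example IDs above):

```python
def diff_feed(previous_ids, current_ids):
    """Compare two snapshots of the subscriptions feed (newest first) and
    report which IDs appeared in / disappeared from the visible window."""
    prev, curr = set(previous_ids), set(current_ids)
    appeared = [v for v in current_ids if v not in prev]
    disappeared = [v for v in previous_ids if v not in curr]
    return appeared, disappeared

# Two hypothetical snapshots: one new upload at the top, and the window
# swapping xxxxxxxxx80 in for xxxxxxxxx16 further down.
first  = ['xxxxxxxxx01', 'xxxxxxxxx15', 'xxxxxxxxx16', 'xxxxxxxxx17']
second = ['xxxxxxxxx90', 'xxxxxxxxx01', 'xxxxxxxxx15', 'xxxxxxxxx80', 'xxxxxxxxx17']
appeared, disappeared = diff_feed(first, second)
print(appeared)     # → ['xxxxxxxxx90', 'xxxxxxxxx80']
print(disappeared)  # → ['xxxxxxxxx16']
```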
Now I feel like it's impossible to report a bug to YouTube, so I decided to collect my findings here. Since there might (maybe, I have no idea..) be other ways to query the shorts subscriptions feed, yt-dlp might change the way this specific feed data is retrieved to avoid being affected by this really weird bug.
As for the verbose output, I'm just copying the one I used for the other issue I opened, it's a command which can also be used to assess this bug, but the above example should be used to understand it, since it requires multiple requests and a changed feed to be noticeable, I think that would just not come across well using a bunch of console outputs (which I would like to censor anyways for personal reasons)
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--flat-playlist', '--skip-download', '--playlist-end', '50', 'https://youtube.com/feed/subscriptions/shorts', '--ignore-no-formats', '--print', '%(id)s %(title)s', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [0b6b7742c] (pip)
[debug] Python 3.13.1 (CPython x86_64 64bit) - Linux-6.12.6-200.fc41.x86_64-x86_64-with-glibc2.40 (OpenSSL 3.2.2 4 Jun 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2 (fdk,setts), ffprobe 7.0.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-1.26.20, websockets-14.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "/home/xxxxxxxxxxxxxxx/cookies.sqlite"
Extracted 1090 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[youtube:tab] Extracting URL: https://youtube.com/feed/subscriptions/shorts
[youtube:tab] subscriptions: Downloading webpage
[download] Downloading playlist: subscriptions
[debug] [youtube:tab] Extracted SAPISID cookie
[youtube:tab] subscriptions page 1: Downloading API JSON
[youtube:tab] subscriptions page 2: Downloading API JSON
[youtube:tab] subscriptions page 3: Downloading API JSON
[youtube:tab] Playlist subscriptions: Downloading 50 items
[debug] The information of all playlist entries will be held in memory
[download] Downloading item 1 of 50
xxxxxxxxxxx NA
[download] Downloading item 2 of 50
...
[download] Finished downloading playlist: subscriptions
```
| account-needed,site-bug,triage,site:youtube | low | Critical |
2,768,396,411 | langchain | openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Tool call IDs should be alphanumeric strings with length 9!', 'type': 'BadRequestError', 'param': None, 'code': 400} | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
import os
import random
import string
from math import factorial
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage,SystemMessage, HumanMessage , ToolMessage
# from openai import OpenAI
# from langchain_community.chat_models import ChatOpenAI
from pydantic import BaseModel
from langchain_openai import ChatOpenAI, OpenAI
from langchain_mistralai import ChatMistralAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.tools.jira.tool import JiraAction
import httpx
import io
http_client=httpx.Client(verify=False)
#create the tool
tool = TavilySearchResults(max_results=2)
print(type(tool))
print(tool.name)
#create an agent state: an annotated list of messages
class AgentState(TypedDict):
messages: Annotated[list[AnyMessage], operator.add]
class Agent:
def __init__(self, model, tools, system =""):
self.system = system
graph = StateGraph(AgentState)
graph.add_node("llm", self.call_model)
graph.add_node("action", self.take_action)
graph.add_conditional_edges(
"llm",
self.exists_action,
{True:"action", False:END}
)
graph.add_edge("action", "llm")
graph.set_entry_point("llm")
        self.graph = graph.compile() # compile turns the graph into a LangChain runnable
self.tools = {t.name: t for t in tools}
self.model = model.bind_tools(tools, tool_choice="tavily_search_results_json")
def exists_action(self, state:AgentState):
result = state['messages'][-1]
return len(result.tool_calls) > 0
def call_model(self, state:AgentState):
messages = state["messages"]
if self.system:
messages = [SystemMessage(content=self.system)] + messages
message = self.model.invoke(messages)
return {'messages':[message]}
def take_action(self, state:AgentState):
tool_calls = state['messages'][-1].tool_calls
results = []
for t in tool_calls:
print(f"calling: {t}")
result = self.tools[t['name']].invoke(t['args'])
#tool_call_id = t['id'].replace("chatcmpl-tool-", "call_")
results.append(ToolMessage(tool_call_id=t['id'],name = t['name'] ,content=str(result)))
print("Back Back to the model")
return {'messages': results}
@staticmethod
def generate_tool_call_id(length=9):
return ''.join(random.choices(string.ascii_letters + string.digits, k=length))
prompt = """
you are a smart research assistant use the search engine to look up information. \
you are allowed to make multiple calls(either together or in a sequence).
only look for information when you are sure of what you want .\
If you need to look up some information before asking a follow up questions, you are allowed to do that!
"""
# base_url = os.getenv("DSX_DEVGENAI_BASEURL")
base_url=os.getenv("BASE_URL")
key = os.getenv("DEVGENAI_KEY")
mistral_llm = ChatOpenAI(model ="mistral-7b-instruct-v03", model_name = "mistral-7b-instruct-v03",openai_api_base = base_url,http_client=http_client, openai_api_key = key, top_p = 1, temperature = 0.5, verbose = True,)
SearchAgent = Agent(mistral_llm, [tool], system = f"[INST]{prompt}[/INST]")
messages = [HumanMessage(content = "what's the weather in Boston, MA?")]
result = SearchAgent.graph.invoke({"messages":messages})
print(result)
print(result["messages"][-1].content)
messages = [HumanMessage(content = "what's the weather in Boston, MA and in sf?")]
result = SearchAgent.graph.invoke({"messages":messages})
print(result["messages"][-1].content)
### Error Message and Stack Trace (if applicable)
<class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>
tavily_search_results_json
calling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in Boston, MA'}, 'id': 'chatcmpl-tool-9d0f9ae5c5e54b53be284d9d934663eb', 'type': 'tool_call'}
Back Back to the model
Traceback (most recent call last):
File "/usr/lib/python3.11/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "/home/rachel_shalom/.vscode-server/extensions/ms-python.debugpy-2024.14.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 71, in <module>
cli.main()
File "/home/rachel_shalom/.vscode-server/extensions/ms-python.debugpy-2024.14.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 501, in main
run()
File "/home/rachel_shalom/.vscode-server/extensions/ms-python.debugpy-2024.14.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 351, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/rachel_shalom/.vscode-server/extensions/ms-python.debugpy-2024.14.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 310, in run_path
return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rachel_shalom/.vscode-server/extensions/ms-python.debugpy-2024.14.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 127, in _run_module_code
_run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
File "/home/rachel_shalom/.vscode-server/extensions/ms-python.debugpy-2024.14.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 118, in _run_code
exec(code, run_globals)
File "/home/rachel_shalom/devx/fast_api_stream/agentic_flows/agent_with_lang_example.py", line 107, in <module>
result = SearchAgent.graph.invoke({"messages":messages})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rachel_shalom/devx/devx_env/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1936, in invoke
for chunk in self.stream(
File "/home/rachel_shalom/devx/devx_env/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1656, in stream
for _ in runner.tick(
File "/home/rachel_shalom/devx/devx_env/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 167, in tick
run_with_retry(
File "/home/rachel_shalom/devx/devx_env/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 40, in run_with_retry
return task.proc.invoke(task.input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rachel_shalom/devx/devx_env/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 408, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rachel_shalom/devx/devx_env/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 184, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rachel_shalom/devx/fast_api_stream/agentic_flows/agent_with_lang_example.py", line 64, in call_model
message = self.model.invoke(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rachel_shalom/devx/devx_env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/home/user/devx/devx_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/home/user/devx/devx_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/home/user/devx/devx_env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 717, in _generate
response = self.client.create(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 859, in create
return self._post(
^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/openai/_base_client.py", line 1280, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/openai/_base_client.py", line 957, in request
return self._request(
^^^^^^^^^^^^^^
File "/home/user/devx/devx_env/lib/python3.11/site-packages/openai/_base_client.py", line 1061, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Tool call IDs should be alphanumeric strings with length 9!', 'type': 'BadRequestError', 'param': None, 'code': 400}
During task with name 'llm' and id '52c2b916-2911-1e0d-7990-9bbd4baf7a5f'
### Description
I am trying to run a simple example with LangGraph and Tavily search. While debugging, I see that the model manages to get the arguments for the search tool, but it fails in the last call, where the input is a list of 4 messages:
message = self.model.invoke(messages)
where messages is :
[SystemMessage(content='[INST]\nyou are a smart research assistant use the search engi...dditional_kwargs={}, response_metadata={}), HumanMessage(content="what's the weather in Boston, MA?", additional_kwargs={}, response_metadata={}), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'chatcmpl-tool-949a406...details': {}, 'output_token_details': {}}), ToolMessage(content='[{\'url\': \'https://weather.com/weather/tenday/l/USMA0046:1:US\...pl-tool-949a40625f644abb8b47e0d0e04147cc')]
It seems like it complains about the tool_call_id format, but when I tried changing the format of the id to match examples I saw in the docs, I got the same error.
I am not sure how to solve this one.
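As a possible direction (a sketch only, not a verified fix for this backend): remap the generated tool-call ids to 9-character alphanumeric strings before the ToolMessages are sent back to the model. The helper name `sanitize_tool_call_id` below is made up for illustration; both the id stored on the AIMessage's tool_calls and the `tool_call_id` passed to `ToolMessage` in `take_action` would have to go through the same mapping so that calls and results stay paired.

```python
import random
import string

def sanitize_tool_call_id(raw_id, _cache={}):
    """Map an arbitrary tool_call_id to a stable 9-char alphanumeric id.

    The backend rejects ids like 'chatcmpl-tool-9d0f...' with
    'Tool call IDs should be alphanumeric strings with length 9!',
    so we derive a deterministic 9-character replacement and cache it,
    keeping each AIMessage tool_call paired with its ToolMessage.
    """
    if raw_id not in _cache:
        # Seed the RNG with the original id so the mapping is deterministic.
        rng = random.Random(raw_id)
        _cache[raw_id] = "".join(
            rng.choices(string.ascii_letters + string.digits, k=9)
        )
    return _cache[raw_id]
```

The same mapping would need to be applied in two places: when rewriting `t['id']` before invoking each tool, and when building the `ToolMessage(tool_call_id=...)` result, otherwise the model cannot match results to calls.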
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.28
> langchain: 0.3.12
> langchain_community: 0.3.0
> langsmith: 0.1.147
> langchain_mistralai: 0.2.4
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.4
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: 4.0.3
> dataclasses-json: 0.6.5
> httpx: 0.27.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.58.1
> orjson: 3.10.7
> packaging: 24.0
> pydantic: 2.9.1
> pydantic-settings: 2.7.1
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.29
> tenacity: 8.4.2
> tiktoken: 0.7.0
> tokenizers: 0.19.1
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,768,408,146 | PowerToys | TinyTask Doesnt Recognize PowerToys | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
TextExtractor
### Steps to reproduce
So I opened TinyTask and pressed record. The first thing I did was press the Text Extractor shortcut (Win + Shift + T), and it worked on my screen: it copied the text and I could paste it. However, when I finished recording and played the TinyTask macro back, it repeated what I did, except that Text Extractor didn't copy the text. My cursor just kept moving, trying to copy the text, but there was no sign of it capturing anything, like a snapshot.
Additionally, I think TinyTask did press Win + Shift + T, but Text Extractor didn't recognize the action.
Normally, when you press Win + Shift + T, it takes a snapshot: the screen goes blurry, the cursor turns into a + crosshair, and when you hold the left click you can drag a selection box over the text you want to copy. However, when TinyTask replayed Win + Shift + T, nothing happened.
(I was doing a macro on roblox)
### ✔️ Expected Behavior
When I recorded my TinyTask macro to copy text, I expected it to do the same when I played it back: take a snapshot of the text, copy it into a captcha, and submit.
### ❌ Actual Behavior
What happened was that when I played the TinyTask macro, the cursor just moved and Text Extractor didn't work. I think TinyTask really did press Win + Shift + T, but PowerToys didn't recognize the action.
Normally, when you press Win + Shift + T, it takes a snapshot: the screen goes blurry, the cursor turns into a + crosshair, and when you hold the left click you can drag a selection box to copy. However, when TinyTask replayed Win + Shift + T, nothing happened.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,768,416,677 | godot | Sometimes depth prepass inaccuracies leak the background color. | ### Tested versions
- Reproducible in master
- Reproducible in 4.3stable
### System information
Windows 11 - Vulkan 1.3.280 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3070 Ti
### Issue description
When using a shader that writes to DEPTH along with depth prepass, sometimes fragments are lost.

### Steps to reproduce
1. Load the attached project into the editor
2. The scene is a white sphere, intersected by a green box with a yellow sphere in the center.
3. Set the useless_work variable of the shader to get a low frame rate.
4. Move the camera around inside the green box, from time to time red static will appear on the green box where the clear color leaks though.
### Minimal reproduction project (MRP)
[prepass-leak.zip](https://github.com/user-attachments/files/18305568/prepass-leak.zip)
| bug,discussion,topic:shaders | low | Minor |
2,768,432,318 | Python | Addition of AI algorithms | ### Feature description
I would like to add a few AI algorithms, such as DPLL, TT-ENTAILS, WALKSAT, UNIFY, and FOL-BC-ASK, to your repository, as I believe they would be a good contribution. | enhancement | medium | Minor |
2,768,449,770 | rust | Rustdoc does not combine documentation in re-exports of extern functions | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
With the code
```rust
mod native {
extern "C" {
/// bar.
pub fn bar();
}
}
/// foo
pub use native::bar;
```
I expected the documentation in the private module and on the re-export to be combined.
Instead, only the documentation from the private module is shown.

### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustdoc --version --verbose`:
```
rustdoc 1.86.0-nightly (3f43b1a63 2025-01-03)
binary: rustdoc
commit-hash: 3f43b1a636738f41c48df073c5bcb97a97bf8459
commit-date: 2025-01-03
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
| T-rustdoc,C-bug,T-rustdoc-frontend | low | Critical |
2,768,449,907 | godot | Compatibility Renderer: LIGHT_VERTEX does not affect shadows | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8], v4.4.dev.mono.gh [bdf625bd5]
### System information
Godot v4.3.stable - Fedora Linux 41 (Workstation Edition) - Wayland - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 Ti (nvidia; 565.77) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 Threads)
### Issue description
Shadows seem to ignore `LIGHT_VERTEX` in the Compatibility renderer.
These screenshots and the MRP change `LIGHT_VERTEX` to create a 4x4 grid on a plane.
> Compatibility

Forward+

Shader code:
```glsl
shader_type spatial;
uniform vec3 albedo_color : source_color;
#define SURFACE_SIZE 2.0
#define TEXEL_SIZE 4.0
void fragment() {
ALBEDO = albedo_color;
vec2 snapped_uv = (floor(UV * TEXEL_SIZE) / TEXEL_SIZE) + vec2(0.5 / TEXEL_SIZE);
vec2 uv_diff = snapped_uv - UV;
LIGHT_VERTEX += TANGENT * uv_diff.x * SURFACE_SIZE;
LIGHT_VERTEX += -BINORMAL * uv_diff.y * SURFACE_SIZE;
}
```
### Steps to reproduce
1. Open MRP
2. View shadows in Compatibility
3. Switch to Forward+
4. View shadows in Forward+
### Minimal reproduction project (MRP)
[light_vertex_mrp.zip](https://github.com/user-attachments/files/18305710/light_vertex_mrp.zip)
| discussion,topic:rendering | low | Minor |
2,768,476,563 | godot | Infinite log spamming 'Condition "!v.is_finite()" is true.' | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.19044 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 SUPER (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 5700X 8-Core Processor (16 Threads)
### Issue description
I was learning how to use quaternions, and after some accidental mishap (or something, I'm not entirely sure), my log is spammed with
```
servers/rendering/renderer_scene_cull.cpp:935 - Condition "!v.is_finite()" is true.
```
There is no spam if I do not select the affected node, but upon selecting it, it causes that to log every tick.
If I try to edit the Quaternion rotation in the editor during the spam, it occasionally comes up with:
```
Basis [X: (nan, nan, nan), Y: (nan, nan, nan), Z: (nan, nan, nan)] must be normalized in order to be casted to a Quaternion. Use get_rotation_quaternion() or call orthonormalized() if the Basis contains linearly independent vectors
```
I can only assume something failed and set the basis to invalid numbers.
### Steps to reproduce
Unfortunately I am not able to reproduce what occurred. I have been attempting to repeat the steps I had taken to cause the issue, to no avail. But from what I can recall:
- I was rotating a Camera3D (with ``rotation_edit_mode`` set to ``Quaternion``) and seeing how the numbers changed.
- The numbers would get locked to ``1``, so I would attempt to force it by moving my mouse quickly.
- At some point, it bugged and would no longer update the Quaternion values and began spamming the log.
### Minimal reproduction project (MRP)
N/A | bug,topic:3d | low | Critical |
2,768,508,309 | rustdesk | After clicking on some menus, the menu disappears when you move the mouse away, and you cannot click on the options in the menu | ### Bug Description
When using an iPad to control Windows 11, after clicking on some menus, the menu disappears when you move the mouse away, and you cannot click on the options in the menu. This bug occurs consistently.
### How to Reproduce
1. Control a Win11 PC from an iPad through RustDesk, using a mouse.
2. Click on the icon in the picture below and move the mouse to the options. Please check the attached screenshot GIF.
3. You will find the options disappear.
### Expected Behavior
The options should not disappear.
### Operating system(s) on local (controlling) side and remote (controlled) side
iPad OS 18.2 -> windows 11 23H2
### RustDesk Version(s) on local (controlling) side and remote (controlled) side
1.3.6 -> 1.3.2
### Screenshots

### Additional Context
_No response_ | bug | low | Critical |
2,768,508,824 | godot | [3.x] Scaling in windowed mode can crash the Linux Operating System | ### Tested versions
Reproducible in the Godot 3.6.0 release, other versions untested.
### System information
OS: Linux Mint 22 Cinnamon GPU: Intel Corporation Alder Lake-UP3 GT2 [UHD Graphics] CPU: 12th Gen Intel© Core™ i5-1235U × 10
### Issue description
Viewports can cause the OS to slow down under certain circumstances, eventually freezing and/or crashing the entire OS and potentially corrupting the open project as a result.
I have not tested this on other operating systems or hardware, nor have I tested to see if there are other steps that can be taken to reproduce the issue.
### Steps to reproduce
1. Create an AspectRatioContainer with a typical 3D Viewport setup as its child.
2. Set AspectRatioContainer.ratio to 1.7778 and AspectRatioContainer.stretch mode to STRETCH_COVER (or anything besides STRETCH_FIT).
3. Set "display/window/stretch/mode" to 2d and "display/window/stretch/aspect" to Expand.
4. Launch the project in windowed mode, then scale the window as thin as you can and start moving it around. After a few seconds of slowing down the computer (most obvious when listening to audio) it will begin to freeze and then crash after about 5-10 seconds.
You can skip to step 4 when using the MRP.
### Minimal reproduction project (MRP)
[Viewport Crash.zip](https://github.com/user-attachments/files/18305837/Viewport.Crash.zip)
| bug,platform:linuxbsd,crash | low | Critical |
2,768,525,104 | rust | Nested recursive enum with ManuallyDrop causes thread 'rustc' to overflow its stack | There seemed to be an issue with the type `Composition<Conditional<T>>` adding drop-check rules, so I added `ManuallyDrop` to get the program to compile. I also added the `Unpin` impl, as there was also an error regarding its implementation.
I tried this code:
```rust
use std::mem::ManuallyDrop;
pub enum Composition<T> {
Conditional(ManuallyDrop<Conditional<T>>),
Item(T),
}
impl <T> Unpin for Composition<T> {}
#[derive(Default)]
pub struct Conditional<T> {
_condition: Vec<Composition<Condition<T>>>,
_warrant: Vec<Composition<T>>,
}
pub enum Condition<T> {
T(T),
}
fn main() {
let _ = Conditional::<()>::default();
}
```
I expected either the program to run, or the compiler to give an error similar to the one produced when ManuallyDrop is removed: "overflow while adding drop-check rules for `Vec<Composition<T>>`"
Instead, the rustc thread overflowed its stack.
### Meta
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (3f43b1a63 2025-01-03)
binary: rustc
commit-hash: 3f43b1a636738f41c48df073c5bcb97a97bf8459
commit-date: 2025-01-03
host: aarch64-apple-darwin
release: 1.86.0-nightly
LLVM version: 19.1.6
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
Compiling overflow-issue v0.1.0 (/Users/wutter/dev/overflow-issue)
thread 'rustc' has overflowed its stack
fatal runtime error: stack overflow
error: could not compile `overflow-issue` (bin "overflow-issue")
Caused by:
process didn't exit successfully: `/Users/wutter/.rustup/toolchains/nightly-aarch64-apple-darwin/bin/rustc --crate-name overflow_issue --edition=2024 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=80 --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked --check-cfg 'cfg(docsrs,test)' --check-cfg 'cfg(feature, values())' -C metadata=2a6ba930726f09c1 -C extra-filename=-15d023de81aec2c9 --out-dir /Users/wutter/dev/overflow-issue/target/debug/deps -C incremental=/Users/wutter/dev/overflow-issue/target/debug/incremental -L dependency=/Users/wutter/dev/overflow-issue/target/debug/deps` (signal: 6, SIGABRT: process abort signal)```
```
</p>
</details> | I-crash,A-debuginfo,T-compiler,C-bug,S-has-mcve | low | Critical |
2,768,542,491 | pytorch | Wrong meta function for constant_pad_nd | ### 🐛 Describe the bug
When I'm working on this [PR](https://github.com/pytorch/pytorch/pull/140399), I meet a test failed case `python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_fft_hfftn_cuda_float16`, it's related to `constant_pad_nd` op.
## Summary
I found there is two potential dispatched methods for `constant_pad_nd` when run it directly with `meta` device and run it in `inductor` and they didn't return the same tensor.
## Code
1. Directly run it with `meta` device:
```
import torch
a = torch.empty_strided((2, 4, 5), (20, 1, 4), dtype=torch.complex128, device='meta')
print(a.shape)
print(a.stride())
b = torch.constant_pad_nd(a, [0, 0, 0, -2, 0, 0])
print(b.shape)
print(b.stride())
```
If we run the above code, we could see for any of `meta`, `cuda` and `cpu` device, it will print:
```
torch.Size([2, 4, 5])
(20, 1, 4)
torch.Size([2, 2, 5])
(10, 1, 2)
```
So the `meta` device matches the `cpu` and `cuda` devices, which does **meet our expectation**.
2. torch.compile:
We run `constant_pad_nd` with the same arguments, but compiled.
```
import torch
a = torch.empty_strided((2, 4, 5), (20, 1, 4), dtype=torch.complex128, device='cuda')
def foo(a):
b = torch.constant_pad_nd(a, [0, 0, 0, -2, 0, 0])
return b
new_foo = torch.compile(foo)
new_foo(a)
```
We can print the IR graph:

We can see in the pink box that the stride of `constant_pad_nd` is **`(10, 5, 1)`**. It is **not aligned** with the stride **`(10, 1, 2)`** above, so we can infer there is a mismatch between the meta function used for the direct run and the one used by `inductor`.
## Analysis
### call stack
I tried to debug the two different behaviors, and I found both approaches go to:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_prims_common/wrappers.py#L290-L291
but they may have different `fn`
1. Directly run it with `meta` device:
it will go to some function called `empty_like`:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_refs/__init__.py#L4972-L5010
In this method, it will keep the output's stride aligned with the `cpu` or `cuda` device.
2. torch.compile:
It will go to some function called `constant_pad_nd`:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_refs/__init__.py#L2901-L2981
In this method, it seems the stride is not kept aligned with the `cpu` or `cuda` device.
### my guess
I suspect it is caused by how the meta function is registered for each op: since there seems to be no dedicated meta function for `constant_pad_nd`, PyTorch skips some setup here:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_meta_registrations.py#L6921-L6926
## Potential solution
I'm not very clear whether this is expected behavior; if it's not, there are two potential solutions in my mind:
1. Have `inductor` call the `empty_like` path mentioned above.
2. Introduce a meta function `meta_constant_pad_nd` for this op and make it run in both of the places mentioned above.
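To illustrate what a dedicated meta function (solution 2) would need to compute, here is a small pure-Python sketch of the stride rule the `empty_like` path follows: output strides are assigned in the input's dimension order by stride, so the memory layout is preserved. The helper name is made up; this is not PyTorch code, and it ignores edge cases such as stride ties and size-0/1 dimensions. For the example above it reproduces the eager result `(10, 1, 2)` rather than the contiguous `(10, 5, 1)` that `inductor` currently assumes.

```python
def strides_preserving_order(in_shape, in_stride, out_shape):
    """Compute strides for out_shape that keep the same dimension
    ordering (by stride) as the input tensor, mimicking how
    empty_like preserves the input's memory layout.

    Note: ties between equal strides are resolved arbitrarily here.
    """
    # Sort dimensions from innermost (smallest stride) to outermost.
    order = sorted(range(len(in_shape)), key=lambda d: in_stride[d])
    out_stride = [0] * len(out_shape)
    acc = 1
    for d in order:
        out_stride[d] = acc
        acc *= out_shape[d]
    return tuple(out_stride)
```

For input shape `(2, 4, 5)` with stride `(20, 1, 4)` and output shape `(2, 2, 5)`, this yields `(10, 1, 2)`, matching the `cpu`/`cuda`/`meta` eager behavior shown above.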
WDYT? Any help would be very appreciated! Thank you!
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0a0+gitf3ec745
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~22.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7219.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+gitf3ec745
[pip3] triton==3.1.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+gitf3ec745 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @chauhang @penguinwu @SherlockNoMad @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | oncall: pt2,module: decompositions,module: inductor | low | Critical |
2,768,548,566 | transformers | Memory Access out of bounds in mra/cuda_kernel.cu::index_max_cuda_kernel() | ### System Info
* OS: Linux ubuntu 22.04 LTS
* Device: A100-80GB
* docker: nvidia/pytorch:24.04-py3
* transformers: latest, 4.47.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Reproduction
1. pip install the latest transformers
2. prepare the UT test environment with `pip install -e .[testing]`
3. `pytest tests/models/mra/test_modeling_mra.py`
## Analysis
There may be out-of-bounds memory accesses in the CUDA kernel `index_max_cuda_kernel()`:
https://github.com/huggingface/transformers/blob/main/src/transformers/kernels/mra/cuda_kernel.cu#L6C1-L58C2
Note that `max_buffer` in this kernel is declared `extern __shared__ float`, which means it is stored in shared memory.
According to https://github.com/huggingface/transformers/blob/main/src/transformers/kernels/mra/cuda_launch.cu#L24-L35, CUDA would launch this kernel with
* gird size: `batch_size`
* block size: 256
* shared memory size: `A_num_block * 32 * sizeof(float)`
If `A_num_block` < 4, the `for` statement below may accidentally access memory beyond `A_num_block * 32`, since `num_thread` here is 256 and `threadIdx.x` is in [0, 255].
```
for (int idx_start = 0; idx_start < 32 * num_block; idx_start = idx_start + num_thread) {
```
Therefore, when blocks of threads access `max_buffer`, it would be safer to always guard the accesses with `if` statements to avoid out-of-bounds memory accesses.
So we suggest adding `if` statements in two places:

### Expected behavior
UT tests should all pass! | bug | low | Minor |
2,768,587,150 | react-native | Pressable onPress fires when scrolling FlatList | ### Description
When you have a list of `<Pressable>` inside a `<FlatList>`, and the list is short enough to fully fit on the screen (so that no scrolling is actually needed), then if you do scroll the list a whole bunch of `onPress` events will fire on the Pressable list items.
The expected behavior is that `onPress` should only be triggered when actually pressing an item.
I initially filed this report on the FlashList repo, but after further testing I actually believe this is a React Native issue.
Here's the FlashList issue: https://github.com/Shopify/flash-list/issues/1461
Here's an older very similar React Native issue that's been closed without a fix: https://github.com/facebook/react-native/issues/27355
### Steps to reproduce
See the linked FlashList issue for the code and steps needed to reproduce
### React Native Version
0.76.5
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (12) arm64 Apple M3 Pro
Memory: 560.73 MB / 36.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.18.1
path: /usr/local/bin/node
Yarn:
version: 4.5.1
path: /usr/local/bin/yarn
npm:
version: 10.8.2
path: /usr/local/bin/npm
Watchman:
version: 2024.12.02.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK:
API Levels:
- "34"
- "35"
Build Tools:
- 34.0.0
- 35.0.0
System Images:
- android-34 | Google Play ARM 64 v8a
- android-35 | Google APIs ARM 64 v8a
- android-35 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12700392
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /Users/tobbe/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.3
wanted: ^15.1.3
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
(NOBRIDGE) LOG {"item": {"id": "3ac68afc-c605-48d3-a4f8-fbd91aa97f63", "title": "Second Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "bd7acbea-c1b1-46c2-aed5-3ad53abb28ba", "title": "First Item"}}
(NOBRIDGE) LOG {"item": {"id": "3ac68afc-c605-48d3-a4f8-fbd91aa97f63", "title": "Second Item"}}
(NOBRIDGE) LOG {"item": {"id": "3ac68afc-c605-48d3-a4f8-fbd91aa97f63", "title": "Second Item"}}
(NOBRIDGE) LOG {"item": {"id": "3ac68afc-c605-48d3-a4f8-fbd91aa97f63", "title": "Second Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "3ac68afc-c605-48d3-a4f8-fbd91aa97f63", "title": "Second Item"}}
(NOBRIDGE) LOG {"item": {"id": "bd7acbea-c1b1-46c2-aed5-3ad53abb28ba", "title": "First Item"}}
(NOBRIDGE) LOG {"item": {"id": "bd7acbea-c1b1-46c2-aed5-3ad53abb28ba", "title": "First Item"}}
(NOBRIDGE) LOG {"item": {"id": "bd7acbea-c1b1-46c2-aed5-3ad53abb28ba", "title": "First Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
(NOBRIDGE) LOG {"item": {"id": "58694a0f-3da1-471f-bd96-145571e29d72", "title": "Third Item"}}
```
### Reproducer
https://github.com/Tobbe/expo-bun-flash-list-issue
### Screenshots and Videos
https://github.com/user-attachments/assets/6626450b-a771-46f5-80f1-f0d620394816
| Issue: Author Provided Repro,Component: FlatList,Needs: Author Feedback | low | Minor |
2,768,611,677 | godot | [3.x] Mono HTML5 exports can't use System.Net.Http due to missing dependency WebAssembly.Net.Http | ### Tested versions
- Reproducible in: v3.6.stable.mono.official [de2f0f1]
### System information
Windows 11 64-bit, Chrome 131.0.6778.205
### Issue description
Many C# libraries have `System.Net.Http` as a dependency, most notably Firebase. On Web exports, C# uses a different implementation, provided via package `WebAssembly.Net.Http`, since browsers only allow requests via the `fetch` API rather than opening a raw socket.
However, a C# project exported to the web that uses `System.Net.Http` as a direct or transitive dependency crashes and logs the following messages to the console:
```
The following assembly referenced from .mono/assemblies/Debug/System.Net.Http.dll could not be loaded:
Assembly: WebAssembly.Net.Http (assemblyref_index=2)
Version: 1.0.0.0
Public Key: cc7b13ffcd2ddd51
The assembly was not found in the Global Assembly Cache, a path listed in the MONO_PATH environment variable, or in the location of the executing assembly (/.mono/assemblies/Debug/).
System.IO.FileNotFoundException: Could not load file or assembly 'WebAssembly.Net.Http, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies.
File name: 'WebAssembly.Net.Http, Version=1.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'
at System.Net.Http.HttpClient..ctor () <0x3986018 + 0x00002> in <filename unknown>:0
```
This issue has been present since at least 2021, as evidenced by these forum posts:
* [Missing dependency after HTML5 export](https://forum.godotengine.org/t/missing-dependency-after-html5-export/9929)
* [Created HTML5 can not be run, as WebAssembly cannot find System.Net.Http.HttpClient](https://forum.godotengine.org/t/created-html5-can-not-be-run-as-webassembly-cannot-find-system-net-http-httpclient/12404)
### Steps to reproduce
1. Export the attached Minimal Reproduction Project to the web
2. Run and observe the browser console (F12)
### Minimal reproduction project (MRP)
[GodotHTTPIssue.zip](https://github.com/user-attachments/files/18306077/GodotHTTPIssue.zip)
| bug,platform:web,topic:dotnet | low | Critical |
2,768,632,275 | transformers | Reload Transformers imports | ### Feature request
It would be nice to have a `transformers.clear_import_cache()` or any other way to reset the dynamic imports of transformers.
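A minimal sketch of what such a helper might look like — assuming that evicting the package's entries from `sys.modules` is enough to force a re-import (the function name and default prefix mirror the proposal; nothing here exists in transformers today):

```python
import importlib
import sys

def clear_import_cache(prefix: str = "transformers") -> int:
    """Evict every cached module under `prefix` so that the next import
    re-executes the (possibly hacked) module source.
    Returns the number of modules removed."""
    stale = [name for name in list(sys.modules)
             if name == prefix or name.startswith(prefix + ".")]
    for name in stale:
        del sys.modules[name]
    importlib.invalidate_caches()
    return len(stale)
```

One caveat: objects instantiated before the cache is cleared keep referencing the old module objects, so a complete reset would also need to drop those references.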
### Motivation
Some people are having bugs after hacking on transformers [1](https://github.com/unslothai/unsloth/issues/1410), [2](https://github.com/unslothai/unsloth/issues/1499). Hacking on transformers [seems to be supported](https://huggingface.co/docs/transformers/how_to_hack_models)
### Your contribution
Given some amount of guidance, I am willing to write a PR. | Good Second Issue,Feature request | low | Critical |
2,768,644,848 | godot | [3.x] ViewportContainer + Viewport is distorted. | ### Tested versions
Reproducible in Godot 3.6, not tested in any other version.
### System information
OS: Linux Mint 22 Cinnamon GPU: Intel Corporation Alder Lake-UP3 GT2 [UHD Graphics]
### Issue description
Viewports render a distorted image. The distortion is grid-like and most obvious with ViewportTexture filtering off.
Sorry for the sparse description, it's a very simple issue.

### Steps to reproduce
1. Open MRP
2. Launch the project and scale the window. Notice how the distortion increases as you make it smaller or change the aspect ratio.
### Minimal reproduction project (MRP)
[Viewport Distortion.zip](https://github.com/user-attachments/files/18306154/Viewport.Distortion.zip)
| bug,topic:rendering | low | Minor |
2,768,667,915 | godot | Having free() in a script in the scene tree will throw an error in exported game | ### Tested versions
Reproducible in 4.3 stable and 4.2.2 stable
### System information
Windows 11, macOS Ventura
### Issue description
If a script has free() anywhere in it (test.gd in this case) and is loaded into the scene tree via a node, the log will show the following error.
```
USER SCRIPT ERROR: Parse Error: Function "free()" not found in base self.
  at: GDScript::reload (res://test.gd:20)
ERROR: Failed to load script "res://test.gd" with error "Parse error".
  at: load (modules/gdscript/gdscript.cpp:2936)
```
It won't crash the exported game, but any node that has this script and is added to the scene tree will just be added without its script functioning. In the Editor, this runs normally with no issues or errors.
However, using free() in an unsafe line such as part of a "for loop" doesn't throw this error in the exported game. Example below:
```
for child in $SomeNode.get_children():
child.free()
```
### Steps to reproduce
1. Create a new script for any Node
2. Put the free() method to be called anywhere in the script
3. Have a function that will add this node to the scene tree (it can even be in the main scene script itself)
4. Export the game and run it (be sure to have file logging on)
5. Check the logs to see the parse error
### Minimal reproduction project (MRP)
MRP can be recreated easily using the steps above. | bug,topic:gdscript | low | Critical |
2,768,680,836 | pytorch | BackendCompilerFailed error is raised when applying torch.compile on torch.fill | ### 🐛 Describe the bug
When compiling `torch.fill` in a CUDA environment, the compiled function raises a `BackendCompilerFailed` error when the input is an unsigned-integer (`uint`) tensor and the fill value is negative. The issue appears to be caused by an invalid argument when Inductor's generated kernel calls Triton's `tl.full` with the negative constant.
Note that directly calling `torch.fill` with a uint tensor in eager mode does not raise any exception.
Here is the code to reproduce:
```python
import torch
f = torch.fill
cf = torch.compile(f)
input = torch.randn(1,2,3).to(torch.uint32).to('cuda')
value = -90
print(f(input,value)) # tensor([[[4294967206, 4294967206, 4294967206],[4294967206, 4294967206, 4294967206]]], device='cuda:0',dtype=torch.uint32)
cf_out = cf(input,value)
```
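The eager-mode result above is just the two's-complement wraparound of the negative fill value into the uint32 range, which can be checked directly:

```python
# uint32 wraparound of the negative fill value, matching eager mode's output
wrapped = (-90) % 2**32
print(wrapped)  # 4294967206
```

As a hypothetical workaround (not a fix for the Inductor codegen itself), pre-wrapping the value before the call, e.g. `cf(input, (-90) % 2**32)`, may avoid handing a negative constant to `tl.full`.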
Although the traceback is super long, the error message seems straightforward:
```
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Triton compilation failed: triton_poi_fused_fill_0
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] def triton_poi_fused_fill_0(out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xnumel = 6
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xoffset = tl.program_id(0) * XBLOCK
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xmask = xindex < xnumel
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] x0 = xindex
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] tmp0 = tl.full([1], -90, tl.uint32)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] tl.store(out_ptr0 + (x0), tmp0, xmask)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] metadata: {'signature': {'out_ptr0': '*u32', 'xnumel': 'i32'}, 'device': 0, 'constants': {'XBLOCK': 8}, 'configs': [AttrsDescriptor(divisible_by_16=(0,), equal_to_1=())], 'device_type': 'cuda', 'num_warps': 1, 'num_stages': 1, 'debug': True, 'cc': 86}
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Traceback (most recent call last):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/language/core.py", line 35, in wrapper
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] return fn(*args, **kwargs)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/language/core.py", line 1223, in full
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] return semantic.full(shape, value, dtype, _builder)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/language/semantic.py", line 530, in full
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] value = get_value_fn(value)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] TypeError: get_uint32(): incompatible function arguments. The following argument types are supported:
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] 1. (self: triton._C.libtriton.ir.builder, arg0: int) -> triton._C.libtriton.ir.value
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Invoked with: <triton._C.libtriton.ir.builder object at 0x7fca9840ecf0>, -90
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] The above exception was the direct cause of the following exception:
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Traceback (most recent call last):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 532, in _precompile_config
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] binary = triton.compile(*compile_args, **compile_kwargs)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 276, in compile
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] module = src.make_ir(options, codegen_fns, context)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 113, in make_ir
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] triton.compiler.errors.CompilationError: at 7:11:
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] def triton_poi_fused_fill_0(out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xnumel = 6
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xoffset = tl.program_id(0) * XBLOCK
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xmask = xindex < xnumel
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] x0 = xindex
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] tmp0 = tl.full([1], -90, tl.uint32)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^
Traceback (most recent call last):
File "/root/try.py", line 7, in <module>
cf_out = cf(input,value)
^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1404, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1188, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1005, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 733, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 768, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1402, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3086, in RETURN_VALUE
self._return(inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3071, in _return
self.output.compile_subgraph(
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1107, in compile_subgraph
self.compile_and_call_fx_graph(
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1390, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1440, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1493, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1472, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1886, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1170, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1155, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 582, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 832, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 201, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 491, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1764, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 575, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 689, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1132, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1047, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1978, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/graph.py", line 2019, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 2769, in load_by_key_path
mod = _reload_python_module(key, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/compile_tasks.py", line 46, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_root/iy/ciyo7ytism5a7pcooebqj3wuyepsq3xgyjogs2vpfqkhrxqs2l5w.py", line 48, in <module>
triton_poi_fused_fill_0 = async_compile.triton('triton_poi_fused_fill_0', '''
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/async_compile.py", line 214, in triton
kernel.precompile()
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 302, in precompile
compiled_binary, launcher = self._precompile_config(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 532, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 276, in compile
module = src.make_ir(options, codegen_fns, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 113, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CompilationError: at 7:11:
def triton_poi_fused_fill_0(out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 6
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.full([1], -90, tl.uint32)
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gite15442a
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gite15442a
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gite15442a pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | triaged,oncall: pt2,module: inductor | low | Critical |
2,768,711,421 | flutter | Linux OnScreen keyboard don't always show up | ### Steps to reproduce
The app runs full-screen on Linux, and only a text input box is needed in the application. The first few times I typed, the on-screen keyboard popped up normally, but after appearing for just a flash once, the on-screen keyboard never popped up again.
### Expected results
The on-screen keyboard pops up normally every time the input box is focused.
### Actual results
After the on-screen keyboard fails to pop up once, it never appears again.
### Code sample
```dart
await windowManager.ensureInitialized();
const windowOptions = WindowOptions(
  center: true,
  fullScreen: true,
);
windowManager.waitUntilReadyToShow(windowOptions, () async {
  await windowManager.show();
});
```
### Screenshots or Video

It looks like the on-screen keyboard went behind my app window.
### Logs
_No response_
### Flutter Doctor output
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel stable, 3.19.6, on Ubuntu 20.04 LTS 5.4.0-204-generic, locale en_US.UTF-8)
! Upstream repository https://github.com/flutter/flutter.git is not the same as FLUTTER_GIT_URL
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/docs/get-started/install/linux#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
[!] Android Studio (not installed)
[✓] VS Code (version 1.78.2)
[✓] Connected device (1 available)
[✓] Network resources
! Doctor found issues in 4 categories.
| a: text input,engine,a: accessibility,platform-linux,has reproducible steps,team-linux,found in release: 3.27,found in release: 3.28 | medium | Major |
2,768,730,882 | transformers | When gradient checkpointing is enabled, flash_attn_kwargs cannot be passed into the decoder_layer | ### System Info
transformers 4.47.1
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/241c04d36867259cdf11dbb4e9d9a60f9cb65ebc/src/transformers/models/llama/modeling_llama.py#L896C1-L931C54
```python
for decoder_layer in self.layers[: self.config.num_hidden_layers]:
if output_hidden_states:
all_hidden_states += (hidden_states,)
if self.gradient_checkpointing and self.training:
layer_outputs = self._gradient_checkpointing_func(
decoder_layer.__call__,
hidden_states,
causal_mask,
position_ids,
past_key_values,
output_attentions,
use_cache,
cache_position,
position_embeddings,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=causal_mask,
position_ids=position_ids,
past_key_value=past_key_values,
output_attentions=output_attentions,
use_cache=use_cache,
cache_position=cache_position,
position_embeddings=position_embeddings,
**flash_attn_kwargs,
)
```
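The shape of a possible fix can be sketched without `torch` at all. In Transformers, `self._gradient_checkpointing_func` typically wraps `torch.utils.checkpoint.checkpoint`, whose reentrant form forwards positional arguments only, so keyword-only extras like `flash_attn_kwargs` are silently dropped; binding them with `functools.partial` before checkpointing (or using the non-reentrant checkpoint variant, which does accept keyword arguments) would preserve them. The sketch below uses a stand-in `fake_checkpoint` to illustrate the idea — it is not the actual Transformers code:

```python
from functools import partial

def fake_checkpoint(fn, *args):
    # Stand-in for torch.utils.checkpoint.checkpoint: forwards positional args only.
    return fn(*args)

def decoder_layer(hidden_states, attention_mask=None, **flash_attn_kwargs):
    # Returns the kwargs it actually received so we can inspect them.
    return hidden_states, flash_attn_kwargs

flash_attn_kwargs = {"cu_seq_lens_q": [0, 4], "max_length_q": 4}

# Current behavior: the kwargs are simply not passed through.
_, received = fake_checkpoint(decoder_layer, "h", None)
assert received == {}

# Possible fix: bind the kwargs before handing the callable to the
# checkpointing function, so the recomputed forward sees them too.
_, received = fake_checkpoint(
    partial(decoder_layer, **flash_attn_kwargs), "h", None
)
assert received == flash_attn_kwargs
```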
### Expected behavior
x | bug | low | Minor |
2,768,780,083 | storybook | [Bug]: Overriding tsconfig aliases does not work | ### Describe the bug
I'm trying to mock modules using overriding [`tsconfig` aliases](https://nextjs.org/docs/app/getting-started/installation#set-up-absolute-imports-and-module-path-aliases) in [.storybook/main.ts](https://storybook.js.org/docs/get-started/frameworks/nextjs?renderer=react#with-module-aliases) and it does not work for me
`tsconfig`:
```
{
"compilerOptions": {
"target": "ES2017",
"lib": ["dom", "dom.iterable", "esnext"],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
"noEmit": true,
"esModuleInterop": true,
"module": "esnext",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"incremental": true,
"baseUrl": "src/",
"paths": {
"@/util/*": ["util/*"],
"@/testModule": ["util/testModule/index.ts"]
}
},
"include": ["next-env.d.ts", "**/*.ts", "**/*.tsx"],
"exclude": ["node_modules"]
}
```
Aliases replacements in `.storybook/main.ts`
```ts
webpackFinal: async (config) => {
if (config.resolve) {
config.resolve.alias = {
...config.resolve.alias,
// Example from documentation:
// '@/lib/db': path.resolve(__dirname, './lib/db.mock.ts'),
// --------------------------------------------------------
// WORKS — we can see replacement in TestComponent story
nanoid: path.resolve(__dirname, './mocks/nanoid.ts'),
// WORKS — we can see replacement in TestComponent story
'../../util/testModule': path.resolve(__dirname, './mocks/testModule.ts'),
// DOES NOT WORK
// — we can NOT see replacement in TestComponent story
// — if we provide broken path instead of real mock file, Storybook build won't fail because all these aliases replacements actually ignored
'@/util': path.resolve(__dirname, './mocks/testModule.ts'),
'@/util/testModule': path.resolve(__dirname, './mocks/testModule.ts'),
'@/util/testModule$': path.resolve(__dirname, './mocks/testModule.ts'),
'@/testModule': path.resolve(__dirname, './mocks/testModule.ts'),
'@/testModule$': path.resolve(__dirname, './mocks/testModule.ts'),
};
}
// console.log(config.resolve?.alias);
return config;
},
```
### Reproduction link
https://github.com/yoksel/storybook-aliases-bug/
### Reproduction steps
1. Install dependencies: `npm i`
1. Run Storybook: `npm run storybook`
1. Open Storybook and check component `TestComponent`: http://localhost:6007/?path=/story/components-testcomponent--default
1. See `Real content` was not replaced by `MOCKED content`, despite rewriting aliases in `.storybook/main.ts`.
We can also provide broken paths instead of real mocks, like:
```
'@/testModule': path.resolve(__dirname, './mocks/BROKEN_PATH.ts'),
```
If the aliases were applied, the Storybook build would fail on the broken path, but it does not, so replacing aliases for Storybook does not work.
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.0
CPU: (12) arm64 Apple M2 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 18.18.0 - ~/.nvm/versions/node/v18.18.0/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 9.8.1 - ~/.nvm/versions/node/v18.18.0/bin/npm <----- active
pnpm: 8.10.0 - /usr/local/bin/pnpm
Browsers:
Chrome: 131.0.6778.205
Edge: 131.0.2903.112
Safari: 18.0
npmPackages:
@storybook/addon-essentials: ^8.4.7 => 8.4.7
@storybook/addon-interactions: ^8.4.7 => 8.4.7
@storybook/addon-onboarding: ^8.4.7 => 8.4.7
@storybook/blocks: ^8.4.7 => 8.4.7
@storybook/nextjs: ^8.4.7 => 8.4.7
@storybook/react: ^8.4.7 => 8.4.7
@storybook/test: ^8.4.7 => 8.4.7
eslint-plugin-storybook: ^0.11.2 => 0.11.2
storybook: ^8.4.7 => 8.4.7
```
### Additional context
_No response_ | bug,typescript,nextjs | low | Critical |
2,768,795,700 | material-ui | [Collapse] Horizontal Collapse is twitching | ### Steps to reproduce
Steps:
1. Open this link to live example: https://codesandbox.io/p/sandbox/transition-group-with-horizontal-collapse-2frjrp
2. Click on the "Add" button several times
### Current behavior
In some instances a strange twitching happens: the page stretches, which is why a scrollbar appears for a brief moment. The card is at full width for a split second, then snaps back to 0 width and expands slowly.
### Expected behavior
The animation is smooth. There is no twitching, it starts from 0 width and expands as expected.
### Context
I'm trying to achieve [this](https://mui.com/material-ui/transitions/?srsltid=AfmBOoqGwt8xSCgjV5fzpeMoWXfPlto6YrORkAeJ208b9a4EiKS3S-jF#transitiongroup) exact scenario from the documentation but in a horizontal manner.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: horizontal, collapse, twitching, transition-group | bug 🐛,component: Collapse | low | Major |
2,768,796,767 | flutter | Google IAP: Re-subscribing after subscription cancellation on Google Play results in error "BillingResponse.developerError, details: Account identifiers don't match the previous subscription." | ### Steps to reproduce
1. The user purchases a subscription product using Google IAP via the app.
2. The user then cancels the subscription directly on Google Play.
3. The subscription remains active until the end of the current billing cycle but is marked as canceled for future renewals.
4. The user opens the app and tries to re-subscribe to the same product using the `buyNonConsumable` method provided by the `in_app_purchase` Flutter plugin.
5. The app receives the error message: "BillingResponse.developerError, details: Account identifiers don't match the previous subscription."
### Expected results
The user should be able to re-subscribe or restore their canceled subscription without encountering any errors.
### Actual results
Error: BillingResponse.developerError, details: Account identifiers don't match the previous subscription.

### Code sample
```dart
final PurchaseParam purchaseParam = PurchaseParam(
  productDetails: productDetails,
);
_inAppPurchase.buyNonConsumable(purchaseParam: purchaseParam);
```
### Screenshots or Video

### Logs
I/flutter (12345): [InAppPurchase] Error during purchase: BillingResponse.developerError, details: Account identifiers don't match the previous subscription.
### Flutter Doctor output
[✓] Flutter (Channel stable, 3.10.5, on macOS 13.5.1 22G90 darwin-arm64, locale en-US)
• Flutter version 3.10.5 on channel stable at /Users/username/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 796c8ef792 (5 months ago), 2023-07-12 14:37:59 -0700
• Engine revision 879e614c7e
• Dart version 3.0.5
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
• Android SDK at /Users/username/Library/Android/sdk
• Platform android-33, build-tools 33.0.2
• Java binary at: /Applications/Android Studio.app/Contents/jre/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 11.0.19+0-b1751.21-8125866)
[✓] Xcode - develop for iOS and macOS (Xcode 15.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• CocoaPods version 1.11.3
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔗 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔗 https://plugins.jetbrains.com/plugin/6351-dart
[✓] Connected device (2 available)
• iPhone 14 Pro (mobile) • 00008030-001A193234E2002E • ios • iOS 17.0
• Chrome (web) • chrome • web-javascript • Google Chrome 117.0.5938.62
• No issues found! | waiting for customer response,in triage | low | Critical |
2,768,803,219 | ollama | Speed ten times slower than llamafile | ### What is the issue?
Llamafile is much faster on CPU than Ollama: what takes Ollama 33 minutes takes llamafile 3 minutes with the same model.
Unfortunately, llamafile crashes after repeated use and spins its wheels at 100% CPU for hours.
I'd rather use a stable Ollama, but the CPU speed needs work.
### OS
Linux
### CPU
Intel | bug | low | Critical |
2,768,807,550 | deno | Add support for arrays in OTEL attributes | Right now we support strings, bools, numbers, and bigints. We should also support homogeneous arrays of each of those types. | bug,otel | low | Minor |
2,768,809,291 | rust | List of trait implementations should be version-sorted | ### Code
```Rust
// tests/ui/suggestions/issue-71394-no-from-impl.rs
```
### Current output
```Shell
error[E0277]: the trait bound `&[i8]: From<&[u8]>` is not satisfied
--> $DIR/issue-71394-no-from-impl.rs:8:25
|
LL | let _: &[i8] = data.into();
| ^^^^ the trait `From<&[u8]>` is not implemented for `&[i8]`
|
= help: the following other types implement trait `From<T>`:
`[T; 10]` implements `From<(T, T, T, T, T, T, T, T, T, T)>`
`[T; 11]` implements `From<(T, T, T, T, T, T, T, T, T, T, T)>`
`[T; 12]` implements `From<(T, T, T, T, T, T, T, T, T, T, T, T)>`
`[T; 1]` implements `From<(T,)>`
`[T; 2]` implements `From<(T, T)>`
`[T; 3]` implements `From<(T, T, T)>`
[...]
```
### Desired output
```Shell
error[E0277]: the trait bound `&[i8]: From<&[u8]>` is not satisfied
--> $DIR/issue-71394-no-from-impl.rs:8:25
|
LL | let _: &[i8] = data.into();
| ^^^^ the trait `From<&[u8]>` is not implemented for `&[i8]`
|
= help: the following other types implement trait `From<T>`:
`[T; 1]` implements `From<(T,)>`
`[T; 2]` implements `From<(T, T)>`
`[T; 3]` implements `From<(T, T, T)>`
[...]
`[T; 10]` implements `From<(T, T, T, T, T, T, T, T, T, T)>`
`[T; 11]` implements `From<(T, T, T, T, T, T, T, T, T, T, T)>`
`[T; 12]` implements `From<(T, T, T, T, T, T, T, T, T, T, T, T)>`
[...]
```
### Rationale and extra context
The existing sorting displays 10, 11, 12, before it displays 1, 2, 3. Lower numbers should be sorted before higher numbers. In addition to sorting in a more semantically correct fashion, this would also present the user with less complex possibilities first.
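The desired ordering is a natural (version-aware) sort rather than a plain lexicographic one: digit runs are compared numerically instead of character by character. As a rough illustration only (this is not the compiler's actual sorting code), in Python:

```python
import re

def natural_key(s):
    # Split into digit and non-digit runs; compare digit runs numerically.
    return [int(t) if t.isdigit() else t for t in re.split(r"(\d+)", s)]

impls = ["[T; 10]", "[T; 11]", "[T; 12]", "[T; 1]", "[T; 2]", "[T; 3]"]
print(sorted(impls, key=natural_key))
# ['[T; 1]', '[T; 2]', '[T; 3]', '[T; 10]', '[T; 11]', '[T; 12]']
```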
### Other cases
```Rust
```
### Rust Version
```Shell
Git commit 7349f6b50359fd1f11738765b8deec5ee02d8710
```
### Anything else?
_No response_ | A-diagnostics,T-compiler,D-papercut | low | Critical |
2,768,821,538 | flutter | Need a Sliver Variant of `DragTarget` | ### Use case
A sliver variant of `DragTarget` would make it easier to implement advanced drag-and-drop workflows in scrollable, sliver-based layouts. This would simplify development for UIs that involve dynamic, interactive lists or grids within `CustomScrollView`.
eg:
```dart
CustomScrollView(
slivers: [
// ...
SliverList.builder(...),
DragTarget<String>( // Does not work
builder: (context, candidateData, rejectedData) {
return SliverList.builder(
itemCount: items.length,
itemBuilder: (context, index) {
return Text(items[index].name);
},
);
},
),
SliverList.builder(...),
// ...
],
)
```
I asked this question on Stack Overflow but did not find a solution. You can view the question here: https://stackoverflow.com/questions/79304188/how-to-make-a-sliverlist-a-dragtarget-in-flutter
### Proposal
Introduce a `SliverDragTarget` widget that can seamlessly integrate with `CustomScrollView` and other sliver-based layouts, allowing developers to apply drag-and-drop functionality to slivers directly | c: new feature,framework,f: scrolling,c: proposal,P3,team-framework,triaged-framework | low | Major |
2,768,822,952 | deno | Emphasize user code in stack traces | Creating a new issue after discussion in [#24002](https://github.com/denoland/deno/issues/24002#issuecomment-2561927997).
I’d like to propose a feature to improve readability when working with stack traces. Currently, assertion errors generate a verbose stack trace that includes both user code and internal library or runtime code. It can be challenging to quickly locate the lines relevant to my own logic amidst calls from the Deno standard library or third-party modules.
Failed assertions would be a good start for this feature, given stack traces are gnarly enough that a [flag was created to hide them](https://github.com/denoland/deno/issues/24002).
```
error: AssertionError: Values are not strictly equal.
[Diff] Actual / Expected
- 5
+ 3
throw new AssertionError(message);
^
at assertStrictEquals (https://jsr.io/@std/assert/1.0.10/strict_equals.ts:66:9)
at toBe (https://jsr.io/@std/expect/1.0.10/_matchers.ts:29:5)
at applyMatcher (https://jsr.io/@std/expect/1.0.10/expect.ts:223:13)
at Proxy.<anonymous> (https://jsr.io/@std/expect/1.0.10/expect.ts:233:13)
at checkQueueAndResults (file:///home/dandv/project/tests/common.ts:187:39) ⬅️
at Object.runMicrotasks (ext:core/01_core.js:683:26)
at processTicksAndRejections (ext:deno_node/_next_tick.ts:59:10)
at runNextTicks (ext:deno_node/_next_tick.ts:76:3)
at eventLoopTick (ext:core/01_core.js:182:21)
at async Object.<anonymous> (file:///home/dandv/project/tests/unit.test.ts:349:7) ⬅️
```
# Proposal
– Highlight or visually distinguish user code in the stack trace, for example with a marker like ⬅️ (since bold and colors are already used)
– Either hide references to the Deno and library calls, or de-emphasize them (greyed out?) so they don’t obscure the code paths I’m actively debugging. | suggestion,testing | low | Critical |
2,768,847,027 | next.js | Page crashes and "Error: Connection closed." in log | ### Link to the code that reproduces this issue
https://github.com/Abhii5496/store-thing
### To Reproduce
visit : https://store-thing.netlify.app/products
just select any product , im using Link tag
https://store-thing.netlify.app/products/1
when i hard refresh at same url it works
### Current vs. Expected behavior
visit : https://store-thing.netlify.app/products
just select any product , im using Link tag
https://store-thing.netlify.app/products/1
when i hard refresh at same url it works
### Provide environment information
```bash
"next": "15.1.3",
"react": "^19.0.0",
"react-dom": "^19.0.0",
```
### Which area(s) are affected? (Select all that apply)
Navigation, Runtime
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
next 15.1.3 version | Navigation,Runtime | low | Critical |
2,768,852,591 | rust | `impl ... for Box<CoreType>` in alloc should not require `#[cfg(not(test))]` | `library/alloc/src/ffi/c_str.rs` has:
```rust
#[cfg(not(test))]
#[stable(feature = "box_from_c_str", since = "1.17.0")]
impl From<&CStr> for Box<CStr> {
```
As far as I can tell, `#[cfg(not(test))]` is a workaround for something like https://github.com/rust-lang/rust/issues/87534, as evidenced by error messages like:
```note: the crate `alloc` is compiled multiple times, possibly with different configurations```
`alloc` is being compiled a second time as part of tests, and the two compilations are conflicting.
1) This workaround shouldn't be necessary.
2) This should be much better documented, so it doesn't send people on multi-hour debugging adventures when they try to write *new* trait impls in `alloc`. | T-compiler,T-bootstrap,A-docs,C-bug,E-needs-investigation | low | Critical |
2,768,874,547 | neovim | `vim.o.term` is missing | ### Problem
`vim.o.term` used to be available in 0.9.x but now it's gone in 0.10.x. The Vimscript equivalent option is still present (`:echo &term`).
More info in this discussion: https://github.com/neovim/neovim/discussions/28982
### Steps to reproduce
- nvim --clean
- :lua print(vim.o.term)
- :lua print(vim.opt.term)
### Expected behavior
Both commands should print the detected terminal.
### Nvim version (nvim -v)
0.10.3
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Arch Linux
### Terminal name/version
Alacritty 0.14.0 / Neovide 0.14.0
### $TERM environment variable
xterm-256color
### Installation
Arch Linux (extra) | bug,compatibility,complexity:low,options | low | Major |
2,768,880,038 | go | proposal: go/parser: deprecate parser.ParseDir (which returns deprecated type ast.Package) | ### Go version
1.23.4
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/soeren/.cache/go-build'
GOENV='/home/soeren/.config/go/env'
GOEXE=''
GOEXPERIMENT='nocoverageredesign'
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/soeren/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/soeren/go'
GOPRIVATE=''
GOPROXY='direct'
GOROOT='/opt/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/soeren/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/soeren/usrdev/go-util-mutation/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2925151345=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Write some simple code using the go/ast and go/parser:
```
func createAST() {
ast := map[string]*ast.Package{}
fset := token.NewFileSet()
chain := &util.ChainContext{}
chain.Chain(func() {
ast, chain.Err = parser.ParseDir(fset, ".", nil, 0)
}).Chain(func() {
chain.Err = format.Node(os.Stdout, fset, ast)
}).ChainFatal("Error")
}
```
### What did you see happen?
The IDE marks `ast.Package` as deprecated conforming to the comment on the type. But `parser.ParseDir()` nevertheless references `ast.Package` in its return type.
### What did you expect to see?
Either:
* Having deprecated types consistently replaced in the stdlib. Thus, let `ParseDir()` use the recommended replacement type.
Or:
* Having `ParseDir()` marked as deprecated too. | Proposal | low | Critical |
2,768,887,057 | react-native | Dev app crashes when switching color scheme while Text element with DynamicColorIOS is used | ### Description
The React Native app in development mode will crash when switching the color scheme (through system or the app) while there's a `Text` element using `DynamicColorIOS`. (see `App.tsx` in repro)
The error message shown in Xcode is:
```
Assertion failure: self.size() > 3 || std::none_of( self.begin(), self.end(), [&](auto const& k) { return self.key_eq()(key, k); })
Message:
File: .../ReproducerApp/ios/Pods/Headers/Public/RCT-Folly/folly/container/detail/F14SetFallback.h
Line: 239
Function: findImpl
```
(It's a `FOLLY_SAFE_DCHECK` assertion written [here](https://github.com/facebook/folly/blob/v2024.12.30.00/folly/container/detail/F14SetFallback.h#L228-L234))
Stack trace shows the assertion failure occurs while [calling `_cache.get` here in `RCTTextLayoutManager.mm`](https://github.com/facebook/react-native/blob/v0.76.5/packages/react-native/ReactCommon/react/renderer/textlayoutmanager/platform/ios/react/renderer/textlayoutmanager/RCTTextLayoutManager.mm#L337):
```mm
- (NSAttributedString *)_nsAttributedStringFromAttributedString:(AttributedString)attributedString
{
auto sharedNSAttributedString = _cache.get(attributedString, [](AttributedString attributedString) {
return wrapManagedObject(RCTNSAttributedStringFromAttributedString(attributedString));
});
return unwrapManagedObject(sharedNSAttributedString);
}
```
A fun fact: due to the assertion condition `(self.size() > 3 || ...)`, the app won't crash if we add more `Text` elements to the screen.
This also doesn't affect release builds since [that assertion](https://github.com/facebook/folly/blob/b5650df8bc2d88472aa335a4ccebf6717e470994/folly/lang/SafeAssert.h#L63-L72) seems to [be](https://github.com/facebook/folly/blob/b5650df8bc2d88472aa335a4ccebf6717e470994/folly/lang/SafeAssert.h#L29) [dev-only](https://github.com/facebook/folly/blob/b5650df8bc2d88472aa335a4ccebf6717e470994/folly/Portability.h#L286-L293).
### Steps to reproduce
1. Run the app through Xcode.
2. Switch color scheme.
3. Notice the crash.
### React Native Version
0.76.0, 0.76.4, 0.76.5
### Affected Platforms
Runtime - iOS
### Areas
(not sure) Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (8) arm64 Apple M1
Memory: 203.86 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.15.0
path: ~/.asdf/installs/nodejs/20.15.0/bin/node
Yarn:
version: 1.22.22
path: ~/.asdf/installs/nodejs/20.15.0/bin/yarn
npm:
version: 10.7.0
path: ~/.asdf/plugins/nodejs/shims/npm
Watchman:
version: 2024.06.24.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/z/.asdf/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.1
path: /opt/homebrew/opt/openjdk/bin/javac
Ruby:
version: 3.0.2
path: /Users/z/.asdf/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
Assertion failure: self.size() > 3 || std::none_of( self.begin(), self.end(), [&](auto const& k) { return self.key_eq()(key, k); })
Message:
File: .../ReproducerApp/ios/Pods/Headers/Public/RCT-Folly/folly/container/detail/F14SetFallback.h
Line: 239
Function: findImpl
```
### Reproducer
https://github.com/zetavg/rn-crash-DynamicColorIOS-color-scheme-switch
### Screenshots and Videos

| Platform: iOS,Issue: Author Provided Repro,Component: Switch,Newer Patch Available,Type: New Architecture | low | Critical |
2,768,888,452 | godot | godot crashes after I close the debug window with the game | ### Tested versions
- version: 4.3-stable, 15 August 2024
### System information
windows 11 pro, Vulkan (Forward+), HexaCore AMD Ryzen 5 5600, 4442 MHz (44.5 x 100), ASRock B450M Pro4 R2.0, GTX 1650 4GB
### Issue description
I bought a new motherboard and processor (ASRock B450M Pro4 R2.0, Ryzen 5 5600), and the issue appeared once I started working in Godot. After I close the debug window with the game, rarely, an error occurs: my monitor turns off completely, and after 2-3 seconds it turns back on and Godot becomes a gray window without buttons, etc.
The error is endlessly reported to the console:
```
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: fence_wait (drivers/vulkan/rendering_device_driver_vulkan.cpp:2066)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: ERR_CANT_CREATE
at: swap_chain_resize (drivers/vulkan/rendering_device_driver_vulkan.cpp:2718)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: fence_wait (drivers/vulkan/rendering_device_driver_vulkan.cpp:2066)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
ERROR: Condition “err != VK_SUCCESS” is true. Returning: FAILED
at: command_queue_execute_and_present (drivers/vulkan/rendering_device_driver_vulkan.cpp:2266)
```
### Steps to reproduce
the error appears only after I close the debug window
### Minimal reproduction project (MRP)
the error appears in 2 of my projects | bug,topic:rendering,crash | low | Critical |
2,768,903,609 | yt-dlp | iQIYI: Unable to download webpage: HTTP Error 413: Request Entity Too Large | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
I am encountering an issue with downloading multiple videos from iQIYI. About a week ago, everything was working fine. However, now I am consistently getting the following error:
Unable to download webpage: HTTP Error 413: Request Entity Too Large
To temporarily resolve the issue, I have to repeatedly refresh the iQIYI page and export a new cookie. This workaround is inconvenient and disruptive.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--yes-playlist', '--xff', 'default', '--fragment-retries', '10', '--abort-on-unavailable-fragments', '-a', 'D:\\Users\\rnpasinos\\Downloads\\(Convert)\\(YT-DLP)\\dl_video.txt', '-o', 'D:\\Users\\rnpasinos\\Downloads\\(Convert)\\(YT-DLP)\\%(playlist_title)s\\%(title)s [%(upload_date)s] [%(id)s] [%(resolution)s].%(ext)s', '-w', '--cookies', 'D:\\Users\\rnpasinos\\Downloads\\cookies.txt', '-v', '-f', 'bv*[height=1080]+ba/b[height=1080]', '--merge-output-format', 'mkv', '--sub-format', 'best', '--sub-lang', 'en,en-GB,en-US,English', '--embed-subs', '--embed-thumbnail', '--embed-metadata', '--convert-subs', 'srt', '--convert-thumbnails', 'png']
[debug] Batch file urls: ['https://www.iq.com/play/story-after-eternal-love-episode-24-19rreuo8kk?lang=en_us']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-essentials_build-www.gyan.dev (setts), ffprobe 7.1-essentials_build-www.gyan.dev, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[iq.com] Extracting URL: https://www.iq.com/play/story-after-eternal-love-episode-24-19rreuo8kk?lang=en_us
[iq.com] 19rreuo8kk: Downloading webpage
ERROR: [iq.com] 19rreuo8kk: Unable to download webpage: HTTP Error 413: Request Entity Too Large (caused by <HTTPError 413: Request Entity Too Large>)
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\iqiyi.py", line 575, in _real_extract
File "yt_dlp\extractor\common.py", line 1201, in _download_webpage
File "yt_dlp\extractor\common.py", line 1152, in download_content
File "yt_dlp\extractor\common.py", line 962, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 911, in _request_webpage
File "yt_dlp\extractor\common.py", line 898, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4172, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 365, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 413: Request Entity Too Large
```
| incomplete,account-needed,site-bug,triage | low | Critical |
2,768,910,810 | go | runtime: SEGV in mapaccess2 (go1.23.2 darwin/amd64) | ```
#!stacks
"runtime.sigpanic" && "runtime.mapaccess2:+22"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
The line numbers are again suspicious, possibly an instance of #70885.
This stack `zjaBOQ` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2025-01-03.json):
- `crash/crash`
- [`runtime.throw:+9`](https://cs.opensource.google/go/go/+/go1.23.2:src/runtime/panic.go;l=1067)
- [`runtime.sigpanic:+33`](https://cs.opensource.google/go/go/+/go1.23.2:src/runtime/signal_unix.go;l=914)
- [`runtime.interhash:+6`](https://cs.opensource.google/go/go/+/go1.23.2:src/runtime/alg.go;l=160)
- [`runtime.mapaccess2:+22`](https://cs.opensource.google/go/go/+/go1.23.2:src/runtime/map.go;l=501)
- [`golang.org/x/tools/internal/gcimporter.(*iexporter).typOff:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=973)
- [`golang.org/x/tools/internal/gcimporter.(*exportWriter).typ:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=959)
- [`golang.org/x/tools/internal/gcimporter.(*exportWriter).doTyp:+44`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=1031)
- [`golang.org/x/tools/internal/gcimporter.(*iexporter).typOff:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=976)
- [`golang.org/x/tools/internal/gcimporter.(*exportWriter).typ:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=959)
- [`golang.org/x/tools/internal/gcimporter.(*exportWriter).param:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=1278)
- [`golang.org/x/tools/internal/gcimporter.(*exportWriter).paramList:+4`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=1271)
- [`golang.org/x/tools/internal/gcimporter.(*exportWriter).signature:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=1210)
- [`golang.org/x/tools/internal/gcimporter.(*iexporter).doDecl:+141`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=832)
- [`golang.org/x/tools/internal/gcimporter.iexportCommon:+55`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=390)
- [`golang.org/x/tools/internal/gcimporter.IExportShallow:+8`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:internal/gcimporter/iexport.go;l=286)
- [`golang.org/x/tools/gopls/internal/cache.storePackageResults:+9`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=443)
```
golang.org/x/tools/[email protected] go1.23.2 darwin/amd64 other (1)
```
@prattmic
| NeedsInvestigation,gopls,Tools,compiler/runtime,gopls/telemetry-wins,BugReport | low | Critical |
2,768,914,377 | angular | Routing documentation does not appear to describe how to change route programmatically | ### Describe the problem that you experienced
Unless I'm missing something, there does not appear to be anything in the documentation on how to control the loaded route programmatically.
[Common routing tasks](https://angular.dev/guide/routing/common-router-tasks) - nothing here.
[Routing in single-page applications](https://angular.dev/guide/routing/router-tutorial) - nothing here.
[Creating custom route matches](https://angular.dev/guide/routing/routing-with-urlmatcher) - nothing here.
[Router reference](https://angular.dev/guide/routing/router-reference#configuration) - nothing here.
### Enter the URL of the topic with the problem
_No response_
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
``` | needs: clarification,area: docs | low | Critical |
2,768,929,686 | godot | Atlas Merge Tool crashes when selecting entries to merge 2 | ### Tested versions
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 980 Ti (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 Threads)
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 980 Ti (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 Threads)
### Issue description
Godot crashes when I select multiple atlases to merge using the Atlas Merge Tool.
I found [this closed issue from 2023](https://github.com/godotengine/godot/issues/75354) that seemed like the same problem. I downloaded the MRP and everything worked fine, so this doesn't seem to be the same issue.
I suspect it only crashes when you try to merge atlases that have already had their properties changed and/or used in a TileMapLayer.
### Steps to reproduce
1. Load several atlases into a Tile Set resource
2a. Define collision shapes for several tiles
and/or
2b. Set up animations for several tiles
and/or
2c. Set up terrain sets for several tiles
and/or
2d. Use tiles in a TileMapLayer node
3. Open Atlas Merge Tool
4. Hold shift and select multiple atlases
### Minimal reproduction project (MRP)
The assets used for the MRP are CC0. They are from here: https://kevins-moms-house.itch.io/
[atlas merge bug.zip](https://github.com/user-attachments/files/18307593/atlas.merge.bug.zip)
| bug,topic:editor,confirmed,crash,topic:2d | low | Critical |
2,768,939,613 | PowerToys | Add an option to re-arrange image sizes in Image Resizer | ### Description of the new feature / enhancement
I'd like an option to re-arrange image sizes in Image Resizer, please. This would make it easier to put the preset sizes in any order, for example:
From
Small - 854x480
Medium - 1024x768
Large - 1280x1024
To
Large - 1280x1024
Small - 854x480
Medium - 1024x768
### Scenario when this would be used?
Either drag-and-drop the image preset sizes or click the up and down arrows to re-arrange them.
### Supporting information
It'd be useful to re-arrange the image preset sizes instead of deleting and manually re-creating them. | Needs-Triage | low | Minor |
2,768,939,679 | PowerToys | CalculatorApp.exe on Excluded apps for Fancy Zones, maximizes to zone when first opened. | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
CalculatorApp.exe is on the excluded apps list for FancyZones, but it maximizes to a zone when it is first opened. It should open with its last width and height.
### ✔️ Expected Behavior
It should be opened with its last width and height.
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,768,940,358 | neovim | Allow `String`s to not be nul-terminated | ### Problem
Even though the [`String`](https://github.com/neovim/neovim/blob/master/src/nvim/api/private/defs.h#L80-L83) struct contains a `size` field, the string data is still required to be nul-terminated.
I imagine this is to use it in functions operating on C strings, but it also creates some issues:
- performance overhead: need to scan for the nul terminator to find a string's end, even though we already know its length;
- double source of truth: having both the `size` field and nul terminator means there are two ways to determine a string's length, which need to be kept in sync;
- memory waste: each string requires an extra byte;
Personally, I'd like to avoid having to allocate every time a string is passed from Rust to C in `nvim-oxi`, which isn't currently possible due to `&str`s not being nul-terminated.
### Expected behavior
I'm curious how feasible it would be to modify the internal string-handling functions to work with length-delimited strings. This could be done by either:
- having them take a `String` instead of a `char *const` and use its `size` field;
- adding a length parameter;
External functions like `xstrdup` would need to be re-implemented.
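To make the motivation concrete, here is a minimal Rust sketch of the `nvim-oxi` side of this: a `#[repr(C)]` mirror of the API `String` struct (field names `data`/`size` taken from the linked `defs.h`) built by borrowing a `&str` directly, with no allocation and no nul terminator. This only becomes valid to hand to C once the C side honors `size` instead of scanning for `'\0'`; the struct name `NvimString` and the helper are hypothetical illustrations, not existing `nvim-oxi` API.

```rust
use std::os::raw::c_char;

/// Mirror of Neovim's API `String` struct from api/private/defs.h:
/// `typedef struct { char *data; size_t size; } String;`
#[repr(C)]
pub struct NvimString {
    data: *const c_char,
    size: usize,
}

impl NvimString {
    /// Borrow a Rust `&str` directly: zero allocation, zero copies,
    /// and no trailing nul byte. Safe to pass to C only if the C side
    /// treats `size` as the single source of truth for length.
    pub fn from_str(s: &str) -> Self {
        NvimString {
            data: s.as_ptr() as *const c_char,
            size: s.len(),
        }
    }
}

fn main() {
    let nv = NvimString::from_str("hello");
    // Length is carried by `size`; no scan for '\0' is needed,
    // and embedded nuls would also round-trip correctly.
    assert_eq!(nv.size, 5);
    assert!(!nv.data.is_null());
    println!("ok");
}
```

With the current nul-termination requirement, this `from_str` is impossible and every `&str` crossing the boundary must be copied into a nul-terminated allocation first.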
Thoughts? I'm happy to work on this if there's interest. | enhancement,performance,core | low | Major |
2,768,942,408 | rust | The implementation of `InPlaceIterable` for `Flatten`&`FlatMap` is unsound | ```rust
const S: String = String::new();
fn main() {
let v = vec![[S, "Hello World!".into()], [S, S]];
let mut i = v.into_iter().flatten();
let _ = i.next();
let result: Vec<String> = i.clone().collect();
println!("{result:?}");
}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=4dd1a5df2c3d8a19afc99b33f71264c3))
```text
["\0\0\0\0\0\0\0\0rld!", "\0\0\0\0\0\0\0\0rld!", ""]
free(): invalid pointer
[1] 1131073 IOT instruction (core dumped)
```
The above code is analogous to #85322 but applied to `Flatten` instead of `Peekable`: cloning the whole iterator doesn't preserve capacity in the inner `vec::IntoIter`. (This also applies to `FlatMap`.)
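The allocation-identity point can be illustrated with a safe sketch (no UB, just stable `vec::IntoIter` API): cloning a partially consumed `vec::IntoIter` gives the clone a fresh, smaller buffer rather than a view of the original allocation. The in-place collect machinery, by contrast, assumes it can reuse and eventually free the source `Vec`'s allocation, which is where the bookkeeping mismatch comes from.

```rust
fn main() {
    let v = vec![1u32, 2, 3, 4];
    let mut it = v.into_iter();
    let orig_ptr = it.as_slice().as_ptr();
    it.next(); // partially consume before cloning

    // The clone copies only the remaining elements into a new
    // allocation; both buffers are alive here, so the pointers
    // are guaranteed to differ.
    let cloned = it.clone();
    assert_ne!(cloned.as_slice().as_ptr(), orig_ptr);
    assert_eq!(cloned.as_slice(), &[2, 3, 4]);
    println!("ok");
}
```

This on its own is fine; the unsoundness arises only when the in-place `Flatten`/`FlatMap` specialization combines such a clone with assumptions about the original buffer's pointer and capacity.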
Introduced in 1.76
cc @the8472, #110353
@rustbot label T-libs, I-unsound, A-iterators | I-unsound,C-bug,A-iterators,T-libs | low | Critical |