id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,753,691,278 | pytorch | [torch.compile] `torch.compile` throws an error when nn.Module contains a dataclass with float values. | ### 🐛 Describe the bug
I think `torch.compile` somehow treats scalar values as `tensors`, but I don't see why scalar values should be a problem in this case.
The following helps, but it falls back to eager mode:
```python
torch._dynamo.config.suppress_errors = True
```
What is the best way to debug this?
```
Traceback (most recent call last):
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/__init__.py", line 2235, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 586, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 789, in fx_codegen_and_compile
_recursive_post_grad_passes(gm, is_inference=is_inference)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 288, in _recursive_post_grad_passes
post_grad_passes(gm, is_inference)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/fx_passes/post_grad.py", line 100, in post_grad_passes
patterns.apply(gm.graph) # type: ignore[arg-type]
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 1729, in apply
if is_match(m) and entry.extra_check(m):
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/_inductor/fx_passes/quantization.py", line 1448, in fn
scales = match.kwargs["scales"].meta["val"]
```
### Versions
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] open_clip_torch==2.28.0
[pip3] optree==0.13.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] torch==2.5.0
[pip3] torch_scatter==2.1.2.dev4
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.6.1
[pip3] torchaudio==2.5.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,753,698,269 | godot | Documentation tooltip disappears when you move the mouse - 4.4 dev7 | ### Tested versions
Godot 4.4 Dev 7 (MACOS mono build)
### System information
MacOS Sequoia 15.2 running on an M1 Pro
### Issue description
Several issues:
1) Tooltip disappears when you move your mouse over it. It should persist instead, and only fade out after roughly half a second once the mouse is no longer over the tooltip.
2) Cannot click on any hyperlinks within the tooltip, for example in the video I cannot click on the Color Cheatsheet web link on the bottom
3) Cannot scroll down
4) It would be nice to be able to click on documentation links within the tooltip. For example, when hovering over @export_range, we should also be able to click on the PROPERTY_HINT_RANGE link that appears under "See also" in the documentation.
I'm running on an M1 Pro with macOS Sequoia 15.2. Single window mode was off when the issues occurred; with it on, I still have the same issues.
### Steps to reproduce
Just hover over a keyword and you will notice all the mentioned issues.
https://github.com/user-attachments/assets/689e6cf7-392e-4dae-a2e0-9f9a811bbc90
### Minimal reproduction project (MRP)
NA | bug,topic:editor,needs testing | low | Major |
2,753,722,279 | three.js | GLTFLoader: Error loading .glb from Revit 3D Tiles 1.1 export | ### Description
I exported a Revit file to 3D Tiles (3D Tiles 1.1 standard, Meshopt compression). It loads normally in Cesium, but in 3DTilesRendererJS it either cannot be loaded or the model loses objects after loading. After communicating with 3DTilesRendererJS ( https://github.com/NASA-AMMOS/3DTilesRendererJS/issues/879 ), I think it is related to the glTF loader of three.js. I am providing the 3D Tiles model and hope to get help. Thank you.
### Reproduction steps
1. Export Revit to 3D Tiles
2. Load in 3DTilesRendererJS
### Code
```js
// code goes here
```
### Live example
Revit exported to a 3D Tiles model (3D Tiles 1.1):
[dtz2.zip](https://github.com/user-attachments/files/18216996/dtz2.zip)
### Screenshots


### Version
r162
### Device
Desktop
### Browser
Edge
### OS
Windows | Needs Investigation | low | Critical |
2,753,756,752 | transformers | `RuntimeError: self and mat2 must have the same dtype, but got Float and BFloat16` when training with `torch_compile` | ### System Info
- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.12.5
- Huggingface_hub version: 0.25.1
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@ArthurZucker @muellerzr @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I try continual pretraining of ModernBERT with an MLM objective, with the `torch_compile` flag of my `TrainingArguments` set to `True`, I get the error below:
```python
0%| | 0/1223301 [00:00<?, ?it/s]
/home/dev/.venv/lib/python3.12/site-packages/onnxscript/converter.py:820: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/home/dev/.venv/lib/python3.12/site-packages/onnxscript/converter.py:820: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] Graph break from `Tensor.item()`, consider setting:
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] torch._dynamo.config.capture_scalar_outputs = True
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] or:
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] to include these operations in the captured graph.
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0]
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] Graph break: from user code at:
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 711, in torch_dynamo_resume_in__unpad_modernbert_input_at_710
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0] max_seqlen_in_batch = int(seqlens_in_batch.max().item())
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0]
W1221 15:33:48.046000 14779 torch/_dynamo/variables/tensor.py:776] [4/0]
Traceback (most recent call last):
File "/home/dev/encoder/scripts/train/train_modernbert.py", line 206, in <module>
trainer.train(
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2163, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2523, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3668, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3722, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 820, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 808, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1023, in forward
@add_start_docstrings_to_model_forward(MODERNBERT_INPUTS_DOCSTRING)
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1055, in torch_dynamo_resume_in_forward_at_1055
input_ids, indices, cu_seqlens, max_seqlen, position_ids, labels = _unpad_modernbert_input(
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 913, in forward
layer_outputs = encoder_layer(
^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 519, in forward
def forward(
File "/home/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 529, in torch_dynamo_resume_in_forward_at_529
attn_outputs = self.attn(
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 98, in g
return f(*args)
^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1525, in forward
fw_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 488, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 667, in inner_fn
outs = compiled_fn(args)
^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1478, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dev/.venv/lib/python3.12/site-packages/torch/_inductor/utils.py", line 1977, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_/y7/cy7xv2rrzhznbq3e2wurnq5pmygfytvnpovxlh5bugtoa3ebwy6f.py", line 277, in call
extern_kernels.addmm(buf9, buf7, reinterpret_tensor(buf8, (1152, 768), (1, 1152), 0), alpha=1, beta=1, out=buf10)
RuntimeError: self and mat2 must have the same dtype, but got Float and BFloat16
```
This does not occur when finetuning for a classification task.
I am using `bfloat16` mixed precision.
### Expected behavior
The training works. | bug | low | Critical |
2,753,782,631 | flutter | Letter spacing is wrong for text with long length | ### Steps to reproduce
1. Create a Text widget with text of length >= 4,449,555 characters.
### Expected results
Letter spacing is correct and consistent throughout the entire text.
### Actual results
Letter spacing is not consistent and is wrong towards the end of the text.
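A plausible (unverified) contributor, given that the problem only appears at multi-million-character lengths: glyph positions accumulated in 32-bit floats lose sub-pixel precision at large magnitudes. A plain-Python illustration of the precision cliff (the connection to Flutter's text engine is an assumption, not something confirmed by this report):

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python float through IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Near the origin a 0.1 px advance survives; at offsets in the millions
# (roughly where a multi-million-character text ends up), the float32 ulp
# is 0.25 px, so small spacing increments are silently dropped.
print(to_f32(100.0 + 0.1) - to_f32(100.0))              # ~0.1
print(to_f32(4_000_000.0 + 0.1) - to_f32(4_000_000.0))  # 0.0
```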
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const SampleApp());
class SampleApp extends StatelessWidget {
const SampleApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('Sample')),
body: ListView(
children: [
Align(
alignment: Alignment.topCenter,
child: Container(
width: 512,
child: Text(
'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.' * 9999,
),
),
),
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Correct letter spacing at the beginning of the text:
<img width="2560" alt="Screenshot 2024-12-20 at 9 31 30 PM" src="https://github.com/user-attachments/assets/ab337331-f2ab-4f61-bbea-b99793412622" />
Wrong letter spacing at the end of the text (this text is 4,449,555 characters long):
<img width="2560" alt="Screenshot 2024-12-20 at 9 31 40 PM" src="https://github.com/user-attachments/assets/87643522-9310-4ff4-8dd0-d7d894f89d38" />
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /Users/charleslu/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 days ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✗] Xcode - develop for iOS and macOS
✗ Xcode installation is incomplete; a full installation is necessary for iOS and macOS development.
Download at: https://developer.apple.com/xcode/
Or install Xcode via the App Store.
Once installed, run:
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
sudo xcodebuild -runFirstLaunch
✗ CocoaPods not installed.
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
For installation instructions, see https://guides.cocoapods.org/using/getting-started.html#installation
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
[✓] VS Code (version 1.96.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 3 categories.
```
</details>
| engine,a: typography,has reproducible steps,P3,team-engine,triaged-engine,found in release: 3.27,found in release: 3.28 | low | Minor |
2,753,784,715 | flutter | [LUCI] Engine builds can be excluded from pure framework updates | ### Type of Request
feature request
### Infrastructure Environment
LUCI
### What is happening?
Currently, every PR schedules engine builds. See https://github.com/flutter/flutter/pull/160643/checks as an example.
### Expected results
Not running engine builds for framework-only updates would make it easier to trace exceptions to specific commits; alternatively, those builds could be delayed until the request is pushed into the merge queue. | team-infra | low | Major |
2,753,815,666 | excalidraw | Inconsistent brush stroke which changes with speed | This is pretty annoying, to be honest.

Using a drawing pen, the stroke changes based on the speed at which I draw. I don't see how this could be useful in any sense; if it were based on pressure that would be fine, but it's based on speed.

As you can see, writing with a pen becomes immensely annoying, and this is the only thing that has kept me on my other drawing apps instead of Excalidraw for visual note-taking. :/ I really hope this issue gets fixed soon, or already has a fix (tell me if so). | freedraw | low | Minor |
2,753,821,931 | PowerToys | [PowerToys Run] causes some fullscreen apps/3D games to minimize on launch. | ### Microsoft PowerToys version
0.87.1
### Installation method
WinGet
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Clean install of Windows 10 (IoT LTSC used, but Win11 also affected), NVIDIA drivers, and Passmark Performance Test (other apps affected, but this one is a simple reproducer).
Install powertoys using winget.
Run Passmark GPU tests. DX9 test won't run. DX11 test won't run. DX10, 12 tests run with obscenely low framerates.
Run a game (I ran Quake2 RTX).
Game launches directly, then minimizes to task bar.
Tabbing to game results in screen changing res and game minimizing.
Impossible to keep game fullscreen.
Disable PowerToys Run
Launch Quake 2 RTX, it runs fine. Run all Passmark GPU tests, good scores.
Enable PowerToys Run
Unable to keep Quake 2 RTX open fullscreen. Unable to run Passmark GPU tests as seen above.
### ✔️ Expected Behavior
Enable PowerToys Run
Launch Quake 2 RTX; it runs fine. Run all Passmark GPU tests; good scores.
### ❌ Actual Behavior
Enable PowerToys Run
Unable to keep Quake 2 RTX open fullscreen. Unable to run Passmark GPU tests as seen above.
### Other Software
Using Passmark Performance Test 11.1 to reproduce, but observed with multiple applications, primarily Quake 2 RTX from Steam and Deus Ex: Game of the Year Edition. | Issue-Bug,Product-PowerToys Run,Needs-Triage,Needs-Team-Response | low | Major |
2,753,823,513 | PowerToys | Quick Accent - Add Vietnamese language support | ### Description of the new feature / enhancement
As [the most diacriticated language currently in use](https://linguistics.stackexchange.com/q/16850), it would be useful to add Vietnamese support in Quick Accents.
May also address this misfiled issue #22347
### Scenario when this would be used?
As it would be used with any other of the supported languages, when typing on a pc with unsupported keyboard mapping.
### Supporting information
_No response_ | Idea-Enhancement,Help Wanted,Product-Quick Accent | low | Minor |
2,753,831,357 | PowerToys | Advanced Paste: remove locale part from URL | ### Description of the new feature / enhancement
Some documentation sites' URLs contain a locale string (en-us, ja-jp).
for example : https://learn.microsoft.com/en-us/windows/powertoys/install
Could you remove `en-us` when pasting from the clipboard?
copied string: https://learn.microsoft.com/en-us/windows/powertoys/install
pasted string: https://learn.microsoft.com/windows/powertoys/install
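As a sketch of the transformation being requested (plain Python; the "two letters, hyphen, two letters" locale segment right after the host is an assumption about the URL shapes involved, not PowerToys' actual behavior):

```python
import re

# Assumed locale shape: a single "xx-xx" path segment directly after the host,
# e.g. /en-us/ or /ja-jp/. Illustrative only, not PowerToys' implementation.
LOCALE_SEGMENT = re.compile(r"(https?://[^/]+)/[a-z]{2}-[a-z]{2}(/|$)", re.IGNORECASE)

def strip_locale(url: str) -> str:
    """Drop a leading locale path segment such as en-us or ja-jp."""
    return LOCALE_SEGMENT.sub(r"\1\2", url)

print(strip_locale("https://learn.microsoft.com/en-us/windows/powertoys/install"))
# https://learn.microsoft.com/windows/powertoys/install
```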
### Scenario when this would be used?
When a URL containing a locale such as `en-us` is provided, the browser will display the specified URL.
However, if a URL without a locale is given, sites like Microsoft Learn automatically redirect to the browser's default language.
I believe it would be very convenient if the Advanced Paste feature could optionally remove the locale part, allowing users to view URLs shared from other countries in their own language.
### Supporting information
_No response_ | Needs-Triage,Needs-Team-Response,Product-Advanced Paste | low | Minor |
2,753,836,577 | PowerToys | PowerRename | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerRename
### Steps to reproduce
Not available in the standard context menu, even though it should appear there.
### ✔️ Expected Behavior
Should be in the standard menu
### ❌ Actual Behavior
Not showing up; only available in the advanced context menu.
### Other Software
_No response_ | Issue-Bug,Product-PowerRename,Needs-Triage,Needs-Team-Response | low | Minor |
2,753,848,089 | three.js | GLTFLoader support KHR_texture_procedurals | ### Description
GLTF is gearing up to be the final mile delivery system for 3D platforms. One of the key goals expressed is better interoperability between GLTF and OpenUSD. One of the primary ways to do that is supporting MaterialX.
To do that, GLTF first wants to support procedural materials (including MaterialX) through a JSON defined node graph system for materials: https://github.com/KhronosGroup/glTF/tree/KHR_texture_procedurals/extensions/2.0/Khronos/KHR_texture_procedurals
From there, different systems/versions of procedural materials can be loaded (this primarily seems to be aimed at specifying versions of MaterialX, though I could see it supporting other systems).
[Outline of GLTF/MaterialX 1.39](https://github.com/KhronosGroup/glTF/tree/KHR_texture_procedurals/extensions/2.0/Vendor/EXT_texture_procedurals_mx_1_39)
[Link to the PR](https://github.com/KhronosGroup/glTF/pull/2381)
At SIGGRAPH Asia 2024 this was highlighted as done/near done and also one of the key goals within glTF. This is 100% happening, and I think three.js should be a leader in its implementation, or at a minimum prepared for it.
This issue is, first, to track the item within three.js and, hopefully, to act as a holder for other issues related to it. I searched (a little) and found some issues mentioning parts of this, but it's hard to understand what is happening without following all open issues.
### Solution
The first task would be parsing the GLTF Json into a ThreeJS material using TSL and the correct material type.
I'm not sure whether the glTF system can indicate whether a material needs to be a Basic, Standard, or Physical material; however, it could be built as a Standard material and then rebuilt as a Basic if it only has the basic inputs, or a Physical if it has the more complex ones (implementation discussion).
I know there is currently a materialX loader, however, that is parsing the XML matx files and not this converted JSON. A new parser would have to be put together but could use that as a starting point.
I am willing to start working on this, in my desire to do more for things within glTF; however, I didn't want to start if @sunag already had something in the works.
### Alternatives
-
### Additional context
There are a million links to MaterialX stuff. But [MaterialXLearn](https://kwokcb.github.io/MaterialX_Learn/) might be the easiest to link to all the other resources, see graphs, editors, etc. | Loaders | low | Major |
2,753,850,408 | flutter | [video_player] Allow Passing query parameters to every segment request and manifest in MPEG DASH | ### What package does this bug report belong to?
video_player
### What target platforms are you seeing this bug on?
Android, iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
_No response_
### Steps to reproduce
Hi Everyone,
Is there a way to dynamically add query parameters to all segment and manifest requests that ExoPlayer makes when using the video_player package? These parameters won't be part of the manifest.mpd itself but need to be appended to each request at runtime.
### Expected results
The player was supposed to send the query params with every request.
### Actual results
<img width="1375" alt="Screenshot 2024-12-19 at 5 00 31 PM" src="https://github.com/user-attachments/assets/cb19efdc-adb2-4162-b977-68fe855db988" />
It's only sending the query params with the first request.
### Code sample
NA
### Screenshots or Videos
NA
### Logs
NA
### Flutter Doctor output
<img width="1387" alt="Screenshot 2024-12-19 at 5 05 30 PM" src="https://github.com/user-attachments/assets/9735b2d1-e833-4384-bfc4-dfc891139a75" />
| c: new feature,p: video_player,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Critical |
2,753,852,918 | PowerToys | Command not Found is not installable | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Command not found
### Steps to reproduce
Install PowerToys and attempt to install the winget module.
### ✔️ Expected Behavior
Expect installation to succeed
### ❌ Actual Behavior
PowerToys stops responding and, after a few seconds, shows an error message saying that administrative privileges are required. A similar issue occurs in a non-admin attempt.
### Other Software
Powershell v7.4.6
Oh my Posh
| Issue-Bug,Needs-Triage,Needs-Team-Response,Product-CommandNotFound | low | Critical |
2,753,861,784 | pytorch | Segmentation Fault (core dumped) on as_strided with torch.compile | ### 🐛 Describe the bug
The following script leads to a segmentation fault.
```python
import torch

@torch.compile
def as_strided(input, size, stride, storage_offset=0):
    return input.as_strided(size, stride, storage_offset)

input = torch.tensor([], dtype=torch.float32)
size = [17, 18]
stride = [-80, 1]
storage_offset = 1
out2 = as_strided(input, size, stride, storage_offset)
```
Without `torch.compile`, this function raises a runtime error:
```
as_strided: Negative strides are not supported at the moment, got strides: [-80, 1]
```
Here are some details:
- This issue seems to be related to the first element of `stride`. If I change the stride to `[1, -80]`, there is no segmentation fault; the normal runtime error is raised instead.
- I saw this warning when running the script: `Bypassing autograd cache due to: Cannot cache a graph with functional tensor`
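As a quick sanity check of why these arguments are dangerous, the usual strided-layout address arithmetic (`offset = storage_offset + sum(index[i] * stride[i])`, computed here in plain Python as an illustration, not a claim about the crash's exact mechanism) puts the lowest addressed element far before the start of the storage, so any compiled path that skips the eager-mode negative-stride check would read out of bounds of the 0-element tensor:

```python
# Element-address arithmetic for the repro's arguments.
size = [17, 18]
stride = [-80, 1]
storage_offset = 1

# The extreme offsets occur at the corners of the index space.
corner_offsets = [
    storage_offset + i * stride[0] + j * stride[1]
    for i in (0, size[0] - 1)
    for j in (0, size[1] - 1)
]
print(min(corner_offsets), max(corner_offsets))
# -1279 18 -- both far outside a storage that holds 0 elements
```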
### Versions
<details>
<summary>Envs</summary>
Collecting environment information...
PyTorch version: 2.6.0a0+gitdeb1da1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitdeb1da1
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] magma-cuda124 2.6.1 1 pytorch
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gitdeb1da1 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
</details>
cc @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh | module: crash,triaged,oncall: pt2,module: fakeTensor,module: aotdispatch | low | Critical |
2,753,862,599 | pytorch | CMake in setup.py is unable to find the ROCm OpenCL installation | ### Commands that are run to build PyTorch
```shell
python3.11 -m venv /opt/pyt2c1k/pyenv
source /opt/pyt2c1k/pyenv/bin/activate
export HSA_OVERRIDE_GFX_VERSION=9.0.0
export PATH=/opt/rocm/bin:$PATH
export LD_LIBRARY_PATH=/opt/rocm/lib:$LD_LIBRARY_PATH
export OpenCL_INCLUDE_DIR=/opt/rocm-6.3.0/include
export OpenCL_LIBRARY=/opt/rocm-6.3.0/lib/libOpenCL.so
git clone --recursive https://github.com/pytorch/pytorch.git PyTorch
cd PyTorch
git pull
git checkout main
git submodule sync
git submodule update --init --recursive
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install setuptools ninja mkl-static mkl-include -r requirements.txt
doas rm -rf $(locate LibTorch|grep -Eie 'CMakeCache.txt|CMakeFiles')
PYTORCH_ROCM_ARCH=gfx90c USE_OPENCL=1 USE_CUDA=0 USE_CUDNN=0 USE_CUSPARSELT=0 USE_CUDSS=0 USE_CUFILE=0 BUILD_TEST=0 PROJECT_BINARY_DIR=/opt/LibTorch/MkTorch CFLAGS="-DCMAKE_C_FLAGS='-w',-DCMAKE_CXX_FLAGS='-w'" python3.11 setup.py clean
PYTORCH_ROCM_ARCH=gfx90c USE_OPENCL=1 USE_CUDA=0 USE_CUDNN=0 USE_CUSPARSELT=0 USE_CUDSS=0 USE_CUFILE=0 BUILD_TEST=0 PROJECT_BINARY_DIR=/opt/LibTorch/MkTorch CFLAGS="-DCMAKE_C_FLAGS='-w',-DCMAKE_CXX_FLAGS='-w'" python3.11 setup.py bdist_wheel
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install dist/*.whl
python3.11 -c "import torch; print(torch.__version__)"
cd ..
git clone https://github.com/mlverse/torchvision.git TorchVision
cd TorchVision
git pull
git checkout main
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install setuptools ninja -r requirements.txt
python3.11 setup.py clean
python3.11 setup.py bdist_wheel
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install dist/*.whl
cd ..
git clone https://github.com/mlverse/torchaudio.git TorchAudio
cd TorchAudio
git pull
git checkout main
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install setuptools ninja -r requirements.txt
python3.11 setup.py clean
python3.11 setup.py bdist_wheel
PIP_REQUIRE_VIRTUALENV=true python3.11 -m pip install dist/*.whl
cd ..
python3.11 -c "import torch; print(torch.__version__)"
python3.11 -c "import torchvision; print(torchvision.__version__)"
python3.11 -c "import torchaudio; print(torchaudio.__version__)"
echo "LibTorch, TorchVision, and TorchAudio installed successfully from my custom wheel files! :)"
```
### 🐛 Describe the bug
```
INFOUSING OPENCL
-- Looking for CL_VERSION_3_0
-- Looking for CL_VERSION_3_0 - not found
-- Looking for CL_VERSION_2_2
-- Looking for CL_VERSION_2_2 - not found
-- Looking for CL_VERSION_2_1
-- Looking for CL_VERSION_2_1 - not found
-- Looking for CL_VERSION_2_0
-- Looking for CL_VERSION_2_0 - not found
-- Looking for CL_VERSION_1_2
-- Looking for CL_VERSION_1_2 - not found
-- Looking for CL_VERSION_1_1
-- Looking for CL_VERSION_1_1 - not found
-- Looking for CL_VERSION_1_0
-- Looking for CL_VERSION_1_0 - not found
CMake Error at /opt/pyt2c1k/pyenv/lib/python3.11/site-packages/cmake/data/share/cmake-3.31/Modules/FindPackageHandleStandardArgs.cmake:233 (message):
Could NOT find OpenCL (missing: OpenCL_INCLUDE_DIR)
Call Stack (most recent call first):
/opt/pyt2c1k/pyenv/lib/python3.11/site-packages/cmake/data/share/cmake-3.31/Modules/FindPackageHandleStandardArgs.cmake:603 (_FPHSA_FAILURE_MESSAGE)
/opt/pyt2c1k/pyenv/lib/python3.11/site-packages/cmake/data/share/cmake-3.31/Modules/FindOpenCL.cmake:177 (find_package_handle_standard_args)
cmake/Dependencies.cmake:761 (find_package)
CMakeLists.txt:865 (include)
-- Configuring incomplete, errors occurred!
WARNING: Requirement 'dist/*.whl' looks like a filename, but the file does not exist
ERROR: *.whl is not a valid wheel filename.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/local/.B/terminal/AI/LibTorch/PyTorch/torch/__init__.py", line 77, in <module>
from torch.torch_version import __version__ as __version__
File "/opt/local/.B/terminal/AI/LibTorch/PyTorch/torch/torch_version.py", line 4, in <module>
from torch.version import __version__ as internal_version
ModuleNotFoundError: No module named 'torch.version'
```
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Artix Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.31.2
Libc version: glibc-2.40
Python version: 3.11.9 (main, Jul 9 2024, 00:31:01) [GCC 14.1.1 20240522] (64-bit runtime)
Python platform: Linux-6.12.5-lqx1-1-lqx-x86_64-with-glibc2.40
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 PRO 4750G with Radeon Graphics
CPU family: 23
Model: 96
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 19%
CPU max MHz: 4454.0000
CPU min MHz: 400.0000
BogoMIPS: 7186.09
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 8 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] optree==0.13.1
[conda] Could not collect
```
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: build,module: rocm,triaged | low | Critical |
2,753,868,039 | PowerToys | PowerToys Run: Support array when customize the direct activation commands | ### Description of the new feature / enhancement
The direct activation command setting supports only one value at the moment.
It would be helpful if the setting supported multiple values.
### Current commands
| Plug-in | Direct activation command (one character) |
|--------|--------|
| Windows search | `?` |
| Shell command | `>` |
### New feature
| Plug-in | Direct activation commands (array) |
|--------|--------|
| Windows search | [`?`, `?`] |
| Shell command | [`>`, `》`, `and other custom symbols`] |
Input `?` or `?` to activate Windows search.
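A minimal sketch of how matching could work once the setting accepts an array — the setting shape, names, and the specific full-width characters below are illustrative assumptions, not PowerToys code:

```python
# Hypothetical per-plugin activation prefixes, each now a list instead of a
# single character. The full-width "?" (U+FF1F) and "》" entries illustrate
# the half-width/full-width scenario described above.
ACTIVATION_COMMANDS = {
    "Windows search": ["?", "?"],
    "Shell command": [">", "》"],
}

def resolve_plugin(query):
    """Return (plugin, rest_of_query) when the query starts with any of a
    plugin's activation prefixes, else (None, query)."""
    for plugin, prefixes in ACTIVATION_COMMANDS.items():
        for prefix in prefixes:
            if query.startswith(prefix):
                return plugin, query[len(prefix):]
    return None, query

print(resolve_plugin("?notepad"))  # ('Windows search', 'notepad')
print(resolve_plugin("》dir"))     # ('Shell command', 'dir')
```

With this shape, the user would never need to switch the input method before typing an activation command.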
### Scenario when this would be used?
The symbols in Chinese input are different from those in English, so we need to switch between half-width and full-width modes frequently.
### Supporting information
_No response_ | Idea-Enhancement,Product-PowerToys Run | low | Minor |
2,753,869,125 | tauri | [feat] Support Android 6.0 | ### Describe the problem
Some devices are still running Android 6.0, and we hope official support can be provided.
### Describe the solution you'd like
Support Android 6.0
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request,platform: Android | low | Minor |
2,753,883,842 | kubernetes | TestServerRunWithSNI Unit Test Fails Intermittently | ### What happened?
The `TestServerRunWithSNI` unit test is failing intermittently.
```
=== NAME TestServerRunWithSNI/loopback:_bind_to_0.0.0.0_=>_loopback_uses_localhost
serving_test.go:339: Dialing localhost:43713 as ""
serving_test.go:372: failed to connect with loopback client: Get "https://0.0.0.0:43713/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
--- FAIL: TestServerRunWithSNI (0.00s)
--- PASS: TestServerRunWithSNI/one_SNI_and_the_default_cert_with_the_same_name (0.23s)
--- PASS: TestServerRunWithSNI/cert_with_multiple_alternate_names (0.28s)
--- PASS: TestServerRunWithSNI/only_one_cert (0.30s)
--- PASS: TestServerRunWithSNI/loopback:_LoopbackClientServerNameOverride_on_server_cert (0.37s)
--- PASS: TestServerRunWithSNI/loopback:_LoopbackClientServerNameOverride_not_on_any_cert (0.38s)
--- PASS: TestServerRunWithSNI/loopback:_LoopbackClientServerNameOverride_on_SNI_cert (0.55s)
--- PASS: TestServerRunWithSNI/matching_IP_in_SNI_cert_and_the_server_cert (0.55s)
--- PASS: TestServerRunWithSNI/matching_SNI_cert (0.57s)
--- PASS: TestServerRunWithSNI/wildcards (0.62s)
--- FAIL: TestServerRunWithSNI/loopback:_bind_to_0.0.0.0_=>_loopback_uses_localhost (32.69s)
FAIL
I1221 18:33:43.081497 65589 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::testdata/localhost__/cert::testdata/localhost__/key"
I1221 18:33:43.081523 65589 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1221 18:33:43.081523 65589 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I1221 18:33:43.081567 65589 secure_serving.go:258] Stopped listening on 127.0.0.1:43713
FAIL k8s.io/apiserver/pkg/server/options 32.722s
FAIL
```
### What did you expect to happen?
The `TestServerRunWithSNI` test should pass consistently without errors.
### How can we reproduce it (as minimally and precisely as possible)?
```
go test -v k8s.io/apiserver/pkg/server/options -run TestServerRunWithSNI
```
### Anything else we need to know?
_No response_
### Kubernetes version
1.31
### Cloud provider
<details>
</details>
### OS version
windows 11
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details> | kind/bug,area/apiserver,sig/api-machinery,triage/accepted | low | Critical |
2,753,887,465 | godot | [.Net] `Handle is not initialized.` when assigning `TweenMethod` to a later-disposed `Tween` | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.19044 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6603) - AMD Ryzen 9 5900X 12-Core Processor (24 Threads)
### Issue description
Assigning a `TweenMethod` to a `Tween` that is disposed later results in the following counterintuitive error.
```csharp
ERROR: System.InvalidOperationException: Handle is not initialized.
at System.Runtime.InteropServices.GCHandle.FromIntPtr(IntPtr value)
at Godot.Bridge.ScriptManagerBridge.SetGodotObjectPtr(IntPtr gcHandlePtr, IntPtr newPtr) in /root/godot/modules/mono/glue/GodotSharp/GodotSharp/Core/Bridge/ScriptManagerBridge.cs:line 295
at: void Godot.NativeInterop.ExceptionUtils.LogException(System.Exception) (/root/godot/modules/mono/glue/GodotSharp/GodotSharp/Core/NativeInterop/ExceptionUtils.cs:113)
```
### Steps to reproduce
Run the following code:
```csharp
using Godot;

public partial class Main : Node
{
    public override void _Process(double delta)
    {
        for (var i = 0; i < 1000; i++)
        {
            var node = new Node2D();
            using var tween = node.CreateTween();
            tween.TweenMethod(
                new(node, CanvasItem.MethodName.SetModulate),
                Colors.Black,
                Colors.White,
                0.1f
            );
            AddChild(node);
        }
    }
}
```
Please note that removing the `using` keyword stops the issue from happening.
### Minimal reproduction project (MRP)
[Mrp_HandleNotInitialized.zip](https://github.com/user-attachments/files/18218193/Mrp_HandleNotInitialized.zip)
| enhancement,topic:core,topic:dotnet | low | Critical |
2,753,890,420 | tauri | [feat] Rust equivalent convertFileSrc() | ### Describe the problem
I'm currently trying to write a markdown editor in Tauri. For this I need to convert the file paths the user enters into Tauri's asset-protocol paths.
Since I use a Rust library to parse the markdown to HTML, I need to do the file path conversion in Rust.
### Describe the solution you'd like
For this reason I'd like to have the _convertFileSrc()_ function in Rust.
### Alternatives considered
Currently I'm trying to call the JavaScript version of the function from within Rust, but this is hacky and I'd like a native way.
I'm also considering passing a template that describes the transformation to Rust at program start, but this also feels prone to breaking.
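For what it's worth, the JS `convertFileSrc` is essentially a small string transformation, which is why a template-based port is plausible. Here is a Python sketch of that transformation — the two URL shapes are assumptions drawn from the JS helper's observable behavior, not a documented Rust API:

```python
from urllib.parse import quote

def convert_file_src(file_path, protocol="asset", windows=False):
    """Approximate the JS convertFileSrc(): percent-encode the whole path
    and wrap it in an asset-protocol URL. The two URL shapes below are
    assumptions based on the JS helper's behavior (WebView2 on Windows
    cannot load custom URI schemes, hence the https://...localhost form)."""
    encoded = quote(file_path, safe="")
    if windows:
        return f"https://{protocol}.localhost/{encoded}"
    return f"{protocol}://localhost/{encoded}"

print(convert_file_src("/home/user/image.png"))
# asset://localhost/%2Fhome%2Fuser%2Fimage.png
```

Note that Python's `quote(..., safe="")` encodes a few characters (`!'()*`) that JS `encodeURIComponent` leaves alone, so treat this as a sketch of the transformation rather than a byte-for-byte port.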
### Additional context
_No response_ | type: feature request | low | Minor |
2,753,892,992 | flutter | DraggableScrollableSheet does not resize on a single child | ### Steps to reproduce
1. Start scrolling the items list up, as in an attempt to go to item 100
### Expected results
The `DraggableScrollableSheet` should first be resized to max height.
### Actual results
The `DraggableScrollableSheet` stays the same.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/scheduler.dart';

void main() {
  runApp(const MainApp());
}

class MainApp extends StatelessWidget {
  const MainApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: DraggablePage(),
    );
  }
}

class DraggablePage extends StatefulWidget {
  const DraggablePage({super.key});

  @override
  State<DraggablePage> createState() => _DraggablePageState();
}

class _DraggablePageState extends State<DraggablePage> {
  ScrollController? _scrollController;

  @override
  void initState() {
    super.initState();
    SchedulerBinding.instance.addPostFrameCallback((timeStamp) {
      _scrollController!.jumpTo(50);
    });
  }

  @override
  Widget build(BuildContext context) {
    return DraggableScrollableSheet(
      builder: (context, scrollController) {
        _scrollController = scrollController;
        return Scaffold(
          body: ListView.builder(
            controller: scrollController,
            itemCount: 100,
            itemExtent: 100,
            itemBuilder: (context, index) => ColoredBox(
              color: Colors.primaries[index % Colors.primaries.length],
              child: Center(
                child: Text('$index'),
              ),
            ),
          ),
        );
      },
    );
  }
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.1 24B83 darwin-arm64, locale en-SI)
• Flutter version 3.27.0 on channel stable at /Users/.../Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8495dee1fd (11 days ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/.../Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.97.0-insider)
• VS Code at /Applications/Visual Studio Code - Insiders.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• Pixel (mobile) • FA76R0301797 • android-arm64 • Android 10 (API 29)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1 24B83 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1 24B83 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: material design,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Minor |
2,753,966,940 | PowerToys | Error in PowerToys.PowerLauncher.exe on ntdll.dll | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
I used machine translation, which may result in inaccurate translation.
I tried pressing Alt+Space to start it, but it didn't work.
I am certain that this is not an Alt+Space shortcut conflict or an issue with administrator privileges. I have tried changing it to other shortcut keys and disabling administrator privileges (along with some other unconventional methods), but none of them have worked.
My Windows log contains the following information:
Faulting application name: PowerToys.PowerLauncher.exe, version: 0.87.1.0, timestamp: 0x67200000
Faulting module name: ntdll.dll, version: 10.0.22621.4541, timestamp: 0xe7035eba
Exception code: 0xc0000409
Fault offset: 0x00000000000a4a96
Faulting process ID: 0x0x6028
Faulting application start time: 0x0x1DB539F6E807119
Faulting application path: C:\Program Files\PowerToys\PowerToys.PowerLauncher.exe
Faulting module path: C:\WINDOWS\SYSTEM32\ntdll.dll
Report ID: 244f1ed7-2870-4206-808a-aa7a6e404927
### ✔️ Expected Behavior
I hope to open it through Alt+Space.
### ❌ Actual Behavior
It does not work properly; the exe is not running.
### Other Software
_No response_ | Issue-Bug,Severity-High,Needs-Triage,Needs-Team-Response | low | Critical |
2,753,970,671 | godot | Wrong Bus for `play_stream` in Web Export Only (in 4.3, worked in 4.2) | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
- Not reproducible in: v4.2.stable.official [46dc27791]
### System information
Windows 11 - Google Chrome
### Issue description
The method `play_stream` has a new argument `bus` in 4.3.
- 4.3. https://docs.godotengine.org/en/4.3/classes/class_audiostreamplaybackpolyphonic.html
- 4.2. https://docs.godotengine.org/en/4.2/classes/class_audiostreamplaybackpolyphonic.html
When testing DESKTOP (run), the same code works in both 4.2 and 4.3 versions.
When testing WEB export, the same code works in 4.2 but does not work in 4.3. version.
_(It uses the Master bus instead of the configured bus, e.g. "Music".)_
_(Note that in web export, you need to click on the project window to grab focus to allow creation of audio context.)_
### Steps to reproduce
The minimal code below:
- in 4.3, it is muted on desktop but plays in the web export
- in 4.2, it is muted on both desktop and web
```gdscript
extends Node2D

func _ready():
	AudioServer.set_bus_mute(AudioServer.get_bus_index("Music"), true)
	var audio = load("res://menu_doodle_2_loop.ogg")
	var player = AudioStreamPlayer.new()
	var stream = AudioStreamPolyphonic.new()
	player.stream = stream
	player.process_mode = process_mode
	player.bus = "Music"
	player.stream.polyphony = 1
	player.max_polyphony = 1
	add_child(player)
	player.play()
	var playback = player.get_stream_playback() as AudioStreamPlaybackPolyphonic
	var stream_id = playback.play_stream(audio as AudioStream)
```
A temporary workaround (in GDScript) for the 4.3 web export is to set the bus explicitly via the new argument (pass it `player.bus`).
But this is not compatible with 4.2 or earlier, because the new argument exists only in newer versions.
### Minimal reproduction project (MRP)
[new-project-4_3.zip](https://github.com/user-attachments/files/18218458/new-project-4_3.zip)
[new-project-4_2.zip](https://github.com/user-attachments/files/18218459/new-project-4_2.zip)
| bug,platform:web,topic:porting,needs testing,topic:audio | low | Minor |
2,753,981,206 | excalidraw | Bug: typing while scrolled up from WYSIWYG causes menus and toolbars to be scrolled up | https://github.com/user-attachments/assets/981981c0-3e65-497e-9285-073b58a62a65
It seems to be caused by the browser's behavior of scrolling the cursor into view when text is entered, which conflicts with how scrolling is handled in the canvas. It doesn't happen when the WYSIWYG is hidden from the top, probably because the ExcalidrawContainer's scrollTop is 0, so it can't scroll up.
2,753,998,570 | transformers | Support modernBERT for encoder-decoder models | ### Feature request
The docs state that the [EncoderDecoderModel](https://huggingface.co/docs/transformers/main/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel) can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder. However, [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base) isn't supported:
```
File "/content/syntax_transformer/data/../models/encoderDecoder.py", line 40, in __init__
self.model = EncoderDecoderModel.from_encoder_decoder_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 538, in from_encoder_decoder_pretrained
decoder = AutoModelForCausalLM.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 567, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.modernbert.configuration_modernbert.ModernBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of AriaTextConfig, BambaConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, ElectraConfig, ErnieConfig, FalconConfig, FalconMambaConfig, FuyuConfig, GemmaConfig, Gemma2Config, GitConfig, GlmConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GraniteConfig, GraniteMoeConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MllamaConfig, MoshiConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, Olmo2Config, OlmoeConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PhimoeConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, ZambaConfig.
```
### Motivation
ModernBERT has better performance and a longer context length.
### Your contribution
How is it possible to support ModernBERT? It isn't that different from other BERT models. | Feature request | low | Critical |
2,754,032,790 | rust | internal compiler error: Missing value for constant, but no error reported? | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
#![feature(generic_const_exprs)]

struct Struct;

trait Trait {
    const CONST: usize = 0;
}

impl Trait for Struct {}

fn f_<T>() {}

trait DynTrait {
    fn f(&self);
}

impl<T> DynTrait for T
where
    T: Trait,
    [(); T::CONST]:,
{
    fn f(&self) {
        f_::<T>()
    }
}

fn main() {
    Box::new(Struct).f();
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (9e136a30a 2024-12-19)
binary: rustc
commit-hash: 9e136a30a965bf4e63f03095c57df7257bf96fd6
commit-date: 2024-12-19
host: x86_64-pc-windows-msvc
release: 1.85.0-nightly
LLVM version: 19.1.6
```
### Error output
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: Missing value for constant, but no error reported?
|
= note: delayed at compiler\rustc_trait_selection\src\traits\const_evaluatable.rs:74:68 - disabled backtrace
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `C:\Users\agero\Documents\projets\ramsey-dl\rustc-ice-2024-12-21T13_27_24-551924.txt` to your bug report
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
warning: `reproduce_crash` (bin "reproduce_crash") generated 2 warnings
error: could not compile `reproduce_crash` (bin "reproduce_crash"); 2 warnings emitted
Caused by:
process didn't exit successfully: `C:\Users\agero\.rustup\toolchains\nightly-2024-12-20-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name reproduce_crash --edition=2024 src\main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=119 --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --check-cfg cfg(docsrs) --check-cfg "cfg(feature, values())" -C metadata=f41a790c0ca691fa --out-dir C:\Users\agero\Documents\projets\ramsey-dl\target\debug\deps -C incremental=C:\Users\agero\Documents\projets\ramsey-dl\target\debug\incremental -L dependency=C:\Users\agero\Documents\projets\ramsey-dl\target\debug\deps` (exit code: 101)
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
delayed bug: Missing value for constant, but no error reported?
0: std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/9e136a30a965bf4e63f03095c57df7257bf96fd6/library\std\src\..\..\backtrace\src\backtrace\dbghelp64.rs:91
1: std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/9e136a30a965bf4e63f03095c57df7257bf96fd6/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: std::backtrace::Backtrace::create
at /rustc/9e136a30a965bf4e63f03095c57df7257bf96fd6/library\std\src\backtrace.rs:331
3: std::backtrace::Backtrace::capture
at /rustc/9e136a30a965bf4e63f03095c57df7257bf96fd6/library\std\src\backtrace.rs:296
4: <rustc_errors::DiagCtxtHandle>::flush_delayed
5: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
6: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
7: <hashbrown::raw::RawTable<usize>>::reserve_rehash::<indexmap::map::core::get_hash<rustc_middle::ty::region::Region, ()>::{closure#0}>
8: rustc_trait_selection::traits::const_evaluatable::is_const_evaluatable
9: <rustc_trait_selection::traits::select::SelectionContext>::evaluate_root_obligation
10: rustc_traits::evaluate_obligation::evaluate_obligation
11: rustc_query_impl::plumbing::query_key_hash_verify_all
12: RINvNtNtCsQCwQoe5Jo1_18rustc_query_system5query8plumbing17try_execute_queryINtCs4qpE3tcYOsg_16rustc_query_impl13DynamicConfigINtNtB4_6caches12DefaultCacheINtNtCs4xXlGRTpLhK_13rustc_type_ir9canonical19CanonicalQueryInputNtNtNtCsabap53b4qYL_12rustc_middle2ty
13: rustc_query_impl::plumbing::query_key_hash_verify_all
14: <rustc_infer::infer::InferCtxt as rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt>::evaluate_obligation_no_overflow
15: <rustc_infer::infer::InferCtxt as rustc_trait_selection::traits::query::evaluate_obligation::InferCtxtExt>::predicate_may_hold
16: <alloc::raw_vec::RawVec<rustc_hir_typeck::method::probe::Candidate>>::grow_one
17: rustc_hir_typeck::method::probe::method_autoderef_steps
18: rustc_hir_typeck::method::probe::method_autoderef_steps
19: rustc_hir_typeck::typeck
20: <<rustc_hir_typeck::fn_ctxt::FnCtxt>::deduce_closure_signature_from_predicates::MentionsTy as rustc_type_ir::visit::TypeVisitor<rustc_middle::ty::context::TyCtxt>>::visit_ty
21: <<rustc_hir_typeck::fn_ctxt::FnCtxt>::deduce_closure_signature_from_predicates::MentionsTy as rustc_type_ir::visit::TypeVisitor<rustc_middle::ty::context::TyCtxt>>::visit_ty
22: <<rustc_hir_typeck::fn_ctxt::FnCtxt>::deduce_closure_signature_from_predicates::MentionsTy as rustc_type_ir::visit::TypeVisitor<rustc_middle::ty::context::TyCtxt>>::visit_ty
23: rustc_hir_typeck::typeck
24: rustc_hir_typeck::typeck
25: rustc_query_impl::plumbing::query_key_hash_verify_all
26: RINvNtNtCsQCwQoe5Jo1_18rustc_query_system5query8plumbing17try_execute_queryINtCs4qpE3tcYOsg_16rustc_query_impl13DynamicConfigINtNtCsl5fsRWpEAH1_21rustc_data_structures9vec_cache8VecCacheNtNtCshwu7G5mdp9_10rustc_span6def_id10LocalDefIdINtNtNtCsabap53b4qYL_1
27: rustc_query_impl::plumbing::query_key_hash_verify_all
28: RINvMs6_NtCsfSe2htisICB_9hashbrown3rawINtB6_8RawTablejE14reserve_rehashNCINvNtNtCsggveAX2leLM_8indexmap3map4core8get_hashTNtNtNtCsabap53b4qYL_12rustc_middle2ty9predicate6ClauseNtNtCshwu7G5mdp9_10rustc_span13span_encoding4SpanEuE0ECs4Qtc6Yytiwo_18rustc_hir_
29: rustc_hir_analysis::check_crate
30: rustc_interface::passes::resolver_for_lowering_raw
31: rustc_interface::passes::analysis
32: <alloc::sync::Arc<rustc_session::cstore::CrateSource>>::drop_slow
33: RINvNtNtCsQCwQoe5Jo1_18rustc_query_system5query8plumbing17try_execute_queryINtCs4qpE3tcYOsg_16rustc_query_impl13DynamicConfigINtNtB4_6caches11SingleCacheINtNtNtCsabap53b4qYL_12rustc_middle5query5erase6ErasedAhj0_EEKb0_KB3r_KB3r_ENtNtB1e_8plumbing9QueryCtxt
34: rustc_query_impl::query_system
35: RINvNtNtCs8RqLblBvRGN_3std3sys9backtrace28___rust_begin_short_backtraceNCNCNCINvMNtB6_6threadNtB1h_7Builder16spawn_unchecked_INtNtCsjGo8snB8Fhu_5alloc5boxed3BoxDINtNtNtCs6jWqyqrNxNm_4core3ops8function6FnOnceuEp6OutputuNtNtB2G_6marker4SendEL_EuEs_000uECs36X
36: RINvNtNtCs8RqLblBvRGN_3std3sys9backtrace28___rust_begin_short_backtraceNCNCNCINvMNtB6_6threadNtB1h_7Builder16spawn_unchecked_INtNtCsjGo8snB8Fhu_5alloc5boxed3BoxDINtNtNtCs6jWqyqrNxNm_4core3ops8function6FnOnceuEp6OutputuNtNtB2G_6marker4SendEL_EuEs_000uECs36X
37: RINvNtNtCs8RqLblBvRGN_3std3sys9backtrace28___rust_begin_short_backtraceNCNCINvNtCsjzFb9IgEjHS_15rustc_interface4util26run_in_thread_with_globalsNCINvB1e_31run_in_thread_pool_with_globalsNCINvNtB1g_9interface12run_compileruNCNvCs36XBSsR0b4J_17rustc_driver_i
38: RINvNtNtCs8RqLblBvRGN_3std3sys9backtrace28___rust_begin_short_backtraceNCNCNCINvMNtB6_6threadNtB1h_7Builder16spawn_unchecked_INtNtCsjGo8snB8Fhu_5alloc5boxed3BoxDINtNtNtCs6jWqyqrNxNm_4core3ops8function6FnOnceuEp6OutputuNtNtB2G_6marker4SendEL_EuEs_000uECs36X
39: alloc::boxed::impl$28::call_once
at /rustc/9e136a30a965bf4e63f03095c57df7257bf96fd6/library\alloc\src\boxed.rs:1970
40: alloc::boxed::impl$28::call_once
at /rustc/9e136a30a965bf4e63f03095c57df7257bf96fd6/library\alloc\src\boxed.rs:1970
41: std::sys::pal::windows::thread::impl$0::new::thread_start
at /rustc/9e136a30a965bf4e63f03095c57df7257bf96fd6/library\std\src\sys\pal\windows\thread.rs:55
42: BaseThreadInitThunk
43: RtlUserThreadStart
rustc version: 1.85.0-nightly (9e136a30a 2024-12-19)
platform: x86_64-pc-windows-msvc
```
</p>
</details>
| I-ICE,T-compiler,C-bug,F-generic_const_exprs,S-has-mcve,S-bug-has-test | low | Critical |
2,754,038,793 | tauri | [bug] inner_size giving incorrect dimensions on window creation | ### Describe the bug
I'm trying to create a new window from Rust and embed a GStreamer sink into it. However, to do so, I need the dimensions of the window to embed the surface. The window is opening in fullscreen (1920 x 1080). I'm trying to use the `inner_size` method, but I'm not getting the correct dimensions: it reports width: 852, height: 652, even though my monitor's dimensions are 1920 x 1080.
```rs
static APP_HANDLE: OnceCell<Mutex<AppHandle>> = OnceCell::const_new();
#[tauri::command]
async fn initialize() -> Result<(), Error> {
// some other stuff happening here
let webview_window = WebviewWindowBuilder::new(
app_handle.app_handle(),
"overlay",
WebviewUrl::External(Url::parse("https://example.com").unwrap()),
)
.decorations(false)
.shadow(false)
.fullscreen(true)
.minimizable(false)
.closable(false)
.skip_taskbar(true)
.transparent(true)
.focused(false)
.build()
.expect("Failed to create overlay window");
if let Some(overlay) = app_handle.get_webview_window("overlay") {
// giving incorrect dimensions
let size = overlay.inner_size();
println!("SIZE {:?}", size);
}
let inner_size = webview_window.inner_size().unwrap();
let inner_position = webview_window.inner_position().unwrap();
// giving incorrect dimensions
println!("{:?}, {:?}", inner_size, inner_position);
Ok(())
}
pub fn run() {
let app = tauri::Builder::default()
.plugin(tauri_plugin_http::init())
.plugin(tauri_plugin_shell::init())
.invoke_handler(tauri::generate_handler![greet, get_pin, socket_initialize,])
.build(tauri::generate_context!())
.expect("error while building");
let app_handle = app.handle().clone();
let cloned_app_handle = app.handle().clone();
APP_HANDLE
.set(Mutex::new(app_handle))
.expect("Could not set static variable");
tauri::async_runtime::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(2));
loop {
interval.tick().await;
if let Some(overlay) = cloned_app_handle.get_webview_window("overlay") {
let size = overlay.inner_size();
// giving correct dimensions 1920 x 1080
println!("SIZE: {:?}", size);
}
}
});
app.run(|_, _| {});
}
```
### Reproduction
_No response_
### Expected behavior
Expecting `inner_size()` to give me correct dimensions of 1920 x 1080
### Full `tauri info` output
```text
[✔] Environment
- OS: Fedora 41.0.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.46.4
✔ rsvg2: 2.59.2
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 22.12.0
- pnpm: 9.15.1
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-http 🦀: 2.2.0
- @tauri-apps/plugin-http : 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell : 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
I'm seeing that the dimensions I get are correct after the first restart, but running it again after that gives me the incorrect dimensions. | type: bug,status: needs triage | low | Critical |
2,754,046,388 | go | mime/quotedprintable: LWSP-char not accepted between = and CRLF | ### Go version
go version go1.23.4 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/simon/.cache/go-build'
GOENV='/home/simon/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/simon/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/simon/go'
GOPRIVATE=''
GOPROXY='direct'
GOROOT='/usr/lib/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/usr/lib/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/simon/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/simon/src/go-imap/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2161554395=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Try to decode the following MIME message:
https://github.com/dovecot/imaptest/blob/73dcb3ff9ae8b6859ceaf383f21c5b62a9fd7500/src/tests/fetch-binary-mime-qp.mbox
RFC 2045 section 6.7 allows a transport-padding (see details in https://github.com/golang/go/pull/70951).
### What did you see happen?
quotedprintable: invalid bytes after =: " \r\n"
quotedprintable: invalid bytes after =: "\t \r\n"
### What did you expect to see?
No error. | FixPending | low | Critical |
2,754,067,094 | rust | atomic RMW intrinsics: avoid unnecessary ptr/int conversions | Currently, the type of our atomic RMW intrinsics looks like
```rust
fn atomic_xadd_seqcst<T: Copy>(_dst: *mut T, _src: T) -> T
```
However, this is not quite what we want: for atomic operations on a pointer, we want `dst` to be something like `*mut *mut T`, but `src` should be `usize`. The return type should be `*mut T`.
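For the pointer case, the desired shape would be something like the following (a hypothetical signature for illustration only, not an intrinsic that exists today):

```rust
// dst points at the atomic pointer in memory; src is a plain usize offset
// that carries no provenance; the old pointer value (with provenance) is returned.
fn atomic_xadd_seqcst_ptr<T>(_dst: *mut *mut T, _src: usize) -> *mut T
```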
This would let us avoid some unnecessary casts in `AtomicPtr`, and shift the burden of mapping this operation to something LLVM supports into the backend. It also makes the semantics of these operations more clear: only the provenance of the in-memory data matters; `src` carries no provenance. | A-codegen,A-intrinsics,A-strict-provenance,A-atomic | low | Minor |
2,754,104,263 | puppeteer | [Bug]: `TypeError: Cannot convert undefined or null to object` with resolve argument of a Promise | ### Minimal, reproducible example
```TypeScript
import puppeteer from "puppeteer";
function createPromise(page, callback) {
return page.evaluateHandle(
// eslint-disable-next-line no-eval
cb => [new Promise(eval(`(${cb})`))],
callback.toString()
);
}
function awaitPromise(promise) {
return promise.evaluate(([p]) => p);
}
const browser = await puppeteer.launch({
browser: "chrome",
headless: false,
protocol: "webDriverBiDi",
});
const page = await browser.newPage();
await page.goto("https://mozilla.github.io/pdf.js/web/viewer.html");
await page.bringToFront();
const handle = await createPromise(page, resolve => {
window.PDFViewerApplication.eventBus.on("textlayerrendered", resolve, { once: true });
});
await awaitPromise(handle);
await page.close();
await browser.close();
```
### Background
This bug was previously fixed in #12111, but unfortunately it seems like it has returned. This was found in the context of https://github.com/mozilla/pdf.js/issues/17961 where we try to enable WebDriverBidi for Chrome in the Mozilla PDF.js project again, and currently lots of tests fail because of this.
Steps to reproduce:
1. Save the reproducible example as `repro.js`.
2. Run `npm install puppeteer` (to install the latest version).
3. Run `node repro.js` and notice the traceback.
Note that this only fails for Chrome with WebDriverBiDi. It works for Firefox with WebDriverBiDi and for Chrome with CDP.
### Expectation
The script doesn't log anything.
### Reality
The script logs the following traceback for Chrome with WebDriverBiDi:
```
file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/common/CallbackRegistry.js:97
#error = new ProtocolError();
^
ProtocolError: Protocol error (script.callFunction): unknown error Cannot convert undefined or null to object TypeError: Cannot convert undefined or null to object
at Function.hasOwn (<anonymous>)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/Realm.js:65:20)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/WindowRealm.js:93:22)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/Realm.js:88:26)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/WindowRealm.js:93:22)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/Realm.js:88:26)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/WindowRealm.js:93:22)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/Realm.js:88:26)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/WindowRealm.js:93:22)
at WindowRealm.serializeForBiDi (/home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/chromium-bidi/lib/cjs/bidiMapper/modules/script/Realm.js:88:26)
at <instance_members_initializer> (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/common/CallbackRegistry.js:97:14)
at new Callback (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/common/CallbackRegistry.js:101:16)
at CallbackRegistry.create (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/common/CallbackRegistry.js:20:26)
at BidiConnection.send (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/bidi/Connection.js:51:32)
at Session.send (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/bidi/core/Session.js:134:42)
at Session.<anonymous> (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/util/decorators.js:101:27)
at WindowRealm.callFunction (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/bidi/core/Realm.js:92:51)
at WindowRealm.<anonymous> (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/util/decorators.js:101:27)
at #evaluate (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/bidi/Realm.js:134:42)
at BidiFrameRealm.evaluate (file:///home/timvandermeij/Documenten/Ontwikkeling/pdf.js/Code/node_modules/puppeteer-core/lib/esm/puppeteer/bidi/Realm.js:104:36)
```
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
23.11.1
### Node version
23.4.0
### Package manager
npm
### Package manager version
10.9.2
### Operating system
Linux | bug,P1,confirmed | medium | Critical |
2,754,135,782 | transformers | modernbert logits do not have gradient | ### System Info
latest transformers version (from source), python 3.10
### Who can help?
@Arthurz
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id).to("cuda")
# Create a simple input
inputs = {
"input_ids": torch.randint(0, 1000, (1, 10)).cuda(),
"attention_mask": torch.ones(1, 10).cuda()
}
# Set to train mode and check all parameters
model.train()
for name, param in model.named_parameters():
print(f"{name}: requires_grad = {param.requires_grad}")
# Do forward pass
outputs = model(**inputs)
print("\nOutput logits requires_grad:", outputs.logits.requires_grad)
print("Output logits grad_fn:", outputs.logits.grad_fn)
```
### Expected behavior
When I do this, the output is:
```
Output logits requires_grad: False
Output logits grad_fn: None
```
This is despite all the parameters having requires_grad = True: when printing all the params, they are all correctly shown with requires_grad = True.
Just to sanity check, I ran the same code but set model_id = "bert-base-uncased", and got:
```
Output logits requires_grad: True
Output logits grad_fn: <ViewBackward0 object at 0x7f0ca6abf370>
```
So it's def a ModernBERT specific problem! | bug | low | Minor |
2,754,155,291 | svelte | {#if}/{:else} blocks should preserve order during transitions | ### Describe the problem
When the condition of an `{#if}/{:else}` is updated, the content of the blocks is removed from or added to the DOM.
But the order of the blocks is not preserved during transitions: the new visible block is always added after the previously visible block.
I think it would make more sense to preserve the order of the if/else.
Example: press the toggle button and wait for the end of the transition. Press it again and the `if` block will pop in from the bottom.
https://svelte.dev/playground/hello-world?version=5.15.0#H4sIAAAAAAAACp1Sy27bMBD8FZYOEAtxYqePQxVJQNFL-w11DxS5FAhTpECu3LiC_r0kJdl5XgodJC1nZ3aHM1DDWqA5_QFaW_LHOi3IGoRCEBndUKk0eJr_GiieuoiLhVCfu7513Z0_gsZYq5mHt-rcGgSDgYYWnjvVYbU3e1RtZx2SgXitBJCRSGdbcj21bdEx4xUqa64jWAOSo_Kq1kBKcuWRIazR9ZA9hOMACCIeie1ihw-QQfSOxZ-c3O92uzHAiu1F3RR1j2gNsYZrxQ_lsM7KalYoP8wfY4W2aTQU2wmdOoeVksssY5QuhDqSy7x52qcc5llG4vGkIa8ZPzTO9kaUe7qqmZT8654mJ4IXMhFtA1OoDDlo___cUtafaha4I8uZc6vkmKxKT5Eak_psxDBNIpTvNDvlpNaWHx6mYseEUKbJye7uC7TkHtr5oGWuUWapsx5tOkijx8ln0to6AS7cRPdIvA0rkBXn_CX5R3fmlSEyt179hWdVhEe8ZVo1QZGHRIFb1MLVLvuYkLgIpHmMx7h5J7nfg1uB4M30vjp7J8ExlDxiL5HcLXm8AimB43qdkbJafJhCeghwD_gzahyZniGJ6OZmk-KaTXgH2DtDZoAG5s5Nh2zafXq9Cnc1JL4xJLd6ed_LNE9M_jx7_MTK5zb-Hv8Bcz2ulioEAAA=
### Describe the proposed solution
Technically the solution seems simple, since it would be enough to change this line:
https://github.com/sveltejs/svelte/blob/1d773ef3a471adb36eb3c992168b21fbaf349562/packages/svelte/src/internal/client/dom/blocks/if.js#L72
```js
consequent_effect = branch(() => fn(alternate_effect?.nodes_start ?? anchor));
```
But it changes the behavior and I don't know how to estimate the impact of that.
### Importance
nice to have | transition/animation | low | Minor |
2,754,192,863 | excalidraw | PNG copy & paste results in black stroke around the pasted element | https://discord.com/channels/723672430744174682/1319699270630113400/1319699270630113400 | bug,firefox,blocked-upstream | low | Minor |
2,754,199,964 | kubernetes | [FG:InPlacePodVerticalScaling] Pod CPU limit is not configured to cgroups as calculated if systemd cgroup driver is used | ### What happened?
As a result of #124216, which was introduced in v1.32, a pod CPU limit calculated in `ResourceConfigForPod()` is rounded up to the nearest 10ms in `libcontainer` when resizing the pod:
- Resize a pod:
```
$ kubectl patch pod resize-pod --subresource=resize --patch '{"spec":{"containers":[{"name":"resize-container", "resources":{"limits":{"cpu":"417m"}}}]}}'
pod/resize-pod patched
```
- The container cgroup value is set with 1ms precision:
```
$ kubectl exec resize-pod -- cat /sys/fs/cgroup/cpu.max
41700 100000
```
- The pod cgroup value is rounded up:
```
$ cat /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod68a17b59_0d31_40b2_ba86_ea43f3b2f05c.slice/cpu.max
42000 100000
```
When the `systemd` cgroup driver is used, `libcontainer` rounds the CPU quota up before passing it to `systemd`:
https://github.com/kubernetes/kubernetes/blob/a4b8a3b2e33a3b591884f69b64f439e6b880dc40/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/common.go#L304-L311
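The effect of that rounding can be sketched as follows (my own illustration of the arithmetic, not the actual libcontainer code; it assumes the 10ms-per-second precision described above):

```python
def systemd_cpu_quota(millicores: int, period_us: int = 100_000) -> int:
    """Round a CPU limit up to systemd's 10ms-per-second quota precision."""
    quota_per_sec_us = millicores * 1000                # 417m -> 417000 us/s
    precision_us = 10_000                               # 10ms per second
    rounded = -(-quota_per_sec_us // precision_us) * precision_us  # ceiling
    return rounded * period_us // 1_000_000             # scale to the cgroup period

print(systemd_cpu_quota(417))  # 42000, the rounded-up pod cgroup value above
print(systemd_cpu_quota(365))  # 37000, whereas the direct write keeps 36500
```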
In addition, there seems to be a race in `libcontainer`. It directly writes values to the cgroup file without roundup after it passes the rounded value to `systemd`:
https://github.com/kubernetes/kubernetes/blob/a4b8a3b2e33a3b591884f69b64f439e6b880dc40/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v2.go#L489-L493
So, there is also a case where the cgroup value is set as calculated. As far as I tried, decreasing CPU limits usually hits this case, though I’m not sure why:
- Decrease the CPU limits:
```
$ kubectl patch pod resize-pod --subresource=resize --patch '{"spec":{"containers":[{"name":"resize-container", "resources":{"limits":{"cpu":"365m"}}}]}}'
pod/resize-pod patched
```
- Both the container and the pod cgroup values are set with 1ms precision:
```
$ kubectl exec resize-pod -- cat /sys/fs/cgroup/cpu.max
36500 100000
$ cat /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod68a17b59_0d31_40b2_ba86_ea43f3b2f05c.slice/cpu.max
36500 100000
```
### What did you expect to happen?
This roundup looks like the intended behavior of the `systemd` cgroup driver, because the CPU quota is also rounded up when a pod is first created with 1ms-precision CPU limits. However, I have the following concerns:
- We might need to confirm this tiny gap doesn’t cause a similar issue to #128769 when resizing pods.
- We might need to clarify why the CPU quota of pod cgroup is sometimes not rounded up. This is especially necessary to complete #127192, which is going to add pod cgroup verification to resize tests.
### How can we reproduce it (as minimally and precisely as possible)?
0. Use `systemd` cgroup driver and enable `InPlacePodVertialScaling`.
1. Resize CPU limits of a pod with 1ms precision.
### Anything else we need to know?
_No response_
### Kubernetes version
V1.32
<details>
```console
$ kubectl version
# paste output here
Client Version: v1.31.4
Kustomize Version: v5.4.2
Server Version: v1.32.0
```
</details>
### Cloud provider
N/A
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,triage/accepted | medium | Major |
2,754,210,042 | three.js | `DataTexture`: Proposal to support partial update | ### Description
Hi!
I am using `DataTexture` quite a lot to handle data with `BatchedMesh` and `InstancedMesh2` (`InstancedMesh` + indirection).
In my case, I would like to update the color of only one instance (on mouse over), but sending the GPU the whole texture is expensive because it's very large.
I tried using the [WebGLRenderer.copyTextureToTexture](https://threejs.org/docs/#api/en/renderers/WebGLRenderer.copyTextureToTexture) method but it doesn't work when `src` and `dest` are the same texture (I might open a separate bug for this).
Anyway, this method is useless if [`BatchedMesh` automatically sets the `.needsUpdate` flag](https://github.com/mrdoob/three.js/blob/master/src/objects/BatchedMesh.js#L869), which makes the whole texture update anyway.
It would be fantastic to have a partial update system like `BufferAttribute`.
I know that it's an important change, but if you want I can help.
Thank you for all the work you do. 😄
### Solution
Implementing an `addUpdateRange` method similar to that of `BufferAttribute`.
`.addUpdateRange ( region : Box2 ) : this`
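Usage could then look something like this (a purely hypothetical sketch of the proposed API: `addUpdateRange` does not exist on textures today, and `colorsTexture`, `instanceX` and `instanceY` are assumed names for the per-instance color texture and the hovered instance's texel coordinates):

```js
// Hypothetical: flag a single texel for upload instead of the whole texture.
const region = new THREE.Box2(
  new THREE.Vector2(instanceX, instanceY),
  new THREE.Vector2(instanceX + 1, instanceY + 1)
);
colorsTexture.addUpdateRange(region); // proposed method
colorsTexture.needsUpdate = true;     // would upload only the flagged region
```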
### Alternatives
~~Fix and use [WebGLRenderer.copyTextureToTexture](https://threejs.org/docs/#api/en/renderers/WebGLRenderer.copyTextureToTexture), but we should remove `.needsUpdate = true` from `BatchedMesh`?~~
### Additional context
_No response_ | Suggestion | low | Critical |
2,754,232,632 | rust | Tracking issue for release notes of #133820: Stabilize `derive(CoercePointee)` |
This issue tracks the release notes text for #133820.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Stabilize `derive(CoercePointee)`](https://github.com/rust-lang/rust/pull/133820)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @dingxiangfei2009, @joboet -- origin issue/PR authors and assignees for starting to draft text
| T-lang,relnotes,needs-triage,F-derive_coerce_pointee,relnotes-tracking-issue | low | Minor |
2,754,236,624 | godot | class_name doc comments edited with external editor don't update until editor restart | ### Tested versions
Reproducible in v4.3.stable.official [77dcf97d8]
Class doc comments were added in d1231be1c8f8f2c16fd1047adcd3c7f997a07c1f, but I haven't checked if it occurred in that build.
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 (NVIDIA; 32.0.15.6590) - Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (12 Threads)
### Issue description
When you use an external editor to modify a class_name doc comment, the old value is still displayed (in Create New Node) until you restart Godot.
I have auto reload scripts enabled:

### Steps to reproduce
1. Enable external editor (I use vim).
2. Existing script nilrotation.gd:
```gdscript
## Sometimes you don't want rotation to change.
class_name NilRotation
extends Node2D
func _process(_dt):
global_rotation = 0
```
3. Confirm Create New Node shows the correct docstring

4. In external editor, change the line to `## Force the global rotation to stay at 0 (never rotate).`
5. Check docstring shown in Create New Node.
**Expected:** It's updated.
**Actual:** It hasn't changed.
### Minimal reproduction project (MRP)
trivially reproducible in new project. | bug,topic:gdscript,topic:editor,needs testing | low | Minor |
2,754,292,765 | ollama | Enhanced aria2c download support with optimized configurations | the install script uses curl for downloading Ollama components. While there is value in adding aria2c support for faster downloads, we can further optimize it with additional aria2c configurations for better reliability and performance. | feature request | low | Major |
2,754,292,812 | godot | Behavior of VERTEX.y in canvas_item Shaders changes drastically depending on whether Compatibility or Forward+ Render Engine is selected | ### Tested versions
Godot 4.x (tested up to Godot 4.3-stable)
Godot 3.x (tested from Godot 3.5 upwards)
### System information
OS and hardware independent
### Issue description
The goal was to highlight the square tile map cell over which the mouse cursor is currently hovering with a simple shader. However, when switching the render engine from `Forward+` to `Compatibility` or vice versa, the behavior of the shader changed unexpectedly.

The shader is of type `canvas_item` and looks as follows:
```glsl
// ADAPTED FROM: https://www.youtube.com/watch?v=7nTQA_6CL6M
shader_type canvas_item;
uniform vec2 globalMousePosition; // DESCRIPTION: Input from GDScript
uniform vec2 tileSize; // DESCRIPTION: Input from GDScript
varying flat vec2 vertexPosition;
varying flat vec2 vertexRaw; // REMARK: Only for debugging purposes
void vertex() {
vertexPosition = (MODEL_MATRIX * vec4(VERTEX, 0.0, 1.0)).xy; // DESCRIPTION: Take transformations into account
vertexRaw = VERTEX; // REMARK: Only for debugging purposes
}
void fragment() {
float isWithinY = step(vertexPosition.y, globalMousePosition.y) * step(globalMousePosition.y, vertexPosition.y + tileSize.y);
float isWithinX = step(vertexPosition.x, globalMousePosition.x) * step(globalMousePosition.x, vertexPosition.x + tileSize.x);
float isWithin = isWithinY * isWithinX;
vec4 textureColor = texture(TEXTURE, UV);
COLOR = mix(textureColor, vec4(0.7,0.0,0.0,1), 0.7*isWithin);
/* DESCRIPTION: FOR DEBUGGING PURPOSES ONLY
// DESCRIPTION: Raw Vertex Color Data
COLOR = vec4(vertexRaw, 0.0, 1.0);
// DESCRIPTION: Normalized Vertex Color data;
// VALUES: number of horizontal tiles per screen = 20.0, tile width = 64.0 px,
// number of vertical tiles per screen = 8.0, tile height = 64.0 px
COLOR = vec4(vertexRaw/vec2(10.0*64.0, 8.0*64.0), 0.0, 1.0);
*/
}
```
During the debugging, I took a look at the raw vertex color in the tile set

as well as the raw vertex color of the complete screen

and the normalized vertex color of the complete screen
It is not only limited to the `TileMapLayer` Node used for the example. It could also be reproduced for e.g. `Sprite2D`

## Expected Behavior
Independent of the render engine, the result should be the same, since the [documentation for `CanvasItem` shaders](https://docs.godotengine.org/en/4.3/tutorials/shaders/shader_reference/canvas_item_shader.html#vertex-built-ins) states: <br>
> Vertex data (`VERTEX`) is presented in local space (pixel coordinates, relative to the `Node2D`'s origin). If not written to, these values will not be modified and be passed through as they came. The user can disable the built-in model to world transform (world to screen and projection will still happen later)...
From this statement and the vertex color visualization, the expected behavior would be that shown by the `Forward+` render engine.
## Probable Cause
It cannot be related to the implementation of the `step()` function, since the issue also occurs when viewing the raw/normalized vertex colors. This points towards an inconsistency in handling `VERTEX.y` between the `Compatibility` and `Forward+` render engines, probably occurring during the parsing of the `GDShader` to `GLSL` and `Glslang`/`SPIR-V` respectively.
## Affected Nodes and Versions
- **Any `Node2D`** (including `Control` Nodes) which can have a `Shader Material` attached to itself
- So far, this behavior could be reproduced for **all stable versions of Godot 4.x** (including Godot 4.3 at the time of writing) and due to the nature of the issue should be **platform independent** (issue could be confirmed for/tested on `Windows`, `Linux` and `Web`).
- In Godot 3.x, the `OpenGL ES 3.0` renderer shows the same behavior as `Compatibility` in Godot 4.x (tested for Godot 3.5 and newer).
### Steps to reproduce
1. Create a scene with `Godot 4.3-stable` (or any other version of Godot 4.x) with at least one object which has the above shader attached to it and forward the global mouse position to it. Alternatively: Download the example project attached to this issue (Remark: Due to the usage of the `TileMapLayer` Node, the example requires `Godot 4.3-stable` or newer!).
2. **Two Options:**
- **Option 1:** Swap the render engine each time between `Compatibility` or `Forward+` and reload the project. Then run the project natively with `F5`/`F6`.
- **Option 2:** Set the render engine to `Forward+` and run the project natively with `F5`/`F6` for testing the behavior of `Forward+`. Run a `Remote Debug` Session in a browser to test the behavior of `Compatibility` (Remark: this works because in `Godot 4.3-stable` the web export is not Vulkan-based and will utilize `Compatibility` no matter what the default render engine is).
### Minimal reproduction project (MRP)
To obtain the vertex color output, simply comment/uncomment the respective lines in the shader.
[mrp_canvasItem_godot4-3_tileMapLayer.zip](https://github.com/user-attachments/files/18219469/mrp_canvasItem_godot4-3_tileMapLayer.zip)
| bug,topic:rendering | low | Critical |
2,754,318,238 | youtube-dl | Abema TV support | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2021.12.17**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://abema.tv/channels/fighting-sports2/slots/C3K1gmnm9FchM1?lang=en
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
https://abema.tv/channels/fighting-sports2/slots/C3K1gmnm9FchM1?lang=en
trying to download this video but i get this error:
ERROR: No video formats found; please report this issue on https://github.com/ytdl-org/youtube-dl/issues , using the appropriate issue template. Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose option and include the complete output.
| site-support-request | low | Critical |
2,754,318,668 | ollama | docker installation failure due to your installation failure... | ### What is the issue?
=> [vps2_core 23/28] RUN curl -fsSL https://ollama.com/install.sh | sh 5432.3s
=> => # >>> Installing ollama to /usr/local
=> => # >>> Downloading Linux amd64 bundle
=> => # ############################################# 63.8%
=> => # [output clipped, log limit 2MiB reached]
### OS
Linux, Docker, WSL2
### GPU
_No response_
### CPU
Intel
### Ollama version
?
| bug | low | Critical |
2,754,321,971 | ui | [bug]: monorepo CLI fails to create project `BUN` | ### Describe the bug
When attempting to create a new Next.js monorepo project using the shadcn CLI (canary version), the process fails due to missing workspace dependencies.
### Affected component/components
CLI
### How to reproduce
## Steps to Reproduce
1. Run the following command: `bunx --bun shadcn@canary init`
2. Choose "Next.js (Monorepo)" when prompted to start a new project
3. Enter a project name (e.g., "test-monorp-shadcn")
## Expected Behavior
The CLI should successfully create a new Next.js monorepo project with all necessary dependencies and configurations.
## Actual Behavior
The CLI fails to create the project, reporting missing workspace dependencies:
```shell
✖ Something went wrong creating a new Next.js monorepo.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
# Logs section...
```
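For context (not output from the failing run): Bun resolves `@workspace/*` packages against the `workspaces` globs in the root `package.json`, so the generated monorepo would need a root manifest roughly like the following, with matching directories actually present on disk — all names here are illustrative:

```json
{
  "name": "test-monorp-shadcn",
  "private": true,
  "workspaces": ["apps/*", "packages/*"]
}
```

The `Searched in "..\..\..\..\..\C:\Users\..."` lines in the log below suggest the workspace globs are being joined with relative `..` segments on Windows rather than resolved against the project root, so the workspace package directories are never found.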
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
PS C:\Users\Jozef\Desktop> bunx --bun shadcn@canary init
√ The path C:\Users\Jozef\Desktop does not contain a package.json file.
Would you like to start a new project? » Next.js (Monorepo)
√ What is your project named? ... test-monorp-shadcn
✖ Something went wrong creating a new Next.js monorepo.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Command failed with exit code 1: bun install
error: Workspace dependency "@workspace/eslint-config" not found
Searched in "..\..\..\..\..\C:\Users\Jozef\Desktop\test-monorp-shadcn\*"
Workspace documentation: https://bun.sh/docs/install/workspaces
error: Workspace dependency "@workspace/typescript-config" not found
Searched in "..\..\..\..\..\C:\Users\Jozef\Desktop\test-monorp-shadcn\*"
Workspace documentation: https://bun.sh/docs/install/workspaces
error: @workspace/eslint-config@* failed to resolve
error: @workspace/typescript-config@* failed to resolve
bun install v1.1.42 (50eec002)
```
### System Info
```bash
- Operating System: Windows 11 version 23H2 (SO 22631.4602)
- Node.js version: v22.11.0
- Bun version: 1.1.42
- shadcn CLI version: canary
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
| bug | low | Critical |
2,754,329,611 | go | all: CI's enforcement of clean `go generate` is lacking | ### Go version
go version devel go1.24-110ab1aaf4 Sat Dec 21 08:22:08 2024 -0800 linux/amd64
### Output of `go env` in your module/workspace:
```shell
AR='ar'
CC='gcc'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='1'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='g++'
GCCGO='gccgo'
GO111MODULE=''
GOAMD64='v3'
GOARCH='amd64'
GOAUTH='netrc'
GOBIN=''
GOCACHE='/tmp/go-build'
GODEBUG=''
GOENV='/home/hugo/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFIPS140='off'
GOFLAGS=''
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build610608568=/tmp/go-build -gno-record-gcc-switches'
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMOD='/home/hugo/k/go/src/go.mod'
GOMODCACHE='/home/hugo/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/hugo/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/hugo/k/go'
GOSUMDB='sum.golang.org'
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/hugo/.config/go/telemetry'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/home/hugo/k/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='devel go1.24-110ab1aaf4 Sat Dec 21 08:22:08 2024 -0800'
GOWORK=''
PKG_CONFIG='pkg-config'
```
### What did you do?
In the go repo:
```bash
cd src
./make.bash
cd internal/goexperiment/ && go generate .
git status
```
### What did you see happen?
```
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
exp_synctest_off.go
exp_synctest_on.go
nothing added to commit but untracked files present (use "git add" to track)
```
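For context on what "enforcement" could look like: a `git diff`-based CI check misses exactly this case, because the two `exp_synctest_*.go` files are untracked rather than modified. A `git status --porcelain`-based check catches both; a minimal POSIX-shell sketch (the optional argument exists only to make the function testable without a repository):

```shell
# Fails when the working tree is dirty after `go generate`,
# including untracked files, which `git diff --exit-code` ignores.
check_clean() {
    status="${1-$(git status --porcelain)}"
    if [ -n "$status" ]; then
        printf 'go generate left the tree dirty:\n%s\n' "$status" >&2
        return 1
    fi
}
```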
### What did you expect to see?
```
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
```
| help wanted,NeedsInvestigation | low | Critical |
2,754,349,650 | ollama | MultiGPU ROCm | ### What is the issue?
System:
CPU AMD Ryzen 9950X
RAM 128 GB DDR5
GPU0 AMD Radeon PRO W7900
GPU1 AMD Radeon RX7900XTX
ROCM: 6.3.1
Ubuntu 24.04 LTS (currently patched)
ERROR:
I start a large LLM (e.g. Llama-3.3-70B-Instruct-Q4_K_L) with open webui and a context window of 32678 and get the following error in ollama:
Dec 22 03:52:04 ollama ollama[6345]: time=2024-12-22T03:52:04.990Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Dec 22 03:52:41 ollama ollama[6345]: ROCm error: out of memory
Dec 22 03:52:41 ollama ollama[6345]: llama/ggml-cuda/ggml-cuda.cu:96: ROCm error
=========================================== ROCm System Management Interface ==========================================
==================================================== Concise Info ====================================================
Device Node IDs Temp Power Partitions SCLK MCLK Fan Perf PwrCap VRAM% GPU%
(DID, GUID) (Edge) (Avg) (Mem, Compute, ID)
========================================================== ==========================================================
0 1 0x7448, 54057 59.0°C 56.0W N/A, N/A, 0 651Mhz 96Mhz 20.0% auto 241.0W 0% 82%
1 2 0x744c, 53541 40.0°C 75.0W N/A, N/A, 0 1301Mhz 456Mhz 0% auto 327.0W 0% 39%
========================================================== ==========================================================
================================================ End of ROCm SMI Log ================================================
The VRAM on both cards is never fully utilized and the normal RAM is almost completely free. SWAP is not used.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4
| bug,amd,gpu | low | Critical |
2,754,349,667 | flutter | Laggy First Keyboard Open (Physical iOS Device) | ### Steps to reproduce
Check out https://github.com/willsmanley/flutter-textfield-keyboard-ios-bug
Run on a physical iOS device (with either Impeller or Skia).
Press the textfield to focus it and trigger a keyboard open. The first keyboard open during an app run has a ~2 second lag; every subsequent open is immediate.
Note: it only happens the very first time you open the keyboard. Hot restarting and hot reloading will not reset this; only by rebuilding can you experience the lag again.
### Expected results
should open immediately like native swift apps
### Actual results
opens keyboard after 1-2 second delay
### Code sample
https://github.com/willsmanley/flutter-textfield-keyboard-ios-bug
### Screenshots or Video
_No response_
### Logs
n/a
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.1 24B2082 darwin-arm64, locale en-US)
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
[✓] Chrome - develop for the web
[!] Android Studio (not installed)
[✓] Connected device (6 available)
[✓] Network resources
! Doctor found issues in 2 categories.
```
</details>
| a: text input,platform-ios,a: quality,has reproducible steps,team-ios,fyi-text-input,found in release: 3.27,found in release: 3.28 | medium | Critical |
2,754,350,307 | godot | GPU Particles 3D: New particles not emitting with specific Lifetime + Scale Curve settings | ### Tested versions
Occurs in 4.3
Tested in 4.2, but can't get the specific particle system to work, in general
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated GeForce GTX 1060 6GB - Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz (12 Threads)
### Issue description
The current Particle settings in the provided MRP result in no new particles being spawned.
This persists when the Lifetime settings is 16 seconds, or any value greater than that
When Lifetime is less than 16 seconds or the Scale Curve settings are altered, the system spawns new particles. Did I manage to find a perfect, broken balance or something?
### Steps to reproduce
Mess with the Lifetime and Scale Curve values in the MRP.
Or other settings- perhaps something else is adding to it.
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/18219866/mrp.zip)
| bug,topic:particles | low | Critical |
2,754,356,100 | flutter | Ancient Unicode character on Windows not visible | ### Steps to reproduce
1. Create a Text widget with the Unicode characters 133FA-13402
### Expected results
I expect to see 9 the ancient Egyptian hieroglyph Z015-Z015H which represent the digits 1-9.
### Actual results
I see only 8 hieroglyphs. The digit 3 ("𓏼" Z015B) is missing.
### Code sample
<details open><summary>Code sample</summary>
```dart
Text('''
1=\u{133FA}
2=\u{133FB}
3=\u{133FC}
4=\u{133FD}
5=\u{133FE}
6=\u{133FF}
7=\u{13400}
8=\u{13401}
9=\u{13402}
''',
style: Theme.of(context).textTheme.displayLarge,
);
```
</details>
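For reference, the nine escapes in the sample are consecutive codepoints, so the reported-missing glyph can be pinned down programmatically; a small Python check (independent of Flutter):

```python
# Egyptian hieroglyph numerals Z015..Z015H: U+133FA ("1") .. U+13402 ("9").
digits = {n: chr(0x133FA + n - 1) for n in range(1, 10)}

# The digit reported as not rendering on Windows is 3, i.e. U+133FC (Z015B).
missing = digits[3]
```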
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib\main.dart on Windows in debug mode...
Building Windows application...
√ Built build\windows\x64\runner\Debug\digits.exe
Debug service listening on ws://127.0.0.1:63341/iDV28gHmAgY=/ws
Syncing files to device Windows...
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.22631.4460], locale de-DE)
[√] Windows Version (Installed version of Windows is version 10 or higher) [Spoiler it is Windows 11]
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.3)
[√] Android Studio (version 2024.2)
[√] Connected device (3 available)
[√] Network resources
• No issues found!
```
</details>
| platform-windows,a: typography,P3,a: adaptivity,team-engine,triaged-engine | low | Critical |
2,754,365,893 | ollama | Documentation for manual Linux installation is outdated/doesn't work for AMD GPU setup | ### What is the issue?
[The documentation for manual Linux installation](https://github.com/ollama/ollama/blob/d8bab8ea4403d3fb05a9bf408e638195b72bebf9/docs/linux.md) provides the following instructions to set up AMD gpu: download an additional archive and extract it using
```
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
```
However, this will put the libraries under the `/usr/lib/ollama/` directory. This seems to be wrong, as the ollama binary searches for compatible libraries in the following directories, according to debug output:
```
/usr/local/lib/ollama
/opt/rocm/lib
/usr/lib64
/usr/share/ollama/lib/rocm
```
I'm not sure if `/usr/lib/ollama` is ever used for anything, but only after moving `/usr/lib/ollama` to `/usr/local/lib/ollama` did I get working AMD GPU support. This is highly annoying because the documentation suggests that I need to install ROCm, which I did and which didn't help either.
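The mismatch is easy to state in code; a sketch using the paths quoted above (the `mv` shown in the comment is the reporter's workaround, not an officially documented step):

```python
# Directories the ollama binary reports searching (from the debug output):
search_paths = [
    "/usr/local/lib/ollama",
    "/opt/rocm/lib",
    "/usr/lib64",
    "/usr/share/ollama/lib/rocm",
]

# Where `sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz` actually
# extracts the bundled libraries:
extracted_dir = "/usr/lib/ollama"

# The extraction target is never searched, hence no GPU support until
# the directory is moved, e.g.:
#   sudo mv /usr/lib/ollama /usr/local/lib/ollama
```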
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4
| bug | low | Critical |
2,754,367,917 | yt-dlp | [Youtube] Immediate HTTP Error 403 on download | ### 🚨 [click here to see the current status of this issue](https://github.com/yt-dlp/yt-dlp/issues/11868#issuecomment-2560431566) 🚨
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Provide a description that is worded well enough to be understood
Downloading videos currently results in a 403 error.
What I found out is that the video plays in the browser when logged in; when trying to start the video in a private window without cookies, the video doesn't start (no error is shown). However, the ads do play.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://www.youtube.com/watch?v=l35ok-7n2IU
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=l35ok-7n2IU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [d298693b1] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2024-05-13-git-37db0454e4-essentials_build-www.gyan.dev (setts), ffprobe 2024-05-13-git-37db0454e4-essentials_build-www.gyan.dev, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=l35ok-7n2IU
[youtube] l35ok-7n2IU: Downloading webpage
[youtube] l35ok-7n2IU: Downloading ios player API JSON
[youtube] l35ok-7n2IU: Downloading mweb player API JSON
[debug] [youtube] Extracting signature function js_03dbdfab_107
[debug] Loading youtube-sigfuncs.js_03dbdfab_107 from cache
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig aV2SlxDn5CNaYgrXF => IlSumDx_jM-HcQ
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig DD96eHcKsL6Qq9E90 => hxviA3B9kS_9QA
[debug] [youtube] Extracting signature function js_03dbdfab_103
[debug] Loading youtube-sigfuncs.js_03dbdfab_103 from cache
[youtube] l35ok-7n2IU: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] l35ok-7n2IU: Downloading 1 format(s): 313+251
[debug] Invoking http downloader on "https://rr3---sn-8vq54voxqx-cxgs.googlevideo.com/videoplayback?expire=1734837482&ei=ijBnZ5bkLN7t6dsPxZnjqA8&ip=2a02%3A8109%3A9d8f%3A3e00%3A693d%3A9ea1%3A7054%3A8015&id=o-AD-6zsiRz7FIC1te5s18LyYs5Y8ksIOFcZIi9DwV-4eZ&itag=313&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1734815882%2C&mh=OR&mm=31%2C29&mn=sn-8vq54voxqx-cxgs%2Csn-i5h7lnll&ms=au%2Crdu&mv=m&mvi=3&pl=43&rms=au%2Cau&initcwndbps=3333750&bui=AfMhrI_ngaE8pa7Haos9jOTZ3q5auYyyVfDQXr5jsroWyEFRCr9GmkX8qUqEH8wbn8V2hlhS9c3-WuEV&spc=x-caUMh6H49vmv_2Nq1W3Z-hU6SkreEU_haD9Rl720YYO1YqHc5O&vprv=1&svpuc=1&mime=video%2Fwebm&rqh=1&gir=yes&clen=6804056254&dur=3879.720&lmt=1734697485233688&mt=1734815349&fvip=3&keepalive=yes&fexp=51326932%2C51335594%2C51371294&c=IOS&txp=4432434&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRgIhAPI4tNsuKx7xFM9mNgrxyyp8hSGN0-5CbzTuHZVdC9jXAiEA8wF4h9cuQjRMIDaEFCRCiwLDkWelVq5pslq_8uHVYPg%3D&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRgIhAJJ-X3tKgTINiH5L18EpIN5fb92EiSjLW5srWP1fl6CmAiEAncDu51bsexappSLmrSkwFZ4ZUfKzdhk4uOfkHq8jU8Q%3D"
ERROR: unable to download video data: HTTP Error 403: Forbidden
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 3461, in process_info
File "yt_dlp\YoutubeDL.py", line 3199, in dl
File "yt_dlp\downloader\common.py", line 464, in download
File "yt_dlp\downloader\http.py", line 367, in real_download
File "yt_dlp\downloader\http.py", line 118, in establish_connection
File "yt_dlp\YoutubeDL.py", line 4162, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 365, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
```
```
| site-bug,needs-investigating,site:youtube | high | Critical |
2,754,368,863 | next.js | Unexpected Turbopack Error: AliasMap::lookup panicked in Next.js | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/sweet-nova-x2vydg
### To Reproduce
**Steps to Reproduce:**
- Set up a Next.js project with Turbopack by including the dependencies as described in the package.json below.
- Add a basic useRouter hook usage in any component to trigger navigation (e.g., useRouter in a page or component).
- Try to run the development server using npm run dev or next dev --turbopack -H 0.0.0.0.
- Observe the error occurring during the development server startup or when interacting with routing/navigation features.
### Current vs. Expected behavior
## Current Behavior:
- Turbopack throws a panic due to alias resolution errors.
The following error appears in the logs:
```
Panic: panicked at turbopack/crates/turbopack-core/src/resolve/alias_map.rs:203:13:
AliasMap::lookup must not be called on alternatives, received Alternatives([Dynamic, Constant("null")])
```
- Dynamic imports and `useRouter` from `next/navigation` seem to trigger this error.
- The server crashes or hangs with the error preventing any further development.
## Expected Behavior:
- The application should function as expected, allowing navigation using `useRouter` from `next/navigation` and handling dynamic imports correctly.
- No errors should occur in the console or browser, and dynamic imports and alias resolution should work seamlessly.
- The `npm run dev` or `next dev --turbopack -H 0.0.0.0` command should start the server without triggering a panic in Turbopack.
- The error should not appear when navigating between pages or using dynamic imports in the project
### Provide environment information
```bash
Operating System: Fedora Linux
Node.js Version: v22.11.0
npm Version: 10.9.0
npx Version: 10.9.0
React Version: 18.3.1
React DOM Version: 18.3.1
Next.js Version: 15.0.3
Turbopack Version: (Should match with Next.js version)
Package Manager: npm
```
### Which area(s) are affected? (Select all that apply)
create-next-app
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), Vercel (Deployed)
### Additional context
The error seems to be related to Turbopack's alias resolution system, potentially with dynamic and constant alias resolution conflicts.
- The issue appears when trying to use features like useRouter from next/navigation or dynamic imports.
I am using Next.js with the --turbopack flag for bundling.
**Possible Solutions or Workaround:**
1. Check if the error is caused by a combination of dynamic imports and aliasing in Turbopack.
2. Consider switching back to Webpack if the issue persists, as Turbopack is still under active development.
3. Investigate if certain conflicting dependencies (like React packages or versions) might be contributing to the error.
- The issue appears to be related to how Turbopack handles dynamic imports and alias resolution, which may involve some conflicting configurations.
- It might be related to the use of constants and dynamic variables within the project.
- I'm using Next.js 15.0.3 with the --turbopack flag enabled for bundling.
- I have also confirmed that my project uses React 18.3.1 and React-DOM 18.3.1, which might be relevant to the problem.
- The issue may be occurring specifically with the newer versions of Turbopack, which is under active development, and it may not have stable handling for certain dynamic import patterns.
**Dependenices**
```
"dependencies": {
"@hookform/resolvers": "^3.9.1",
"@radix-ui/react-accordion": "^1.2.2",
"@radix-ui/react-avatar": "^1.1.1",
"@radix-ui/react-dialog": "^1.1.2",
"@radix-ui/react-dropdown-menu": "^2.1.2",
"@radix-ui/react-icons": "^1.3.1",
"@radix-ui/react-label": "^2.1.0",
"@radix-ui/react-select": "^2.1.2",
"@radix-ui/react-separator": "^1.1.0",
"@radix-ui/react-slot": "^1.1.0",
"@radix-ui/react-switch": "^1.1.1",
"@radix-ui/react-toast": "^1.2.2",
"@radix-ui/react-tooltip": "^1.1.3",
"@tanstack/react-table": "^8.20.5",
"axios": "^1.6.5",
"class-variance-authority": "^0.7.0",
"clsx": "^2.1.1",
"firebase": "^11.1.0",
"framer-motion": "^11.13.5",
"input-otp": "^1.4.1",
"js-cookie": "^3.0.5",
"lucide-react": "^0.454.0",
"next": "^15.0.3",
"react": "^18.3.1",
"react-dom": "^18.3.1",
"react-hook-form": "^7.53.1",
"react-hot-toast": "^2.4.1",
"react-router-dom": "^6.28.0",
"recharts": "^2.13.3",
"sonner": "^1.7.1",
"tailwind-merge": "^2.5.4",
"tailwindcss-animate": "^1.0.7",
"zod": "^3.23.8"
}
```
**Dev Dependencies**
```
"devDependencies": {
"@eslint/js": "^9.14.0",
"@types/js-cookie": "^3.0.6",
"@types/node": "^20.17.6",
"@types/react": "^18",
"@types/react-dom": "^18",
"eslint": "^8.57.1",
"eslint-config-next": "15.0.2",
"eslint-plugin-react": "^7.37.2",
"globals": "^15.12.0",
"postcss": "^8",
"tailwindcss": "^3.4.1",
"typescript": "^5",
"typescript-eslint": "^8.14.0"
}
```
| create-next-app | low | Critical |
2,754,376,982 | PowerToys | Quick Accent doesn't have "È" symbol and others, that are important in most languages, in UTF-8 | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Quick Accent
### Steps to reproduce
1. Open any text editor or application where text input is required.
2. Use PowerToys to access the character set.
3. Attempt to insert an accented uppercase letter (e.g., "È").
4. Observe that the selected character does not appear in the text.
### ✔️ Expected Behavior
The accented uppercase letter should be displayed correctly in the text.
### ❌ Actual Behavior
The accented uppercase letter does not appear in the text, causing issues with accurate typing.
### Other Software
_No response_
| Issue-Bug,Needs-Triage | low | Major |
2,754,384,930 | PowerToys | Support ability to configure/place independent "activation zones" in FancyZones | ### Description of the new feature / enhancement
I would like to see support for small "activation zones" representing a larger zone that can be manually placed during layout configuration. This is a crude drawing showing what this might look like:

The black box is the desktop. The blue outlines are the configured zones. The red boxes are what the user would actually see on the desktop when dragging a window around. These are the zone activation areas. The user would have to drag their window to the smaller zone, and then it would resize and fit into the blue area.
### Scenario when this would be used?
This would allow a user to disable hotkey activation of zone snapping and still have some freedom to manually place windows even if the entire display is filled with zones. In the current implementation, if a user has zones filling all usable space and disables hotkey activation, it's near impossible to manually place a window without having it snap to a zone.
This also allows more complex overlapping zones while still keeping most of the usability of the desktop without needing any hotkeys. If the activation area is independent of the zone, a user could configure multiple "layouts" in the same layout with overlapping zones.
Ideally, the zones would be able to be "named" to make it clear what area does what. Building on the example before, this would allow configuration of something like this, splitting the right side of the screen to allow optionally using the full right side or the top/bottom halves.

Here, I could drop my window onto the "Full" activation area and it would fill the brown colored zone. If I drop it in the yellow "Top" area, it filles the yellow zone in the top right. Same with the green zone which would fill the "Bottom" area.
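One way to make the proposal concrete: each zone carries both its snap target and a separately placed activation rectangle, and only the activation rectangles need to be disjoint — the snap targets may overlap freely. A hypothetical Python sketch (all names invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

@dataclass
class Zone:
    name: str
    target: Rect      # large snap rect (blue outline in the drawing)
    activation: Rect  # small drop rect the user aims for (red box)

def zone_for_drop(zones, px, py):
    # Snap targets may overlap freely; only the small activation
    # rects need to be disjoint for an unambiguous drop.
    for z in zones:
        if z.activation.contains(px, py):
            return z
    return None  # outside every activation area -> plain manual placement
```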
### Supporting information
_No response_
| Needs-Triage | low | Minor |
2,754,392,604 | PowerToys | PowerToys Run does not show when Brave Browser or Terminal has focus | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Set PowerToys run shortcut to "Windows Key" + "Spacebar"
Open Brave browser or Open Windows Terminal
Try and open Run. Nothing happens
The only way to get it kind of working is to press the "Windows Key", wait for that to show and take focus, then press the shortcut command. Even then PowerToys Run is behind the Windows search, so it's not being placed on top of everything.
### ✔️ Expected Behavior
When Browser ior terminal has focus and the PowerToys Run shortcut is pressed the Run window should show. It works as expected with Notepad++ and other applications
### ❌ Actual Behavior
Nothing shows for PowerToys Run command
### Other Software
_No response_
| Issue-Bug,Needs-Triage | low | Minor |
2,754,403,538 | rust | Always show the lint name involved in a diagnostic? | ### Description
After the first occurrence of a lint, the name of the lint is not displayed anymore.
That is a real annoyance:
- When the diagnostics are shown in a console, one has to scroll back to find the first occurrence of the lint.
- When the diagnostics are shown in an IDE, there is no such thing as a "first" diagnostic, so it is almost impossible to find out the name of the lint.
I do not see any good reason to save one line at the cost of such useful information. (Or maybe there are still hobbyists using a teleprinter and an Altair 8800???)
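For illustration, a minimal sketch (assuming the default `unused_variables` lint): compiling the code below produces two warnings, but only the first carries the `#[warn(unused_variables)]` note that names the lint.

```rust
// Both bindings trigger `unused_variables`; with current rustc only the
// first warning is annotated with the note naming the lint.
fn demo() -> u32 {
    let first_unused = 1; // the warning here includes the lint name
    let second_unused = 2; // the warning here does not
    42
}

fn main() {
    // The return value is irrelevant; the interesting part is the warnings.
    println!("{}", demo());
}
```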
### Version
```text
```
### Additional Labels
_No response_ | A-lints,A-diagnostics,T-compiler,WG-diagnostics,C-discussion | low | Critical |
2,754,405,636 | transformers | ModernBERT inference fails on CPU: ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?) | ### System Info
- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.12.5
- Huggingface_hub version: 0.25.1
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@Rocketknight1 @Arthu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When one runs the code below, taken verbatim from the Hugging Face ModernBERT README except for the addition of `device='cpu'`, the error `ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)` is raised:
```python
import torch
from transformers import pipeline
from pprint import pprint

pipe = pipeline(
    "fill-mask",
    model="answerdotai/ModernBERT-base",
    torch_dtype=torch.bfloat16,
    device='cpu',
)

input_text = "He walked to the [MASK]."
results = pipe(input_text)
pprint(results)
```
Here is the full traceback of the error:
```
ValueError Traceback (most recent call last)
Cell In[1], line 13
5 pipe = pipeline(
6 "fill-mask",
7 model="answerdotai/ModernBERT-base",
8 torch_dtype=torch.bfloat16,
9 device='cpu',
10 )
12 input_text = "He walked to the [MASK]."
---> 13 results = pipe(input_text)
14 pprint(results)
File ~/dev/.venv/lib/python3.12/site-packages/transformers/pipelines/fill_mask.py:270, in FillMaskPipeline.__call__(self, inputs, **kwargs)
248 def __call__(self, inputs, **kwargs):
249 """
250 Fill the masked token in the text(s) given as inputs.
251
(...)
268 - **token_str** (str) -- The predicted token (to replace the masked one).
269 """
--> 270 outputs = super().__call__(inputs, **kwargs)
271 if isinstance(inputs, list) and len(inputs) == 1:
272 return outputs[0]
File ~/dev/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py:1301, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1293 return next(
1294 iter(
1295 self.get_iterator(
(...)
1298 )
1299 )
1300 else:
-> 1301 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File ~/dev/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py:1308, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1306 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
1307 model_inputs = self.preprocess(inputs, **preprocess_params)
-> 1308 model_outputs = self.forward(model_inputs, **forward_params)
1309 outputs = self.postprocess(model_outputs, **postprocess_params)
1310 return outputs
File ~/dev/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py:1208, in Pipeline.forward(self, model_inputs, **forward_params)
1206 with inference_context():
1207 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
-> 1208 model_outputs = self._forward(model_inputs, **forward_params)
1209 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
1210 else:
File ~/dev/.venv/lib/python3.12/site-packages/transformers/pipelines/fill_mask.py:127, in FillMaskPipeline._forward(self, model_inputs)
126 def _forward(self, model_inputs):
--> 127 model_outputs = self.model(**model_inputs)
128 model_outputs["input_ids"] = model_inputs["input_ids"]
129 return model_outputs
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:1059, in ModernBertForMaskedLM.forward(self, input_ids, attention_mask, sliding_window_mask, position_ids, labels, indices, cu_seqlens, max_seqlen, batch_size, seq_len, output_attentions, output_hidden_states, return_dict, **kwargs)
1054 with torch.no_grad():
1055 input_ids, indices, cu_seqlens, max_seqlen, position_ids, labels = _unpad_modernbert_input(
1056 inputs=input_ids, attention_mask=attention_mask, position_ids=position_ids, labels=labels
1057 )
-> 1059 outputs = self.model(
1060 input_ids,
1061 attention_mask=attention_mask,
1062 sliding_window_mask=sliding_window_mask,
1063 position_ids=position_ids,
1064 indices=indices,
1065 cu_seqlens=cu_seqlens,
1066 max_seqlen=max_seqlen,
1067 batch_size=batch_size,
1068 seq_len=seq_len,
1069 output_attentions=output_attentions,
1070 output_hidden_states=output_hidden_states,
1071 return_dict=return_dict,
1072 )
1073 last_hidden_state = outputs[0]
1075 if self.sparse_prediction and labels is not None:
1076 # flatten labels and output first
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:913, in ModernBertModel.forward(self, input_ids, attention_mask, sliding_window_mask, position_ids, indices, cu_seqlens, max_seqlen, batch_size, seq_len, output_attentions, output_hidden_states, return_dict)
902 layer_outputs = self._gradient_checkpointing_func(
903 encoder_layer.__call__,
904 hidden_states,
(...)
910 output_attentions,
911 )
912 else:
--> 913 layer_outputs = encoder_layer(
914 hidden_states,
915 attention_mask=attention_mask,
916 sliding_window_mask=sliding_window_mask,
917 position_ids=position_ids,
918 cu_seqlens=cu_seqlens,
919 max_seqlen=max_seqlen,
920 output_attentions=output_attentions,
921 )
922 hidden_states = layer_outputs[0]
923 if output_attentions and len(layer_outputs) > 1:
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:529, in ModernBertEncoderLayer.forward(self, hidden_states, attention_mask, sliding_window_mask, position_ids, cu_seqlens, max_seqlen, output_attentions)
519 def forward(
520 self,
521 hidden_states: torch.Tensor,
(...)
527 output_attentions: Optional[bool] = False,
528 ) -> torch.Tensor:
--> 529 attn_outputs = self.attn(
530 self.attn_norm(hidden_states),
531 attention_mask=attention_mask,
532 sliding_window_mask=sliding_window_mask,
533 position_ids=position_ids,
534 cu_seqlens=cu_seqlens,
535 max_seqlen=max_seqlen,
536 output_attentions=output_attentions,
537 )
538 hidden_states = hidden_states + attn_outputs[0]
539 mlp_output = (
540 self.compiled_mlp(hidden_states)
541 if self.config.reference_compile
542 else self.mlp(self.mlp_norm(hidden_states))
543 )
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:487, in ModernBertAttention.forward(self, hidden_states, output_attentions, **kwargs)
484 else:
485 qkv = qkv.view(bs, -1, 3, self.num_heads, self.head_dim)
--> 487 attn_outputs = MODERNBERT_ATTENTION_FUNCTION[self.config._attn_implementation](
488 self,
489 qkv=qkv,
490 rotary_emb=self.rotary_emb,
491 local_attention=self.local_attention,
492 bs=bs,
493 dim=self.all_head_size,
494 output_attentions=output_attentions,
495 **kwargs,
496 )
497 hidden_states = attn_outputs[0]
498 hidden_states = self.out_drop(self.Wo(hidden_states))
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:349, in flash_attention_forward(module, qkv, rotary_emb, cu_seqlens, max_seqlen, local_attention, bs, dim, target_dtype, **_kwargs)
336 def flash_attention_forward(
337 module: "ModernBertAttention",
338 qkv: torch.Tensor,
(...)
347 ) -> Tuple[torch.Tensor]:
348 # (total_seqlen, 3, nheads, headdim)
--> 349 qkv = rotary_emb(qkv, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen)
351 convert_dtype = qkv.dtype not in (torch.float16, torch.bfloat16)
352 if convert_dtype:
353 # FA2 implementation only supports fp16 and bf16. If FA2 is supported,
354 # bfloat16 must be supported as of FA2 2.5.7. (Turing GPUs not supported)
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/dev/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:178, in ModernBertUnpaddedRotaryEmbedding.forward(self, qkv, cu_seqlens, max_seqlen)
175 if max_seqlen is not None:
176 self._update_cos_sin_cache(max_seqlen, device=qkv.device, dtype=qkv.dtype)
--> 178 qkv = apply_rotary_unpadded(
179 qkv,
180 self._cos_cached,
181 self._sin_cached,
182 cu_seqlens=cu_seqlens,
183 max_seqlen=max_seqlen,
184 )
186 return qkv
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:136, in apply_rotary_unpadded(qkv, cos, sin, cu_seqlens, max_seqlen)
113 def apply_rotary_unpadded(
114 qkv,
115 cos,
(...)
118 max_seqlen: Optional[int] = None,
119 ):
120 """
121 Arguments:
122 qkv: (total_nnz, 3, nheads, headdim) - input tensor for packed QKV.
(...)
134 Apply rotary embedding to the first rotary_dim of x.
135 """
--> 136 return ApplyRotaryEmbUnpad.apply(qkv, cos, sin, cu_seqlens, max_seqlen)
File ~/dev/.venv/lib/python3.12/site-packages/torch/autograd/function.py:575, in Function.apply(cls, *args, **kwargs)
572 if not torch._C._are_functorch_transforms_active():
573 # See NOTE: [functorch vjp and autograd interaction]
574 args = _functorch.utils.unwrap_dead_wrappers(args)
--> 575 return super().apply(*args, **kwargs) # type: ignore[misc]
577 if not is_setup_ctx_defined:
578 raise RuntimeError(
579 "In order to use an autograd.Function with functorch transforms "
580 "(vmap, grad, jvp, jacrev, ...), it must override the setup_context "
581 "staticmethod. For more details, please see "
582 "https://pytorch.org/docs/main/notes/extending.func.html"
583 )
File ~/dev/.venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py:75, in ApplyRotaryEmbUnpad.forward(ctx, qkv, cos, sin, cu_seqlens, max_seqlen)
71 # We need qkv to be contiguous so that when we reshape to combine (3, nheads) dimensions,
72 # we get the same tensor
73 # qk = rearrange(qkv[:, :2], "b_s t h d -> b_s (t h) d")
74 qk = qkv[:, :2].view(total_nnz, -1, headdim)
---> 75 apply_rotary(
76 qk,
77 cos,
78 sin,
79 seqlen_offsets=0,
80 cu_seqlens=cu_seqlens,
81 max_seqlen=max_seqlen,
82 interleaved=False,
83 inplace=True,
84 )
86 ctx.save_for_backward(cos, sin, cu_seqlens)
87 ctx.max_seqlen = max_seqlen
File ~/dev/.venv/lib/python3.12/site-packages/flash_attn/ops/triton/rotary.py:202, in apply_rotary(x, cos, sin, seqlen_offsets, cu_seqlens, max_seqlen, interleaved, inplace, conjugate)
199 # Need this, otherwise Triton tries to launch from cuda:0 and we get
200 # ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
201 with torch.cuda.device(x.device.index):
--> 202 rotary_kernel[grid](
203 output, # data ptrs
204 x,
205 cos,
206 sin,
207 cu_seqlens,
208 seqlen_offsets,
209 seqlen, # shapes
210 rotary_dim,
211 seqlen_ro,
212 output.stride(0) if not is_varlen else 0, # batch_strides if not varlen else 0
213 output.stride(-3), # seqlen_stride or total_seqlen_stride
214 output.stride(-2), # nheads_stride
215 output.stride(-1), # headdim_stride
216 x.stride(0) if not is_varlen else 0, # batch_strides if not varlen else 0
217 x.stride(-3), # seqlen stride or total_seqlen_stride
218 x.stride(-2), # nheads stride
219 x.stride(-1), # headdim stride
220 BLOCK_K,
221 isinstance(seqlen_offsets, torch.Tensor),
222 is_varlen,
223 interleaved,
224 conjugate,
225 BLOCK_M,
226 )
227 return output
File ~/dev/.venv/lib/python3.12/site-packages/triton/runtime/jit.py:345, in KernelInterface.__getitem__.<locals>.<lambda>(*args, **kwargs)
339 def __getitem__(self, grid) -> T:
340 """
341 A JIT function is launched with: fn[grid](*args, **kwargs).
342 Hence JITFunction.__getitem__ returns a callable proxy that
343 memorizes the grid.
344 """
--> 345 return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File ~/dev/.venv/lib/python3.12/site-packages/triton/runtime/jit.py:691, in JITFunction.run(self, grid, warmup, *args, **kwargs)
689 # launch kernel
690 launch_metadata = kernel.launch_metadata(grid, stream, *non_constexpr_vals)
--> 691 kernel.run(grid_0, grid_1, grid_2, stream, kernel.function, kernel.packed_metadata, launch_metadata,
692 self.CompiledKernel.launch_enter_hook, self.CompiledKernel.launch_exit_hook, *non_constexpr_vals)
693 return kernel
File ~/dev/.venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py:365, in CudaLauncher.__call__(self, *args, **kwargs)
364 def __call__(self, *args, **kwargs):
--> 365 self.launch(*args, **kwargs)
ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
```
### Expected behavior
It works. | bug | low | Critical |
2,754,406,060 | godot | 3D viewport camera freelook doesn't work anymore with hold rotate + WASD | ### Tested versions
- Reproducible in: 4.4 dev3 and above
- Not reproducible in: 4.3 and earlier
### System information
Godot v4.4.dev7.mono - Windows 10 (build 19045) - Multi-window, 2 monitors
### Issue description
In version 4.3 and before, I could move the 3D viewport camera by holding the camera rotate button and using the WASD keys.
In 4.4, this is no longer possible, and I can only enter WASD navigation by using the freelook hotkey.
### Steps to reproduce
- Enter 3D viewport
- Hold camera rotate button
- Press WASD keys
### Minimal reproduction project (MRP)
- | bug,topic:editor,topic:3d | low | Minor |
2,754,415,549 | godot | GridMapEditorPlugin: `set_selected_palette_item` does not update the cursor mesh | ### Tested versions
- Reproducible since 4.4.dev [baf03e4fb60bd642bd3ad361b4021470e00ccd3a]
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 1 monitor - Vulkan (Mobile) - dedicated AMD Radeon RX 6700 XT (Advanced Micro Devices, Inc.; 32.0.12011.1036) - AMD Ryzen 5 5600X 6-Core Processor (12 threads)
### Issue description
GH-99639 exposed the `GridMapEditorPlugin` class to scripts.
However, the `set_selected_palette_item` function from this class does not update the cursor mesh as it would when selecting from the MeshLibrary palette.
The palette selection updates correctly and you will paint with the correct mesh item, but the cursor will still show the old mesh.
### Steps to reproduce
https://github.com/user-attachments/assets/ab0d9e99-2ce2-46e8-93ff-d920b7b5b841
0. Open the MRP
1. Make sure that the GridMapTool plugin is enabled
2. Select the GridMap node in the scene tree
3. In the GridMap bottom panel, select the paint tool so you can see the cursor mesh
4. Open the new "GridMap Tool" panel in the bottom left dock
5. Click a button ("red", "green", "blue") to select a mesh - this will call `set_selected_palette_item`
6. Compare your choice with the current GridMap cursor.
It should be the same color as your selection in the tool panel.
### Minimal reproduction project (MRP)
[godot-gridmap-cursor.zip](https://github.com/user-attachments/files/18220338/godot-gridmap-cursor.zip)
| bug,topic:editor,topic:3d | low | Minor |
2,754,415,559 | godot | InputEventScreenTouch & get_global_mouse_position() updates delayed after initial tap | ### Tested versions
3.6 stable
### System information
Android 10
### Issue description
On Android export, InputEventScreenDrag, InputEventMouseMove, and get_global_mouse_position() cannot detect the movement of a touch until it passes a pixel threshold.
Related (may need a backport to 3.x to make this more Android-friendly):
#84138
#84331
### Steps to reproduce
In the MRP below, touch the screen and make a small drag; you'll notice that the sprite only moves after you drag more than about 10 pixels.
### Minimal reproduction project (MRP)
[ScreenDragBug.zip](https://github.com/user-attachments/files/18220329/ScreenDragBug.zip)
| bug,platform:android,topic:input | low | Critical |
2,754,419,256 | rust | Missed optimization: bounds check not elided for `i * s < n` when `0 <= i < n / s` | I tried this code:
```rust
#[inline(never)]
pub fn step_sum(arr: &[u32]) -> u32
{
    const STEP_SIZE: usize = 8;
    let mut result = 0;
    for step in 0..(arr.len() / STEP_SIZE) {
        result += arr[step * STEP_SIZE];
        // I'd also want to add an offset `i < STEP_SIZE`, but it's not necessary to cause the issue.
        // Alternatively, replacing the division with `.div_ceil()` also reproduces it.
    }
    result
}
```
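As an aside not in the original report, a hedged workaround sketch: iterating over a truncated prefix of the slice with `step_by` visits the same indices (`step * 8` for `step in 0..len / 8`) without indexed access, so no bounds check is emitted.

```rust
pub fn step_sum_iter(arr: &[u32]) -> u32 {
    const STEP_SIZE: usize = 8;
    // Truncate to a multiple of STEP_SIZE so step_by does not visit a
    // trailing index that the original `0..len / STEP_SIZE` loop would skip.
    let full = arr.len() / STEP_SIZE * STEP_SIZE;
    arr[..full].iter().step_by(STEP_SIZE).sum()
}

fn main() {
    let data: Vec<u32> = (0..20).collect();
    // len / 8 == 2, so the elements at indices 0 and 8 are summed.
    println!("{}", step_sum_iter(&data)); // prints 8
}
```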
I expected to see this happen: the index is always in bounds, so the check is elided.
Instead, this happened:
```asm
.LBB0_4:
        cmp rcx, rsi
        jae .LBB0_6
        add eax, dword ptr [rdi + 4*rcx]
        add rcx, 8
        dec rdx
        jne .LBB0_4
        ret
.LBB0_6:
        push rax
        lea rdx, [rip + .L__unnamed_1]
        mov rdi, rcx
        call qword ptr [rip + core::panicking::panic_bounds_check::h300eea3d2ac1c8da@GOTPCREL]
```
[Godbolt](https://godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(filename:'1',fontScale:14,fontUsePx:'0',j:1,lang:rust,selection:(endColumn:2,endLineNumber:13,positionColumn:2,positionLineNumber:13,selectionStartColumn:2,selectionStartLineNumber:13,startColumn:2,startLineNumber:13),source:'%23%5Binline(never)%5D%0Apub+fn+step_sum(arr:+%26%5Bu32%5D)+-%3E+u32%0A%7B%0A++++const+STEP_SIZE:+usize+%3D+8%3B%0A%0A++++let+mut+result+%3D+1%3B%0A%0A++++for+step+in+0..(arr.len()+/+STEP_SIZE)+%7B%0A++++++++result+*%3D+arr%5Bstep+*+STEP_SIZE%5D%3B%0A++++%7D%0A%0A++++result%0A%7D'),l:'5',n:'0',o:'Rust+source+%231',t:'0')),k:50,l:'4',n:'0',o:'',s:0,t:'0'),(g:!((h:compiler,i:(compiler:r1830,filters:(b:'0',binary:'1',binaryObject:'1',commentOnly:'0',debugCalls:'1',demangle:'0',directives:'0',execute:'1',intel:'0',libraryCode:'0',trim:'1',verboseDemangling:'0'),flagsViewOpen:'1',fontScale:14,fontUsePx:'0',j:1,lang:rust,libs:!(),options:'-C+opt-level%3D3',overrides:!((name:edition,value:'2021')),selection:(endColumn:1,endLineNumber:1,positionColumn:1,positionLineNumber:1,selectionStartColumn:1,selectionStartLineNumber:1,startColumn:1,startLineNumber:1),source:1),l:'5',n:'0',o:'+rustc+1.83.0+(Editor+%231)',t:'0')),k:50,l:'4',n:'0',o:'',s:0,t:'0')),l:'2',n:'0',o:'',t:'0')),version:4) | A-LLVM,T-compiler,C-optimization | low | Critical |
2,754,430,060 | pytorch | Upgrading torch 2.5.0+xpu to torch 2.6.0+xpu breaks import torch on Ubuntu 24.04.1 / Python 3.12 | ### 🐛 Describe the bug
Installing the new 2.6.0 xpu torch version from https://download.pytorch.org/whl/test/xpu on Ubuntu 24.04.1 / Python 3.12 breaks
`import torch`
for me with an undefined symbol error. This error does not happen with version 2.5.0+xpu, where I can successfully import torch on the same system and use the xpu backend on my Intel N100 iGPU.
```
:: initializing oneAPI environment ...
-bash: BASH_VERSION = 5.2.21(1)-release
args: Using "$@" for oneapi-vars.sh arguments:
:: compiler -- processing etc/compiler/vars.sh
:: debugger -- processing etc/debugger/vars.sh
:: dpl -- processing etc/dpl/vars.sh
:: mkl -- processing etc/mkl/vars.sh
:: tbb -- processing etc/tbb/vars.sh
:: oneAPI environment initialized ::
Python 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/.local/lib/python3.12/site-packages/torch/__init__.py", line 379, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home/me/.local/lib/python3.12/site-packages/torch/lib/../../../../libsycl.so.8: undefined symbol: urBindlessImagesImportExternalMemoryExp, version LIBUR_LOADER_0.10
>>>
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.12.6-zabbly+-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) N100
CPU family: 6
Model: 190
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
CPU(s) scaling MHz: 95%
CPU max MHz: 3400.0000
CPU min MHz: 700.0000
BogoMIPS: 1612.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 2 MiB (1 instance)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.5.0
[pip3] numpy==2.0.2
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-tools==1.7.0
[pip3] optree==0.13.1
[pip3] pytorch-lamb==1.0.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-triton-xpu==3.2.0
[pip3] pytorch-wpe==0.0.1
[pip3] torch==2.6.0+xpu
[pip3] torch-complex==0.4.4
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[pip3] triton==3.2.0
[conda] Could not collect
cc @seemethere @malfet @osalpekar @atalman @gujinghui @EikanWang @fengyuan14 @guangyey | needs reproduction,module: binaries,triaged,module: regression,module: xpu | low | Critical |
2,754,432,386 | neovim | 0.10.3 `Inspect` command throws error | ### Problem
After upgrading to 0.10.3 (built from source), the `Inspect` command throws an error:
```
Error executing Lua callback: ...s/jlima/.local/share/nvim/r
untime/lua/vim/_inspector.lua:189: attempt to index field 'h
l' (a nil value)
stack traceback:
...s/jlima/.local/share/nvim/runtime/lua/vim/_inspec
tor.lua:189: in function 'show_pos'
vim/_defaults.lua: in function <vim/_defaults.lua:0>
```
```
:version
NVIM v0.10.3
Build type: Release
LuaJIT 2.1.1713484068
Run ":verbose version" for more info
```
The error happens on different file types: javascript, python, golang.
It works with `nvim --clean`.
`InspectTree` works as expected.
### Steps to reproduce
1. build from source
2. install tree-sitter via lazy.nvim
3. Inspect any word in a file that uses tree-sitter for syntax highlighting
### Expected behavior
Show highlight groups as before when calling `:Inspect` over a highlighted word.
### Nvim version (nvim -v)
0.10.3
### Vim (not Nvim) behaves the same?
No, output is correct when tree-sitter is not loaded
### Operating system/version
macOS 15.1.1 (24B91)
### Terminal name/version
kitty 0.38.0
### $TERM environment variable
xterm-kitty
### Installation
from source | bug,lua | medium | Critical |
2,754,447,908 | ui | [bug]: Toaster Default Setting Not working | ### Describe the bug
In line 3 of toaster.jsx it shows:
`import { useToast } from "@/components/hooks/use-toast"
`
This causes an error because `use-toast` is located in `@/hooks/use-toast`.
It should be:
` import { useToast } from "@/hooks/use-toast"
`
### Affected component/components
Toast
### How to reproduce
`npx shadcn add toast`
Go to toaster.jsx in `@/components/ui`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11
Node.Js
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,754,459,085 | rust | [ICE]: Associated const projection not yet supported | ### Code
```Rust
#![feature(specialization)]
#![feature(associated_const_equality)]

pub trait IsVoid
{
    const IS_VOID: bool;
}
impl<T> IsVoid for T
where
    T: ?Sized
{
    default const IS_VOID: bool = false;
}
impl IsVoid for ()
{
    const IS_VOID: bool = true;
}

pub trait NotVoid: IsVoid<IS_VOID = false>
{
}
impl<T> NotVoid for T
where
    T: IsVoid<IS_VOID = false> + ?Sized
{
}

pub trait Maybe<T>
where
    T: ?Sized
{
}
impl<T> Maybe<T> for T
where
    T: ?Sized
{
}
impl<T> Maybe<T> for ()
where
    T: NotVoid + ?Sized
{
}
### Affected release channels
- [ ] Previous Stable
- [ ] Current Stable
- [ ] Current Beta
- [x] Current Nightly
### Rust Version
```Shell
$ rustc --version --verbose
rustc 1.85.0-nightly (5f23ef7d3 2024-12-20)
binary: rustc
commit-hash: 5f23ef7d3f7a8c3e0ca5c4e1978829c0448a3686
commit-date: 2024-12-20
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
```
### Current error output
```Shell
thread 'rustc' panicked at /rustc/5f23ef7d3f7a8c3e0ca5c4e1978829c0448a3686/compiler/rustc_next_trait_solver/src/solve/normalizes_to/mod.rs:305:25:
associated const projection is not supported yet
stack backtrace:
0: 0x7d04bc345a25 - std::backtrace::Backtrace::create::hdebb59933e0bdc30
1: 0x7d04ba957975 - std::backtrace::Backtrace::force_capture::h53ec951c239bfd2c
2: 0x7d04b9ada1fe - std[da923ce35ba8e93c]::panicking::update_hook::<alloc[4441fbf4cb6e4b6f]::boxed::Box<rustc_driver_impl[c20abde1d352895d]::install_ice_hook::{closure#0}>>::{closure#0}
3: 0x7d04ba96f8e8 - std::panicking::rust_panic_with_hook::h7657da2fd0e20b26
4: 0x7d04ba96f5a6 - std::panicking::begin_panic_handler::{{closure}}::hcfd13c7cdcd5288f
5: 0x7d04ba96d279 - std::sys::backtrace::__rust_end_short_backtrace::h3430a6f8e3db4577
6: 0x7d04ba96f29d - rust_begin_unwind
7: 0x7d04b75a9a90 - core::panicking::panic_fmt::h662311b58e97f788
8: 0x7d04bbbe87d8 - <rustc_type_ir[ae70c870ceb953b8]::predicate::NormalizesTo<rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt> as rustc_next_trait_solver[c1d5e55899188338]::solve::assembly::GoalKind<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::consider_impl_candidate
9: 0x7d04bbc0179a - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
10: 0x7d04bbbfb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
11: 0x7d04bbbf4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
12: 0x7d04bbbefdac - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::try_evaluate_added_goals
13: 0x7d04bbc01f19 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
14: 0x7d04bbbfb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
15: 0x7d04bbbf4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
16: 0x7d04bbbefb0e - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::try_evaluate_added_goals
17: 0x7d04bbbf09f5 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_added_goals_and_make_canonical_response::{closure#0}
18: 0x7d04bbc02bbb - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
19: 0x7d04bbbfb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
20: 0x7d04bbbf4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
21: 0x7d04bbbefb0e - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::try_evaluate_added_goals
22: 0x7d04bbbf09f5 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_added_goals_and_make_canonical_response::{closure#0}
23: 0x7d04bbbe92a1 - <rustc_type_ir[ae70c870ceb953b8]::predicate::TraitPredicate<rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt> as rustc_next_trait_solver[c1d5e55899188338]::solve::assembly::GoalKind<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::consider_impl_candidate
24: 0x7d04b88e98f3 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::assemble_and_evaluate_candidates::<rustc_type_ir[ae70c870ceb953b8]::predicate::TraitPredicate<rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>
25: 0x7d04bbc014fd - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
26: 0x7d04bbbfb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
27: 0x7d04bbbf4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
28: 0x7d04bbbedefa - rustc_trait_selection[4ac552fdfe0abeba]::traits::coherence::overlap
29: 0x7d04bb018499 - <rustc_middle[647cd99ec441dfa7]::traits::specialization_graph::Children as rustc_trait_selection[4ac552fdfe0abeba]::traits::specialize::specialization_graph::ChildrenExt>::insert
30: 0x7d04bbec3075 - <rustc_middle[647cd99ec441dfa7]::traits::specialization_graph::Graph as rustc_trait_selection[4ac552fdfe0abeba]::traits::specialize::specialization_graph::GraphExt>::insert
31: 0x7d04bb1f4c50 - rustc_trait_selection[4ac552fdfe0abeba]::traits::specialize::specialization_graph_provider
32: 0x7d04bb1f49a1 - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::specialization_graph_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 8usize]>>
33: 0x7d04bb420fb5 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::DefIdCache<rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
34: 0x7d04bbe331e8 - rustc_query_impl[baa7e9e66730a851]::query_impl::specialization_graph_of::get_query_incr::__rust_end_short_backtrace
35: 0x7d04bb6ab290 - rustc_hir_analysis[e35026f032d6c888]::coherence::coherent_trait
36: 0x7d04bb6aafef - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::coherent_trait::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>
37: 0x7d04bb416b72 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::DefIdCache<rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
38: 0x7d04bb77d9e0 - rustc_query_impl[baa7e9e66730a851]::query_impl::coherent_trait::get_query_incr::__rust_end_short_backtrace
39: 0x7d04b8b967d5 - rustc_hir_analysis[e35026f032d6c888]::check::wfcheck::check_well_formed
40: 0x7d04bba1f17b - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>
41: 0x7d04bb41a0d2 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_data_structures[d4cf7d399ba92dfc]::vec_cache::VecCache<rustc_span[3a9ac5c7918c6957]::def_id::LocalDefId, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[e35f56ec1f563ec5]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
42: 0x7d04bb41942e - rustc_query_impl[baa7e9e66730a851]::query_impl::check_well_formed::get_query_incr::__rust_end_short_backtrace
43: 0x7d04bba1f3ec - rustc_hir_analysis[e35026f032d6c888]::check::wfcheck::check_mod_type_wf
44: 0x7d04bba1f20b - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>
45: 0x7d04bbf8aec1 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::DefaultCache<rustc_span[3a9ac5c7918c6957]::def_id::LocalModDefId, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
46: 0x7d04bbf8bbda - rustc_query_impl[baa7e9e66730a851]::query_impl::check_mod_type_wf::get_query_incr::__rust_end_short_backtrace
47: 0x7d04bb250fdc - rustc_hir_analysis[e35026f032d6c888]::check_crate
48: 0x7d04bb40c23c - rustc_interface[ea487bdd263c790d]::passes::run_required_analyses
49: 0x7d04bbf1629e - rustc_interface[ea487bdd263c790d]::passes::analysis
50: 0x7d04bbf1626f - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 0usize]>>
51: 0x7d04bbf1babc - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::SingleCache<rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
52: 0x7d04bbf1b315 - rustc_query_impl[baa7e9e66730a851]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
53: 0x7d04bbf7f61e - rustc_interface[ea487bdd263c790d]::passes::create_and_enter_global_ctxt::<core[5897668cda8e5696]::option::Option<rustc_interface[ea487bdd263c790d]::queries::Linker>, rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
54: 0x7d04bbfd4aa8 - rustc_interface[ea487bdd263c790d]::interface::run_compiler::<(), rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}>::{closure#1}
55: 0x7d04bbee1c87 - std[da923ce35ba8e93c]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[ea487bdd263c790d]::util::run_in_thread_with_globals<rustc_interface[ea487bdd263c790d]::util::run_in_thread_pool_with_globals<rustc_interface[ea487bdd263c790d]::interface::run_compiler<(), rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
56: 0x7d04bbee211c - <<std[da923ce35ba8e93c]::thread::Builder>::spawn_unchecked_<rustc_interface[ea487bdd263c790d]::util::run_in_thread_with_globals<rustc_interface[ea487bdd263c790d]::util::run_in_thread_pool_with_globals<rustc_interface[ea487bdd263c790d]::interface::run_compiler<(), rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[5897668cda8e5696]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
57: 0x7d04bbee3701 - std::sys::pal::unix::thread::Thread::new::thread_start::h1c08419e700d562f
58: 0x7d04b62a339d - <unknown>
59: 0x7d04b632849c - <unknown>
60: 0x0 - <unknown>
rustc version: 1.85.0-nightly (5f23ef7d3 2024-12-20)
platform: x86_64-unknown-linux-gnu
query stack during panic:
#0 [specialization_graph_of] building specialization graph of trait `Maybe`
#1 [coherent_trait] coherence checking all impls of trait `Maybe`
#2 [check_well_formed] checking that `<impl at minimum_reproducable_example/src/main.rs:36:1: 38:14>` is well-formed
#3 [check_mod_type_wf] checking that types are well-formed in top-level module
#4 [analysis] running analysis passes on this crate
end of query stack
```
### Backtrace
```Shell
Compiling minimum_reproducable_example v0.1.0 (/home/sigurd/Code/rust/sss/minimum_reproducable_example)
warning: the feature `specialization` is incomplete and may not be safe to use and/or cause compiler crashes
--> minimum_reproducable_example/src/main.rs:1:12
|
1 | #![feature(specialization)]
| ^^^^^^^^^^^^^^
|
= note: see issue #31844 <https://github.com/rust-lang/rust/issues/31844> for more information
= help: consider using `min_specialization` instead, which is more stable and complete
= note: `#[warn(incomplete_features)]` on by default
error[E0601]: `main` function not found in crate `minimum_reproducable_example`
--> minimum_reproducable_example/src/main.rs:47:2
|
47 | }
| ^ consider adding a `main` function to `minimum_reproducable_example/src/main.rs`
thread 'rustc' panicked at /rustc/5f23ef7d3f7a8c3e0ca5c4e1978829c0448a3686/compiler/rustc_next_trait_solver/src/solve/normalizes_to/mod.rs:305:25:
associated const projection is not supported yet
stack backtrace:
0: 0x78a95ad6cdda - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h4fc5c49bc95814ca
1: 0x78a95b41b13c - core::fmt::write::h02478b1210db5d91
2: 0x78a95c312411 - std::io::Write::write_fmt::hf65323128dc901c4
3: 0x78a95ad6cc32 - std::sys::backtrace::BacktraceLock::print::heed3495364f53012
4: 0x78a95ad6f12a - std::panicking::default_hook::{{closure}}::h3a2c63f83169a713
5: 0x78a95ad6ef73 - std::panicking::default_hook::h5d312750e94809b0
6: 0x78a959ed9a98 - std[da923ce35ba8e93c]::panicking::update_hook::<alloc[4441fbf4cb6e4b6f]::boxed::Box<rustc_driver_impl[c20abde1d352895d]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x78a95ad6f8e8 - std::panicking::rust_panic_with_hook::h7657da2fd0e20b26
8: 0x78a95ad6f5a6 - std::panicking::begin_panic_handler::{{closure}}::hcfd13c7cdcd5288f
9: 0x78a95ad6d279 - std::sys::backtrace::__rust_end_short_backtrace::h3430a6f8e3db4577
10: 0x78a95ad6f29d - rust_begin_unwind
11: 0x78a9579a9a90 - core::panicking::panic_fmt::h662311b58e97f788
12: 0x78a95bfe87d8 - <rustc_type_ir[ae70c870ceb953b8]::predicate::NormalizesTo<rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt> as rustc_next_trait_solver[c1d5e55899188338]::solve::assembly::GoalKind<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::consider_impl_candidate
13: 0x78a95c00179a - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
14: 0x78a95bffb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
15: 0x78a95bff4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
16: 0x78a95bfefdac - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::try_evaluate_added_goals
17: 0x78a95c001f19 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
18: 0x78a95bffb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
19: 0x78a95bff4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
20: 0x78a95bfefb0e - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::try_evaluate_added_goals
21: 0x78a95bff09f5 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_added_goals_and_make_canonical_response::{closure#0}
22: 0x78a95c002bbb - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
23: 0x78a95bffb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
24: 0x78a95bff4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
25: 0x78a95bfefb0e - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::try_evaluate_added_goals
26: 0x78a95bff09f5 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_added_goals_and_make_canonical_response::{closure#0}
27: 0x78a95bfe92a1 - <rustc_type_ir[ae70c870ceb953b8]::predicate::TraitPredicate<rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt> as rustc_next_trait_solver[c1d5e55899188338]::solve::assembly::GoalKind<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::consider_impl_candidate
28: 0x78a958ce98f3 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::assemble_and_evaluate_candidates::<rustc_type_ir[ae70c870ceb953b8]::predicate::TraitPredicate<rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>
29: 0x78a95c0014fd - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_in_task::<&mut <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
30: 0x78a95bffb086 - <rustc_type_ir[ae70c870ceb953b8]::search_graph::SearchGraph<rustc_next_trait_solver[c1d5e55899188338]::solve::search_graph::SearchGraphDelegate<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate>, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::with_new_goal::<<rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_canonical_goal::{closure#0}::{closure#0}::{closure#0}>
31: 0x78a95bff4f62 - <rustc_next_trait_solver[c1d5e55899188338]::solve::eval_ctxt::EvalCtxt<rustc_trait_selection[4ac552fdfe0abeba]::solve::delegate::SolverDelegate, rustc_middle[647cd99ec441dfa7]::ty::context::TyCtxt>>::evaluate_goal_raw
32: 0x78a95bfedefa - rustc_trait_selection[4ac552fdfe0abeba]::traits::coherence::overlap
33: 0x78a95b418499 - <rustc_middle[647cd99ec441dfa7]::traits::specialization_graph::Children as rustc_trait_selection[4ac552fdfe0abeba]::traits::specialize::specialization_graph::ChildrenExt>::insert
34: 0x78a95c2c3075 - <rustc_middle[647cd99ec441dfa7]::traits::specialization_graph::Graph as rustc_trait_selection[4ac552fdfe0abeba]::traits::specialize::specialization_graph::GraphExt>::insert
35: 0x78a95b5f4c50 - rustc_trait_selection[4ac552fdfe0abeba]::traits::specialize::specialization_graph_provider
36: 0x78a95b5f49a1 - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::specialization_graph_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 8usize]>>
37: 0x78a95b820fb5 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::DefIdCache<rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
38: 0x78a95c2331e8 - rustc_query_impl[baa7e9e66730a851]::query_impl::specialization_graph_of::get_query_incr::__rust_end_short_backtrace
39: 0x78a95baab290 - rustc_hir_analysis[e35026f032d6c888]::coherence::coherent_trait
40: 0x78a95baaafef - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::coherent_trait::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>
41: 0x78a95b816b72 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::DefIdCache<rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
42: 0x78a95bb7d9e0 - rustc_query_impl[baa7e9e66730a851]::query_impl::coherent_trait::get_query_incr::__rust_end_short_backtrace
43: 0x78a958f967d5 - rustc_hir_analysis[e35026f032d6c888]::check::wfcheck::check_well_formed
44: 0x78a95be1f17b - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>
45: 0x78a95b81a0d2 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_data_structures[d4cf7d399ba92dfc]::vec_cache::VecCache<rustc_span[3a9ac5c7918c6957]::def_id::LocalDefId, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[e35f56ec1f563ec5]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
46: 0x78a95b81942e - rustc_query_impl[baa7e9e66730a851]::query_impl::check_well_formed::get_query_incr::__rust_end_short_backtrace
47: 0x78a95be1f3ec - rustc_hir_analysis[e35026f032d6c888]::check::wfcheck::check_mod_type_wf
48: 0x78a95be1f20b - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>
49: 0x78a95c38aec1 - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::DefaultCache<rustc_span[3a9ac5c7918c6957]::def_id::LocalModDefId, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
50: 0x78a95c38bbda - rustc_query_impl[baa7e9e66730a851]::query_impl::check_mod_type_wf::get_query_incr::__rust_end_short_backtrace
51: 0x78a95b650fdc - rustc_hir_analysis[e35026f032d6c888]::check_crate
52: 0x78a95b80c23c - rustc_interface[ea487bdd263c790d]::passes::run_required_analyses
53: 0x78a95c31629e - rustc_interface[ea487bdd263c790d]::passes::analysis
54: 0x78a95c31626f - rustc_query_impl[baa7e9e66730a851]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[baa7e9e66730a851]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 0usize]>>
55: 0x78a95c31babc - rustc_query_system[e35f56ec1f563ec5]::query::plumbing::try_execute_query::<rustc_query_impl[baa7e9e66730a851]::DynamicConfig<rustc_query_system[e35f56ec1f563ec5]::query::caches::SingleCache<rustc_middle[647cd99ec441dfa7]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[baa7e9e66730a851]::plumbing::QueryCtxt, true>
56: 0x78a95c31b315 - rustc_query_impl[baa7e9e66730a851]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
57: 0x78a95c37f61e - rustc_interface[ea487bdd263c790d]::passes::create_and_enter_global_ctxt::<core[5897668cda8e5696]::option::Option<rustc_interface[ea487bdd263c790d]::queries::Linker>, rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
58: 0x78a95c3d4aa8 - rustc_interface[ea487bdd263c790d]::interface::run_compiler::<(), rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}>::{closure#1}
59: 0x78a95c2e1c87 - std[da923ce35ba8e93c]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[ea487bdd263c790d]::util::run_in_thread_with_globals<rustc_interface[ea487bdd263c790d]::util::run_in_thread_pool_with_globals<rustc_interface[ea487bdd263c790d]::interface::run_compiler<(), rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
60: 0x78a95c2e211c - <<std[da923ce35ba8e93c]::thread::Builder>::spawn_unchecked_<rustc_interface[ea487bdd263c790d]::util::run_in_thread_with_globals<rustc_interface[ea487bdd263c790d]::util::run_in_thread_pool_with_globals<rustc_interface[ea487bdd263c790d]::interface::run_compiler<(), rustc_driver_impl[c20abde1d352895d]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[5897668cda8e5696]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
61: 0x78a95c2e3701 - std::sys::pal::unix::thread::Thread::new::thread_start::h1c08419e700d562f
62: 0x78a9566a339d - <unknown>
63: 0x78a95672849c - <unknown>
64: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/sigurd/Code/rust/sss/rustc-ice-2024-12-22T03_01_02-3451.txt` to your bug report
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [specialization_graph_of] building specialization graph of trait `Maybe`
#1 [coherent_trait] coherence checking all impls of trait `Maybe`
#2 [check_well_formed] checking that `<impl at minimum_reproducable_example/src/main.rs:36:1: 38:14>` is well-formed
#3 [check_mod_type_wf] checking that types are well-formed in top-level module
#4 [analysis] running analysis passes on this crate
end of query stack
For more information about this error, try `rustc --explain E0601`.
warning: `minimum_reproducable_example` (bin "minimum_reproducable_example") generated 1 warning
error: could not compile `minimum_reproducable_example` (bin "minimum_reproducable_example") due to 1 previous error; 1 warning emitted
```
### Anything else?
Managed to reproduce this with just the code shown in this report. | I-ICE,T-compiler,C-bug,S-has-mcve,S-bug-has-test,F-associated_const_equality,WG-trait-system-refactor | low | Critical |
2,754,465,605 | rust | A shebang is displaced in HIR & expanded outputs | I tried this code:
```rs
#!/usr/bin/env rust
fn test() {}
```
After running `rustc -Zunpretty=hir test.rs`, I expected to see this happen:
```rs
#!/usr/bin/env rust
#[prelude_import]
use ::std::prelude::rust_2015::*;
#[macro_use]
extern crate std;
fn test() { }
```
Instead, this happened:
```rs
#[prelude_import]
use ::std::prelude::rust_2015::*;
#[macro_use]
extern crate std;
#!/usr/bin/env rust
fn test() { }
```
A shebang should remain at the beginning of the output, but it is instead placed in the middle. The same happens with `rustc -Zunpretty=expanded test.rs`.
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (5f23ef7d3 2024-12-20)
binary: rustc
commit-hash: 5f23ef7d3f7a8c3e0ca5c4e1978829c0448a3686
commit-date: 2024-12-20
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
```
| A-pretty,C-enhancement,T-compiler,requires-nightly | low | Minor |
2,754,479,627 | three.js | WebGLRenderer: Add support for Node Materials | ### Description
I wanted to open a discussion to talk about the possibility of adding Node Materials to WebGLRenderer - as far as I understand Node Materials are only supported with WebGPURenderer which poses a lot of issues for community adoption, practical testing, and library support:
WebGPURenderer looks like it's coming along great, but until it's more fully featured and mature I cannot use it as a platform for any of my professional work. Likewise, many tools I've developed and work on are already built on WebGLRenderer and cannot be ported to WebGPURenderer. I assume many developers are in the same boat. This means I haven't been able to invest any time in learning and trying the new node material system, despite being very interested in the benefits after years of trying to solve shader and material problems.
Adding support to WebGLRenderer would afford the well-experienced three.js community the ability to test the system in large-scale, complex projects and give practical feedback sooner, rather than tying it to WebGPURenderer. This would also help ease the transition for projects and developers that currently need to use WebGLRenderer. All material development currently being done for WebGL will otherwise be rendered useless in WebGPU, which this can help avoid.
And from the library perspective - I need to be able to develop something that will work in both WebGLRenderer and WebGPURenderer. Asking developers to maintain two versions of their materials or libraries isn't great and I think will otherwise just make this renderer transition significantly more difficult than it needs to be.
I appreciate all the work done on the new material systems - it looks like a large improvement over what we have now so it would be great to make this transition as smooth as possible.
cc @sunag @RenaudRohlinger
### Solution
Add support for NodeMaterials to WebGLRenderer.
### Alternatives
Community feedback is significantly delayed, developers and libraries are fragmented.
### Additional context
_No response_ | TSL | low | Major |
2,754,481,265 | next.js | [Edge runtime] `console.error` does not include `error.cause` | ### Link to the code that reproduces this issue
https://github.com/juliesaia/next-error-edge-repro
### To Reproduce
1. next dev
2. Click "Trigger error"
3. No cause is logged
4. Uncomment `export const runtime = "edge";`
5. Click "Trigger error"
6. Cause logged (`[cause]: Error: cause...`)
### Current vs. Expected behavior
Error causes should be included in the console log, like in the node.js runtime
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
Available memory (MB): 15566
Available CPU cores: 12
Binaries:
Node: 22.11.0
npm: 10.9.0
Yarn: 1.22.22
pnpm: 9.15.0
Relevant Packages:
next: 15.1.2 // Latest available version is detected (15.1.2).
eslint-config-next: 15.1.2
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed)
### Additional context
I imagine the bug is somewhere in `packages/next/src/server/patch-error-inspect.ts`, but not sure | Runtime | low | Critical |
2,754,495,297 | rust | Tracking Issue for `sync_nonpoison` and `nonpoison_{condvar,mutex,once,rwlock}` | Feature gates:
- `#![feature(sync_nonpoison)]`
- `#![feature(nonpoison_condvar)]`
- `#![feature(nonpoison_mutex)]`
- `#![feature(nonpoison_once)]`
- `#![feature(nonpoison_rwlock)]`
This is a tracking issue for versions of synchronization primitives that do not need to worry about poison.
### Public API
#### `sync_nonpoison`
The module itself and common types will be gated by this feature:
```rust
// std::sync
mod nonpoison {
pub type TryLockResult<Guard> = Result<Guard, WouldBlock>;
// Error type for failed locking
pub struct WouldBlock;
}
```
#### `nonpoison_condvar`
```rust
// std::sync::nonpoison
pub struct Condvar { /* ... */ }
impl Condvar {
pub const fn new() -> Self;
pub fn wait<'a, T>(&self, guard: MutexGuard<'a, T>) -> MutexGuard<'a, T>;
pub fn notify_one(&self);
pub fn notify_all(&self);
pub fn wait_while<'a, T, F>(
&self,
guard: MutexGuard<'a, T>,
condition: F,
) -> MutexGuard<'a, T>
where
F: FnMut(&mut T) -> bool;
pub fn wait_timeout<'a, T>(
&self,
guard: MutexGuard<'a, T>,
dur: Duration,
) -> (MutexGuard<'a, T>, WaitTimeoutResult);
pub fn wait_timeout_while<'a, T, F>(
&self,
guard: MutexGuard<'a, T>,
dur: Duration,
condition: F,
) -> (MutexGuard<'a, T>, WaitTimeoutResult)
where
F: FnMut(&mut T) -> bool;
}
/* trait implementations from `std::sync::poison::Condvar` */
```
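For comparison, here is a small runnable sketch of the same wait/notify pattern using today's poisoning `Condvar`/`Mutex` from `std::sync`. The `unwrap` calls on `lock` and `wait` are exactly what the `nonpoison` signatures above would remove; this is illustrative only, since the `nonpoison` module is not implemented yet:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Block until another thread flips the flag. With today's poisoning
/// types, both `lock` and `wait` return `LockResult`, hence the
/// `unwrap`s; the proposed nonpoison `wait` returns the guard directly.
fn wait_for_flag(pair: &Arc<(Mutex<bool>, Condvar)>) -> bool {
    let (lock, cvar) = &**pair;
    let mut ready = lock.lock().unwrap();
    // Predicate loop: robust against spurious wakeups and early notifies.
    while !*ready {
        ready = cvar.wait(ready).unwrap();
    }
    *ready
}

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let p2 = Arc::clone(&pair);
    thread::spawn(move || {
        *p2.0.lock().unwrap() = true;
        p2.1.notify_one();
    });
    assert!(wait_for_flag(&pair));
    println!("ok");
}
```

The predicate loop makes the example deterministic regardless of whether the notify happens before or after the wait.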
#### `nonpoison_mutex`
```rust
// std::sync::nonpoison
pub struct Mutex<T: ?Sized> { /* ... */ }
impl<T> Mutex<T> {
pub const fn new(t: T) -> Self;
}
impl<T: ?Sized> Mutex<T> {
pub fn lock(&self) -> MutexGuard<'_, T>;
pub fn try_lock(&self) -> TryLockResult<MutexGuard<'_, T>>;
pub fn get_mut(&mut self) -> &mut T;
pub fn into_inner(self) -> T
where
T: Sized;
}
/* trait implementations from `std::sync::poison::Mutex` */
pub struct MutexGuard<'a, T: ?Sized + 'a> { /* ... */ }
impl<'a, T: ?Sized> MutexGuard<'a, T> {
// Unstable API from `mapped_lock_guards`
}
/* trait implementations from `std::sync::poison::MutexGuard` */
// Currently unstable under `mapped_lock_guards`, see that tracking issue for more
pub struct MappedMutexGuard<'a, T: ?Sized + 'a> { /* ... */ }
```
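For contrast, a runnable sketch with today's poisoning `std::sync::Mutex`: every `lock` site handles a `LockResult` (here via `unwrap`), and `try_lock` can fail with either variant of `TryLockError`. Under this proposal, `nonpoison::Mutex::lock` would return the guard directly and `try_lock` would only fail with `WouldBlock` (illustrative only; the API above is not implemented yet):

```rust
use std::sync::{Arc, Mutex, TryLockError};
use std::thread;

/// With the poisoning `Mutex`, every lock site handles a `LockResult`
/// (here via `unwrap`); the proposed `nonpoison::Mutex::lock` would
/// return the `MutexGuard` directly.
fn locked_increment(m: &Mutex<i32>) -> i32 {
    let mut guard = m.lock().unwrap();
    *guard += 1;
    *guard
}

/// Today's `try_lock` can fail with `Poisoned` or `WouldBlock`; the
/// proposed nonpoison `TryLockResult` collapses this to `WouldBlock`.
fn try_lock_while_held(m: &Arc<Mutex<i32>>) -> bool {
    let _held = m.lock().unwrap(); // keep the lock for the whole check
    let m2 = Arc::clone(m);
    thread::spawn(move || matches!(m2.try_lock(), Err(TryLockError::WouldBlock)))
        .join()
        .unwrap()
}

fn main() {
    let m = Arc::new(Mutex::new(0));
    assert_eq!(locked_increment(&m), 1);
    assert!(try_lock_while_held(&m));
    println!("ok");
}
```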
#### `nonpoison_once`
```rust
// std::sync::nonpoison
pub struct Once { /* ... */ }
impl Once {
pub const fn new() -> Self;
pub fn call_once<F: FnOnce()>(&self, f: F);
pub fn call_once_force<F: FnOnce(&OnceState)>(&self, f: F);
pub fn is_completed(&self) -> bool;
// Currently unstable from `once_wait`
pub fn wait(&self);
}
/* trait implementations from `std::sync::poison::Once` */
```
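The non-poisoning subset of `Once` can already be exercised on stable; the pieces that exist because of poisoning are `call_once_force` and `OnceState` (a panicking initializer poisons the `Once`). A runnable sketch of the basic path, which is all the common case needs:

```rust
use std::sync::Once;

/// Run the initializer at most once per `Once` instance; later calls
/// are no-ops, and `is_completed` reports the state.
fn init_once(once: &Once, counter: &mut u32) {
    once.call_once(|| *counter += 1);
}

fn main() {
    let once = Once::new();
    let mut calls = 0u32;
    init_once(&once, &mut calls);
    init_once(&once, &mut calls); // second call does not run the closure
    assert_eq!(calls, 1);
    assert!(once.is_completed());
    println!("ok");
}
```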
#### `nonpoison_rwlock`
```rust
// std::sync::nonpoison
pub struct RwLock<T: ?Sized> { /* ... */ }
impl<T> RwLock<T> {
pub const fn new(t: T) -> Self;
}
impl<T: ?Sized> RwLock<T> {
pub fn read(&self) -> RwLockReadGuard<'_, T>;
pub fn try_read(&self) -> TryLockResult<RwLockReadGuard<'_, T>>;
pub fn write(&self) -> RwLockWriteGuard<'_, T>;
pub fn try_write(&self) -> TryLockResult<RwLockWriteGuard<'_, T>>;
pub fn get_mut(&mut self) -> &mut T;
pub fn into_inner(self) -> T
where
T: Sized;
}
/* trait implementations from `std::sync::poison::RwLock` */
pub struct RwLockReadGuard<'a, T: ?Sized + 'a> { /* private fields */ }
impl<'a, T: ?Sized> RwLockReadGuard<'a, T> {
// Unstable API from `mapped_lock_guards`
}
/* trait implementations from `std::sync::poison::RwLockReadGuard` */
pub struct RwLockWriteGuard<'a, T: ?Sized + 'a> { /* ... */ }
impl<'a, T: ?Sized> RwLockWriteGuard<'a, T> {
// Unstable API from `mapped_lock_guards`
}
/* trait implementations from `std::sync::poison::RwLockWriteGuard` */
// Currently unstable under `mapped_lock_guards`, see that tracking issue for more
pub struct MappedRwLockReadGuard<'a, T: ?Sized + 'a> { /* ... */ }
pub struct MappedRwLockWriteGuard<'a, T: ?Sized + 'a> { /* ... */ }
```
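Again for contrast, the same shape with today's poisoning `std::sync::RwLock`: each acquisition returns a `LockResult`, so the `unwrap`s below are exactly what the proposed nonpoison `read`/`write` would eliminate (an illustrative sketch, not the proposed API):

```rust
use std::sync::RwLock;

/// With the poisoning `RwLock`, both `write` and `read` return a
/// `LockResult`, hence the `unwrap`s; the proposed nonpoison versions
/// would hand back the guards directly.
fn bump_and_read(lock: &RwLock<i32>) -> i32 {
    *lock.write().unwrap() += 1; // exclusive write access
    *lock.read().unwrap()        // shared read access
}

fn main() {
    let lock = RwLock::new(0);
    assert_eq!(bump_and_read(&lock), 1);
    assert_eq!(bump_and_read(&lock), 2);
    println!("ok");
}
```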
### Steps / History
- [ ] ACP: https://github.com/rust-lang/libs-team/issues/169
- [ ] Implementation: #...
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- Should existing types without poison be moved to this module? `Barrier`, `LazyLock`, `OnceLock`, `ReentrantLock`
### Related
- Move all existing poisonable types to `std::sync::poison` and reexport them https://github.com/rust-lang/rust/issues/134646
- The unstable `ReentrantLock` currently does not support poisoning https://github.com/rust-lang/rust/issues/121440
- `mapped_lock_guards` adds API to the guard types https://github.com/rust-lang/rust/issues/117108
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| E-hard,T-libs-api,E-help-wanted,C-tracking-issue | low | Critical |
2,754,497,590 | rust | Tracking Issue for `sync_poison_mod` | Feature gate: `#![feature(sync_poison_mod)]`
This is a tracking issue for moving all poisonable `std::sync` types to `std::sync::poison`, with reexports in `std::sync`. In a future edition, we will be able to instead reexport `std::sync::nonpoison` (this module does not exist yet).
### Public API
```rust
// std::sync::poison
type LockResult;
type TryLockResult;
struct Condvar { /* ... */ }
struct Mutex<T: ?Sized> { /* ... */ }
struct MutexGuard { /* ... */ }
struct Once { /* ... */ }
struct OnceState { /* ... */ }
struct PoisonError { /* ... */ }
struct RwLock { /* ... */ }
struct RwLockReadGuard { /* ... */ }
struct RwLockWriteGuard { /* ... */ }
enum TryLockError { /* ... */ }
// Unstable types
pub struct MappedMutexGuard<'a, T: ?Sized + 'a> { /* ... */ }
pub struct MappedRwLockReadGuard<'a, T: ?Sized + 'a> { /* ... */ }
pub struct MappedRwLockWriteGuard<'a, T: ?Sized + 'a> { /* ... */ }
```
```rust
// std::sync
// This module will be gated behind `sync_poison_mod`
pub mod poison;
pub use poison::*;
```
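A runnable sketch of the behavior this module is named for: a panic while holding the lock poisons the `Mutex`, and later `lock` calls report it via `PoisonError`, which still gives access to the data. The `use std::sync::...` paths below are current stable imports; after this change the same items would also be reachable under `std::sync::poison` via the reexports:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Panic while holding the lock so the `Mutex` becomes poisoned.
/// (The panic message from the spawned thread is expected on stderr.)
fn poison_mutex(m: &Arc<Mutex<i32>>) {
    let m2 = Arc::clone(m);
    let _ = thread::spawn(move || {
        let _guard = m2.lock().unwrap();
        panic!("poisoning the mutex on purpose");
    })
    .join(); // the Err from `join` is the panic payload; ignore it
}

fn main() {
    let m = Arc::new(Mutex::new(0));
    poison_mutex(&m);
    // Later lock attempts report the poison via `PoisonError`,
    // which still lets us recover the data.
    let err = m.lock().unwrap_err();
    assert_eq!(*err.into_inner(), 0);
    println!("ok");
}
```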
### Steps / History
- [x] ACP: https://github.com/rust-lang/libs-team/issues/169#issuecomment-2558025680
- [ ] Implementation: #...
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
### Related
- `nonpoison` versions of existing `poison` types: https://github.com/rust-lang/rust/issues/134645
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"GrigorenkoPV"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | E-easy,T-libs-api,E-help-wanted,C-tracking-issue | low | Critical |
2,754,504,927 | yt-dlp | [telegram:embed] Extractor telegram:embed returned nothing | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Russia
### Provide a description that is worded well enough to be understood
Unable to download audio or video messages from Telegram.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--verbose', 'https://t.me/trsch/4068']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [542166962] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.17763-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 6.0-full_build-www.gyan.dev (setts), ffprobe 6.0-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[telegram:embed] Extracting URL: https://t.me/trsch/4068
[telegram:embed] 4068: Downloading embed frame
WARNING: Extractor telegram:embed returned nothing; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| site-bug,patch-available | low | Critical |
2,754,512,698 | vscode | SCM Graph - does not detect changes in the cloud when working for a remote source (SSH) |
Type: <b>Bug</b>
When I'm working on a remote (SSH) repository, the Source Control Graph under the Source Control tab never detects changes from the cloud.
I tried manually clicking refresh, but it still does not sync the commit activity from the cloud.
See picture
<img width="756" alt="Image" src="https://github.com/user-attachments/assets/76dffda1-6f51-4b8f-9e4f-17d485d58ea4" />
On the left is the window where I'm working on the local repo; on the right is the window where I SSH-ed into the same location on my server. You can see that the right side is not showing the same commit activities as the left.
I looked at the Git output in both windows. It is identical, with no error messages.
This is never a problem when I am working on a local repository, which is why I think this might be a bug.
I can still sync from the cloud properly. When I do so, the Source Control Graph will also update at the same time properly.
VS Code version: Code 1.96.0 (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Darwin arm64 24.2.0
Modes:
Remote OS version: Linux arm64 5.15.0-303.171.5.2.el8uek.aarch64
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (8 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|16.00GB (0.07GB free)|
|Process Argv|--crash-reporter-id 37e9c29d-3a7e-417e-94bf-116eb1535b4c|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: dailyreport.suntify.co.nz|
|OS|Linux arm64 5.15.0-303.171.5.2.el8uek.aarch64|
|CPUs|Neoverse-N1 (4 x 0)|
|Memory (System)|23.16GB (20.78GB free)|
|VM|0%|
</details><details><summary>Extensions (10)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-subtitles|ast|0.4.0
es7-react-js-snippets|dsz|4.4.3
jupyter-keymap|ms-|1.1.2
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-explorer|ms-|0.4.3
material-icon-theme|PKi|5.16.0
paradox-syntax|tbo|0.1.17
vscode-icons|vsc|12.10.0
jinja|who|0.0.8
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyt551cf:31179979
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
vscrp:30673768
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,scm | low | Critical |
2,754,513,402 | go | cmd/compile: compile-time-evaluate memory loads statically known to only touch the `.rodata` section | While doing https://go-review.googlesource.com/c/go/+/637936/3 I tested it on some code of mine and it failed to mark many of my functions pure because one callgraph leaf uses a `const string` as a LUT.
This would also allow the compiler to reorder these loads with stores, which could help regalloc among other things.
For an example I made up this very simple function:
```go
func hexDigit(x uint8) rune {
return rune("0123456789abcdef"[x & 0b1111])
}
```
You can see that the Load op takes memory as an argument when this is not needed.

In this context we should model `[]` as an odd arithmetic operator by removing its memory argument.
Chances are I'll come back to this at some point. | Performance,NeedsInvestigation,compiler/runtime | low | Critical |
2,754,514,847 | rust | something made tidy extremely slow when running it first time | When I run `./x test tidy --bless` on my laptop, the first invocation took around 4 minutes for some reason, but this didn’t happen again. The same thing happened on my desktop machine. Something caused `tidy` to run extremely slowly the first time it runs on a system.

| T-bootstrap,C-bug,A-tidy,E-needs-investigation | low | Major |
2,754,515,874 | ui | [bug]: Calender in Shadcn not working properly | ### Describe the bug
When using shadcn components, the Calendar component does not work properly.
### Affected component/components
Calendar
### How to reproduce
Go to the shadcn library and search for the Calendar component.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Google chrome browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,754,518,481 | godot | UI Elements of Godot not aligned on ARM | ### Tested versions
Reproducible in 4.3
### System information
Godot v4.3.stable.mono - Windows 10.0.26100 - GLES3 (Compatibility) - D3D12 (Qualcomm(R) Adreno(TM) X1-85 GPU) - Snapdragon(R) X 10-core X1P64100 @ 3.40 GHz (10 Threads)
### Issue description
Many of the UI elements are not aligned with where your mouse is; it's hard to describe, so I will try to include some screenshots.
Buttons are drawn on top of other buttons; they are only clickable where the lower button is, and there are visibly two offset copies of the buttons:

Watch my mouse closely here:
https://github.com/user-attachments/assets/d3275105-70a0-4d80-acad-956b7badbe41
### Steps to reproduce
Just open Godot, try to add nodes
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Minor |
2,754,535,105 | godot | Removing a type in Theme Editor's Manage Items dialog removes all types instead of just the one clicked | ### Tested versions
Reproducible in (at least): v4.4.dev7.official [46c8f8c5c]
### System information
Windows 11
### Issue description
Simply put, when I click the trash icon next to a type item in the Manage Items dialog (Theme Editor), it removes ALL items, instead of just the one I clicked.
### Steps to reproduce
1. Create a new Theme resource
2. Add at least 2 types with the + icon
3. Click "Manage Items..."
4. Click "Remove Item" trash icon next to any item
5. Notice how all items are removed
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Minor |
2,754,550,731 | three.js | Request Uniforms/Inputs from a NodeMaterial | ### Description
When creating a node material you can create inputs with the `uniform` node and later change them from javascript etc.
However, there's no easy way to get the uniforms or inputs later in the system.
Consider a loader that returns a `NodeMaterial`: you won't know what the inputs are, let alone be able to change them.
### Solution
I think it would be possible to take the material input nodes and use them as starting points to [traverse](https://github.com/mrdoob/three.js/blob/284c5fee2bc069cd1575c08dd1a8f0ea415dc83c/src/nodes/core/Node.js#L300), test for `isUniformNode` or `isInputNode`, and then return an object or array of the nodes.
Consider:
```ts
const mesh = getSomeMeshFromGLTF('simplified.glb');
const inputs = mesh.material.getInputs();
// in a helper function
makeUiControls(inputs);
// direct
if (inputs.color1) inputs.color1.value = color('#fff');
if (inputs.mix) inputs.mix.value = 0.5;
```
### Alternatives
Users will have to implement it themselves.
### Additional context
_No response_ | Suggestion,WebGPU | low | Minor |
2,754,556,260 | PowerToys | Simple Enhancements to the "Find My Mouse" Feature for Better Reading Focus | ### Description of the new feature / enhancement
The "Find My Mouse" feature is currently designed to locate the cursor by creating a spotlight effect, but I found it especially useful for improving focus while reading because of how the contrast between the background and the spotlight works. I think the following enhancements could improve its usability for reading focus even further, especially for users like me who are neurodivergent or have ADHD:
1. Add an option to change the spotlight shape to a rectangle. This would align with how we read line by line, making it more effective for focusing on text.
2. Allow hiding the mouse cursor while keeping the spotlight active.
3. Provide an option to keep the spotlight active until manually turned off or to set a timer for how long it stays visible, even after the mouse stops moving.
These changes can really help users direct their attention better, improving focus while reading.
### Scenario when this would be used?
This can be useful for individuals who need help focusing on specific areas of the screen, such as when reading or analyzing content. It’s particularly useful for neurodivergent users or those with ADHD who benefit from visual cues that reduce distractions. Enhancing "Find My Mouse" to support these use cases would expand its functionality and accessibility.
### Supporting information
[Something like this](https://www.reddit.com/r/chrome_extensions/comments/1bkt3sp/i_built_a_chrome_extension_to_help_you_focus_on/)
I’ve found the current feature immensely helpful for reading and maintaining focus due to its color contrast between the spotlight and background. Enhancements like rectangular shapes and persistent spotlight modes would make this tool even better.
Thank you to the devs at PowerToys, including Raymond Chen. | Needs-Triage | low | Minor |
2,754,558,089 | pytorch | [Fake Tensor] `.div` pass the check on inductor when divisor is zero | ### 🐛 Describe the bug
An error is raised when the divisor is 0 in eager mode. However, Inductor passes the check and outputs the maximum value of Long.
This occurs on both CPU and CUDA.
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x, y):
x.div_(y)
return x
x = torch.tensor([1])
y = torch.tensor([0])
inputs = [x, y]
def run_test(inputs, device, mode):
model = Model()
if mode == "inductor":
model = torch.compile(model)
if device == "cuda":
inputs = [x.cuda() for x in inputs]
model = model.cuda()
try:
output = model(*inputs)
print(f"{mode} succeeds: {output}")
except Exception as e:
print(e)
run_test(inputs, "cuda", "eager")
run_test(inputs, "cuda", "inductor")
run_test(inputs, "cpu", "eager")
run_test(inputs, "cpu", "inductor")
```
### Error logs
```
result type Float can't be cast to the desired output type Long
inductor succeeds: tensor([9223372036854775807], device='cuda:0')
result type Float can't be cast to the desired output type Long
inductor succeeds: tensor([-9223372036854775808])
```
### Versions
PyTorch version: 2.6.0.dev20241218+cu126
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241218+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-202-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.996
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+gitf9cdf582
[pip3] torch==2.6.0.dev20241218+cu126
[pip3] torchaudio==2.6.0.dev20241218+cu126
[pip3] torchvision==0.22.0.dev20241218+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+gitf9cdf582 pypi_0 pypi
[conda] torch 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241218+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: fakeTensor,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,754,560,102 | flutter | Customizable Keybindings for Flutter interactive Console | ### Use case
When running `flutter run` in an interactive console, certain key presses trigger actions that can be invoked unintentionally during regular development work. For example, in my case, pressing `J` accidentally creates jank metrics. While these metrics are useful, triggering them by accident interferes with the workflow, especially when working in an environment like Vim where the terminal is frequently focused.
Currently, there is no way to customize or disable specific keybindings without turning off interactivity altogether, which would also disable essential features like `r` for hot reload and `R` for hot restart.
### Proposal
- Provide a way for developers to either disable or customize specific keybindings (e.g., `J` for jank metrics) within the Flutter interactive console.
- Ensure that essential keybindings like `r` (hot reload) and `R` (hot restart) remain intact while allowing specific actions to be muted or reassigned.
This can be achieved either by adding an optional configuration file or environment variable where developers can specify which keys they want to disable or remap.
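As an illustration of the environment-variable variant, the interactive key dispatcher could filter its binding table before the event loop starts. This is only a sketch: the `FLUTTER_DISABLED_KEYS` variable name and the binding table below are hypothetical, not an existing tool feature.

```python
# Sketch of an opt-out list consulted before dispatching interactive keys.
# FLUTTER_DISABLED_KEYS and the binding table are illustrations of the
# proposal, not an existing `flutter run` feature.
DEFAULT_BINDINGS = {
    "r": "hot reload",
    "R": "hot restart",
    "J": "dump jank metrics",
    "q": "quit",
}

def build_bindings(env):
    """Drop any keys the developer has opted out of via the environment."""
    raw = env.get("FLUTTER_DISABLED_KEYS", "")
    disabled = {k.strip() for k in raw.split(",") if k.strip()}
    return {key: action for key, action in DEFAULT_BINDINGS.items()
            if key not in disabled}

def dispatch(key, bindings):
    # Disabled (or unknown) keys become a no-op instead of firing an action.
    return bindings.get(key, "ignored")
```

Remapping could work the same way, with the variable mapping old keys to new ones instead of just listing keys to drop.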
Benefits:
- Increased flexibility and customization for developers using the Flutter CLI.
- Prevention of accidental key presses that can disrupt the development workflow.
- Allow developers to maintain essential interactivity features while avoiding unwanted commands. | c: new feature,tool,c: proposal,team-tool | low | Minor |
2,754,565,944 | rust | `minicore`: use the host's `core` when available | From discussion at https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Using.20.60minicore.60.20for.20more.20compiletests, it would be ideal to replace `minicore` with the host's `core` if that is built. @jieyouxu pointed out that this did exist for the ABI compatibility test at some point https://github.com/rust-lang/rust/pull/130693/commits/adb6d4752fd00093d44e0285cfff28406dfb7229#diff-55f122dec36f78625c830ef254174f8ac89b16959557e11dff4d33ef9fd12438L70.
The main advantage here is it gives us a better chance of catching (unlikely) deviations between `core` and `minicore`, as opposed to always using `minicore`. | A-testsuite,C-enhancement,T-bootstrap,A-compiletest | low | Minor |
2,754,579,538 | godot | Environment: Background Mode: Canvas causes Popup windows to become severely transparent (including default "ToolTip") regardless of `background_canvas_max_layer` property | ### Tested versions
- Reproducible in Godot 4.3 (Renderer: Mobile)
### System information
Windows 11 - Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org Vulkan 1.3.278 - Forward Mobile - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3050 Laptop GPU
### Issue description
When using a WorldEnvironment in a scene and setting the environment's background mode to `Canvas`, every popup window (`transparent = true`) becomes severely transparent (almost invisible), even when the theme is explicitly set to be fully opaque. I have tried changing the `Layer` of the CanvasLayer that parents the GUI node to a value higher than `background_canvas_max_layer`, but it has no effect.

EXPECTED OUTCOME (WorldEnvironment `Background Mode` set to: `Clear Color`):

If this is the intended behavior, please tell me how to turn the severe transparency off for the default tooltip, since I am overriding it with my own using `_make_custom_tooltip()` (even though the issue still exists with the default ones). Thank you.
### Steps to reproduce
1. Create a new project in Godot 4.3 (Mobile)
2. Make a 2D scene:

3. Add a `WorldEnvironment` Node as a child.
4. Add a new environment and in the "Background" subgroup in the editor, Change the mode to `Canvas` and set `Canvas Max Layer` to 1:

5. Add a `CanvasLayer` as the Child of Root `Node2D`. Set the `Layer` to anything greater than 1 (5 in my case):

6. Add a `PanelContainer` as the child of `CanvasLayer`. Set the `TooltipText` to any text in the editor:

7. Save and run the scene, then hover your mouse cursor over the `PanelContainer`. The (more) transparent tooltip text will be displayed:

8. Now change the Background Mode of the `WorldEnvironment` to anything other than "Canvas".

9. Save and run the scene. Now the popup (tooltip) is displayed as intended:

### Minimal reproduction project (MRP)
[mrp-bg_canvas.zip](https://github.com/user-attachments/files/18221440/mrp-bg_canvas.zip)
| bug,topic:rendering,topic:2d | low | Minor |
2,754,585,605 | go | go/parser: `goto;` is incorrectly valid | While poking around the `go/types` internals, i noticed that nothing really "type-checks" `goto`s without labels.
First pass:
https://github.com/golang/go/blob/500675a7c8c72bd6b1054a7eb4daaf61970f5ad7/src/go/types/stmt.go#L578-L608
Second pass:
https://github.com/golang/go/blob/500675a7c8c72bd6b1054a7eb4daaf61970f5ad7/src/go/types/labels.go#L173-L176
It also turns out that:
```go
func test() {
goto;
}
```
is accepted by `go/parser`. The spec disallows `goto`s without labels, so we should return an appropriate error in either `go/types` or `go/parser`. Currently `go/types` returns an `InvalidSyntaxTree` error.
https://github.com/golang/go/blob/500675a7c8c72bd6b1054a7eb4daaf61970f5ad7/src/go/types/stmt.go#L605-L607
CC @griesemer @adonovan
| NeedsFix | low | Critical |
2,754,595,435 | vscode | Empty cells that cannot be interacted with are created in Jupyter notebooks |
Type: <b>Bug</b>
At some point after a recent update, a problem appeared: a code cell is created, but it is impossible to type in it. Sometimes creating another cell and deleting the broken one helps. Sometimes only a restart helps.
VS Code version: Code 1.96.0 (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Windows_NT x64 10.0.19044
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 5 4600H with Radeon Graphics (12 x 2994)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.36GB (25.67GB free)|
|Process Argv||
|Screen Reader|no|
|VM|50%|
</details><details><summary>Extensions (14)</summary>
Extension|Author (truncated)|Version
---|---|---
gc-excelviewer|Gra|4.2.62
plantuml|jeb|2.18.1
vscode-language-pack-ru|MS-|1.96.2024121109
debugpy|ms-|2024.14.0
isort|ms-|2023.10.1
python|ms-|2024.22.0
vscode-pylance|ms-|2024.12.1
datawrangler|ms-|1.14.0
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
pip-manager|sli|1.1.4
</details>
<!-- generated by issue reporter --> | info-needed,*english-please,translation-required-russian | low | Critical |
2,754,599,702 | flutter | [format] Set up line length & rules for Dart code when using VSCode | ### Use case
Developers wanting to contribute to the framework (or engine), by writing Dart code, while having their VSCode `User` settings set to something that differs from the framework's settings.
Context:
In https://github.com/flutter/flutter/commit/5491c8c146441d3126aff91beaa3fb5df6d710d0 the settings for `formatOnSave` and `formatOnPaste` were removed, as those were explicitly set to `false`. This presumably restored VSCode's default settings to format by default.
I started work on fixing a bug in the framework, post the engine binaries outage from yesterday, having not worked on the framework since the auto format change landed. Since I was aware of the autoformatting change, I pulled from master at 11AM UTC+1 on 22/12/2024, to start without issues. However, when using the `Format Document` action in `carousel_test.dart` I had a formatting diff in a piece of code that I did not write.
However, I have the following in my `User` settings, which VSCode falls back to for anything the `Workspace` settings leave undefined.
```
"dart.lineLength": 120,
"[dart]": {
"editor.formatOnSave": true,
"editor.formatOnType": true,
"editor.rulers": [
120
],
},
```
Neither the `dart.lineLength` nor `editor.rulers` is defined by the framework's `settings.json`, so I think it will use my own (incorrect) settings?
### Proposal
Add
```
"dart.lineLength": frameworkLineLength,
"[dart]": {
"editor.formatOnSave": true,
"editor.formatOnType": true,
"editor.rulers": [
frameworkLineLength
],
},
```
to the framework's `.vscode/settings.json`, where `frameworkLineLength` is the desired value for formatting code in the framework. By making the line length & formatOnSave/Type explicit, we avoid this being overridden by user level settings. | framework,c: proposal,P2,team-framework,triaged-framework | low | Critical |
2,754,601,584 | rust | ICE: const eval: `InterpErrorInfo(InterpErrorInfoInner { kind: ResourceExhaustion(MemoryExhausted) .. ` | <!--
ICE: Rustc ./a.rs '-Zmir-opt-level=5 -Zvalidate-mir -ooutputfile -Zdump-mir-dir=dir' 'thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:225:55: 'called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: ResourceExhaustion(MemoryExhausted), backtrace: InterpErrorBacktrace { backtrace: None } })'', 'thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:225:55: 'called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: ResourceExhaustion(MemoryExhausted), backtrace: InterpErrorBacktrace { backtrace: None } })''
File: /home/gh-matthiaskrgr/tmp/a.rs
-->
auto-reduced (treereduce-rust):
````rust
//@compile-flags: -Zmir-opt-level=5 -Zvalidate-mir
pub fn function_with_bytes<const BYTES: &'static [u8; 0xa9008fb6c9d81e42_0e25730562a601c8_u128]>(
) -> &'static [u8] {
BYTES
}
pub fn main() {
assert_eq!(
function_with_bytes::<b"foo {:}">(),
&[0x41, 0x41, 0x41, 0x41]
);
}
````
original:
````rust
pub fn function_with_bytes<const BYTES: &'static [u8; 0xa9008fb6c9d81e42_0e25730562a601c8_u128]>() -> &'static [u8] {
BYTES
}
pub fn main() {
assert_eq!(function_with_bytes::<b"foo {:}">(), &[0x41, 0x41, 0x41, 0x41]);
}
````
Version information
````
rustc 1.85.0-nightly (a2bcfae5c 2024-12-22)
binary: rustc
commit-hash: a2bcfae5c5d05dd7806a79194cda39108ed6cd7d
commit-date: 2024-12-22
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/a2bcfae5c5d05dd7806a79194cda39108ed6cd7d/compiler/rustc_const_eval/src/const_eval/valtrees.rs#L219-L231
Command:
`/home/gh-matthiaskrgr/.rustup/toolchains/master/bin/rustc -Zmir-opt-level=5 -Zvalidate-mir`
<details><summary><strong>Program output</strong></summary>
<p>
```
error: `&'static [u8; 1019347357636362696]` is forbidden as the type of a const generic parameter
--> /run/user/1085/tmp/dir.F5Q0BVNjJitF/rustc_testrunner_tmpdir_reporting.HKYel3RV0kZB/mvce.rs:1:41
|
1 | pub fn function_with_bytes<const BYTES: &'static [u8; 0xa9008fb6c9d81e42_0e25730562a601c8_u128]>(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: the only supported types are integers, `bool`, and `char`
help: add `#![feature(adt_const_params)]` to the crate attributes to enable more complex and user defined types
|
1 + #![feature(adt_const_params)]
|
help: add `#![feature(unsized_const_params)]` to the crate attributes to enable references to implement the `ConstParamTy` trait
|
1 + #![feature(unsized_const_params)]
|
error[E0308]: mismatched types
--> /run/user/1085/tmp/dir.F5Q0BVNjJitF/rustc_testrunner_tmpdir_reporting.HKYel3RV0kZB/mvce.rs:1:55
|
1 | pub fn function_with_bytes<const BYTES: &'static [u8; 0xa9008fb6c9d81e42_0e25730562a601c8_u128]>(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `usize`, found `u128`
|
help: change the type of the numeric literal from `u128` to `usize`
|
1 | pub fn function_with_bytes<const BYTES: &'static [u8; 0xa9008fb6c9d81e42_0e25730562a601c8_usize]>(
| ~~~~~
error[E0308]: mismatched types
--> /run/user/1085/tmp/dir.F5Q0BVNjJitF/rustc_testrunner_tmpdir_reporting.HKYel3RV0kZB/mvce.rs:8:31
|
8 | function_with_bytes::<b"foo {:}">(),
| ^^^^^^^^^^ expected an array with a size of 1019347357636362696, found one with a size of 7
thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:225:55:
called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: ResourceExhaustion(MemoryExhausted), backtrace: InterpErrorBacktrace { backtrace: None } })
stack backtrace:
0: 0x73f0e04b775a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::he57e0fea652ab9fd
1: 0x73f0e0c13866 - core::fmt::write::hd55e7fc717febc4b
2: 0x73f0e1b48bd1 - std::io::Write::write_fmt::h78281f831863065d
3: 0x73f0e04b75b2 - std::sys::backtrace::BacktraceLock::print::h7acbe748d8539c38
4: 0x73f0e04b9aaa - std::panicking::default_hook::{{closure}}::hea23bfd5c34c8db7
5: 0x73f0e04b98f3 - std::panicking::default_hook::hff2113baba127713
6: 0x73f0df62ecc8 - std[e6d36fd50c659ae0]::panicking::update_hook::<alloc[4740fe411def7ec4]::boxed::Box<rustc_driver_impl[89e0991809749e6d]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x73f0e04ba2a8 - std::panicking::rust_panic_with_hook::hfc1c16069db4c6ff
8: 0x73f0e04b9f9a - std::panicking::begin_panic_handler::{{closure}}::hb21d96c911822c44
9: 0x73f0e04b7c09 - std::sys::backtrace::__rust_end_short_backtrace::h8e09f77e03188563
10: 0x73f0e04b9c5d - rust_begin_unwind
11: 0x73f0dd19c340 - core::panicking::panic_fmt::hfa3a678dda58d831
12: 0x73f0dd63d256 - core::result::unwrap_failed::h9cfde6874ed92b87
13: 0x73f0e1b7a5f3 - rustc_const_eval[c45611a6115a718]::const_eval::valtrees::create_valtree_place
14: 0x73f0e13c7150 - rustc_const_eval[c45611a6115a718]::const_eval::valtrees::valtree_to_ref
15: 0x73f0e1727db9 - rustc_const_eval[c45611a6115a718]::const_eval::valtrees::valtree_to_const_value
16: 0x73f0e1727b72 - <rustc_const_eval[c45611a6115a718]::provide::{closure#1} as core[a17736bada842104]::ops::function::FnOnce<(rustc_middle[5e0371d42ed41497]::ty::context::TyCtxt, (rustc_middle[5e0371d42ed41497]::ty::Ty, rustc_middle[5e0371d42ed41497]::ty::consts::valtree::ValTree))>>::call_once
17: 0x73f0e1727b2e - rustc_query_impl[a3a64fb382d359d2]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a3a64fb382d359d2]::query_impl::valtree_to_const_val::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5e0371d42ed41497]::query::erase::Erased<[u8; 24usize]>>
18: 0x73f0e1727aed - <rustc_query_impl[a3a64fb382d359d2]::query_impl::valtree_to_const_val::dynamic_query::{closure#2} as core[a17736bada842104]::ops::function::FnOnce<(rustc_middle[5e0371d42ed41497]::ty::context::TyCtxt, (rustc_middle[5e0371d42ed41497]::ty::Ty, rustc_middle[5e0371d42ed41497]::ty::consts::valtree::ValTree))>>::call_once
19: 0x73f0e1726c89 - rustc_query_system[7ed8f12aa9c699c]::query::plumbing::try_execute_query::<rustc_query_impl[a3a64fb382d359d2]::DynamicConfig<rustc_query_system[7ed8f12aa9c699c]::query::caches::DefaultCache<(rustc_middle[5e0371d42ed41497]::ty::Ty, rustc_middle[5e0371d42ed41497]::ty::consts::valtree::ValTree), rustc_middle[5e0371d42ed41497]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[a3a64fb382d359d2]::plumbing::QueryCtxt, false>
20: 0x73f0e17269bb - rustc_query_impl[a3a64fb382d359d2]::query_impl::valtree_to_const_val::get_query_non_incr::__rust_end_short_backtrace
21: 0x73f0e1711e25 - <rustc_mir_transform[93024dc8ef41ddad]::gvn::VnState>::insert
22: 0x73f0e1708aef - <rustc_mir_transform[93024dc8ef41ddad]::gvn::VnState>::simplify_operand
23: 0x73f0e170aab5 - <rustc_mir_transform[93024dc8ef41ddad]::gvn::VnState>::simplify_rvalue
24: 0x73f0de6fae3e - <rustc_mir_transform[93024dc8ef41ddad]::gvn::GVN as rustc_mir_transform[93024dc8ef41ddad]::pass_manager::MirPass>::run_pass
25: 0x73f0e0c0ee6e - rustc_mir_transform[93024dc8ef41ddad]::pass_manager::run_passes_inner
26: 0x73f0e0d4215f - rustc_mir_transform[93024dc8ef41ddad]::optimized_mir
27: 0x73f0e0d41a2b - rustc_query_impl[a3a64fb382d359d2]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a3a64fb382d359d2]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5e0371d42ed41497]::query::erase::Erased<[u8; 8usize]>>
28: 0x73f0e0d0f21f - rustc_query_system[7ed8f12aa9c699c]::query::plumbing::try_execute_query::<rustc_query_impl[a3a64fb382d359d2]::DynamicConfig<rustc_query_system[7ed8f12aa9c699c]::query::caches::DefIdCache<rustc_middle[5e0371d42ed41497]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[a3a64fb382d359d2]::plumbing::QueryCtxt, false>
29: 0x73f0e0d0e6f3 - rustc_query_impl[a3a64fb382d359d2]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
30: 0x73f0ddbc6df9 - <rustc_middle[5e0371d42ed41497]::ty::context::TyCtxt>::instance_mir
31: 0x73f0e0ebc514 - rustc_interface[36124661c84dc53c]::passes::run_required_analyses
32: 0x73f0e1b4ca5e - rustc_interface[36124661c84dc53c]::passes::analysis
33: 0x73f0e1b4ca2f - rustc_query_impl[a3a64fb382d359d2]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a3a64fb382d359d2]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5e0371d42ed41497]::query::erase::Erased<[u8; 0usize]>>
34: 0x73f0e1ba7855 - rustc_query_system[7ed8f12aa9c699c]::query::plumbing::try_execute_query::<rustc_query_impl[a3a64fb382d359d2]::DynamicConfig<rustc_query_system[7ed8f12aa9c699c]::query::caches::SingleCache<rustc_middle[5e0371d42ed41497]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[a3a64fb382d359d2]::plumbing::QueryCtxt, false>
35: 0x73f0e1ba758e - rustc_query_impl[a3a64fb382d359d2]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
36: 0x73f0e1c437de - rustc_interface[36124661c84dc53c]::passes::create_and_enter_global_ctxt::<core[a17736bada842104]::option::Option<rustc_interface[36124661c84dc53c]::queries::Linker>, rustc_driver_impl[89e0991809749e6d]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
37: 0x73f0e1bc2aa8 - rustc_interface[36124661c84dc53c]::interface::run_compiler::<(), rustc_driver_impl[89e0991809749e6d]::run_compiler::{closure#0}>::{closure#1}
38: 0x73f0e1a51311 - std[e6d36fd50c659ae0]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[36124661c84dc53c]::util::run_in_thread_with_globals<rustc_interface[36124661c84dc53c]::util::run_in_thread_pool_with_globals<rustc_interface[36124661c84dc53c]::interface::run_compiler<(), rustc_driver_impl[89e0991809749e6d]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
39: 0x73f0e1a517a6 - <<std[e6d36fd50c659ae0]::thread::Builder>::spawn_unchecked_<rustc_interface[36124661c84dc53c]::util::run_in_thread_with_globals<rustc_interface[36124661c84dc53c]::util::run_in_thread_pool_with_globals<rustc_interface[36124661c84dc53c]::interface::run_compiler<(), rustc_driver_impl[89e0991809749e6d]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[a17736bada842104]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
40: 0x73f0e1a52d81 - std::sys::pal::unix::thread::Thread::new::thread_start::h0cedf7b69d2cc02b
41: 0x73f0dbc9ca94 - start_thread
at ./nptl/pthread_create.c:447:8
42: 0x73f0dbd29c3c - clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
43: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (a2bcfae5c 2024-12-22) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z mir-opt-level=5 -Z validate-mir -Z dump-mir-dir=dir
query stack during panic:
#0 [valtree_to_const_val] converting type-level constant value to mir constant value
#1 [optimized_mir] optimizing MIR for `main`
end of query stack
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0308`.
```
</p>
</details>
<!--
query stack:
#0 [valtree_to_const_val] converting type-level constant value to mir constant value
#1 [optimized_mir] optimizing MIR for `main`
--> | I-ICE,T-compiler,C-bug,A-mir-opt-inlining,S-bug-has-test,A-mir-opt-GVN | low | Critical |
2,754,619,048 | PowerToys | Update fail, uninstall/reinstall fail | ### Microsoft PowerToys version
0.87.1 (version prior)
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Installer
### Steps to reproduce
Attempted to update from the notice; it just stopped with no message. Tried 'Modify' in Apps, which uninstalled it; then tried to install from the Store, which kept failing. Deleted references to PowerToys on C:, attempted the install again; it succeeded, and I got the prompt to file a bug report. Haven't run the program yet.
### ✔️ Expected Behavior
see above
### ❌ Actual Behavior
see above
### Other Software
[PowerToysReport_2024-12-22-06-41-04.zip](https://github.com/user-attachments/files/18221709/PowerToysReport_2024-12-22-06-41-04.zip)
| Issue-Bug,Needs-Triage,Needs-Team-Response | low | Critical |
2,754,619,123 | flutter | [proposal][wasm][ffi] AOT wasm interoperability | ### Use case
wasm interoperability would offer great value to Flutter's framework
I'm sure the team is aware of this [[1](https://github.com/dart-archive/wasm/issues/146#issuecomment-1578567050), [2](https://github.com/dart-archive/wasm/issues/146#issuecomment-1579495860)], given that there used to be a [labs.dart.dev](https://pub.dev/publishers/labs.dart.dev/packages) package, [wasm](https://pub.dev/packages/wasm),
that was [discontinued](https://github.com/dart-archive/wasm/issues/146) in summer 2023;
according to this [comment](https://github.com/dart-archive/wasm/issues/146#issuecomment-1577707770), that decision was caused by three technical obstacles:
- lack of dart/flutter native assets feature
- lack of AOT mode of the wasm runtime of choice for ios
- need for a separated package for bindings
As of today there is visible momentum towards clearing the main obstacles:
- dart has an [experimental native assets feature](https://github.com/dart-lang/native/tree/main/pkgs/native_assets_cli)
- https://github.com/dart-lang/sdk/issues/50565
- https://github.com/flutter/flutter/issues/129757
- [wasmedge](https://github.com/WasmEdge/WasmEdge), besides providing support for [several platforms](https://wasmedge.org/docs/category/supported-platforms), is working on wasm AOT for Android and iOS
- https://github.com/WasmEdge/WasmEdge/issues/2621
### Proposal
Track the clearing of the technical blockers for a `wasm` implementation,
and if/when these blockers are cleared, resume/rework the deprecated [package:wasm](https://pub.dev/packages/wasm)
#
cc @devoncarew @mit-mit @liamappelbe @hydai | c: new feature,dependency: dart,will need additional triage,c: proposal,e: wasm | low | Major |
2,754,648,764 | godot | [3.x] Task.Delay throws NullReferenceException on HTML5 exports | ### Tested versions
Reproducible in: v3.6.stable.mono.official [de2f0f147]
* Occurs both when "Export as Debug" is set and when it isn't.
* Does *not* occur on Windows exports
### System information
Windows 11 64-bit, Chrome 131.0.6778.205
### Issue description
Awaiting `Task.Delay` works fine in editor builds, but crashes on HTML5 exports, showing the following message on the console (F12):
> Unhandled Exception:
> System.NullReferenceException: Object reference not set to an instance of an object.
> at System.Threading.WasmRuntime.TimeoutCallback (System.Int32 id) <0x2f69080 + 0x0000a> in <filename unknown>:0
> timeout callback threw a System.NullReferenceException
### Steps to reproduce
* Open the attached minimal reproduction project in Godot
* It contains a single script:
```csharp
// Usings and the wrapper class are added here for completeness;
// the class name is illustrative.
using Godot;
using System.Threading.Tasks;

public class TaskDelayDemo : Node {
    public override void _Ready() {
        Task.Run(async () => {
            GD.Print("Waiting...");
            await Task.Delay(1000);
            GD.Print("Done");
        });
    }
}
```
* Observe how it runs as expected in the editor
* Export to HTML5, then run `python -m http.server` when `cd`-d into the exported directory
* Browse `http://localhost:8000`, press F12 to keep the console window open
* Click `TaskDelayDemo.html` and observe the errors in the console, and that "Done" is never printed
### Minimal reproduction project (MRP)
[TaskDelayDemo.zip](https://github.com/user-attachments/files/18221915/TaskDelayDemo.zip)
| bug,platform:web | low | Critical |
2,754,651,443 | vscode | [json] Automatically Escape Pasted Text in JSON string | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
I have a need to paste a large amount of text into a JSON string via the clipboard, and I would like the editor to automatically escape the clipboard content, which would be very convenient for pasting large amounts of text.
Here is a use case:
1. I want to set the `github.copilot.chat.commitMessageGeneration.instructions` setting
```json
{
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"text": ""
}
]
}
```
2. I have some text that I want to paste
```md
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).
1. Commits MUST be prefixed with a type, which consists of a noun, `feat`, `fix`, etc., followed by the OPTIONAL scope, OPTIONAL `!`, and REQUIRED terminal colon and space.
2. The type `feat` MUST be used when a commit adds a new feature to your application or library.
3. The type `fix` MUST be used when a commit represents a bug fix for your application.
4. A scope MAY be provided after a type. A scope MUST consist of a noun describing a section of the codebase surrounded by parenthesis, e.g., `fix(parser):`
5. A description MUST immediately follow the colon and space after the type/scope prefix. The description is a short summary of the code changes, e.g., *fix: array parsing issue when multiple spaces were contained in string*.
6. A longer commit body MAY be provided after the short description, providing additional contextual information about the code changes. The body MUST begin one blank line after the description.
7. A commit body is free-form and MAY consist of any number of newline separated paragraphs.
8. One or more footers MAY be provided one blank line after the body. Each footer MUST consist of a word token, followed by either a `:<space>` or `<space>#` separator, followed by a string value (this is inspired by the [git trailer convention](https://git-scm.com/docs/git-interpret-trailers)).
9. A footer's token MUST use `-` in place of whitespace characters, e.g., `Acked-by` (this helps differentiate the footer section from a multi-paragraph body). An exception is made for `BREAKING CHANGE`, which MAY also be used as a token.
10. A footer's value MAY contain spaces and newlines, and parsing MUST terminate when the next valid footer token/separator pair is observed.
11. Breaking changes MUST be indicated in the type/scope prefix of a commit, or as an entry in the footer.
12. If included as a footer, a breaking change MUST consist of the uppercase text BREAKING CHANGE, followed by a colon, space, and description, e.g., *BREAKING CHANGE: environment variables now take precedence over config files*.
13. If included in the type/scope prefix, breaking changes MUST be indicated by a `!` immediately before the `:`. If `!` is used, `BREAKING CHANGE:` MAY be omitted from the footer section, and the commit description SHALL be used to describe the breaking change.
14. Types other than `feat` and `fix` MAY be used in your commit messages, e.g., *docs: update ref docs.*
15. The units of information that make up Conventional Commits MUST NOT be treated as case sensitive by implementors, with the exception of BREAKING CHANGE which MUST be uppercase.
16. BREAKING-CHANGE MUST be synonymous with BREAKING CHANGE, when used as a token in a footer.
```
3. I want to paste this text into the JSON settings, so I will move the keyboard cursor between `""` and then press `Ctrl + V` to paste the text
```json
{
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"text": "|"
}
]
}
```
4. At this point, I hope VS Code can automatically recognize this paste action and escape the pasted text, allowing me to paste the text directly into the JSON settings
```json
{
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"text": "The key words \"MUST\", ......"
}
]
}
```
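The requested transformation is essentially JSON string escaping of the clipboard contents. As a sketch of what the editor would do (not VS Code's actual implementation), Python's `json.dumps` already produces the needed escapes:

```python
import json

def escape_for_json_string(clipboard_text: str) -> str:
    """Escape clipboard text for insertion inside an existing JSON
    string literal (the surrounding quotes are already in the file)."""
    # json.dumps yields a quoted JSON string; drop the outer quotes
    # because the cursor already sits between "".
    return json.dumps(clipboard_text)[1:-1]

pasted = 'The key words "MUST", "MUST NOT"\nare to be interpreted as in RFC 2119.'
print(escape_for_json_string(pasted))
```

Round-tripping the result through a JSON parser recovers the original clipboard text, which is the correctness condition such a feature would need.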
| feature-request,json | low | Critical |
2,754,653,452 | flutter | [go_router] Add meta property to GoRoute for custom attributes | ### Use case
The meta property can be used to store additional metadata for routes, which can then be utilized for various purposes, such as generating navigation menus and managing permissions.
### Proposal
I would like to propose adding a meta property to the GoRoute class in the go_router package. This property would allow users to define custom attributes such as labels, icons, and permissions for routes.
Here is an example of how the GoRoute class could be extended with the meta property:
```dart
GoRoute({
required this.path,
this.name,
this.builder,
this.pageBuilder,
super.parentNavigatorKey,
super.redirect,
this.onExit,
super.routes = const <RouteBase>[],
this.meta, // New meta property
});
// Usage example
GoRoute(
path: '/home',
name: 'home',
builder: (context, state) => HomeScreen(),
meta: {
'label': 'Home',
'icon': Icons.home,
'permission': 'view_home',
},
);
``` | package,c: proposal,P3,p: go_router,team-go_router,triaged-go_router | low | Major |
2,754,655,952 | react | [DevTools Bug] Could not find commit data for root "1" | ### Website or app
http://localhost:5173/
### Repro steps
was trying to test the performance of my react application
### How often does this bug happen?
Only once
### DevTools package (automated)
react-devtools-extensions
### DevTools version (automated)
6.0.1-c7c68ef842
### Error message (automated)
Could not find commit data for root "1"
### Error call stack (automated)
```text
at me.getDataForRoot (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1167575)
at SnapshotSelector_SnapshotSelector (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1528048)
at renderWithHooks (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:52283)
at updateFunctionComponent (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:81757)
at beginWork (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:95976)
at performUnitOfWork (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:154382)
at workLoopSync (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:154250)
at renderRootSync (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:153981)
at performWorkOnRoot (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:149824)
at performSyncWorkOnRoot (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:164385)
```
### Error component stack (automated)
```text
at SnapshotSelector_SnapshotSelector (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1527987)
at div (<anonymous>)
at div (<anonymous>)
at div (<anonymous>)
at SettingsModalContextController (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1297652)
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1536522
at da (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1315454)
at div (<anonymous>)
at div (<anonymous>)
at ThemeProvider (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1318165)
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1318362
at div (<anonymous>)
at div (<anonymous>)
at div (<anonymous>)
at ThemeProvider (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1318165)
at TimelineContextController (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1395852)
at ProfilerContextController (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1387571)
at TreeContextController (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1209877)
at SettingsContextController (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1238028)
at ModalDialogContextController (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1375269)
at DevTools_DevTools (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1544258)
```
### GitHub query string (automated)
```text
https://api.github.com/search/issues?q=Could not find commit data for root in:title is:issue is:open is:public label:"Component: Developer Tools" repo:facebook/react
```
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,754,661,714 | kubernetes | Enhance Local service affinity to reduce service to service network calls. | ### What would you like to be added?
This proposal suggests enhancing Kubernetes with a mechanism to **prioritize local service communication** when endpoints exist on the same node, addressing one of the key efficiency challenges in service-to-service communication.
---
### Key Idea
Introduce a mechanism in Kubernetes' service proxy (e.g., kube-proxy) or in a sidecar/service mesh that:
1. **Looks up local endpoints** of the target service on the same node.
2. **Routes traffic locally** (e.g., via shared memory, IPC, or direct communication) if an endpoint resides on the same node.
3. Falls back to **network calls** only when no local endpoint is available.
---
### Benefits of the Proposed Design
1. **Reduced Latency**
Local communication (via loopback or IPC) is significantly faster than inter-node or even intra-node network communication.
2. **Lower Network Overhead**
By bypassing the network stack for local communication, the approach reduces cluster-wide bandwidth usage, alleviating congestion and improving performance for other applications.
3. **Cost Efficiency**
In cloud environments, reducing cross-zone or inter-node traffic can lower costs, as many providers charge for data egress between zones or regions.
4. **Improved Scalability**
With fewer network calls, clusters can handle larger workloads without hitting network bandwidth or performance bottlenecks.
5. **Seamless Integration**
If designed well, the change would be transparent to applications, preserving Kubernetes' abstraction of services while optimizing performance.
---
### Challenges and Considerations
1. **Service Discovery Enhancements**
- The process needs to be aware of **local service endpoints** in real-time, potentially requiring integration with Kubernetes' Endpoints API or similar mechanisms.
- Any lag in updates could result in stale routing decisions.
2. **Proxy Modification**
- Kube-proxy would need modifications to check for local endpoints and route calls accordingly. Alternatively, a custom sidecar or service mesh could implement this logic.
3. **State Consistency**
- Handling cases where a local endpoint becomes unavailable during a request is critical to avoid failures or retries.
4. **Cross-Node Communication Scenarios**
- Some use cases explicitly require inter-node communication (e.g., when data locality or availability constraints exist). The design must ensure it doesn’t interfere with such requirements.
5. **Shared Memory or IPC Implementation**
- While bypassing the network stack, efficient local communication mechanisms (e.g., shared memory or Unix domain sockets) must be introduced to enable this functionality.
---
### Minimal Design Change Example
1. **Enhance kube-proxy or Service Mesh**
Modify kube-proxy to:
- Query the Kubernetes Endpoints API for available endpoints of the target service.
- Prioritize endpoints on the same node.
Example workflow:
- Request arrives at kube-proxy.
- Kube-proxy looks up endpoints for the service.
- If a local endpoint exists, kube-proxy routes the request locally.
- Otherwise, the request follows the standard networking path.
2. **Optional Local Library Layer**
Introduce a lightweight library or sidecar for service-to-service communication:
- Services communicate with the library/sidecar instead of directly invoking the service proxy.
- The library decides whether to route traffic locally or via the network.
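The endpoint-selection logic described above can be sketched as follows (types and names are hypothetical, standing in for EndpointSlice data):

```go
package main

import "fmt"

// Endpoint is a simplified stand-in for an EndpointSlice entry.
type Endpoint struct {
	IP   string
	Node string
}

// chooseEndpoint sketches the proposed decision: prefer any endpoint on the
// local node; otherwise fall back to the first available endpoint (standing
// in for the normal load-balancing path).
func chooseEndpoint(endpoints []Endpoint, localNode string) (Endpoint, bool) {
	for _, ep := range endpoints {
		if ep.Node == localNode {
			return ep, true // route locally (loopback / IPC path)
		}
	}
	if len(endpoints) > 0 {
		return endpoints[0], false // standard network path
	}
	return Endpoint{}, false
}

func main() {
	eps := []Endpoint{{IP: "10.0.1.5", Node: "node-a"}, {IP: "10.0.2.7", Node: "node-b"}}
	ep, local := chooseEndpoint(eps, "node-b")
	fmt.Println(ep.IP, local) // → 10.0.2.7 true
}
```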
---
### Long-Term Improvements
1. **Topology-Aware Improvements**
Kubernetes already has **topology-aware hints** (beta as of Kubernetes 1.21+), which prioritize routing within the same node or zone. This idea could extend those hints to enforce strict local communication when possible.
2. **Dynamic Endpoint Prioritization**
Extend Kubernetes' native load balancing to dynamically prioritize local endpoints while maintaining failover capabilities for cross-node communication.
3. **Integration with Service Mesh**
Service meshes like Istio or Linkerd could adopt this logic, ensuring optimized routing at the application layer without changing Kubernetes' core.
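As a concrete starting point, existing Service fields already approximate strict local routing. A hedged sketch (the field and annotation names come from current Kubernetes APIs, but their exact semantics and interaction vary by version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Opt in to topology-aware routing (hints); kube-proxy prefers same-zone endpoints.
    service.kubernetes.io/topology-mode: Auto
spec:
  selector:
    app: my-app
  ports:
    - port: 80
  # Strict node-local routing: traffic is only sent to endpoints on the same node,
  # with no fallback when none exist there. The proposal here would add a
  # prefer-local-with-fallback mode in between these two behaviors.
  internalTrafficPolicy: Local
```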
---
### Why is this needed?
Introducing local endpoint prioritization is a promising optimization that aligns well with Kubernetes' goal of efficient, scalable service orchestration. While requiring some design changes, the benefits in reduced latency, cost, and network usage make it worth exploring, particularly for workloads with high intra-node communication. | kind/feature,needs-sig,triage/needs-information,needs-triage | low | Critical |
2,754,666,574 | go | proposal: errors: In() to check Is against multiple errors | ### Proposal Details
**Problem**
The code looks redundant and cumbersome when you need to check an error against several targets. Example:
```go
func foo() {
err := bar()
if errors.Is(err, err1) || errors.Is(err, err2) || errors.Is(err, err3) {
// some logic
}
}
```
**Solution**
A new function that will allow you to check for errors like this:
```go
func foo() {
err := bar()
if errors.In(err, err1, err2, err3) {
// some logic
}
}
```
**Implementation**
```go
package errors
func In(err error, target ...error) bool {
for _, v := range target {
if errors.Is(err, v) {
return true
}
}
return false
}
``` | Proposal | low | Critical |
2,754,668,118 | go | proposal: x/crypto/ssh/knownhosts: Add `LineWithMarker` function and `MarkerCert` and `MarkerRevoked` constants | ### Proposal Details
The [Line](https://cs.opensource.google/go/x/crypto/+/refs/tags/v0.31.0:ssh/knownhosts/knownhosts.go;l=455) function doesn't allow you to write known host lines with the optional `@cert-authority` and `@revoked` markers, I suggest we introduce a new function:
```go
const MarkerCert = "@cert-authority"
const MarkerRevoked = "@revoked"
func LineWithMarker(marker string, addresses []string, key ssh.PublicKey) string {
if marker != "" {
return marker + " " + Line(addresses, key)
} else {
return Line(addresses, key)
}
}
```
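A self-contained sketch of the concatenation (with a stub `line` replacing `knownhosts.Line`, since that needs a real `ssh.PublicKey`; the marker constants mirror the proposal):

```go
package main

import (
	"fmt"
	"strings"
)

const MarkerCert = "@cert-authority"
const MarkerRevoked = "@revoked"

// line stands in for knownhosts.Line, which serializes a real ssh.PublicKey.
func line(addresses []string, key string) string {
	return strings.Join(addresses, ",") + " " + key
}

func lineWithMarker(marker string, addresses []string, key string) string {
	if marker != "" {
		return marker + " " + line(addresses, key)
	}
	return line(addresses, key)
}

func main() {
	fmt.Println(lineWithMarker(MarkerCert, []string{"*.example.com"}, "ssh-ed25519 AAAA..."))
	// → @cert-authority *.example.com ssh-ed25519 AAAA...
}
```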
| Proposal,Proposal-Crypto | low | Minor |
2,754,670,877 | svelte | Static support for `$state` in `.svelte.ts` | ### Describe the problem
I have a class that is entirely static (only static properties and methods). I cannot declare a `$state` property, either with a field initializer or in a static initialization block. In both cases, Svelte complains that:
> _Error: CompileError: state_invalid_placement: `$state(...)` can only be used as a variable declaration initializer or a class field_
Of course, this is a class field, it's just that it's a static field, which seems to be unsupported for some reason.
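A minimal illustration of the rejected pattern (file and class names hypothetical):

```ts
// counter.svelte.ts (hypothetical)
export class Counter {
	// Rejected today with state_invalid_placement, even though it is a class field:
	static count = $state(0);

	static increment() {
		Counter.count += 1;
	}
}
```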
### Describe the proposed solution
Support static class fields.
### Importance
would make my life easier | feature request,runes | low | Critical |
2,754,682,233 | node | Node.js tty.ReadStream does not pass in mouse event ANSI escape codes in Windows terminal | ### Version
v22.9.0
### Platform
```text
Microsoft Windows NT 10.0.19045.0 x64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
1. Save the code snippet below to a file named "mouse-test.js"
```js
process.stdout.write('\x1b[?1000h');
process.stdout.write('\x1b[?1003h'); // Enable all mouse events (including drag)
process.stdin.setRawMode(true);
process.stdin.resume();
process.stdin.setEncoding('utf8');
// eslint-disable-next-line no-console
console.log("Mouse event reporting enabled. Move or click the mouse to see events. Press 'q' to quit.");
process.stdin.on('data', (chunk) => {
if (chunk === '\u0003' || chunk === 'q') { // Ctrl+C or 'q' to exit
// Disable mouse event reporting before exiting
    process.stdout.write('\x1b[?1000l');
    process.stdout.write('\x1b[?1003l');
process.exit();
}
// Parse mouse event data
if (chunk.startsWith('\x1b[M')) {
const mouseData = chunk.slice(3);
const buttonCode = mouseData.charCodeAt(0) - 32;
const x = mouseData.charCodeAt(1) - 32;
const y = mouseData.charCodeAt(2) - 32;
// eslint-disable-next-line no-console
console.log(`Mouse event: button=${buttonCode}, x=${x}, y=${y}`);
}
});
```
2. Running the Script
Run the Script: Open Windows Terminal and navigate to the directory where your mouse-test.js script is located. Run the script using Node.js:
```bash
node mouse-test.js
```
3. Interact with the Terminal: Once the script is running, move the mouse or click within the terminal window. The script will output mouse events to the console.
4. Quit the Script: Press q or Ctrl+C to quit the script. The script will disable mouse event reporting before exiting.
### Explanation
- **Enabling Mouse Event Reporting**: The script enables mouse event reporting using the ANSI escape codes \x1b[?1000h and \x1b[?1003h.
- **Reading Input**: The script sets the terminal input to raw mode and starts reading input.
- **Handling Mouse Events**: Mouse events are identified by the \x1b[M sequence. The script parses the mouse event data to extract the button code, x, and y coordinates.
- **Outputting Events**: The parsed mouse event details are printed to the console.
- **Quitting the Script**: Pressing q or Ctrl+C will disable mouse event reporting and exit the script.
By running this script, you can verify if Windows Terminal supports mouse event reporting via ANSI escape codes. If mouse events are correctly reported, you will see the mouse event details printed in the terminal.
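For platforms where VT input is passed through, modern terminals also support SGR extended mouse mode (DECSET 1006, enabled with `\x1b[?1006h`), which encodes events as `\x1b[<btn;x;yM` and avoids the 223-coordinate limit of the legacy `\x1b[M` encoding. A parser sketch (the helper name is hypothetical):

```javascript
// Parse SGR extended mouse reports: ESC [ < btn ; x ; y, ending in M (press) or m (release).
function parseSgrMouse(chunk) {
  const re = /\x1b\[<(\d+);(\d+);(\d+)([Mm])/g;
  const events = [];
  let m;
  while ((m = re.exec(chunk)) !== null) {
    events.push({
      button: Number(m[1]),
      x: Number(m[2]), // 1-based column
      y: Number(m[3]), // 1-based row
      release: m[4] === 'm',
    });
  }
  return events;
}

// Example: a left-button press at column 10, row 5.
console.log(parseSgrMouse('\x1b[<0;10;5M'));
```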
### How often does it reproduce? Is there a required condition?
Always happens.
### What is the expected behavior? Why is that the expected behavior?
I am writing a command line based UI application.
I found the same problem addressed in Windows Terminal's repo, where it has been well explained by some thoughtful comments in [an issue of Windows Terminal](https://github.com/microsoft/terminal/issues/15977#issuecomment-1745843557). I think the linked comment explains that this is a missing behavior or case in the source code of [tty.c](https://github.com/nodejs/node/blob/31c20f6e52dfd96c8972f92231f002c483a6eb47/deps/uv/src/win/tty.c#L625).
Let me quote some original sentences here:
> A little searching though the nodejs source suggests it is getting in the middle and trying to help you, but incompletely. In [here ](https://github.com/nodejs/node/blob/main/deps/uv/src/win/tty.c)you'll see it's using both SetConsoleMode and ReadConsoleInput to simulate some VT codes itself, but doesn't ask for or pass through mouse input at all
> nodejs reading the input on Windows via ReadConsoleInput and then generating it's own VT sequences for a set of inputs it knows about, which notably does not include mouse inputs
> Hopefully in the future it will learn to either generate mouse input or enable VT input when you put it in raw mode, but right now it does neither
The impact is that Node.js-based command line or terminal UI applications cannot support mouse events in Windows Terminal without help from a native Windows executable, while the same functionality works fine in popular terminal emulators on Linux and macOS, e.g. iTerm2 on macOS, WSL...
I have tested the above script successfully in iTerm2 on macOS, in WSL, and in Termux on Android; it prints the mouse pointer location when I move the mouse or click a button.
So I feel this issue can also be considered a **feature request**: the behavior unexpectedly already works on operating systems other than Windows, leaving Windows as the last missing piece.
### What do you see instead?
I see no text output when I click and move the mouse, although the script is supposed to print mouse pointer coordinates.
### Additional information
_No response_ | tty | low | Critical |
2,754,684,064 | flutter | [pigeon] Kotlin code should be `internal` | As noted in https://github.com/flutter/flutter/issues/160737, we are currently generating Kotlin code using public visibility, which isn't what we want. `internal` would be consistent with Swift and with our usage guidance. | package,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Minor |
2,754,689,073 | flutter | Scroll Listener does not trigger when keyboard is dismissed | ### Steps to reproduce
When a TextField is used within a CustomScrollView, opening the keyboard triggers events from the ScrollController due to changes in the scroll position. However, when the keyboard is dismissed, no ScrollController events are triggered, even though the scroll position adjusts due to layout changes.
UI elements or functionalities relying on ScrollController events, such as animations or dynamic header color changes, may not function correctly when the keyboard is dismissed.
1. Run the app on a physical or virtual device.
2. Tap on the TextField to focus on it, causing the keyboard to appear.
3. Observe that the ScrollController listener fires and the top header color changes dynamically.
4. Dismiss the keyboard.
5. Observe that the ScrollController does not trigger any events and the top header color doesn't change.
### Expected results
The ScrollController should trigger events when the keyboard is dismissed to reflect changes in the scroll position caused by layout adjustments.
### Actual results
The ScrollController does not trigger any events when the keyboard is dismissed, causing inconsistencies in offset-dependent behavior.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: ScrollColorChangePage(),
);
}
}
class ScrollColorChangePage extends StatefulWidget {
const ScrollColorChangePage({super.key});
@override
ScrollColorChangePageState createState() => ScrollColorChangePageState();
}
class ScrollColorChangePageState extends State<ScrollColorChangePage> {
final ScrollController _scrollController = ScrollController();
Color _headerColor = Colors.blue;
@override
void initState() {
super.initState();
_scrollController.addListener(_onScroll);
}
@override
void dispose() {
_scrollController.removeListener(_onScroll);
_scrollController.dispose();
super.dispose();
}
void _onScroll() {
final scrollOffset = _scrollController.offset;
print(scrollOffset);
setState(() {
_headerColor = Color.lerp(Colors.blue, Colors.red, scrollOffset / 100) ?? Colors.blue;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: CustomScrollView(
controller: _scrollController,
slivers: [
SliverToBoxAdapter(
child: Container(
height: 300,
color: _headerColor,
),
),
const SliverToBoxAdapter(
child: Padding(
padding: EdgeInsets.only(top: 300),
child: TextField(
decoration: InputDecoration(
border: OutlineInputBorder(),
labelText: 'Enter text here',
),
),
),
),
],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/941b0a9a-e5b8-4201-8610-4463de01043d
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.2 24C101 darwin-arm64, locale en-EG)
• Flutter version 3.27.0 on channel stable at /Users/ahmedelsayed/.puro/envs/stable/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8495dee1fd (12 days ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/ahmedelsayed/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• iPhone 16 Pro (mobile) • DBB1EC9A-D253-49C3-BFC2-077B99FFDCC7 • ios •
com.apple.CoreSimulator.SimRuntime.iOS-18-2 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101
darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101
darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: scrolling,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Major |
2,754,690,800 | pytorch | add "enabled=True" to DistributedDataParallel.no_sync() | ### 🚀 The feature, motivation and pitch
Training a model with DDP and gradient accumulation is quite common.
To avoid unnecessary sync, the no_sync() operation is used.
An `enabled=True` argument already exists elsewhere in PyTorch and is very useful, e.g. in `torch.amp.autocast` and `torch.amp.GradScaler`.
```
if (step + 1) % grad_accum_steps == 0:
    # forward + backward code, with gradient sync on the last micro-step
    loss = ddp_model(inputs)
    (loss / grad_accum_steps).backward()
else:
    with ddp_model.no_sync():
        # forward + backward code
        loss = ddp_model(inputs)
        (loss / grad_accum_steps).backward()
```
using the `enabled` argument this can be simplified, preventing bug-prone code duplications:
```
with ddp_model.no_sync(enabled=((step + 1) % grad_accum_steps != 0)):
    loss = ddp_model(inputs)
    (loss / grad_accum_steps).backward()
```
The implementation doesn't seem hard, and it would be backward-compatible.
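A minimal sketch of how the flag could work, mirroring the way DDP's `no_sync` toggles `require_backward_grad_sync` internally (`FakeDDP` is a hypothetical stand-in, not the real module):

```python
from contextlib import contextmanager


class FakeDDP:
    """Minimal stand-in mimicking DDP's require_backward_grad_sync flag."""

    def __init__(self):
        self.require_backward_grad_sync = True

    @contextmanager
    def no_sync(self, enabled=True):
        # With enabled=False this is a no-op, matching torch.amp.autocast's API.
        if not enabled:
            yield
            return
        old = self.require_backward_grad_sync
        self.require_backward_grad_sync = False
        try:
            yield
        finally:
            self.require_backward_grad_sync = old
```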
### Alternatives
_No response_
### Additional context
DDP with grad accum:
https://discuss.pytorch.org/t/gradient-accumulation-with-ddp-no-sync-interface/169593/3
Current no_sync implementation:
https://github.com/pytorch/pytorch/blob/main/torch/nn/parallel/distributed.py#L1420
torch.amp.autocast enabled=True API:
https://github.com/pytorch/pytorch/blob/09c950cc872dfcee453307db47fa10553c3f5616/torch/amp/autocast_mode.py#L222
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: ddp | low | Critical |